From: MCCARTY@UTOREPAS Subject: Date: 12 May 1987, 23:50:02 EDT X-Humanist: Vol. 1 Num. 1 (1) This is test number 1. Please acknowledge. From: MCCARTY@UTOREPAS Subject: Date: 13 May 1987, 00:06:41 EDT X-Humanist: Vol. 1 Num. 2 (2) This is test number 2. Please acknowledge. From: MCCARTY@UTOREPAS Subject: Date: 13 May 1987, 23:08:57 EDT X-Humanist: Vol. 1 Num. 3 (3) This is a test of a new BITNET mailer for people involved with the support of computing in the humanities. Please acknowledge receipt of this message. A more complete explanation and welcoming message will be forthcoming. Thanks very much. From: IAN@UTOREPAS Subject: Date: 14 May 1987, 16:05:17 EDT X-Humanist: Vol. 1 Num. 4 (4) Message received. From: MCCARTY@UTOREPAS Subject: Date: 14 May 1987, 20:17:18 EDT X-Humanist: Vol. 1 Num. 5 (5) Welcome to HUMANIST HUMANIST is a Bitnet/NetNorth electronic mail network for people who support computing in the humanities. Those who teach, review software, answer questions, give advice, program, write documentation, or otherwise support research and teaching in this area are included. Although HUMANIST is intended to help these people exchange all kinds of information, it is primarily meant for discussion rather than publication or advertisement. In general, members of the network are encouraged to ask questions and offer answers, to begin and contribute to discussions, to suggest problems for research, and so forth. One of the specific motivations for establishing HUMANIST was to allow people involved in this area to form a common idea of the nature of their work, its requirements, and its standards. Institutional recognition is not infrequently inadequate, at least partly because computing in the humanities is an emerging and highly cross-disciplinary field. Its support is significantly different from the support of other kinds of computing, with which it may be confused. Perhaps you don't think so. 
In any case, let us know what you do think, about this or any other relevant subject. HUMANIST is one of the inaugural projects of a new special interest group for the support of computing in the humanities, which is currently applying for joint affiliation with the Association for Computing in the Humanities (ACH) and the Association for Literary and Linguistic Computing (ALLC). Information about this SIG may be obtained by sending a message to George Brett (ECSGHB@TUCC.BITNET). Currently anyone given access to HUMANIST can send mail to all other members of the network without restriction. It is expected that the members will at least be civil to each other, however spirited the argument! New members are welcome, provided that they fit the broad guidelines described above. Please tell anyone who might be interested to send a note to me, giving his or her name, address, telephone number, university affiliation, and a short description of what he or she does to support computing in the humanities. I will then add that person to the list. If anyone should wish to be dropped from the list, please send a note to that effect. Willard McCarty Centre for Computing in the Humanities University of Toronto (MCCARTY@UTOREPAS.BITNET) From: MCCARTY@UTOREPAS Subject: Date: 15 May 1987, 11:15:21 EDT X-Humanist: Vol. 1 Num. 6 (6) My apologies for a recent flood of junk mail relating to a bad address for one of our members. Please bear with me while I figure out the arcane manners and methods of this very promising tool. From: JACKA@PENNDRLS Subject: Date: Friday, 15 May 1987 1536-EST X-Humanist: Vol. 1 Num. 7 (7) Query- Is this a LSERVER? If so, take a look at how Knut Hofland has set up his Bulletin Board....JACK From: MCCARTY@UTOREPAS Subject: Date: 18 May 1987, 20:09:38 EDT X-Humanist: Vol. 1 Num. 
8 (8) For those of you who happen to know less than I do about the Bitnet facility that runs HUMANIST, I have just sent out a lengthy memo by Eric Thomas on the "revised List Processor," or ListServ. The memo lists commands available to you. Please note the section entitled "How can I send commands to LISTSERV?" If you're as ignorant about these things as I was a few days ago, you'll need the help of some local expert. HUMANIST has, I'm happy to say, reached addresses on ARPA-net, uucp, and JANET (in the UK). It remains to be seen whether messages sent to HUMANIST from these networks will be successfully redistributed. I'll be asking one person from each to send a test message. This will mean, alas, more junk-mail, which I hope you will excuse.

From: SUE ZAYAC Subject: Scholarly Information Journal Date: Mon, 18 May 87 09:51 EDT X-Humanist: Vol. 1 Num. 9 (9) Hello: I've just sent a complimentary copy of the Columbia University "Scholarly Information Center Journal" to all of you whose names I had from the original meeting of the Ad Hoc SIC for Support Issues at USC in April. The SIC journal is published quarterly by the Columbia University Libraries and the Center for Computing Activities and features articles aimed at informing the scholarly community of available information resources and current trends in information technology. We are particularly trying to slant this towards the humanists, not the EE and CS people. If your name has been added to this mailing list since and you would like a complimentary copy of this journal, send me mail, either here or to sue@cunixc.columbia.edu (for you unix buffs). The editor, Bea Hamblett, has just pointed out to me that, although subscriptions are free to the Columbia community, there is a $10 subscription fee for others. The issue I sent you neglects to include this information.
If you would like to subscribe, send your money, name and address to:

Bea Hamblett
Editor, SIC Journal
Academic Information Services Group
Columbia University
612 West 115th Street
New York, N.Y. 10025

Bea will also entertain suggestions for articles. If you have an idea of something you would like to contribute, send mail with your proposal (not the full article) to Bea at us.bea@cu20b.columbia.edu (P.S. Do not send jokes about the name of the journal. We've already heard them all.) Susan Zayac SLZUS 280-3724

From: MCCARTY@UTOREPAS Subject: Date: 16 May 1987, 18:35:10 EDT X-Humanist: Vol. 1 Num. 10 (10) The recent flood of junk mail you have received was due to what is called a "mailer loop," which is something like an echo that continually increases in volume. This loop began when a message was sent by HUMANIST to Joel Goldfield using a bad address (mea culpa!) and incorrectly returned by network software to HUMANIST, which then dutifully sent it out as a regular message to all members of the list, including Joel at his bad address.... Fortunately for us all, the alert network wizards at our Computing Services pounced on the echo and silenced it. These things happen, I'm told. My apologies. Next week I will be sending out information on the ListServ mailer that should allow you to do various things with it. As for now, you can enter the fray simply by sending a note to HUMANIST at UTORONTO.BITNET. Yours, W.M.

From: JACKA@PENNDRLS Subject: Date: Wednesday, 20 May 1987 1520-EST X-Humanist: Vol. 1 Num. 11 (11) If anyone out there would like to receive the Online Notes from the Center for Computer Analysis of Texts, please let me know electronically and I will add your name to the mailing list. Thank you. JACK ABERCROMBIE

From: LOU%UK.AC.OXFORD.VAX1@AC.UK Subject: test message Date: 20-MAY-1987 18:09:05 X-Humanist: Vol. 1 Num.
12 (12) Here comes a copy of the test message I am about to send to HUMANIST at UTORONTO, cc: MCCARTY at UTOREPAS, so that you can see if messages from JANET are properly redistributed HONK BEEP! FLASH!!!! VT100s can be really [a run of VT100 escape sequences, omitted here] [But the last bit won't make much sense except on a VT100]

From: MCCARTY@UTOREPAS Subject: Date: 20 May 1987, 08:29:50 EDT X-Humanist: Vol. 1 Num. 13 (13) Our local network expert advises me that it would be much better for members of HUMANIST to request the document I said I was sending you (and didn't!) from the ListServ node nearest you. There is a real danger of burdening the network, apparently. The following list of locations may help.

From: Subject: Date: X-Humanist: Vol. 1 Num. 14 (14)
UTORONTO - here
CANADA01 - U of Guelph
DEARN - Germany
FRECP11 - France
TAMVM1 - Texas A&M U
UGA - U of Georgia
UIUCVMD - U of Illinois
SUVM - Syracuse U
HEARN - Holland
BITNIC - Bitnet Network Center, NYC
OREGON1 - Oregon
IRISHVM1 - Ireland

From: Subject: Date: X-Humanist: Vol. 1 Num. 15 (15) So, for example, if you're at Rochester, you don't want to ask UTORONTO's listserv to send information, since SUVM is much closer. So, you will likely want to ask your local expert to help you identify the nearest node, if it's not obvious from the above list (which is incomplete), and to figure out what commands are necessary in your system, for example, to get a copy of LISTSERV MEMO, which describes ListServ and its commands, and to get an up-to-date listing of the members of HUMANIST -- which is growing every day.

From: ENGHUNT@UOGUELPH Subject: Date: Tue, 19 May 87 21:15:37 EDT X-Humanist: Vol. 1 Num. 16 (16) Willard; Thanks for all your efforts in this endeavour: they are bound to bear fruit in due course. Keep up the good work. Stuart

From: Dr Abigail Ann Young 1-416-585-4504 Subject: computer generated maps of the British Isles Date: 19 May 1987, 16:10:23 EDT X-Humanist: Vol. 1 Num.
17 (17) This is a general plea. I realise that many people on this list will not have a great interest in computer-drawn maps, but if you know of anyone who does, or if you know of another list where such a person might see this note, please pass it on!! I have already posted this plea on the ENGLISH discussion started by Marshall Gilliland at UofSaskatchewan, and on CSNEWS at MAINE, so my apologies to anyone who has already seen it, and especially to those who have sent replies! Thank you, Abigail Young

The Records of Early English Drama project here at Toronto, of which I am a part, is engaged in collecting, editing, and publishing any documents which offer external evidence for the production and performance of drama, music, folk drama, or semi-dramatic folk activities in the British Isles before 1642, when most such activities were banned by the Puritan Commonwealth. As an offshoot of this, we are eager to use this data, much of it previously unknown, to generate maps of late medieval and renaissance England to show the distribution of various dramatic or folk activities, and the touring routes of professional companies of actors and musicians during the period. Is there anyone out there who is interested in, or has experience with, this kind of mapping? At the moment, we are experimenting with a micro-based map-making package, MapMaster, but are still interested in finding out about other possibilities, especially those which are mainframe-based. I would be very glad to hear from anyone who has such interests, or who could put me in touch with others who might be. Thank you!!

Abigail Ann Young (YOUNG@UTOREPAS)
Records of Early English Drama
85 Charles Street West
Victoria College
University of Toronto

From: RSTHC@CUNYVM Subject: Date: Tue, 19 May 87 16:00 EDT X-Humanist: Vol. 1 Num. 18 (18) I have received your BITNET mail (obviously). This is the reply you requested. RST

From: JMBHC@CUNYVM Subject: Date: Thu, 21 May 87 13:01 EDT X-Humanist: Vol. 1 Num.
19 (19) Yes, I would like to receive your Online Notes. My mailing address is:

Joanne M. Badagliacco
Director, Academic Computing Services
Hunter College, CUNY
695 Park Avenue
New York, NY 10021

Thanks very much.

From: JMBHC@CUNYVM Subject: Date: Thu, 21 May 87 13:41 EDT X-Humanist: Vol. 1 Num. 20 (20) You might contact Dr. Keith Clark, Department of Geology & Geography, Hunter College, BITNET address: KCCHC@CUNYVM. This is his very area of expertise. Joanne Badagliacco

From: MCCARTY@UTOREPAS Subject: Date: 21 May 1987, 20:17:53 EDT X-Humanist: Vol. 1 Num. 21 (21) Two HUMANISTs today have suggested to me that we need to have some understanding about what gets sent to everyone and what gets sent to individuals. Let me offer this rule: that specific answers to open queries should be sent directly to the questioner and NOT to everyone -- unless, that is, the answer is a particularly interesting one. The suggestion I got really points to the necessity for self-regulation so that we don't bury each other in junk mail. I'm particularly sensitive to this issue, since I indirectly managed to bury everyone in dartvax chatter last week. Any guidelines for the running or using of HUMANIST would be of interest to us all, I'm sure. Perhaps this could be a useful discussion. I'd very much appreciate comments on my new WELCOME MESSAGE and my first attempt at a user's guide to HUMANIST. These will be coming along shortly.

From: MCCARTY@UTOREPAS Subject: Date: 21 May 1987, 20:32:37 EDT X-Humanist: Vol. 1 Num. 22 (22) Welcome to HUMANIST HUMANIST is a Bitnet/NetNorth/EARN electronic discussion group for people who support computing in the humanities. Those who teach, review software, answer questions, give advice, program, write documentation, or otherwise support research and teaching in this area are included. Although HUMANIST is intended to help these people exchange all kinds of information, it is primarily meant for discussion rather than publication or advertisement.
In general, members of the network are encouraged to ask questions and offer answers, to begin and contribute to discussions, to suggest problems for research, and so forth. One of the specific motivations for establishing HUMANIST was to allow people involved in this area to form a common idea of the nature of their work, its requirements, and its standards. Institutional recognition is not infrequently inadequate, at least partly because computing in the humanities is an emerging and highly cross-disciplinary field. Its support is significantly different from the support of other kinds of computing, with which it may be confused. Perhaps you don't think so. In any case, let us know what you do think, about this or any other relevant subject. HUMANIST is one of the inaugural projects of a new special interest group for the support of computing in the humanities, which is currently applying for joint affiliation with the Association for Computing in the Humanities (ACH) and the Association for Literary and Linguistic Computing (ALLC). Information about this SIG may be obtained by sending a message to George Brett (ECSGHB@TUCC.BITNET). New members are welcome, provided that they fit the broad guidelines described above. Please tell anyone who might be interested to send a note to me, giving his or her name, address, telephone number, university affiliation, and a short description of what he or she does to support computing in the humanities. I will then add that person to the list. If anyone should wish to be dropped from the list, please send a note to that effect. Willard McCarty Centre for Computing in the Humanities University of Toronto (MCCARTY@UTOREPAS.BITNET) From: MCCARTY@UTOREPAS Subject: Date: 21 May 1987, 20:33:03 EDT X-Humanist: Vol. 1 Num. 23 (23) How to Use HUMANIST Currently anyone given access to HUMANIST can communicate with all other members without restriction. 
A member need not be on Bitnet but can use any comparable network with access to Bitnet. Thus, to send mail to everyone simultaneously, use whatever command your system provides (e.g., NOTE or MAIL) addressed to HUMANIST at UTORONTO. Your message is then sent by your local software to the UTORONTO node of Bitnet, where the "Revised List Processor" (or ListServ) automatically redirects it to everyone currently on the list of members. Restricted conversations or asides can, of course, develop from the unrestricted discussions on HUMANIST by members communicating directly with each other. This is particularly recommended for replies to general queries, so that HUMANIST and its members are not burdened with messages of interest only to the person who asked the question. If, for example, one of us asks the rest about the availability of software for keeping notes in Devanagari, suggestions should be sent directly to the questioner's e-mail address, not to HUMANIST. Please use your judgment about what the whole group should receive. We could easily overwhelm each other and so defeat the purpose of HUMANIST. Draconian methods are available for controlling a discussion group, but self-control seems preferable. This is not to discourage controversy -- quite the contrary -- but only what could become tiresome junk-mail. New members will be interested to know that ListServ at UTORONTO maintains an archive of messages for the past month. If you have just joined and want to know the recent history of discussions, enter the following command (or its equivalent on non-VM/CMS systems):

TELL LISTSERV AT UTORONTO GET HUMANIST LOG8705

ListServ will then send you the contents of the monthly archive. ListServ accepts several other commands, for example to retrieve a list of the current members or to set various options. These are described in a document named LISTSERV MEMO.
This and other documentation is available to you from your nearest ListServ node and is best fetched from there, since in that way the network is least burdened. You should consult with your local experts to discover the nearest ListServ; they will also be able to help you with whatever problems in the use of ListServ you may encounter. Once you have found the nearest node, type the following:

TELL LISTSERV AT XXXXXX INFO ?

The various documents available to you will then be listed. Suggestions about the running of HUMANIST or its possible relation to other means of electronic communication are very welcome. Please let me know what you think about these matters directly, at the address given below. Willard McCarty Centre for Computing in the Humanities University of Toronto (MCCARTY@UTOREPAS.BITNET)

From: CMI011%UK.AC.SOTON.IBM@AC.UK Subject: Date: Thu, 21 May 87 23:59:13 GMT X-Humanist: Vol. 1 Num. 24 (24) Is this the correct way to send stuff to HUMANIST? If so, can I make three suggestions as to the service, and add one genuine contribution.

a) Those of us outside the IBM world are pretty much in the dark about what LISTSERV is.... (at least, I assume it's IBMese 'cos it looks like nasty VM/CMS type commands). Could you, for the ignorant, explain what it's all about?

b) when I logged in tonight, I got 8 messages from HUMANIST which took a while to read and digest. And the thing has hardly started! Would not a weekly digest be more appropriate, such as other SIGs use? Otherwise I shall spend all my life reading mail! On the same tack, are copies of each of these messages really being sent individually across the Atlantic to readers in the UK? That seems pretty wasteful - could not someone in each country do a redistribution?

c) would someone care to define an etiquette for contributions to this list? eg what is the maximum length a contribution should be?
I ask because I am about to write a report on my last year's teaching of computing arts courses, which I would consider of vague interest to readers. If it's 20 pages, I guess it's too long, and obviously 1 page is fine - what about 6 pages? 8? How much are people prepared to read on a nasty orange VDU or whatever you have in front of you?

d) Here's my real question, which I asked on the English HUMBUL bulletin board last year, and got no answers to. Do any of you out there teach Prolog in introductory courses to Arts students? If so, what do you set in the way of assignments? I need exercises for my students! sebastian rahtz

From: "DD ROBERTS (PHILOSOPHY)" Subject: Date: Sat, 23 May 87 13:26:00 EDT X-Humanist: Vol. 1 Num. 25 (25) In September I will be teaching a course in which material stored on the computer will be used as a textbook. The material is previously unpublished manuscripts of a philosopher. This is the first time I have tried anything like this, and I wonder if anyone would be willing to give me a few hints as to how best I might organize things like: access to the material, format of the material, and so on. Thanks.

From: Leslie Burkholder Subject: Reading on-line Date: Sat, 23 May 87 17:22:36 edt X-Humanist: Vol. 1 Num. 26 (26) You should probably print out the material rather than have your students read it on-line. Some research (here, by Chris Haas) shows that reading for content is as good as reading from paper only when the screen is a black-on-white one with about 1000*1000 pixels. (These screens are found on Suns, eg.) Performance decreases when something like an IBM-PC screen is all that's available. In addition, there are problems in easily conducting searches in the text on-line (eg, Where did the author say something about that before? What was the main point of this section supposed to be?) Leslie Burkholder

From: Leslie Burkholder Subject: Prolog exercises for arts students Date: Sun, 24 May 87 18:06:23 edt X-Humanist: Vol. 1 Num.
27 (27) How about the following?

(1) Logic puzzles (the sort of thing found in many collections). These have the advantage of showing that Prolog programming is not pure logic programming.
(2) A program that writes poetry (especially haikus). An ELIZA program.
(3) A parser for a fragment of English using Prolog definite clause grammars.
(4) History databases and retrieval. Lots of stuff by people in various ways attached to the LCA micro-Prolog group at Imperial College (eg Richard Ennals). There are quite a few intros to Prolog written by people in this group, but they are aimed, it seems, at primary and secondary rather than college students.
(5) A spelling and grammar checker. (Distinguish "their" and "there".)
(6) Something to play tic-tac-toe (noughts and crosses).
(7) Various kinds of search problems (the farmer, the fox, the goose, and the grain; the waterjug problem).

(I teach a logic and Prolog course to arts students.) Leslie Burkholder

From: MCCARTY@UTOREPAS Subject: Date: 25 May 1987, 00:13:40 EDT X-Humanist: Vol. 1 Num. 28 (28) Members of HUMANIST continue to suggest to me that the flow of messages should be regulated by an editor, who would decide what should and should not be sent to the membership at large. I stubbornly continue to think that all members should be editors. So, I wonder, what do you want? To wait and see? To have an editor immediately? Let me propose an alternative. The immediate use of HUMANIST seems to be for asking rather specific questions, of great interest to the questioner but not so much, I suspect, to most others. My proposal is that all who reply to such a question should send their replies directly to the questioner, not to HUMANIST. The questioner would then gather together the responses, attach them to the original question, and post the results to HUMANIST, as he or she would see fit; or, perhaps, the questioner might offer the results to anyone who asks for them.
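[Editorial aside, not part of the 1987 exchange: the last of Burkholder's suggested exercises, (7), the farmer/fox/goose/grain river-crossing puzzle, is at bottom a state-space search problem. A minimal sketch in modern Python (rather than the Prolog his course used) might look like this; the state encoding and function names are the editor's own invention.]

```python
from collections import deque

# State: (farmer, fox, goose, grain), each 0 (near bank) or 1 (far bank).

def safe(state):
    """A state is unsafe if the fox is left with the goose, or the
    goose with the grain, without the farmer present."""
    farmer, fox, goose, grain = state
    if fox == goose and farmer != fox:
        return False
    if goose == grain and farmer != goose:
        return False
    return True

def moves(state):
    """The farmer crosses alone (i == 0) or with one item on his bank."""
    farmer = state[0]
    for i in range(4):
        if state[i] != farmer:
            continue  # can only take an item from the farmer's own bank
        new = list(state)
        new[0] = 1 - farmer
        if i != 0:
            new[i] = 1 - farmer  # the chosen item crosses too
        new = tuple(new)
        if safe(new):
            yield new

def solve(start=(0, 0, 0, 0), goal=(1, 1, 1, 1)):
    """Breadth-first search: returns the shortest sequence of states."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in moves(path[-1]):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

solution = solve()
print(len(solution) - 1, "crossings")  # the classic answer: 7 crossings
```

[In Prolog the same search is usually written as a recursive path predicate over safe states; the breadth-first version above additionally guarantees the shortest plan.]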
McLuhan talked about the "global village"; we seem to be in the midst of one. As I know from living in a housing cooperative, making a suddenly created community work takes some thought and seems to involve much error. Which direction shall we blunder in? It has occurred to me that we could very effectively distribute reviews of software and other written work of interest among ourselves, using HUMANIST either before publication, as a means of getting comments, or as an indirect means of publication. So, another proposal: that we announce to the membership the availability of such things and then send them to whoever asks for them -- directly, not via HUMANIST. Since software tends to change so rapidly, electronic publication seems more suitable, though it still doesn't get as much credit. Are there any other such uses for HUMANIST?

From: MCCARTY@UTOREPAS Subject: Date: 26 May 1987, 23:03:02 EDT X-Humanist: Vol. 1 Num. 29 (29) An interesting item available from our Computing Services:

From: Subject: Date: X-Humanist: Vol. 1 Num. 30 (30) February 13, 1987 Academic's Guide to Microcomputer Systems Second Edition The University of Toronto Computing Services (UTCS) announces the second edition of the Academic's Guide to Microcomputer Systems, a book on the application of microcomputer hardware and software in academia. The Guide is written primarily by the UTCS Microcomputer Support Group and is addressed to both the complete novice and the experienced user, although the emphasis is on introductory explanations. This edition of the Guide comprises eight volumes, varying in size. Volume 1 contains material of an introductory and general nature. In its section on hardware, this volume presents a detailed treatment of the typical components of a microcomputer, including the enhancements commonly required for academic applications. It points out the many pitfalls of selecting a machine and suggests means of avoiding them.
The section on software divides academic software by type and supplies for each a general discussion. Among several useful appendixes is an extensive glossary of terms. Volume 2, "Hardware Evaluations," contains brief reviews of various commercially available systems, while each of the remaining volumes covers a particular type of software, offering detailed reviews of several of the best packages. The reviews are not limited to listings of features and commands but attempt to describe the basic approach of the package in terms of its advantages and limitations for academic work. The publisher plans to issue new editions more or less annually. Each edition will be available to individuals in hardcopy and to educational institutions in electronic form via BITNET. Colleges and universities interested in distributing the Guide to their students and faculty may obtain it in electronic form free of charge. There are two requirements only: a BITNET/NetNorth address and agreement to certain minimal conditions by a site administrator with signing authority. BITNET/NetNorth enquiries should be directed to Dr Martha Parrott (PARROTT at UTORONTO). Printed copies of the Guide are available at CAN$21 for the complete set (Volume 1 is available in a loose-leaf binding for an extra $1.50). We regret that, for mail orders, the entire set must be purchased; add $3 per set for postage in Canada, $6 in the U.S., and $8 overseas. Please send the appropriate amount, in Canadian funds, to Ms. Dale Wright, Information Office, University of Toronto Computing Services, 255 Huron Street, Room 350, Toronto, Ontario, Canada M5S 1A1. From: LOU%UK.AC.OXFORD.VAX1@AC.UK Subject: Date: 26-MAY-1987 17:56:55 X-Humanist: Vol. 1 Num. 31 (31) Do all Humanists know about the Oxford Text Archive? 
For those who don't, I am currently trying to find out how to send a copy of our snapshot (listing all our machine-readable texts) to the fileserver at Toronto so that you can get hold of it from there direct; if I fail, let me know and I'll mail you one direct. For those who do, since this is our tenth year (at least) of operation, I've been worrying about the future. There follows some of the fruits of this in the form of a proposed Code of Practice, designed to liberalise and maybe improve the facilities we currently offer. I'd be very grateful for comments, suggestions, reactions, advice on it before I start actually doing anything about it. Lou Burnard

A CODE OF PRACTICE FOR A DIGITAL TEXT ARCHIVE
Draft L.D. Burnard 24 May 1987
-----------------------------------------------------------------

Purpose

The purpose of a digital Text Archive is to promote and facilitate the computational analysis of literary and linguistic texts for scholarly and educational purposes. To that end, the Archive provides facilities for the long term storage and maintenance of machine readable texts, publicises information about the existence and availability of such texts and encourages so far as possible the distribution of such texts on a scholarly and non-profit making basis.

The Guidelines

Depositors of texts in the Archive and users of texts obtained from it are required to conform to the following guidelines which together constitute the DITA Code of Practice.

1. To use the text for purposes of scholarly research only and not for profit. To publish in an appropriate scholarly context the results of analyses carried out using the text. Where requested, to make such analyses available to the depositor of the text in advance of publication.
2. To acknowledge in such publications or other work carried out both the original depositor of the text and the Archive itself.
3. Not to hold the Archive liable for any errors of transcription discovered in the text, but to notify the Archive of all such errors as soon as possible.
4. Whenever substantial alteration, enrichment, or revision of the text as received has been performed, to inform the Archive of the nature and scope of such alterations. To make any such revised version of the text available to the Archive for redistribution.
5. Not to incorporate the text or any derivative of it in any commercially distributed electronic or other form of publication, except when explicitly so licensed by the Archive or the depositor of the text.

Access to texts

Texts deposited with the Archive are assigned to one of five Accessibility Categories, each with its own characteristics as defined below. Wherever possible, depositors are urged to place texts in category F or U. The Archive may impose maintenance charges for texts in categories A, X and 0, but will maintain texts in categories F and U free of charge. The depositor is assumed to be the owner of the materials deposited with the Archive, or to have obtained appropriate permissions from the copyright owner. The Archive will not accept responsibility for any breach of copyright by the depositor.

F. Texts in category F are freely available for scholarly purposes. Copies may be obtained from the Archive on payment of a small fee to cover material costs, provided that the above Code of Practice is observed by the recipient of the text. Such recipients may make further copies of the texts for re-distribution on a non-commercial basis only, provided that (a) no charge is made for such re-distribution other than to cover material costs, (b) each such copy is accompanied by a copy of the Code of Practice, and (c) all subsequent users conform to the Code of Practice.

U. Texts in category U are also freely available for scholarly purposes. Copies may be obtained from the Archive on payment of a small fee to cover material costs, provided that the above Code of Practice is observed by the recipient of the text. A record will be kept by the Archive of all such copies issued, which will be made available to the depositor on demand. No further copies may be made by recipients of the text without further reference to the Archive.

A. Texts in category A are subject to the same conditions as those in category U, with the additional proviso that copies will be issued only on receipt of written instructions from the Depositor, which should not however be unreasonably withheld.

X. Texts in category X have been deposited with the Text Archive for the benefit of specified local (Oxford) users only. They may not be redistributed; wherever possible however the Archive will be able to identify a source from which potential users can obtain their own copies of such texts.

0. Texts in category 0 are deposited with the Archive for security purposes only. They are available to the Depositor only and their existence in the Archive will not normally be publicized.

END OF DOCUMENT

From: CAMERON%UK.AC.EXETER@AC.UK Subject: HUMANIST NETWORK Date: Wed, 27 May 87 14:32:31 BST X-Humanist: Vol. 1 Num. 32 (32)
KEITH CAMERON
DEPARTMENT OF FRENCH AND ITALIAN
UNIVERSITY OF EXETER
EXETER EX4 4QH GB
392-264209
Interested in keeping abreast of developments on the HUMANIST network. I have worked and am working on computer-assisted concordances and on research into the development of an expert system for the teaching/correction of French and French phonetics.

From: JACKA@PENNDRLS Subject: Date: Wednesday, 27 May 1987 0955-EST X-Humanist: Vol. 1 Num. 33 (33) A number of you asked to be put on the mailing list for the Online Notes. If you have not received back issues of the Notes the reason is I cannot reach you electronically because the path out is not there!
Please help me on this if you still wish to subscribe to the Notes....JACK From: ENGHUNT@UOGUELPH Subject: Date: 28 May 1987, 09:17:05 EDT X-Humanist: Vol. 1 Num. 34 (34) For the linguists, a neologism from computer land: When the system garbles a message, it can be said that the system GARBAGIFIED the message. From: MCCARTY@UTOREPAS Subject: Date: 29 May 1987, 23:26:24 EDT X-Humanist: Vol. 1 Num. 35 (35) I have suggested in passing that HUMANIST could be used to distribute files as well as messages, e.g., reviews of software or other technical reports. With some prodding by a fellow HUMANIST, I've discovered that ListServ allows for centralized distribution of files. Files can be sent to, stored on, and fetched from a ListServ node relatively easily, so that a HUMANIST with information in high demand does not have to be bothered with sending it to each interested person directly. This service is available to us, and I suggest that we use it. So that we don't wear out our welcome with the good people of UTORONTO who sponsor our discussion group, I propose that we limit centralized storage to those files that are truly of general interest. Highly specialized material is better kept by the originator and sent out on request than maintained on an expensive storage medium in Toronto. So, if you have a candidate for centralized distribution, please send it to me; we can argue about its generality and then have it posted or not. In the language of a VM/CMS system, the following command will get you a list of the files maintained for HUMANIST:

TELL LISTSERV AT UTORONTO SENDME HUMANIST FILELIST

And this command will cause a selected file to be sent to you:

TELL LISTSERV AT UTORONTO GET

For translation of these commands into the language of your system, please consult with your local experts. I have discovered that in many cases wisdom is the cultivation of ignorance. Yours, W.M.
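[An editorial aside: the TELL syntax above assumes an interactive VM/CMS session. ListServ servers of this era generally also accepted the same commands placed in the body of an ordinary mail message addressed to the server itself, which is how users on non-CMS systems typically reached them. The following is a hypothetical sketch of such a message, not taken from the original note:

```text
To: LISTSERV@UTORONTO
Subject:

SENDME HUMANIST FILELIST
```

The server reads the command from the message body and mails the requested file or file list back to the sender.]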
From: CAMERON%UK.AC.EXETER@AC.UK Subject: Filelist Date: Sat, 30 May 87 11:42:45 BST X-Humanist: Vol. 1 Num. 36 (36) TELL LISTSERV AT UTORONTO SENDME HUMANIST FILELIST From: MCCARTY@UTOREPAS Subject: Date: 1 June 1987, 14:13:58 EDT X-Humanist: Vol. 1 Num. 37 (37) Two recently promulgated errors: (1) I was given the wrong information about fetching a list of resident files from the UTORONTO node. I will let you know when the proper syntax has been determined, and I'll then be able to give you some idea of how much we can afford to store centrally. (2) A bad address for a colleague on MLNET has caused a rash of irrelevant chatter, for which I apologize. The problem is being investigated, but since other messages get through to him at the same address, it's difficult to know what's responsible. Yours, W.M. From: MCCARTY@UTOREPAS (forwarding Frank Wm Tompa) Subject: electronic publication Date: 1 June 1987, 14:35:53 EDT (Mon, 1 Jun 87 08:43:12 EDT) X-Humanist: Vol. 1 Num. 38 (38) Because HUMANIST is non-refereed, I would think that it falls in a completely different category from a refereed journal. I believe that an author could honestly claim that circulation in HUMANIST is similar to circulation of a technical report -- regardless of its quality, it is not perceived as a publication (wrt brownie points). On the other hand, such circulation does not in any way infringe on journal publication of the same document. I would claim that, in CS at least, there would be no conflict nor any perceived conflict. P.S. Were there to be a conflict, I, as an author, would definitely choose a refereed journal if I thought the paper was of sufficient quality. From: MCCARTY@UTOREPAS Subject: Date: 2 June 1987, 21:45:04 EDT X-Humanist: Vol. 1 Num.
39 (39) A contribution from Jeff Gillette, whom many of you must know, at least by reputation (as author of the Duke University Toolkit, that is): From: Jeffrey William Gillette Subject: Re: Electronic publishing Date: Sun, 31 May 87 20:48:10 EDT X-Humanist: Vol. 1 Num. 40 (40) Regarding the subject of electronic vs. printed journal publishing, perhaps I might throw my own 2 cents into the discussion. Let me start with two questions. Would there be any advantage in contacting one or several professional journals who have editorial members involved in ACH, your Humanist project, or some subset of the two, inviting them to publish the "best of the Humanist discussion" that is applicable to the appropriate discipline(s)? It seems to me that the short question/answer/comment format of, e.g., Byte, would not be entirely appropriate. Perhaps, however, several "mini-columns" or longer discussions that take place each month would prove useful to the larger audience of the traditional journals. The first objection I see to this type of arrangement would be the problem of credit - does the university consider the regular computer column of Dan Brink or Bob Kraft in their particular discipline journals as valid a contribution to the academy as their more traditional contributions? Could some type of peer review of such columns be set up by which tenure committees could be satisfied that this publication was indeed a valid contribution to the discipline? In the case of a school like Duke I have serious doubts. Perhaps other faculty are more enlightened. I know this suggestion is unpolished, but I guess it is as useful as any, given the ofttimes languid pace with which electronic media are being integrated into many of our disciplines. Peace, Jeff From: CMI011%UK.AC.SOUTHAMPTON.IBM@AC.UK Subject: Date: Wed, 3 Jun 87 16:18:25 GMT X-Humanist: Vol. 1 Num.
41 (41) Here's my thoughts on the 'how to run Humanist' debate, which is of general interest beyond Humanist, I think (the debate, not my thoughts!). I think Willard's introductory note to new readers adequately defines what is and is not proper material for the list, so that leaves three problems: a) should the distribution be a weekly digest? At present, Toronto just forwards incoming material as soon as it arrives. This means that most days one gets a Humanist or two, each of which may add to a debate. This is all very fine in theory, but in practice I find it confusing as I get flustered by the size of my mailbox. I would much prefer it if all messages were collected for a week and then mailed on to us in a batch, so that, say, every Monday morning one got a huge mail full of people's thoughts. As it is, I wonder who else is behaving like me - I waited for a few days, then printed out all the Humanist stuff and am now replying. b) mail vs. filestore. I would say that no contribution should normally exceed 150 lines, and that most should fit on a single screen or two. It simply is not possible to comfortably read the text on a VDU for most of us (I exclude the SUN owners, lucky dogs). I would go so far as to suggest that a Humanist editor (presumably Willard - sorry!) be asked to look at messages and put those over 150 lines into a file server system. c) "publication" - is this a real issue? Anything as long as an article that is going to give any brownie points in one's institution isn't appropriate for electronic distribution in the present state of the technology; unless we start agreeing standards (Interleaf files? TeX? raw PostScript?) it's just not on to present a complex document on the screen across n continents. Ergo, only paper publication will do, so the Humanist version isn't the 'real' version, so editors should not be upset... However, sending out drafts for comment is common practice anyway, so why not do it via Humanist?
For myself, I intend to use Humanist for a) gossip cum question and answer b) sending out stuff that I would otherwise send xeroxes of c) giving synopses of work which is being fully published elsewhere. Flame off, as they say. Let's get into the adverts - does anyone over in North America want to go and buy "Information Technology in the Humanities", ed. S. Rahtz, Halsted Press (John Wiley) 1987? It consists of 14 chapters by English academics about the problems of introducing computing into humanities curricula, both general issues (whether to learn programming, whether word-processing is academic etc) and discussions of particular subjects (the more unusual ones - Archaeology, Music, Classics etc - are present, while the much-written-about Computer Assisted Language Learning is not). It's about teaching ABOUT computing, not teaching WITH it. Does anyone else teach Prolog? Leslie Burkholder's ideas were welcome, and I would be curious to hear others. My students don't get any further than a dating agency - I wish I could get them as far as foxes, geese and grain! sebastian rahtz From: Michael Sperberg-McQueen Subject: Weekly updates? Date: Wed, 3 Jun 87 16:58:46 CDT X-Humanist: Vol. 1 Num. 42 (42) This is to comment on Sebastian Rahtz's suggestion to make HUMANIST produce a weekly digest instead of a daily stream of new mail. I agree that daily influxes of new mail can be distracting, but I think we are better off with the system as it stands. I won't dwell on the obvious point that a weekly digest would involve a fairly substantial change to LISTSERV (which runs a continuously active 'server machine' to accept mail sent to a list, and forward it to the active members of that list automatically; most LISTSERV machines also handle subscriptions and cancellations without human intervention). Nor shall I belabor the point that a weekly digest requires preparation and editing -- thankless work for which it really seems unfair for us to volunteer Willard.
ListServ makes it relatively easy to run a conference, but only because the host must seldom intervene. If the conference host must edit the mail and produce a digest manually, who will ever volunteer to be a host? The mild inefficiency of sending multiple copies of the mail across the Atlantic is readily corrected if someone on that side has a VM/CMS machine and the new "distributed LISTSERV" -- assuming UToronto has the distributed version too. But is it worth the effort? A digest, however well constructed, is never a substitute for the actual conversation, and not often a perfect guide to it: our individual interests varying so much, a single digest might find it hard to serve everyone's purposes. Also, while I can usually find five minutes to read one day's Humanist mail, I won't often find twenty minutes for a week's worth. And I at least find the development of discussion over real time more interesting. For context, I do save the mail up and print it off periodically for review; as S.R. points out, that's not hard to do. Michael Sperberg-McQueen From: LOU%UK.AC.OXFORD.VAX1@AC.UK Subject: Date: 4-JUN-1987 16:33:32 X-Humanist: Vol. 1 Num. 43 (43) Concerning LISTSERVER and multiple copies flying (more like waddling) across the Atlantic... Which JANET IBM CMS site would like to volunteer to be a distributed LISTSERV node? It should be one fairly near water [for walking on], with a local commitment to supporting arts computing in general and networking in particular, and of course a really up-to-date mail server. Come to think of it, Sebastian ...... Meanwhile, I would still like to know where I should send the Text Archive snapshot so that people can get it direct. And also how to get whatever's there myself. I agree with Michael SPMcQ that the only point of having Humanist is as a continuous flow of information (or whatever it is).
Unfortunately, the JANET/EARN/BITNET connexion is still plagued by machine failures, inconsistent address tables, lost messages, inadequate messaging protocols etc. So instead of a flow we tend to get nothing at all for a few days and then 29 in a row, just like buses. This may also have something to do with the astonishment with which I have been reading the discussion about whether or not electronic publication "counts" in some sense. In my book, this medium has a very long way to go indeed before it could reasonably be called publication. For a start, it's not accessible to everyone. Lou From: MCCARTY@UTOREPAS Subject: Date: 4 June 1987, 23:40:39 EDT X-Humanist: Vol. 1 Num. 44 (44) It seems obvious that electronic publication, like other activities related to computing, seldom if ever counts professionally in the humanities. For this reason, I'm told, much good courseware doesn't get designed by those who are best qualified to design it, and even articles and reviews printed in reputable journals rarely mean very much on a c.v. In part, as a friend just pointed out to me, this is justified; the integrity of the discipline is at stake. I wonder, however, how some of the finer work in computing can be rigidly distinguished with respect to worth from the compilation of bibliographies or the editing of texts, for example? In any event, those of us who find ourselves supporters of computing in the humanities can do something about defining our work academically and raising its standards. We can also, together, lobby for its recognition; hence the SIG of which this discussion group is a product. Take electronic publishing. Following the analogy nearest at hand, we could form an electronic journal, with an editor, an editorial board, and all the rest. What would the relationship be between this journal and the printed variety? How would the medium influence the message? In length of articles? In style? Alas, it would in language, but English is the current lingua franca.
It seems to me that the sine qua non of worthwhile electronic publication is quality of language and thought as well as accuracy of information. This means good editing and demanding review. It could be done (I think should be done), though at some risk to the first few contributors. Any takers -- or givers? Having completed two substantial academic papers for which neither the research notes nor the written words touched paper until the very end, I have some confidence that electronic publication in our sense need be neither trivial nor slapdash. My faith is that if we do a first-class job in our common profession, it will begin to be properly recognized. To be practical: we could form a separate ListServ for this journal; contributions to it would be mailed to the editor; he or she would mail these to the editorial board for consideration, or to readers; rejection, revision, publication would ensue; the readership would consist of the members of the discussion group. Some subjects come immediately to mind: computing activities at university X; reviews of software and hardware; proposals for design and implementation of software; computational methodologies; administrative structures for support of computing in the humanities; and so forth. In some cases, articles could be "reprinted" from newsletters or small journals, whose circulation is unfortunately restricted; in other cases articles might anticipate publications in print. No doubt there are many problems with such a proposal. If they seem worthy, please point them out. From: CMI011%UK.AC.SOUTHAMPTON.IBM@AC.UK Subject: Date: Fri, 5 Jun 87 14:06:31 GMT X-Humanist: Vol. 1 Num. 45 (45) a) OK, so other people prefer their meat daily. I ride a bike, so I am not used to Lou Burnard's buses, but I don't actually find that the HUMANIST messages are backing up like that. My site gets them via a UUCP link to a proper JANET site, so maybe that reshuffles them.
Anyway, daily or weekly meat is relatively trivial - what about size of contributions? b) Willard's note about an electronic journal is attractive, but I do not see it working, for the simple reason that the technology isn't up to it (nor looks like being). All the paraphernalia of a typeset book isn't there just as a nicety, it all serves a purpose, and the e-mail environment that most of us live in (I think I am well off in a Unix world; if I had to read Humanist on this IBM I use for sending, I would give up!) does not have the facilities to convey thoughts as we are used to doing on paper. That said, there is no reason why a conventional journal should not operate as Willard suggests (I presume some do) - why not do all that he suggests but just print it at the end (which is all most of us would do anyway!). Now if you suggest distributing papers in TeX format for us to print locally... but there would be howls of protest about that. Does one NEED a new journal? All the problems (who are the editors? how does it get credibility? build up a circulation? etc etc) would be there for the E-Journal as for the paper one (save ONLY for printing costs, and you would simply be asking people to pay for that on the fly instead of as a subscription). I think Willard's most interesting point is when he suggests that computer work be given as much credence as, say, editing a text. That I must agree with - but I don't think an E-Journal would help. What do punters think of the floppy disc SCOPE? In many ways, that's an attractive option, as the PC environment is much more civilised than the e-mail one. What about all Willard's ideas going toward something like that? Sebastian Rahtz P.S. Sorry, Lou, I don't think we have LISTSERV here. Computers are for FORTRAN in Southampton! But I am pursuing the matter. From: R.J.HARE%UK.AC.EDINBURGH@AC.UK Subject: Electronic Publishing Date: 05 Jun 87 17:10:18 bst X-Humanist: Vol. 1 Num.
46 (46) Picking up only one of Sebastian Rahtz's points, I think it's true to say that (though there are other reasons) 'humanists' here at Edinburgh are reluctant to do a lot of work on the computer (any computer!) because such work is not visible to the people who assess (dangerous term at the moment!) their work. An electronic journal would be similarly invisible to such people, and therefore not much better from that point of view than any other work on the computer. It's interesting to realise that this may well be a general problem. This is a subjective opinion, formed as a result of casual conversations with various people around here, but I think that this situation might be changing (locally!) - the sooner the better, I feel. Roger Hare. PS. What's a bus? From: MCCARTY@UTOREPAS (forwarding Dan Brink) Subject: credit Date: 5 June 1987, 15:33:00 EDT (Thu, 04 Jun 87 19:12:36 MST) X-Humanist: Vol. 1 Num. 47 (47) In a HUMANIST note dated May 31, Jeff Gillette addresses the issue of academic "credit" for computer activity: Do tenure and promotion committees value programming, software reviewing, and other of the activities so typical of HUMANIST addressees? (A recent issue of the Chronicle covered this matter in some detail; I would recommend it to anyone interested in such matters: The Chronicle of Higher Education, 3/18/87, XXXIII, #27, p. 1, "Software for Teaching Given Little Credit in Tenure Reviews.") I do think that computing activity is little valued by colleagues (unless their printers won't work). In my department, for example, writing software ranked dead last in a list of 35 activities considered worthy for English faculty. And, in fact, I have dropped the MLJ column because the activity is so little valued. Further, we all know of cases in which well known and respected humanists have been forced out of the profession. At the same time, I was not hired to work with computers. (The micro didn't exist when I was hired!) So, it is to some degree my own doing.
And I have not really used the computer as a tool in my own research; I have worked on the improvement of the tool itself. A number of people, like Jones, Oakman, Smith, Abercrombie, etc., have dealt with the problem by moving to computer science departments or by receiving an administrative assignment officially condoning their humanities computing activities. That, I think, is the key. Those entering the field now would be well advised to be sure that their computer activity officially be made part of their job description. Computer familiarity will probably be a real plus for humanists entering the job market in the next few years; they should be sure that they will be given full credit for their activity down the road. But programming, running computer centers, etc., is not, and probably should not be, valued as research. Enough of this; back to FLEXTEX . . . From: CAMERON%UK.AC.EXETER@AC.UK Subject: Phonetics/Publications Date: Sat, 06 Jun 87 13:55:30 BST X-Humanist: Vol. 1 Num. 48 (48) This week I have been absent from Exeter for three days - on my return I found a dozen communications from the Remote Grey Book. The subject under discussion is obviously an interesting one, but I found wading through my electronic mail and following the discussion on the print-out a very tiresome process. I do not think that I should welcome a plethora of articles in this form. Abstracts, bibliographies, yes. The danger of ready availability is ready forgetability or ignorability. As for assessing the 'status' of electronic articles, we obviously have to ask the inevitable question of why an article is written. If its aim is uniquely to gain promotion, then electronic diffusion will have very little status, or one comparable to that of home-produced material at present. A student here has produced an elementary program for automatic correction of French phonetic transcription. Who else is working in this field? Who has perfected the process? Who would be prepared to cooperate?
Keith Cameron From: MCCARTY@UTOREPAS Subject: Date: 7 June 1987, 16:59:11 EDT X-Humanist: Vol. 1 Num. 49 (49) Although it is barely more than a week old, our talk about electronic publication has already taken on an interesting shape. The following is an elaborated summary. It is not intended to conclude our discussion but to alert you to its directions. The discussion began with the possibility that HUMANIST could be used to distribute files centrally and with the recognition that printed journals do certain things rather poorly. A proposal for an electronic journal, however, provoked significant criticism on two grounds: (1) that it would give its contributors little or no professional recognition, and (2) that the electronic medium is at present incapable of representing typographic effects (such as italics) and the characters of many languages other than English. It seems highly unlikely that as humanists we can correct the almost universal disregard for work in computing among the committees that govern hiring, tenure, and promotion. There seem to be two reasons for this disregard. The major one is that academic "job descriptions" seldom if ever recognize this kind of work. One of us suggested that when individuals are hired for academic jobs they should attempt to get computing activities explicitly mentioned among their duties. First, however, we need to understand the appropriate analogies. What sorts of work are we talking about? Is some of it equivalent to committee work? to compilation of bibliographies and other duties of running a journal? to editing of texts? What kind of involvement with computing in the humanities (if any) should be recognized as genuine research? We could ask, for example, what a specialist in literary criticism or theory does that in principle is more research-like than the work of the designer of software for sophisticated textual analysis.
The second reason for the disregard from our academic masters and colleagues may be the often poor quality of the writing (and sometimes thinking) associated with computing. The informality of the medium may have quite a bit to do with this. Mainframe editors are in general so primitive and screen images so difficult to proofread that we are tempted to slap something down and dash it off without much thought. We can do something about this, it has been suggested, by peer-review and editorial intervention. Nevertheless, informality in an electronic discussion allows for the communal development of ideas. Some of us have suggested that in cooperation with one or more established journals we distill from our discussions items to be printed in those journals, on the analogy of Bix in Byte magazine, though not in the same format. (The president of the ACH has mentioned the possibility of such publication in the quarterly newsletter of that organization.) It has also been pointed out that in computer science, articles can be circulated as "technical reports" without any perceived conflict with later publication in refereed journals -- and without the "brownie points." Our discussion seems to have converged on an understanding of what the electronic medium is and is not suitable for. Thus to attempt a close analogue of the refereed journal would be a mistake; at the same time, there is a need for a means of publication that takes advantage of the informality and rapidity of electronic mail. To allow for this almost conversational informality, editorial intervention needs to be kept at a minimum. Editing is required in the transition into print, however. Perhaps, as an affiliated activity of the ACH and ALLC, we should ask for the help of these two organizations in publishing summaries of our discussions here and various newsworthy items. 
I would also like to suggest that we talk with the editors of such journals as CHum about circulating versions of articles to be published there. Reviews of software are particularly crucial. Please let us all know what you think. (This note is 73 lines & about 4000 characters long.) From: CAMERON%UK.AC.EXETER@AC.UK Subject: Electronic Publication Date: Mon, 08 Jun 87 10:57:39 BST X-Humanist: Vol. 1 Num. 50 (50) As intimated in my last communication about HUMANIST publication, I am in favour of SHORT notices, abstracts, reviews, bibliographies, publicity, etc. New methods demand new styles, and it should be possible to develop a new style of writing for electronic publication - a universal system of truncation, abbreviation and notation which would be interpreted and formatted by universally distributed local programs, etc. KCC. From: GW2%UK.AC.YORK.VAXA@AC.UK Subject: MILTON PROJECT PROPOSAL Date: 11-JUN-1987 10:37:30 X-Humanist: Vol. 1 Num. 51 (51) First thoughts prompted by reading the discussion about electronic publishing: The new medium makes possible new forms of intellectual work, forms of collective research and collaborative writing that have not yet been defined, professionally or institutionally. In a spirit of experiment, I want to propose a collaboration, to any and all who are interested. The focus will be John Milton. The substantial example & inspiration might be Christopher Hill's MILTON AND THE ENGLISH REVOLUTION, together with Stephen Greenblatt's RENAISSANCE SELF-FASHIONING. They suggest a form of cultural history which could be refined and extended by collaboration. Anybody there?? From: Willard McCarty Subject: Date: 11 June 1987, 22:07:54 EDT X-Humanist: Vol. 1 Num. 52 (52) A fellow HUMANIST here mentioned to me over dinner (very good it was, too) his frustration with notes that don't contain the name of the sender.
On some systems one can easily guess at least the first or last name, but if your userid is "43256_XRRG," for example, it's not so easy. A human name gives the imagination something to work with. On a VM/CMS system a user gets his or her full name included by putting it in the NAMES file and then opting for the "long" option when sending a note. I don't know what one does on other systems. Alternatively, you can just sign your name to the bottom of your notes to HUMANIST. So, I propose that henceforth we submit only signed notes. If for some reason you want to send an anonymous message, send it to me and I'll pass it on to HUMANIST without your name or userid attached. Thanks for your patience. From: CAMERON%UK.AC.EXETER@AC.UK Subject: CALL CONF/EXETER Date: Fri, 12 Jun 87 09:47:59 BST X-Humanist: Vol. 1 Num. 53 (53) For all HUMANIST readers - accommodation still available if required.

UNIVERSITY OF EXETER
PROGRAM STRUCTURE and PRINCIPLES in CALL
Lopes Hall, September 21-23 1987
COST 50 pounds all inclusive - pro rata rates available
(Draft programme)

MONDAY September 21
16.30-18.00 Registration
18.00 Reception
19.00 Dinner
20.15 S.Dodd (Exeter) CALL and the chalkface.
      D.F.Clarke (U.E.A.) Design considerations in the production of extended computer assisted reading materials.

TUESDAY September 22
08.00 Breakfast
09.30 P.Hickman (La Ste Union) Structuring interactive grammar practice programs.
      D.Ferney (Wolverhampton Poly.) A computer model of the French native speaker's skill with grammatical gender.
10.45 Coffee
11.15 O.Durrani (Durham) Designer Labyrinths: Text mazes for language learners.
      A.Benwell (Lanchester Poly.) How we use HELP facilities.
13.00 Lunch
14.30 A.Kukulska-Hulme (Aston) Liberation or constraint: the usefulness of a program interface to a vocabulary database.
      G.A.Inkster (Lancaster) Databases as a learning activity.
15.45 Tea
16.15 Workshop: Reading Programs - D.F.Clarke (U.E.A.); I.Morris (Manchester Poly.).
      Language Programs - D.Ashead (B'ham); O.Durrani (Durham).
18.30 Wine reception
19.00 Dinner
20.15 J.D.Fox (U.E.A.) Can CAL aid vocabulary acquisition?
      L.M.Wright (UC, Bangor) Aspects of text storage and text compression in CALL.

WEDNESDAY September 23
08.00 Breakfast
09.30 D.Scarborough (City London Poly.) The computer as a teaching resource on a Commercial French course.
      J.E.Galletly (Buckingham) Elementary verbal phrase syntax-checker for French sentences.
10.45 Coffee
11.15 Workshop: Language programs - M.Blondel (City London Poly.); B.Farrington (Aberdeen); P.Hickman (La Ste Union); D.Ferney (B'ham); M.L'Huillier (Brunel).
13.00 Lunch
14.15 B.Farrington (Aberdeen) A.I.: Grandeur et servitude.
      M.Yazdani (Exeter) Tools for second language teaching. Future projects.
15.45 Tea

KCC/EXETER

From: GW2%UK.AC.YORK.VAXA@AC.UK Subject: MORE ABOUT PROPOSED MILTON PROJECT Date: 12-JUN-1987 14:55:09 X-Humanist: Vol. 1 Num. 54 (54) MILTON PROJECT : SECOND PROPOSAL -------------------------------- A From: GW2%UK.AC.YORK.VAXA@AC.UK Subject: MILTON PROJECT: A SECOND PROPOSAL Date: 12-JUN-1987 16:49:44 X-Humanist: Vol. 1 Num. 55 (55) I want to offer some more details about the proposed collaborative project on Milton. Don't let the initial reference to Hill & Greenblatt put you off ...my interest in Milton begins from an encounter with that body of work, but is not circumscribed by it. I would like to invite collaboration on a close linguistic study of the early poetry, exploring Milton's affinities with Spenser and Shakespeare, within the historical framework of the development of Early Modern English. Obviously this kind of inquiry lends itself, in its initial phase at least, to the use of a computer to compile the data. Once the OED is available on-line it will be possible to ask complex questions about the history of the language with a new precision.
I'm not sure how to formulate the question of 'affinity' in a fashion appropriate to a package like the Oxford Concordance Program. The question that 'really' interests me comes after this preliminary work. I want to test my merely speculative sense that Milton stands (ever-belated) at the close of a period of intense linguistic innovation; that his freedom (at the level of word-formation and syntactical experiment) is significantly diminished; that inhibition, anxiety and self-censorship are the shaping ideological conditions of Milton's writing. This would lead into an investigation of the paternal metaphor in Milton, the successive inscriptions of the father in his texts. (The scrivener-musician fascinates me...) One escape from this intimate tyranny seems to have been the theory and practice promising a vision of a redeemed sexuality. These are, for me, the issues that reading Milton suggests most insistently. I would be delighted to hear from anyone who finds them interesting enough to merit some systematic collective investigation. Geoff Wall From: GW2%UK.AC.YORK.VAXA@AC.UK Subject: TO USERS OF NOTA BENE Date: 15-JUN-1987 11:43:09 X-Humanist: Vol. 1 Num. 56 (56) I'd be very interested to hear from anyone with experience of using the NOTA BENE WP program, to help me decide whether to acquire it for our department's IBM PC. Thanks Geoff Wall York From: JMBHC@CUNYVM Subject: TO USERS OF NOTA BENE Date: Mon, 15 Jun 87 20:05 EDT X-Humanist: Vol. 1 Num. 57 (57) Hi. We have/use Nota Bene with our faculty. They love it. I've just gotten my own copy (haven't even opened it yet), but I'm sure that I'll make it an often-used package. I plan to use it for manuscripts, mainly. At any rate, I'll ask Bob Tannenbaum to reply to you, since he knows more about it. Joanne Badagliacco, Hunter College From: "Bill Winder (416) 960-9793" Subject: Date: 15 June 1987, 21:07:32 EDT X-Humanist: Vol. 1 Num. 58 (58) I was interested to see that Prolog is offered as a humanist programming language.
I have used Turbo Prolog for about a year and I'm now considering proposing it for applications in the context of a French lexicography course. Before doing so I would like to have an opinion on a few questions from the forerunners in this area (L. Burkholder and S. Ratz seem to have had such experience -- hopefully there are others). The first question, which has plagued me since I adopted Turbo Prolog as a programming environment, is whether it is sufficiently powerful for large-scale (micro) applications. Would not other prologs be better, such as Logicware's, or Marseilles's Prolog II? There have been several criticisms levelled at Turbo Prolog, most of which seem trivial (such as the fact that it is not interpreted, but rather compiled prolog). The criticism that it is a typed prolog might be important -- I have already run into situations where domain typing has forced me to multiply predicates for each argument type. A case in point: to do a sort on a large file (over 64K) some rather tricky programming techniques are required. The innovation of prolog as a programming language is that it "de-algorithmatises" programming, at least to some degree. But in Turbo Prolog, where large applications require baroque, stack- and heap-sparing procedures, the otherwise logical development of a prolog programme is lost. In short, could anyone suggest reasons for preferring another prolog to Turbo Prolog, or for preferring another language (such as Snobol or Icon)? Does anyone use Turbo Prolog at all? All remarks in this vein would be appreciated. Bill Winder (Winder @ UTOREPAS) From: "Stephen R. Reimer (416) 585-4576" Subject: Date: 16 June 1987, 00:36:52 EDT X-Humanist: Vol. 1 Num. 59 (59) Our esteemed sysop, Willard McCarty, has been a regular user of Nota Bene since sometime shortly before or after the birth of Steve Siebert; indeed, I am sure that Willard has published a review of NB somewhere, but I've forgotten the reference.
Perhaps he could be persuaded to say a few words? or, at least, to remind us of where his review appeared? From: SUSAN%UK.AC.OXFORD.VAX2@AC.UK Subject: Date: 16-JUN-1987 09:00:06 X-Humanist: Vol. 1 Num. 60 (60) For reviews of Nota Bene, see (1) Willard McCarty, Computers and the Humanities 20 (1986), 62-71. (2) D.H.A. Kaferly, Literary and Linguistic Computing 2 (1987), 37-39. Susan Hockey From: LOU%UK.AC.OXFORD.VAX1@AC.UK Subject: Date: 16-JUN-1987 10:26:52 X-Humanist: Vol. 1 Num. 61 (61) Re Geoff Wall's request for comments on ways of mechanising the search for affinities (if that is what he meant), I thought of quite a neat way of doing this in SQL in the shower this morning. If you have a table A containing columns WD and REF, where WD is any feature of potential interest and REF is where it occurs in text A; and likewise a table B with cols WD and REF for text B, then the SQL statement

   select A.REF AREF, B.REF BREF, COUNT(*) ECHOES
   from A, B
   where A.WD = B.WD
   group by A.REF, B.REF
   having count(*) > 5
   order by echoes desc

will produce a list of the places in A and B where the same features are to be found, in descending order of the number of such features shared at each place. The 'having' bit rules out places where there are fewer than 5 shared features -- obviously this can be tweaked. I've left it deliberately vague what 'features' and 'places' mean -- but if you thought of them as 'interesting word stems' and 'sentence' I wouldn't quarrel with you. I haven't tried it, but it ought to work! Joe Raben's work on Miltonic echoes in Shelley (written long before the invention of SQL) is what put the idea into my head. That, and dropping the soap. Lou Burnard From: Willard McCarty Subject: Who Speaks to Whom, and Who Cares Date: 16 June 1987, 08:13:39 EDT X-Humanist: Vol. 1 Num.
62 (62) Some time ago, prompted by complaints about too much mail from HUMANIST, I suggested that replies to questions, such as the most recent one about Nota Bene, should go directly to the questioner, and that the questioner might then gather up the interesting replies after some interval and publish them on HUMANIST. Could we come to an agreement about whether we adopt this scheme or not? Please send your preference to me directly (MCCARTY at UTOREPAS.BITNET), and when I get a sufficient number of replies, I'll publish them here. Any other scheme to regulate the flow of conversation is also welcome. Yours, W.M. From: Dr Abigail Ann Young 1-416-585-4504 Subject: Date: 16 June 1987, 07:53:14 EDT X-Humanist: Vol. 1 Num. 63 (63) I wonder if anyone else in the T.O. area saw the article/opinion piece in the Toronto Globe and Mail over the weekend bemoaning the degenerative effect which desktop publishing was already having on book publishing, standards of typography and editing, quality of the finished book, etc. It did not go quite as far as saying that DTP is responsible for the Decline of Western Civilisation As We Know It, but the author did definitely foresee the end of The Book in the near future. Now obviously this is, to say the least, grossly exaggerated. I've been hearing these mournful plaints about the decline of typography and editing ever since paperback novels cost 50 cents..... But it is clear that the phenomena observed are real -- that is, standards of typography and editing have declined, the quality of many printed books is very poor (I don't mean the quality of their content, but their "production values"!), etc. -- however faulty the reasoning which lays this all at the foot of DTP may be. My perspective on this is as someone very much involved in a scholarly publishing operation. REED, where I've worked one way or another since 1976, sees its volumes of edited texts straight through from transcription of MSS to camera-ready final proof in-house.
One of the reasons is, of course, the need to maintain control over the scholarly quality of the edition and its apparatus, but another chief reason is a deep concern over the quality of the final book from the point of view of design and production. Happily, this concern is shared by our publisher, the University of Toronto Press, and its design department. But it seems to me that DTP is far from a force working against quality in production and design: until reading this article it wouldn't have occurred to me that someone could cast recent improvements in computer-based typesetting and pseudo-typesetting as the villain of the piece. In fact, I'd always thought that the great advantage of DTP for the scholarly community is that it places the ability to maintain that kind of control over both aspects of the final book, which we have been able to enjoy at REED because we are a large project with a major university press behind us, within the grasp of most academics working on a university campus. I think that more and more university computer centres and departments are likely to make available the equipment and expertise needed to allow individual editors of journals, or authors, to control and produce their own work up to at least the first proof stage, if they wish. Surely DTP can be used to reverse a trend toward poor quality in book production, rather than increase it, at least among those groups to whom high-quality book production would be important. I don't really think very much of electronic publication, for a lot of reasons, and especially one which Lou mentioned and no-one else really seems to have picked up in subsequent discussion: that it is accessible to such a small audience of users/readers. But using electronic means to improve the quality of conventional scholarly publishing really seems to me an exciting possibility.
As quality output devices like laser printers improve, it seems likely that DTP may make scholarly books less expensive to produce. What do people think? Is anyone else interested? Abigail Ann Young YOUNG at UTOREPAS From: John Bradley Subject: DTP Discussion Date: Tue, 16 Jun 87 11:33:10 EDT X-Humanist: Vol. 1 Num. 64 (64) Abigail Young's comments on DTP and its effect on the quality of today's publications strike a responsive chord in me -- DTP is an area I have been looking at quite closely for some time. I think a certain amount of the negative comment in the Toronto Globe and Mail article can be easily explained: the natural human trait to emphasize the negative side of change. Of course, DTP is, unfortunately, often used in such a way that it degrades quality, and, indeed, such an article can be useful in that it points these problems out. However, I think it is foolish to look ONLY at the negative side. It seems to me that there are two reasons why DTP might reduce the quality of output: (1) The technology cannot do as good a job as traditional methods. (2) The control over the technology is put in the hands of people without the required skills. Item (1) is, I think, gradually fading as an issue. For example, it's true that laser printer output is not of as good quality as a true typesetter's; however, PostScript typesetters such as the Linotronic 100 or 300 also make it possible to use DTP software and equipment to produce typeset-quality images. Point (2) is, I think, really the central issue, and is likely to remain so. Really good work in this area cannot be done by amateurs. In the interest of reducing costs, "camera ready" copy is being produced by people without the ability to judge the quality of what they are producing.
Almost every day I see people who have no understanding of what they can reasonably expect from software and hardware: for example, I still talk to at least one person a week who is planning to produce camera-ready copy for a book by connecting a laser printer or our typesetter to WordPerfect. Each time I explain that WordPerfect CAN drive our PostScript typesetter or laser printer -- but does far too inadequate a job in many ways (for example, hyphenation and justification) to meet the standards that one would expect for a scholarly work. Clearly, the decision to use WordPerfect for this purpose was made by a person who is ill-equipped to decide! I guess the issue is one of education. I believe that high-quality work can come out of DTP, and that DTP permits new publishing ventures to be considered and tried. However, the quality issue needs to be better understood in the community. At U of T I am currently attempting to put together a brief seminar to introduce some of these issues to the university community. Perhaps this type of thing is needed at other places. I'd certainly appreciate comments from anyone who might be interested. Please contact me directly at the following address. I'd be glad to produce a summary of comments and circulate it via HUMANIST. .... John Bradley BRADLEY at UTORONTO From: Dr Abigail Ann Young 1-416-585-4504 Subject: Date: 16 June 1987, 11:32:35 EDT X-Humanist: Vol. 1 Num. 65 (65) Is anyone on this list working with English renaissance poetry, e.g. Spenser? I need to consult such a person about a possible classical/humanistic (in its more traditional historical meaning) literary allusion, but don't want to take up everyone's time. Thank you! Abigail Ann Young Records of Early English Drama University of Toronto YOUNG at UTOREPAS From: Willard McCarty Subject: Autobiographies Date: 17 June 1987, 22:53:28 EDT X-Humanist: Vol. 1 Num.
66 (66) As most of you will know, I have asked anyone interested in becoming a member of HUMANIST to send me a brief description of what he or she does to support computing in the humanities. Little did I suspect when I first asked for these descriptions that I would get so many interesting autobiographical statements. As these have accumulated, I have become more and more convinced that they should be shared with everyone so that we can all get to know each other and see what it is that we do as a professional group. What I propose, then, is to edit what I have into a convenient form and distribute it to all HUMANISTs. Some of you may wish to send me a revised version of what you sent previously (given that you could not have guessed what might be done with those words eventually), and some may not. So, at the risk of angering those who may be away from their e-mail for the next two weeks, let's say that I will wait until July 1 (Canada Day) before I act on this idea, to allow anyone who wants to revise his or her life-story to send the improved version to me. Please do not ask me to send you your contribution so that you can decide whether or not to revise it. If in doubt, write out a new one. Be as brief as you can, but try to cover all of what you do and wish to do in this area; specify your institutional status, your academic or non-academic background, and anything else you think would be of interest to the rest. You may recall a plan to circulate a detailed questionnaire that would ask for some of this information and other things in a much more detailed form. That questionnaire is still in the works. Meanwhile, as an interim and exploratory measure, the brief autobiographies will help us understand what needs to be asked. Thanks for your cooperation. I think you'll agree that the result will be more than worth the effort. Yours, W.M. From: ZACOUR@UTOREPAS Subject: Date: 18 June 1987, 15:35:18 EDT X-Humanist: Vol. 1 Num.
67 (67) At Willard McCarty's suggestion, a brief baring... Norman Zacour Professor of Medieval History at the University of Toronto (just retired); interested especially in the history of the papacy of Avignon; just finished writing about the treatment of Jews and Muslims in 14th century legal works; now working on the history of the college of cardinals in the Middle Ages; has written a short manual on WordPerfect to get students of the Centre for Medieval Studies up and running on the IBM; and some quick programming for a blind friend who is a writer and a professor of English, to simplify his Life with DOS and Company, word processing in general, and keeping data about his students in particular. Interested in hearing about any software that will handle multiple variants of medieval mss., to produce notes giving the lemma, the line number, the variant(s) and their witnesses, etc. From: "Grace Logan (Arts Computing Office)" Subject: Date: Thu, 18 Jun 87 12:48:46 EDT X-Humanist: Vol. 1 Num. 68 (68) Does anyone out there have a good, concise, clear introduction to Artificial Intelligence and/or Expert Systems? Articles, short books, references of any kind would be much appreciated. I would like to include something about these matters in a brief introduction for first year students and I find my own knowledge rather patchy, disjointed and impressionistic. I'd be grateful for even a reasonable definition. Cheers, Grace Logan, Arts Computing, U. of Waterloo, Waterloo, Ont. From: IDE@VASSAR Subject: nota bene Date: Fri, 19-JUN-1987 13:35 EST X-Humanist: Vol. 1 Num. 69 (69) I am just now reading the comments concerning nota bene and wonder if anyone is aware of the lengthy and very good review of it in the first issue of Bits & Bytes Newsletter? From: R.J.HARE%UK.AC.EDINBURGH@AC.UK Subject: LISTSERV FILELIST Date: 19 Jun 87 11:54:48 bst X-Humanist: Vol. 1 Num.
70 (70) --- Forwarded message: SENDME LISTSERV FILELIST --- End of forwarded message From: R.J.HARE%UK.AC.EDINBURGH@AC.UK Subject: Electronic Publishing Date: 19 Jun 87 11:55:52 bst X-Humanist: Vol. 1 Num. 71 (71) --- Forwarded message: Picking up only one of Sebastian Rahtz's points, I think it's true to say that (though there are other reasons) 'humanists' here at Edinburgh are reluctant to do a lot of work on the computer (any computer!) because such work is not visible to the people who assess (dangerous term at the moment!) their work. An electronic journal would be similarly invisible to such people, and therefore not much better from that point of view than any other work on the computer. It's interesting to realise that this may well be a general problem. This is a subjective opinion, formed as a result of casual conversations with various people around here, but I think that this situation might be changing (locally!) - the sooner the better, I feel. Roger Hare. PS. What's a bus? --- End of forwarded message From: R.J.HARE%UK.AC.EDINBURGH@AC.UK Subject: Forwarded message Date: 19 Jun 87 11:53:42 bst X-Humanist: Vol. 1 Num. 72 (72) --- Forwarded message: From: Subject: Date: X-Humanist: Vol. 1 Num. 73 (73) Welcome to HUMANIST From: Subject: Date: X-Humanist: Vol. 1 Num. 74 (74) HUMANIST is a Bitnet/NetNorth/EARN electronic discussion group for people who support computing in the humanities. Those who teach, review software, answer questions, give advice, program, write documentation, or otherwise support research and teaching in this area are included. Although HUMANIST is intended to help these people exchange all kinds of information, it is primarily meant for discussion rather than publication or advertisement.
In general, members of the network are encouraged to ask questions and offer answers, to begin and contribute to discussions, to suggest problems for research, and so forth. One of the specific motivations for establishing HUMANIST was to allow people involved in this area to form a common idea of the nature of their work, its requirements, and its standards. Institutional recognition is not infrequently inadequate, at least partly because computing in the humanities is an emerging and highly cross-disciplinary field. Its support is significantly different from the support of other kinds of computing, with which it may be confused. Perhaps you don't think so. In any case, let us know what you do think, about this or any other relevant subject. HUMANIST is one of the inaugural projects of a new special interest group for the support of computing in the humanities, which is currently applying for joint affiliation with the Association for Computing in the Humanities (ACH) and the Association for Literary and Linguistic Computing (ALLC). Information about this SIG may be obtained by sending a message to George Brett (ECSGHB@TUCC.BITNET). New members are welcome, provided that they fit the broad guidelines described above. Please tell anyone who might be interested to send a note to me, giving his or her name, address, telephone number, university affiliation, and a short description of what he or she does to support computing in the humanities. I will then add that person to the list. If anyone should wish to be dropped from the list, please send a note to that effect. From: Subject: Date: X-Humanist: Vol. 1 Num. 75 (75) How to Use HUMANIST From: Subject: Date: X-Humanist: Vol. 1 Num. 76 (76) Sending messages ----------------------------------------------------------------- Currently anyone given access to HUMANIST can communicate with all other members without restriction. A member need not be on Bitnet but can use any comparable network with access to Bitnet. 
Thus, to send mail to everyone simultaneously, use whatever command your system provides (e.g., NOTE or MAIL) addressed to HUMANIST at UTORONTO. Your message is then sent by your local software to the UTORONTO node of Bitnet, where the "Revised List Processor" (or ListServ) automatically redirects it to everyone currently on the list of members. [Please note that in the following description, commands will be given in the form acceptable on a VM/CMS system. If your system is different, you will have to make the appropriate translation.] ----------------------------------------------------------------- Conventions and Etiquette ----------------------------------------------------------------- Restricted conversations or asides can, of course, develop from the unrestricted discussions on HUMANIST by members communicating directly with each other. This is particularly recommended for replies to general queries, so that HUMANIST and its members are not burdened with messages of interest only to the person who asked the question and, perhaps, a few others. If, for example, one of us asks the rest about the availability of software for keeping notes in Devanagari, suggestions should be sent directly to the questioner's e-mail address, not to HUMANIST. A questioner who receives one or more generally interesting and useful replies might consider gathering them together with the original question and submitting the collection to HUMANIST. Please use your judgment about what the whole group should receive. We could easily overwhelm each other and so defeat the purpose of HUMANIST. Strong methods are available for controlling a discussion group, but self-control seems preferable. This is not to discourage controversy -- quite the contrary -- but only what could become tiresome junk-mail. Please make it an invariable practice to help the recipients of your messages scan them by including a SUBJECT line in your note. 
Be aware, however, that some people will read no more than the SUBJECT line, so you should take care that it is accurate and comprehensive as well as brief. Use your judgment about the length of your notes as well. If you find yourself writing an essay or have a substantial amount of information to offer, it might be better to follow one of the two methods outlined in the next section. ----------------------------------------------------------------- Distributing files ----------------------------------------------------------------- HUMANIST offers us an excellent means of distributing written material of many kinds, e.g., reviews of software or hardware. Although conventional journals remain the means of professional recognition, they are often too slow to keep up with changes in computing. With some care, HUMANIST could provide a supplementary venue of immediate benefit to our colleagues. There are two methods of distributing such material. The more specialized reports should probably be reduced to abstracts and posted in this form to HUMANISTs at large, then sent by the originators directly to those who request them. Reports of general interest, however, can be kept centrally, on the UTORONTO node, and fetched by individuals when required. To find out what is available centrally, send the following command:

   TELL LISTSERV AT UTORONTO SENDME LISTSERV FILELIST

If you see something you want, then use the following to fetch it:

   TELL LISTSERV AT UTORONTO GET <filename filetype>

If you have something that you think worth posting, please send it to me, and we can then discuss its fate. Storage space on the UTORONTO node is neither infinite nor ultimately free, so we need to be careful about how much we put there.
----------------------------------------------------------------- ListServ's Commands and Facilities ----------------------------------------------------------------- New members will be interested to know that one of the files centrally maintained by ListServ at UTORONTO is an archive of messages for the past month. If you have just joined and want to know the recent history of discussions, enter the following command (or its equivalent on non-VM/CMS systems): TELL LISTSERV AT UTORONTO GET HUMANIST LOG8705 ListServ will then send you the contents of the monthly archive. ListServ accepts several other commands, for example to retrieve a list of the current members or to set various options. These are described in a document named LISTSERV MEMO. This and other documentation will normally be available to you from your nearest ListServ node and is best fetched from there, since in that way the network is least burdened. You should consult with your local experts to discover the nearest ListServ; they will also be able to help you with whatever problems in the use of ListServ you may encounter. Once you have found the nearest node, type the following: TELL LISTSERV AT XXXXXX INFO ? The various documents available to you will then be listed. ----------------------------------------------------------------- Suggestions and Complaints ----------------------------------------------------------------- Suggestions about the running of HUMANIST or its possible relation to other means of communication are very welcome. So are complaints, particularly if they are constructive. Experience has shown that an electronic discussion group can be either beneficial or burdensome to its members. Much depends on what the group as a whole wants and does not want. Please make your views known, to everyone or to me directly, as appropriate. 
----------------------------------------------------------------- Willard McCarty 31 May 1987 Centre for Computing in the Humanities, University of Toronto (MCCARTY@UTOREPAS.BITNET) --- End of forwarded message From: R.J.HARE%UK.AC.EDINBURGH@AC.UK Subject: Accidental messages. Date: 19 Jun 87 12:02:52 bst X-Humanist: Vol. 1 Num. 77 (77) You probably just got a series of your own messages blown back at you. These should be deleted, not sent to the HUMANIST bulletin board. This happened accidentally while we were setting up our own local bulletin board. Sorry about that - hope it doesn't cause too many problems! Roger Hare. From: CAMERON%UK.AC.EXETER@AC.UK Subject: LISTSERV Date: Sat, 20 Jun 87 11:38:51 BST X-Humanist: Vol. 1 Num. 78 (78) TELL LISTSERV AT UTORONTO SENDME LISTSERV FILELIST From: Willard McCarty Subject: An accident and a misunderstanding Date: 20 June 1987, 17:46:27 EDT X-Humanist: Vol. 1 Num. 79 (79) Recently an accident of some sort in Edinburgh resulted in a very large number of old HUMANIST notes being sent to everyone on the list. This certainly demonstrates that the power of electronic mail, like a loosely held firehose, can turn against the user! Please be aware that anything sent to HUMANIST at UTORONTO is immediately and automatically propagated to everyone; no one here decides whether a note should be passed on or not. Once something starts to go wrong we here may be able to stop it, but only if we notice it in time. So be careful, please. We are still working on a facility for centralized storage of files here but have not yet figured out how to provide it. Until then it is useless to ask for LISTSERV FILELIST, since nothing is there except for the monthly log of past messages. In any event, once something is put there, you should fetch this FILELIST by sending the appropriate command directly to LISTSERV at UTORONTO and not -- please note -- by sending the command inside a note to HUMANIST at UTORONTO, as some of you have been doing.
Thanks for your continued patience with this adolescent service. Yours, W.M. From: Willard McCarty Subject: Date: 21 June 1987, 23:58:31 EDT X-Humanist: Vol. 1 Num. 80 (80) The following is a slightly edited collection of all the responses I received to my question about how conversations on HUMANIST should be regulated. As you will see, the preference seems to be for freely ranging discussions with a good measure of self-control. This is just what I'd hoped for -- the best form of government for free people, if I may say so. Please note the point about using a "reply" in response number 7. From: UDAA270%UK.AC.KCL.CC.VAXA@AC.UK Subject: Date: 16-JUN-1987 14:54:15 X-Humanist: Vol. 1 Num. 81 (81) In the first days it's quite nice to get the daily messages, proof that one is indeed involved in an international network. But it does get out of hand, and I agree totally that the recent discussions of Nota Bene would have been best addressed to the originator of the query. However, to define and regulate where the fine line should be drawn between universal and personal interest seems an impossible task. As long as originators of general queries are prepared to send out summaries after a certain period, and as long as everyone else realises that a summary is forthcoming, the e-mail shouldn't be so clogged with preliminary remarks. Furthermore, it is easy enough to send a message direct to any originator and ask for a summary to be sent to Everyone if none appears. From: Dr Abigail Ann Young YOUNG at UTOREPAS Subject: Date: 16 June 1987, 08:35:33 EDT X-Humanist: Vol. 1 Num. 82 (82) It seems to me that the flow of conversation on HUMANIST is just fine as it is: I think it's rather fun to have the sense of participation in an actual conversation. Still, I do realise that you may have to impose some kind of restraints on replies: but don't edit out the spontaneity of it! From: Sterling Bjorndahl - Claremont Grad. School Subject: Date: Tue, 16 Jun 87 09:10 PST X-Humanist: Vol. 
1 Num. 83 (83) I agree that replies should go to the questioner and not the whole group. I don't know about other mail systems, but on Vax/VMS mail, when I use the "reply" command, the reply gets sent to the questioner, not the list. I think I have heard "somewhere" that this may not be true for all mailers, such as those that do not read the "reply-to:" line. From: Michael Sperberg-McQueen Subject: Date: X-Humanist: Vol. 1 Num. 84 (84) We can have completely public discussion (everyone replies via the list to any query), with the result that the conference begins to resemble the transcripts one reads in accounts of CB radio conversations (or their simulations on Compuserve). Or we can have tightly controlled discussion (everyone replies to the querier, not to the list), in which case the conference typically resembles the transcript of a Mayday frequency: one call for help after another, and nothing much in between. The people who initiate the queries often fail to report back to the list on what they were told, and the result is in general a very boring conversation. Or we can attempt a middle ground: queries of limited general interest should be answered privately -- or rather I should say answers of limited general interest should be sent privately -- and those who initiate the query should ALWAYS report back to the list anything of general interest. I vote for the middle ground, with a strong leaning toward erring on the side of public conversation. For example, some of the remarks on Nota Bene lately have been probably better sent privately... and some are, I think, of public interest. But I'd rather that the whole be public than that the whole be private.... It seems to me to be a choice between a rather free, often unstructured and frustrating, but sometimes enlightening and exciting discussion, and a much more controlled but much less exciting or interesting discussion.
From time to time, if the course of the conversation warrants it, you (or someone else) can issue a plea that purely private concerns be handled off the list, in direct notes. Or the query's originator (like Dr Young) can request private responses because of a conviction that the matter is not of public interest. But so far I haven't found myself regretting the volume of the conference. Quite the opposite: I am looking forward to a fairly vigorous discussion of the merits of Turbo and other Prologs. From: Jeffrey William Gillette Subject: Date: X-Humanist: Vol. 1 Num. 85 (85) Count my vote for your scheme. On the Usenet system, which functions by broadcast in a fashion very similar to your List Server, there is a continuing temptation to respond publicly to specific questions of limited interest. The younger (and often immature) members of the Usenet community often respond to such abuses by inundating the offending party with unfriendly "flame" mail. I should hope that Humanist participants would possess sufficient professional courtesy and/or conscience to regulate themselves. If not, however, I might arrange to put some Usenet people onto the Humanist mailing list!!! From: DD ROBERTS (PHILOSOPHY) Subject: Date: Wed, 17 Jun 87 01:32:56 EDT X-Humanist: Vol. 1 Num. 86 (86) I still am enjoying seeing responses to questions of others, and would like that to continue at least for a while. It is not difficult for me to dash through even large quantities of mail, now that I have decided to be quite hard-nosed about it (if I'm not now able to use a thing, even if it sounds interesting, I delete it -- confident that if I should need it in the future, somebody out there would help). From: mbb@portia.Stanford.EDU Subject: Date: Wed, 17 Jun 87 10:06:15 -0800 X-Humanist: Vol. 1 Num. 87 (87) I most definitely agree with you: if person A asks "anyone know something about foo", then person B, in responding, should mail directly to A.
Person A may want to synthesize the responses he/she gets and then send the synthesis to the general list. I have been the moderator for the TeX digest (TeXhax) for over six months, and this very thing can be a real pain. All it takes is for a few people to do the wrong thing to create a big mess. I imagine that most people use a mailing program and simply give some kind of "REPLY" command. The mailing program may grab the wrong address as the address to reply to. If there's one thing I've learned doing TeXhax, it's that mailer programs are really stupid, fussy and eccentric!! From: GW2%UK.AC.YORK.VAXA@AC.UK Subject: Date: 17-JUN-1987 11:45:43 X-Humanist: Vol. 1 Num. 88 (88) Never too much for me...I really do prefer to have access to all the open conversations in progress. Then I can choose which items to read and which to ignore. But it would help if everyone were more conscientious in their use of subject-headings. From: Subject: Date: X-Humanist: Vol. 1 Num. 89 (89) [This message is 129 lines long.] From: LOU%UK.AC.OXFORD.VAX1@AC.UK Subject: Date: 22-JUN-1987 09:37:59 X-Humanist: Vol. 1 Num. 90 (90) Does any Humanist have experience of using any of the various mainframe free text searching systems (e.g. Basis, Cairs, Stairs, Status, Assassin, 3IP, BRS/Search ...) on anything other than straightforward library catalogue type data? I'm on a UK interuniversity working party which is currently assessing the usefulness or otherwise of such packages and would be glad to hear from anyone (especially from outside the library/info science world) who's had hands-on experience of such systems (and lived to tell the tale). I should add that we're considering buying such a system at Oxford to make some of the Text Archive materials more widely available, so any comments on how humanists would like to access e.g.
the complete works of Shakespeare, Milton et al from online terminals would also be interesting. Lou Burnard PS In line with Willard Directive #918365 this is a Public Communication, but replies to it should be Personal; I hereby undertake to recirculate such of them as seem generally interesting. From: "Sterling Bjorndahl - Claremont Grad. School" Subject: "close" pattern matching Date: Mon, 22 Jun 87 07:49 PST X-Humanist: Vol. 1 Num. 91 (91) Dear HUMANIST readers: I would like to hear from those who know about "close" pattern matching algorithms, such as are used in microcomputer spelling checkers to suggest the word you *meant* instead of the non-word you *typed*. Is there a bibliography of published materials on how such algorithms are constructed? Do they use frequency tables, or hashing? Please reply to: BJORNDAS @ CLARGRAD on Bitnet. As a responsible electronic citizen, I promise to digest and report on replies (if any). Sterling Bjorndahl Institute for Antiquity and Christianity Claremont Graduate School Claremont, CA 91711 USA From: CATHERINE%UK.AC.OXFORD.VAX2@AC.UK Subject: typography standards Date: 22-JUN-1987 12:55:57 X-Humanist: Vol. 1 Num. 92 (92) I am writing to add to the discussions about the decline in typesetting standards caused by desktop publishing. For some years now we at Oxford University Computing Service have supplied typesetting facilities to all British universities and polytechnics, as well as to several foreign universities. We have a Monotype Lasercomp for a typesetter (and we hope in the fairly near future to have a PostScript typesetter). All we provide are the facilities (that is, the typesetter itself and a front end system in the form of a program called Laserset, which somewhat simplifies the commands and does syntax checking). Each user typesets his own document. By now around two hundred books have been typeset in this way.
For the most part, the standard has been high, although there have of course been some notable exceptions. We ALWAYS strongly advise users to go to their publisher at the earliest stage possible, and have the typography department of the publisher design the book. When this is possible, the book should look exactly as good as any other book designed by that publisher. Problems arise, however, when the user does not yet have a publisher, or when the publisher does not have a typography department (this, unfortunately, is becoming more common). Our advice is then for the user to spend some time in libraries and bookshops looking for a suitable book whose format the user admires. It is then quite easy to use this as a model. We try to stress the importance of the look of the book, as it is easy with the Lasercomp to achieve high quality results if the design is good. It should be said that we are not engaged in desktop publishing; more in do-it-yourself publishing. Our system is probably at once easier and more difficult to abuse. We can achieve very fine control over the look of a page; finer than is possible with most desk-top publishing programs. However, no basic structure or design is imposed or even suggested, and there is nothing to stop an innocent user from producing a horrendous page. On the whole, however, our users have been quite conscientious and have made considerable efforts to produce texts which have a pleasing appearance. Perhaps this is in part because our system is a bit more difficult to use than a real desk top publishing system. The user must learn 'foreign' terms, such as pointsize, leading, hanging indents, kerning, film feeds and set feeds, all of which remind him that he is dabbling in an area of considerable tradition and expertise and art, and encourage him to walk with caution, possibly even respect. Most of our users seem to feel that since they are taking the trouble to typeset their own work, they would like it to look good.
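[The 'foreign' terms the Oxford message lists -- pointsize, leading, hanging indents -- survive in today's batch typesetting languages. As a purely illustrative sketch in modern LaTeX (not the Lasercomp/Laserset system the message actually describes), the same concepts look like this:]

```latex
\documentclass{article}
\begin{document}

% Pointsize and leading: 10pt type on a 12pt body,
% i.e. "10 on 12" in the compositor's notation.
{\fontsize{10pt}{12pt}\selectfont
This paragraph is set 10/12: two points of lead
between successive baselines.\par}

% A hanging indent: the first line is set full measure,
% and every turnover line is indented by 2em.
\hangindent=2em \hangafter=1
This paragraph hangs, in the traditional style of
glossary and bibliography entries.\par

\end{document}
```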
From: Willard McCarty Subject: ICON and Prolog; UNIX for humanists Date: 22 June 1987, 15:13:47 EDT X-Humanist: Vol. 1 Num. 93 (93) The following is from Joel Goldfield, with whom HUMANIST is having some trouble communicating. Please send all replies to the following network address: jdg at psc90.uucp From: Subject: Date: X-Humanist: Vol. 1 Num. 94 (94) ICON programming language vs. Prolog Dear Colleagues, Those of you interested in options to Prolog might like reading Mark Olsen's article on the ICON programming language in CHum, Jan.-March issue. He also discusses ICON in relation to SNOBOL. I would be interested in hearing from anyone also working with stylostatistical applications of UNIX and on friendly interfaces of the UNIX system for humanists. --Joel D. Goldfield From: CMI011%UK.AC.SOUTHAMPTON.IBM@AC.UK Subject: Date: Fri, 26 Jun 87 17:22:29 GMT X-Humanist: Vol. 1 Num. 95 (95) I have got so desperate (sorry, gotten, for you people over the sea) that I have printed out two weeks worth of Humanist and am reading it through for things I forgot - so much for electronic browsing! Hands up all those who never do this. Points 1. yes, please sign notes. and say WHERE you are! i exchanged many notes with someone from bbn.alexander.com earlier this year without the faintest idea of where he physically was. (I know now, I asked him). 2. in re the note from Bill Winder about Prolog, can I reply here? i think it is an interesting issue. I have taught 4 languages to Arts students in the last few years (Prolog, Icon, Snobol and dBase command language) and I am not happy with any of them. To summarize what was said at a session at a conference in Southampton at Easter, a) the world doesnt want 3rd generation language programmers - it wants arts graduates with some experience in 4th generation tools, or MAYBE 5th generation languages). 
Ergo, teach them to use an SQL-based relational database system as the answer to almost everything (this view endorsed by a nice man from IBM) b) if you DO want a '5th gen.' language, then one tries Prolog. But its awful for a beginners course, because a) they have to learn an OS at the same time, b) the syntax is horrid, c) there is a lack of builtin functions (to sort a list, for instance), so students (well, mine) get depressed at lack of achievement. After all, all the beginning part of Prolog is much easier in a modern database. Now Turbo Prolog may get over problems b) and c) (especially if one installs the Toolbox which just came out), but a) remains , and you have the generic Prolog problems with large applications that Winder notes. I would STRONGLY criticise Turbo anyway for not implementing the pretty crucial feature of being able to assert newly learnt ideas during running. The reasons why you have to use the declaration section are clear enough, but that doesnt excuse Borland saying it is more or less C & S standard Prolog - it isnt. All that Turbo has is speed and the (wonderful) environment. You can also get those in a Prolog like Prolog II from Expert Systems International in Oxford, which provides a better growth system. Or to solve the 'large applications' problem, I propose to try next year teaching (aargh no keep away) IBM Prolog, with its interface to SQL/DS, which would take care of the donkey work. c) can I finally mention Icon? I tried this on 15 1st years, with disastrous results (maybe it was my teaching). there was too much syntax, to put it mildly (single and double quote mean different things, for instance), and the environment was non-existent. Do not get me wrong, I like Icon a lot and use it every day, but its for programmers who need it, not for beginners. Anyone care to comment? where does that leave Winder and his French lexicography? what about the Lisp-based stuff from Montreal (deredec etc)? 
maybe a dedicated class would like Icon. What are other peoples solutions? Question, why do Winders students need to learn programming at all? sebastian rahtz. computer science, southampton, uk From: Willard McCarty Subject: Reminder about the Biographies Date: 28 June 1987, 23:36:54 EDT X-Humanist: Vol. 1 Num. 96 (96) I am preparing to send out or otherwise offer the slightly edited collection of biographies of HUMANISTs to all of you, but I find that I don't have biographical statements from about half of the membership. If you want to have your background, interests, and professional activities known to the rest of us, and so be able to help us define what support of humanities computing means, please be sure that you have sent your statement to me by the end of this coming week. I have statements from the following people: ----- Baldini, Pier (ATPMB@ASUACAD) Balestri, Diane Bjorndahl, Sterling - Claremont Grad. School Borchardt, Frank L. Burkholder, Leslie Burnard, Lou Bush, Chuck Cameron, Keith Camilleri, Lelio Candlin, Francis E. Cartwright, Dana E. 3rd Evra, James W. van (PHILOSOPHY DEPT.) Griffin, Catherine Hare, Roger Henry, Chuck Ide, Nancy M. Katzen, May Kaufman, Steve, Hebrew Union College account Kennedy, Mark T. Kruse, Susan Lowry, Anita Makkuni, Ranjit McCarty, Willard McCutchan, Walter Mok, Shu-Yan Nardocchio, Elaine Ore, Espen Owen, David -UA CCIT Academic Computing Page, Stephen Rahtz, Sebastian CMI011%UK.AC.SOTON.IBM@AC.UK Richmond, S. ROBERTS, D. D. (PHILOSOPHY) Sano, Haj Sousa, Ronald de Swenson, Eva V. Thornton, Dave or Wall, Geoffrey Weinshank, Don Wiebe, M.G. Young, Abigail Ann Zacour, Norman ----- Thanks very much. Yours, W.M. From: "Stuart Hunter, University of Guelph" Subject: PROLOG, LISP, AI, and all that stuff. Date: Fri, 26 Jun 87 14:43:52 EDT X-Humanist: Vol. 1 Num. 97 (97) In response to Sebastian's recent comment, I have to ask who the "they" is that wants Arts students exposed to 5th generation languages?
In the long run, how many of our "run of the mill" undergraduates do we want to expose to anything more than a "competent user" level of familiarity? Don't get me wrong: I'm all for training Humanities scholars in the use of expert systems and 5th-generation languages to solve the basic problems that we, as humanists, tackle. On the other hand, I know that the majority of the undergraduates I deal with couldn't formulate a statement about many of those problems, let alone design an expert system to cope with the problems. I think we have to distinguish between what we are teaching to -- or are using to teach to -- undergraduates and what we need to know ourselves in order to teach and do our research. And that brings me back to Sebastian's statement about "them". Who is it that wants Arts Students who are experts in Prolog? From: Willard McCarty Subject: Article in SCOPE 5.2 (March-April 1987): 13, 19. Date: 1 July 1987, 14:09:01 EDT X-Humanist: Vol. 1 Num. 98 (98) Those of you at ICCH87 who attended the meeting in which the idea for HUMANIST was born will remember that Joe Raben, publisher of CHum and editor of SCOPE, was there. He has described that meeting and its significance in the latest issue of SCOPE. As one of the pioneers of computing in the humanities, Joe must be very familiar with the role of revolutionary outcast and with what one must do to keep the faith, and keep it intelligently, in a time of little recognition or outright rejection. His counsels have an authority for that reason. Joe calls us "the avant-garde, who are uniting humanistic values with technological means." He says that we are developing "a broader vision [than our counterparts in traditional fields] by serving colleagues from many departments," thus learning "the skills of many disciplines." He compares us to the anthropologists, who emerged from history and classics, to psychologists from philosophy, and to linguists from language departments.
"And in every case, the exiles have developed more rapidly as innovators and challengers of received knowledge." He recommends that academics without a proper job turn the university's rejection to their advantage by using the time liberated from teaching to develop knowledge and skills far beyond what is possible for their teaching colleagues. Then he gets to us HUMANISTs. "In better touch with comrades on other campuses, because they understand and utilize computer-based communications, they will forge an association with a potential far beyond that of current professional organizations." No academic without a proper appointment needs to be told the other side of the story. I think it's useful, however, to be shown the golden aspects of what usually seems an iron-aged actuality. Nevertheless, if the gold's to be grasped, I also think we need to develop some clear notions of how computing in the humanities can make a scholarly contribution to scholarship. Hand-waving won't help, nor will promises to prove the unprovable. Speaking in terms of my own field, I am forced by experience to recognize the necessarily tentative (and therefore disturbing) nature of literary arguments: like us they're mortal and problematic, but also like us they can occasionally reach beyond our historical and personal limitations. I don't see anything wrong with the traditional scholarly values, which most of our non-computing colleagues appear not to follow anyhow, but I do see great dangers still in pseudo-scientific ideas of truth, evidence, and proof. They will discredit as well as mislead us. So what can computing do for us as scholars that itself deserves a place in scholarship? Good answers to this question should be translatable into software and so be put to the test. [This message is about 50 lines long.] From: CMI011%UK.AC.SOUTHAMPTON.IBM@AC.UK Subject: Date: Thu, 2 Jul 87 12:03:02 GMT X-Humanist: Vol. 1 Num. 99 (99) What about some light relief? 
Those of you who are Unix types may have come across 'fortune cookies' or 'yow' (part of Gnuemacs), which print a random line or two of wisdom on the screen when invoked. It is also used in my department as part of a lock screen on Sun workstations; systems which are not in use sit and display random quotes at intervals on the screen to stop any given message burning into the phosphor (is this true?). so what, you say? well, in an effort to justify humanities in a computer science world, I have replaced the 'yow' database with verses of poetry (all of TS Eliot, most of Bob Dylan and a lot of Brian Patten). What a relief to have pithy thoughts come up every so often instead of mindless American aphorisms! who said poetry had no place in the modern world? sebastian rahtz. computer science, southampton, uk From: Willard McCarty Subject: Biographies Date: 4 July 1987, 14:57:04 EDT X-Humanist: Vol. 1 Num. 100 (100) All of you should receive within the next day or so the file HUMANIST BIOGRAFY, ca. 50K, 950 lines. Please let me know if it does not arrive intact. As soon as we figure out how to keep files centrally, on UTORONTO, I'll put HUMANIST BIOGRAFY there. It will then be your responsibility to keep your entry updated. Any changes or additions should be sent to me directly. I'd also like to hear from you if you have any suggestions or comments about the format or contents of this file. If a sufficient number of you who didn't send in an entry do so in the near future, I'll issue a supplement. Approximately 5/8 of the membership is represented in the current file. From: Willard McCarty Subject: address correction Date: 5 July 1987, 12:12:30 EDT X-Humanist: Vol. 1 Num. 101 (101) In HUMANIST BIOGRAFY Leslie Burkholder's e-mail address was incorrectly given. Mea culpa. 
It should be as follows: lb0q@te.cc.cmu.edu.bitnet OR lb0q#@andrew.cmu.edu.bitnet OR lb0q#@andrew.cmu.edu.arpanet From: CMI011%UK.AC.SOUTHAMPTON.IBM@AC.UK Subject: Date: Mon, 6 Jul 87 11:00:41 GMT X-Humanist: Vol. 1 Num. 102 (102) I forgot to say in my note about the Uses and Abuses of Poetry that, of course, the source for my text of TS Eliot and Bob Dylan was Lou Burnard's _Oxford Text Archive_. Jolly highly recommended... The interesting point arises of copyright etc; these bleeding chunks of poetry (a verse or 10 lines, whichever is shorter) are displayed under control of a program which the average punter cannot get at; nor can they normally access the source file. So to that extent I am not giving the text out to all and sundry. Nor am I 'publishing' it in a form that a literary person could read for sense. But if you were sufficiently patient you could acquire the whole of the Waste Land bit by bit, like a jigsaw. Ergo, I ask you all - is this publication? Have we identified a way in which electronic media genuinely confuse the issue? sebastian rahtz. computer science, southampton, uk From: Michael Sperberg-McQueen Subject: copyright Rahtz's copyright conundrum Date: Mon, 6 Jul 87 12:13:16 CDT X-Humanist: Vol. 1 Num. 103 (103) REPLY TO 07/06/87 11:00 FROM CMI011%UK.AC.SOUTHAMPTON.IBM@AC.UK: copyright Sebastian Rahtz's example of semi-/may-be-/may-not-be "publication" of Eliot and Dylan lyrics provides an interesting thought experiment. Is displaying such small chunks of verse on a screen, at random, "publication"? I think there must be two issues here: is it publication if we quote only small pieces of the text? and is it publication if we only show the pieces on a VDT, as opposed to paper? I think both are interesting questions. 
If my understanding of copyright doctrine is correct, though, the lawyers have already answered the first question: quotation of a line or two of verse -- particularly of a song lyric -- counts as publication and normally requires permission of the copyright holder. Hence the distracting copyright notices you see at the bottom of pages where a short story writer has quoted a popular song for atmosphere. (This may be different in the UK, but I always thought your copyright laws were more stringent, not less, than ours.) On the second issue, one might argue that display on a screen in a public place is a "making public" and thus a "publication." I assume the publishers line up here. One might conversely argue that a departmental terminal room is not really public, and thus preserve the refreshingly literate air of the place by claiming that this should fall into the "fair use" category. I would hate to see this use turn out to be a copyright violation! Medievalists have it easier, I think: if I put stanzas of Minnesang or random quotations from Chaucer up (and got them from a nineteenth-century edition), the copyright issue would not arise. (And if I used a twentieth-century edition, chances are very slim that anyone could tell the difference.) On the other hand, none of the terminal users would understand them. Perhaps one could use Wyatt, Donne, or Sidney? From: Graeme Hirst Subject: I am on vacation. Date: Mon, 6 Jul 87 16:31:41 EDT X-Humanist: Vol. 1 Num. 104 (104) I am on vacation until 10 August 1987. This message is sent automatically by the mailer. For urgent matters concerning /Canadian AI/, contact the new editor, Marlene Jones, 403-297-2666, marlene@noah.arc.cdn. Graeme Hirst
From: CMI011%UK.AC.SOUTHAMPTON.IBM@AC.UK Subject: Date: Tue, 7 Jul 87 11:01:29 GMT X-Humanist: Vol. 1 Num.
117 (117) If no more notes appear from me, you can assume that Bob Dylan, Brian Patten and the executors of TS Eliot have haled me off to jail. I think I will put up Chaucer instead... the suggestion of Donne is ironic, in that the machines concerned are called Donne, Webster, Spenser, Jonson, Marlowe and Fletcher. Yesterday Jonson's lock screen display quoted that bit of Eliot about Webster and Donne to me. Anyway, can I return to the question of teaching Prolog? Scott Campbell asks who wants Arts graduates who know Prolog, as opposed to those who have 'a "competent user" level of familiarity'. I disagree with Scott on two points: a) Arts graduates are no different from or worse than anyone else. ie they are likely to go into the big bad world and work in a big business. and what will they use in the BBW? - spreadsheets, word-processing and simple database querying - hence the argument for teaching use of SQL, as it gives a very convenient upgrade path to a fuller understanding of relational database design, and mapping of the real world into that way of thinking. So to that extent, no programming need be taught at all. b) I dont agree (pace Philippe Kahn) that "Prolog = AI = 5th generation"; on the one hand there are expert systems, and there is AI, which may find Prolog a suitable vehicle; but on the other there are 5th generation languages (of which Prolog is a crude example) which many people (supposedly) find easier to use on a day to day basis. You can do anything you like with Prolog, so it need not be seen as an either/or situation, with Prolog or Lisp being seen as only suitable for expert systems, and Icon seen as only suitable for string processing. All these computer languages are problem-solving notations, which different people may find more or less easy to use to solve their given problem.
I suppose the question is whether undergraduates have problems to solve; my argument is the specious and out-dated one that learning to solve problems with a formal notation is fun, interesting and vaguely useful. There is a much stronger argument, though, that the students EXPECT to learn some programming. sebastian rahtz. computer science, southampton, uk From: Willard McCarty Subject: Vapographies Date: 7 July 1987, 10:12:25 EDT X-Humanist: Vol. 1 Num. 118 (118) Some HUMANISTs in the UK have complained that HUMANIST BIOGRAFY has not arrived. Please let me know immediately if you haven't received your copy, and I'll send it forthwith. From: CMI011%UK.AC.SOUTHAMPTON.IBM@AC.UK Subject: Date: Tue, 7 Jul 87 17:15:47 GMT X-Humanist: Vol. 1 Num. 119 (119) Job Advertisement ----------------- I would be grateful if HUMANIST readers could draw the attention of job-seekers in their locality to a post of Research Assistant in the Departments of Electronics & Computer Science, and Archaeology, in the University of Southampton, UK; we need someone to work on a graphical database of archaeological artifacts. I can send more details as required, but basically we are after someone with a graphical/image-processing background in computer science, and some archaeology. The post is for 2 years, and pays c. 10,000 pounds p.a. Work will be in Southampton on a Sun workstation. Thanks. Please direct enquiries (applications need to be within a month or so) to me: cmi011@uk.ac.soton.ibm sebastian rahtz, computer science, southampton, UK From: Willard McCarty Subject: The ACH and the ALLC Date: 7 July 1987, 22:10:08 EDT X-Humanist: Vol. 1 Num.
120 (120) ____________________________________________________________________ | The Association for Computers and the Humanities (ACH) | | and the Association for Literary and Linguistic Computing (ALLC) | ------------------------------------------------------------------- The following describes the two professional associations that sponsor HUMANIST. For further information, please contact the named individuals directly. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - The ACH - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - The Association for Computers and the Humanities is an international organization devoted to encouraging the development and use of computers and computing techniques in humanities research and education. Traditionally, ACH has fostered computer-aided research in literature and language, history, philosophy, anthropology, and related social sciences as well as computer use in the creation and study of art, music, and dance. As computing applications in the humanities have developed and broadened in the 1980s, the Association has expanded its scope to include areas from word processing to computer-assisted instruction in composition, language, history, philosophy, and anthropology, as well as computational linguistics and cognitive science, which overlap increasingly with work in the area of humanities computing. Founded in 1977, ACH is the primary professional society in the U.S. for scholars involved or interested in any aspect of humanities computing. The Association provides a forum for continuing communication about humanities computing and strives to meet the needs of those who want to gain familiarity with both existing and potential applications of computers in humanities disciplines. Publications ACH members receive a subscription to Computers and the Humanities, a quarterly journal devoted to scholarship in the field of humanities computing. CHum is published 4 times a year by Paradigm Press. 
The heart of ACH is its quarterly newsletter, which covers the activities of the Association and its members and includes articles on various areas within humanities computing, news of projects and conferences of interest to ACH members, and reports on the activities of governmental agencies and other organizations that affect computer-aided humanities research. Meetings ACH sponsors the biennial International Conference on Computers and the Humanities (ICCH), held in odd-numbered years, which brings together scholars from around the world to report on research activities and software and hardware developments in the field. Recently, ACH began to sponsor conferences and workshops on specialized topics in humanities computing, held in even-numbered years. The Ninth International Conference for Computers and the Humanities (ICCH/89) will be held in conjunction with the annual conference of the Association for Literary and Linguistic Computing in Toronto, Canada, in June 1989. In June 1988, ACH will sponsor a conference on Teaching Computers and the Humanities Courses, to be held at Oberlin College in Ohio. Affiliations Because of the interdisciplinary nature of humanities computing, ACH maintains close ties with a number of organizations with overlapping interests, in order to provide its members with information from within specialized areas of the field of humanities computing and to keep others informed of work within the discipline. ACH is closely allied with the European-based Association for Literary and Linguistic Computing and co-sponsors its annual conference. ACH is also closely allied with the Association for Computational Linguistics. ACH sponsors sessions devoted entirely to humanities computing at the annual meetings of the Modern Language Association, the Linguistic Society of America, and at the National Educational Computing Conference. Membership Membership is for the calendar year and is open to all scholars interested in humanities computing.
The benefits include:

Subscription to the quarterly ACH Newsletter.
Subscription to Computers and the Humanities.
The option of subscribing to SCOPE (Scholarly Communications: On-line Publishing and Education), Research in Word Processing Newsletter, and Bits and Bytes Review at reduced rates.
Reduced registration fee at the International Conference on Computers and the Humanities and at meetings of the Association for Literary and Linguistic Computing.
Reduced membership fee in ACH regional affiliate organizations.
Discounts on books and special issues of journals devoted to humanities computing.
The intangible benefits derived from associating with others who are interested and involved in humanities computing!

Membership information

ACH MEMBERSHIP
Individual: $40 (US). Includes subscription to ACH Newsletter and Computers and the Humanities. NOTE: all issues of both publications for the current year will be sent.

OPTIONAL FEES (in US $):
NORTHEAST (REGIONAL) ACH MEMBERSHIP: $10.00 per year for ACH members
SUBSCRIPTION TO SCOPE: $25.00 for 6 issues
SUBSCRIPTION TO RESEARCH IN WORD PROCESSING NEWSLETTER: $12.00 for 9 issues
SUBSCRIPTION TO BITS & BYTES REVIEW: $40.00 for 9 issues

Send application form to:
Harry Lincoln, Treasurer
Association for Computers and the Humanities
Department of Music
SUNY Binghamton, New York 13901

--------------------------------------------------------------------
The ALLC
--------------------------------------------------------------------

The Association for Literary and Linguistic Computing (ALLC) brings together all who have an interest in using computers in the analysis of text. It is an international and cross-disciplinary association, whose members are drawn from subjects such as literature, linguistics, lexicography, psychology, history, law and computer science. The ALLC works closely with the American-based Association for Computers and the Humanities.
Publications

The ALLC was founded in 1973 and from 1973 to 1985 published the ALLC Bulletin three times per year. The ALLC Journal, with two issues per year, began publication in 1980. In 1986 Oxford University Press took over publication of the ALLC periodicals to form a new quarterly journal, Literary and Linguistic Computing. Individual membership of the Association is now by subscription to Literary and Linguistic Computing. Literary and Linguistic Computing contains scholarly refereed papers on all aspects of computing applied to literature and language, with the emphasis as much on the computing techniques as on the results of research projects. The range of coverage extends to hardware and software, computer-assisted language learning, word-processing applications in the humanities, and the teaching of computing techniques to students of language and literature. The journal also carries news and notes, a diary, a bibliography and other items of current interest.

Conferences

The ALLC organises a general conference on literary and linguistic computing in even-numbered years and a conference on a specialist theme in odd-numbered years, when it also co-sponsors the ICCH conferences organised by the Association for Computers and the Humanities. ALLC and ACH members are entitled to reduced rates at ALLC-sponsored events. Recent specialist themes have been Quantitative Methods (Nice, 1985) and Linguistic Databases (Gothenburg, 1987). The next ALLC conference will be held in Jerusalem on 5-9 June 1988, immediately before the second conference of the Association Internationale Bible et Informatique (Computers and the Bible). In 1989 the ALLC conference will be held in conjunction with ICCH89 in Toronto, Canada. The proceedings of the ALLC conferences are published by Slatkine of Geneva in a volume of selected papers.

Representatives

The ALLC has representatives in approximately thirty countries or geographical areas as well as representatives for some twenty-five subject areas.
Representatives provide information for the ALLC membership by means of survey papers, organising special sessions at conferences, and answering queries and requests for information. They are also able to publicise the ALLC in their own area or discipline.

Membership

Subscription rates are:

                Individual    Institution
   UK           12 pounds     24 pounds
   N. America   US$22.50      US$45
   Elsewhere    14 pounds     28 pounds

Subscriptions should be sent to:

   Winifred Moranville           or   Journals Subscriptions
   Journals Subscriptions             Journals Marketing
   Oxford University Press            Oxford University Press
   Walton Street                      200 Madison Avenue
   Oxford OX2 6DP                     New York, NY 10016
   UK

Payment may be made by credit card. Back issues of the periodicals may also be obtained from Oxford University Press. Contributions to Literary and Linguistic Computing should be sent to Mr Gordon Dixon, Editor-in-Chief, Literary and Linguistic Computing, Institute of Advanced Studies, Manchester Polytechnic, All Saints Building, Oxford Road, Manchester M5 6BH, UK, electronic mail NPUM01@UK.AC.UMIST.CN.PA [on Bitnet/NetNorth/EARN: NPUM01 at PA.CN.UMIST.AC.UK]. Further information may be obtained from the Honorary Secretary, Dr T N Corns, Department of English, University College of North Wales, Bangor, Gwynedd LL57 2DG, UK, electronic mail V002@UK.AC.BANGOR.VAXA [on Bitnet &c.: V002 at VAXA.BANGOR.AC.UK], or from the ALLC Chairman, Mrs Susan Hockey, Oxford University Computing Service, 13 Banbury Road, Oxford OX2 6NN, UK, electronic mail SUSAN@UK.AC.OX.VAX2 [on Bitnet &c.: SUSAN at VAX2.OX.AC.UK]. From: Dr Abigail Ann Young 1-416-585-4504 Subject: teaching arts students Prolog Date: 8 July 1987, 12:08:42 EDT X-Humanist: Vol. 1 Num. 121 (121) I am interested in this not from the point of view of a teacher of arts students, which alas! I am not, but as someone who herself had to learn a programming language "late in life" (ca 30) and on her own. It was hard. Aspects of it were not fun.
What I was learning was "C" and the applications were very picayune, really, but also very picky and precise. (I assume, by the way, that "5th generation languages" are something bigger, better, and brighter than "C:" it is interesting to see how the Whig view of progress in history as positive lives on in our attitudes towards "progressive" generations of machines and languages.) But what saved me from total frustration and eventual failure was that I had a great deal of experience in learning and teaching natural languages with very strict syntactic and accidental conventions; and that when I was in school "problem-solving" was part of maths. What I mean is that we were taught how to approach problems in an analytical way while we were also being taught geometry and trigonometry. I don't remember much of the maths (if I had been one of the characters in the Musgrave Ritual the body would never have been found) but the lessons in analysis enabled me to understand the idea behind algorithms, etc, in the computer books I was banging my head against. So it seems to me that Sebastian Rahtz has a very good point about the effect on arts students of their learning programming. Perhaps they will never use the particular language you teach them, *but* they will learn how to approach and analyze a problem from a computational point of view. And that will help them both in the Big Bad World, if they have to use computers at all in their work; and in the academic world (the Little Bad World?) where humanists need more than ever to understand how to express a problem clearly in computational terms in order to get not just a correct answer but the correct answer to the question they want to ask. 
It will also help them, if they remain in the academic world, to view with proper skepticism both those humanists who deny that the computer can be a valuable tool (and they still exist, pace the latest issue of ACH Newsletter) and those who think the computer can solve any question it is worthwhile asking better than a human being can. From: Willard McCarty Subject: Date: 8 July 1987, 18:26:44 EDT X-Humanist: Vol. 1 Num. 122 (122) The following was intended to be a note to all HUMANISTs but did not get processed properly. I'm simply passing it on.
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>Date: Wed, 8 Jul 87 10:52:09 edt
>From: jdg%psc90.UUCP@DARTMOUTH.EDU (Dr. Joel Goldfield)
>Message-Id: <8707081452.AA17538@psc90.UUCP>
>To: ihnp4!princeton!seismo!HUMANIST@utoronto
>Subject:
>
>String-processing languages
> In response to the comments of Sebastian Rahtz and others concerning
>PROLOG, ICON and string processing let me add one more: researchers who
>have access to the UNIX system may find 'awk' to be helpful. Named after
>its inventors at Bell Labs (Aho, Weinberger & Kernighan, at
>the Murray Hill, NJ, facility) it's extremely dense and fast. While the
>original documentation is sparse and esoteric, recent books on the UNIX
>system have apparently tried to correct this problem.
> For information on the ICON language, see Mark Olsen's article
>in CHum, Jan.-March, 1987. There's a bit of undefined "jargon" in it,
>but it is mainly geared toward humanists' interests. I describe my applications
>of 'awk' in the article appearing in vol. 1 of the ALLC publication that
>Etienne Brunet edited for Slatkine (1986), pp. 455-465 approximately.
> --Joel D. Goldfield
> Plymouth State College (NH, USA)
From: Willard McCarty Subject: ALLC/AIBI Conference Announcement & Call for Papers Date: 8 July 1987, 18:33:16 EDT X-Humanist: Vol. 1 Num.
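Goldfield recommends awk for string processing but gives no sample. As a hedged illustration of the kind of one-pass, whitespace-field-splitting job awk excels at, here is the same pattern sketched in Python for readers without a UNIX system; the word-counting task, the function name, and the sample lines are mine, not from the letter:

```python
# Word-frequency counting in the style of a classic awk one-liner:
#   awk '{ for (i = 1; i <= NF; i++) n[$i]++ } END { for (w in n) print w, n[w] }'
# Everything below is an illustrative sketch, not from the original note.
from collections import Counter

def word_frequencies(lines):
    """Split each line on whitespace (awk's default) and tally every word."""
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts

text = ["to be or not to be", "that is the question"]
freq = word_frequencies(text)
```

The associative-array idiom (`n[$i]++` in awk, `Counter` here) is what made awk attractive for humanities text work: no declarations, and the vocabulary itself serves as the index.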
123 (123) -------------------------- ALLC--AIBI Joint Conferences June 1988 -- Jerusalem, Israel Preliminary Announcement The Association for Literary and Linguistic Computing (ALLC) and the Association Internationale Bible et Informatique (AIBI) invite you to attend their International Conferences to be held in June 1988 in Jerusalem. The Fifteenth Annual Conference of ALLC will take place on June 5-9, 1988, and it will be followed immediately by the Second Conference of AIBI on June 9-13, 1988. The conferences are jointly organized by the Organizing Committee in Israel (which will also serve as the Conference Committee for the ALLC one), but the procedures for papers' submission and selection and for registrations will be kept separate. A significant discount will be given, however, for a joint registration, to encourage attendees of each conference to attend the other one also, thus enhancing fruitful exchanges and communications between researchers of closely related interests. Special registration rates will also apply to speakers, students, and members of sponsoring institutions. The conferences are sponsored and supported by a number of academic and professional societies and organizations that will be listed in future mailings. A ``Call for Papers'' for the ALLC conference is enclosed; a ``Call for Papers'' for the AIBI one is being mailed separately. The coordination between the two programs will be assured by a special panel consisting of Y. Choueka and R.-F. Poswick. Both conferences will consist of invited lectures, contributed papers, panels, product-review sessions, poster displays, exhibits and demonstrations. Standard hardware and communications equipment for on-line demonstrations and large-screen displays will be available on site. Every effort will be made to meet special hardware needs if a detailed request is sent well in advance to the Organizing Committee. 
Selected papers from the two conferences will be published in two separate Proceedings volumes. The conferences will be accompanied by an exhibition of hardware, software, books and other products and services relevant, in general, to the computing-in-the-humanities domain. The timing of the conferences was specially chosen so as to coincide (hopefully...) with some of the most glorious sunny days that Jerusalem can offer, and to assure availability (and lower fares...) of air tickets and hotel rooms, while avoiding the rush summer season when tourists usually crowd the city. A rich and interesting series of cultural, social and tourist events for the registrants and their parties will accompany the conferences. A few Mediterranean beaches, beautiful in so many ways, are also about one hour of driving from Jerusalem; far enough so as not to distract the conscientious wisdom seeker, but close enough for an occasional refreshing jump to the sun-and-sea worshipper... The fascinating appeal and haunting beauty of Jerusalem, with its multi-cultural environments and institutions, its intriguing history, and its unique human and architectural landscapes, coupled with the anticipated characteristic ambiance usually associated with scientific activities and the gathering of scholars and researchers from all over the world, will certainly turn these meetings into an exciting professional and cultural event. So, mark these dates on your calendar, and plan early to join us in June 1988 in Jerusalem. Don't miss this excellent opportunity for an enriching scientific and human experience. In order to receive future mailings about the conferences, and to help us better plan them, please return the enclosed Notification Form, duly filled out, as soon as possible. For further information on the conferences and their programs, and for suggestions for panels, tutorials, etc., please write to the Organizing Committee at the address given in the enclosed form. 
From: Subject: Date: X-Humanist: Vol. 1 Num. 124 (124)

ALLC--AIBI Joint Conferences
5-13 June 1988, Jerusalem

Organizing Committee:
Yaacov Choueka, Chairman    Bar-Ilan University, Ramat-Gan (visiting Bell Communications Research, Morristown)
Hillel Weiss, Coordinator   Bar-Ilan University, Ramat-Gan
Daniel Boyarin              Bar-Ilan University, Ramat-Gan
Itamar Even-Zohar           Tel-Aviv University, Tel-Aviv
Ariel Frank                 Bar-Ilan University, Ramat-Gan
Reuven Mirkin               Academy of Hebrew Language, Jerusalem
Uzzi Ornan                  The Hebrew University, Jerusalem
Yehuda Radday               Technion, Haifa

Address: Department of Mathematics and Computer Science, Bar-Ilan University, Ramat-Gan, Israel, 52100
electronic mail: R70016%BARILAN.BITNET or choueka@bimacs.bitnet

From: Subject: Date: X-Humanist: Vol. 1 Num. 125 (125)

Notification Form

Mail to: Organizing Committee
         ALLC--AIBI Joint Conferences
         Department of Mathematics and Computer Science
         Bar-Ilan University
         Ramat-Gan, Israel, 52100

Title_______Name_________________________________________________
Affiliation______________________________________________________
Address__________________________________________________________
Tel._______________ e-mail address___________________
__ Please send me more information when available
__ Please send me the Call for Papers of the AIBI conference
__ I plan to attend  __ ALLC Conf.  __ AIBI Conf.  __ Both Conf.
__ I plan to submit a paper to  __ ALLC  __ AIBI
Tentative Title: _______________________________________________________________
__ I propose the following panel to the __ALLC __AIBI conference:
_______________________________________________________________

From: Subject: Date: X-Humanist: Vol. 1 Num. 126 (126)

Association for Literary and Linguistic Computing
Fifteenth Annual Conference
5-9 June 1988, Jerusalem

Literary and Linguistic Computing-1988

The Fifteenth Annual Conference of ALLC will be held on 5-9 June 1988 in Jerusalem.
As has been traditional with ALLC meetings, the full spectrum of literary and linguistic computing in general is expected to be covered at the conference. Specific topics which are currently under vigorous research, such as large textual databases and corpora and linguistic computing in multi-lingual environments, are naturally expected to receive special attention. Papers are invited on substantial unpublished research on the main themes of the conference and similar related areas such as:

-computational morphology, syntax and semantics
-computational lexicography and lexicology
-mechanized dictionaries, lexicons and grammars
-lemmatization and parsing
-ambiguity and its mechanical resolution
-stylistic analysis and authorship studies
-statistical linguistics and metrics
-research tools: corpora, concordances, indexes and thesauri
-full-text systems
-natural language understanding
-text processing and retrieval

Papers that present specific theoretical models coupled with new experimental results are particularly welcome, but contributions dealing with critical evaluations, general reviews and appraisal of theoretical models, software packages and specialized hardware will also be considered. General descriptions of on-going long-range projects are acceptable only if they contain substantial new and unpublished information. Authors should send 6 copies of a one-page abstract and a cover sheet in the format, and to the address, given below. Abstracts should clearly point to the originality and importance of the contribution and its relevance to the conference, and should clarify the operational status of described projects; vague or unsubstantiated claims and plans for the future will be given little weight. Priority in evaluation and consideration will be given to abstracts that are accompanied by an Extended Abstract of 4-6 pages (6 copies).
Although not formally required, authors are urged to include these extended abstracts, so as to help make the reviewing process more reliable and balanced. Papers must be received by December 15, 1987. Authors will be notified of acceptance by February 29, 1988. Based on the contribution's contents and on the feedback from the conference, papers will then be selected for inclusion in the Proceedings volume to be published by Slatkine (Geneva). The happy selected authors will be notified by the end of June 1988, and a camera-ready version of the full-length papers must be received by August 15, 1988. Opportunity will thus be given to the authors to include in the final version any refinements or clarifications called for by the oral presentation and its feedback. More details on local arrangements and accommodations, registration fees and forms, etc., will be given in the second call for papers to be mailed during winter 1987. If you would like to receive future mailings, and certainly if you plan to submit a paper or just to attend the conference, please mail the enclosed notification note immediately.

Format for submissions:

Cover Sheet:
------------
ABSTRACT SUBMITTED TO ALLC 1988
Title
Author
Affiliation
Complete address, including tel. and e-mail address
Subject identification (e.g. statistical linguistics, morphological disambiguation, etc.)

Abstract
--------
Title
Author
ABSTRACT
the text of the abstract, one page of about 30 single-spaced lines (in elite, pica or roman type, 10-12 points).

Extended Abstract:
------------------
Title
Author
EXTENDED ABSTRACT
text, 4-6 pages with the same format as the abstract.

From: Subject: Date: X-Humanist: Vol. 1 Num. 127 (127)

International Advisory Board
Paul Bratley        Universite de Montreal, Montreal, Quebec
Jacqueline Hamesse  Universite Catholique, Louvain-La-Neuve
R.-F. Poswick       Bible et Informatique, Maredsous
Klaus M.
Schmidt             Bowling Green State University, Ohio
Don Walker          Bell Communications Research, Morristown, New Jersey
-----------------------
Important deadlines

December 15, 1987   paper submission
February 29, 1988   author notification
April 5, 1988       end of early registration
August 15, 1988     camera-ready version of full papers for the Proceedings
------------------------------

From: Willard McCarty Subject: On the teaching of Prolog Date: 9 July 1987, 16:43:34 EDT X-Humanist: Vol. 1 Num. 128 (128) The following is from Eva Swenson (ESWENSON at UTORONTO):
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
I have been observing the exchange of views regarding the teaching of PROLOG. I think that providing undergraduates with problem-solving tools is good. Pick your favorite tool(s). It does not matter. However, I think that the more basic task is to teach undergraduates, and people in general, how to recognize problems, identify and characterize them, and understand their nature, and then to determine which tool may be appropriate for the problem. In my experience, I find that undergraduates (and some instructors) have no patience for this. The tendency is to get to the tool (or toy) as quickly as possible and to try to use it to solve ALL problems. As the saying goes: when one has learned how to use a hammer, everything looks like a nail. This is why I would caution against trying to learn about relational databases by starting with SQL. Like learning what a nail is about by studying how to use a hammer. Eva Swenson.
From: Willard McCarty Subject: Not me.... Date: 10 July 1987, 15:46:26 EDT X-Humanist: Vol. 1 Num. 129 (129)

A GUIDE TO THE TRANSLATION OF TECHNICAL PUBLICATIONS

o "It is believed that ..." (I think.)
o "It is generally believed that ..." (A couple of my friends think so too.)
o "It has long been known that ..." (I didn't bother to look up the original reference.)
o "While I have not found definite answers to these questions ..."
(The data made no sense, but I'm publishing them anyway.)
o "It might be argued that ..." (I can answer this objection so well that I shall now raise it.)
o "Of great theoretical and practical importance ..." (Somewhat interesting to me.)
o "Of extreme purity, ultrapure ..." (Composition unknown.)
o "Qualitatively correct ... correct within an order of magnitude." (Wrong.)
o "Three samples were chosen for detailed study ..." (The others didn't make sense.)
o "Typical results are shown in Fig. 2" (The best results are shown in Fig. 2.)
o "The most reliable values are given by Smith ..." (Smith is a friend of mine.)
o "Subjected to controlled stress during the experiment ..." (Accidentally dropped on the floor.)
o "Handled with extreme care during the experiment ..." (Not dropped on the floor.)
o "A discussion of the remaining data will be forthcoming ..." (Some of my results don't make sense.)
o "A complete understanding clearly requires much more work ..." (None of my results make sense.)
o "I would be remiss not to thank Archibald Thankery for assistance with the experimental aspects of this investigation, and Dr. Samuel Hirschfeld for helpful comments during the analytical phase ..." (Archie did all the work, and then Sam explained it to me.)
__________________________________________________________________________
Keebler { hua@cmu-cs-gandalf.arpa }

From: Steve Younker Subject: HUMANIST Loops Date: Mon, 13 Jul 87 14:10:36 EDT X-Humanist: Vol. 1 Num. 130 (130) I feel today is a good day to send a message out to all subscribers of UofT's HUMANIST discussion group based on the events of the past week. Any of you who were members of this group a week ago will recall that one of your colleagues went on holiday. Before he left he ensured that any messages that arrived in his 'computer mail box' were answered by an automatic program that informed the sender that he was away.
To make a long story short, this automatic program got into an electronic discussion with the HUMANIST program at UTORONTO. Since all of you are privy to any electronic conversations that occur with HUMANIST, you all received a ton of electronic mail. Since each piece of mail was essentially the same, it made for some extraordinarily boring reading. Fortunately, Willard was able to catch this process before the network wires began to melt under the load. This morning I came in to work and found a complaint in my mailbox from the U.K. about this incident. The person requested that I ensure that this situation would not be repeated in the future, for it was inconvenient and expensive to those in the U.K. due to network charges. First of all, I have replied to the individual concerned and I hope I have dealt with his concerns. However, I thought that there may be more of you who were 'silently' annoyed by the incident. So I felt that a full explanation is probably in order. I discussed this with Willard and he agreed. I realize that such an incident can be annoying, but the nature of the beast (computers) dictates that these 'bugs' will occur from time to time and cannot be predicted. In this case, a user on another computer installed a piece of software that had, to say the least, a devastating effect when it began to 'talk' with HUMANIST. The user involved is not under Willard's control or mine. Indeed, the user could have been anywhere in the world. Willard and I have attempted to set up HUMANIST in such a manner as to be as useful to all as humanly possible. Some of you may remember that DARTVAX ran amok in a similar fashion when the discussion group began. That situation was also rather obscure. The point is, computer loops will occur from time to time and some of them cannot be predicted or prevented. I ask you all to react in a kindly manner. Simply smile, laugh, swear a bit maybe, delete the offending files, and carry on enjoying the benefits of HUMANIST.
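The loop Younker describes is mechanical: an auto-reply answers a list posting, the list redistributes that answer to every member, and the auto-reply answers the redistribution in turn. The usual guard is for a responder to stay silent on mail it can recognize as list traffic or as machine-generated. A minimal sketch in Python follows; the header conventions shown (Precedence, and the later Auto-Submitted of RFC 3834) and the function itself are illustrative, not part of the 1987 mailers:

```python
# Hypothetical guard for a vacation auto-responder: never answer mail
# that is itself machine-generated or that arrives via a mailing list,
# so that two programs cannot answer each other forever.
def should_autorespond(headers):
    h = {k.lower(): v.lower() for k, v in headers.items()}
    if h.get("auto-submitted", "no") != "no":
        return False  # sent by another program (RFC 3834 convention)
    if h.get("precedence") in ("list", "bulk", "junk"):
        return False  # mailing-list or bulk traffic
    return True
```

With such a check in place, the vacation program answers ordinary personal mail but ignores list redistributions, and the loop never starts.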
Willard and I realize that this is a new medium for some of you and we will endeavour to provide this service with the minimum of bother and confusion. To make HUMANIST a useful and an efficient medium, I urge you all to keep Willard and/or myself informed of your likes and dislikes of the service provided. I believe the benefits to the academic world far outweigh the occasional annoyance of a computer bug. If you wish to express any concerns that may not be of interest to all members of HUMANIST, please feel free to contact me directly at: POSTMSTR@UTORONTO Steve Younker, Postmaster - University of Toronto From: ATMKO@ASUACADATMKO at ASUACAD Subject: Date: 20 July 1987, 19:25:27 MST20 July 1987, 19:10:25 MST X-Humanist: Vol. 1 Num. 131 (131) Dear Humanist, For a while we were snowed under tons of junk mail and now nothing. Is Humanist alive? Has Humanist fallen victim to an international version of core wars? Did the system operators at CUNY "hit" the misguided list server with a mega-byte of zeros. Please, Arizona is too hot and lonely to be deprived of humanist. The following program was downloaded from the Catspaw bulletin board and requires SNOBOL4+ to run. It is forwarded to Humanist in order to make your life more economical and interesting. * INSIDER.SNO * Produces an industry insider report, * thus saving the cost and nuisance of reading * Infoworld, PC Week, Microbytes, John Dvorak, etc. 
*
*
* Set keywords
-plusops 1
        &trim = 1
        &anchor = 1
        &maxlngth = 32767
* Use system clock to seed the random-number generator
* Define your arrays
* Blueboy is the I B M whatever
        blueboy = array('10')
        blueboy[1] = 'official'
        blueboy[2] = 'executive'
        blueboy[3] = 'vice-president'
        blueboy[4] = 'officer'
        blueboy[5] = 'manager'
        blueboy[6] = 'source'
        blueboy[7] = 'consultant'
        blueboy[8] = 'engineer'
        blueboy[9] = 'vice-president'
        blueboy[10] = 'vice-president'
* newspeak is verb for his statement
        newspeak = array('10')
        newspeak[1] = 'confirmed'
        newspeak[2] = 'denied'
        newspeak[3] = 'refused to confirm or deny'
        newspeak[4] = 'refused to comment on'
        newspeak[5] = 'denied any knowledge of'
        newspeak[6] = 'agreed that there might be some validity to'
        newspeak[7] = 'denied'
        newspeak[8] = 'been uncharacteristically forthright about'
        newspeak[9] = 'taken the Fifth Amendment when asked about'
        newspeak[10] = 'been involuntarily retired after prematurely confirming'
* hearsay is his statement
        hearsay = array('10')
        hearsay[1] = 'reports'
        hearsay[2] = 'an article in Tass'
        hearsay[3] = 'industry rumors'
        hearsay[4] = 'authoritative gossip'
        hearsay[5] = 'unsubstantiated dispatches'
        hearsay[6] = 'widespread speculation'
        hearsay[7] = 'unofficial reports'
        hearsay[8] = 'high-level rumors'
        hearsay[9] = 'leaks from beta-testers'
        hearsay[10] = 'informed conjectures'
* newstuff is the next product
        newstuff = array('10')
        newstuff[1] = 'system'
        newstuff[2] = 'architecture'
        newstuff[3] = 'CPU'
        newstuff[4] = 'system bus'
        newstuff[5] = 'token-ring network'
        newstuff[6] = 'local-area network'
        newstuff[7] = 'entry-level product'
        newstuff[8] = 'top-end workstation'
        newstuff[9] = 'video display standard'
        newstuff[10] = 'operating system'
* saywhat is an attribute of the new system
        saywhat = array('10')
        saywhat[1] = 'be totally proprietary'
        saywhat[2] = 'run under Microsoft Windows'
        saywhat[3] = 'be based on the Intel 80486'
        saywhat[4] = 'remain a closely-held secret until the end of the century'
        saywhat[5] = 'be generally compatible with existing systems'
        saywhat[6] = 'use a subset of the OS/360 instruction set'
        saywhat[7] = 'employ scalar interrupts and extensive masked gate arrays'
        saywhat[8] = 'be introduced in the near future'
        saywhat[9] = 'have DB-15 connectors'
        saywhat[10] = 'be produced by robots in Singapore'
* bluesite is where they're doing it
        bluesite = array('10')
        bluesite[1] = 'a product suitability testing facility'
        bluesite[2] = 'several gamma-test sites'
        bluesite[3] = 'a national refuge for migrating data'
        bluesite[4] = 'IRS offices'
        bluesite[5] = 'proposed MX missile bases'
        bluesite[6] = 'the corporate detention center for dress-code violators'
        bluesite[7] = 'a detoxification clinic'
        bluesite[8] = 'a number of Fortune 500 companies'
        bluesite[9] = 'a toxic waste dump'
        bluesite[10] = 'a maximum-security prison'
* bluecity is more where
        bluecity = array('10')
        bluecity[1] = ' on Three Mile Island'
        bluecity[2] = ' in Armonk, N.Y.'
        bluecity[3] = ' just above Boulder, Colo.'
        bluecity[4] = ' in midtown Manhattan'
        bluecity[5] = ' in beautiful downtown Burbank'
        bluecity[6] = ' near Fargo, N.D.'
        bluecity[7] = ' in the suburbs of Metetse, Wyo.'
        bluecity[8] = ' in the Silicon Valley'
        bluecity[9] = " between Cassini's Division and the Roche Limit"
        bluecity[10] = ', formerly in Boca Raton until the company learned '
+               'that "Boca Raton" means "Rat Mouth" in Spanish'
        morestuff = array('5')
        morestuff[1] = 'further explanation'
        morestuff[2] = 'detailed announcement'
        morestuff[3] = 'specific details'
        morestuff[4] = 'public statement'
        morestuff[5] = 'voluntary confession'
        sayblue = array('7')
        sayblue[1] = 'would be premature at this point in time'
        sayblue[2] = 'would cause smaller companies to file for Chapter 11,'
+               ' which would just get us in trouble again with the Antitrust Division'
+               ' of the Justice Department'
        sayblue[3] = 'would cost me my pension'
        sayblue[4] = 'might give clone-makers information they should'
+               ' not have access to'
        sayblue[5] = 'could get me transferred to Anchorage'
        sayblue[6] = 'will have to come from the M*A*S*H cast'
        sayblue[7] = 'must come from a more authoritative source'
        bytehead = array('8')
        bytehead[1] = 'observers'
        bytehead[2] = 'analysts'
        bytehead[3] = 'watchers'
        bytehead[4] = 'spies'
        bytehead[5] = 'followers'
        bytehead[6] = 'observers'
        bytehead[7] = 'analysts'
        bytehead[8] = 'observers'
        goodsay = array('10')
        goodsay[1] = 'the greatest thing since sliced bread'
        goodsay[2] = 'something the industry has long needed'
        goodsay[3] = 'an important and significant advancement'
        goodsay[4] = 'one of the finest achievements of western civilization'
        goodsay[5] = 'a seminal step, pregnant with fertile possibilities'
        goodsay[6] = 'the best improvement since they quit using punch cards'
        goodsay[7] = 'the reason why Big Blue continues to lead the way'
        goodsay[8] = 'a colossal advancement in personal-computing power'
        goodsay[9] = 'another reason why no one ever got fired for buying IBM'
        goodsay[10] = 'the first manifestation of the next generation of personal computers'
        goodmore = array('10')
        goodmore[1] = 'represents no major breakthrough'
        goodmore[2] = 'contains no surprises'
        goodmore[3] = 'employs an unusual huge interface known as the capybara'
        goodmore[4] = 'requires an EE to configure'
        goodmore[5] = 'uses components yet to be invented'
        goodmore[6] = 'will work only with IBM peripherals'
        goodmore[7] = 'requires a three-phase 37-hz 440-volt power supply'
        goodmore[8] = 'blows up if connected to anything from a different manufacturer'
        goodmore[9] = 'is compatible with Sidekick'
        goodmore[10] = 'crashes at the slightest provocation'
        butmore = array('9')
        butmore[1] = 'set a standard'
        butmore[2] = 'be popular with MIS professionals'
        butmore[3] = 'be what Lotus is to spreadsheets'
        butmore[4] = 'move us into the next generation'
        butmore[5] = 'give the other companies something to try to emulate'
        butmore[6] = 'give the clone-makers fits for at least two months'
        butmore[7] = 'carry on the tradition of reliability and service'
        butmore[8] = "give Radar O'Reilly something besides a teddy bear to sleep with"
        butmore[9] = 'require substantial additional purchases by users,'
+               ' thus making IBM stock a good buy'
        badsay = array('10')
        badsay[1] = 'William Gates, president of MicroSoft,'
        badsay[2] = 'Phillipe Kahn of Borland International'
        badsay[3] = 'Mitch Kaypor, formerly of Lotus Development Corp.,'
        badsay[4] = 'Steve Jobs, a co-founder of Apple,'
        badsay[5] = 'Gary Kildall, developer of CP/M,'
        badsay[6] = 'Adam Osborne at Paperback Software'
        badsay[7] = 'Bob Wallace, president of QuickSoft,'
        badsay[8] = 'Esther Dyson, editor of Release 1.0,'
        badsay[9] = 'Charles Babbage, conceptual founder of computing,'
        badsay[10] = 'Lee Felsenstein, president of Golemics and designer of the Osborne I,'
        notgood = array('10')
        notgood[1] = 'a disaster waiting to happen'
        notgood[2] = 'a solution in search of a problem'
        notgood[3] = 'another chiclet-key PC Jr.'
notgood[4] = "Big Blue's biggest blunder since the RISC machine" notgood[5] = 'as big a step backwards as returning to paper tape stora ge' notgood[6] = 'a titanic company finally hitting an iceberg' notgood[7] = "the kind of thing you'd expect from some hackers" + " in a garage, not from the world's biggest computer company" notgood[8] = 'something that only defense contractors could afford' notgood[9] = 'too much, too soon' notgood[10] = 'the DP equivalent of herpes' addbad = array('10') addbad[1] = 'Only IBM would try getting away with this' addbad[2] = 'It will go over like a pregnant pole-vaulter' addbad[3] = 'In two years, it will be as popular as ferrite core memor y' addbad[4] = 'You can bet nobody will try to clone this one' addbad[5] = 'There are people starving on this planet, ' + "and yet we have expensive products like this. That's disgusting" addbad[6] = "They must be relying on the old saying " + "that there's one born every minute" addbad[7] = "I've heard it runs slower than a dBASE sort" addbad[8] = 'Maybe they developed it for the Strategic Defense' + " Initiative. That's the only way it makes sense" addbad[9] = "Perhaps it's only a stopgap until OS/2 is debugged" addbad[10] = 'Only Big Blue would dare try anything like this' catname = array('2') catname[1] = 'Mark Emmer, publisher' catname[2] = 'Ed Quillen, editor' goodadj = array('10') goodadj[1] = 'influential' goodadj[2] = 'respected' goodadj[3] = 'esteemed' goodadj[4] = 'highly regarded' goodadj[5] = 'popular' goodadj[6] = 'noted' goodadj[7] = "insiders'" goodadj[8] = 'revered' goodadj[9] = 'powerful' goodadj[10] = 'innovative' badwarn = array('7') badwarn[1] = "All these reports have about as much credibility " + "as a White House spokesman" badwarn[2] = "If these statements could be transformed into " + "matter, we could go into the fertilizer business" badwarn[3] = "If you believe any of this, come and see me. 
" + "I've got a bridge I'd like to sell you" badwarn[4] = "Such speculation just proves the truth of the " + "old saying, 'Garbage in, Garbage out'" badwarn[5] = "This baseless gossip ought to contain a " + "self-referential disclaimer" badwarn[6] = "Utter fabrications like this ought to be an " + "embarrassment to everyone involved. Unfortunately, some people " + "persist in circulating them" badwarn[7] = "The circulation of such groundless rumors " + "represents as good a reason as any for joining Ed Meese " + "in his campaign to repeal the First Amendment" * Define formatting functions define('justify(s)f,g,h,t') :(justify_end) justify s len(79) . f = :f(justify_2) g = reverse(f) g break(' -') . h = t = t reverse(g) char(13) char(10) s = reverse(h) s :(justify) justify_2 justify = t s :(return) justify_end * * output(.out,10,,'d:insider.sav') * x = save(10) :f(bad_file) * ident(x) :s(end) * Save it here * Seed the random generator date() len(8) . today len(4) len(2) . s1 len(1) len(2) . s2 + len(1) len(2) . s3 date len(2) . s4 len(1) len(2) . s5 seed = chop((s1 s2 s3) / 2) + (s4 s5) * Define random function *----------------------------------------------- RANDOM define('random(n)') ran_var = seed :(random_end) random ran_var = remdr(ran_var * 4676., 414971.) random = ran_var / 414971. 
random = ne(n,0) convert(random * n,'integer') + 1 :(return) random_end *---------------------------------------------------------------- getvar boyblue = blueboy[random(10)] speaknew = newspeak[random(10)] sayhear = hearsay[random(10)] stuffnew = newstuff[random(10)] whatsay = saywhat[random(10)] siteblue = bluesite[random(10)] cityblue = bluecity[random(10)] stuffmore = morestuff[random(5)] bluesay = sayblue[random(7)] headbyte = bytehead[random(8)] saygood = goodsay[random(10)] moregood = goodmore[random(10)] morebut = butmore[random(9)] saybad = badsay[random(10)] goodnot = notgood[random(10)] badadd = addbad[random(10)] namecat = catname[random(2)] adjgood = goodadj[random(10)] warnbad = badwarn[random(7)] graf1 = ' An IBM ' boyblue ' has ' speaknew ' ' sayhear + " that the company's next personal-computer " stuffnew + ' would ' whatsay '. The ' boyblue ', who asked that his name ' + 'not be used, did say that the ' stuffnew ' was under development at ' + siteblue cityblue ', but that any ' stuffmore ' "' bluesay '."' graf2 = ' Industry ' headbyte "' reactions were generally " + 'favorable, with many calling the ' stuffnew ' "' saygood '." ' + 'Technically, the new product "' moregood '," one said, "but it will ' + morebut '."' graf3 = ' However, there were some dissenters. ' + saybad ' said it represented "' goodnot '," adding that "' + badadd '."' graf4 = ' ' namecat ' of the ' adjgood ' newsletter, ' + "A SNOBOL's Chance, cautioned that " '"' warnbad '."' top = dupl(' ',15) "SPECIAL CATSPAW INSIDERS' REPORT FOR " today output(.output,6,5000) output = top output = justify(graf1) output = justify(graf2) output = justify(graf3) output = justify(graf4) end From: Willard McCarty Subject: Date: 22 July 1987, 16:48:32 EDT X-Humanist: Vol. 1 Num. 
132 (132) Conference on TEACHING COMPUTERS AND THE HUMANITIES COURSES Sponsored by The Association for Computers and the Humanities JUNE 9-11, 1988 OBERLIN COLLEGE OBERLIN, OHIO C A L L F O R P A P E R S The Association for Computers and the Humanities Conference on Teaching Computers and the Humanities Courses is intended for faculty who are offering or developing courses meant to teach humanities students--in literature, language, history, philosophy, art, music--about computer use within their disciplines. The conference will NOT address the teaching of humanities subjects via the computer (computer assisted instruction, CAI). Its focus is courses designed to teach humanists how computers are used, and may be used in the future, as a tool within their disciplines. The conference is also centrally concerned with teaching computing skills to humanities students and faculty. Among the questions to be addressed are: What should be included in such courses? How should they be taught? What level and mix of students should take the course? Should a higher-level programming language be taught? If so, which language is most suitable? Should computing skills be taught before or after the student is familiar with applications of computers within his or her field? Which applications software should be taught? In how much detail? Papers and proposals for panels on these questions and directly related questions are invited for presentation at the conference. While summaries of existing courses will not be excluded from the conference, we are looking in particular for substantive discussion of the issues surrounding the teaching of courses on computers and the humanities. Please submit five copies of abstracts and panel proposals before 30 November 1987. Abstracts for both papers and panels should be approximately 1000 words long. Panel proposals should include a tentative list of participants. The Program Committee will notify authors regarding acceptance by 31 January 1988.
Full papers will be due by 15 May 1988. Selected papers from the conference will be published in a proceedings volume. Please send all abstracts and inquiries to: Professor Robert S. Tannenbaum, Chairman, Program Committee ACH Conference on Teaching Computers and the Humanities Courses Department of Computer Science Hunter College CUNY 695 Park Avenue New York, N.Y. 10021 Inquiries, abstracts, and proposals may also be sent via electronic mail to RSTHC@CUNYVM (Bitnet). From: Willard McCarty / IDE@VASSAR Subject: An invitation from Nancy Ide / PROGRAMMING FOR HUMANISTS/ARTS STUDENTS Date: 22 July 1987, 16:59:27 EDT / 21 JUL 87 12:14-EST X-Humanist: Vol. 1 Num. 133 (133) I have sent around a copy of the call-for-papers for a conference which the Association for Computers and the Humanities will sponsor next summer on Teaching Computers and the Humanities Courses. This conference grows out of a workshop on the topic held at Vassar College last summer. I am interested in organizing a panel or preferably a whole session on the issue of the need and/or value of teaching programming to humanities and arts students. As some of you may know I have long argued that humanities students who intend to use computers in their work should learn to program, for a variety of reasons, many of which have been reiterated in discussions among this group recently. Therefore I am asking those of you who hold a view on this topic to let me know if you would be interested in participating in the session at the Oberlin conference. I want to hear from those who favor programming as well as from those who do not. Contact me as IDE@VASSAR (Bitnet) if you are interested in the session or in the conference itself. Nancy Ide ide@vassar From: JMBHC@CUNYVM Subject: Vapographies Date: Wed, 22 Jul 87 17:19 EDT X-Humanist: Vol. 1 Num. 134 (134) Didn't receive vaporgraphies here. By the way, is it too late to add mine?
Joanne Badagliacco From: Willard McCarty Subject: Date: 22 July 1987, 19:30:24 EDT X-Humanist: Vol. 1 Num. 135 (135) Copies of HUMANIST BIOGRAFY or of the identical but divided HUMBIOS 1, 2, and 3 should have reached all of you by now. For technically interesting reasons we now seem to understand, we had great trouble getting the biographies to HUMANISTs in the UK but have finally succeeded. Anyone who has not received a copy should write to me directly (not through HUMANIST, please) as soon as is convenient. Either an "anthropomorphic peripheral interface error" or some electronic slip-up could be responsible. I will likely be sending out the first supplement to the biographies in late August. I think it's important for as many of us to be represented to each other in this way as possible; no one, no matter how lowly or exalted in knowledge or in status should feel excluded. Building a professional identity will be much more intelligently and effectively done if your contribution is included. Thanks for your help and patience. From: GUEST4@YUSOL Subject: Broken record? Date: Thu, 23 Jul 87 00:10 EDT X-Humanist: Vol. 1 Num. 136 (136) Just read the call for papers for the Oberlin meeting on HUMANIST. Can hardly believe that the same phraseology, the same issues, are being put forward here as for the Vassar conference last summer. Has ASTHC fallen into an endless loop? Is humanistic computing condemned to be the only kind that is currently advancing in a geological timewarp, while others rush merrily past in all sorts of interesting directions? Or am I missing some important byproduct of such slowly grinding millstones? If so, could someone please enlighten me? Am particularly curious about why there is so little interest in teaching HUMANITIES courses about what COMPUTERS are and how they impact on human culture and human nature. 
I would have thought that was at least as interesting to humanists as the pros and cons of Basic versus Pascal for use in the "discipline". Why do I keep thinking of Humanities not as a discipline, but as the proper study of mankind...??? From: GUEST4@YUSOL Subject: A query re Oberlin Conference of ASTHC, from a Fox among the Hedgehogs?? Date: Thu, 23 Jul 87 00:13 EDT X-Humanist: Vol. 1 Num. 137 (137) Just read the call for papers for the Oberlin meeting on HUMANIST. Can hardly believe that the same phraseology, the same issues, are being put forward here as for the Vassar conference last summer. Has ASTHC fallen into an endless loop? Is humanistic computing condemned to be the only kind that is currently advancing in a geological timewarp, while others rush merrily past in all sorts of interesting directions? Or am I missing some important byproduct of such slowly grinding millstones? If so, could someone please enlighten me? Am particularly curious about why there is so little interest in teaching HUMANITIES courses about what COMPUTERS are and how they impact on human culture and human nature.
>Might this be more a framework for process, for discussion, than for some >specific conclusions? Do the participants from last year's conference at >Vassar, which I also attended, feel that we no longer need mass discussions >on the topics Nancy Ide is suggesting? > Let's try to observe Willard's suggestion that identification of >the writer appear in the body of each HUMANIST message. Who is "Guest4"? > > Regards, > Joel D. Goldfield > Plymouth State College (NH, USA) From: GUEST4@YUSOL Subject: ID Date: Fri, 24 Jul 87 00:52 EDT X-Humanist: Vol. 1 Num. 139 (139) I appreciate Goldfield's reply, and apologize for being too bashful in my maiden entry to insert more precise identification. Am eager to stand corrected, but my impression is that the topics outlined for Oberlin are almost WORD for WORD the same as those targeted at Vassar. The question is not whether discussion is no longer needed, but whether there is anything else to be discussed -- or so it seems to Sterling Beckwith York University (ON, CDN) From: SRRJ1%UK.AC.YORK.VAXB@AC.UK Subject: Date: 24-JUL-1987 16:10:24 X-Humanist: Vol. 1 Num. 140 (140) COMPUTING FOR THE HUMANITIES? Perhaps HUMANISTS could help us as we develop a campaign for the provision of special resources for computing in the humanities here in York [U.K.] - currently there are none. We are now starting to devise ways of introducing data-processing techniques to History students and staff and in making our case to the "authorities" it would be useful to draw on your collective experience. We have an excellent computing service, but the staff are already over-worked and are also a little nervous about us because they are all from science backgrounds. We get on well but do we always understand each other's needs?
What we would like to know is - outside the special schemes funded by the UGC (in Britain) and various computer companies - how have others found it possible to make a case for specialist computing advice in the humanities, or should we in any case simply be looking for more specialists in specific applications (database design, programming, text-processing, etc.) regardless of discipline? In putting together our case we're trying to collect as much information as possible about what goes on elsewhere. If you've time we'd find it very helpful to know 1] What provision is made for computing in the humanities in your institution? 2] How do you justify the provision of such specialist services to the humanities? 3] If you were starting from scratch again, what are the mistakes you'd most like to avoid repeating? Thanks for your time....we look forward to hearing from you! Sarah Rees Jones et al. History Department, Vanbrugh College, University of York, U.K. From: IDE@VASSAR Subject: Oberlin conference Date: Fri, 24-JUL-1987 11:40 EST X-Humanist: Vol. 1 Num. 141 (141) The point is well taken that some of the issues proposed for discussion at the Oberlin conference are the same as those that were discussed at Vassar last year. For this reason I made slight revisions in the call which I hope broaden the scope somewhat. However, my impression at the Vassar conference was that more questions were raised than answered. It may be true that the programming issue has been beaten to a pulp (although a recent re-hashing of that issue on HUMANIST led me to believe that a lot of people had not yet heard it or felt it resolved satisfactorily). However, we have not answered such questions as: what exactly is it that we think that humanities students need to learn about computers? Skills only? or more about methodology and approach, as well as conceptual material concerning computers and computing?
(The programming issue speaks to this larger concern, since most people who want to teach programming want to do so in order to provide a solid understanding of computers and problem solving techniques rather than specific programming skills. So a better question vis a vis programming is: what do we intend to teach when we teach programming? and how is this best accomplished?) Can we expect students to understand how computers and computing fit into research in the humanities, given a course on COMPUTING intended for humanities students (as opposed to a course on COMPUTERS AND THE HUMANITIES, which would focus on both more squarely)? Or is it necessary for us to teach the methodology that computers enable us to implement? More broadly, there were two kinds of courses described at the Vassar workshop: skills-oriented courses and courses which were more concerned with methodology and developing problem solving skills in general by using computer applications to show how these things are implemented. Which is better? Is one better than the other for certain contexts? As for implementation, I saw almost no answers to these questions: do we need separate C&H courses or can we integrate materials into existing humanities courses? If not, why not? Why don't existing computer science courses serve the needs of humanities students for some aspects of C&H? Do we think all humanities students need some exposure to computer use for humanities research or should we let our students be self-selected? Can whatever we decide must be taught be done in a single C&H course? In one or more humanities courses? In some mix of special C&H courses, humanities courses with a computing component, and/or computer science courses? Other concerns: what are the realities of establishing a C&H course or even integrating computing materials into existing humanities courses with regard to administrative red tape? Do we need specialized faculty?
What level of hardware support is required to effectively teach such a course? Should labs be definitely included and if so, what shape should they take (specific tasks to be completed within the lab or just question and answer, etc.)? Does a minor, joint major, and/or double major in computers and the humanities make sense and if so, what is the focus and intent of such a program? I shouldn't say I saw no answers to these questions at Vassar, since some of these topics were discussed quite thoroughly; instead, I saw no resolution of the questions. I do think the point about the programming issue is well taken, and so I would like to redefine the topic I propose for a panel or session at Oberlin. I would like to address the following question: Assuming a course or courses in C&H at the undergraduate level, what is it that we feel students who have taken such courses should know when they complete the course? If anyone has any ideas about other issues that should be raised at the Oberlin conference, PLEASE let me know. We are open to suggestions (hopefully the call was not worded to seem to disallow consideration of questions other than those we listed) and would in fact welcome them. Or, do we know all we need to know about C&H courses, therefore making such a conference irrelevant? Nancy Ide ide@vassar From: Willard McCarty Subject: A Summary of Things Said So Far Date: 26 July 1987, 20:25:30 EDT X-Humanist: Vol. 1 Num. 142 (142) I have been asked to prepare for the Newsletter of the ACH a concise summary of the discussions on HUMANIST since it began in May. Looking over my file of contributions, I find that more has happened than I would have guessed, but because some of the interesting stuff may have been sent only to individuals, I need help in making this summary. Would anyone with interesting private contributions send them to me right away?
Because HUMANIST is necessarily like the "longish conversation" described by the poet David Jones ("where one thing leads to another; but should a third party hear fragments of it, he might not know how the talk had passed from the cultivation of cabbages to Melchizedek, king of Salem"), summaries of this kind seem important for us HUMANISTs as well as for others. One day we may have an expense-free conferencing system available world-wide. Until then, the occasional summary seems to me a particularly necessary thing. I would appreciate receiving anyone's thoughts or suggestions on this matter. From: GUEST4@YUSOL Subject: Oberlin Conference agenda and the Call for Answers Date: Mon, 27 Jul 87 00:22 EDT X-Humanist: Vol. 1 Num. 143 (143) Obviously my perception of humanism is seriously defective. I actually ENJOY conferences where there are more questions raised than definitive answers provided. (Perhaps this is just a throwback to the pre-Rabenite days of the not-yet-computerized humanities -- if so, please forgive it.) Particularly when the questions are about teaching and learning, which seems to (or used to, when Mary McCarthy was in school) involve large amounts of actual TRIAL and ERROR by all concerned. The announcement I read was about as open to other dimensions and other questions than those promulgated at Vassar as the National Security Council was to supplying the Sandinistas with humanitarian aid. Not knowing anything about how ACH runs its affairs, I can hardly comment further, but will eagerly await any further crosstalk on HUMANIST. Sterling Beckwith, York University From: Michael Sperberg-McQueen Subject: Oberlin Conference rhubarb Date: Mon, 27 Jul 87 11:14:46 CDT X-Humanist: Vol. 1 Num. 144 (144) The recent discussion prompted by Sterling Beckwith of York Univ. raises some interesting questions beyond the obvious ones (who is ASTHC?
who on earth has ever suggested teaching Basic to humanists, and can we get them some professional help before they do further harm to themselves or others? what is the difference between a study and a discipline, and is a study any good to anyone if undisciplined?). I confess to some confusion on two grounds: why, to people who spend their professional lives re-examining texts and issues that have already occupied decades or centuries of attention, often by the brightest minds of their times, should it seem regrettable, or a sign of being caught in a "geological timewarp", or even odd, to be discussing, this summer, questions that occupied the attention of some people last summer? Apart from pedagogical issues (and let's thank God for humanists who want to address and discuss pedagogical issues directly and explicitly!) the Vassar and Oberlin conferences are, after all, raising fairly substantial questions of what it is we want our students to learn, the nature of the world into which we are sending them, and the relationship both of technology and (more fundamentally) the algorithmic approach to problem-solving. It does not surprise me, and it astonishes me that it should surprise anyone, to see these issues still the topic of conferences and discussions. They will necessarily continue to be so until there is either some more visible consensus, or until everyone's mind is made up and we agree to disagree, or until the nature of the problem is radically changed by developments in our secondary schools. I don't expect to see any of these events in the next few years. True, at Vassar there did seem to be some common assumptions and approaches beneath the wide surface divergences, and I tried at the end of the conference to make those common assumptions explicit. 
But apart from the fact that not everyone was convinced, those common assumptions remain only the outline of a potential consensus, not the content of an actual consensus, until we are all aware of our shared assumptions. In the meantime, there remains a lot of room for discussion of the issues named in the Oberlin prospectus, including choice of programming language, or other command medium. Proponents of utility naturally suggest Snobol or occasionally Prolog as the language of choice; others, who argue that the point of teaching programming is to show the student more about how the machine itself works, suggest that a more cleanly procedural language like Pascal should be used. (And we know from this list that some have even tried to bridge the two language classes by teaching Icon, with, however, disappointing results.) It is significant, I think, that the former are often interested *primarily* in enabling students to write programs for their own use, often teaching advanced undergraduate or graduate students, and take routine scholarly inquiry as their context, while the latter tend to emphasize the logical structure of computer programming, teach undergraduates, and focus *primarily* on understanding the machine as an end in itself or to broaden horizons, rather than as a tool to help get quick and dirty solutions. I cannot think that this debate is unrelated to "teaching HUMANITIES courses about what COMPUTERS are and how they impact on human culture and human nature." As for courses that explicitly address the social and cultural impact of computing -- I agree they are important, but I don't see any great shortage of them. Nor do I see -- and this is my second point of confusion -- how we expect to conduct useful discussions of computers, what they are, and how they may affect human nature and culture, without arranging to give at least some of our students some concrete knowledge of what happens in the CPU. 
Many social issues (privacy, organizational efficiency, and so on) may well be addressable without any programming knowledge. I daresay the sociologists are addressing them; I don't see that my training in literature calls me to try to address them, too. And the interesting questions that do seem to belong in the humanists' bailiwick (computers-and-human-creativity, computers- and-human-dignity, can-computers-ever-think, ...) seem to me to require some knowledge both of computing and of the humanities. To take a simple example: Weizenbaum's 'Eliza' program and its 'Doctor' script can lead to far-ranging discussion of profound social and personal issues. But from the relevant chapter in Weizenbaum's book we can see how catastrophically the discussion can go awry when no one can understand a word the others are saying. Weizenbaum does not seem to have understood what the psychoanalysts were talking about, and they clearly were incapable of following his argument -- largely, I think, owing to their technical naivete. They could not see how the program worked, and he could not show them. Any course on the impact of computers on society risks exactly the same difficulties if the students don't have any pragmatic computing skills. --- Since I began this note, there have been a number of further exchanges on this issue, and I have acquired a third point of confusion: why, if one prefers conferences at which more questions are raised than answered, should one complain in the first place that the same issues are going to occupy the time of another conference, worry about "broken records", and imply that no one else in computing spends any time worrying about the same issues from one year to the next? Perhaps there were multiple versions of the announcement, and Canadians got a different one, but I certainly do not see what Sterling Beckwith is talking about when he says the Oberlin announcement was not open to the issues he seems to want to raise. 
"What should be included in such courses?" is explicitly listed as a topic, as are "directly related" questions and any "substantive discussion of the issues surrounding the teaching of courses on computers and the humanities." It would take casuistry worthy of Ignatius Loyola to say that this call for papers excludes the issues raised by Sterling Beckwith. I will note in closing that Joe Raben deserves better than to be accused implicitly of not being a "real humanist." It would disappoint me to hear this discussion continue on that kind of note. Michael Sperberg-McQueen University of Illinois / Chicago From: Willard McCarty Subject: Date: 27 July 1987, 20:42:40 EDT X-Humanist: Vol. 1 Num. 145 (145) The following observation on the latest bit of discussion was sent to me privately. I've removed the name of the sender so as not to offend a good friend and pass it on to you for your amusement. --------------------------------------------------------------------------- Humanist is fascinating! I had no idea people could get so worked up over a call for papers. As far as I can tell, conference topics seem to follow a sort of cyclic movement in any given field: they're all very similar for a few years, then what's "in" changes, and a new cycle starts. In medieval studies lately it's been women, monks, or mysticism or a combination of the above. From: Willard McCarty Subject: Call for Papers: RIAO 88 Date: 27 July 1987, 21:02:25 EDT X-Humanist: Vol. 1 Num. 
146 (146) CALL FOR PAPERS RIAO 88 USER-ORIENTED CONTENT-BASED TEXT AND IMAGE HANDLING Massachusetts Institute of Technology Cambridge, MA March 21-24, 1988 Conference organized by: Centre National de la Recherche Scientifique (CNRS) Centre National de Recherche des Telecommunications (CNET) Institut National de Recherche en Informatique et Automatique (INRIA) Ecole Nationale Superieure des Mines de Paris Centre de Hautes Etudes Internationales d'Informatique Documentaires (CID) US participating organizations: American Federation of Information Processing Societies (AFIPS) American Society for Information Science (ASIS) Information Industry Association (IIA) This conference is prepared under the direction of: Professor Andre Lichnerowicz de l'Academie des Sciences de Paris and Professor Jacques Arsac correspondant de l'Academie des Sciences de Paris A GENERAL INTRODUCTION: RIAO 88 is being held to demonstrate the state of the art in information retrieval, a domain that is in rapid evolution because of developments in the technology for machine control of full-text and image databases. This evolution is stimulated by the demands of end-users generated by the recent availability of CD-ROM full text publishing and general public access to information data bases. A group of French organizations has taken the initiative of preparing this conference. Its wish in promoting this forum is not only to stimulate and challenge researchers from all nations but also to increase an awareness of European technology. This "call for papers" is being distributed world-wide. We want to reach individuals in the research communities throughout the university and industrial sectors. The conference will be held in Cambridge, MA. We hope that it will encourage the exchange of European and American viewpoints, and establish new links between research teams in the United States and Europe.
CALL FOR PAPERS General theme Full-text and mixed media database systems are characterized by the fact that the structure of the information is not known a priori. This prevents advance knowledge of the types of questions that will be asked, unlike the situation found in hierarchical and relational database management systems. You are invited to submit a paper showing how the situation can be dealt with. Special attention will be given to: - techniques designed to reduce imprecision in full-text database searching; - data entry and control; - "friendly" end-user interfaces; - new media. A large number of specific subjects can be treated within this general framework. Some suggestions are made in the following section. Specific themes A) Linguistic processing and interrogation of full text databases: - automatic indexing, - machine generated summaries, - natural language queries, - computer-aided translation, - multilingual interfaces. B) Automatic thesaurus construction. C) Expert system techniques for retrieving information in full-text and multimedia databases: - expert systems reasoning on open-ended domains, - expert systems simulating librarians accessing pertinent information. D) Friendly user interfaces to classical information retrieval systems. E) Specialized machines and system architectures designed for treating full-text data, including managing and accessing widely distributed databases. F) Automatic database construction: scanning techniques, optical character readers, output document preparation, etc. G) New applications and perspectives suggested by emerging new technologies: - optical storage techniques (videodisk, CD-ROM, CD-I, Digital Optical Disks); - integrated text, sound and image retrieval systems; - electronic mail and document delivery based on content; - voice processing technologies for database construction; - production of intelligent tutoring systems; - hypertext, hypermedia.
Conditions for participation The program committee is looking for communications geared toward practical applications. Papers which have not been validated by a working model, a prototype or a simulation, or for which a realization of such a model seems currently unlikely, may be refused. Authors must submit a paper of about 10 pages, double-spaced, and a 100 word abstract. Four copies must be sent before October 30 to one of these two addresses: - RIAO 88, Conference Service Office, MIT, Bldg 7, Room 111 CAMBRIDGE, MA 02139 - RIAO 88, CID, 36 bis rue Ballu, 75009 PARIS FRANCE Each presentation will last 20 minutes followed by 10 minutes of discussion and questions. Arrangements have been made with the international journal "Information Processing and Management" for publishing expanded versions of some papers. High quality audiovisual techniques should be used when presenting the paper. Separate demonstration sessions can be scheduled if requested. Particular attention will be paid to: - the use of readily available equipment for demonstrations (IBM PC, APPLE, network connections...); - pre-recorded video or floppy disk displays. Hardcopy printouts of results should be avoided if possible. English is the working language of the conference. For further information call: in North America : Karen Daifuku, ------- PROGRAM COMMITTEE French co-chairman US co-chairman Prof. Christian FLUHR Dr. Donald WALKER Universite Paris XI/INSTN BELL Communications Research J.C. Bassano (F) Universite d'Orleans A. Bookstein (USA) University of Chicago J. Bing (N) Norwegian Research Center for Comp. and Law E. Black (USA) T.J. Watson IBM Research Center C. Boitet (F) Universite de Grenoble J. Boucher (CAN) Universite de Montreal C. Chen (USA) Simmons College Y. Choueka (Israel) Bar-Ilan University C. Ciampi (I) Istituto per la Doc. Giuridica X. Dalloz (F) Centre National de la Cinematographie T. Doszkocs (USA) National Library of Medicine E.
Fox (USA) Virginia Polytechnic Institute E. Garcia Camarero (SP) Universidad Complutense de Madrid C. Goldstein (USA) National Library of Medicine G. Grefenstette (F) Universite de Tours H. Hjerppe (S) University of Linköping D. Kayser (F) Universite Paris XIII P. Kirstein (UK) University College of London R. Marcus (USA) Massachusetts Institute of Technology P. Mordini (F) Ecole des Mines de Paris C. D. Paice (UK) University of Lancaster A. S. Pollitt (UK) The Polytechnic Queensgate Huddersfield F. Rabitti (I) Istituto di Elabor. della Informazione J. Rohmer (F) Bull, Louveciennes G. Sabah (F) LIMSI (CNRS) Orsay T. Saracevic (USA) Rutgers University W. Turner (F) CDST (CNRS) Paris H. J. Schneider (FRG) Technische Universität Berlin C. Schwartz (FRG) Siemens München assisted by a Technical and an Organisation Committee. APPLICATION FORM TITLE/POSITION:................................................................. ................... I plan to attend the conference, please send me the program: YES NO I plan to present a paper: YES NO Conference theme (circle one): A B C D E F G Title of the communication: Are you willing to present a demonstration of your prototype? YES NO Equipment needed: Please mail this form before September 15, 1987 to: RIAO 88 Conference Service Office MIT Bldg 7, Room 111 CAMBRIDGE, MA 02139 USA From: Willard McCarty Subject: Call for Participation: Hypertext 87 & ACM SIGIR Date: 27 July 1987, 21:07:18 EDT X-Humanist: Vol. 1 Num. 147 (147) HYPERTEXT 87 WORKSHOP ON SYSTEMS, APPLICATIONS, AND ISSUES November 13-15, 1987 Chapel Hill, North Carolina Sponsored by ACM, IEEE, U. of North Carolina, ONR, MCC, NSF Hypertext is an approach to information management in which data is stored in a network of nodes connected by links. Nodes can contain text, source code, graphics, audio, video, or other forms of data, and are meant to be viewed and manipulated interactively.
Hypertext systems support collaboration and cooperation among users in a wide variety of activities, ranging from medical instruction to software development. Hypertext has come of age. An increasing number of hypertext systems and applications have been built and used within the last few years. This Workshop will be the first opportunity for implementors, application builders, and users of hypertext systems to come together to share information and ideas. Suggested Topics The workshop will focus equally on implementations of hypertext systems, applications of hypertext, and issues surrounding the use of hypertext. Possible topics for papers include, but are not limited to, the following: Implementations and technical issues - abstract machines and base engines - complete systems - user interfaces - multi-media support - distributed systems - query and search - storage management Applications and experiences - Computer-aided engineering (CASE, CAEE, ...) - Authoring and technical documentation - Medical and legal information management - Electronic encyclopedias - Interactive tools for education and museums - Information analysis and knowledge acquisition - Scholar's workbenches for the humanities and social sciences Issues surrounding use of hypertext - Cognitive aspects of using and designing hypertext systems - Strategies for effective use of hypertext - Supporting collaborative work - Managing complexity in large information networks - Legal issues (copyrights, royalties, ...) - Social implications Information for participants: Papers are invited for presentation at the Workshop and subsequent publication in proceedings. Papers should be limited to 20 pages, and 5 copies should be submitted to the following address: Hypertext 87 Department of Computer Science University of North Carolina Chapel Hill, NC 27514 Attendance at the workshop will be limited.
Prospective participants not submitting a paper should submit a brief (1-2 page) position paper describing their activities or interests in hypertext. Important dates: 8/1/87 Submission of papers (Both position papers and presentations) 9/15/87 Notice of acceptance for full papers 9/15/87 Notice of admission to the Workshop 10/5/87 Camera-ready copy of papers for preprints Planning Committee: Frank Halasz, MCC, (Workshop Co-chair) Mayer Schwartz, Tektronix, (Program Co-chair) John B. Smith, UNC, (Workshop Co-chair) Nicole Yankelovich, Brown Univ. (Publications Chair) Local Arrangements Committee: David V. Beard, UNC, (Arrangements Chair) James M. Coggins, UNC, (Workshop Manager) Leigh Pittman, (Workshop Coordinator), Program Committee: Mayer Schwartz, Tektronix, Program Co-chair Stephen F. Weiss, UNC, Program Co-chair Greg Crane, Harvard University Norman Delisle, Tektronix Mark Frisse, Washington Univ. Med. School Frank Halasz, MCC David Lowe, NYU Norm Meyrowitz, Brown Univ. Theodore Nelson, Project Xanadu Walter Scacchi, USC John B. Smith, UNC Lucy Suchman, Xerox PARC Randy Trigg, Xerox PARC Andries van Dam, Brown Univ. Stephen A. Weyer, Apple Computer Nicole Yankelovich, Brown Univ. For more information contact: John B. Smith, 919-962-5021, jbs@cs.unc.edu Frank Halasz, 512-338-3648, halasz@mcc, seismo!ut-sally!im4u!milano!halasz From: GUEST4@YUSOL Subject: Off to Oberlin, or One Man's Discipline is Another's Pornography Date: Tue, 28 Jul 87 01:03 EDT X-Humanist: Vol. 1 Num. 148 (148) Here is the best commentary I could find on the now-famous Oberlin Proclamation. Strangely enough, my trusty humanistic computer counted few if any instances of the words "answers", "consensus", or "humanities student" in this parallel (but oh so interesting) proposal by our transatlantic cousins.
And perhaps, where diachronic change and synchronic diversity of viewpoint or style in such matters are themselves seen as fit subjects for humanistic discussion, there might be less need for defensiveness about who is or is not a "true" believer. Other, better qualified students of comparative literature or computer linguistics will no doubt find further textual comparison of the two "calls" edifying, even if they choose to ignore the lingering echoes of Last Year at Poughkeepsie... --------------- PRELIMINARY ANNOUNCEMENT AND CALL FOR PAPERS CATH 88: Computers and Teaching in the Humanities: Re-defining the Humanities? Following on the highly successful conference on Computers and Teaching in the Humanities (CATH 87) held at Southampton in April 1987, a second conference is planned to take place on 13, 14 and 15 December 1988. The emphasis will be on academic issues related to the introduction of computing into academic courses in the humanities in higher education. The main part of the conference will be devoted to workshops and seminar sessions. There will also be opportunities for informal demonstrations and poster sessions. The conference will focus on the interface between the computer and Humanities disciplines. To what extent are the traditional assumptions and methods of each discipline being either supported or challenged by the use of new technologies in higher education? The computer may facilitate existing methods, making our practice more effective. Alternatively, the computer may be changing our conceptions about a discipline, pointing to new theoretical models and new ways of teaching. In some Humanities departments there is now a tension between established and computer-based methods. Does such a tension mark the coming birth of new, technology-based Humanities subjects? If so, what are the implications for the traditional commitments of teachers in the Humanities? 
Will the relationship between research and teaching change, and if so, in what ways? And how will students in the future acquire the values and methods appropriate to their subjects? These, and related issues, will be examined in workshop sessions on specific fields, such as English or Music, assessing the extent to which these disciplines are changing, or are likely to change under the impact of new technologies. Other sessions will examine themes common to several disciplines, such as the shift in learning methods, the potential of expert system methods for mapping the theoretical constructs of Humanities subjects, the use of simulation as a teaching tool, and so on. In addition, the conference will analyse the political and institutional context for new developments in the Humanities, looking at policies for supporting and funding computer-related teaching. The workshop and seminar sessions, which will form the main part of the conference, will focus on the discussion of educational issues, rather than detailed descriptions of courses, or particular computer-based tools. There will be opportunities for discussing or demonstrating these in separate, parallel sessions. Proposals are invited for contributions to the workshop sessions. Abstracts of about 500 words should be sent, NOT LATER THAN 15 JANUARY 1988, to Dr May Katzen Office for Humanities Communication University of Leicester LEICESTER LE1 7RH These proposals will be considered by a Programme Committee, who will notify the outcome to those involved by 15 April 1988, and plan a detailed programme accordingly. A selection will be made from the abstracts submitted to provide the basis of a forthcoming book on the theme of the conference, and invitations to contribute chapters will be issued by the Editorial Board. Proposals for demonstrations and poster sessions will also be welcomed and should be sent to the address above by 15 May 1988.
Anyone wishing to be put on the mailing list for future information should also write to that address. From: A_BODDINGTON%UK.AC.OPEN.ACS.VAX@AC.UK Subject: Date: 28-JUL-1987 11:05:55 X-Humanist: Vol. 1 Num. 149 (149) ATTITUDES TO HUMANITIES The York circular on COMPUTING FOR THE HUMANITIES raises a number of important issues. It seems astonishing that we still seem to be in a position where a department in a major University needs to make special efforts to justify provision of specialist services for the humanities. Do I also understand from Sarah Rees Jones's message that York has only scientists in its Computing Service? Is this typical? (Here we have 2 advisers with computing science backgrounds, 2 archaeologists and 1 geographer). Here we do not provide special services for the humanities. We consider all our customers equal and attempt to provide the best possible facilities for all disciplines. It is not of concern to us whether someone is a historian or nuclear physicist, only that they need advice. Clearly the fact that our advisers are drawn from a range of disciplines benefits our users, though we are far from able to satisfy every customer due to a combination of limited knowledge and limited resources. The anti-humanities computing attitude which is found in some computing services is an archaic hangover from the days when computers only did 'hard sums' and the humanities only pontificated 'woolly concepts'. It remains a surprise that such attitudes persist, as the real expansion area today is in the humanities and not the science areas. If computing services want more cash (beyond the conventional 'procurement' cycle) they need to show a broader range of demand. Long gone are the days when 'overwork' created new posts; now it is necessary to expand into new areas and add even further to the workload to gain further finance. Hence they should be looking constantly to new markets, and this obviously (to us) includes the humanities.
Regardless of the new horizons, cash is very hard to get. If there is a need at a university to designate a specialist adviser then the post will probably have to come from the existing staff complement. I don't think that we need feel embarrassed about redirecting resources in this way but clearly the case must be constructed well. I think it would be of use to all of us to find out how many institutions provide specialist support. To repeat (almost) question (1) of Sarah Rees Jones's message, does your university provide a 'humanities adviser' or a lectureship in 'humanities computing'? If we find that such provision is very rare then York can argue that they will be 'blazing a new and exciting trail'; if it is commonplace then York can argue that their service is not providing the standard of facilities available elsewhere! Andy Boddington Academic Computing Service Open University Milton Keynes U.K. From: SUSAN%UK.AC.OXFORD.VAX2@AC.UK Subject: Date: 4-AUG-1987 10:53:22 X-Humanist: Vol. 1 Num. 150 (150) Oxford University Computing Service is looking at typesetters (again!), particularly PostScript typesetters such as the Linotronic. The high resolution machines are said to be slow. Does anybody have any detailed information about timings on these machines? Any other experiences would also be welcome. Please - typesetters only, not Laserwriter or other PostScript laserprinters. Susan Hockey, Oxford University Computing Service 13 Banbury Road Oxford OX2 6NN England SUSAN % VAX2.OXFORD.AC.UK @ AC.UK From: "Timothy W. Seid" Subject: BRIDGING THE GAP FROM BOTH SIDES Date: Wed, 05 Aug 87 08:30:51 EDT X-Humanist: Vol. 1 Num. 151 (151) I wrote recently describing what I considered a gap between the humanities and Computer "Science." Nancy Ide wrote me an encouraging note in which she subtly put "Science" in all caps when referring to my message. When I wrote it, I cringed before I put it down, but went ahead because I wanted to draw a strong distinction.
Would it be fair to put this in terms of a gap between the humanities and technology? The Massachusetts Institute of Technology seems to agree. Recently, the Providence Journal told of MIT's decision to begin a broader liberal arts program for their students in order to prepare them to work more adequately in contemporary society. This is illustrative of the way we can meet each other halfway. From: GUEST4@YUSOL Subject: BRIDGING THE GAP FROM TIMOTHY SEID'S: a mini-quibble Date: Wed, 5 Aug 87 10:36 EDT X-Humanist: Vol. 1 Num. 152 (152) I object to "meet each other halfway". It's too easy to be trapped by language into thinking of Science and Humanities each as some distinct entity other than, and comparable to, its opposite. All MIT is saying is that there is other stuff out there (and on its payroll!) which techies-in-training ought to spend more time with to come out looking smoother and fitting into corporate hierarchies better. Nobody (including Nancy Ide) has yet addressed my not-so-subtle insistence that there IS no single Humanities "type", "student", "method", "course", or "discipline", and so it becomes sillier and sillier to argue about how best to feed its initiates' presumably distinct, unique, and identifiable needs for computer knowhow. Humanities is EVERYBODY, including scientists, computerists and techies, whenever they wish to think about what they are doing as "the proper study of mankind". Nobody owns it, least of all, I'm afraid, the ever-more-self-assured ACH types. Or so I firmly believe. -Sterling Beckwith Humanities and Music York University From: "Timothy W. Seid" Subject: GAP Date: Wed, 05 Aug 87 12:36:18 EDT X-Humanist: Vol. 1 Num. 153 (153) I appreciate Sterling Beckwith's criticism and would like to hear from others too. First of all, I changed my description from SCIENCE to TECHNOLOGY. My guess is that there was a similar problem with the typewriter.
How many of us know of older (I'm only 29) scholars who never learned how to type and even resisted using one? My professor does not know how to type on his outdated electric and has an IBM RT PC on his desk which is connected to a CD player with the TLG texts and indices on it, yet writes out his manuscripts by hand and has a secretary type them. I think what Sterling describes is the ideal we are working for but not the reality of the case. I want to refine my analysis further by putting it in terms of SPECIALISTS. Take my earlier example: There is a special discipline of social or cultural anthropology. Yet it has become necessary in my field (history of early Christianity) to be able to describe history in these terms. Some within my field have specialized in this area but all of us, I think, need to be familiar with it. This is the kind of GAP that I'm talking about. It just so happens that with computers, it has been the sciences ("hard sciences") which have mainly had the specialists. Persuading others to become computer capable has its drawbacks. Now I have to share our department's two Macs with three others instead of having them both to myself like I did last year at this time. I can adjust. From: R.J.HARE%UK.AC.EDINBURGH@AC.UK Subject: Science & Technology vs Humanism Date: 06 Aug 87 10:51:46 bst X-Humanist: Vol. 1 Num. 154 (154) I've read the contributions to this debate with some interest, and a lot of interesting things have been said. I must say though that I regard being moulded to 'fit into a corporate hierarchy' as being probably one of the worst punishments meted out in the hot place down below - worse even than shovelling the entropy into sacks (I mean, it's got to go somewhere hasn't it?). If such is a major (or even a minor) goal of the sort of training people receive in our universities, then God help us all! That is of course a personal point of view and may be impractical in a world where falling employment is a 'norm'.
On the more relevant matter of the 'conflict' between the Arts and the Sciences, can I recommend two books by C P Snow on this subject - they are quite well-known, and I'm surprised that no-one has mentioned them before (maybe they are so well-known as to be not worth mentioning?). Anyway, the two books are 'The Two Cultures' and (I think) 'The Two Cultures Revisited' which was published some years after the first. I read them about fifteen years ago and found them extremely interesting and relevant to this debate which has been going on since long before computers were invented. Roger Hare. From: S_RICHMOND@UTOROISE Subject: computers vs. humanity Date: Thu, 6-AUG-1987 09:01 EST X-Humanist: Vol. 1 Num. 155 (155) 1 Problem--two cultures? I am glad that Roger Hare mentioned Snow's two cultures problem as the background for the current discussion here in these electronic pages about technology ("science") vs. the humanities (should this be in quotes too?). However, the problem that has arisen with the advent of computers into the humanities and other settings of traditionally non-computer users is different, but not new. Someone mentioned his "old professor" still struggling with his typewriter. Here is a problem akin to the advent of the microwave oven as opposed to the older technology of radiant heat ovens, or the gas-barbecue as opposed to the older technology of charcoal barbecues. Some people just have difficulty adapting to new tools; but the new tools just "cook"--they produce the same products. Word processors are just better typewriters when they are used to produce paper and even electronic essays. In the end, we are consuming what we consumed prior to the new technology, but we are producing it quicker--and perhaps with less resources ('person-years') required. Is there another and also a new problem? I believe so--and this problem has been the one that lurks in the shadows. 
Computers not only replace certain methods of production, but also can be used to produce new entities (products, goods, services, creatures). 2 The problem of how to adjust to the new world of computer creations: This problem cuts across disciplines and professions. Corporate workers in government and industrial bureaucracies, teachers in educational organizations, artists, homemakers, private entrepreneurs... have this problem of how to cope with the new products, new world, created by computers. This new world is the world of software processing that functions quasi-intelligently. For instance: software accounting models that predict and analyze cost-benefit; computer instructional systems that teach; computer graphic systems that generate animations. The difference here is that when the computer "cooks" we get a different type of product. The product is the process--and the process is semi-autonomous. Once set going, it has requirements which the user must satisfy if the user wants to receive the goods. In every technology, there is a process and product. However, there is an aspect of some computer systems where the products, or results, are in a sense by-products, and where the process is the real product. This is akin to our interaction with people, where the mode of interaction is itself the product, and the supposed goals of interaction are in a sense by-products. My point is that we are quite familiar with this situation in our daily lives when interacting with people and other species. We are quite familiar with processes such as teaching, discussing, playing...when in the company of organisms such as people and pets. However, undertaking these similar forms of interaction with semi-autonomous non-organic entities is somewhat disconcerting. In teaching, the student can switch classes or the worker can quit; however, law and morality prohibit the student and worker from killing the teacher or manager he dislikes.
However, the user can "kill" the instructional system, or the accounting system--he can even, if he is the programmer, change the "soul" of the system. So it seems. Unfortunately, there is a new ethic, with enforcement by law in some cases: killing or tampering with the software when one is not "licensed" to do so is forbidden. It is not merely a matter of copyright protection, but of maintaining software integrity. 3 The new problem: The new problem is: how should we interact with semi-autonomous computer systems that perform like people? Some computer developers and critics, such as Winograd and the Dreyfuses in their recent books, do not want the problem to even get off the ground because they want to shelve machines that perform like people. But part of their hesitation has to do with the realization that the more we allow semi-autonomous systems to perform people-functions, such as teaching, game-playing, art-making...the more responsibility and skills we give to and give up to these systems. For instance, calculators, some teachers fear, take away elementary arithmetic skills from children (and adults). But why worry? Will we give up more serious thinking skills to computer systems--such as helping students to diagnose their intellectual problems--once we allow computer systems to perform more of the functions that we have done solely with human resources? Recently, someone told me of an incident with one of the pioneers of logic teaching computer systems. He introduced computer assisted logic teaching systems into his intro logic courses. The final step was that he allowed the computer system to teach the entire course. Students came to see him only if they were either too advanced or too far behind the computer system--which was only a small number. The majority were satisfied to work solely with the computer. However, the administration soon caught on to this situation and wondered why he needed graduate assistants for his course.
So the teacher, in order to save his requirement for assistants, retreated and returned to allowing the computer logic teaching system to function only as a supplementary system. Of course, what he really wanted was to have more interaction with the majority of students. From: Willard McCarty Subject: Scientists and Humanists Date: 6 August 1987, 09:02:03 EDT X-Humanist: Vol. 1 Num. 156 (156) The mention of C. P. Snow's famous "Two Cultures" has blown the dust off a paperback volume I still have from the days of teaching composition to students of engineering: "The Scientist vs. the Humanist," ed. Geo. Levine and Owen Thomas, published by W. W. Norton in the U.S. in 1963. (I bought it for $2.75!) It contains pieces from the 18th century (Swift and Johnson) to recent times (Oppenheimer and Rabi). The bibliography begins with Aristophanes, runs through Bacon to Brecht, and includes an article by Kenney, "Dead Horse Flogged Again." It's not a bad collection, on not an unsuitable topic, for the kind of course one could imagine being taught to undergraduates who find themselves in the cross-disciplinary soup we are cooking. The horse is old, to be sure, but unless a person kills it for himself I don't see how it could ever be dead. The impact of computing on humanists, many of whom have never had direct exposure to the sciences, involves both dangers and considerable opportunities for renewal. I think the dangers have mostly to do with what might be called a Freudian envy of the sciences (and, more recently, of commerce), which has possessed many an unwary soul. The interesting thing is that this object of envy is so often a projection, compounded of fear and desire, which has little resemblance to what actually goes on in the sciences -- when they are intelligently practised -- and in commerce.
I found Thomas Kuhn's The Structure of Scientific Revolutions very stimulating in this regard; his description of how science is done seemed to me not unlike how I conduct myself as a literary critic. The opportunities for renewal seem to me mostly to stem from the understanding we can gain of how humanists have always done their work, which may indeed turn out to be how thoughtful human beings have always thought. I doubt there is much really new in this, but to "renew it daily" (supposedly the motto on Confucius' bathtub) is simply intellectual survival. I've attended conferences where people have said that the humanities are moribund, and I've talked to others who say that the kind of intramural world that has allowed the humanities to exist is no longer possible. These people tend to look to computing as a saviour from extinction and ticket to full participation in the modern world, with all the rewards it offers. We seal our own doom, however, if we cannot restate from within our own group of disciplines the unchanging value of the humanistic scholarly life to ourselves and to our society, even if most of its members won't understand. As one of my teachers was fond of saying, there's no such thing as dead literature, only dead readers. It seems to me that computing in the humanities furnishes a very good interdisciplinary framework within which to restate what has never ceased being true. The financial pressures on our universities make this restatement absolutely vital. What can't be used gets sold. From: Willard McCarty Subject: HUMANIST BIOGRAFY in print? Date: 7 August 1987, 13:40:36 EDT X-Humanist: Vol. 1 Num. 157 (157) Nancy Ide of the ACH has proposed that the whole of HUMANIST BIOGRAFY be published in the ACH Newsletter. Please reread what you contributed; let me know if you object (a) in principle to your biographical statement being set down in the cool and authoritative print of the Newsletter, or (b) to the current version being printed. 
If you object to the latter, you will need to supply me with a replacement, let us say before the end of this month. If you have no objections please say nothing -- I get sufficient electronic mail as it is. Thanks for your continuing participation. From: Subject: Date: X-Humanist: Vol. 1 Num. 158 (158) Autobiographies of HUMANISTs First Supplement Following are 20 additional entries to the collection of autobiographical statements by members of the HUMANIST discussion group and 1 update to an existing entry. Further additions, corrections, and updates are welcome, to MCCARTY at UTOREPAS.BITNET. W.M. 10 August 1987 From: Subject: Date: X-Humanist: Vol. 1 Num. 159 (159) *Beckwith, Sterling 248 Winters College, York University, 4700 Keele St., North York, Ontario (416) 736-5142 or 5186. I teach at York University, have created and taught the only Humanities course dealing with computers, in the context of Technology, Culture and the Arts, and serve as director of computer music in the Faculty of Fine Arts, at York. From: Subject: Date: X-Humanist: Vol. 1 Num. 160 (160) *Boddington, Andy Academic Computing Service, Open University, Milton Keynes, MK7 6AA I am a Research Adviser at The OU responsible for advising a broad range of disciplines but specialising in the arts and social sciences. My particular interests professionally at the OU are in encouraging conferencing and developing data handling and data analysis packages for the non-scientist and the 'computer timid'. I also specialise in statistical analysis. I am an archaeologist by training and inclination. I am particularly active in propagating computing as an analytical tool within archaeology, as well as the benefits of desktop publishing to a discipline which produces large volumes of printed ephemera. From: Subject: Date: X-Humanist: Vol. 1 Num. 161 (161) *Brown, Malcolm gx.mbb@stanford.bitnet ACIS/IRIS Sweet Hall, Stanford University, Stanford, CA 94305-3091 Humanities background.
Undergraduate: UC Santa Cruz, BAs in Philosophy, German Literature Graduate: Universitaet Freiburg (two years); Stanford University (German Studies). Dissertation: "Nietzsche und sein Verleger Ernst Schmeitzner: eine Darstellung ihrer Beziehungen" Primary interests: European intellectual history from the Enlightenment to the present Computer background. Systems experience: IBM MVS, IBM VM/CMS; DEC TOPS-20; Berkeley 4.3 UNIX; PC-DOS and MS-DOS; Apple Macintosh. Current responsibilities. I support the Stanford Humanities faculty in all aspects of computer usage. We are currently looking at ways in which more powerful microcomputers (PS/2, Mac II) might assist humanist scholars in their research. Additional interests. All aspects of text processing, from data entry (such as scanning) to printing, which might loosely be called digital typography. Especially: page description (e.g. PostScript), typesetting (e.g. TeX, Interleaf, PageMaker etc), typeface design. From: Subject: Date: X-Humanist: Vol. 1 Num. 162 (162) *Brunner, Theodore F. Theodore F. Brunner, Director, Thesaurus Linguae Graecae, University of California Irvine, Irvine CA 92717. My telephone number is (714) 856-6404. Short description of the TLG: A computer-based data bank of ancient Greek literature extant from the period between Homer and A.D. 600 (we are now beginning to expand the data bank through 1453). From: Subject: Date: X-Humanist: Vol. 1 Num. 163 (163) *Choueka, Yaacov Department of Mathematics and Computer Science, Bar-Ilan University, Ramat-Gan, Israel, 52100. From: Subject: Date: X-Humanist: Vol. 1 Num. 164 (164) I am the Secretary of the Association for Literary and Linguistic Computing and a member of the editorial committee of Literary and Linguistic Computing, and co-author (with B. H. Rudall) of Computers and Literature: a Practical Guide, recently published by Abacus Press, along with a number of articles and papers on humanities computing. I look forward to hearing from you.
From: Subject: Date: X-Humanist: Vol. 1 Num. 165 (165) *Cover, Robin C. Assistant Professor of Semitics and Old Testament 3909 Swiss Avenue; Dallas, TX 75204 USA; I am the faculty coordinator of the (current) "Committee for the Academic Computerization of Campus"; we are just beginning to face up to the need for a distinct entity which will be responsible for academic applications of computers: software development for textual analysis; multi-lingual word processing; supervision of the student computer lab (with CAI for Koine Greek and Biblical Hebrew); purchase of workstation equipment dedicated to textual analysis (micro-IBYCUS, etc); faculty education in humanistic computing; etc. My specific role now is to represent to the administration the need for this new entity, the precedent for it (at other universities); definition of the role of the entity within institutional purpose; proposal for staffing, funding and organizational structure; etc. My special interests are in MRT archives and text retrieval programs to study encoded texts. From: Subject: Date: X-Humanist: Vol. 1 Num. 166 (166) *Curtis, Jared Curtis Department of English, Simon Fraser University, Burnaby, BC V5A 1S6 (604) 291-3130 I conduct research in textual criticism, including the use of computers, teach "Humanities research and computers" to graduate students, and give advice to colleagues and students. From: Subject: Date: X-Humanist: Vol. 1 Num. 167 (167) *Erdt, Terry Graduate Dept. of Library Science, Villanova University, Villanova PA 19085 (215) 645-4688 My interests, at this point in time, can be said to be optical character recognition, scholar's workstation, and the computer as medium from the perspective of the field of popular culture. From: Subject: Date: X-Humanist: Vol. 1 Num. 168 (168) *Goldfield, Joel Assistant Professor of French, Dept. of Foreign Languages, Plymouth State College, Plymouth, NH 03264; Tel. 603-536-5000, ext. 
2277 My work focuses on stylostatistical and content analysis, especially in the field of 19th-century French literature. I am currently developing a sub-field called "computational thematics" wherein a selective database based on conceptually organized words and including frequency norms for appropriately lemmatized entries can be applied to thematic and content analysis. My current application is to the 19th-century diplomat and author, Arthur de Gobineau, his use of "tic words" and other stylistic traits disputed by Michael Riffaterre and Leo Spitzer. I attempt to resolve this controversy through this conceptual, thematic, and stylostatistical approach. See the project description listed by Klaus Schmidt in the latest newsletter/booklet from the Society for Conceptual and Content Analysis (SCCAC). I would welcome comments on database structures, stylostatistical applications and programming from other UNIX users, who may want to compare their experiences with those I described in my article for the ACTES of the ALLC meeting in Nice (1985), a 1986 publication by Slatkine, vol. 1. I am hoping to prepare a manuscript on humanities computing on the UNIX system for publication within the next 3 years and would welcome all suggestions for contributions. The scope may be restricted later to literary and linguistic applications, depending on contributions and an eventual publisher's preferences, but, for the moment, everything is wide open. The only real computer connection with what I teach here in the University System of New Hampshire (Plymouth State College) is computer-assisted instruction/interactive videotape & videodisk. My 4-course/sem. teaching load typically includes 2 beginning French course sections, 1 intermediate course, and an advanced one (translation, culture & conversation, 19th-cen. Fr. lit., or history & civ.). 
I also conduct innovative FL teaching methodology workshops and consult with various public school and college foreign language departments on evaluating, using and authoring CALI/interactive video. From: Subject: Date: X-Humanist: Vol. 1 Num. 169 (169) *Hare, Roger Training Group, Computing Service, University of Edinburgh, 59 George Square, Edinburgh, Scotland. Graduated in Applied Physics from Lanchester Polytechnic (Coventry) in 1972. First exposure to computing in second year course (Algol on an Elliott 803), and third year training period (Fortran on IBM and Honeywell machines at UKAEA Harwell). Thereafter spent several years working in the hospital service in Manchester and Edinburgh, mostly in the area of respiratory physiology and nuclear medicine. Computing interests re-awakened on moving to Edinburgh in 1974. After a couple of years away from computing, followed by a couple of years working as an 'advisor/programmer/trouble-shooter' for a bureau, re-joined Edinburgh University in 1980 as an 'adviser/programmer/trouble-shooter' on the SERC DECSystem-10. After three years or so in this job, joined the Training Unit of the Computer Centre (now the Computing Service) where I have remained. We teach various aspects of computing, but my own interests are in the Humanities area (amongst others), literary analysis, languages suitable for teaching computing to non-numerate non-scientists, computerised document preparation (I don't like the terms word-processing and text-processing) and puncturing the arrogant idea held by many scientists that computers are solely for use by scientists, etc. I am currently looking (or trying to find the time to look) at Icon, Prolog, Lisp, Simula, Pop (?), etc. (I gave up on C!), with a view to using one of these as a language to teach programming to humanists. The first thing I have noted is that my head is starting to hurt!
The second is that Icon seems to be a good idea for this sort of thing, though I am not deep enough into the language yet to be sure. If anyone out there has any ideas/experience on this one, I'll be happy to pick their brains... From: Subject: Date: X-Humanist: Vol. 1 Num. 170 (170) *Holmes, Glyn <42104_263@uwovax.UWO.CDN> Department of French, The University of Western Ontario, London, Ontario, Canada N6A 3K7. Phone: (519) 679-2111 ext. 5713/5700. Main area of research is computer-assisted language learning, with emphasis on input analysis and instructional design. Most of my publications have been in these areas. I have also taught a course on French and the Computer, which covered CALL, literary and linguistic computing, use of databases, etc. I am the editor of Computers and the Humanities. From: Subject: Date: X-Humanist: Vol. 1 Num. 171 (171) *Hulver, Barron Houck Computing Center, Oberlin College, Oberlin, OH 44074 My position is technical support analyst. Basically I assist students and faculty in trying to use our computers and networks. From: Subject: Date: X-Humanist: Vol. 1 Num. 172 (172) *Kashiyama, Paul I am a philosophy Ph.D. candidate at York University concentrating in the area of ethics and jurisprudence. I am particularly interested in the potential roles computers/AI would play in formulations of ethical/legal judgments, and in the philosophical question of whether such judgments are adequate replacements for human decisions or at least adequate models of ethical and legal decision-making procedures. My background in computing includes programming in BASIC, Pascal, Prolog, some C, applications programming in FRED and dBASE III+, and training and teaching experience in database management, spreadsheet organization, word processing, and introduction to programming for children and business persons using personal/micro computers. From: Subject: Date: X-Humanist: Vol. 1 Num. 173 (173) *Matheson, Philippa MW Athenians Project, Dept.
of Classics, Victoria College, Univ. of Toronto, Toronto, Canada M5S 1A1 (416) 585-4469 My university affiliation is the ATHENIANS project, Victoria College, University of Toronto, and my humanist computing activities are varied: programs for the Canadian classics journal, Phoenix; all forms of computer and scholarly aid for the ATHENIANS (Prosopography of ancient Athens) project; an attempt to establish a bibliography of articles in Russian (translated) on the subject of amphoras (ancient wine jars) on the EPAS machine; as well as trying to exchange amphora data for a database project on the stamps on ancient wine jars (called, imaginatively, AMPHORAS). I call myself a computer consultant, and am mostly consulted about how to make PCs deal with Greek... From: Subject: Date: X-Humanist: Vol. 1 Num. 174 (174) *McCarthy, William J. Dept. of Greek and Latin, Catholic University of America, Wash., D.C. 20064 (202) 635-5216/7 Although untrained in computer science - and doubtless possessing little aptitude for it -, I have plunged considerable time into an effort to harness for myself and my colleagues the powerful tools of study and "productivity" which the computer offers to accommodating scholars. My hope is that groups such as HUMANIST will be able, in some way, to guide the development of a fruitful conjunction of technology and humanism. From: Subject: Date: X-Humanist: Vol. 1 Num. 175 (175) *McGregor, John University of Durham, Abbey House, Palace Green, Durham DH1 3RS, UK Areas of interest: Septuagint/ Greek/ CALL/ Bible Present status: Developing CALL software for NT/Biblical Greek From: Subject: Date: X-Humanist: Vol. 1 Num. 176 (176) *Roosen-Runge, Peter H. Dept. of Computer Science, York University, 4700 Keele St., North York (416) 736-5053 I have been involved with supporting and extending computing in the humanities for many years (I think I taught the first course at the UofT on computing for humanists in 1968!) 
Current projects include melody generation based on a model of a "listener" expressed in Prolog, and a music database system under Unix. I am also very interested in the impact of large comprehensive text databases on teaching, and the role of universities in creating and publishing such databases; but I am only in the early stages of formulating a research project in this area. From: Subject: Date: X-Humanist: Vol. 1 Num. 177 (177) *Seid, Timothy W. 74 Clyde St., W. Warwick, RI 02983 Box 1927, Religious Studies Dept., Brown University, Providence, RI 02912 (401) 828-5485; (401) 863-3401 My interest in computers, which began when I first entered the doctoral program in History of Religions: Early Christianity two years ago, soon grew to the point of my becoming the department's Distributed Computer Support Person. During last year, when TA positions were scarce, I was able to get a Computer Proctorship. Again, for this next year, I will hold such a position. The main project, for which we have an Educational Computing Grant from the university, will be to develop a CAI which will teach students about textual criticism--in simulation for the undergraduate course in Earliest Christianity and using the ancient languages for the graduate seminar. Two personal projects have to do with word-division of ancient Greek manuscripts and scanned images of the same. I'm also a member of Brown University's Computing in the Humanities User's Group (CHUG) and co-leader of the Manuscript Criticism Working Group of CHUG. As a service to the department and the University at-large, I maintain RELISTU, a Religious Studies Common Segment on the mainframe on which I archive the ONLINE NOTES and the BIBLICAL SCHOLARS ON BITNET ADDRESS BOOK and have the first version of the CAI I've called TEXT EDIT. From: Subject: Date: X-Humanist: Vol. 1 Num.
178 (178) *Sitman, David Computation Centre, Tel Aviv University I teach courses in the use of computers in language study and I am an advisor on computer use in the humanities. From: Subject: Date: X-Humanist: Vol. 1 Num. 179 (179) *Zayac, Sue I work for the Columbia University "Scholarly Information Center". This is an experimental union of the Libraries and the Computer Center designed to "stimulate and support the productive and creative use of information technology by our faculty and students" - Pat Battin, Vice President and University Librarian. "Information technology" includes everything from parchment to CD-ROM, and from thumbing through a 3x5 card catalog to searching a database on a new supercomputer from the Vax workstation on your desk. My title is Senior User Services Consultant, Academic Information Services Group. My areas of responsibility are statistical programs, particularly SPSSX and SAS; word-processing, particularly the mainframe text-formatting product, SCRIBE; and a smattering of anything and everything that anybody might ask me about. I have a BA in Geology from Barnard College and a Masters from the Columbia University School of Public Health (major area was Population and Family Health). I'm one of the few people at the Computer Center who didn't major in Computer Science or Electrical Engineering. One of my great uses here is to play the part of "everyuser". Interests are classical archaeology (I almost majored in Greek and Latin, but realized in time I had no talent for languages), history of science, history in general, ballet, armchair astronomy (I don't like the cold), gardening, and nature watching. I once did rock climbing but, like many of us in the computer field, I've gotten out of shape sitting in front of a monitor all day long. Mail is welcome, on any topic. From: ARCHIVE@VAX3.OXFORD.AC.UK Subject: Date: 11-AUG-1987 14:34:26 X-Humanist: Vol. 1 Num.
180 (180) OXFORD TEXT ARCHIVE RESEARCH ASSISTANTSHIP The British Library has recently approved a grant to fund a one-year research assistantship in the Oxford Text Archive at Oxford University Computing Service (OUCS). The person appointed will be required to investigate current and potential applications of machine readable texts in a scholarly context. A survey will be made of current usage, and recommendations produced about ways of integrating existing machine readable texts (e.g. typesetting tapes) into a text database. Applicants should have some experience of academic research, enthusiasm for text processing in the humanities and preferably some background knowledge of databases or electronic publishing. It is hoped to appoint to the post with effect from January 1988, on the Research Scale 1A (£8,185-£14,825, under review). For more information, e-mail LOU@UK.AC.OX.VAX1, or write to Mrs D. Clarke, Oxford University Computing Service, 13 Banbury Road, Oxford OX2 6NN From: sano%VLSI.JPL.NASA.GOV@Hamlet Subject: RE: HUMANIST BIOGRAFY in print? Date: Tue, 11 Aug 87 13:39:46 PDT X-Humanist: Vol. 1 Num. 181 (181) Willard, If the biografy is going to print, I'd like to change mine. Unfortunately, my vlsi machine is going down tomorrow for a facility move which is only supposed to take a week. I'll try to get on and send you a new biografy, but if I don't, please don't print it. Thanks. Haj From: Willard McCarty Subject: Review of Discussions Date: 12 August 1987, 14:54:10 EDT X-Humanist: Vol. 1 Num. 182 (182) The following is a draft for an article that will appear in the forthcoming Newsletter of the ACH. The first part describes HUMANIST, the second part summarizes the discussions that have taken place here in the last two months. The plan is to create a summary of discussions every three months for the ACH Newsletter and for the Journal of the ALLC and to publish these summaries here as well.
It seems to me that we need periodic reminding of what has happened on HUMANIST to give this rapidly flowing medium some continuity. Comments on this summary, either about its form or its content, are welcome. Please send them to me directly. W.M. ----------------------------------------------------------------------------- HUMANIST So Far: A Review of the First Two Months One of the first activities of the new Special Interest Group for Humanities Computing Resources has been to establish an international electronic discussion group, HUMANIST, on the Bitnet/NetNorth/EARN node in Toronto, Canada. The purpose of HUMANIST is to link together those who in any way support computing in the humanities in the terms defined by the new SIG. Initially HUMANIST was focused on discovering a common professional identity among its members; although this remains a strong interest, its horizons have expanded considerably. HUMANIST's first message was sent out on 13 May to approximately two dozen people in three countries. As of the end of July, HUMANIST has grown to nearly 100 people in 9 countries around the world, and membership continues to grow. To be included an individual need only be involved in some way with the support of humanities computing; he or she need not be a member of the ACH or ALLC, although membership in these organizations is actively encouraged. Because we do not really know what it means, this "support" is in practice very loosely defined. Technically speaking, HUMANIST is a list of names and addresses kept by ListServ software on the IBM 4381 known as UTORONTO. When ListServ receives an ordinary e-mail message addressed to HUMANIST@UTORONTO by anyone on the list, it automatically mails a copy to every other person on the list. The sender need not be on Bitnet/NetNorth/EARN but can communicate to HUMANIST from any network with a gateway to Bitnet. Unlike conferencing systems, ListServ does not permit subdivision of a discussion group into subtopics.
It is thus like a large seminar on a very general topic, in which everyone is privy to everything everyone else says. De facto subdivision can be achieved by direct e-mail conversations among members, but ListServ does nothing to assist this. Every list has one or more "owners," who have supervisory rights that may be varied in their degree of control. HUMANIST has two: Steve Younker, the "postmaster" of the UTORONTO node, who helps with problems related to the network itself, and Willard McCarty, the editor. HUMANIST has been set up such that an individual must ask the editor to be given membership, but once he or she is a member mailing privileges are unrestricted. The lack of control in this regard inevitably leads to some unpleasant floods of junk-mail, but it also permits free-ranging discussion and frees everyone from the inhibiting burden of dictatorial powers and duties. The membership has indeed been patient and forgiving as well as very lively during the initial period. A few mutually respected rules of etiquette have evolved. Direct conversations among members interested in highly specialized topics are encouraged, with the understanding that the originator of the special discussion will summarize the results for everyone else. Direct conversations are especially recommended when a HUMANIST asks for specific information, e.g., "Where can I find worthwhile reviews of Nota Bene?" Members are also encouraged to identify the subjects of contributions and themselves by name. Because someone applying for membership in HUMANIST must say what he or she does to support computing in the humanities, the owner has accumulated many interesting biographical statements. These were recently gathered together, cursorily edited, and sent out to all HUMANISTs in order to introduce everyone to everyone else and thus to help define a professional identity. Supplements are planned as new members' statements accumulate.
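The reflector mechanism described above (one inbound message to the list address, one outbound copy to every other member, with no subdivision into subtopics) can be sketched in a few lines. This is an illustrative sketch only, not actual ListServ code; the member addresses and the `reflect` function are invented for the example.

```python
# Sketch of a ListServ-style mail reflector: a message sent to the
# list address is re-mailed to every member except the sender.
# Member addresses here are invented for the example.

members = {
    "MCCARTY@UTOREPAS",
    "IAN@UTOREPAS",
    "JACKA@PENNDRLS",
}

def reflect(sender, body):
    """Return (address, body) pairs: one outbound copy per member,
    excluding the sender. No topic-based routing is possible --
    every member receives every message."""
    recipients = members - {sender}
    return [(addr, body) for addr in sorted(recipients)]

copies = reflect("MCCARTY@UTOREPAS", "This is test number 1.")
```

Note that the exclusion of the sender is the only routing decision made; everything else, like the "de facto subdivision" mentioned above, has to happen by direct person-to-person mail outside the reflector.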
Tim Maher (Computing Services, Berkeley) is meanwhile working on a more detailed and systematic questionnaire. In many respects HUMANIST fulfills the late Marshall McLuhan's vision of the "global village," in which the great physical distances that separate its members almost cease to matter. It is for that reason a fascinating sociological experiment. Of course HUMANIST is used to disseminate information, but the interaction of personalities, perspectives, and ideas bulks much larger in my growing file of contributions than exchanges of facts. The Discussions Since the first contribution on 19 May, several types of discussions have occurred. I count 7 concerned with the etiquette of contributing to HUMANIST, 8 requests for specific information, and one advertisement of a job. On 5 occasions it has been used to announce publication of or offer subscription to both printed and electronic sources of information, and 6 conferences and calls for papers have been published this way. Interestingly, one HUMANIST's objections to the program for the forthcoming conference at Oberlin -- he pointed to the repetition of issues raised earlier at the Vassar conference -- resulted in a thorough exploration and defense of the rationale for the conference. As one defender put it, the later conference returns to the issues of the first because both are dealing with difficult and important questions: "what it is we want our students to learn, the nature of the world into which we are sending them, and the relationship both of technology and (more fundamentally) the algorithmic approach to problem-solving." 1. Programming in the curriculum. The unresolved nature of these questions is demonstrated by the prior and independent discussion on HUMANIST about the teaching of programming to students in the humanities. Some comments addressed the virtues and limitations of specific languages, such as Prolog or Icon, or of languages of a specific generation. 
The more interesting contributions, however, circled around the question, "Why should arts students learn programming at all?" One HUMANIST concluded that "the more basic task is to teach undergraduates, and people in general, how to recognize problems, identify and characterize them, understand their nature, and then to determine which tool may be appropriate for the problem." Another noted the analogy with learning classical languages (formerly the usual means of acquiring intellectual discipline) and concluded by saying that teaching students a computational language will show them "how to approach and analyze a problem from a computational point of view. And that will help them both in the Big Bad World... and in the academic world... where humanists need more than ever to understand how to express a problem clearly in computational terms in order to get not just a correct answer but the correct answer to the question they want to ask. It will also help them, if they remain in the academic world, to view with proper skepticism both those humanists who deny that the computer can be a valuable tool... and those who think the computer can solve any question it is worthwhile asking better than a human being can." 2. Professional recognition and electronic publishing Another substantive discussion began with the vexing problem of professional academic recognition for work in humanities computing and with the desire to exploit the electronic medium for publication. The latter issue is related to the former, since electronic publication carries with it no professional kudos and may preempt the conventional kind. The latter, however, is in some cases too slow to keep pace with developments in the field, so that, for example, reviews of software may be obsolete by the time they see print. The formality of print may also inhibit as well as encourage higher standards of work. 
Some HUMANISTs commented that they would always opt for publication in print unless journal editors were agreeable to pre-publication in electronic form. Unrestricted redistribution would be a problem, as would the availability and reliability of electronic networks around the world. The lack of typographic sophistication as well as diacritical marks makes imitation of printed journals impossible. "The technology isn't up to it," one person said. Another remarked, however, that since HUMANIST is non-refereed, publication there would be in a different category from the conventional kind, somewhat like the circulation of a technical report in computer science. The trick is to exploit rather than be thwarted by the characteristics of the medium. A change in how research in the humanities is done could result. The appearance of this column in the ACH Newsletter represents a link with conventional publication and an attempt to exploit the new medium, but it does not do much about the problem of professional credit. There was little disagreement about the lack of professional recognition. One HUMANIST remarked, for example, that in his department "writing software ranked dead last in a list of 35 activities considered worthy for English faculty." He went on to note, however, that "I was not hired to work with computers.... So, it is to some degree my own doing." He advised younger, untenured members "to be sure that their computer activity officially be made part of their job description," but he concluded by noting that most of the work in humanities computing does not itself constitute research, at least not in the humanities. This is, of course, a serious issue, since it raises the question of our scholarly and academic legitimacy. It may be significant that there have been no replies to a direct question posed on HUMANIST about our scholarly contribution to humanistic scholarship.
One contributor had suggested earlier that work in humanities computing might be considered on a par with the editing of texts or assembling of bibliographies, for example. The most vexing problems with making use of this analogy seem to point to the juvenility of an emerging discipline: the lack of peer-review, hence of quality-control; the confusion over aims and possibilities; and indeed, the fluid nature of terms and definitions. The exchanges over these issues on HUMANIST have been desultory, but the existence of an electronic forum promises to accelerate the shaping of this new discipline. 3. Desktop publishing. HUMANISTs also discussed the impact and potential of desktop publishing. The originator noted the many problems with formal electronic publication but remarked that "using electronic means to improve the quality of conventional scholarly publishing really seems to me an exciting possibility." To the dire predictions of decline in quality she opposed the great advantage for the academic editor or scholarly research project of being able to control book production as well as to reduce its cost greatly. A respondent noted two reasons for decline in quality: (1) the typographical superiority of traditional methods; and (2) the lack of required skills characteristic of most desktop publishers. Since improvements in technology will likely soon close the gap between new and old methods, the second item is really the central problem. As he remarked, "Really good work in this area cannot be done by amateurs," who are mostly unable to judge the quality of what they are producing. Another HUMANIST, who works at a major academic publishing house, described a "do-it-yourself" (rather than desktop) facility that to date has produced over 200 scholarly volumes. She stressed the role of the typographic department of the press in helping an author design a volume; or, when the author does not yet have a press, of other books in providing models for him to follow.
She remarked that "On the whole... our users have been quite conscientious and have made considerable efforts to produce texts which have a pleasing appearance." Ironically, she noted that the generally high quality of these texts may be in part attributable to the fact that this system is considerably less "friendly" than the usual desktop publishing software. The user is forced to learn several unfamiliar typographical terms, "all of which remind him that he is dabbling in an area of considerable tradition and expertise and art, and encourage him to walk with caution, possibly even respect." Conclusion We plan to review the activities of HUMANIST in the ACH Newsletter on a regular basis. These reviews will also be published on HUMANIST itself in order to remind the members what has happened and thus to give them the opportunity to renew a lapsed discussion. Anyone wishing to join HUMANIST should send an e-mail note to MCCARTY@UTOREPAS.BITNET, giving a brief professional biography. As I have mentioned above, these biographies will later be circulated to all HUMANISTs. Willard McCarty Centre for Computing in the Humanities University of Toronto From: S_RICHMOND@UTOROISE Subject: COMPUTERS VS. HUMANITY Date: Wed, 12-AUG-1987 16:13 EST X-Humanist: Vol. 1 Num. 183 (183) 1 To Lou Burnard: Thanks for your comments. 2 I agree that there is insecurity about computers. Some of it is warranted--as far as job security goes. For instance, in Canada there are currently 19,000 draftsmen. Two years from now, it is predicted, there will be 900--due to the coming of CAD (computer-aided drafting). 3 As far as Snow and the two-cultures problem goes, this problem transcends its parochial background. It is a fact that people in the humanities and arts are ignorant, by and large, of current science and scientific methodology. Scientists, by and large, are ignorant of the humanities. So what? That's the problem--what is the significance of the gap? 4 About word processors.
I started using word processors about six years ago. What made me excited about using this technology was that it allowed me to combine two or three processes: 1. First hand-written draft. 2. 'N' typed draft. 3. Cut and paste. 4. Repeat steps 2 and 3 to produce the N+1 typed draft. ('4' is a recursive function.) I also use step 1 when I find myself too far from my keyboard; and then replace the typewriter with the word processor. Now I am even more excited about word processors than when I first started using them. For instance, the one I am using now to compose this reply allows one to automate foot-noting, structuring (in terms of sections and sub-sections), and the generation of a table of contents and index. Of course, it comes with a spelling-checker with a facility for making several custom dictionaries. However, by and large, I expect to produce the same old product--paper essays. Also, the cognitive functions supported by this process--in so far as word-processing expedites cut-and-paste--are no different than the hardware cut-and-paste. (Scrolling a typed text merely expedites cut-and-paste.) 5 Is there anything qualitatively new introduced by word processors? Yes--only in so far as they are used in conjunction with electronic journals/mail/bulletin boards to produce electronically stored essays, etc., that can be accessed quickly and virtually universally. In effect, we will open up the exclusive world of intellectual products to a wide audience of non-professional scholars who will be able to join in this world without requiring the luxury of an academic position. This widening of the academic world, or access to intellectual products without requiring an academic position, will not only widen the arena of discussion, but will (or could) open up new intellectual problems for discussion, and create new jargons and methodologies as required for the discussion of these problems.
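The drafting procedure in point 4 above is indeed recursive: step 4 re-invokes steps 2 and 3 until the writer is satisfied. A playful sketch of that loop, in which the `cut_and_paste` helper and the fixed revision count standing in for "the writer is satisfied" are both invented for the illustration:

```python
# Sketch of the recursive drafting procedure described in point 4:
# step 4 calls steps 2 and 3 again to produce the N+1 typed draft.
# cut_and_paste() and max_revisions are invented placeholders.

def cut_and_paste(draft):
    # stands in for step 3: rearrange the text of the current draft
    return draft + " [rearranged]"

def typed_draft(draft, n, max_revisions=3):
    """Produce the Nth typed draft; recurse until revision stops."""
    if n >= max_revisions:              # stands in for the writer's satisfaction
        return draft, n
    revised = cut_and_paste(draft)      # step 3
    return typed_draft(revised, n + 1)  # step 4: back to step 2

final, n = typed_draft("first hand-written draft", 1)
```

The word processor, as the passage argues, does not change the shape of this recursion; it only makes each pass through it faster.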
5.1 But once these electronic journals replace paper-media, an unfortunate loss will be the art of typography. Furthermore, once people cotton to the idea that physical libraries, and full time attendance in university courses, could be replaced by electronic libraries and computer-assisted instruction, libraries and librarians, and universities and professors may become redundant or surplus. Instead of spending years in university and then taking on a job, people could join companies with their own educational institutes, computer-assisted job-training and skill-enhancing courses. I'm not sure where liberal arts courses would fit into this world--perhaps they will be treated as leisure time, continuing education courses, to be taken after work hours along with cooking, sailing, photography, creative writing, etc. But again, perhaps people will work 9 hours a week from home-offices at their computer terminals, and be capable of pursuing full-time research, if they wish, from their computer terminals, or of attending traditional universities--where they could have face-to-face contact with professional teachers. 6 To conclude: I disagree with you about the qualitatively new features introduced by word processors. However, when they are used in conjunction with electronic mail/journals/bulletin boards, they do permit rapid access and participation in a world of thought, that could open up this world to a larger audience of non-dedicated scholars, or non-professional scholars. This could both universalize thinking, and produce new sub-groups with new jargons and new intellectual problems, heretofore uninvented due to the limited resources available for pursuing intellectual pastimes.
I welcome your comment that scrolling through finished-looking texts is a new function of computer word processors. Automated spelling checks and grammar checks are also new functions. However, the automated spelling checker I have doesn't check for context. It only checks the spelling of the word, and ignores whether the word is the wrong word for a given context. To check for context requires natural language understanding, which is still at a rudimentary stage in A.I. research and development. My point is that not all new technological functions do anything that is qualitatively new for a process to which they provide support, or for the products which they help make. Still, do word processors enhance thinking, or in your words, the marshalling of ideas? Cognitive psychological studies of expert writers reveal that these writers use their writing as a means for revising their understanding of problems, and for improving their solutions of these problems. In physical terms, this involves cut-and-paste, deletion, addition, modification, and rewriting of the entire text. It is true that when one does this on a computer, the result is neater, looking like a finished product; one can do it more quickly without having to use tape or glue, and less paper ends up in the waste bucket. So the speed of this process on computer--though no different in function or type from what is done manually with scissors, glue, paper, typewriter, waste bucket...--because it is so much greater, permits a greater number of revisions that each look finished, but are not really--as far as the intellectual process of clarifying and improving one's ideas goes. In that respect, word processing could enhance thinking by allowing for more revisions in less time with less physical effort. But I don't think word processing adds a new dimension to thinking, or adds new features to our thought processing.
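[Editor's illustration, not part of the original exchange: the kind of word-list spelling checker described above can be sketched in a few lines of modern Python. The tiny DICTIONARY and the function name are invented for the example; a real checker would use a full word list, but the limitation shown is the same one the author notes.]

```python
# A minimal sketch of a word-list spelling checker. It flags words absent
# from its dictionary, but--as the message above observes--it cannot notice
# a correctly spelled word used in the wrong context.

DICTIONARY = {"the", "two", "cultures", "there", "their", "are", "is", "neater"}

def check_spelling(text):
    """Return the words in `text` not found in DICTIONARY (case-insensitive)."""
    return [w for w in text.lower().split()
            if w.strip(".,;:!?") not in DICTIONARY]

# A genuine misspelling is caught:
print(check_spelling("the two cultres"))         # ['cultres']

# But 'their' where 'there' is meant passes unnoticed, since no context
# is consulted:
print(check_spelling("their are two cultures"))  # []
```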
From: Willard McCarty Subject: The scholarly contribution of humanities computing Date: 12 August 1987, 16:22:47 EDT X-Humanist: Vol. 1 Num. 185 (185) I received the following comment from Abigail Young that is thoughtful and lengthy enough to be passed on immediately to everyone else rather than saved for a later summary. ---------------------------------------------------------------------------- I have been mulling over your words about computing and a scholarly contribution to humanistic research at the end of #2 in the summary of discussion. I think a lot of the problem is one of definition. For example, REED makes, as a scholarly project engaged in documentary editing and publishing, a solid contribution to at least two areas of the humanities: history and literature. But our computing here is pretty pedestrian: it's central to the project, but what we are doing is using the computer to do things that humanists have been doing since at least the Renaissance. I wouldn't feel that computing was making a truly new contribution to the humanities unless it were possible to make a qualitative rather than a quantitative advance in humanistic studies by means of computing, that is, not doing something more accurately or more quickly, but doing something which could not have been done at all, which was not even thought of, before. That doesn't mean I don't think that the contributions that computers make in assisting research and writing are important, especially databases for historical and certain kinds of literary research. I think they are very important. But they are merely providing better tools to do tasks we have always wanted to do. I'm not sure that there are truly new contributions to the humanities to be made by computing, but if there are, I think the novelty will have to wear off before we can recognize them.
This is, I suspect, a reactionary and heterodox view; and it may be all wrong: I may only think it because of my lack of familiarity with the cutting edge of humanities computing. And I emphasize that I don't at all mean that computing isn't a valuable and in many ways essential tool for humanists. But in my mind this is the reason why there has been no direct response to the question to which you alluded in your report. From: Willard McCarty Subject: Date: 12 August 1987, 19:09:23 EDT X-Humanist: Vol. 1 Num. 186 (186) The following contribution was sent with an incorrect node ID. It is a missing piece of a discussion to which everyone received a two-part reply earlier today. -------------------------------------------------------------------------- > Date: 12-AUG-1987 10:21:54 > From: LOU@VAX1.OXFORD.AC.UK > To: humanist@UTOREPAS <---NOTE BAD NODENAME! > Subject: two kulchurs > > Subj: msg=> S_RICHMOND%UTOROISE@RL.EARN: Re: computers vs. humanity > > I'm not sure I like the cooking analogy, so I'm going to pursue the > typewriter/word processor one. It seems to me that a wordprocessor is not > just a better sort of typewriter; or rather that the difference is more > qualitative than quantitative. Both engines enable one to produce a written > document; which is a complex operation involving the disposition of symbols on > a piece of paper, but also the marshalling of ideas in the mind. It seems to > me, after many years experience of both, that the wordprocessor actually > helps as much with the latter as it does with the former. Drafting things > out on paper is, by comparison, clumsy, where anything more than very preliminary > concept maps or headings etc is concerned. The wp, by making it easy to scroll > back and forth thro a text which always looks as if it has just been typed > even tho it may have been changed over and over again, changes the way > i compose texts, and i think for the better.
There isn't any analogy for > this process, because it's a new function that simply wasn't > there in the old technology. Why then do people persistently want to find > analogies for what computers can do, and say "aaargh they are usurping the > human role" when they fail to find one? Insecurity perhaps? I think being > human is also about being a tool-user, homo faber; I have no patience with > the attitude that despises that part of the human spirit. As for > C.P. Snow, his novels are rooted in a deep insecurity occasioned by attitudes > (prevalent in Whitehall in the 50s, but now rather reversed) of a > classically-educated establishment to the arriving technocracy; as such > they are polemic, partisan and almost totally unreadable, because of Snow's > total lack of understanding of human nature. > > Lou Burnard > > From: R.J.HARE@EDINBURGH.AC.UK Subject: C P Snow etc. Date: 13 Aug 87 10:03:42 bst X-Humanist: Vol. 1 Num. 187 (187) Picking up the points made by Lou Burnard and S. Richmond about C P Snow etc., I certainly agree that C P Snow's novels are 'almost unreadable' for just the reasons that are given by LB - indeed one major example in 'The Two Cultures' relates to the attitudes of the arts-biased establishment towards the slightly naff science- and engineering-educated portion of the human race; and to Snow's attempts to influence that attitude at dinner parties with reference (I think) to the 2nd Law of Thermodynamics. I think that the problem these days is probably a little less one-sided than either Snow or SR seem to imply - unfortunately scientists look down on artists *and* artists look down on scientists. I certainly regard it as part of my job to try and break down these artificial barriers, and would also regard it as being essential for *anyone*, whatever their background, who is involved in Humanities Computing to have something approaching the same outlook.
There is no room for the "what on earth would an arts student want to learn programming for?" syndrome. This is not a hypothetical question, but a verbatim rendering of a question asked me by one of my colleagues some time ago. Roger Hare. From: S_RICHMOND@UTOROISE Subject: word processing and the two cultures Date: Thu, 13-AUG-1987 09:10 EST X-Humanist: Vol. 1 Num. 188 (188) 1 Response to Susan Zayac: Studies of expert writers by cognitive psychologists have revealed that what distinguishes the 'novice' from the 'expert' is that novices use the strategy of writing down their thoughts in a free-association, first-in-the-mind, first-out-on-paper strategy. Thus, word processors--if used only in a linear manner to get ideas out onto paper--could reinforce novice thinking-writing habits. However, because of the ease of the cut-and-paste function of word processors, when used in the looping manner of enter-revise-re-enter, word processing could encourage the acquisition of expert strategies of thinking-writing. Admittedly, and happily, dyslexics report that word processing has allowed them to produce comprehensible texts because they do not have to worry about spelling and orthography. That is, the problem that dyslexics face--unlike novice writers, whose problem is getting down their thoughts before they run out--is in writing something remotely legible. A sharper way of putting the question I previously asked about whether word processing enhances thinking-- Does word processing create any new and powerful cognitive strategies? 2 Response to Roger Hare: C.P. Snow, I suppose, wouldn't be ranked with Joyce; but Joyce couldn't be credited with contributing to science merely because he coined the word "quark". Was G.B. Shaw's preface on evolution in "Back to Methuselah" a contribution to the evolution of the theory of evolution? Are there any cases of literary people making direct contributions to science? Einstein played the violin--was this a contribution to music? B.
Russell wrote a novel, which I haven't found yet--would that count as a bridging of the two cultures? Or was Russell a worker in both cultures because he (and Whitehead) virtually created symbolic logic, and Russell also wrote 'traditional' philosophy? These are some of the questions that comprise the two cultures problem, first pinpointed by Snow. That computers find their main application and market in business does not mean that those who use computers are thereby in the business world. Scientists and engineers use computers, for the most part, as a device for processing complex formulae requiring lots of repetitive calculations over very large fields of data. A recent breakthrough in the field of computer applications, though, is the arrival of computer-aided engineering. CAE systems test and improve electronic circuit designs. Also, sub-nuclear physicists are now using computers to record and decode 'events' that were formerly recorded by photography and decoded by teams of graduate assistants. Moreover, the advent of these systems raises the same problem as does the advent of computer-aided teaching systems for professors and students: How do and should we interact with computers that perform intelligent functions, such as designing electronic circuits, and describing physical processes? This problem cuts across cultures, domains, and socio-economic sub-groups. In sum: though people in business, science, and engineering were the first to exploit computers, there is nothing inherently scientific about using computers. Moreover, it is said that computer scientists don't know any programming languages, and if they do, they do not program in any case. What they know is the theory of computation and finite mathematics. Would study of these revolutionary topics in the field of mathematics be more relevant to those interested in learning about the evolution or history of human thinking than learning how to code in a particular computer language?
From: LOU@VAX1.OXFORD.AC.UK Subject: quality or quantity? Date: 13-AUG-1987 16:30:13 X-Humanist: Vol. 1 Num. 189 (189) Let me pick up 2 points from S. Richmond: >>>(1) Also, the cognitive functions supported by this process--in so far as word-processing expedites cut-and-paste--are no different than the hardware cut-and-paste. (Scrolling a typed text merely expedites cut-and-paste.) >>>>(2) Is there anything qualitatively new introduced by word processors? Yes- in so far as they are used in conjunction with electronic journals/mail/bulletin boards to produce electronically stored essays, etc., that can be accessed quickly and virtually universally. I think that "merely expediting cut and paste" is a bit of an underselling of what's going on here (both in this document and in general where wp takes root). Without electronic media 'cut-and-paste' is just impractical. And it is also far from invisible. Look at it the other way round: what we lose with wp is all that gorgeous polysemy and confusion that the practice of palimpsest gave us; as I sd, the wp text is always new, always being re-made, as it were re-read. But just supposing you agree that wp is just what we've always done, only a bit better, then proposition (2) above is surely inconsistent? What's the difference between electronic mail and a runner with a cleft stick? just a bit faster and more reliable (well, usually) isn't it? and whoever says e-mail is accessible "universally" really has been blinded by technophoria! let's not kid ourselves: this unique experiment/pastime/time-waster is just one very expensive toy which we happen (by virtue of our unique cultural/ geographic/political privilege) to be able to benefit from. what reason is there for imagining it would ever become as democratic, as universal, a form of communication as the written word? there are quite a few places in europe where the use of xerox copiers is illegal, never mind computer networks, funded by IBM on a temporary basis. 
I see no evidence at all of "access to intellectual products" ceasing to be contingent on "the luxury of an academic position". Maybe it's different over there. L From: TLG@UCIVMSA Subject: Date: Thu, 13 Aug 87 10:15 PDT X-Humanist: Vol. 1 Num. 190 (190) The TLG has been awarded a (modest) grant to support the convening of a panel at the December 5-8 Society of Biblical Literature (SBL) meeting in Boston. In accordance with the granting agency's wishes, the panel will discuss ways to make the TLG's facilities and resources (and particularly the TLG's biblical and theological texts) more readily accessible to theological institutions and scholars. HUMANIST members with pertinent interests who might wish to participate in the conference at issue should contact me directly. Theodore F. Brunner Director Thesaurus Linguae Graecae University of California Irvine Irvine, CA 92717 Area Code 714 856-6404 From: S_RICHMOND@UTOROISE Subject: quality/quantity Date: Thu, 13-AUG-1987 16:19 EST X-Humanist: Vol. 1 Num. 191 (191) Reply to Lou Burnard's second reply: I don't see any inconsistency between holding that the word processor alone adds nothing new in terms of intellectual power and holding that the word processor in conjunction with telecommunication does add something qualitatively new to intellectual power--at least at the cultural and social level as opposed to the individual level. I am optimistic about the potentiality of electronic media to cross cultural and political boundaries. Thank you for reminding me about the political control over access to even the printed word, not only in Europe, but also on this continent. Do you think those in our paradise of access to this plaything, at the mercy of IBM, can and should take action to open it up to our friends outside these groves? From: Willard McCarty Subject: Date: 13 August 1987, 17:05:51 EDT X-Humanist: Vol. 1 Num. 192 (192) The following is from Nancy Ide.
Some error in software caused it to go astray rather than to HUMANIST. From: KRAFT@PENNDRLN Subject: Response to NEH Funding Request Date: Thursday, 13 August 1987 2050-EST X-Humanist: Vol. 1 Num. 193 (193) In Autumn of 1986, Penn's CCAT (Center for Computer Analysis of Texts) sought funding from NEH to support its "external services" activity and to move in the direction of creating a consortium of cooperating humanities centers. The proposal was not funded, and the detailed discussion and anonymous referee reports have just reached me from the NEH offices. The issues raised are often predictable -- is it wise to invest in CD-ROM technology, isn't Penn overly bound to the TLG coding and IBYCUS influence, why isn't the proposal more specific about what texts will be put on future CD-ROMs -- but one very important issue is especially worth placing before the HUMANIST audience, and that is Do the Centers Really Want to have a Consortium Arrangement? Clearly, some of the reviewers thought not, or at least not on the terms described in the CCAT proposal. If any HUMANISTS would like copies of the relevant materials, or wish to discuss them, I am at your service. This may help strengthen future proposals, from whatever source. I still think we would profit from more formal "consortial" ties, if someone has the courage to try to coordinate us! Bob Kraft, CCAT, University of Pennsylvania (KRAFT@PENNDRLN) From: R.J.HARE@EDINBURGH.AC.UK Subject: Two cultures Date: 14 Aug 87 09:27:08 bst X-Humanist: Vol. 1 Num. 194 (194) I suppose that strictly speaking, the discussion about 'two cultures' is nothing to do with computing and the humanities, but it's intrinsically interesting and I suppose that the attitudes we have towards the 'two cultures' give a guide to our attitudes towards other matters. 
If only because of that last fact, the discussion is valuable, so, a couple of observations on the last few exchanges on this topic: I think that the idea of separating direct contributions to the arts|sciences|arts|sciences by scientists|artists|artists|scientists is a mildly dangerous idea in the first place, if only because it tends to reinforce the barriers between scientists and artists. The whole point about the 'two cultures' as far as I am concerned is that it is a myth. There is only one culture. If one accepts that basic thesis, then yes, GBS's introduction to Back to Methuselah (which I haven't read) *is* an indirect contribution to the evolution of the theory of evolution (or to 'science' or to 'culture'), even if only a tiny one. Whether it's a contribution to Science (with a 'S') is a different matter. Similarly, Einstein's music-making *is* an indirect contribution to music (or the 'arts' or 'culture'), though again, it might not be considered as being a contribution to Music (with a 'M'). Either of these examples might also be considered as having made a direct contribution to our culture if, for example, Shaw's introduction sparked off some new ideas in the mind of an evolutionary biologist, or a sequence of notes played by Einstein gave a composer the inspiration for a new work. Both pretty unlikely I admit, but not as far-fetched as one might suppose - I believe, for example, that it's common for those 'good' at mathematics to be 'good' at music. Perhaps any psychologists out there could confirm that (slightly hazy) recollection and bring to our attention other correlations between 'artistic' and 'scientific' abilities. Roger Hare. From: LOU@VAX1.OXFORD.AC.UK Subject: Date: 14-AUG-1987 10:29:33 X-Humanist: Vol. 1 Num. 195 (195) I am currently revising and updating the Text Archive Shortlist/Snapshot in preparation for an exciting new academic year... One of the pages I am overhauling is the one which lists "Other Archives".
The purpose of this is simply to list major institutions believed to be sitting on (or to know the whereabouts of) large quantities of machine-readable texts. It's obviously not possible to list every place where such things might be found (and it will, in any case, when/if the Rutgers MRTH project reaches fruition, be unnecessary), so I've tried to limit it to major, centrally-funded institutions, and (after some thought) have excluded centres which are PRIMARILY 'centres for computing in the humanities' (excepting those whose texts we have in the archive in category X, because this list is a subset of the depositor address list in the archive database). The current count is a paltry 17; I'm sure I must have forgotten some, and there are errors in those I've remembered, so please help if you can. P.S. Maybe this list might be a starting point towards the sort of 'consortium' that Bob Kraft seems to be proposing. P.P.S. Any responses received after the end of the month will be TOO LATE; any received by the end of next week (20th) will be ON TIME. Lou Burnard OXFORD TEXT ARCHIVE 14 Aug 1987

Other Archives

Biblical texts
  Pe  Center for Computer Analysis of Texts, D Religion, U Pennsylvania, Philadelphia Pa 19143, USA
Dutch
  Le  I.N.L., Postbus 132, Leiden 2300 AC, Netherlands
English
  Be  International Computer Archive of Modern English, EDB-Senter for Humanistisk Forskning, U Bergen, Boks 53, Bergen-Universitet 5014, Norway
French
  Na  Institut National de la Langue Francaise, Universite de Nancy, 44 ave de la Liberation, CO 3310, Nancy-Cedex F 54014, France
General
  Ca  Literary & Linguistic Computing Centre, U Cambridge, Sidgwick Avenue, Cambridge CB3 9DA
  Ox  Oxford Text Archive, U Oxford Computing Service, 13 Banbury Rd, Oxford OX2 6NN
  Ut  Humanities Research Center, Brigham Young University, Provo, Ut., USA
German
  Bo  Inst. fur Kommunikationsforschung und Phonetik I.K.P., Poppelsdorfer Allee 47, Bonn I D-5300, W. Germany
  Ma  Institut fur Deutsche Sprache, Friedrich-Karl Str.
12, Mannheim 1, D-6800, Germany
Greek
  Ir  Thesaurus Linguae Graecae, U California at Irvine, Irvine CA 92717, USA
Hebrew
  BI  Bar-Ilan Center for Computers and Jewish Heritage, Aliza & Menachem Begin Building, Bar-Ilan University, Ramat Gan 52100, Israel
  Je  Academy of the Hebrew Language, Giv'at Ram, P.O. Box 3449, Jerusalem, 91 034, Israel
Icelandic
  Co  Arnamagnaean Institute, U Copenhagen, Njalsgade 76, Copenhagen DK-2300, Denmark
Italian
  Pi  Ist. di Linguistica Computazionale, U of Pisa, via della Faggiola, Pisa I-56100, Italy
Latin
  Lv  Centre electronique de traitement des documents, Universite Catholique de Louvain, Louvain-la-Neuve B-1348, Belgium
  NH  APA Repository of Greek and Latin texts, LOGOI Systems, 27 School Street, Hanover NH 03755, USA
Norwegian
  Bn  Norsk Tekstarkiv, Boks 53, Bergen-Universitet, Bergen 5014, Norway
Swedish
  Go  Logotek, U Goteborg, Sprakdata, 6 N. Allegatan, Goteborg 41301, Sweden
END OF LIST

From: S_RICHMOND@UTOROISE Subject: two cultures Date: Fri, 14-AUG-1987 08:37 EST X-Humanist: Vol. 1 Num. 196 (196) 1 How many cultures? R. Hare's solution, or dissolution, of the two cultures problem in saying that there is really one culture is one way out of Snow's problem. Nelson Goodman says that really the arts are cognitive in content, and only differ from the sciences in their notational systems. If this is the type of solution that Hare is proposing, there still remains a dimension of Snow's problem that is untouched. Very few scientists and humanists can talk about each other's work together--not only on a professional level, but also on an informal level. They have very little comprehension of the problems, methods, and mores of each other. This is akin to a problem posed earlier by the founder of 'Reconstructionism' in North American modern Judaism. M. Kaplan asked: how can the contemporary Jew live as a Jew in modern western civilization?
There are two 'civilizations' that the modern Jew inhabits: one is the traditional Jewish civilization steeped in the Bible, and in Rabbinic interpretation and law; the other is modern western civilization steeped in an extended version of the Bible and in Greco-Roman mores and values. The literature, the mores, and the institutions of these two civilizations not only are different but conflict in certain respects. Analogously, the modern humanist is educated in a distinctive tradition with distinctive mores and problems; however, the humanist lives in a world dominated by the scientific culture. How can the humanist live as a humanist in modern scientific civilization? 2 Are the two cultures becoming one? Perhaps N. Ide's observation that textual analysis and literary criticism are converging with A.I. on the problem of understanding meaning--of how to decode texts and strings--indicates that the two cultures are converging. This reminds me of Karl Popper's remark that really there is only one problem that all thinkers--scientists, philosophers, historians...--are interested in: namely, what is our place in this universe of random events? And of his remark that what really matters is the pursuit of problems regardless of academic discipline. However, in spite of this, should we ignore the fact that the focus of the A.I. world and the literary criticism world on the problem of explaining how meaning occurs and how meaning can be obtained differs? The A.I. world is interested in simulating the process of meaning. The textual analysis world is interested in the meaning of particular texts, in decoding texts found in different periods of history. Isn't this one of the crucial differences in values between the sciences and the humanities? The sciences are interested in process just as process; the humanities are interested in process in so far as it helps one to approach the understanding of unique products and unique events in human history.
From: KRAFT@PENNDRLN Subject: Archives Date: Friday, 14 August 1987 1020-EST X-Humanist: Vol. 1 Num. 197 (197) Since this may be of more general interest, here are a couple of corrections and comments on Lou Burnard's list of Archives: 1. The address for "Pe" = CCAT needs to be corrected as follows: Religious Studies (this is optional; CCAT will do) Box 36 College Hall U Pennsylvania Philadelphia PA 19104-6303 USA 2. The Pe archive focuses on biblical materials, but includes much more since we have tried to gather a variety of texts from other sources (as the CD-ROM "text sampler" contents indicate). Probably the "General" category would be appropriate, perhaps with the comment "special focus on biblical and related materials." 3. The APA Archive has now moved with Stephen Waite to the new Packard Humanities Institute (which Waite now directs) 300 Second Street Los Altos, CA (I need to get the zipcode for Lou) USA 415 948-0150 (Bitnet account not yet established) This Institute (PHI) will have more than only Latin (and Greek) texts, although the initial concentration is on producing a Thesaurus Linguae Latinae parallel to TLG. Bob Kraft From: Willard McCarty Subject: Scholarly computing in the humanities? Date: 16 August 1987, 14:00:09 EDT X-Humanist: Vol. 1 Num. 198 (198) My question about the scholarly nature of humanities computing, recently addressed by Abigail Young and Nancy Ide, leads almost immediately to the more general question of what humanistic scholarship is, or what we think it is. Popular culture is still permeated with the simplistic notion of progress, which in practice is much more congenial to the sciences than it is to the humanities. A scientist's career often hinges on whether he is the first to announce a new discovery, and if he does the rewards can be enormous. He is much more likely to capture the public imagination than the humanist. 
The humanist's discoveries, or rediscoveries, are both more remote from and more immediate to daily life, and therefore harder to see: either because cultural self-understanding is difficult to achieve and its effects profound and gradual, or because they touch intimately aspects of life that are routinely ignored though utterly inescapable. The fruit of the scientist's work is often marketable, at least in theory, whereas the humanist's work is not. In an age dominated by the "ethic" of the marketplace, it follows that the humanist is bound to do poorly. If forced into the marketplace, he will be forced to sell himself. Now I'm not saying that scientists are crass and humanists noble (I'd be absolutely *overwhelmed* by evidence to the contrary!), but that the public perception of their roles creates a bad situation for the scholar of either kind. I've heard scientists, including our recent Nobel Prize winner John Polanyi, complain about how highly touted work has been conducted in spite of the demands of the marketplace and academic salesmen, and about the ways in which directions of research have been disturbingly altered by the pressure of granting agencies. This is no new situation, but that doesn't make it any less of a threat. As computing humanists we're caught in the middle, but in an important sense not between two opposed scholarly communities. Like the scientists we need money for equipment, but we are very new to the game of how to get it without becoming slavish creatures of the marketplace, and thus of the lowest common denominator of public opinion. Our role is Socratic, but how do we avoid the hemlock? Watch out for those who would demolish tenure. Progress (which sells because it holds out a soporific hope) is not our most important product or aim. We don't so much go where no man has gone before as continually return to basic questions.
So a humanities computing that furnishes us with tools to do what the best of us have always done, but do it more efficiently, indeed do it at all, is a discipline worth following. As Nancy Ide has said or implied, one important effect of humanities computing is to subject formerly intuitive methods to algorithmic scrutiny, so to make conscious some of what has been subconscious. I don't think that means our results will be either less tentative or less imaginative, but it does mean that we may know ourselves as scholars better. The danger from within is that as champions of the observable phenomena from which algorithms are constructed we will lose sight of the unobservable and so trivialize our disciplines. The danger from without is that like Esau we will sell our birthright for a bowl of yummy pottage. On the other hand, the potential of humanities computing both for the humanities *and* for computer science is very great and very exciting. I hope this is not too woolly for HUMANISTs at large. I do think we're in a good position to trouble ourselves and our colleagues with gritty questions about basic issues. Very few others seem to be doing it. From: GUEST4@YUSOL Subject: McCarty on Scholarship, Pottage, etc. Date: Mon, 17 Aug 87 17:41 EDT X-Humanist: Vol. 1 Num. 199 (199) Bring on the grits! But first, perhaps, a little of the nitty. Does anyone remember an explanation of why the BIOGRAFY file is being spruced up for publication in the ACH journal? Does it make spicy reading? Is it representative of the latest algorithmic pulse of the high-tech humanists of our time? Does it help, or hinder, the free exchange so natural in this new medium to know that sooner or later, all our vital statistics and perhaps even some of our knottier comments will be grabbed off to fill the (otherwise uncrowded?) pages of some institutional organ or other? 
Will it help raise funds for this enterprise, or simply build an image, and distribute a useful free mailing list, for those otherwise not well enough occupied with (dare one utter the word?) their own (algorithmic or other) scholarly pursuits? Doubtless none of the above suspicions has any foundation in fact. Then why not a brief rationale for jumping into print so soon to advertise who the early joiners of HUMANIST happen to be? Behind all this flippancy lurks another question, really quite a sincere one, stemming from genuine ignorance and wishful optimism. That is, does anyone feel able to outline some of the new wrinkles in textual studies or other branches of humanities research that but for ubiquitous computerization would not, could not, have existed, and why we would all have been the poorer for it? I don't really begrudge the premise, if indeed it's true. But the first few instances that did come to my mind turned out to be pretty quickly traceable to scholars or practices already known to be under way without computers. I for one would welcome some facts to bolster this argument for computers in the humanities, next time I too am tempted to advance it. --Sterling Beckwith From: Willard McCarty Subject: Junk on the rebound from wiscvm Date: 17 August 1987, 19:23:36 EDT X-Humanist: Vol. 1 Num. 200 (200) A temporarily bad address on the ARPA network, in collusion with crude software on a VM machine in Wisconsin, has resulted in a small flood of junk mail for HUMANISTs. Brute-force methods have been applied to stop additional junk from landing in your readers. Your friendly anthropomorphic peripheral interface extends the necessary apologies. From: Dr Abigail Ann Young 1-416-585-4504 Subject: Date: 18 August 1987, 10:13:11 EDT X-Humanist: Vol. 1 Num. 201 (201) Do any HUMANISTS know of a text archive with medieval Latin exegetical texts in machine-readable form, e.g., Alcuin, Bede, Rupert of Deutz, Thomas Aquinas, etc.? Or even patristic exegetical texts?
I have tried communicating with CETEDOC at Louvain and haven't had a reply on this point. I am interested in analyzing one of the perennial problems in the history of western biblical commentaries, the so-called senses of Scripture, by using computer analysis but need texts! Any replies should, I think, be sent direct to me, rather than posted to HUMANIST generally. Thank you. Abigail Ann Young University of Toronto YOUNG at UTOREPAS From: KRAFT@PENNDRLN Subject: textual studies Date: Tuesday, 18 August 1987 1056-EST X-Humanist: Vol. 1 Num. 202 (202) No punches pulled. I was a bit put off by the tone of Sterling Beckwith's comments on the plan to publish the brief biographies. I, for one, find this sort of information very helpful for seeking advice, writing grant proposals (and suggesting possible referees), and referring information seekers, to mention only some obvious uses. Maybe it wouldn't need to be published, but there are many people out there who are not on e-mail and who might find it useful. So do it. As for the "new" advantages of computer assisted textual studies, we at CCAT (Center for Computer Analysis of Texts) have found many. The CATSS (Computer Assisted Tools for Septuagint Study) project out of which CCAT to some extent has emerged is able to do careful and complete study of "translation technique" (e.g. between the Hebrew and Greek biblical materials, or Greek and Latin, or Greek and Coptic) at a level that is theoretically possible without computer, but would hardly have been attempted. We are also engaged in encoding all available textual variants from the hundreds of ancient manuscripts for the Greek Bible to enable complete analysis and production of data in various forms to assist the creation of new critical editions. Again, theoretically possible otherwise, but not likely to be done at this level.
We have done semi-automatic morphological analysis of some of these materials, and will coordinate the various elements (text, analysis, variants, translation alignment) in such a way as to afford gigantic leaps forward in philological, textcritical, cultural-linguistic, and (hopefully) historical research. Although we are just beginning to look ahead to next stages, with the growing availability of large bodies of such data on CD-ROM, it is clear to us that digitization combined with character oriented data will put such studies as paleography, papyrology and codicology on new bases that could hardly be accomplished more than sporadically with pre-computer technology (e.g. precise mapping of handwritten letterforms, and careful comparison of such; "fingerprinting" of papyri striations to help match fragments; shadowing and enhancement techniques to assist with reading palimpsests or badly damaged materials). Much of this can, of course, overflow into basic instructional approaches (especially when combined with sound and pictures/pictorial graphics) and/or general enhancement of the use of texts for whatever reasons -- e.g. being able to browse a text in a "foreign language" (or even in one's native tongue) and call into a window a lexical entry to assist with meaning, and (in another window) a grammatical analysis, or even translation equivalents in other languages (and, for that matter, variants). Sure, it could all be done without computer in some theoretical sense, but certainly not as quickly, conveniently, thoroughly, etc. Some would call all this "drudgery." Perhaps so. But it is foundational to other applications, and I would certainly be "poorer" in my own work (on Jewish and Christian literature and history in the Greco-Roman period) without it! I suspect that similar things could be said about text dependent research in general. Bob Kraft (CCAT) From: CHURCHDM@VUCTRVAX Subject: HUMANITIES BULLETIN BOARD Date: Wed, 19 Aug 87 12:56 CDT X-Humanist: Vol.
1 Num. 203 (203) Frank Borchardt at Duke told me I should get on your bulletin board. Please send me a message telling when the board is accessible and how I can get on it and use it through BITNET. Thanks, Dan M. Church, Dep't of French & Italian, Vanderbilt University From: Steve Younker - Postmaster Subject: Humanist Test Date: Wed, 19 Aug 87 14:49:10 EDT X-Humanist: Vol. 1 Num. 204 (204) I trust all the people on the HUMANIST subscription will excuse this short test message. Steve Younker, Postmaster - University of Toronto Postmaster - HUMANIST From: Willard McCarty Subject: The biographies in print? Date: 19 August 1987, 15:59:38 EDT X-Humanist: Vol. 1 Num. 205 (205) Nancy Ide's proposal to print the biographies, recently endorsed by Bob Kraft, has created a minor flurry of comments and caused a fair number of revisions to be made. It's fascinating and significant that several HUMANISTs have felt compelled to take the expansive and playful elements out of their biographies, as it were to put them in their Sabbath best for presentation to company. I find in this ample justification for an electronic forum, where homo ludens is as much at home as homo sapiens (though those two are, of course, aspects of each other). The motivation for printing the biographies seems to me a good one. Ide and Kraft have both suggested that those of us who don't yet have access to e-mail might profit from reading them as much as we will. On the other hand, many of you wrote yours not suspecting that they would be published in any fashion. So, I leave the matter with you. If anyone wants to discuss this in public, please do so. If you merely want your biography deleted from the collection to be printed, please send me a note directly; if you want a revised version printed, send me the revision. I'm not yet certain of the Newsletter's deadline but will let you know. From: Willard McCarty Subject: An introductory guide to HUMANIST Date: 19 August 1987, 20:55:32 EDT X-Humanist: Vol. 
1 Num. 206 (206) Following is a revised welcome message, expanded to such an extent that I'm now calling it *A Guide to HUMANIST*. I'd be grateful if you would send me any comments or suggestions for improvement. One shortcoming I'm especially conscious of is its "monolingual" preoccupation with Bitnet e-mail. Help from anyone familiar with the workings of other networks now used by HUMANISTs (e.g., JANET, uucp, ARPA) would be very welcome. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ A Guide to HUMANIST +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ C O N T E N T S I. Nature and Aims II. How to use HUMANIST A. Sending and receiving messages B. Conventions and Etiquette C. Distributing files D. ListServ's commands and facilities E. Suggestions and Complaints From: Subject: Date: X-Humanist: Vol. 1 Num. 207 (207) I. Nature and aims From: Subject: Date: X-Humanist: Vol. 1 Num. 208 (208) Welcome to HUMANIST, a Bitnet/NetNorth/EARN discussion group for people who support computing in the humanities. Those who teach, review software, answer questions, give advice, program, write documentation, or otherwise support research and teaching in this area are included. Although HUMANIST is intended to help these people exchange all kinds of information, it is primarily meant for discussion rather than publication or advertisement. HUMANIST is an activity of the Special Interest Group for Humanities Computing Resources, which is in turn an affiliate of both the Association for Computers and the Humanities (ACH) and the Association for Literary and Linguistic Computing (ALLC). Although participants in HUMANIST are not required to be members of either organization, membership in them is highly recommended. In general, HUMANISTs are encouraged to ask questions and offer answers, to begin and contribute to discussions, to suggest problems for research, and so forth. 
One of the specific motivations for establishing HUMANIST was to allow people involved in this area to form a common idea of the nature of their work, its requirements, and its standards. Institutional recognition is not infrequently inadequate, at least partly because computing in the humanities is an emerging and highly cross-disciplinary field. Its support is significantly different from the support of other kinds of computing, with which it may be confused. It does not fit easily into the established categories of academia and is not well understood by those from whom recognition is sought. Apart from the general discussion, HUMANIST encourages the formation of a professional identity by maintaining an informal biographical directory of its members. This directory is automatically sent to new members when they join. Supplements are issued whenever warranted by the number of new entries. Members are responsible for keeping their entries updated. The directory and its supplements may be printed in the Newsletter of the ACH unless individuals declare otherwise. Those from any discipline in or related to the humanities are welcome, provided that they fit the broad guidelines described above. Please tell anyone who might be interested to send a message to me, giving his or her name, address, telephone number, and a short biographical description of what he or she does to support computing in the humanities. This description should cover academic background and research interests, both in computing and otherwise; the nature of the job this person holds; and, if relevant, its place in the university. From: Subject: Date: X-Humanist: Vol. 1 Num. 209 (209) II. How to Use HUMANIST From: Subject: Date: X-Humanist: Vol. 1 Num. 210 (210) A. Sending and receiving messages ----------------------------------------------------------------- Currently anyone given access to HUMANIST can communicate with all other members without restriction. 
A member need not be on Bitnet but can use any comparable network with access to Bitnet. Thus, to send mail to everyone simultaneously, use whatever command your system provides (e.g., NOTE or MAIL) addressed to HUMANIST at UTORONTO. Your message is then sent by your local software to the UTORONTO node of Bitnet, where the "Revised List Processor" (or ListServ) automatically redirects it to everyone currently on the list of members. Because ListServ is automatic, HUMANIST is subject to inadvertent abuse, and a certain amount of "junk mail" is inevitable. With the number of members world-wide, using many different systems on several different networks, the possibilities for error are not inconsiderable. Membership in HUMANIST thus requires patience with fallible human artifacts and regular attention to one's incoming e-mail. Otherwise the accumulation can be burdensome. [Please note that in the following description, commands will be given in the form acceptable on an IBM VM/CMS system. If your system is different, you will have to make the appropriate translation.] ----------------------------------------------------------------- B. Conventions and Etiquette ----------------------------------------------------------------- Restricted conversations or asides can, of course, develop from the unrestricted discussions on HUMANIST by members communicating directly with each other. This is particularly recommended for replies to general queries, so that HUMANIST and its members are not burdened with messages of interest only to the person who asked the question and, perhaps, a few others. If, for example, one of us asks the rest about the availability of software for keeping notes in Devanagari, suggestions should be sent directly to the questioner's e-mail address, not to HUMANIST. A questioner who receives one or more generally interesting and useful replies should consider gathering them together with the original question and submitting the collection to HUMANIST. 
(Please note that the REPLY function of some electronic mailers will automatically direct a user's response to HUMANIST, not to the original sender. Thus REPLY should be avoided in many cases. This is particularly true for systems that allow automatic replies, for example, in cases in which the user is temporarily unable to attend to his account.) Use your judgment about what the whole group should receive. We could easily overwhelm each other and so defeat the purpose of HUMANIST. Strong methods are available for controlling a discussion group, but the lively, free-ranging discussions made possible by judicious self-control seem preferable. Controversy itself is welcome, but what others would regard as tiresome junk-mail is not. Courtesy is still a treasured virtue. Make it an invariable practice to help the recipients of your messages scan them by including a SUBJECT line in your message. Be aware, however, that some people will read no more than the SUBJECT line, so you should take care that it is accurate and comprehensive as well as brief. Use your judgment about the length of your messages as well. If you find yourself writing an essay or have a substantial amount of information to offer, it might be better to follow one of the two methods outlined in the next section. ----------------------------------------------------------------- C. Distributing files ----------------------------------------------------------------- HUMANIST offers us an excellent means of distributing written material of many kinds, e.g., reviews of software or hardware. (Work is now underway to provide this service for reviews.) Although conventional journals remain the means of professional recognition, they are often too slow to keep up with changes in computing. With some care, HUMANIST could provide a supplementary venue of immediate benefit to our colleagues. There are two possible methods of distributing such material. 
More specialized reports should probably be reduced to abstracts and posted in this form to HUMANISTs at large, then sent by the originators directly to those who request them. The more generally interesting material in bulk can be sent in an ordinary message to all HUMANISTs, but this could easily overburden the network so is not generally recommended. We are currently working on a means of centralized storage for relatively large files, such that they could be fetched by HUMANISTs at will, but this means is not yet fully operational. At present the only files we are able to keep centrally are the monthly logbooks of conversations on HUMANIST. See the next section for details. ----------------------------------------------------------------- D. ListServ's Commands and Facilities ----------------------------------------------------------------- As just mentioned, ListServ maintains monthly logbooks of discussions. Thus new members have the opportunity of reading contributions made prior to joining the group. To see a list of these logbooks, send the following command: TELL LISTSERV AT UTORONTO SENDME HUMANIST FILELIST (Note that in networks that do not allow interactive commands to be given to a Bitnet ListServ, the same thing can be accomplished by sending a message to HUMANIST with the command as the first and only line. This will result in junk-mail for everyone else, but so be it.) The logbooks are named HUMANIST LOGyymm, where "yy" represents the last two digits of the year and "mm" the number of the month. The log for July 1987 would, for example, be named HUMANIST LOG8707, and to get this log you would issue the following command: TELL LISTSERV AT UTORONTO GET HUMANIST LOG8707 ListServ accepts several other commands, for example to retrieve a list of the current members or to set various options. These are described in a document named LISTSERV MEMO.
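As a modern illustration only (no such program existed on these systems; the function names here are hypothetical), the LOGyymm naming convention and the interactive fetch command described above can be sketched in Python:

```python
def logbook_name(year, month):
    """Build a HUMANIST logbook filename per the LOGyymm convention:
    "yy" is the last two digits of the year, "mm" the zero-padded month."""
    return f"HUMANIST LOG{year % 100:02d}{month:02d}"

def fetch_command(year, month, node="UTORONTO"):
    """Compose the IBM VM/CMS interactive command to retrieve a logbook."""
    return f"TELL LISTSERV AT {node} GET {logbook_name(year, month)}"

# The July 1987 example from the guide:
print(logbook_name(1987, 7))   # HUMANIST LOG8707
print(fetch_command(1987, 7))  # TELL LISTSERV AT UTORONTO GET HUMANIST LOG8707
```

On networks without interactive Bitnet commands, the same string would instead be sent as the first and only line of an ordinary mail message, as the guide notes.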
This and other documentation will normally be available to you from your nearest ListServ node and is best fetched from there, since in that way the network is least burdened. You should consult with your local experts to discover the nearest ListServ; they will also be able to help you with whatever problems in the use of ListServ you may encounter. Once you have found the nearest node XXXXXX, type the following: TELL LISTSERV AT XXXXXX INFO ? The various documents available to you will then be listed. ----------------------------------------------------------------- E. Suggestions and Complaints ----------------------------------------------------------------- Suggestions about the running of HUMANIST or its possible relation to other means of communication are very welcome. So are complaints, particularly if they are constructive. Experience has shown that an electronic discussion group can be either beneficial or burdensome to its members. Much depends on what the group as a whole wants and does not want. Please make your views known, to everyone or to me directly, as appropriate. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Willard McCarty, 20 August 1987 Editor of HUMANIST, Centre for Computing in the Humanities, University of Toronto (MCCARTY@UTOREPAS.BITNET) From: GUEST4@YUSOL Subject: why not wait until the questionnaire? Date: Wed, 19 Aug 87 23:26 EDT X-Humanist: Vol. 1 Num. 211 (211) Good the motives may well be, but what are they? Does one join an electronic conference just to provide tantalizing tidbits for those who do NOT have access to e-mail? This is only one of many confusing and seemingly contradictory aspects of the proposal to put personal identifying information from the early stage of this particular new electronic conference into a print journal that serves a different membership and a different (albeit somewhat overlapping) function. 
The issues about the difference between print and electronic media raised so delicately by McCarty seem no less critical. Without much better explanations than have so far been supplied, would it not make sense to keep these issues going as proper food for our discussion, while awaiting some kind of standardized questionnaire, which I gather is already in the works, before going public with our personal data? Better yet, such a format might well produce just the sort of machine-readable material about ourselves that could (who knows?) one day power yet another "giant leap forward" for the computerized humanities! S. Beckwith (who would include a digitized passport photo with each bio) From: GUEST4@YUSOL Subject: Why not wait for the questionnaire? Date: Wed, 19 Aug 87 23:41 EDT X-Humanist: Vol. 1 Num. 212 (212) (REPLY to W. McCarty re printing bios) Good the motives may well be, but what are they? Does one join an electronic conference just to provide tantalizing tidbits for those who do NOT have access to e-mail? This is only one of many confusing and seemingly contradictory aspects of the proposal to put personal identifying information from the early stage of this particular new electronic conference into a print journal that serves a different membership and a different (albeit somewhat overlapping) function. The issues about the difference between print and electronic media raised so delicately by McCarty seem no less critical. Without much better reasons for haste than have so far been supplied, would it not make sense to keep these issues going as proper food for our discussion, while awaiting some kind of standardized questionnaire, which I gather is already in the works, before going public with our personal data? Better yet, such a format might well produce just the sort of machine-readable material about ourselves that could (who knows?) one day power yet another "giant leap forward" for the computerized humanities! -- S.
Beckwith (who would include a digitized passport photo with each bio...) From: KRAFT@PENNDRLN Subject: microform scanning Date: Thursday, 20 August 1987 0952-EST X-Humanist: Vol. 1 Num. 213 (213) Does anyone have information about scanners capable of converting microfilm and/or microfiche images directly into electronic form (without going through a hardcopy stage)? I have heard that such things exist, but have never found an address or telephone number to contact for details. Bob Kraft From: Willard McCarty dartvax!psc90!jdg@ihnp4 (Dr. Joel Goldfield) Subject: Joel Goldfield on new things under the sun Date: 22 August 1987, 17:03:01 EDT X-Humanist: Vol. 1 Num. 214 (214) ------------------------------------------------------------------------- Dear Colleagues, I appreciate Willard's having sent on Abigail Young's thoughtful comments on humanities computing. My apologies to Willard for his having to forward this comment since my gateway/node to BITNET is somewhat dyslexic. Abigail and others may find the upcoming book of essays edited by Rosanne Potter (probably to be published by U. of Pennsylvania Press) to be useful in determining whether the computer lets us do "something new." This volume will consist of approximately one dozen essays, scholarly "eggs" containing a variety of literary applications (English, French, German, etc.) of computer research. They are probably on that "cutting edge" which we should consider in our humanities computing repertoire before coming to an initial decision on whether or not this "newness" exists. Personally, I find that if the process is new and speeds up my research, and thus increases the amount of information I can evaluate and incorporate in my literary analyses, it is a valuable addition to my work. Often, speed with accuracy in research allows us to discover and evaluate more efficiently.
The results I obtain seem richer and more accurate in substantiating the "big picture" as well as "trees in the forest" conclusions for which I strive. Sincerely, Joel D. Goldfield Plymouth State College (NH, USA) From: ENGHUNT@UOGUELPH Subject: Date: 23 August 1987, 20:25:59 EDT X-Humanist: Vol. 1 Num. 215 (215) I have to second Joel's recent comment: the increase in speed and accuracy is a principal advantage of computer-assisted humanities research. Helping us to do faster and more accurately the things that we already do is a definite advantage of the new technology. The "cutting edge", however, is a different matter. I think of the renaissance astronomers with their "new technology" and look at the results. The ones who reshaped the world weren't those who simply used the technology to help them to do what they already did; the ones who reshaped the world were those who used the technology to do -- and to discover -- things that had not been done -- or discovered -- by doing things the way they had been done in the past. The real contributions of computer technology to humanities research are only going to appear when we begin to use the tool to do things we haven't been able to do before, either because they were too time-consuming or labour-intensive -- as someone already has suggested in these discussions -- or because we have not been able, until now, to conceive of them. From: Willard McCarty Subject: Renaissance astronomers & Renaissance poets Date: 23 August 1987, 20:57:41 EDT X-Humanist: Vol. 1 Num. 216 (216) Stuart Hunter has replied to the continuing discussion on humanities computing by citing the example of Renaissance astronomers, who shook up the old world with new discoveries. The problem with this analogy to our use of computers lies mostly (here it is once more!) with the practical differences between the conduct of the sciences and of the humanities.
Galileo and company were scientists, very much interested in going where no man had gone before. We, however, are much closer to Milton, who may have visited Galileo in his pontifical confinement and who wrote him into Paradise Lost. Milton apparently had read everything ancient and modern by the time of his blindness, was very much aware of contemporary discoveries of all sorts, yet he turned these discoveries to the eternal problems explored in his poetry. Critics have been puzzled by the odd mixture of Ptolemaic and Copernican astronomy in PL, but the puzzle vanishes as soon as you realize that both were equally grist for his mill, that as a poet he wasn't espousing one astronomy or the other as the "truth" but using both as metaphors. With Joel Goldfield I'm also looking forward to the publication of Rosanne Potter's book with its several examples. Meanwhile, what seems to have come out in this discussion are two suggestions: (1) that it's too early to tell what the scholarly contribution of humanities computing may be; and (2) that h.c. has already established itself in the scholarly world by allowing us to do things that were theoretically possible before but would not have been done very thoroughly or at all because they were so laborious. Both seem true to me. In my own work (on the classical antecedents to Milton) it certainly seems that by using a computer to collect and arrange thousands of extracts from classical texts I've been able to do things otherwise beyond my powers. This brings up a corollary to the second point just mentioned. There are and have been classicists capable of recalling and arranging such extracts without a computer, but they were not Miltonists as well. So, by using a computer I've been able to wander far beyond the limits that would otherwise constrain me and thus to do (I flatter myself to think) exciting work. At least the work excites me, and some of it gets published. 
To generalize, then, computational methods favour cross-disciplinary research. For another, simpler example, take the Thesaurus Linguae Graecae, a great thesaurus indeed. Using it, say on an Ibycus microcomputer, someone with a rudimentary knowledge of Greek can do things utterly impractical if not unthinkable without it. Of course the thinking still has to be done, but the TLG gives us a wealth of material to think with. The same (and more) could be said of the Responsa database of rabbinic Hebrew, a 70-million word corpus covering a millennium, because of its wonderful tools for morphological analysis. Am I wrong to presume that the aim of philology and textual editing is to provide reliable texts on which we can base interpretations of the past and its culture? Could our parallel aim be to provide various reliable ways of manipulating these texts? Our case is somewhat different, since to a much greater extent we must also be interpreters. [65 lines] From: ENGHUNT@UOGUELPH Subject: Date: 24 August 1987, 10:29:26 EDT X-Humanist: Vol. 1 Num. 217 (217) Willard puts his finger on the nub of the problem when he asks, in his final comment: "Could our parallel aim be to provide various reliable ways of manipulating these texts? Our case is somewhat different, since to a much greater extent we must also be interpreters." To clarify the place of computing in the humanities we have to begin, I would argue, with a clear idea of what it is that we in the humanities are trying to do. If the aim of textual editing and of philology is "to provide reliable texts on which we can base interpretations of the past and its culture" then the place of computing can be viewed as that of a tool that aids one in the process of compiling the texts, comparing the variants, and, ultimately, producing the final product, the "reliable texts."
In this it is clear that the philologist and editor are using the technology to do better and to do faster the things that they have always done -- or tried to do. What, though, is the broader aim of the literary scholar? To use Willard's description of his work on Milton as an example, exactly what is it that he is trying to do? If his task is simply "to collect and arrange thousands of extracts from classical texts," then again the computer becomes a tool that enables us to do things better and faster. Willard goes on, though, to point toward the more difficult problem when he says, at the end of his note, that "Our case is somewhat different, since to a much greater extent we must also be interpreters." The question that has to be addressed, it seems, is how, to what extent, in what ways, can the technology assist us in the task of interpretation? In order to answer that question, we need to begin from a clear idea of the process of interpretation. Exactly what do we do when we "interpret" a text? What kinds of questions do we ask? What kinds of mental processes are involved? How can those processes be duplicated by computer programmes? Hopefully Rosanne's collection of essays will include not only the "look at how I've been able to speed up my work" kind of paper but also the "this is the goal of my scholarship and this is the process used to reach that goal" kind of paper, the latter being clearly focussed on the process AS RELATED TO THE GOAL. From: Mark Olsen ATMKO at ASUACAD Subject: Vanilla SNOBOL4 Date: Mon, 24 Aug 87 09:37:19 MST X-Humanist: Vol. 1 Num. 218 (218) Catspaw Inc. has recently released a public domain version of SNOBOL4+ called Vanilla SNOBOL4. It is a stripped down version of SNOBOL4+ that can address 30 kilobytes of program and data space, does not support extended features of the language, but conforms to the so-called "Green Book" SNOBOL4 standard. The package also has a 150 page manual on disk.
Catspaw has attempted to provide a usable product while leaving enough out of the PD version to encourage users to purchase the full implementation. Vanilla SNOBOL4 may be very useful for teaching students SNOBOL programming on a micro without purchasing a site license or multiple copies of the program. You can order copies from Catspaw directly (I believe there is a $15.00 charge for shipping, copy and diskette) or send me a formatted 5.25 inch IBM-PC floppy diskette and a self-addressed, stamped diskette mailer, and I will send a copy to you. Catspaw can be contacted at: Catspaw, Inc. P.O. Box 1123 Salida, Colorado 81201 USA (303-539-3884) (emmer@arizona.edu) I can be contacted at: Humanities Computing Facility Department of English Arizona State University Tempe, AZ 85287 Mark Olsen *** end of message *** From: Mark Olsen Subject: Do you have....? Date: Mon, 24 Aug 87 12:10:19 MST X-Humanist: Vol. 1 Num. 219 (219) I have had several requests for the following texts from faculty at Arizona State University: Milton's Paradise Lost, Erasmus' Praise of Folly, and More's Utopia. We are prepared to scan these, but I'd certainly like to avoid having to scan something that long. I would like to know if these texts are in computer readable format and, if so, what it would cost to obtain copies. Thanks, Mark Olsen From: KRAFT@PENNDRLN Subject: interpretation Date: Monday, 24 August 1987 1920-EST X-Humanist: Vol. 1 Num. 220 (220) If, as I would argue, the PRIMARY task of interpretation is to try to understand the target text/author on its own terms (philologically, lexically, historically, etc.), then the use of computer as a fast and accurate tool to recover or uncover relevant data and its use as an aid to interpretation are two sides of the same coin -- or maybe two facets of the same gem -- not so? If it can take us farther (but in what directions, and why?), that would be a bonus.
But just as before (or without) the machine, so with the machine, awareness of our own assumptions and aims is probably the most important factor in assessing how we do the job, and how well we do the job. Is "science" really so different? Das Wesen der Wissenschaft ist Methode (Paul de Lagarde; with apologies if I fractured the Deutsch -- "The essence of scholarly research is its method"). Bob Kraft From: GUEST4@YUSOL Subject: "Are these texts to be had in machine-readable form?" Date: Mon, 24 Aug 87 20:17 EDT X-Humanist: Vol. 1 Num. 221 (221) A Code of Computer Ethics, if it existed or could be cajoled into being by HUMANISTs, might specify that anyone who scans (or hires third-world slave labor to encode) a complete literary text MUST send word of its existence and whereabouts in such a format to some central, on-line clearing-house or bulletin-board (ACH? HUMANIST itself?) Who is now working on THAT bit of publishity or grantswomanship, one wonders??? Surely Mr. Olsen and others unnumberable would bless and thank them often. Or so it seems to S. Beckwith From: KRAFT@PENNDRLN Subject: centralized information on texts Date: Monday, 24 August 1987 2027-EST X-Humanist: Vol. 1 Num. 222 (222) Surely most HUMANISTS know how to start locating available texts. The most obvious and extensive general collection in the English speaking world (probably in the world at large) is the Oxford Archive, the catalogue of which can be obtained via electronic mail from Lou Burnard. The Rutgers Inventory Project has been gathering information about encoded texts for years, and will circulate whatever results are currently available. Rutgers has not attempted to gather all the texts at this point. The Center for Computer Analysis of Texts at Penn has tried to gather texts, has applied for funding to do so (with mixed results), and is putting what has been gathered thus far onto a CD-ROM for distribution this Fall. Milton's Paradise Lost is among those texts.
My earlier query about cooperation and consortial arrangements is directly relevant to this issue of information and availability. I am astounded that only three responses from HUMANISTS were received, and I wonder who has any real interest in such matters? Must we continue the humanistic tradition of isolated individualism in this time of new opportunity? Get a form from Rutgers to register the text you have encoded. Send it to Oxford and/or CCAT if you are willing to make it available (even with conditions) to others, or to another cooperating center -- Thesaurus Linguae Graecae for Greek materials (U.Cal, Irvine; Theodore Brunner), Thesaurus Linguae Latinae for (classical) Latin (Packard Humanities Institute; Stephen Waite), ARTFL for French (Univ. of Chicago; Robert Morrissey), etc. (with apologies to the collectors or collections I've overlooked -- speak up so we all know what you are doing!). And say something if you think this is an important function for cooperation/coordination, and for seeking funding collectively. Bob Kraft From: GUEST4@YUSOL Subject: I knew it! Let MRTH abound! Date: Mon, 24 Aug 87 21:11 EDT X-Humanist: Vol. 1 Num. 223 (223) There HAD to be something of the sort already up and running, and in going over old HUMANIST mail, I see mention of a Rutgers MRTH Project, the acronym for which even I can decipher. Now can anyone lead me gently by the nose to more information about this happy endeavor? With thanks, S. Beckwith From: ENGHUNT@UOGUELPH Subject: Date: 24 August 1987, 23:18:04 EDT X-Humanist: Vol. 1 Num. 224 (224) I don't have my OTA file on my home machine, but if Oxford does not have either the Erasmus or the More, you might try contacting George Logan at Queens for the More. From: LOU@VAX.OXFORD.AC.UK Subject: text archive catalogue Date: 25-AUG-1987 11:05:27 GMT X-Humanist: Vol. 1 Num. 225 (225) The new edition of the Oxford Text Archive catalogue is now available on request. 
It lists over a thousand different texts, many of which are available from Oxford. It includes information supplied by CCAT at Penn, BYU, the ILC at Pisa and the LLC at Cambridge as well as over a dozen contact addresses for other Archives. I would be delighted to send a copy somewhere everyone could get at it, if someone would tell me where! (Particularly since we are currently undergoing the trauma of installing a new VAX system at Oxford, and our PSE - i.e. the magic gadget that links us to the rest of the world - has chosen this week to go on the blink) In response to St. Beckwith: (1) the MRTH project is still going on, collecting detailed cataloguing information about machine readable texts. These will (I believe) eventually be made available over RLIN. It is also planned to publish the current state of their file as part of Joe Raben's forthcoming "Electronic Scholars Resource Guide". They supplied us (OTA) some time ago with paper copies of some of their records; these are unfortunately not in our catalogue because I couldn't face the thought of typing it all in by hand when there was at least a faint chance they might have the sense to send us a machine readable list one day. Mea culpa! (2) I endorse everything everyone has to say about telling the rest of the world what they're doing/preparing. I do my best! If you deposit a text in the Text Archive you can be sure that (a) its availability will be as widely circulated as possible, and (b) any copies made of it will be recorded. If you want to know who's worked with machine readable texts of what, we can tell you. I get roughly 20 enquiries a month, and have other jobs to do as well, so bear with me... We have copies of two of the three texts requested, by the way... Lou Burnard From: Undetermined origin c/o Postmaster Subject: Text archiving, data sharing Date: 25 August 1987 12:15:48 CDT X-Humanist: Vol. 1 Num.
226 (226) I'd like to second all the admirable sentiments expressed by HUMANISTs recently about the registration of texts and above all about the sharing of texts. It seems a shame to me that there is no central facility in the U.S. comparable to the Oxford archive, to take at least some of the burden off Lou Burnard's shoulders. But it seems an even bigger shame that it is not regarded as a matter of course to deposit texts with OTA or some other archive (perhaps with all the archives that collect in the relevant field), as soon as they are in a presentable state. There are, I'm sure, a lot of reasons that this doesn't always happen now: ignorance, the belief that a given text isn't yet clean enough to make public, fear of being taken advantage of, and more. My proposals toward a set of principles for making, sharing, and using machine-readable texts would include:
1. If you make a machine-readable text, deposit it with an archive, whether you wish to share it right away or not. After you die or retire, you won't care anymore, and the archive should have it then.
2. Make the text available to others (i.e. give the archive permission to distribute it). The archive should be willing either to keep the originator of the machine-readable version of the text informed of copies distributed, or to require borrowers to get prior permission from the originator, at the originator's option. (Oxford does this.) That way you can keep track of who is using the text, and if you have a suspicious nature you can even quiz them on their motives before giving permission to borrow it. You can also reserve the data for your own use for a while, until you have got your analysis well underway and don't fear being 'scooped' with your own data. (Personally I think this fear unrealistic -- the number of people in a position to produce quick printable results from a machine-readable text must be so small as to make the chances negligible. Still, I have heard this fear from several people.)
3. Machine-readable copies of texts should be in a documented encoding scheme (ACH is working on developing one of these, and comments from interested parties may be addressed to the undersigned) and include in the file itself (so it's still there even after the paper documentation is lost) the relevant information: who wrote the text, its title and date, bibliographic reference to the edition used as copy text, who made the machine-readable version, where, and when. Archives may wish to insert additionally their names and any revisions or re-formatting they have done.
4. When you use a machine-readable text, BOTH the originator of the data and the distributor (e.g. a text archive) should be given credit in a footnote, just as you would give credit to the editor and publisher whose edition of Shakespeare you were using. This is established practice in the social sciences (I am told), and has done a little bit to make people willing to publish their datasets. (Not enough, but a little.)
5. All of us should begin to discuss just how we want to go about setting up some central facility or network of central facilities to handle the collection, documentation, and eventual improvement and standardisation of data. It seems pretty clear that it's too expensive a task for individual schools to handle alone, so a consortium seems like a good plan. The details need a lot of discussion, but it seems to me the consortium will have to be self-supporting -- so we will need to find ways of making it cheaper for schools to join the group than for them to avoid joining. I have a number of ideas on this, but will save them for another note. This is already long enough. Michael Sperberg-McQueen, Univ. of Illinois at Chicago PS MRTH = Machine-Readable Texts in the Humanities. Look in the usual bibliographies for papers by Marianne Gaunt of Rutgers. From: Willard McCarty Subject: Method, Interpretation, and Humanities Computing Date: 25 August 1987, 22:22:39 EDT X-Humanist: Vol. 1 Num.
227 (227) It seems to me that Bob Kraft and Stuart Hunter have both identified something central to scholarship in our field. Kraft, you'll recall, pointed to the importance of method in all scholarship, and Hunter to the question of exactly how interpretation is done, what steps are followed. In doing my own work, a fair bit of which is now computerized, I've spent some effort observing what I do and asking myself what aspects of it are susceptible to computational methods. In doing this I've noted (as I'm sure many others have) how utterly dependent the programs that result from this kind of observation are on whatever literary critical, historiographical, or other theories the observer may hold. Bringing these into the light is, I think, usually a healthy thing, though not necessarily pleasant. (Does the Emperor have any clothes on? If he ostentatiously reads Greek in the Bodleian he probably doesn't.) If, as it seems to me, computational scholarship in the humanities tends to reveal the theoretical bases of interpretation, then we have a natural affinity for the study of criticism, historiography, and, in general, semiotics. As some of you doubtless know better than I, computational methods allow the semiotician to construct and improve upon models of interpretation. Others of us (like me) are content to computerize the observable aspects of our favourite methods because our interests really lie elsewhere, with the texts themselves & the hermeneutical act rather than theories of how texts are read. Computing in the humanities seems to be where many things meet. I continue to think that it is vital for this discipline to be practiced by people with independent scholarly interests. Like comparative literature, religious studies, and semiotics (this is a very Canadian statement), it is essentially a field populated by people from somewhere else, with the odd "native" specialist in theory. Comments? 
From: Willard McCarty Subject: Latest snapshot of the Oxford Text Archive Date: 26 August 1987, 11:39:15 EDT X-Humanist: Vol. 1 Num. 228 (228) Lou Burnard sends this listing of the contents of the Archive. When we figure out how to keep files centrally for distribution on request, information of this kind won't semi-automatically be mailed to everyone. From: Subject: Date: X-Humanist: Vol. 1 Num. 229 (229)
=========== OXFORD TEXT ARCHIVE SHORTLIST - SEPTEMBER 1987 ==================
This list contains author and title information only for all texts currently held in the Oxford Text Archive. It also contains information about their holdings supplied by the following other Archives:-
Ca Literary & Linguistic Computing Centre, University of Cambridge
Ph Center for Computer Analysis of Texts, University of Pennsylvania
Pi Ist. di Linguistica Computazionale, Universita di Pisa
Pr Humanities Research Center, Brigham Young University
--LEGEND---------------------------------------------------------------
Each entry has a prefix indicating :-
The "lead site" code for the text:- This two character code indicates the site from which the text originated or where it is held. Full details of addresses etc. for each site code used are given at the end of the list of texts.
The availability of the text at Oxford:-
U = generally available on receipt of signed user declaration;
A = prior written consent of depositor needed; contact Oxford
X = not available outside Oxford
= Address all enquiries direct to the Archive concerned
The TOMES identifier of the text. This number should be used to identify the text on order forms etc.
The size of the text:-
A = less than half a Megabyte (will probably fit on one diskette)
B = up to a Megabyte
C = up to 2 Megabytes
D = up to 5 Megabytes
E = greater than 5 Megabytes
= information not available
The following special characters are used for accents etc.:-
^ supershift (so ^a is ae ligature)
* acute accent
\ cedilla
{ grave
~ bar over
] umlaut
--ORDER PROCEDURE-------------------------------------------------------
** NB This applies to texts ordered from Oxford only ***
To order copies of U category texts you must send :-
(a) A signed completed order form
(b) Payment in advance
To order copies of A category texts you must also provide :-
(c) Written authorisation from the depositor of the text
Order forms are available from Oxford on request, as are Depositor addresses (for A category texts).
Electronic mail is the quickest way of reaching us:
archive @ uk.ac.ox.vax (JANET)
archive%vax.ox.ac.uk @ ukacrl.earn (BITNET)
archive%vax.ox.ac.uk @ ucl.cs.edu (EDU)
(but we do NOT normally issue texts over networks)
Telephone (less reliable):- +44 (865) 273238 [direct] or 273200 [switchboard]
Postal address (slow but sure): Oxford Text Archive, Oxford University Computing Service, 13 Banbury Road, Oxford OX2 6NN
Current charges:-
* 5 pounds per text plus 15 pounds media and packing (Europe) or 25 pounds (outside Europe) for each tape or diskette needed to complete an order
* PAYMENT MUST BE MADE IN ADVANCE AND WE CANNOT ISSUE INVOICES!
* Payments not made in sterling attract a surcharge of 10 pounds.
* All payments should be made to the account of OXFORD UNIVERSITY COMPUTING SERVICE.
---SHORTLIST------------------------------------------------------------
TOMES Database Snapshot taken on 26 Aug 1987
From: Subject: Date: X-Humanist: Vol. 1 Num. 230 (230) Collections, corpora &c Ox U 413 C | Collection of pre-Islamic verse Ox U 415 A | Early Arabic epistles Ox U 420 A | Modern Arabic prose samples Hamadhani Ox U 416 A | Poems From: Subject: Date: X-Humanist: Vol.
1 Num. 231 (231) Bible Ph 1109 A | Selected books, ed Stone From: Subject: Date: X-Humanist: Vol. 1 Num. 232 (232) Collections, corpora &c Ph 1117 A | Nag Hammadi Library Bible Ph 1108 A | Pauline Corpus (ed Horner) Ph 1107 A | Psalms (ed Budge) From: Subject: Date: X-Humanist: Vol. 1 Num. 233 (233) Anonymous Ca 905 A | Floris ende Blancefloer Collections, corpora &c Ca 907 B | Die Haager Liederhandschrift Le X 424 E | Eindhoven corpus of contemporary Dutch Ca 906 A | Liederen en Gedichten uit het Gruuthuse-Handschrift Ca 909 A | Van Vrouwen ende van Minne: Middelnederlandse gedichte Hadewijch Ca 831 A | 13th century mystic poems (ed. S.J. van Mierlo) Vondel, Joost van den Ca 910 C | Collected works From: Subject: Date: X-Humanist: Vol. 1 Num. 234 (234) Anonymous Ox U 535 A | Alliterative Morte Arthure Ox U 817 B | Anglo Saxon Chronicle (selections) Ox U 586 D | Anglo Saxon Poetic Records (ed Krapp & Dobbie) Ox U 814 A | Apollonius of Tyre (ed Goolden) Ox U 1 A | Arden of Faversham Ox U 816 A | Blickling homilies 7 8 and 19 (ed Morris) Ox U 36 A | Cursor mundi (Edinburgh ms) Ca 916 D | Domesday Book and its Satellites (parts) Ox U 664 A | Edmund Ironside Ox A 557 A | Englands Helicon Ca U 53 A | Erkenwald Ox U 3 A | Famous victories of Henry V Ca 935 C | Floris and Blauncheflur Ox U 9 A | King Leir and his daughters Ox U 33 A | Lenten sermons from MS BM Harley 2276 Ox U 658 A | Lyfe of Ipomydon (ed Ikegami) Ox U 170 B | Medi^aval devotional prose (mss in the Katherine group) Ca 929 A | Northern Homilies Cycle (Eustace, Oswald and Alexis legends) Ox U 815 A | Orosius' Histories (ed Sweet) Ox A 109 A | Owl and the nightingale Ox A 581 A | Pearl (ed Gordon) Ox A 1047 A | Peirce the Ploughmans Crede (ed Skeat) Ox U 10 A | Pricke of conscience (ed Morris) Ox U 279 A | Rauf Gilyear Ox U 62 A | Sir Gawayne and the grene knyght Ox U 11 A | Sir Thomas More Ox A 22 A | Speculum vit^a (BL Add 33995, ed Robinson) Ox U 813 A | St Augustine's soliloquies (ed Endter) Ox U 4 A | 
Taming of a shrew Ox U 290 A | The Asloan ms Ox U 403 A | The Bannatyre ms Ox U 414 A | The Maitland folio Ox U 595 A | The Tibetan book of the dead (translations) Ox U 283 A | The chapman and myllar prints Ox U 388 A | The complaynt of Scotland Ox U 7 A | Thomas of Woodstock Ox U 5 B | Troublesome reign of King John Ox A 697 B | Wycliffite sermons (further selections) Ox A 174 B | Wycliffite sermons (17 14th century sermons) Collections, corpora &c Ox U 159 C | American news stories Ox A 545 D | Anthology of 14 Canadian poets (ed Djwa) Ox X 685 E | Articles from the New Scientist (2.12.82-12.5.83) Ox U 401 B | Augustan prose sample Pr 1068 A | BYU corpus of US Constitutional writings Ox A 646 A | Berkshire Probate Inventories (ed C.R.J. Currie) Ox U 643 D | Birkbeck spelling error corpus Ox A 160 C | British Columbian Indian myths Be A 402 D | Brown corpus of present day American English Ox U 161 B | Civil War polemic (34 3000-word samples ) Ox U 163 E | Complete corpus of Old English (the Toronto D.O.E. Corpus) Ox U 164 C | Dedications etc. 
transcribed by Ralph Crane Ox U 668 D | Kucera-Francis wordlist (frequency count of text 402) Be A 167 E | Lancaster-Oslo-Bergen corpus of modern English (tagged, horizon Ox U 173 B | Lexis (samples of spoken English) Be A 168 D | London-Lund corpus of spoken English Lv A 555 E | Louvain corpus of modern English drama Be A 1046 B | Melbourne-Surrey Corpus of Australian English Ca 920 A | Methodist Letters (18th Century) Ox A 171 E | Michigan Early Modern English materials Ox U 172 D | Modern prose (15 2000-word samples) Ox U 701 B | Older Scottish Texts (The Edinburgh DOST Corpus) Ox U 1024 B | Records of Early English Drama (Selections) Ox U 166 E | Warwick corpus of written materials Dictionaries, &c Ox U 155 E | Collins English dictionary Ox U 571 D | English pronouncing dictionary (Daniel Jones) Ox U 1054 E | MRC Psycholinguistic database (Expanded SOED entries) Ox U 683 A | Oxford advanced learner's dictionary (parsed and tagged version Ox U 710 D | Oxford advanced learner's dictionary (expanded "Computer Usable Ox U 667 D | Oxford advanced learner's dictionary (untagged version) Ox U 154 E | Oxford advanced learner's dictionary (ed A.S. Hornby) Ox U 592 C | Oxford dictionary of music Ox U 398 D | Oxford dictionary of quotations Ox U 288 D | Oxford dictionary of current idiomatic English Ox U 157 D | Shorter Oxford dictionary (headwords only) Ox U 400 B | Thorndike-Lorge magazine count Ox X 559 E | Webster's 7th international dictionary (MARC format) Akenside, Mark Ox U 392 A | Pleasures of the imagination Ashford, Daisy Ox A 553 A | The young visiters Austen, Jane Ca A 12 C | Emma Ox U 13 C | Letters (ed R.W.
Chapman) Ca A 14 B | Northanger Abbey & Persuasion Ox U 16 B | Pride and prejudice Ox U 18 B | Sense and sensibility Austen, Jane (et al) Ox U 17 B | Sanditon Ayckbourn, Alan Ox U 425 A | Relatively speaking Barbour, John Ox U 218 A | The Brus Barnes, Barnabe Ox U 19 A | Sonnets Barnes, Peter Ox U 426 A | The ruling class Barstow, Stan Ox A 490 B | A kind of loving Baxter, David Ox U 427 A | Will somebody please say something Beaumont, Francis Ox A 611 A | The knight of the burning pestle Beckett, Samuel Ox A 1058 A | Company Ox U 20 A | Ping & Lessness Ox U 23 A | Waiting for Godot Bennett, Alan Ox U 428 A | Getting on Bermange, Barry Ox U 429 A | Oldenberg Berryman, John Ox U 24 A | Dream songs Berton, Pierre Ox X 684 B | Settling the West 1896-1914: The promised land Bible Ph U 1060 E | King James Authorised Version Ph U 1061 E | Revised Standard Version Bowen, John Ox U 430 A | After the rain Brennan, Michael Ox A 6 A | The war in Clare 1911-1921 Brenton, Howard Ox U 431 A | Christie in love Bruce, Michael Ox U 28 A | Collected poems Bullokar, John Ox U 25 A | Three pamphlets on grammar Byrne, John Ox A 543 A | Still life Ox A 541 A | The slab boys Ox A 542 A | Threads Cameron K.C. et al Ox A 662 A | The computer and modern language studies Campbell, Ken Ox U 466 A | Anything you say will be twisted Capgrave, John Ox A 536 A | Solace of pilgrimes Ox A 162 A | The life of St. Norbert Carlyle, Thomas Ox U 26 B | 200 selected prose samples Ox A 549 A | English and other critical essays Carroll, Lewis Ox U 27 B | Alice in Wonderland Cather, Willa Ox U 626 A | The professor's house Chapman, George Ox A 624 A | The revenge of Bussy d'Ambois Chaucer, Geoffrey Ox U 29 C | Canterbury Tales (ed Robinson) Ox X 704 B | Canterbury tales (ed N.F. 
Blake) Cheatle, Syd Ox U 432 A | Straight up Chesterfield, Earl of Ox U 30 A | The case of the Hanover forces in England Chettle, Henry Ox U 678 A | Kind heart's dream Ox U 675 A | The card of fancy Clough, Arthur Hugh Ox A 1045 A | Collected verse Coggan, Jean Ox A 251 A | Through the day with Jesus Coleridge, Samuel Taylor Ox A 538 B | Notebooks (ed Coburn) vols 1-3 Ox U 31 A | Poetical works (ed E.H. Coleridge) Collins, Wilkie Ox A 1056 B | The woman in white Collins, William Ox U 32 A | Odes & Eclogues Communist Affairs Ox A 492 C | Vol 1 Num 1 Jan 1982 Conrad, Joseph Ox U 627 A | Lord Jim Cooper, Giles Ox U 34 A | Everything in the garden Ox U 433 A | Happy family Cooper, Thomas Ox A 551 A | The life of Thomas Cooper Cowper, William Ox U 35 A | The task Cregan, David Ox U 434 A | The houses by the green Daniel, Samuel Ox U 37 A | Rosamund Darwin, Charles Ca 914 E | Collected Letters of Charles Darwin Ox U 632 A | Sketch of 1842 Davies, Robertson Ox A 661 A | A voice from the attic Davies, Sir John Ox U 556 A | A discovery of the true causes why Ireland was never subdued (e Defoe, Daniel Ox U 537 A | Moll Flanders Ox A 1020 B | Robinson Crusoe Dekker, Thomas Ox U 39 A | Match mee in London Ox A 619 A | The honest whore (part 2) Ox U 38 A | Witch of Edmonton Dell, Jack Holton Ox U 467 A | The duel Devanny, Jean Ox A 534 A | The butcher's shop Dickens, Charles Ox U 40 A | A Christmas carol Pr 1067 B | A tale of two cities Ox A 657 B | Edwin Drood Ox A 1055 B | Great expectations Ca 915 A | Oliver Twist Disraeli, Benjamin Ox A 550 A | Lord George Bentinck: a political biography Donne, John Ox U 1029 A | Anatomie of the world: the first anniversary Ox A 1052 A | Poems (1633) Ox U 43 A | Songs and sonnets (part) Dostoevski, F. (translations) Ox U 44 A | Notes from underground Dryden, John Ox U 42 A | Absalom and Achitophel Dryden, Ken Ox U 596 B | The game Du Bartas, G. de S. (translations) Ox A 651 A | Divine weeks and works (vol. 
2) Du Maurier, Daphne Ox A 498 B | Rebecca Dudley, Fourth Lord North Ca 918 A | Collected Poems Dudley, Third Lord North Ca 917 A | Collected Poems Duffy, Maureen Ox U 468 A | Rites Dylan, Bob Ox U 45 A | Published songs 1962-9 Ox A 491 A | Tarantula Edgeworth, Roger Ox A 244 A | Sermons very fruitful godly and learned (1557) Eliot, George Ca A 48 D | Daniel Deronda Ca A 47 D | Middlemarch Ca U 46 A | Silas Marner Eliot, Thomas Stearns Ca A 49 D | Complete poems and plays Ox U 50 A | Poems 1909-35 England, Barry Ox U 435 A | Conduct unbecoming Erasmus (translations) Ox U 51 A | De immensa Dei misericordia (tr Hervet) Fielding, Henry Ox U 54 C | Joseph Andrews Ox U 55 C | Miscellanies Ox U 56 A | Shamela Fitzgerald, F. Scott Ox U 57 B | The great Gatsby Fleming, Ian Ox A 507 A | Dr No Fletcher, John Ox U 802 A | Demetrius and Enanthe Ox U 1021 A | Monsieur Thomas Ox U 688 A | The chances Ox A 605 A | The faire maide of the inne Ox U 689 A | The island princess Ox U 691 A | The loyal subject Ox A 623 A | The tragedy of Bonduca Ox U 690 A | The woman's prize Ox U 1022 A | Tragedy of Valentinian Fletcher, John (et al) Ox U 58 A | Sir John Van Olden Barnavelt Ford, John Ox A 639 A | 'Tis pitty shee's a whore Frisby, Terence Ox U 436 A | There's a girl in my soup Frost, Robert Ox U 59 A | Selected verse Fry, Christopher Ox U 520 A | A phoenix too frequent Ox U 522 A | The lady's not for burning Ox U 521 A | Thor with angels Frye, Northrop Ox A 660 A | The bush garden Ox X 597 B | The educated imagination Galt, John Ox A 177 A | Ringan Gilhaize Gaskell, Elizabeth Ox U 61 A | Selected contributions to Frasers Gill, Peter Ox U 469 A | Over gardens out Gower, John Ca U 63 A | Confessio amantis Graves, Robert Ca 928 B | Claudius the God Ca U 64 B | Complete poems Ca 927 B | I, Claudius Gray, Simon Ox U 437 A | Butley Gray, Thomas Ox U 65 A | Complete poems Greene, Graham Ox A 489 A | Brighton rock Greene, Robert Ox U 681 A | A quip for an upstart courtier Ox U 674 A | 
Cony-catching (parts 1 & 2) Ox U 676 A | Frier Bacon and Frier Bungay Ox U 66 C | Proverbs Ox U 682 A | Repentance Ox U 665 A | Tarlton's newes out of purgatorie Ox U 677 A | The Scottish history of James the fourth Ox U 672 A | The black book's messenger Ox U 673 A | The black dog of Newgate Ox U 671 A | The comical history of Alphonsus Ox U 679 A | The history of Orlando furioso Griffin, James Ox A 654 A | Well-being: its meaning, measurement and moral importance Guevara, Antonio (translations) Ox U 91 B | The golden book of Marcus Aurelius (tr J. Bourchier, Lord Bern Hampton, Christopher Ox U 438 A | The philanthropist Hansford-Johnson, Pamela Ox A 531 A | Night and silence, who is here Hardy, Thomas Ox U 67 C | Far from the madding crowd Ox A 539 B | Jude the obscure Ox U 68 B | Tess of the D'Urbervilles Hare, David Ox U 470 A | Slag Harries, Richard Ox A 252 A | Turning to prayer Hauser, Arnold Pr 1069 D | Sociology of art Haworth, Don Ox U 471 A | A hearts and minds job Hawthorn, Nathaniel Ox U 69 A | Selections Hay, Gilbert Ox U 220 A | The buke of the law of armys; The buke of knychthode Hearst, Pattie Ox U 70 A | Diaries Henryson, Robert Ox U 243 A | Collected works Hervey, Thomas Ox U 71 A | Letter to Sir Thomas Hanmer Hesse, Hermann (translations) Ox U 393 A | Steppenwolf Heywood, Thomas Ox A 635 A | A woman kilde with kindnesse Hill, Susan Ox A 510 A | Gentleman and ladies Hockey, Susan Ox A 156 A | A guide to computer applications in the humanities Hogg, James Ox A 588 A | The three perils of man Hopkins, Gerard Manley Ox U 73 A | Complete verse Hopkins, John Ox U 472 A | Find your way home Housman, A.E. 
Ox U 1034 A | A Shropshire lad Howarth, Donald Ox U 473 A | Three months gone Johnson, Samuel Ox A 76 B | Journey to the Western Isles Ox U 75 A | London & The vanity of human wishes Ox U 77 A | Rasselas Prince of Abissinia Jonson, Ben Ox U 78 A | Pleasure reconcil'd to vertue (a masque) Ox A 616 A | Volpone Joyce, James Ox U 79 A | Dubliners Ox U 1030 A | Extract from 'Work in Progress' Ox U 80 A | Portrait of the artist Julian of Norwich Ox U 700 B | A revelation of divine love Kant, Immanuel (translations) Ox A 289 B | Critique of pure reason Keats, John Ox U 81 C | Poetical works (ed J. Stillinger) Khapa, Tsong (translations) Ox X 652 A | The essence of true eloquence (translations) King, Martin Luther Ox A 532 A | Stride for freedom Kydd, Thomas Ox U 83 A | Cornelia Ox U 82 A | Spanish tragedie Laffan, Kevin Ox U 439 A | It's a two-foot-six-inches-above-the-ground world Langland, William Ox A 500 A | The vision of Piers Plowman (ed Schmidt) Larson, Clinton F. Pr 1070 C | Collected works Lawrence, David Herbert Ox U 84 A | St Mawr Layamon Ox U 85 B | Brut (two mss) Le Carre*, John Ox A 86 A | The spy who came in from the cold Lessing, Doris Ox U 87 A | Each his own wilderness Ox U 89 A | Memoirs of a survivor Ox U 88 A | Summer before the dark Lowell, Robert Ox U 90 A | Notebook Luke, Peter Ox U 474 A | Hadrian VII Malamud, Bernard Ox A 52 A | The assistant Mansfield, Katherine Ca U 92 A | Selected short stories Manwaring Ox U 93 A | Seaman's glossary Marcus, Franc Ox U 441 A | Mrs Mouse are you within? 
Marlowe, Christopher Ox U 94 B | Dramatic works Ox A 615 A | Tamburlaine (part 2) Marston, John Ox A 629 A | The Dutch courtezan Marvell, Andrew Ox U 95 A | Miscellaneous poems Massinger, Philip Ox A 603 A | A new way to pay old debts Maugham, Robert Ox U 475 A | Enemy Maugham, Robin Ox U 442 A | The servant McGrath, John Ox U 440 A | Events while guarding the Bofors gun Medwall, Henry Ox U 1032 A | Nature Melville, Herman Ox U 96 C | Moby Dick Ox U 628 A | Moby Dick (Signet ed) Mercer, David Ox U 476 A | Belcher's luck Middleton, Thomas Ox U 97 B | A game at chess (two mss) Ox U 584 A | Newes from Persia and Poland touching Sir Robert Sherley... Ox U 15 A | Song in several parts Ox U 583 A | The black book Ox U 585 A | The ghost of Lucrece Ox A 642 A | The revenger's trag^adie Ox U 98 A | The witch Millar, Ronald Ox U 443 A | Abelard and Heloise Milner, Roger Ox U 444 A | How's the world treating you? Milton, John Ox A 1027 A | English poems Ox U 102 A | Il penseroso Ox U 100 B | Paradise lost Ox U 101 A | Samson agonistes Monaco, James Ox U 705 C | The Connoisseur's Guide to the Movies (1985) Mortimer, John Ox U 477 A | A voyage around my father Moss, Rose Ox A 103 B | The terrorist Munday, Anthony Ox A 630 A | The book of John a} Kent & John a} Cumber Murdoch, Iris Ox A 509 B | The bell Nashe, Thomas Ox U 680 A | Pierce pennyless Ox U 105 A | Summer's last will and testament Nassyngton, William of Ox U 653 A | The bande of louynge Nichols, Peter Ox U 445 A | A day in the death of Joe Egg Norman, Frank Ox U 478 A | Inside out O'Casey, Sean Ox U 107 A | Juno and the paycock Ox U 108 A | Shadow of a gunman Ox U 106 A | The plough and the stars O'Malley, Ernie Ox A 213 A | Army without banners Ox A 574 B | The singing flame O'Neill, Michael Ox U 446 A | The bosom of the family Orton, Joe Ox U 447 A | What the butler saw Orwell, George Ox A 1097 A | 1984 Osborne, John Ox U 448 A | West of Suez Parfit, Derek Ox X 250 B | Reasons and persons Paston family Ox A 395 C | 
Letters and papers of the 15th century (ed N. Davies), vol 1 on Patten, Brian Ox U 1042 A | Selected verse Peel, Sir Robert Ox A 552 A | Memoirs, part 1 Pinner, David Ox U 449 A | Dickon Pinter, Harold Ox U 450 A | Old times Plath, Sylvia Ox U 111 A | Collected poems Ox U 110 A | The bell jar Pope, Alexander Ox A 580 A | Rape of the lock Pound, Ezra Ox U 113 B | Cantos Ox U 112 A | Guide to Kulchur Powell, Antony Ox A 508 A | Acceptance world Pulman, Jack Ox U 479 A | The happy apple Ramsley, Peter Ox U 480 A | Disabled Randolph, Thomas Ox U 114 A | Aristippus Ox U 116 A | Pr^aludium Ox U 115 A | The conceited pedler Ox U 117 A | The drinking academy Ox U 118 A | The fary knight Rattigan, Terence Ox U 451 A | A bequest to the nation Ross, Kenneth Ox U 452 A | Mr Kilt and the great I am Rossetti, Dante Gabriel Ox A 217 B | Works, ed W.M. Rossetti Rowley, William Ox A 608 A | All's lost by lust Saunders, James Ox U 481 A | Neighbours Schumacher, E.F. Ox A 399 A | Small is beautiful Scott, Walter Ox U 74 A | Castle Dangerous Ox U 165 A | Selected poems Ox U 60 A | The antiquary Scruton, Roger Ox A 533 C | Fortnight's anger Selbourne, David Ox U 453 A | The damned Shaffer, Anthony Ox U 454 A | Sleuth Shaffer, Peter Ox U 455 A | Black comedy Shakespeare, William Ox U 133 A | 1 Henry IV (Q1) Ox U 134 A | 2 Henry IV (Q1) Ox U 125 A | A midsummer nights dream (Q1) Ox A 119 D | Complete works (first folio) Ox U 2 A | Contention of York & Lancaster [Henry VI part 2] (Q1) Ox U 135 A | Edward III (Q1) Ox U 1064 A | Hamlet (Q1) Ox U 121 A | Hamlet (Q2) Ox U 169 A | Julius Caesar (Arden ed) Ox U 123 A | King Lear (Q1) Ox U 122 A | Loves labours lost (Q1) Ox U 126 A | Merchant of Venice (Q1) Ox U 1057 A | Merry wives of Windsor (Q1) Ox U 120 A | Much ado about nothing (Q1) Ox U 124 A | Othello (Q1) Ox U 127 A | Pericles (Q1) Ox A 138 A | Poems Ox U 129 A | Richard II (Q1) Ox U 130 A | Richard III (Q1) Ox U 128 A | Romeo and Juliet (Q2) Ox U 137 A | Sonnets Ox A 659 C | The 
tempest (various editions) Ox U 131 A | Titus Andronicus (Q1) Ox U 132 A | Troilus and Cressida (Q1) Ox U 8 A | True tragedie of Richard Duke of York [Henry VI part 3] (Q1) Ox U 136 A | Two noble kinsmen (Q1) Shakespeare, William (et al) Ox A 529 A | The passionate pilgrim Shaw, Robert Ox U 456 A | Cato street Shelley, Percy Bysshe Ox U 139 A | Prometheus unbound Shirley, Thomas Ox A 601 A | The cardinal Simpson, N.F. Ox U 457 A | The cresta run Speight, Johny Ox U 458 A | If there weren't any blacks, you'd have to invent them Spencer, Colin Ox U 482 A | Spitting image Spender, Stephen Ox A 141 A | Collected poems Ox A 142 A | The generous days Spenser, Edmund Ca A 144 D | Faerie Queene Ca U 143 A | Minor poems Spurling, John Ox U 483 A | Macrune's Guevara Sterne, Laurence Ox A 1048 C | Tristram Shandy Stoppard, Tom Ox U 459 A | Jumpers Storey, David Ox U 484 A | Home Ox A 72 A | This sporting life Taylor, A.J.P. Ox A 158 C | English History 1914-1945 Taylor, Cecil Ox U 460 A | Bread and butter Terson, Peter Ox U 485 A | Spring-heeled Jack Thomson, James Ox A 21 A | The seasons Tolkien, J.R.R. Pr 1071 B | The hobbit Pr 1072 B | The lord of the rings Tourneur, Cyril Ox A 600 A | The atheist's tragedy Ox U 145 A | The revenger's tragedy Ustinov, Peter Ox U 461 A | The unknown soldier and his wife Wager, William Ox U 1033 A | Enough is as good as a feast Ox U 1031 A | The longer thou livest the more fool thou art Wain, John Ox A 530 A | Hurry on down Waugh, Evelyn Ox A 146 B | Brideshead revisited Webster, John Ox A 612 A | A speedie poste, with certaine new letters Ox A 610 A | Miscellania Ox A 607 A | The devil's law-case Ox A 606 A | The merchant's handmaide ... 
Ox A 631 A | The tragedy of the dutchesse of Malfy Ox A 613 A | The valiant Scot Ox A 618 A | The white divel Webster, John (et al) Ox A 602 A | A cure for a cuckold Ox A 621 A | Anything for a quiet life Ox A 599 A | Appius and Virginia Ox A 637 A | North-ward hoe Ox A 620 A | The famous history of Sir Thomas Wyat Ox A 634 A | The induction to the malcontent ... Ox A 617 A | West-ward hoe Welburn, Vivienne Ox U 462 A | Johnny so long Wesker, Arnold Ox U 463 A | The friends Whitehead, E.A. Ox U 464 A | The foursome Wilkins, George Ox U 666 A | Pericles Prince of Tyre (ed Bullough) Ox U 663 A | The miseries of an enforced marriage Wilson, R.A. Ox A 594 A | The birth of language Woolf, Virginia Ca U 147 A | A haunted house, and other stories Ox U 149 A | Mrs Dalloway Ox U 148 A | The waves Ox U 150 A | To the lighthouse Wordsworth, William Ox U 151 A | Lyrical ballads Wyatt, Thomas Ca U 152 A | Poetical works Wycherley Ox A 1049 A | The country wife Wymark, Olwen Ox U 465 A | Stay where you are Yeats, William Butler Ox U 153 C | Complete poems Ox U 1023 A | Essays and introductions Zimmerman, Carle C. Ox X 591 A | Siam: Rural Economic Survey, 1930-31 From: Subject: Date: X-Humanist: Vol. 1 Num. 235 (235) Anonymous Pr 1074 C | Kalevala (ed E. Lönnrot) From: Subject: Date: X-Humanist: Vol. 1 Num.
236 (236) Anonymous Ox U 175 A | Aliscans Ca A 893 B | La chanson de Roland (ed Whitehead, 1947) Ox A 587 A | Le roman de Tristan (tome 3) Ox A 187 A | Li fet des Romains I Ox A 404 A | Li quatre livre des Reis Collections, corpora &c Ox U 199 C | 18th century correspondence Ox A 191 B | Echantillon du québécois parlé Pr 1066 B | English-French translation database Ox U 569 A | Modern business correspondence Ox U 176 A | Old French corpus Ox A 590 A | Sample of Nova Scotian Acadian French Balzac, Honoré de Ox A 572 B | La peau de chagrin Bayle, Pierre Ca 938 A | Avis aux Réfugiés Ca 939 A | Correspondence Beckett, Samuel (translations) Ox A 604 A | En attendant Godot Bernanos Ox U 178 A | M. Ouine Bible Ox A 570 A | The gospels (part) Calvin, Jean Ca 940 A | Supplementa Calviniana, Vol. II (6 sermons) Céline, P. Ox U 179 A | Voyage au bout de la nuit Chartier, Alain Ca 941 B | Poetical works Chawaf, Chantal Ox A 1044 A | La vallée incarnate Ox A 1050 A | Landes Ox A 1051 A | Maternité Chrétien de Troyes Ox A 180 A | Cliges Ox A 181 A | Erec Ox A 182 A | Lancelot Ox A 183 A | Perceval Ox A 184 A | Ywain Constant, Benjamin Ca U 560 A | Adolphe Ca A 185 B | Lettres Crébillon, C.P.J. (fils) Ox A 614 A | La nuit et le moment Ox A 622 A | Le hasard du coin de feu Ox U 1104 A | Le sopha Ox A 609 A | Les égarements du coeur et de l'esprit Froissart, Jean Ox A 698 B | Chronicles (Ms Reg. Lat. 869) Ca 900 A | Chronicles (selections) Gide, André Ox U 186 A | L'Immoraliste Guillaume de Lorris Ca 898 B | Le Roman de la Rose Guyotat, Pierre Ox A 573 A | Éden Éden Éden Jean de Howden Ca 899 A | Li rossignos Mallarmé, S. Ca 897 D | Poetical works Malraux, André Ox U 189 A | La tentation de l'occident Ox U 190 A | La voie royale Ox U 188 A | Les Conquérants Marguerite de Navarre Ox U 499 C | L'Heptameron Maupassant, Guy de Ox A 215 A | Pierre et Jean Montaigne, Michel Eyquem de Ox U 1059 C | Les essais (ed.
Villey) Nerval, Gérard de Ca 896 A | Aurélia Orléans, Charles d' Pr 1075 C | Poésies complètes Prévost, Abbé Ox A 41 A | Manon Lescaut Proust, Marcel Na X 405 E | A la recherche du temps perdu Queneau, Raymond Na X 192 A | Exercices de style Na X 193 A | Pierrot mon ami Rabelais, Francois Ca 895 E | Complete works René d'Anjou Ca 894 A | Livre du cuer d'amours espris Rimbaud, Arthur Ox U 811 A | Collected verse Robbe-Grillet, Alain Ox U 194 A | La jalousie Sartre, Jean-Paul Ox U 195 A | La nausée Schwob, Maurice Pr 1076 B | Writings on the Dreyfus Affair Stendhal Na X 196 B | La chartreuse de Parme Na X 197 B | Le rouge et le noir Na X 198 B | Lucien Leuwen Vaugelas, Claude Favre de Ox U 253 B | Remarques sur la langue françoise From: Subject: Date: X-Humanist: Vol. 1 Num. 237 (237) Ba, A.H. Ox U 494 A | Kaidara Lacroix, P.F. Ox U 493 A | Poésie peule de l'Adamawa Sow, A.I. Ox U 496 A | Contes et fables des veillées Ox U 495 A | La femme, la vache, la foi From: Subject: Date: X-Humanist: Vol. 1 Num. 238 (238) Anonymous Ox A 625 A | Seanmenta chinge uladh A'Kempis, Thomas (translations) Ox A 1036 B | Imitatio Christi Conrad, Joseph (tr Mac Grianna) Ox U 1174 A | Amy Foster Ox U 1173 A | Seideán Bruithne (Typhoon) Mac Grianna, Seosamh Ox A 214 A | Pádraic Ó Conaire agus aistí eile Ox U 1172 A | Pádraic Ó Conaire agus aistí eile (1936 ed) Ó Grianna, Séamus Ox U 1176 A | An Teach nár Tógadh Ox A 1178 A | Caisleáin Óir Ox U 1175 A | Micheál Ruadh Ox A 1177 A | Scéal Úr agus Sean-Scéal From: Subject: Date: X-Humanist: Vol. 1 Num.
239 (239) Anonymous Ca 892 B | Aviso (newspaper, 1609) Ox U 202 B | Das Nibelungenlied Ca 880 A | Das St Trudperter Hohe Lied Ca 879 B | Daz Anegenge Ca 890 B | Die Vorauer Bücher Moses Ca 876 B | Die altdeutsche Exodus Ca 887 A | Graf Rudolf Ca 883 B | Kudrun Ca 875 C | Relation oder Zeitung (newspaper, 1609) Ca 884 B | Strassburger Alexander Ox U 210 A | Tundalus der Ritter Collections, corpora &c Ca 882 C | Die religiösen Dichtungen des 11 und 12 Jh.s Ca 885 B | Early German sermons Ca 908 A | Lieder der Berliner Hs Germ fol 992 Mn X 207 D | Mannheimer Korpus Pr 1078 B | Pfeffer corpus of spoken German Ca 874 C | Sermons of the 12, 13 and 14 centuries Ca 881 B | Speculum ecclesiæ: Early Middle High German sermons Dictionaries, &c Ox A 246 C | Lexikon zur Wortbildung Morpheminventar A-Z Ox U 818 D | Pons German-French dictionary (part) Beckett, Samuel (translations) Ox A 598 A | Warten auf Godot Benn, Gottfried Ox U 200 C | Works (ed Lyon) Böhme, Jacob Ca 891 B | Aurora Brecht, Berthold Pr 1079 D | Poetic works Celan, Paul Pr 1080 C | Gesammelte Werke Ox U 201 A | Selected poems Eckartbote Ox A 567 C | Selections Eilhart von Oberge Ca 889 B | Tristrant Fleck, Konrad Ca 888 B | Flore und Blanscheflur Goethe, Wolfgang von Pr 1081 C | Complete works (Hamburg ed) Ox U 203 B | Faust Grimm, W. and J. Ox U 204 A | Märchen (selected) Hartmann von Aue Ox U 211 A | Der arme Heinrich Hermann von Sachsenheim Ca 886 A | Der Spiegel Ca 877 B | Eraclius Hofmannsthal, Hugo von Pr 1082 C | Poetic works Kafka, Franz Pr 1084 B | Der process (historical critical ed) Ox U 205 A | In der Strafkolonie Kempowski, Walter Pr 1083 E | Deutsche Chronik Mann, Thomas Ox U 206 A | Tonio Kröger Meyer, Conrad Pr 1085 C | Poetic works Meyer, Conrad F.
Ox U 812 A | Die Hochzeit des Mönchs Ox U 208 A | Lyric poems Notker III of St Gall Ca 878 C | Psalmen, nach der Wiener Handschrift Stramm, August Ox U 209 A | Complete poems Wittgenstein, Ludwig Ox X 562 A | Culture and value Ox X 564 A | Last writings on the philosophy of psychology (part) Ox X 563 B | Remarks on the philosophy of psychology From: Subject: Date: X-Humanist: Vol. 1 Num. 240 (240) Anonymous Ls U 265 A | Homeric hymns Ir X 316 A | Homeric hymns (TLG ed) Collections, corpora &c Ph 1115 A | Early Christian Materials : versions of 3 Corinthians. Ir X 421 A | Greek anthology Ph 1113 A | Inscriptions (Cornell) Ph 1114 A | Inscriptions (Princeton) Ox A 270 D | Oxyrhynchus papyri vols 11-46 (documentary papyri only) Ox A 696 E | The Duke documentary papyri corpus Dictionaries, &c Ph 1112 D | Dictionary for Greek New Testament Achilles Tatius Ir X 386 A | Collected works Aeschylus Ir X 418 B | Collected works Ls U 212 B | Five plays Apollonius Rhodius Ls U 221 A | Argonautica, 3 Ir X 486 A | Collected works Apostolic Fathers Ls U 222 A | Works (ed Lake) Aratus Ir X 487 A | Collected works Ls U 223 A | Phænomena Archytas Ox U 224 A | Doubling the cube Aristarchus of Samos Ox U 225 A | On sizes and distances of the Sun and Moon Aristophanes Ox U 227 A | Acharnians Ox U 232 A | Birds Ox U 229 A | Clouds Ir X 488 A | Collected works Ox U 236 A | Ecclesiazousæ Ox U 235 A | Frogs Ox U 228 A | Knights Ox U 233 A | Lysistrata Ox U 231 A | Peace Ox U 237 A | Plutus Ox U 234 A | Thesmophoriazousæ Ox U 230 A | Wasps Aristotle Ir X 226 D | Complete works Aristoxenus of Tarentum Ox U 238 A | Elementa Harmonica Asterius Amasenus Ox A 648 A | Selected homilies (ed Datema) Asterius Sophista Ox A 647 A | Commentarii in Psalmos (ed Richard) Autolycus of Pitane Ox U 219 A | De Sphæra & De Ortibus Bible Ph A 708 D | Greek Jewish Scriptures (ed Rahlfs) Ph U 245 B | Morphologically analysed Pentateuch (Septuagint version) Ph U 1101 E | Morphologically tagged Greek Jewish
Scriptures (CATSS text) Ir X 397 D | New Testament Ir X 516 B | Septuagint (TLG text) Ox A 540 C | Septuagint, vols 3 and 13 Ph 1106 E | Tagged Greek New Testament (ed Freibergs) Ox U 269 A | The gospels Callimachus Ir X 513 A | Collected works Chariton Ir X 291 A | Collected works Demosthenes Ir X 292 A | Collected works Diodorus Siculus Ir X 239 D | Collected works Diodorus Tarsensis Ox A 644 A | Commentarii in Psalmos (selected) Diogenes Laertius Ir X 293 A | Collected works Euclid Ls U 240 D | Elements vols 1-4 Euclid (pseudo) Ox U 241 A | Musici scriptores græci Euripides Ir X 294 A | Collected works Ox U 242 B | Major works Eusebius Cæsariensis Ox A 649 A | Commentarii in Psalmos (selections) Galen Ir X 547 E | Complete works Gregory of Nyssa Ox U 255 A | De tridui spatio Gregory the Pagurite Ox A 1040 A | Encomium of S. Pancratius of Taormina Heliodorus Ir X 295 A | Collected works Herodas Ir X 296 A | Collected works Herodotus Ir X 256 E | Complete works Hesiod Ir X 297 A | Collected works Ls U 260 A | Fragments Ls U 257 A | Opera et dies Ls U 258 A | Scutum Ls U 259 A | Theogonia Hippocrates Ox A 261 D | Complete works Hippocrates of Chios Ox U 262 A | Quadrature of the Lunule Homer Ir X 299 A | Collected works Ls U 263 B | Iliad Ls U 264 B | Odyssey Isaeus Ir X 524 A | Collected works Ls U 266 A | Orations Libanius Ir X 389 A | Collected works Longus Ir X 390 A | Collected works Lycophron Ir X 394 A | Collected works Lysias Ox U 267 A | Speeches 12 and 24 Meliton Ox U 268 A | Selections Nicander Ir X 396 A | Collected works Origen Ph 1116 A | Patristics Parthenius Ir X 412 A | Collected works Pausanias Ir X 417 A | Collected works Plato Ir X 419 B | Collected works Ox A 271 D | Works Plato (pseudo) Ox U 561 A | Doubling the cube Plutarch Ir X 515 A | Collected works I Ir X 544 E | Collected works II Ox U 273 A | Moralia Pseudo-Chrysostomus Ox A 640 A | In adorationem venerandæ crucis Ox A 638 A | In resurrectionem Domini (ed Aubineau) Ox A 641 A | Two
Easter homilies (ed Liebaert) Pseudo-Evagrius Ox A 1039 A | Life of S. Pancratius of Taormina Pseudo-Galen Ir X 548 A | Works Sextus Empiricus Ox U 248 A | Works (Loeb ed, I and III only) Sophocles Ox U 276 A | Antigone Ir X 517 B | Collected works Ls U 274 A | Electra Ox U 278 A | Oedipus Colonus (part) Ox U 275 A | Oedipus Tyrannus Ox U 277 A | Philoctetes St John Damascene Ca 851 A | Selected works Themistocles (pseudo) Ls U 280 A | Epistulæ Theocritus Ir X 518 B | Collected works Thucydides Ir X 281 E | Complete works Xenophon Ir X 519 B | Collected works Ls U 282 C | Major works Xenophon Ephesius Ir X 523 A | Collected works From: Subject: Date: X-Humanist: Vol. 1 Num. 241 (241) Agnon, S.Y. Ox U 216 A | Ha-malbush Ox U 300 B | Hadom vekisee Bible Je U 1119 E | Aligned Texts of Hebrew and Greek Jewish Scriptures (CATSS data Ph A 1111 E | Aligned Texts of Hebrew and Greek Jewish Scriptures (CATSS data Ph U 525 C | Bibl. Heb. Stuttgartensia (Michigan-Claremont text) Ox U 422 A | Book of Job (Targum) Ox U 301 B | Pentateuch Ox A 140 A | Psalms (Targum text) Ph 1110 A | Pseudo-Jonathan (Targum) Dickens, Charles Ca 873 A | Oliver Twist (part) From: Subject: Date: X-Humanist: Vol. 1 Num. 242 (242) Anonymous Co A 298 D | Möðruvallabók From: Subject: Date: X-Humanist: Vol. 1 Num. 243 (243) Alighieri, Dante Pi A 695 A | Il paradiso Pi A 694 A | Il purgatorio Pi A 693 A | L'Inferno Ariosto, Lodovico Pi A 1041 B | Orlando furioso Boccaccio, Giovanni Pi A 1120 C | Decameron Pi A 1099 A | Il Teseide Boiardo, Matteo Ox U 1100 B | Orlando Innamorato Calvino, Italo Ox U 406 A | Seven dialect tales Castiglione, B. Ox A 302 A | Il Cortegiano Della Casa, Giovanni Ox U 407 A | Galateo Machiavelli, Niccolo Ox A 303 A | Discorso o dialogo intorno alla nostra lingua Michelangelo Ox A 304 B | Rime 1-85 Nievo Ox U 408 A | Canzoni popolari greche Pigna, G.B.
Ox A 1062 A | Amori Rossetti, Gabriele Ox A 702 C | Letters to Charles Lyell Svevo, Italo Ox A 1043 B | La coscienza di Zeno Tasso, Torquato Ca A 872 B | Gerusalemme Conquistata Ca A 871 B | Gerusalemme Liberata Verga, Giuseppe Ox U 305 A | Six short stories From: Subject: Date: X-Humanist: Vol. 1 Num. 244 (244) Harkī, Mulla Sa'īd Ox U 249 A | Sgandīnānī texts (ed MacKenzie) From: Subject: Date: X-Humanist: Vol. 1 Num. 245 (245) Anonymous Pr 1086 B | Corpus Christianorum Pi 1132 ? | De dubiis nominibus (ed Glorie, 1968) Ox U 309 A | De rebus bellicis Ox U 633 A | Gedichte des Archipoeta (ed Krefeld & Watenpuhl) Pi 1129 ? | Itinerarium Antonini Placentini Ls U 310 A | Sententiæ et epistulæ Hadriani Ox A 497 A | Speculum duorum Ox A 104 A | The book of Llan Dav Ca 849 A | Vitæ I & II S. Brigitæ Ox U 568 A | Vitæ abbatum Ox U 575 A | Vita S. Cuthberti Collections, corpora &c Pi 1121 ? | Anthologia Latina sive Poesis Latinæ supplementum Pi 1122 ? | Carmina Latina epigraphica (ed Bucheler) Pi 1123 ? | Concilium Constantinopolitanum I Pi 1124 ? | Concilium Nicænum I Pi 1151 ? | Corpus juris civilis Iustinianæum Pi 1125 ? | De dubiis nominibus (ed Glorie, 1968) Ox U 409 A | Defixiones Latinæ Ox U 410 A | Dipinti on amphoræ from Rome and Pompeii from CIL 4 and 15 Ox A 512 B | Early scholastic colloquies Pi 1128 ? | Fabularum atellanarum fragm. (ed Frassinetti, 1955) Pi 1154 ? | Grammatici Latini (ed Keil) Ca 848 A | Hiberno-Latin Pi 1126 ? | Incerti auctoris querolus sive aul. (ed Corsaro, 1964) Ox U 411 A | Index of personal names from CIL 13 Ox A 506 A | Littere Wallie Pi 1130 ? | Mimorum Romanorum fragmenta (ed Bonaria, 1955) Ca 861 C | Poetæ Latini ævi Carolini Pi 1131 ? | Scænicæ Romanorum poesis fragmenta (ed Ribbeck, 1897) Dictionaries, &c Ox U 329 C | Codex Theodosiani Ox A 332 A | Historia Augusta Pi 1127 ? | Index Thomisticus (rationarium) Pi 1145 ? | Lexicon totius Latinitatis (Forcellini) Pi 1146 ?
| Onomasticon totius Latinitatis (Forcellini) Africanus Ox U 306 A | Fragments Al Kindi Ca 865 B | Iudicia Alan of Lille Ca 867 B | Anticlaudianus Ca 868 B | De planctu Naturæ Alcuin Ca 866 B | Collected verse Ambrose Ls U 307 A | Selections Ammianus Marcellinus Ox U 308 C | Histories Andreas Capellanus Ox A 321 A | De amore Anselm of Canterbury Ca 864 E | Complete works Apicius Ox U 311 A | De re coquinaria Architrenius Ox U 314 A | Works Arusianus Messius Pi 1133 ? | Exempla elocutionis (ed A Della Casa, 1977) Augustine Ls U 315 A | Selections Aurelius Victor Ox U 317 A | De Cæsaribus Bacon, Francis Ox U 318 A | 10 2000-word prose samples Pi 1134 ? | Novum organum Bede Pi 1135 ? | De arte metrica et de schematibus (ed Kendall, 1975) Ox U 578 A | De orthographia Pi 1136 ? | De orthographia (ed Jones, 1975) Ox U 558 A | Epistola ad Egbertum Ox U 577 A | Retractatio Ox U 576 A | Vitæ abbatum Ox U 579 A | Vita S. Cuthberti Bernardus Silvestris Ca 863 B | Cosmographia Bible Ox X 319 E | Vulgate Birch (ed) Ox A 511 C | Cartularium Saxonicum vols 1-3 Boethius Ls U 320 A | De syllogismo hypothetico 1 Cassiodorus Pi 1137 ? | Institutiones (excerpta) (ed Mynors, 1937) Cato Ls U 322 A | De agri cultura Ls U 323 A | Historical and oratorical fragments Catullus Ls U 324 A | Carmina Celsus, P. Iuventius Ls U 325 A | Fragments Charisius Pi 1138 ? | Ars (ed Barwick, 1925) Cicero Ls U 327 D | Major works Cicero (attrib) Ox U 328 A | Epistula ad Octavianum Cosentius Pi 1139 ? | De barbarismis et metaplasmis (ed Niedermann, 1937) Crispin, Gilbert Ca 850 A | Works, ed G. R. Evans Culman, Leonard Ca 862 A | Sententiæ Pueriles Dositheus Pi 1140 ? | Ars (ed Tolkiehn, 1913) Dracontius Blossius Aem. Pi 1141 ? | Orestis tragoedia Einhard Ca 860 A | Vita Karoli Magni Emanuel, Hywel D. (ed) Ox A 504 A | Latin texts of the Welsh law Ennius Quintus Pi 1142 ? | Ennianæ poesis reliquiæ (ed Vahlen, 1928) Eutropius Ox U 330 A | Breviarium A.U.C.
Festus Ox U 331 A | Breviarium Festus Sextus Pompeus Pi 1143 ? | De verborum significatione quæ sup. (ed Lindsay, 1913) Pi 1144 ? | De verborum significatu quæ sup.cum Pauli epitome Fortunatianus Ox U 326 A | Ars rhetorica (selections) Frithegod Ca 858 A | Breviloquium regum Britanniæ Galilei, Galileo Pi 1147 ? | De motu accelerato Pi 1148 ? | De motu locali Pi 1149 ? | Sidereus nuncius Pi 1150 ? | Theoremata circa centrum gravitatis solidorum Geoffrey of Monmouth Ca 857 C | Historia Regum Britanniæ Giraldus Cambrensis Ox A 503 A | De invectionibus vol 6 Gratian Ox A 699 D | Decretum Gregory of Nyssa (trans) Ox U 582 A | De hominis opificio, tr John Scottus Eriugena Higden, Ranulph Ca 856 A | Mss. Harl 1.48.1, St John 2.29.1 Hippocrates Pi 1152 ? | De æribus locis et de aquis Horace Ls U 333 A | Ars Poetica Ls U 334 B | Epistulæ 1-2 Ox U 546 A | Odes Ls U 335 A | Sermones Iulianus Toletanus Pi 1153 ? | Ars (ed Maestre, 1975) John of Hauville Ca 855 B | Architrenius Julianus Ls U 336 A | Fragments Juvenal Ls U 337 A | Saturæ Littleton, Adam Ca 854 A | Lingua Latina liber: dictionarius quadripartitus Livius Andronicus Pi 1155 ? | Fragments (ed Lenchantin, 1937) Livy Ls U 338 E | Ab urbe condita Lucan Ls U 339 B | Bellum civile 1 and 10 Lucretius Ls U 340 A | De rerum Natura Marcellus Empiricus Pi 1156 ? | De medicamentis liber (ed Niedermann, 1968) Marius Victorinus Pi 1170 ? | Ars (ed Mariotti, 1967) Ls U 341 A | Selections Martial Ox U 342 B | Works Modoinus Ox U 343 A | Selected poems More, Thomas Ox U 344 A | Utopia 1 and 2 Nevius Gnaeus Pi 1157 ? | Fragments (ed Marmorale, 1953) Nonius, Marcellus Pi 1158 ? | De compendiosa doctrina XX (ed Lindsay, 1903) Orderic Vitalis Ca 853 D | Ecclesiastical history Ovid Ls U 345 A | Amores Ls U 346 A | Ars amatoria Ls U 347 A | Fasti Ls U 348 A | Medicamina faciei femine Ls U 349 A | Metamorphoses 1 and 12 Ls U 350 A | Nux Ls U 351 A | Remedia amoris Pacuvius, Marcus Pi 1159 ?
| Fragments (ed D'Anna, 1971) Paulinus of Nola Pr 1087 A | Carmina sex Pelagius Ox A 505 A | Expositions of thirteen epistles of St Paul Persius Ls U 800 A | Satires Petrarch Ox U 352 A | Bucolicum carmen Petronius Ox U 711 A | Saturæ (ed Buecheler) Pi 1160 ? | Satyricon Phocas Pi 1161 ? | De nomine et verbo (ed Casaceli, 1974) Plautus Ls U 353 A | Amphitruo Ls U 354 A | Asinaria Ls U 355 A | Aulularia Ls U 356 A | Bacchides Ls U 357 A | Captivi Ls U 358 A | Pseudolus Ls U 359 A | Rudens Ls U 360 A | Stichus Ls U 361 A | Trinummus Ls U 362 A | Truculentus Plautus, Titus Maccius Pi 1162 ? | Comoediæ (ed Lindsay, 1955) Pliny the younger Ls U 363 A | Epistula 10 Poliziano, Angelo Ca 852 C | Latin Letters Pope Gregory Ox A 364 B | Dialogues Rhigyfarch Ox A 501 A | Life of St David Rosmini, Antonio Pi 1163 ? | Constitutione societatis Sallust Ls U 365 B | Complete works Scribonius Largus Pi 1164 ? | Compositionum liber (ed Helmreich, 1887) Seneca, Lucius Annaeus Pi 1165 ? | Works Simmacus, Quintus Aurelius Pi 1166 ? | Works (ed Seeck, 1883) Spinoza, Baruch Pi 1167 ? | Tractatus de intellectus emendatione Statius Ls U 366 A | Achilleid Ls U 367 A | Silvæ (hexameter poems) Ls U 368 B | Thebaid Symmachus Ox U 369 A | Relationes Tacitus Ox U 370 D | Annals Terentius Afer Pi 1168 ? | Comoediæ (ed Kauer-Lindsay, 1953) Turpilius Pi 1169 ? | Works (ed Richlewska, 1971) Vegetius Ox U 371 A | Epitoma rei militaris Venantius Fortunatus Ca 859 C | Opera poetica Vergil Ls U 374 A | Aeneid Ls U 372 B | Eclogues Ls U 373 A | Georgics Vergil (attrib) Ls U 312 A | Culex Ls U 313 A | Moretum Victorinus Pi 1171 ? | De solecismo et barbarismo (ed Niedermann, 1937) Wade-Evans, A.W. (ed) Ox A 502 A | Vitæ sanctorum Britanniæ et genealogiæ From: Subject: Date: X-Humanist: Vol. 1 Num. 246 (246) Collections, corpora &c Ox U 287 B | Latvian folksong corpus From: Subject: Date: X-Humanist: Vol. 1 Num.
247 (247) Wilkinson & Winstedt (eds) Ox U 376 C | Pantun melayu From: Subject: Date: X-Humanist: Vol. 1 Num. 248 (248) Bible Pr 1094 B | New testament From: Subject: Date: X-Humanist: Vol. 1 Num. 249 (249) Anonymous Ox U 247 A | Mahāniddesa (ed Poussin & Thomas, I and II only) From: Subject: Date: X-Humanist: Vol. 1 Num. 250 (250) Anonymous Ox U 526 A | O auto de Dom Luis et dos Turcos Collections, corpora &c Pr 1088 C | Weidner corpus Rosa, João Guimarães Pr 1089 C | Grande Sertão: Veredas From: Subject: Date: X-Humanist: Vol. 1 Num. 251 (251) Collections, corpora &c Ox A 377 A | Provencal charters Dictionaries, &c Ox A 380 A | Le breviari d'amor Girart de Roussillon Ox A 378 A | Collected works Giraut de Bornelh Ca 843 A | Texts and variants Guillaume de Machaut Ca 842 A | La Prise d'Alixandre Jofre de Foixà Ox A 379 A | Regles de trobar From: Subject: Date: X-Humanist: Vol. 1 Num. 252 (252) Leskov, N. Ox U 375 B | Samples of narrative and dialogue Pososhkov, I.T. Ca 841 A | Kniga o Skudosti i Bogatstvye From: Subject: Date: X-Humanist: Vol. 1 Num. 253 (253) Anonymous Ox U 381 A | Bhagavad Gita Ca 836 A | Bodhicaryāvatāra Ox A 1063 A | The Bilvamangalastava Ox U 589 D | The Rig-Veda Kālidāsa Ox U 527 A | Kumārasambhava chaps 2 and 6 From: Subject: Date: X-Humanist: Vol. 1 Num. 254 (254) Dictionaries, &c Pr 1090 B | Serbo-Croatian verb dictionary Njegos Ox U 382 A | Selected works Orwell, George (translations) Ox A 1102 A | 1984 (in Croatian) Ox A 1098 A | 1984 (in Serbian) Ox A 1103 A | 1984 (in Slovenian) From: Subject: Date: X-Humanist: Vol. 1 Num. 255 (255) Anonymous Ox U 383 A | El Cid Ox U 670 B | Lazarillo de Tormes (four editions) Ox U 528 A | Libro de cirugia de Teodorico Collections, corpora &c Pr 1091 A | BYU contemporary Spanish corpus Dictionaries, &c Ca 833 A | Catalogo de las publicaciones periodicas Madrilenas Alonso XII Ma A 384 D | General estoria (part 1) Bible Pr 1092 E | Reina Valera version Calderón de la Barca, C.
Ca 840 A | En la vida todo es verdad y todo mentira Machado, M. Ca 838 A | Complete works Ca 839 A | Poesías Opera Omnia Lyrica, second edition Vallejo, C. Ca 837 A | Collected verse de Castro, Rosalía Ox A 656 A | Poesía completa en galego From: Subject: Date: X-Humanist: Vol. 1 Num. 256 (256) Collections, corpora &c Ox A 385 B | Newspaper extracts From: Subject: Date: X-Humanist: Vol. 1 Num. 257 (257) Anonymous Ox U 387 C | Modern prose (samples from literary texts and newspapers) Ca 834 A | Transcription of speech, play, and literary material Agaoglu, Adalet Ox U 286 A | Yüksek gerilim Füruzan Ox U 254 A | Parasiz Yatili Günes, Islak Ox U 285 A | Hula kutlu Karaosmanoglu, Yakup Kadri Ox U 272 A | Yaban Lewis, Geoffrey Ox A 391 A | Turkish grammar Makal, Mahmut Ox U 284 A | Kuru Sevda From: Subject: Date: X-Humanist: Vol. 1 Num. 258 (258) Dictionaries, &c Pr 1095 B | Uzbek-English dictionary From: Subject: Date: X-Humanist: Vol. 1 Num. 259 (259) Anonymous Ox A 655 A | Peredur Bible Ox A 566 A | Y testament newydd Brytyt, Kyndelw Ca 830 A | Collected Poems From: Subject: Date: X-Humanist: Vol. 1 Num. 260 (260) Dictionaries, &c Pr 1093 C | Dictionaries of several Central American languages Pr 1096 B | Qatabanian Inscriptions (ed Ricks) From: Subject: Date: X-Humanist: Vol. 1 Num. 261 (261) Collections, corpora &c Ox U 1038 A | Essen corpus of German folksong melodies Ox A 514 D | The Tyneside linguistic survey corpus Dictionaries, &c Ox U 423 D | Chinese telegraphic code character set Bach, Johann Sebastian Ox U 650 A | Well-tempered clavier 1 & 2 (Hewlett encoding) Fletcher, J.M. Ox A 692 B | Tree-ring dating of oak, AD 416-1687 Howgego, C.J. Ox U 593 C | Greek Imperial Countermarks From: Subject: Date: X-Humanist: Vol. 1 Num.
262 (262) Site list--------------------------------------------------- This lists all site codes used in the current snapshot together with names, addresses and electronic mail contact if known. Please send any corrections to ARCHIVE @ UK.AC.OX.VAX Computing Centre for the Humanities Boks 53 - Universitetet Bergen N-5027 Norway Major holdings : English Boks 53 - Universitetet Bergen N-5027 Norway Major holdings : Norwegian I.K.P. Poppelsdorfer Allee 47 Bonn I D-5300 W. Germany Major holdings : German Literary & Linguistic Computing Centre U Cambridge Sidgwick Avenue Cambridge CB3 9DA U Copenhagen Njalsgade 76 Copenhagen DK-2300 Denmark Major holdings : Icelandic U Goteborg Sprakdata 6 N. Allegatan Goteborg 41301 Sweden Major holdings : Swedish Thesaurus Linguae Graecae U California at Irvine Irvine CA 92717 USA Major holdings : Greek Academy of the Hebrew Language Giv'at Ram P.O. Box 3449 Jerusalem, 91 034 Israel Major holdings : Hebrew I.N.L. Postbus 132 Leiden 2300 AC Netherlands Major holdings : Dutch Packard Humanities Institute 300 Second Street Los Altos CA USA Major holdings : Latin Université Catholique de Louvain Louvain la Neuve B-1348 Belgium Major holdings : Latin U Wisconsin D Spanish 1120 Van Hise Hall Madison WI 53706 USA Major holdings : Mediæval Spanish Inst. für Deutsche Sprache Friedrich-Karl Str. 12 Mannheim 1 D-6800 Germany Major holdings : German Université de Nancy 44 ave de la Libération CO 3310 Nancy-Cedex F 54014 France Major holdings : French D Religious Studies Box 36 College Hall U Pennsylvania Philadelphia PA 19104-6303 USA Major holdings : Biblical texts Ist di linguistica computazionale U of Pisa via della faggiola Pisa I-56100 Italy Major holdings : Italian Brigham Young University Provo, Ut. USA Maths & Computer Science Building Bar-Ilan University Ramat Gan 52100 Israel Major holdings : Hebrew From: LOU@VAX.OXFORD.AC.UK Subject: catalogue Date: 27-AUG-1987 16:57:06 GMT X-Humanist: Vol. 1 Num.
263 (263) An apology to any humanist whose mailbox recently collapsed under the unanticipated bulge of a draft copy of the Text Archive Snapshot. This was not intended for mass dissemination (apart from being indecently large it had a number of minor errors still uncorrected), but the note in which I informed Central Control of this fact appears to have gone AWOL. Anyway a new version (with those errors corrected and some new as yet undetected ones introduced) is now available on request as before. UK Humanists will be able to read it on HUMBUL shortly, and I hope it will also be available from the ListServer at FAFSRV within a few days. Lou Burnard From: Willard McCarty Subject: Now We Are 100 Date: 28 August 1987, 14:07:47 EDT X-Humanist: Vol. 1 Num. 264 (264) For those of you who retain a trace or more of respect for numbers, today is to be celebrated, since for the first time HUMANIST has 100 members. (I still count 9 countries; may that number increase!) Unfortunately champagne cannot be passed around electronically. You are therefore obliged to drink alone to the continued health of our thriving group this (on my side of the international dateline) Friday or (for our New Zealand members) Saturday evening. L'chaim! From: Willard McCarty Subject: Date: 28 August 1987, 22:02:42 EDT X-Humanist: Vol. 1 Num. 265 (265) Autobiographies of HUMANISTs Second Supplement Following are 21 additional entries and updates to the collection of autobiographical statements by members of the HUMANIST discussion group. Further additions, corrections, and updates are welcome, to MCCARTY at UTOREPAS.BITNET. W.M. 28 August 1987 From: Subject: Date: X-Humanist: Vol. 1 Num. 266 (266) *Barnard, David T. Head, Department of Computing and Information Science, Queen's University Kingston, Ontario, Canada K7L 3N6; 613-545-6056 My research interests are in communication systems, information systems, and literary applications. 
In the latter area, I collaborate with George Logan (English) and Bob Crawford (Computing Science). Our joint work has involved development of coding standards for documents being used in textual analysis, investigation of text structures for electronic books, and some preliminary work toward building an archive based on our encoding standard. I have just completed a five-year term as Director of Computing and Communications Services. From: Subject: Date: X-Humanist: Vol. 1 Num. 267 (267) *Baumgarten, Joseph M. I teach Rabbinic Literature, Dead Sea Scrolls, and related subjects at the Baltimore Hebrew College, 5800 Park Heights Ave, Baltimore, Md. 21215. Aside from using a Compaq computer for word processing in English and Hebrew, I am especially interested in CD-ROM's for accessing biblical and rabbinic sources in the manner of TLG. I am awaiting the results of the CCAT program to enable access to CD ROMs with IBM type computers. From: Subject: Date: X-Humanist: Vol. 1 Num. 268 (268) *Beckwith, Sterling 248 Winters College, York University, 4700 Keele St., North York, Ontario (416) 736-5142 or 5186. I teach Music and Humanities at York University, have instigated and taught the only Humanities course dealing with computers that is currently offered there, under the rubric of Technology, Culture and the Arts, and serve as coordinator of computer music and general nuisance on academic computing matters in both the Faculty of Arts and of Fine Arts at York. I was the first researcher in an Ontario university to work intensively on the design of educational microworlds (for exploring and creating musical structures) using the then-obscure and still-poorly-exploited computing language known as LOGO. This led to my present interest in discovering what today's AI languages and methods can offer as vehicles and stimulating playgrounds for music-making and other kinds of artistic and intellectual creation. From: Subject: Date: X-Humanist: Vol. 1 Num. 
269 (269) *Bing, George 154 Thalia St., Laguna Beach, CA 92651; Phone: (213) 820-9410 I am a student at UCLA, and I work for the Humanities Computing program here to support the computer needs of the Humanities departments. From: Subject: Date: X-Humanist: Vol. 1 Num. 270 (270) *Brainerd, Barron Department of Mathematics, University of Toronto, Toronto, Ont., Canada M5S 1A5 I am professor of mathematics and linguistics. My particular professional interests are in quantitative stylistics (using for the most part statistical methods) and early modern English. I have an Apple at home and an XT at the university and program naively in Basic and Snobol. I access SPSSX, which among other things I use in my course 'Statistics for Linguists,' via CMS. From: Subject: Date: X-Humanist: Vol. 1 Num. 271 (271) *Burnard, Lou [note change of address, effective from 24th August ] I work at Oxford University Computing Service, where I am responsible for the Text Archive and for database support and design. I have designed and even written many bits of text processing software, notably OCP, FAMULUS and recently a general purpose text-searching interface to ICL's CAFS hardware search engine. But I don't think academics should write software at that level any more; just good interfaces to standard packages such as INGRES (or other SQL compatible dbms), BASIS... My main enthusiasm remains database design, which I see as an important and neglected area of humanities computing. From: Subject: Date: X-Humanist: Vol. 1 Num. 272 (272) *Church, Dan M. Associate Professor of French, Vanderbilt University Box 72, Station B, Nashville, TN 37235, (615) 322-6904 (office), (615) 292-7916 (home) I have produced computer-assisted learning exercises for elementary French courses and a database containing information on all plays produced in state-subsidized decentralized theaters in France since World War II. And I have plans for many more projects using computers in the Humanities.
From: Subject: Date: X-Humanist: Vol. 1 Num. 273 (273) *Erdt, Terrence Graduate Dept. of Library Science, Villanova University, Villanova PA 19085, ph. (215) 645-4688. My interests, at this point in time, can be said to be optical character recognition, the scholar's workstation, and the computer as medium from the perspective of the field of popular culture. From: Subject: Date: X-Humanist: Vol. 1 Num. 274 (274) *Gold, Gerald L. Department of Anthropology, York University, North York, Ont. M3J1P3; (416) 225 8760 (home); (416) 736 5261 (office) I am a cultural anthropologist and a Metis (half-humanities/half-social sciences). I have developed an interest in the relationship of qualitative and quantitative data. More specifically, how can a computer assist with the storage and retrieval of field notes, archival materials, interviews, life histories and other textual materials? Of specific interest is the preservation of the intrinsic character of narrative while using the computer as an analytical tool that can assist in statistical overviews and tabulation. In this sense, I am thinking beyond 'content analysis', which limits the qualitative side of data recovery. Some of my solutions are relatively simple, but I would like to discuss them and get feedback from others. More important, I am open to the suggestions and proposals that may reach my terminal. From: Subject: Date: X-Humanist: Vol. 1 Num. 275 (275) *Goldfield, Joel D. Assistant Professor of French, Plymouth State College, Plymouth, NH 03264 USA My exposure to computers began in Saturday morning courses offered to ambitious high school students. I took FORTRAN 4 and "Transistor Electronics" in the early 1970's. The FORTRAN 4 manual was poorly written and the language itself seemed almost totally worthless for my musical and communications-oriented interests, so I summarily forgot it and paid more attention to French, literature, science and math, all of which seemed more useful.
Also, some of my home electronic projects worked, some not, just like computer programs, as I later discovered. Although I majored in Comp. Lit. (French, German, Music) in college, I took a few math courses and had to complete computer assignments in BASIC, invented by a couple of genial professors in the same department. The son of the major architect was to be one of my "students" that summer when I served as an undergraduate teaching assistant on a language study abroad program in Bourges, France. How I ever successfully completed those BASIC programs on figuring probabilities for coinciding birth dates, etc., I'll never know. Most of what I wrote was based on "Euclid's Advanced Theorem," as we called it on our high school math team: "trial and error." For my doctoral degree at the Université de Montpellier III, I found that I needed to catalogue, sort and evaluate the distribution of vocabulary in a particular work of fiction in order to better understand the author's strange symbolic system and diachronic mixing of associated terms. I also discovered a French frequency dictionary that would supply an apparently valid and reliable norm for external comparison with the work's internal norms. Although my return to the States made on-line querying impossible, I was able to obtain a printout of all words, since, happily, the work had been included in the frequency dictionary's compilation. I learned as much of "C" and "awk" (a "C"-like language under the UNIX system) as I needed to write programs to complement UNIX utilities. A colleague in Academic Computing graciously "tutored" me on many esoteric aspects of UNIX that were, and probably still are, obscure in its documentation. I worked on a methodology to organize my word, stylistic, and thematic data for computer-assisted research. Without this need and organizational "forethought" that also evolved as I learned more and more about the utilities and languages, all programming fireworks would have been useless sparkles.
My major academic interests are computer-assisted literary research applied to literary criticism, computer-assisted language instruction/interactive video, foreign language teaching methodologies, and excellent foreign language/culture teaching. From: Subject: Date: X-Humanist: Vol. 1 Num. 276 (276) *Hockey, Susan Oxford University Computing Service, 13 Banbury Road, Oxford OX2 6NN England; telephone: +44 865 273226 After taking a degree in Oriental Studies (Egyptian with Akkadian) at Oxford University, I started my career in computing in the humanities as a programmer/advisor at the Atlas Computer Laboratory, which at that time was providing large-scale computing facilities for British universities. There in the early 1970's I wrote programs to generate non-standard characters on a graph-plotter and was involved with the development of version 2 of the COCOA concordance program. In 1975 I moved to Oxford and began to develop various services for computing in the humanities which are used by other universities, including Kurzweil optical scanning, typesetting with a Monotype Lasercomp, and the Oxford Concordance Program (OCP). I am in charge of these facilities and also teach courses on literary and linguistic computing and on SNOBOL. My publications include two books, based on my courses, and articles on various aspects of humanities computing, including concordance software, Kurzweil scanning, typesetting, past history and future developments. I am also series editor for an Oxford University Press series of monographs, Oxford Studies in Computing in the Humanities. I have lectured on various aspects of humanities computing in various corners of the globe, more recently on current issues and future developments for humanities computing, on Micro-OCP and its uses, and on computers in language and literature for a more general audience. I have been a Fellow of St Cross College, Oxford since 1979 and I now look after the computing interests in the college.
My recent activities have been concerned with (1) Version 2 of the Oxford Concordance Program and Micro-OCP, and (2) the Association for Literary and Linguistic Computing, of which I am currently Chairman; I also serve on the editorial committee of the ALLC's journal, Literary and Linguistic Computing. My next project will be concerned with the introduction of computers in undergraduate courses at Oxford. These courses consist almost entirely of the detailed study of set texts, and this project, which is funded under the UK government's Computers and Teaching Initiative, will set up a University-wide system for analysis of these texts via IBM-PC workstations linked to a large VAX cluster at the central service. From: Subject: Date: X-Humanist: Vol. 1 Num. 277 (277) *Hunter, C. Stuart: and to a related study of the impact of the translations of the Psalms on the development of the religious poetry of the Renaissance in England. On the teaching side, I am actively involved not only in teaching basic courses in word processing and database applications in the Humanities but also in developing computer conferencing as a specific teaching tool. From: Subject: Date: X-Humanist: Vol. 1 Num. 278 (278) *Koch, Christian < FKOCH%OCVAXA@CMCCVB > or < chk@oberlin.edu.csnet > Oberlin College, Computer Science Program, 223D King Building, Oberlin, OH 44074; Telephone: (216)775-8831 or (216)775-8380 I think it might be fair to say that I'm the token humanist on the computer science faculty here at Oberlin -- and I love the work. I come to computing from a long and eclectic background in the humanities. Am one of those people who always harbored the hope that a strong interdisciplinary background would ultimately serve a person in good stead. I think that now, working in the general area of cognitive science and computing, I'm probably as close to realizing that hope as I have ever been.
My undergraduate work was in the Greek and Roman classics, to which I added a master's degree in music history with pipe organ performance and another in broadcasting and film art. Ph.D. (1970) was essentially in literary criticism with psychoanalytic emphasis. Computing skills were picked up on the side during the 80's. Have also recently taken time out from the academic scene to work as a therapist with the Psychiatry Department of the Cleveland Clinic. Although I've been at Oberlin for some years, I joined the computer science faculty only in 1986 and am still sorting out directions and options. My computing interests are currently in the general area of natural language understanding, more specifically systems of knowledge representation and processing. As a kind of pet project I am working on developing an expert system for specialized psychiatric diagnoses. At the more practical level, in addition to teaching some traditional CS courses, I am charged with developing programming courses aimed at the student who wishes to combine computer programming skills with a major in a non-computer-science area. In the immediate future is the offering of a course dealing with the computer analysis of literary texts. Am also introducing a more theoretical course in the general area of mind and machine (cognitive science overview). Would much appreciate hearing from persons who would like to share experiences or make suggestions in these areas, as well as in areas where computing may be involved in the analysis of 'texts' in music (computer-assisted Heinrich Schenker?) and the other arts. All ideas having to do with interesting ways of combining computer programming and other traditionally non-quantitative areas of study would be most welcome. From: Subject: Date: X-Humanist: Vol. 1 Num. 279 (279) *Kraft, Robert A.
Professor of Religious Studies, University of Pennsylvania 215-898-5827 Coordinator of External Services for CCAT (Center for Computer Analysis of Texts), co-director of the CATSS project (Computer Assisted Tools for Septuagint Studies), director of the Computerized Coptic Bible project, chairman of the CARG (Computer Assisted Research Group) of the Society of Biblical Literature, editor of the OFFLINE column in the RELIGIOUS STUDIES NEWS (dealing with computers and religious studies). BA and MA Wheaton (Illinois) College 1955 and 1957 (Biblical Lit.); PhD Harvard 1961 (Christian Origins). Assistant Lecturer in New Testament at University of Manchester (England) 1961-63; thereafter at University of Pennsylvania. Main interests are in ancient texts, especially Jewish and Christian, paleography, papyrology, codicology, and in the historical syntheses drawn from the study of such primary materials. The computer provides a fantastic shortcut to traditional types of research, and invites new kinds of investigation and presentation of the evidence. I am especially anxious to integrate graphic and textual aspects (e.g. in paleographical and manuscript studies), including scanning and hardcopy replication. From: Subject: Date: X-Humanist: Vol. 1 Num. 280 (280) *Kruse, Susan I am a Computer Advisor within the Humanities Division of the Computing Centre at King's College London. Although many Universities in Britain increasingly have a person within the Computer Centre who deals with humanities enquiries, King's College is unique in having a Humanities Division. There are eight of us within the division, some with specific areas of expertise (e.g. databases, declarative languages) and others (like myself) who deal with general issues. Some of us are from computer backgrounds; others, like myself, are from a humanities background (in my case archaeology).
We cater to all users within the College, but specialise in providing a service for staff and students in the arts and humanities. This primarily involves advising, teaching, and writing documentation. From: Subject: Date: X-Humanist: Vol. 1 Num. 281 (281) *Logan, Grace R. Arts Computing Office, PAS Building, University of Waterloo, Waterloo, Ontario. I received my B.A. at Pennsylvania State University in 1956 and my M.A. at the University of Pennsylvania in English in 1960. My training in computing has been largely an apprenticeship supplemented by courses at Waterloo in math and computing. I am now a consultant and programmer for the Arts Computing Office at the University of Waterloo where I have been since 1970. I have been associated with computing in the humanities since 1958 and I helped to organize the Arts Computing office at Waterloo in the early seventies. I was a member of the organizing committee for ICCH/3. I am active in the ACH and OCCH where I am serving on the executive committees. I have also been active in the MLA where I have served as the convenor of the computer section. I have developed program packages for use by Arts users and I have taught courses in computer literacy for the Arts Faculty at Waterloo. I regularly attend computing conferences where I have presented several papers. I have also been invited to give several seminars and workshops on computing in the Arts by various groups and organizations. From: Subject: Date: X-Humanist: Vol. 1 Num. 282 (282) *Sinkewicz, Robert E. Senior Fellow, Pontifical Institute of Mediaeval Studies, member of the Centre for Computing in the Humanities at the University of Toronto. Principal Interests: the use of relational databases in humanities research, and the development of text databases in Byzantine religious literature. Major Research in Progress: The Greek Index Project, an information access system for all extant Greek manuscripts. By Sept. 
1988 we propose to have online a relatively complete listing of all Greek manuscripts as well as manuscript listings for authors of the Late Byzantine Period. IBM SQL/DS is our principal software tool. From: Subject: Date: X-Humanist: Vol. 1 Num. 283 (283) *Sitman, David Computation Centre, Tel Aviv University, Israel I teach courses in the use of computers in language study and I am an advisor on computer use in the humanities. From: Subject: Date: X-Humanist: Vol. 1 Num. 284 (284) *Tompa, Frank Wm. Associate Professor of Philosophy, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1 I am an interested outsider. My fields of research include the development of mathematical logic in the 19th century (which in a way made modern computation possible), and problems confronting cognitive science (i.e. questions concerning the limits of the applicability of our current conception of computation). On the applied side, the University of Waterloo has long been a leader in software development, and in the area of computer application. As a result, we have had ready access to powerful computing resources for many years. I, for instance, have been processing my words since the early '70s (when IBM's ATS was in vogue, and VDTs were a novelty). From: Subject: Date: X-Humanist: Vol. 1 Num. 286 (286) *Winder, Bill [Accents are indicated as follows: \C = caret; \G = grave; \A = acute.] As a doctoral candidate at the University of Toronto's French Department, my computing activities are largely conditioned by my thesis topic: "Maupassant: predictability in narrative". The fundamental axis of this research concerns automatic abstracting: in precisely what way can automatic abstracting techniques be said to fail with literary texts? 
Maupassant's 310 short stories were chosen as the literary corpus primarily because the format of the genre is computationally manageable on a microcomputer, the plot and style of Maupassant's stories are straightforward, and the number of stories allows for statistically relevant comparisons between pieces. My research on abstracting should offer the basis for a coherent approach to critical model building, particularly with respect to the semantic value of predictability in text and in the critical model itself. This endeavour has led me to Deredec, (Turbo) Prolog, and, more recently, Mprolog. The use of the first of these is presented in CHum's issue on France, where J.-M. Marandin discusses "Segthem", a Deredec automatic abstracting procedure. My interest in Prolog, as an alternative to Deredec, developed out of studies in combinatory logic, natural deduction, and Peirce's existential graphs. In connection with my research in literary computing, I am a teaching assistant for the French Department's graduate computer applications course, and in that capacity have taught word processing and demonstrated packages such as Deredec, BYU concordance, TAT (my own French concordance package), COGS, and MTAS. This recent interest in computing (1985) grew out of a seasoned interest in semiotics (1979). In France, I completed a Ma\Citrise de Lettres Modernes (1982) with the Groupe de S\Aemiotique in Perpignan, and a Diplome d'Etudes Approfondies (1984) with A. J. Greimas's Groupe de Recherche en S\Aemio-linguistique at l'Ecole des Hautes Etudes in Paris. I am presently a member of the Toronto Semiotic Circle, and served in June 1987 as secretary to the International Summer Institute for Semiotic and Structural Studies, site of a promising encounter between researchers in artificial intelligence, semiotics, and humanities computing.
This encounter is in fact indicative of my overall ambition in computing, which is to assess the computational component of semiotic theories, particularly those of L. Hjelmslev and C. S. Peirce. From: Willard McCarty Subject: Hyperties: a "hypertext" system Date: 29 August 1987, 17:40:48 EDT X-Humanist: Vol. 1 Num. 287 (287) Following is a brief description of a hypertext system that Ben Shneiderman (Computer Science, Maryland) has recently announced. I pass it on to you because I think that the idea of hypertext is potentially of great interest to designers of software in our area. Anyone who has a description of the similar work that has gone on at Brown might consider posting it here also. From: mbb@portia.Stanford.EDU Subject: Spelling checkers for non-English languages Date: Fri, 04 Sep 87 13:42:38 -0800 X-Humanist: Vol. 1 Num. 288 (288) Hi all, I am looking for recommendations on spell-checking programs for non-English languages that run on either the Mac or IBM PC compatibles. I'm primarily interested in the Romance languages, as well as German and Russian. Please send me a note if you've any suggestions or recommendations. Send directly to me (gx.mbb@stanford), and I'll summarize the responses I receive to the Humanist list. Many thanks, Malcolm Brown Stanford University From: LOU@VAX.OXFORD.AC.UK Subject: Text Archive - Change of Address Date: 5-SEP-1987 13:22:05 GMT X-Humanist: Vol. 1 Num.
289 (289) Please note that all correspondence, enquiries etc concerning the Oxford Text Archive should be sent to the following address:- ARCHIVE @ UK.AC.OX.VAX (JANET) From BITNET, make sure that your Mailer is constructing the TO: part of the header correctly (it should say ARCHIVE%UK.AC.OX.VAX) and forwarding the message to MAILER @ UKACRL.BITNET From EDU (etc), the address is now ARCHIVE%VAX.OXFORD.AC.UK @ UCL.CS.NSS There have been several changes in our connexion to the international networks recently; old methods of connecting with us may suddenly cease to work. Lou Burnard P.S. Please do not send Text Archive enquiries to this address (LOU@OX.VAX) unless you want them to be ignored until November! I shall be in Germany (try MIG04W@DGOWD01.EARN) until that date, but messages to ARCHIVE will still be acted on. In line with Humanist Traditions, I had intended to send everyone 98 announcements of this fact, but time alas precluded. From: LOU@VAX.OXFORD.AC.UK Subject: Date: 5-SEP-1987 13:36:27 GMT X-Humanist: Vol. 1 Num. 290 (290) ======CORRECTED VERSION OF PREVIOUS MESSAGE================ Please note that all correspondence, enquiries etc concerning the Oxford Text Archive should be sent to the following address:- ARCHIVE @ UK.AC.OX.VAX (JANET) From BITNET, make sure that your Mailer is constructing the TO: part of the header correctly (it should say either ARCHIVE%UK.AC.OXFORD.VAX@AC.UK or ARCHIVE@VAX.OXFORD.AC.UK) and forwarding the message to MAILER @ UKACRL.BITNET From EDU (etc), the address is now ARCHIVE%VAX.OXFORD.AC.UK @ UCL.CS.NSS There have been several changes in our connexion to the international networks recently; old methods of connecting with us may suddenly cease to work. Lou Burnard P.S. Please do not send Text Archive enquiries to this address (LOU@OX.VAX) unless you want them to be ignored until November! I shall be in Germany (try MIG04W@DGOWD01.EARN) until that date, but messages to ARCHIVE will still be acted on.
In line with Humanist Traditions, I had intended to send everyone 98 announcements of this fact, but time alas precluded. (But I managed two at least.) From: "Robin C. Cover" Subject: Generalized (Descriptive) Markup Language for Lexica Date: Sun, 6 Sep 1987 20:52 CST X-Humanist: Vol. 1 Num. 291 (291) Is anyone else working with SGML or another descriptive markup language for tagging digitized lexica? I have seen the AAP manuals implementing SGML for electronic publishing, but this still leaves room for many decisions, including which features of SGML to actually implement, and whether to abbreviate some of the more cumbersome tagging. If you can offer advice or help for tagging lexica, please notify me via BITNET or postal mail: Robin C. Cover; 3909 Swiss Avenue; Dallas, TX 75204; 214/824-3094 (w); 214/296-1783 (h) From: Randall Jones Subject: COPYRIGHT AND TEXT FILES Date: Mon, 7 Sep 1987 22:25 MDT X-Humanist: Vol. 1 Num. 292 (292) One issue that has not been adequately discussed with regard to the exchange of literary texts is that of copyright. This may not be a significant problem in other countries, but it is very real in the U.S. Virtually every edition of a literary text worth coding in electronic form is protected by copyright law. While most scholars do not worry about obtaining permission from the publisher for work that is done internally, permission must be secured in order to publish a concordance, index, etc. based on that edition. I can see potential problems if texts that were originally intended for internal use suddenly begin to be exchanged around the world. We have secured copyright permission from publishers for several texts we have been working on here at Brigham Young University (e.g. the Hamburg Edition of Goethe), and we are well aware that the permission does not grant us the right to pass on the electronic version to other users.
Perhaps we can negotiate with Beck in Munich for this permission, but I am not optimistic that they will be positively disposed toward the idea. Any other thoughts? Randall Jones Humanities Research Center Brigham Young University Provo, Utah 84602 From: Randall Jones Subject: RECOGNITION FOR COMPUTER PROGRAMS Date: Mon, 7 Sep 1987 22:27 MDT X-Humanist: Vol. 1 Num. 293 (293) If I may be allowed to resurrect the issue of recognition for computer programs, HUMANIST readers may be interested to learn that the Modern Language Association of America and the Center for Applied Linguistics have recently entered into an agreement with IBM to implement a system of peer review for language-oriented software written for IBM microcomputers and compatibles. The software may be for instruction or research in literature, writing, second language learning, or linguistics. Recommended software will be made available to the public through an independent software-distribution center. Authors will receive a royalty from the sale of their software, but, perhaps more important, they will receive a letter from the MLA or CAL informing them that their software has been selected to be made available to their colleagues, a kind of "seal of approval" from a recognized body. It may not mean as much as an article in a journal, but it certainly should carry some weight. To request additional information or a software submission form write to Carol Zuses, Software Evaluation Project, Modern Language Association, 10 Astor Place, New York, NY 10003-6981 (for literature, writing and CALL other than ESL) or Barbara Robson, Software Evaluation Project, Center for Applied Linguistics, 1118 22nd St. NW, Washington, D.C. 20037 (for linguistics and ESL CALL). From: Mark Olsen Subject: ESL software and reviews Date: Tue, 08 Sep 87 15:33:16 MST X-Humanist: Vol. 1 Num.
294 (294) I am on the thesis committee of a student in the English as a Second Language program who is looking into software for teaching English to a wide variety of students. His project calls for a review of the available software and literature dealing with computer use in ESL programs. This is a long way from my area of expertise, and I am wondering if there are journals, bibliographies, and so on that would provide a useful starting point for a project of this nature. The student has already run through a number of journals including CALICO, but any additional title suggestions would be greatly appreciated. Thank you, Mark Olsen From: Willard McCarty Subject: The ARTFL Bibliography Date: 9 September 1987, 12:32:12 EDT X-Humanist: Vol. 1 Num. 295 (295) A bibliographical listing of the contents of a major database of French texts is now available to HUMANISTs. It is the work of a cooperative project of the Centre National de la Recherche Scientifique (CNRS) and the University of Chicago and is known as the American and French Research on the Treasury of the French Language (ARTFL). The file is 1900 80-character lines long. Because we do not yet have the facility for centralized storage of texts, I am keeping it in my account and will send a copy to anyone who wants it. Requests should come to me directly. Detailed information about ARTFL can be obtained from the ARTFL Project, The University of Chicago, Dept. of Romance Languages and Literatures, 1050 East 59th Street, Chicago, Illinois 60637 U.S.A., (312) 962-8488. I do not know if the Project has an e-mail address. From: "Timothy W. Seid" Subject: WORD DIVISION OF ANCIENT MANUSCRIPTS Date: Wed, 09 Sep 87 15:24:54 EDT X-Humanist: Vol. 1 Num. 296 (296) My professor, Dr. Stanley Stowers, came to me one day with the idea of a computer program that would be able to generate the possible word divisions in a Greek text.
Since ancient Greek was written in a continuous script until about the ninth century A.D., the word divisions in our critical texts are based on later interpretive editing. I shuddered at the thought of what would be involved in programming such a thing. Then one day while I was working with the TLG texts I came up with an idea. We at Brown are using a system called Isocrates to access TLG. Rather than searching through the texts themselves, Isocrates has an index to each author and an index to the entire corpus. Isocrates was developed by Greg Crane at Harvard in conjunction with the Institute for Research in Scholarship at Brown. (Of course, all due credit to Dr. Brunner.) My idea was to generate a sequence of strings from a Greek text and match each one against a file of Greek words. I have finally completed a working version on the mainframe. For the first version I've used the word file for the Septuagint (Greek Old Testament) since it was the easiest to get to in the Isocrates files. Here is a sample of the output for Galatians 1:6-8:

LINE 6
qauma qaumazw a ma a zw w o oti ti iou o ou outw outws w ws ta taxews a ew ews w ws me met meta ta a ti tiqesqe qes qesqe ea a ap apo pot potou o otou to tou o ou ouk kale a esan sa san a ant to o os su umas ma a as en xariti a ar ti xristou to tou o ou ei eis eteron te ro o on eu a ge ion o on noo

LINE 7
o o ou ouk esti estin ti tina in ina a all allo o ei mh mhti h ti tines in esei ei eis eisi eisin sin in o oi ta tarassontes a ar ara aras a as son o on ontes te su umas ma a as kai a ai qelontes o on ontes te me met meta metastreyai ta tas a as a ai to o eu a ge ion o on to tou o ou oux xristou to tou o ou

LINE 8
a all alla a kai a ai aie ea ean a anh nh h hmeis me ei eis ish sh sha h a aggelos ge elos o os ec o ou our oura ouran a o ou eu a ge zh h ta a ai umin in par a ar ro o eu h ge elisa isa sa a me meq a umin in ina a ana anaq anaqema a qema ema ma a estw w

I'm not sure how much explanation is
needed or how much more I should tell. I'm not too optimistic that I will discover any places which could be divided up differently. I would appreciate any comments, suggestions, questions, or criticisms. Tim Seid From: CAMERON@EXETER.AC.UK Subject: CALL Conference EXETER Date: Wed, 09 Sep 87 19:17:54 BST X-Humanist: Vol. 1 Num. 297 (297) For all HUMANIST readers - accommodation still available if requested IMMEDIATELY.

UNIVERSITY OF EXETER
PROGRAM STRUCTURE and PRINCIPLES in CALL
Lopes Hall, September 21-23, 1987. COST: 50 pounds all inclusive - pro rata rates available

MONDAY September 21
16.30 - 18.00 Registration
18.00 Reception
19.00 Dinner
20.15 S. Dodd (Exeter), CALL and the chalkface. D. F. Clarke (U.E.A.), Design considerations in the production of extended computer assisted reading materials

TUESDAY September 22
08.00 Breakfast
09.30 P. Hickman (La Ste Union), Structuring interactive grammar practice programs. D. Ferney (Wolverhampton Poly.), A computer model of the French native speaker's skill with grammatical gender.
10.45 Coffee
11.15 O. Durrani (Durham), Designer labyrinths: text mazes for language learners. A. Benwell (Lanchester Poly.), How we use HELP facilities.
13.00 Lunch
14.30 A. Kukulska-Hulme (Aston), Liberation or constraint: the usefulness of a program interface to a vocabulary database. G. A. Inkster (Lancaster), Databases as a learning activity.
15.45 Tea
16.15 Workshop: Reading programs - D. F. Clarke (U.E.A.); I. Morris (Manchester Poly.). Language programs - D. Ashead (B'ham); O. Durrani (Durham). Wordprocessing aid - L. M. Wright (Bangor)
18.30 Wine reception
19.00 Dinner
20.15 J. D. Fox (U.E.A.), Can CAL aid vocabulary acquisition? L. M. Wright (UC, Bangor), Aspects of text storage and text compression in CALL.

WEDNESDAY September 23
08.00 Breakfast
09.30 D. Scarborough (City London Poly.), The computer as a teaching resource on a commercial French course. J. E. Galletly (Buckingham), Elementary verbal phrase syntax-checker for French sentences.
10.45 Coffee
11.15 Workshop: Language programs - M. Blondel (City London Poly.); B. Farrington (Aberdeen); P. Hickman (La Ste Union); D. Ferney (Wolverhampton); M. L'Huillier (Brunel).
13.00 Lunch
14.15 B. Farrington (Aberdeen), A.I.: grandeur et servitude. M. Yazdani (Exeter), Tools for second language teaching. Future projects.
15.45 Tea

KCCameron/EXETER Tel. 0392 - 264216

From: "Bill Winder (416) 960-9793" Subject: Date: 9 September 1987, 18:51:28 EDT X-Humanist: Vol. 1 Num. 298 (298) I have two questions that the Humanist group may be able to help me with. First, does anyone know of a good archive management tool? I'm looking for something that a library can use to index all its stock, whether it be books, maps, photographs, manuscripts, paintings, statues, etc. One suggestion was Revelation, but users of the package whom I have contacted don't recommend it. Dbase might seem appropriate, but the fixed field length is a problem when dealing with an extremely heterogeneous data set. I thought of AskSam -- a textbase -- but I was hoping to find something designed for archives. Any suggestions? Secondly, I'm trying to find the list of network nodes in Athens, Greece. Are there no Athenian Humanists? A colleague in Athens would like to have access to EARN (the European network) but doesn't know where a node mainframe is in Athens. (I was able to find the University of Patras in the CMS Help, but no connection date is indicated, nor site -- I'm not sure whether it is in Athens or not.) Many thanks, Bill Winder (Winder at Utorepas) From: Undetermined origin c/o Postmaster Subject: bibliographic databases, network nodes in Greece Date: 10 September 1987 09:32:48 CDT X-Humanist: Vol. 1 Num. 299 (299) On Bill Winder's questions: Second question first: Our Bitnet tables show exactly two nodes in Greece: GREARN in Crete and CRPATVX1 at the University of Patras.
Database question next: Depending on what you are looking for in a database system, you may want to check Revelation out again despite the reports you've gotten. It has variable-length fields, great flexibility, and appears to combine many of the strengths of the relational data model without its rigidity. Its three major drawbacks, according to a knowledgeable, enthusiastic fanatic on the subject, are its documentation, its documentation, and its documentation. I've heard this report elsewhere too, so I believe it. By all reports the program itself is very, very good -- if one can put up with the manual. For general-purpose work on an IBM PC (that seems to be a hidden specification in your search), I believe anyone ought to look long and hard at RBase System V and DataEase. The one leans to power and the other to ease of use for beginners and occasional users, but each is very good. I served on an evaluation committee that spent months looking at programs, ads, and specs, and then a full week performing tests and sample database designs on a few finalists, and these two programs were clearly at the top. There are also a number of programs aimed strictly at bibliographies, which may prove ideal for your application: Professional Bibliographic Systems in Ann Arbor, Michigan, has a rather nice program that implements the ANSI standard for bibliographic description and can handle the problem you have with non-standard media very easily. There are others, of course, but I don't know enough to say anything useful. Perhaps others on the list will comment on Sci-Mate, Notebook II, and so on. (This message has 47 lines.) From: Willard McCarty Subject: A sin of omission & work in progress Date: 11 September 1987, 20:43:26 EDT X-Humanist: Vol. 1 Num. 300 (300) When I advertised the ARTFL bibliography (of which several copies have subsequently been sent out), I failed to note that Mark Olsen was responsible for securing it for us.
My apologies and thanks to him for supplying the sort of thing that makes HUMANIST valuable. Lou Burnard's snapshot of the Oxford Text Archive is another example. Work still proceeds on centralized storage of such things on the UTORONTO node for automatic retrieval on demand. Please be patient. Meanwhile, I'll be happy to distribute the ARTFL bibliography and anything else appropriate. If you have something you think might appeal to HUMANISTs, let me know. From: Willard McCarty Subject: The Humanities Computing Yearbook Date: 14 September 1987, 07:24:34 EDT X-Humanist: Vol. 1 Num. 301 (301) On behalf of Oxford University Press, the publishers, the Centre for Computing in the Humanities is pleased to announce a new periodical, The Humanities Computing Yearbook. Ian Lancashire and Willard McCarty are the co-editors. An editorial board is in process of being set up. The first volume, scheduled for publication in the summer of 1988, aims to give a comprehensive guide to publications, software, and specialized hardware organized by subject or area of application. Research and instructional work in many fields will be covered: ancient and modern languages and literatures, linguistics, history, philosophy, fine art, and areas of computational linguistics affecting text-based disciplines in the humanities. The more notable software packages will be described in some detail. We welcome your suggestions of what we should consider. We are especially interested in discovering innovative software that may not be widely known, including working prototypes of systems in development. Electronic correspondence should be sent to YEARBOOK@UTOREPAS.BITNET, conventional mail to the Editors, The Humanities Computing Yearbook, Centre for Computing in the Humanities, Univ. of Toronto, 14th floor, Robarts Library, 130 St. George Street, Toronto, Canada M5S 1A5. Our telephone number is (416) 978-4238. Please feel free to distribute this notice. 
Ian Lancashire Willard McCarty 14 September 1987 From: KRAFT@PENNDRLN Subject: ARTFL Date: Monday, 14 September 1987 2234-EST X-Humanist: Vol. 1 Num. 302 (302) Concerning the ARTFL Project and materials, it would be useful for HUMANIST to carry details of how institutions can become subscribers to this data bank, if this has not already been done. It is my impression that costs are reasonable and benefits great for Romance Language departments, although I'm not sure that the French people at my own institution have taken advantage of the situation yet. Is there an ARTFL spokesperson on HUMANIST to give precise details? (Or to repeat them, if I missed it at first.) Bob Kraft From: KRAFT@PENNDRLN Subject: Copyright Date: Monday, 14 September 1987 2301-EST X-Humanist: Vol. 1 Num. 303 (303) Apropos Randy Jones' important query/note about the copyright issue, much thought has necessarily been given to these matters by those involved in encoding ancient texts en masse, particularly the Thesaurus Linguae Graecae (TLG) project directed by Ted Brunner at U.California Irvine, and the newly formed Latin counterpart being coordinated at the Packard Humanities Institute (PHI) under the direction of Stephen Waite in Los Altos, CA. It is also a central issue for CCAT in gathering various materials for a CD-ROM. We have found that some publishers are very interested in cooperation, especially if they can be made to see that having their material in electronic form can benefit them as well. Other publishers have adopted a go slow (or don't go) policy, since they have not yet thought much about the impact of computerized distribution on their materials. Still others are actively entering the computerized market, and thus reserve circulation rights to themselves. One lesson for authors and text editors is perhaps to attempt to retain copyright control for these purposes, or at least to recover the copyright if a publisher discontinues an author/editor's publication. 
I suspect that in the long run we will find that circulation of materials in electronic form increases the market for hard copy of the same materials (at least as present study and reading habits are constituted), but there is little hard evidence on which to test such a hypothesis, and many publishers are wary of what might happen with the increase of electronic distribution. Bob Kraft From: MCCARTY@UTOREPAS Subject: Technical flaws Date: 16 September 1987, 07:04:30 EDT X-Humanist: Vol. 1 Num. 304 (304) A HUMANIST just pointed out to me that on this VM/CMS system an uploaded file often ends with what appears to be a double quotation mark, <">. Whatever it is, it becomes an end-of-file marker (1A in hex) when that file gets downloaded to a PC. If the downloaded file has been made part of another file, the end-of-file marker may make the rest of the larger file inaccessible until it has been removed. (I hope this is clear!) So, if you download something from HUMANIST that ends abruptly or seems to be much shorter than the corresponding entry in the directory would lead you to believe, one or more spurious hex 1A's are likely at fault. I have used the Norton Utility to remove them, though until now I have not understood where they came from. If you encounter this problem with a file from HUMANIST and cannot solve it, please let me know. The second technical flaw I have to report concerns discursive headers, i.e., those that specify the sender's full name and may also give his or her address, telephone number, etc. At least from this VM/CMS system, some messages so adorned are oddly treated by other mailers, which may take the discursive part for the electronic address. As far as I know, ListServ has not run into this problem, but I have when sending messages directly to individuals. For this reason, my full header has become a footer. 
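[Editorial aside: the hand-cleanup described above can be automated. The following is a minimal modern sketch, not a tool mentioned anywhere in this discussion; the function name and command-line usage are invented for illustration. It simply deletes every stray Ctrl-Z (hex 1A) byte from a downloaded file, the same fix Willard performed with the Norton Utility:]

```python
# Strip stray Ctrl-Z (0x1A) bytes from a file downloaded from VM/CMS.
# Old CP/M and DOS tools treated 0x1A as an end-of-file marker, so a
# stray one embedded in a larger file hides everything after it.
# Hypothetical illustration only; not a tool from the 1987 discussion.
import sys


def strip_eof_markers(data: bytes) -> bytes:
    """Remove every 0x1A byte from the given byte string."""
    return data.replace(b"\x1a", b"")


if __name__ == "__main__":
    # Usage (hypothetical): python strip_ctrlz.py HUMANIST.TXT
    path = sys.argv[1]
    with open(path, "rb") as f:
        cleaned = strip_eof_markers(f.read())
    with open(path, "wb") as f:
        f.write(cleaned)
```

Working on the raw bytes (mode `"rb"`) matters here: opening the file in text mode on DOS-era systems would itself stop reading at the first Ctrl-Z, which is exactly the problem being fixed.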
Willard McCarty Centre for Computing in the Humanities University of Toronto (416) 978-4238 From: Philippa MW Matheson 416 585-4469 Subject: Re: Technical flaw Date: 16 September 1987, 11:26:38 EDT X-Humanist: Vol. 1 Num. 305 (305) My experience with the ~Z (hex 1A) character at the end of some messages is that it is easier to remove while it is still on the mainframe (i.e. *before* bringing it to the PC). If the apparent double quote character (") at the end of a message is deleted, the problem does not occur on the PC. This makes a real incentive to at least browse through my HUMANIST mail on-line (before taking it home to print out and read on the bus on my way back to work...?) Interesting, though, that I have had this problem only with Willard McCarty's contributions--perhaps it is purely local. Philippa Matheson AMPHORAS at UTOREPAS From: "Robin C. Cover" Subject: Offending EOF Characters Uploaded from MS-DOS (ascii) Files Date: Wed, 16 Sep 1987 12:13 CST X-Humanist: Vol. 1 Num. 306 (306) The offending EOF characters Willard has mentioned can be removed from a downloaded file with PC-Write (as well as with Norton Utilities). I think these must originate in MS-DOS files from some word processors when these text files are uploaded to the VM/CMS system. I have asked the mainframe operators if there is any way to eliminate these EOF characters from the (minidisk) file prior to downloading (with some global change), but the answer was "no." In general, the networks seem to handle hi-bit characters and control codes (1-32) in an erratic fashion...unless I am ignorant of some essential fact. I remove these EOF characters from downloaded files with PC-Write's alt-F4, alt-F6 sequence, which locates "non-ascii" (sic!) characters. From: MCCARTY@UTOREPAS Subject: Announcements of new software & of updates Date: 18 September 1987, 10:08:50 EDT X-Humanist: Vol. 1 Num.
307 (307) I would like to propose that HUMANIST be used to announce the availability of new software and of updates to existing packages, whether they be commercial or in the public domain. Advertising in the usual sense seems inappropriate for an academic network, but I think all of us would appreciate knowing about new things in a timely fashion. Until we have the ability to store files centrally, these announcements should probably be brief and should each contain an offer of more information from the sender. Only major updates would likely be of interest. Comments? Willard McCarty Centre for Computing in the Humanities University of Toronto (416) 978-4238 From: MCCARTY@UTOREPAS Subject: A sample report on interesting software Date: 18 September 1987, 22:23:37 EDT X-Humanist: Vol. 1 Num. 308 (308) Please consider the following technical report on CRITIQUE, an interesting example of work in natural language processing being done at the Watson Center of IBM. Many of you will of course already know about this work. I'm circulating the report primarily to suggest what we might publish on HUMANIST and to get your reactions. Please let me know directly what you think. Willard McCarty Centre for Computing in the Humanities University of Toronto (416) 978-4238 From: Subject: Date: X-Humanist: Vol. 1 Num. 309 (309) CRITIQUE CRITIQUE, formerly known as EPISTLE, is a mainframe natural language processing system being developed at the Thomas Watson Research Center of IBM. This system analyzes the syntactic structure of sentences, diagnoses lexical, grammatical, and stylistic errors, offers corrections, and computes various statistics about the writing. It is currently used in IBM both as a research tool, in open-ended research on semantic analysis, and as an aid in the writing of documentation. 
CRITIQUE consists of two major components: a parsing "engine" constructed by means of PLNLP, the "Programming Language for Natural Language Processing," pronounced "Penelope"; and PEG, the "PLNLP English Grammar." The other components of the system are the dictionary, which contains syntactic information associated with 70,000 lexical items, and the style component, which consists of style-checking rules. CRITIQUE begins by separating sentences from each other, then subjects each sentence serially to lexical analysis. Parts of speech are labelled at this stage and word-level errors determined. Successful sentences are fed to a parser that segments them according to a parse-tree and detects grammatical errors. Sentences still free of errors are then examined for stylistic weaknesses, which are reported to the user. CRITIQUE's criteria for style are based on existing manuals modified in consultation with teachers of composition and on errors collected from a large database of IBM office correspondence. Examples of stylistic weakness that CRITIQUE might report are: excessive length of a sentence, excessive complexity, or unclear punctuation. CRITIQUE deals with one sentence at a time. It does not currently keep track of what it regards as infractions or infelicities so as to produce criticism of the writing in general. In recent tests 70% of the sentences given to CRITIQUE were analyzed in a single parse; 15% required multiple parses and were ranked according to a metric of preferred interpretations; and 15% were "fitted" or indeterminate parses, as in sentence fragments. A 20-word sentence, for example, can be processed on an IBM 3081 4Mb virtual machine in 1 CPU-second. [This report is based on a talk given by Dr. Yael Ravin (Natural Language Processing Group, Thomas Watson Research Center) on 26 March at the University of Toronto and is republished from the Ontario Humanities Computing newsletter, 1.3, June 1987, with thanks.]
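[Editorial aside: CRITIQUE itself was written in PLNLP and is not available to us, but the staged design the report describes -- split the text into sentences, analyze each one in turn, and report style problems only for sentences that survive the earlier stages -- can be illustrated with a toy sketch. Every function name, rule, and the 20-word threshold below are invented for illustration; this is not CRITIQUE's implementation:]

```python
# Toy illustration of a CRITIQUE-like staged pipeline: crude sentence
# segmentation followed by a per-sentence style pass. The only "style
# rule" implemented is excessive sentence length; real CRITIQUE also
# performed lexical, grammatical, and richer stylistic analysis.
import re


def split_sentences(text: str) -> list[str]:
    """Crude sentence segmentation on terminal punctuation."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]


def style_warnings(sentence: str, max_words: int = 20) -> list[str]:
    """Flag overlong sentences; the 20-word threshold is invented."""
    words = sentence.split()
    warnings = []
    if len(words) > max_words:
        warnings.append(f"sentence of {len(words)} words exceeds {max_words}")
    return warnings


def critique(text: str) -> dict[str, list[str]]:
    """Map each sentence to its (possibly empty) list of style warnings."""
    return {s: style_warnings(s) for s in split_sentences(text)}
```

The dictionary-of-warnings output mirrors the report's observation that CRITIQUE works one sentence at a time and does not aggregate its findings into criticism of the writing as a whole.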
From: Randall Jones Subject: Date: Fri, 18 Sep 1987 23:45 MDT X-Humanist: Vol. 1 Num. 310 (310) Electronic Text Corporation of Provo, Utah has just announced release 4.2 of WordCruncher (formerly BYU Concordance). They have also announced the ETC Bookshelf Series, indexed texts that can be accessed by WordCruncher's ViewETC. Already available is a set of 47 U.S. Constitution documents. Soon to be released will be the Riverside Shakespeare and selected volumes of the Library of America (e.g. Twain, Melville, Franklin, Jefferson, Faulkner, Cather, etc.). For information about the new features in 4.2 as well as the ETC Bookshelf Series write (via BITNET 4) to me. Randy Jones From: CAMERON@EXETER.AC.UK Subject: Hum.Comp.Yrbook Date: Sun, 20 Sep 87 12:15:04 BST X-Humanist: Vol. 1 Num. 311 (311) Software Reviews - I'm in favour of general distribution of all reviews. Although users' needs are different, details of other progs can provide ideas of adaptation I should also like to see a Clearing House for details of projects in hand and of willingness to cooperate on development Keith Cameron, EXETER From: MCCARTY@UTOREPAS Subject: Reviews and information Date: 21 September 1987, 06:39:40 EDT X-Humanist: Vol. 1 Num. 312 (312) The following is from Joseph Baumgarten, sent to me, and deserves to be passed on. From: Subject: Date: X-Humanist: Vol. 1 Num. 313 (313) As a freshman in the learned circle of Humanist experts, I would greatly welcome the availability of current information about software. Yet, I think that specialized hardware also deserves mention. A specific example: We have had rave reviews of the capabilities of the Ibycus for accessing CD ROM data bases. Yet there are few users who can afford a computer which can do only this. Are there alternatives for users of standard computers? If so, what hardware and software is needed? Will the new generation of PCs be likely to approximate the capabilities of Ibycus? 
Another question in a different area: Is there any progress in the electronic indexing of periodicals in the humanities? Joseph M. Baumgarten (BAUMGARTEN@UMBC.BITNET) From: KRAFT@PENNDRLN Subject: responses to inquiries, etc. Date: Monday, 21 September 1987 0932-EST X-Humanist: Vol. 1 Num. 314 (314) It is difficult to know whether it is most practical to send separate notes to HUMANIST on separate items, or to package things as I am doing here. If there are strong feelings about this, it might be good to have a recommended policy. Doing separate notes may make it easier to organize what HUMANISTs wish to keep. In any event; 1. Brief information on hardware, software, etc., would be welcome via HUMANIST. For more elaborate treatment, including sometimes the source code of programs, Jack Abercrombie's ONLINE NOTES is an appropriate listing that goes out monthly to 200 or so e-mail addresses (write JACKA@PENNDRLS). Indeed, for informational purposes, the contents of such ONLINE NOTES and similar services could perhaps be listed in HUMANIST along with listings of other relevant e-mail sources. 2. More generally, on the information front, I would like response from HUMANISTs on how to reach the non-e-mail multitudes with what they need to know about computer related developments. For several years I have published a brief column (OFFLINE) in the main professional newsletter for Religious Studies, and I suspect that other professional society organs may have similar columns -- and that some professional groups do not. It is clear to me that many colleagues want such information and will not subscribe to the special computer publications (CHum, Scope, LLC, etc.) or join the associations that produce the special publications. These colleagues can be reached most easily through their own professional publications. 
But much of the information they need is not at all "discipline specific" so it occurred to me that perhaps we should encourage the creation of a "syndicated" column approach that we could offer to the various professional societies and editors for inclusion in newsletters, etc. It is one function that a consortium of professional societies might be interested in supporting. I would be interested in being involved, if the idea seems feasible and if others would also get involved, with their own professional groups in view. Responses? 3. Responding to Joseph Baumgarten's note, but with general information. Several HUMANISTs have first hand acquaintance with the IBYCUS System, I suspect, and I certainly do. It would be incorrect to describe the IBYCUS SC as capable ONLY of accessing CD-ROM material. It is an excellent all-around tool for scholarly, and especially textual, work. Its word processing capabilities are adequate and getting better. Its programming language (similar to C) is relatively transparent and powerful. Many programs of various sorts are available in this language (IBYX), which has been in use for nearly a decade by 15 centers and/or individuals who have owned the earlier (mini-computer based) versions of the IBYCUS System. We at CCAT hope to collect a utilities disk of such programs for the IBYCUS SC in the near future. An extensive review of the machine appeared in the inaugural issue of John Hughes' BITS AND BYTES REVIEW last October, and can be xeroxed for anyone interested. 4. For IBM DOS users, CCAT has been developing software for accessing the TLG and CCAT CD-ROMs from IBM type machines. We have worked with the Sony reader and interface card. This software will be available soon for testing, responses, etc., and will be sold for a nominal charge ($75 was announced) through CCAT. The attempt is to emulate IBYCUS for IBM. It will not be entirely successful since IBYCUS was built exactly for this sort of thing, IBM was not. 
But it should provide a reasonable alternative. Bob Kraft, CCAT From: MCCARTY@UTOREPAS Subject: Date: 22 September 1987, 09:29:28 EDT X-Humanist: Vol. 1 Num. 315 (315) The following, intended for HUMANIST, was addressed to the wrong Torontonian node of Bitnet/NetNorth -- a common mistake, easily made. Please note that although my account is on the UTOREPAS node, HUMANIST belongs to UTORONTO. ---------------------------------------------------------------------------- > 21 Sep 87 10:38:50 BST > Via: UK.AC.RL.EARN; Mon, 21 Sep 87 10:38:49 BST > Received: > Via: 000005121001.FTP.MAIL; 21 SEP 87 10:38:46 BST > Date: Mon 21 Sep 87 10:38 > From: D.MITCHELL@QMC.AC.UK > Message-ID: > To: HUMANIST@UTOREPAS > Subject: ATARI ST > > > Does anybody out there know if any software packages exist which will > produce concordances on an Atari ST machine? > > David Mitchell > D.MITCHELL @ UK.AC.QMC ------------------------------------------------------------------------- Please address replies to the sender, not to HUMANIST. Thanks. From: mbb@jessica.Stanford.EDU Subject: Spell checking programs Date: Tue, 22 Sep 87 15:01:29 -0800 X-Humanist: Vol. 1 Num. 316 (316) A few weeks ago, I sent a note to HUMANIST requesting information on spell checking programs. Well, I received only a half-dozen responses, and nearly all of these contained no information but rather simply expressed interest in the outcome! Willard McCarty at UTOREPAS volunteered that he was using a program named MicroSpell, but that was about it. Just as I was about to despair, in comes the latest issue of "Bits & Bytes Review," which contains reviews of four spell checking programs running in the DOS environment!! (Vol 1, #5) This is clearly the most comprehensive coverage of this subject, so anyone wanting to know about this stuff should read these reviews.
Hughes reviews the following programs: JetSpell, which has the potential to check multilingual documents; MicroSpell; WordProof II; and Webster's New World Spelling Checker. As I say, the reviews are well done and make worthwhile reading if you want to find out about the latest and greatest in spell checking programs. Malcolm Brown Stanford University gx.mbb@stanford From: MCCARTY@UTOREPAS Subject: An objection Date: 24 September 1987, 14:31:21 EDT X-Humanist: Vol. 1 Num. 317 (317) A good friend and fellow HUMANIST sent me the following objection to a tendency possibly dormant in our recent efforts to begin circulating reviews and notices of software. This friend felt quite diffident about addressing HUMANIST with a passionate and frank objection, but I am persuaded that we need to hear it. From reactions I have received, I've concluded that one aspect of HUMANIST many of its members find most stimulating is the discussion of issues and ideas. It seems to me that we can both engage in these discussions and circulate software reviews. Comments on the following are welcome. Willard McCarty Centre for Computing in the Humanities University of Toronto (416) 978-4238 From: Subject: Date: X-Humanist: Vol. 1 Num. 318 (318) I was a little taken aback by your suggestion about HUMANIST and software reviews or notices. I am afraid of seeing HUMANIST change from a venue for interesting discussions of computing in general and specific issues relevant to humanities (such as desk-top publishing, copyright and machine-readable text, what students in the humanities are expected to learn by learning programming, etc.) to one largely dominated by exchange of news and gossip about the latest soft- and hard-ware, with the concomitant academic one-upsmanship, not to mention salesmanship. I realize that there is a place for HUMANIST as a locus from which information can be disseminated about new and important programs and machines, but I don't think it should become our focus.
It seems to me that information about commercial programs is fairly well disseminated, and most of the other stuff seems precious (I mean it affects me as a woman preaching is supposed to have affected Dr. Johnson) or rather useless. I know that there is a solid "remnant" of useful stuff even as I say that, but I do think the chaff outweighs the wheat. Now you are free to (and perhaps would be wise to) dismiss this feeling as the amalgam of two rather unworthy emotions: a deep suspicion of all technological panaceas, especially those "silicon-based life-forms" which many humanities computer types seem to me to be peddling under the guise of parsers and "workbenches" and expert systems; and a self-contradictory but equally real jealousy of the technological fast-lane and those who have the knowledge and leisure to travel it. So I hope that your suggestion results in only a small rise in the amount of software reviews, announcements, etc. From: Mark Olsen Subject: A reply to the "nameless dissenter" Date: Thu, 24 Sep 87 21:38:44 MST X-Humanist: Vol. 1 Num. 319 (319) I am not certain that I understand the objection posed by the nameless dissenter. Information about commercially available software -- particularly that aimed at the humanities -- is VERY slow getting out and can be rather outdated. A personal story might be appropriate. Almost two years ago I reviewed a very good package aimed at humanities applications. I submitted the review and before it appeared a radically revised version was released. The nice guy that I am, I rewrote the review and sent in an essentially new piece. Again, before the review could appear, the company was sold, the name was changed and the product updated yet again. I have updated the review; it has not yet appeared, and I have found that, yes, a "new improved" version has been released. Since most of the journals aimed at humanities processing are quarterly academic journals, they tend to have slow turnaround times.
This is fine for book reviews and articles, which do not change as rapidly as computer-related products. In another review that I wrote, the developer phoned me several times while I was writing the review, and released updates in response to the criticisms I raised about the product! Willard's proposal to circulate software reviews to the members of HUMANIST would certainly permit limited distribution of the text while it is current. There is no reason, in my opinion, to assume that HUMANIST would become the realm of "right to silicon life" enthusiasts since most of the participants are active academics, not software developers, whose comments about availability and performance of programs are on the whole rarely self-serving and rarely breach the professional ethic of disclosing personal interest in a product. I am also rather shocked that the writer of this objection could eliminate non-commercial software from serious consideration. The SNOBOL and Icon projects at the University of Arizona, University of Toronto's MTAS and COGS, and numerous PD packages are examples of non-commercial software that is of high quality. Beyond packages, a number of scholars circulate specialized utilities, source code libraries, and other useful tools -- frequently free of charge (or for the cost of distribution). Few of these are EVER covered in the main computer press, but many are of particular interest to individuals involved in humanities processing. A far more serious problem that must be addressed is the issue of electronic publication. It is difficult, if not impossible, to convince journal editors and publishers to accept mss. that have appeared electronically, and even harder to get academic colleagues to accept electronic media as a serious form of publication. I should like to see HUMANIST act as a sort of electronic journal, if only to serve as a trial electronic academic publication forum.
There is, in my opinion, no technical reason barring a fully electronic publication medium. Rather, it is going to take a major shift in what academics view as a substantial publication -- something that can be given whatever credit the piece deserves -- before the medium can be used. Until that time, HUMANIST cannot attract anything more than occasional commentary. From: MCCARTY@UTOREPAS Subject: Russian texts Date: 25 September 1987, 11:04:47 EDT X-Humanist: Vol. 1 Num. 320 (320) The following query is from Harry Gaylord. Please send your replies to him directly, at the address given below. ---------------------------------------------------------------------------- I have been trying to assemble Russian texts for students to work with in stylistics. Does anyone have suitable machine-readable material? I can offer in exchange M. Ju. Lermontov's Geroj Nasego Vremeni which we have keyed in. Harry Gaylord GALIARD@HGRRUG5 From: MCCARTY@UTOREPAS Subject: Both an exchange of ideas and a review of software Date: 28 September 1987, 06:35:58 EDT X-Humanist: Vol. 1 Num. 321 (321) Most recently two HUMANISTs argued about the appearance of software reviews and notices here. One was cautious about it, fearing an invasion of the marketplace; the other was much more positive. It is good to see that our philosophical "flaming" is appreciated. (Do I understand that word correctly? I mean "ardent discussion.") The plan to introduce reviews and notices on HUMANIST should not threaten the exchange of ideas, however. What's likely is that reviews and descriptions of software will be radically summarized for automatic distribution on HUMANIST, but that the full versions will be kept centrally and made available to individuals on request. If the summaries were to become burdensome, they could be moved to a central file, available only on request, but this seems unlikely.
The Humanities Computing Yearbook, which I'm involved with, should make some of the posting of notices unnecessary, but HUMANIST will remain a timely place for new developments to be announced. If that's what you want. Comments? Willard McCarty Centre for Computing in the Humanities University of Toronto (416) 978-4238 From: MCCARTY@UTOREPAS Subject: Flaming defined Date: 28 September 1987, 11:02:00 EDT X-Humanist: Vol. 1 Num. 322 (322) Peter Roosen-Runge, a HUMANIST at nearby York University, has sent me the following learned exposition of the verb "to flame" and its related noun. Be it known that when I attributed flaming to fellow HUMANISTs I did so in utter ignorance of the history of the word -- a grievous scholarly fault for which I apologize. Willard McCarty Centre for Computing in the Humanities University of Toronto (416) 978-4238 ----------------------------------------------------------------------------- From the Hacker's Dictionary (citing terms which came into use in the mid-70s or earlier) "FLAME v. To speak incessantly and/or rabidly on some relatively uninteresting subject or with a patently ridiculous attitude. FLAME ON: v. To continue to flame. See RAVE." But this meaning is now somewhat antiquated; "flame" began to be used as a term of self-derogation to protect oneself against the accusation of flaming. In the middle of an E-mail discussion, one might write FLAME ON FLAME OFF and continue in a less offensive manner. (Note the shift in meaning of the ON here to make possible the "FLAME OFF".) Another common use today is "No flames, please", i. e. don't bother to send criticisms, especially vehement ones. FLAME is rarely today used in its original sense to criticize someone else's utterances; instead, it is used to characterize the tone of an utterance, typically by its author, and as you suggested conveys a sense of warmth or vehemence. 
But a necessary ingredient now is that the FLAME *itself* be critical, indeed highly, irresponsibly, and even offensively critical. As Usenet developed, flames of this sort became very common and contributed so much "noise" to the discussions that they were confined to a group all their own, now known as talk.flame, I believe. I haven't read it for a while, but its existence is a great tribute to the net's dislike of censorship. From: Chuck Bush Subject: More on FLAMEs Date: Mon, 28 Sep 87 12:37:43 MDT X-Humanist: Vol. 1 Num. 323 (323) As the BITNET "Postmaster" for our installation, I see lots of notes from other Postmasters complaining about BITNET and MAILER problems. Lately some have been trying to express the degree (pun intended) of their flames. I noticed one today that expressed it as "moderately high flames--microwave level 80%." Interesting, the versatility of this language we speak. Chuck Bush BYU Humanities Research Center From: A_BODDINGTON@VAX.ACS.OPEN.AC.UK Subject: Flames ..... Date: 28-SEP-1987 20:44:54 GMT X-Humanist: Vol. 1 Num. 324 (324) Taken (more or less verbatim) from NOTABLE COMPUTER NETWORKS, *Comm. of the ACM*, 29, 10 (1986), pp. 932-971, at p. 967 -------------------------------------------------------------------------------- One of the most obvious effects of networks is their tendency to induce users to "flame", that is, to produce many words on an uninteresting topic or in an abusive or ridiculous manner; "raving" is almost a synonym for flaming. The usual explanation for why computer networks tend to aggravate flaming is that the flamer is isolated from the readers and has no negative feedback to inhibit such behaviour. There are typographic conventions that have developed on the various networks to get around the difficulties of expressing nuances in ASCII characters. One of the more universal is that UPPERCASE means SHOUTING (much to the chagrin of those with micros that only have uppercase).
Some *surround phrases with asterisks* to indicate emphasis, while others s p a c e the characters out. People will mark <sarcasm> or <irony>. Facial expressions often get spelled out <*grin*>. There are many ways to indicate the start of a flame, such as *FLAME ON!*. A shorter way to indicate the lack of serious intent is :-) -------------------------------------------------------------------------------- (hint : try looking at :-) sidewards) -------------------------------------------------------------------------------- Here at the Open University, we are expending much effort in developing computer conferencing and electronic mail. Besides the conventions mentioned above, people seem to emphasise /this way/ while the asterisks seem to be used for *formal italics* (titles etc.). Two more facial expressions are common: Sadness :-{ Surprise 8-0 We also seem to have a coherent use of Flame. <<<<<<<<<<<<<<<<<<<<<<<<<<<<<< FLAME ON >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Its primary use is to SHOUT opinions across in the general morass of conversation (or as one brilliant proponent from Finland recently argued, /mediocrity/). FLAMES are arguably one of the most stimulating environments in the E-conference theatre. They focus the MIND and cut the /drivel/. <<<<<<<<<<<<<<<<<<<<<<<<<<<<<< FLAME OFF >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> Some of the OU's staff build up to /flames/ for days (hence mine is weak; it's a two-minute demonstration, not a heart-rending experience). /Flames/ seem to have got out of control on some communications networks and have been relegated to vehicles for /fun/. Our users seem to prefer to use them to express deep emotion via the ASCII character set. Andy Boddington Open University Milton Keynes England From: CSHUNTER@UOGUELPH Subject: Change of User Name Date: 28 September 1987, 16:06:41 EDT X-Humanist: Vol. 1 Num.
325 (325) Hello; Effective immediately my USERNAME for Netnorth/Bitnet/Earn is changed from ENGHUNT@UOGUELPH to CSHUNTER@UOGUELPH Please make the necessary changes in your files and notes. Dr. C. Stuart Hunter, Department of English, University of Guelph, GUELPH, Ontario, Canada, N1G 2W1, 519-824-4120, ext. 3251. CSHUNTER@UOGUELPH From: MCCARTY@UTOREPAS Subject: Redistribution of HUMANIST etc. Date: 28 September 1987, 20:40:08 EDT X-Humanist: Vol. 1 Num. 326 (326) The following question was sent to me by Andy Boddington of the Open University (UK): ---------------------------------------------------------------------------- I still continue to enjoy the contributions flowing in over the Ether. Here we run a conferencing system called CoSy (it comes from your corner of the world). I would quite like to post extracts from parts of HUMANIST into some of the CoSy conferences, as they would be valid and useful contributions to the debate(s). Now what are the implications of this? Is HUMANIST discussion appropriate only within HUMANIST, or can extracts be displayed elsewhere? Naturally such contributions would be acknowledged. But would this offend your contributors? Would it indeed amount to piracy? ----------------------------------------------------------------------------- This is the third or fourth such request, and my answer to Andy is the same as to the rest: that he is welcome to redistribute conversations on HUMANIST as he sees fit, but that anyone who wants to participate must join directly and thus be known to us all. In places where the cost of international electronic transmission is very high (e.g., New Zealand), direct membership by a number of people is not feasible, so we allow indirect membership provided that each of the active members sends a biography and that the person in charge of redistribution takes the responsibility for wicked flaming.
I have two purposes for telling you all this: to get your reactions, if any, and to invite you to do the same if you chance to run a local bulletin board or redistribution list. Willard McCarty Centre for Computing in the Humanities University of Toronto (416) 978-4238 From: CMI011@IBM.SOUTHAMPTON.AC.UK Subject: Date: Tue, 29 Sep 87 19:08:59 BST X-Humanist: Vol. 1 Num. 327 (327) (and I apologize for a cretinous mailer that has no subject field) I am surprised that no-one pointed out the derivation of FLAME ON and FLAME OFF; the Hacker's Dictionary someone quoted is seriously misleading in claiming that FLAME ON means 'carry on flaming'. As any fule kno, the idea of a FLAME comes from Johnny Storm, part of the Fantastic Four (Marvel Comics), who could become a being of fire (and incidentally fly) at will. In the way that comic book heroes have, he appeared to find it necessary to say 'FLAME ON' to set the reaction going, and 'FLAME OFF' at the end when he sank to the ground to be a moody and spotty teenager again. Similarly, Judge Dredd says aloud 'Armour Piercing!' when he switches his Lawgiver gun to that setting to bring down a fleeing juve's car. A curious bit of popular Kultur, isn't it? It's typical of computer types (i.e. usenet gurus) to get their whole culture from junk comics, but then to forget its origin and make up a pompous explanation. One is reminded of the claims people made that they knew what KERMIT stood for, when it was just the name of a certain frog. I found Andy Boddington's note about visual stress fascinating; I think it is a genuinely interesting problem, forced on us by the ASCII-Devil. If, of course, we agreed on an SGML-conformant markup for our messages, we could each have interpreters to display HUMANIST messages in a readable way. I think it would be sad if HUMANIST started to adopt the symbol conventions, don't you? :-) The use of asterisks and slashes, however, is /quite/ sensible. *UNLESS* you /overdo/ it.
Brave New Worlds rule ok - let's NOT become like usenet.... sebastian rahtz. computer science, southampton, uk From: MCCARTY@UTOREPAS Subject: A concert of noise? Date: 29 September 1987, 16:43:55 EDT X-Humanist: Vol. 1 Num. 328 (328) Greg Waite, who kindly manages the redistribution for HUMANIST in New Zealand, has sent me the following: -------------------------------------------------------------------------- A local reader of HUMANIST has asked me to pass on his observations about the subject matter of HUMANIST messages. He says: I note that there was some discussion about what sort of thing HUMANIST should be used for. May I, through you, suggest less chatter, and more solid discussion of Humanities-based topics, less about the computer, which is only a tool, and more about what we are using it for - teaching, research, etc. I ADORE computers and computing, but not for their own sake, only because of the way in which they can help me do other things. The people to whom I am trying to sell the HUMANIST idea do not adore computers, and the sort of exchanges which seem to be most frequent in HUMANIST do nothing to persuade them that they want to join in. I agree with these sentiments, but I concede that one of the factors which makes e-mail worthwhile is the rapidity with which information can be exchanged. This rapidity can only be maintained if there is no "editor" (the human kind), who is required to read and select material, thereby creating a delaying bottleneck. However, I believe users must exercise a certain editorial control over their contributions. While "chatter" is perhaps a relatively minor matter for many users in North America, merely clogging one's disk and taking a few seconds to skim and delete, in this part of the world mail-users bear all charges directly (and, unfortunately, we pay to receive as well as send trans-Pacific mail).
It would be extremely helpful to us if chatter were kept to a minimum, and even highly important messages kept as concise as possible. This would surely benefit ALL users who store such mail and have limited disk space. Our point of view is a minority one, but perhaps you could put it out as a suggestion for consideration in any case. ------------------------------------------------------------------------ [W.M. again] Here is a problem. Since most of us never think of cost when we use HUMANIST, we can allow ourselves the liberty of sporting with each other to relieve the rarely relieved seriousness of things. We can play with language (which Milton would argue is how prelapsarian language was used). We can risk saying something, at length, that might or might not prove valuable to someone else. Our New Zealand friends have a very real point, however. I guess it's a matter of perspective. Are other HUMANISTs affected by costs of transmission, even indirectly? Do those who are not have the impression of a significant amount of worthless chatter? I agree with Greg that self-control is the only kind worth having, but do you think that some of us need a bit more of it? Willard McCarty Centre for Computing in the Humanities University of Toronto (416) 978-4238 From: GUEST4@YUSOL Subject: Audio-Visual Stress Date: Tue, 29 Sep 87 22:16 EDT X-Humanist: Vol. 1 Num. 329 (329) Even a Toronto humanist can apparently see the words but miss the message. Of all people, allow a mere musician to demur. Isn't it time that hard-shell text-happy Humanities types faced up to the real danger they are courting by even a casual involvement with this new medium? 
It is not only costly (particularly if one chooses to live halfway round the world), but also hopelessly corrupt and corrupting, fooling one, by its very speed and immediacy, into thinking that one is no longer straitjacketed by mere print, but free to actually chatter away and draw pictures in the sand and even sing, not just chisel columns of cold, unadorned prose into tablets of lead or stone. Old Gutenberg's galaxy is larger and messier now, I fear, than some who identify humanistic study too closely with the uniformly printed page might feel comfortable to allow. I would urge all such, after making a quick detour to peruse the Book of Kells or the handiwork of the Benediktbeuern scriptorium (now those were TEXTS!!), to rush out and buy their first Macintosh. Life in the Ivory Tower (and notions of what constitutes worthwhile software for the humanities) may never be the same again. Hypertextually yours, S. Beckwith, York University From: "Prof. Choueka Yaacov" Subject: Address and e-mail Date: Wed, 30 Sep 87 11:48:57 +0200 X-Humanist: Vol. 1 Num. 330 (330) Please note my new address and e-mail: Yaacov Choueka Department of Mathematics and Computer Science Bar-Ilan University, Ramat-Gan, Israel, 52100 choueka@bimacs.bitnet From: Subject: Date: X-Humanist: Vol. 1 Num. 331 (331) I happened to catch some of the conversation about 'chatter', 'off topic discussions', and costs of e-mail to subscribers in New Zealand. As Postmaster of a NetNorth site and ultimate controller of various 'goings on' within the LISTSERV groups at the University of Toronto, I feel that a little background to NetNorth, EARN, Bitnet, and LISTSERV may be of interest to the HUMANIST group. Basically the academic networks NetNorth, EARN, and Bitnet were formed to encourage a free exchange of academic and research information in a very fast and very cheap manner.
One must admit that the speed of these networks surely beats the heck out of snail mail (Canada Post) and even, with respect, the efficient British Post Office (well, it's more efficient than anything on this side of the pond). How the costs of these networks are distributed varies with the network. Certainly, in Canada, one is not charged for incoming and outgoing e-mail. My British, European, and American colleagues can correct me, but I'd say that in general, this situation prevails across the networks. Thus, we in North America, the UK, and Europe are fortunate to have a reasonably cheap medium for academic exchange. It is indeed an unfortunate situation that those of you in New Zealand are being charged a greater amount to take advantage of HUMANIST on a different network. Maybe in the future, communication costs will drop in New Zealand. All I can say is that there are different ways of charging for e-mail and we are the lucky ones. I don't mean that last statement to sound mercenary, but the manner in which the system is used will in some way be driven by dollars, pounds, lira, etc. Can the costs be controlled by controlling chatter? Well, LISTSERVs are set up to encourage discussion on a broad basis among many people. Willard is your editor and guiding light in this matter. As editor he has decided, after consultation with the initial subscribers, to let all participants submit material without editorial review. So, what this means is simply that your group is self-regulating. How broad your discussions become and how much innovation occurs is up to you. I believe the expression "free-wheeling" is appropriate here. If one or more people feel that submissions to the group are becoming a little fuzzy or irrelevant, all that has to be done is to send a gentle reminder. Willard has told me that in his opinion the group has found its intended purpose rather than strayed from it.
In that, you are to be congratulated, for other LISTSERV groups have been known to turn into real zoos. So, if our friends down under are charged by the line, I guess I can encourage you to be succinct in your submissions. On the other hand, if they are charged by the number of 'pieces of mail' received, then such a suggestion is unnecessary. Maybe our New Zealand friends can tell us how they are charged and suggest some ways of making it cheaper for them. Is each individual charged for each piece of mail arriving in New Zealand? If you send or receive mail to one another in New Zealand, are you charged for each piece, and is it cheaper than to send and receive mail from overseas? If there is a problem here, I don't think it is going to go away, because HUMANIST is growing all the time and therefore the number of potential submissions and hence traffic is also going to increase. HUMANIST started with a relatively small number of participants. At the moment it has well over 100 subscribers and is growing weekly. I hope that this letter has been at least informative and that after seeing your latest discussions on FLAMES, I don't get roasted from the four corners of the earth, especially from the boiling mud of Rotorua. :-) I think that the HUMANIST discussion group is the greatest thing since sliced bread, as it is exploring a new medium for your field. It certainly seems to have great potential. Enjoy it folks. If there is anything I can do to make it better or CHEAPER, please get in touch with me. Steve Younker, Postmaster - University of Toronto From: CMI011@IBM.SOUTHAMPTON.AC.UK Subject: Date: Fri, 2 Oct 87 14:17:54 BST X-Humanist: Vol. 1 Num. 332 (332) I have written a short report on the courses I have taught for the Faculty of Arts here over the last year, with some general observations on humanities computing in Southampton. As this report is about 16 pages formatted, I am not sending it direct to HUMANIST, but will forward it to interested parties.
Please tell me whether you want a preformatted ASCII text (rather gross) or LaTeX source (the decent version) for you to format yourself. If you have problems, I have sent copies of both to Willard and I hope he can redistribute as needed. The 'New Zealand' controversy: I am sorry to say this, but I think they are in a minority in having to spend real cash on Humanist, and that there is very little that can be done. To slightly correct Steve Younker, mail to Bitnet from JANET in Britain via the EARN gateway is currently funded by IBM, but in the near future the costs will have to be met from elsewhere, and it may revert to each sender. FLAME ON Since Britain needs to spend a great deal of its money on prosecuting Peter Wright, keeping up a vast nuclear arsenal, and destroying the British education system (the Post Office went ages ago, Steve), I expect that soon you will hear no more than vague squeaks from Thatcherite Britain on HUMANIST..... FLAME OFF The content of HUMANIST: I'm sorry, I thought H. was *founded* for chatter! Isn't the purpose of H. to exchange ephemeral opinions, advice and questions about how computers relate to and are used in the Humanities? Maybe I am wrong, actually - Willard please correct me. If we are merely using the medium to discuss general issues of the humanities then I fail to see how we progress. ASCII communication. So how many books have you read recently that make no use of typographical tricks to make their point? OK, I except novels, but if I pick up (at random) The Computation of Style by Anthony Kenny, and open to page 94, I see a) running head, page numbers b) italics for a book title c) smaller type for a quote d) mathematical setting without even trying. How do you convey that on a dull ASCII terminal? As to the person who suggested that buying a Mac might change one's life: I have had access to 2 Macs for 18 months, c. 5 seconds walk away.
The most likely change to my life is death from suicide, due to frustration and anger at the mickey mouse software, the keyboard, the concept, the lot.... If that's the 20th century's answer to Gutenberg and the Book of Kells, god help us all said tiny tim. I for one would rather produce beautiful Gutenberg books with TeX. And so, dear readers (how many dollars so far to whoever owns those phone lines to New Zealand?), I urge you all to start considering sending your HUMANIST contributions in a compressed (there are good, widely available, compression programs - let's use them) form with structural markup. Sound the death knell to *reading* ASCII - leave it to computers. sebastian rahtz. computer science, southampton, uk From: MCCARTY@UTOREPAS Subject: Costs Date: 2 October 1987, 12:02:25 EDT X-Humanist: Vol. 1 Num. 333 (333) A network "postmaster" in New Zealand has kindly supplied some information about the cost of e-mail to and within his country. Some of his remarks bear on charges elsewhere in the world, so I pass the brief whole on to you. W.M. From: Subject: Date: X-Humanist: Vol. 1 Num. 334 (334) A single copy of the Humanist mail comes into NZ at Waikato University over a Public Packet Switching network; thus we are charged on a volume basis, rather than letter basis. From there, a single copy is sent to each participating University, and redistributed internally. Thus the major cost is the importation of the material. The costs to individual recipients will go down if the number of recipients in NZ grows; otherwise costs can only go up (with a TELECOM monopoly) because a leased line connection is out of the question in the foreseeable future. Internal charges for mail are 1/10th the international charges. The problem of mail costs is not entirely academic for our European colleagues: I believe EARN is scheduled to move to Packet-Switching sometime, and then people will also be charged on a volume basis.
This may be absorbed by the institutions involved (as the leased-line costs are now) or they may be passed on to individual departments. Whatever, one becomes more conscious of costs when every word one sends adds to a University's bills, particularly if budgets become stretched. I know: we moved from a leased-line to a packet-switched connection to a sister institution - the real cost has dropped to a fraction of what it was, but I no longer shift megabytes around. The problem affects more than NZ; electronic mail is a wonderful way for remote and unwealthy universities to keep in touch and we should spread the gospel to (other) Third World countries, but they won't be connected by leased lines. Some of my colleagues will be addressing this problem at the EDUCOM conference. Regards. AJB (Postmaster - Auckland University) From: Steve Younker Subject: Date: Mon, 05 Oct 87 15:34:12 EDT X-Humanist: Vol. 1 Num. 335 (335) A VERY brief test. From: Network Mailer CMI011%UK.AC.SOUTHAMPTON.IBM@UK.AC.RL.IB Subject: mail delivery error Date: Mon, 05 Oct 87 12:06:15 BST X-Humanist: Vol. 1 Num. 336 (336) Batch SMTP transaction log follows: 220 UK.AC.RL.IB Columbia MAILER X1.24 BSMTP service ready. 050 HELO UKACRL 250 UK.AC.RL.IB Hello UKACRL 050 MAIL FROM: 250 ... sender OK. 050 RCPT TO: 250 ... recipient OK. 050 DATA 354 Start mail input. End with . 554 Mail aborted. Maximum hop count exceeded. 050 QUIT 221 UK.AC.RL.IB Columbia MAILER BSMTP service done. Original message follows: I have written a short report on the courses I have taught for the Faculty of Arts here over the last year, with some general observations on humanities computing in Southampton. As this report is about 16 pages formatted, I am not sending it direct to HUMANIST, but will forward it to interested parties. Please tell me whether you want a preformatted ASCII text (rather gross) or LaTeX source (the decent version) for you to format yourself.
If you have problems, I have sent copies of both to Willard and I hope he can redistribute as needed. The 'New Zealand' controversy; I am sorry to say this, but I think they are in a minority in having to spend real cash on Humanist, and that there is very little that can be done. To slightly correct Steve Younker, mail to Bitnet from JANET in Britain via the EARN gateway is currently funded by IBM, but in the near future the costs will have to be met from elsewhere, and it may revert back to each sender. FLAME ON Since Brian needs to spend a great deal of its money on prosecuting Peter Wright, keeping up a vast nuclear arsenal, and destroying the British education system (the Post Office went ages ago, Steve), I expect that soon you will hear no more than vague squeaks from Thatcherite Britain on HUMANIST..... FLAME OFF The content of HUMANIST: I'm sorry, I thought H. was *founded* for chatter! Isn't the purpose of H. to exchange ephemeral opinions, advice and questions about how computers relate to and are used in the Humanities? Maybe I am wrong, actually - Willard please correct me. If we are merely using the medium to discuss general issues of the humanities then I fail to see how we progress. ASCII communication. So how many books have you read recently that make no use of typographical tricks to make their point? OK, I except novels, but if I pick up (at random) The Computation of Style by Anthony Kenny, and open to page 94, I see a) running head, page numbers b) italics for a book title c) smaller type for a quote d) mathematical setting without even trying. How do you convey that on a dull ASCII terminal? As to the person who suggested that buying a Mac might change ones life: I have had access to 2 Macs for 18 months, c. 5 seconds walk away. The most likely change to my life is death from suicide, due to frustration and anger at the mickey mouse software, the keyboard, the concept, the lot.... 
If thats the 20th century's answer to Gutenburg and the Book of Kells, god help us all said tiny tim. I for one would rather produce beautiful Gutenberg books with TeX. And so, dear readers (how many dollars so far to whoever owns those phone lines to New Zealand?), I urge you all to start considering sending your HUMANIST contributions in a compressed (there are good, widely available, compression programs - lets use them) form with structural markup. Sound the death knell to *reading* ASCII - leave it to computers. sebastian rahtz. computer science, southampton, uk From: Network Mailer CMI011%UK.AC.SOUTHAMPTON.IBM@UK.AC.RL.IB Subject: mail delivery error Date: Mon, 05 Oct 87 12:07:52 BSTFri, 2 Oct 87 14:17:54 BST X-Humanist: Vol. 1 Num. 337 (337) Batch SMTP transaction log follows: 220 UK.AC.RL.IB Columbia MAILER X1.24 BSMTP service ready. 050 HELO UKACRL 250 UK.AC.RL.IB Hello UKACRL 050 MAIL FROM: 250 ... sender OK. 050 RCPT TO: 250 ... recipient OK. 050 DATA 354 Start mail input. End with . 554 Mail aborted. Maximum hop count exceeded. 050 QUIT 221 UK.AC.RL.IB Columbia MAILER BSMTP service done. Original message follows: I have written a short report on the courses I have taught for the Faculty of Arts here over the last year, with some general observations on humanities computing in Southampton. As this report is about 16 pages formatted, I am not sending it direct to HUMANIST, but will forward it to interested parties. Please tell me whether you want a preformatted ASCII text (rather gross) or LaTeX source (the decent version) for you to format yourself. If you have problems, I have sent copies of both to Willard and I hope he can redistribute as needed. The 'New Zealand' controversy; I am sorry to say this, but I think they are in a minority in having to spend real cash on Humanist, and that there is very little that can be done. 
To slightly correct Steve Younker, mail to Bitnet from JANET in Britain via the EARN gateway is currently funded by IBM, but in the near future the costs will have to be met from elsewhere, and it may revert back to each sender. FLAME ON Since Brian needs to spend a great deal of its money on prosecuting Peter Wright, keeping up a vast nuclear arsenal, and destroying the British education system (the Post Office went ages ago, Steve), I expect that soon you will hear no more than vague squeaks from Thatcherite Britain on HUMANIST..... FLAME OFF The content of HUMANIST: I'm sorry, I thought H. was *founded* for chatter! Isn't the purpose of H. to exchange ephemeral opinions, advice and questions about how computers relate to and are used in the Humanities? Maybe I am wrong, actually - Willard please correct me. If we are merely using the medium to discuss general issues of the humanities then I fail to see how we progress. ASCII communication. So how many books have you read recently that make no use of typographical tricks to make their point? OK, I except novels, but if I pick up (at random) The Computation of Style by Anthony Kenny, and open to page 94, I see a) running head, page numbers b) italics for a book title c) smaller type for a quote d) mathematical setting without even trying. How do you convey that on a dull ASCII terminal? As to the person who suggested that buying a Mac might change ones life: I have had access to 2 Macs for 18 months, c. 5 seconds walk away. The most likely change to my life is death from suicide, due to frustration and anger at the mickey mouse software, the keyboard, the concept, the lot.... If thats the 20th century's answer to Gutenburg and the Book of Kells, god help us all said tiny tim. I for one would rather produce beautiful Gutenberg books with TeX. 
And so, dear readers (how many dollars so far to whoever owns those phone lines to New Zealand?), I urge you all to start considering sending your HUMANIST contributions in a compressed (there are good, widely available, compression programs - lets use them) form with structural markup. Sound the death knell to *reading* ASCII - leave it to computers. sebastian rahtz. computer science, southampton, uk From: Network Mailer CMI011%UK.AC.SOUTHAMPTON.IBM@UK.AC.RL.IB Subject: mail delivery error Date: Mon, 05 Oct 87 12:08:08 BSTFri, 2 Oct 87 14:17:54 BST X-Humanist: Vol. 1 Num. 338 (338) Batch SMTP transaction log follows: 220 UK.AC.RL.IB Columbia MAILER X1.24 BSMTP service ready. 050 HELO UKACRL 250 UK.AC.RL.IB Hello UKACRL 050 MAIL FROM: 250 ... sender OK. 050 RCPT TO: 250 ... recipient OK. 050 DATA 354 Start mail input. End with . 554 Mail aborted. Maximum hop count exceeded. 050 QUIT 221 UK.AC.RL.IB Columbia MAILER BSMTP service done. Original message follows: I have written a short report on the courses I have taught for the Faculty of Arts here over the last year, with some general observations on humanities computing in Southampton. As this report is about 16 pages formatted, I am not sending it direct to HUMANIST, but will forward it to interested parties. Please tell me whether you want a preformatted ASCII text (rather gross) or LaTeX source (the decent version) for you to format yourself. If you have problems, I have sent copies of both to Willard and I hope he can redistribute as needed. The 'New Zealand' controversy; I am sorry to say this, but I think they are in a minority in having to spend real cash on Humanist, and that there is very little that can be done. To slightly correct Steve Younker, mail to Bitnet from JANET in Britain via the EARN gateway is currently funded by IBM, but in the near future the costs will have to be met from elsewhere, and it may revert back to each sender. 
FLAME ON Since Brian needs to spend a great deal of its money on prosecuting Peter Wright, keeping up a vast nuclear arsenal, and destroying the British education system (the Post Office went ages ago, Steve), I expect that soon you will hear no more than vague squeaks from Thatcherite Britain on HUMANIST..... FLAME OFF The content of HUMANIST: I'm sorry, I thought H. was *founded* for chatter! Isn't the purpose of H. to exchange ephemeral opinions, advice and questions about how computers relate to and are used in the Humanities? Maybe I am wrong, actually - Willard please correct me. If we are merely using the medium to discuss general issues of the humanities then I fail to see how we progress. ASCII communication. So how many books have you read recently that make no use of typographical tricks to make their point? OK, I except novels, but if I pick up (at random) The Computation of Style by Anthony Kenny, and open to page 94, I see a) running head, page numbers b) italics for a book title c) smaller type for a quote d) mathematical setting without even trying. How do you convey that on a dull ASCII terminal? As to the person who suggested that buying a Mac might change ones life: I have had access to 2 Macs for 18 months, c. 5 seconds walk away. The most likely change to my life is death from suicide, due to frustration and anger at the mickey mouse software, the keyboard, the concept, the lot.... If thats the 20th century's answer to Gutenburg and the Book of Kells, god help us all said tiny tim. I for one would rather produce beautiful Gutenberg books with TeX. And so, dear readers (how many dollars so far to whoever owns those phone lines to New Zealand?), I urge you all to start considering sending your HUMANIST contributions in a compressed (there are good, widely available, compression programs - lets use them) form with structural markup. Sound the death knell to *reading* ASCII - leave it to computers. sebastian rahtz. 
computer science, southampton, uk From: Network Mailer CMI011%UK.AC.SOUTHAMPTON.IBM@UK.AC.RL.IB Subject: mail delivery error Date: Mon, 05 Oct 87 12:10:37 BSTFri, 2 Oct 87 14:17:54 BST X-Humanist: Vol. 1 Num. 339 (339) Batch SMTP transaction log follows: 220 UK.AC.RL.IB Columbia MAILER X1.24 BSMTP service ready. 050 HELO UKACRL 250 UK.AC.RL.IB Hello UKACRL 050 MAIL FROM: 250 ... sender OK. 050 RCPT TO: 250 ... recipient OK. 050 DATA 354 Start mail input. End with . 554 Mail aborted. Maximum hop count exceeded. 050 QUIT 221 UK.AC.RL.IB Columbia MAILER BSMTP service done. Original message follows: I have written a short report on the courses I have taught for the Faculty of Arts here over the last year, with some general observations on humanities computing in Southampton. As this report is about 16 pages formatted, I am not sending it direct to HUMANIST, but will forward it to interested parties. Please tell me whether you want a preformatted ASCII text (rather gross) or LaTeX source (the decent version) for you to format yourself. If you have problems, I have sent copies of both to Willard and I hope he can redistribute as needed. The 'New Zealand' controversy; I am sorry to say this, but I think they are in a minority in having to spend real cash on Humanist, and that there is very little that can be done. To slightly correct Steve Younker, mail to Bitnet from JANET in Britain via the EARN gateway is currently funded by IBM, but in the near future the costs will have to be met from elsewhere, and it may revert back to each sender. FLAME ON Since Brian needs to spend a great deal of its money on prosecuting Peter Wright, keeping up a vast nuclear arsenal, and destroying the British education system (the Post Office went ages ago, Steve), I expect that soon you will hear no more than vague squeaks from Thatcherite Britain on HUMANIST..... FLAME OFF The content of HUMANIST: I'm sorry, I thought H. was *founded* for chatter! Isn't the purpose of H. 
to exchange ephemeral opinions, advice and questions about how computers relate to and are used in the Humanities? Maybe I am wrong, actually - Willard please correct me. If we are merely using the medium to discuss general issues of the humanities then I fail to see how we progress. ASCII communication. So how many books have you read recently that make no use of typographical tricks to make their point? OK, I except novels, but if I pick up (at random) The Computation of Style by Anthony Kenny, and open to page 94, I see a) running head, page numbers b) italics for a book title c) smaller type for a quote d) mathematical setting without even trying. How do you convey that on a dull ASCII terminal? As to the person who suggested that buying a Mac might change ones life: I have had access to 2 Macs for 18 months, c. 5 seconds walk away. The most likely change to my life is death from suicide, due to frustration and anger at the mickey mouse software, the keyboard, the concept, the lot.... If thats the 20th century's answer to Gutenburg and the Book of Kells, god help us all said tiny tim. I for one would rather produce beautiful Gutenberg books with TeX. And so, dear readers (how many dollars so far to whoever owns those phone lines to New Zealand?), I urge you all to start considering sending your HUMANIST contributions in a compressed (there are good, widely available, compression programs - lets use them) form with structural markup. Sound the death knell to *reading* ASCII - leave it to computers. sebastian rahtz. computer science, southampton, uk From: Network Mailer CMI011%UK.AC.SOUTHAMPTON.IBM@UK.AC.RL.IB Subject: mail delivery error Date: Mon, 05 Oct 87 12:10:40 BSTFri, 2 Oct 87 14:17:54 BST X-Humanist: Vol. 1 Num. 340 (340) Batch SMTP transaction log follows: 220 UK.AC.RL.IB Columbia MAILER X1.24 BSMTP service ready. 050 HELO UKACRL 250 UK.AC.RL.IB Hello UKACRL 050 MAIL FROM: 250 ... sender OK. 050 RCPT TO: 250 ... recipient OK. 
050 DATA 354 Start mail input. End with . 554 Mail aborted. Maximum hop count exceeded. 050 QUIT 221 UK.AC.RL.IB Columbia MAILER BSMTP service done. From: MCCARTY@UTOREPAS Subject: Current flood of British junk mail Date: 6 October 1987, 11:22:55 EDT X-Humanist: Vol. 1 Num. 341 (341) People on both sides of the Atlantic are trying to discover the cause of the problem that has resulted in the recent flood of junk mail from the U.K. We do not yet know if it was temporary and will not recur or is something more serious. If it continues we will first suspend HUMANIST for a day or two, then if the problem has not been found, we will restart HUMANIST without its U.K. members. They will be sent HUMANIST messages by a different means until the crisis is over. I very much regret that this latest spill is vexing you. Yours, W.M. _________________________________________________________________________ Dr.
Willard McCarty / Centre for Computing in the Humanities University of Toronto / 14th floor, Robarts Library / 130 St. George St. Toronto, Canada M5S 1A5 / (416) 978-4238 / mccarty@utorepas.bitnet From: CMI011@IBM.SOUTHAMPTON.AC.UK Subject: Date: Wed, 7 Oct 87 18:26:22 BST X-Humanist: Vol. 1 Num. 342 (342) Apologies to those who suffered at the hands of my last HUMANIST contribution; I am unclear whether the fault lay with my machine or somewhere else. A quick question: who uses Icon? Roger Hare was suggesting a UK User Group, and it would be nice to know what use is made of Icon across the globe in the HUMANIST community. If people care to mail me or Roger, we would be glad to make some sense of the responses. (For those who don't have it, Icon is Griswold's structured successor to Snobol, available at media cost for Vax, Unix, MSDOS etc.) sebastian rahtz. computer science, southampton, uk From: MCCARTY@UTOREPAS Subject: The latest flood of junk mail & watchfulness Date: 7 October 1987, 18:04:50 EDT X-Humanist: Vol. 1 Num. 343 (343) As far as we can tell, the latest outburst of junk e-mail was due to an isolated incident at the EARN/JANET gateway in the U.K. and should no longer trouble us. Response to this incident was slow because the local Postmaster and I thought that we alone were getting the junk. To improve our service we have each added a test account to HUMANIST that will allow us to see exactly what ListServ is sending everyone else. It is still possible that some node or gateway will cause trouble to you alone, however. So, if you're getting trashed, copy the trash and send it to your local expert and to us. I think it's important for us to be vigilant and vocal about the faults and virtues of the complex system that makes HUMANIST possible. My guess is that humanists are less tolerant of the quirks and get less pleasure from them than those who have gone before. We who value literacy can be a potent force for the improvement of e-mail.
Our friends in New Zealand (I hope they are still our friends after this recent spill!) have alerted us to the unpleasant possibility of having to put a price on each word. As subsidies, such as IBM's in Canada, are removed, provision of free e-mail will have to be argued for, and fewer people will want to argue for an error-ridden, flaky system than for a truly reliable one. Thanks for your continued support and patience. Yours, W.M. _________________________________________________________________________ Dr. Willard McCarty / Centre for Computing in the Humanities University of Toronto / 14th floor, Robarts Library / 130 St. George St. Toronto, Canada M5S 1A5 / (416) 978-4238 / mccarty@utorepas.bitnet From: MCCARTY@UTOREPAS Subject: Conference report touching on HUMANIST Date: 8 October 1987, 13:54:16 EDT X-Humanist: Vol. 1 Num. 344 (344) Dear Colleagues: This past weekend I had the opportunity to talk about HUMANIST and related matters with a small group at the annual conference of the Medieval Association of America, in Cleveland, Ohio. Four sessions on computing in the humanities were held, of which time allowed me to attend three. The next to the last session was given entirely by HUMANISTs: Chuck Henry (Columbia), May Katzen (Leicester), and John J. Hughes (Bits & Bytes Review). I won't attempt to summarize what they said, but from their talks, as from the talking of computing humanists everywhere, the need for the sharing of information became very clear. This need was addressed in the final session of the series. This session began with the questions of access to information and the forms it might take. Several participants (among whom I was one) stubbornly insisted that the quality and reliability of information and its means of organization are more important than mere quantity. 
The field is no longer so poor that we need gratefully snatch at whatever might be found; in fact, a researcher can easily be overwhelmed by the volume of raw information available in many areas of humanities computing. We are not yet able, however, to depend on accepted conventions of quality, aim, and focus. We gave some attention to standards for software. Some participants noted that these do not need to be spelled out; rather, they should be analogous to the implicit standards of traditional academic disciplines. Several people complained of the general lack of agreement about what constitutes good software, with the concomitant undependability of software reviews. These often do little more than illuminate the ignorance of the reviewer. Trust, one participant pointed out, is essential; otherwise the reinvention of wheels is a lamentable necessity. The MLA's project to provide peer-review of software, which Randy Jones announced recently here, was mentioned as one positive sign. The lack of academic recognition for work in our field (which contributes to the poor quality of reviews) naturally reared its blatant ugliness, but one participant reported that at his institution a group of senior professors had been able, after tireless efforts, to get such work to count towards hiring, tenure, and promotion. There was general agreement that such recognition depends not only on the unbending insistence of senior faculty but also on the solidity of the work and the dependability of its means of access. May Katzen pointed to the wider problem of access on various levels. Experts can forget that computing or potentially computing humanists require information according to their experience and interests. Introductory guidebooks, textbooks, and courses are thus as necessary as comprehensive bibliographies. Again, the worth of information is not necessarily proportional to its volume but depends on its structure and its reliability.
Existing and forthcoming channels of communication were discussed: May Katzen's HUMBUL electronic bulletin board and its parallel "Humanities Communication Newsletter" in the U.K.; John Hughes' "Bits and Bytes Review" and forthcoming book, "Bits, Bytes, & Biblical Studies: A Resource Guide for the Use of Computers in Biblical and Classical Studies" (due November 20, 1987, from Zondervan Publishing House); the "Humanities Computing Yearbook" that Ian Lancashire and I are involved with; our own HUMANIST; and several books, journals, newsletters, and other electronic services. May Katzen later noted that "we need different kinds of vehicles for conveying different kinds of information about computing in the humanities, depending on the information itself and the needs and interests of users and readers." I found myself noticing that what makes HUMANIST different from the others (and I think especially valuable) is that its conversational style encourages not so much the exchange of information as of ideas and substantive issues. As editor of HUMANIST I usually try to stimulate or provoke discussions rather than contribute to them, but here I cannot resist offering my view that the discussion of ideas and issues is what we do best of all. This is not to say that HUMANIST should not be used for distributing listings, texts, reviews, and so forth, just that our prime possibility for contribution to our emerging discipline seems to me more philosophical than informational. Unless, that is, "information" is interpreted etymologically, to mean "that which bestows form." Form is exactly what we need (architecture, not just more bricks), and this, I think, was the primary message of the sessions I attended in Cleveland. Many thanks are due to David Richardson, managing editor of the Spenser Encyclopedia and organizer of the sessions on computing, for his inexhaustible generosity, kindness, and enthusiasm for real computing in the real humanities. Yours, W.M.
[This message has 95 lines.] _________________________________________________________________________ Dr. Willard McCarty / Centre for Computing in the Humanities University of Toronto / 14th floor, Robarts Library / 130 St. George St. Toronto, Canada M5S 1A5 / (416) 978-4238 / mccarty@utorepas.bitnet From: MCCARTY@UTOREPAS Subject: Republication of HUMANIST Date: 8 October 1987, 14:11:41 EDT X-Humanist: Vol. 1 Num. 345 (345) From time to time contributions to HUMANIST may be republished electronically elsewhere, for example, on May Katzen's HUMBUL bulletin board. If you do not want something of yours republished in this way, please attach a brief statement to that effect to your contribution. W.M. From: CMI011@IBM.SOUTHAMPTON.AC.UK Subject: Date: Fri, 9 Oct 87 14:43:11 BST X-Humanist: Vol. 1 Num. 346 (346) A small plea: I am supervising a 3rd-year Computer Science student's project, and we have decided he is to write a 'browsing aid' for German, a program to take a reader through a text in a language he/she more or less knows and give help on vocabulary and grammar when requested. So a) I know other people have done/are doing similar projects. Any suggestions as to what to avoid or what features to aim for? b) we really need a machine-readable German-English dictionary; has anyone got such a beast, however skimpy, that we could have or buy? Sorry if this seems really naive - any thoughts welcome. sebastian rahtz. computer science, southampton, uk From: KRAFT@PENNDRLN Subject: Status of Humanities Computing Date: Friday, 9 October 1987 0931-EST X-Humanist: Vol. 1 Num. 347 (347) I appreciate Willard's good report on the Cleveland discussions, and will use its appearance as an opportunity to (again) try to air some strategies for addressing some of the issues raised.
My biggest disappointment with HUMANIST (balanced by many positive aspects) is the failure of all but a few HUMANISTs to become involved in substantive discussions of how to help make computing an accepted part of the arsenal of tools for humanist scholarship, teaching and research. Specifically, I have not received a single comment from any HUMANIST member (pro or con) on the suggestion that we take cooperative steps to produce an appropriate standard information column to offer to professional societies for their newsletters/journals, similar to what I already do for Religious Studies News. It seems to me that until and unless we raise the consciousness of our relatively uninitiated colleagues to the values, availability, etc. of computer-related developments, we will make little headway on many of the issues touched on by Willard's report. The HUMANIST membership presumably represents various professional societies and connections. What do you think about this idea? Would you be willing to be involved, at least as an advocate to your own societies? Or am I wrong that there is a need for such "consciousness raising"? I did get a limited amount of response to an earlier question about cooperation (consortium model) among the various "centers." The context of my query was the rejection by NEH of a proposal that, among other things, argued for the creation of a position of "coordinator" for pursuing such a consortium arrangement. Among the comments from reviewers of the proposal was the question whether the existing "centers" really wanted such cooperation. I had hoped to learn from the HUMANIST participants whether they thought that a move to cooperative efforts (in coordinating information, producing generally useful software and encoding data, quality control, etc.) was a useful idea or not. There is no point in spending hours to write grant proposals if those who would supposedly profit from the project being proposed are not interested in it!
Conferences are fine to discuss what needs to be done, but at some point, shouldn't we face up to the practical questions of who will do what and how to fund it? Your ideas are *eagerly* sought, and should be of interest to most other HUMANISTs, I would think. Bob Kraft From: Undetermined origin c/o Postmaster Subject: Browsing aids Date: 9 October 1987 10:54:10 CDT X-Humanist: Vol. 1 Num. 348 (348) Just a quick note on some existing browsing aids I know about, in answer to Sebastian Rahtz's inquiry. Jim Noblett at Cornell was working, last I heard, on a system for foreign-language (specifically French, but I think the system was to be extensible) composition work. The basic concept was that of an editor or word processor with the tools to make second-language composition easier, specifically online dictionary lookups in either direction, and the ability to examine inflectional paradigms on demand (to find out just what the second-person plural future perfect subjunctive form IS). There may have been standard compositional aids, too (outlining, screen-blanking for brainstorming sessions, and so on), but I don't believe there were when I saw the program in 1986. A system for browsing rather than composition would require a different, but overlapping, set of features. Perhaps someone with more recent knowledge can report on the program. Commercially, there is a system called Mercury, for creating and using online lexica. This is a memory-resident program for IBM PCs and compatibles; the intended users, I believe, are primarily working translators (who would construct specialized lexica for technical fields, to aid their work in technical translation) and language teachers and learners (who would use the online dictionary for browsing texts in the target language, or for composition). I have not used Mercury myself, but what I have heard on the grapevine has been positive.
One big advantage: it's memory resident and so can be consulted from whatever editor the user fancies -- the user is not forced into a specific kind of editor to use the lexicon. Since you asked for desiderata, I'll suggest five: (1) The user should be able to browse through the dictionary headwords (e.g. on a screen with one headword per line, and the beginning of the dictionary article on the rest of the line). (2) The user should not have to type the word to be looked up, if it's already on the screen: positioning the cursor over the word in question should be enough. (3) Ideally, the user should be able to find inflected forms as well as dictionary-headword forms -- at least for irregular inflections. (4) (For this the student should get serious extra credit!) I always want to look up synonyms and near-synonyms of words I am learning -- so it would be nice to be able to get a display of words with similar meanings. A hidden lookup based on Roget's thesaurus numbers might be one approach to this task, but maybe your student can come up with something better. (5) In reading the text, the user should have the same freedom of movement found in any normal text editor: forward and backward by screen or line. This seems obvious to me, but there are serious programs for humanists which give you a forward-only browse function, and eventually they make me want to put my fist through the screen. Finally, I want to argue that a truly serious program of this kind (not necessarily a student project, or a program one writes for oneself and one's friends, but certainly a program written for wide serious distribution, whether commercial or not) would do very well to handle texts in formats other than plain vanilla ASCII. One can always export a Wordstar or Word Perfect file to ASCII, and ditto for most other programs -- but it's a boring, burdensome chore and it would be a real boon to have text-analysis tools be able to handle one's word processor files without further ado.
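Desideratum (2) above -- positioning the cursor over a word instead of retyping it -- can be sketched in a few lines of modern Python. The three-entry lexicon and the function names below are invented purely for illustration; they are not part of Mercury or of any program mentioned here:

```python
import re

# Toy German-English lexicon, invented for illustration only.
LEXICON = {"haus": "house", "hund": "dog", "buch": "book"}

def word_at(line, col):
    """Return the word whose span contains column `col` of `line`, or ""."""
    for match in re.finditer(r"\w+", line):
        if match.start() <= col < match.end():
            return match.group(0)
    return ""

def lookup(line, col):
    """Look up the word under the cursor, falling back to a notice."""
    word = word_at(line, col)
    return LEXICON.get(word.lower(), word + ": not in lexicon")

print(lookup("Das Buch liegt im Haus", 4))   # cursor on "Buch" -> "book"
```

Desideratum (3), finding inflected forms, could be grafted onto the same sketch by consulting a table of irregular forms before the headword lookup; desideratum (4), likewise, by a further table of sense groups.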
Ideally, the analysis program would recognize bolding and underscoring and centering, and display accordingly -- failing that, I'd settle for a program that just stripped out the control codes and displayed legible text. Would it be so impossible to handle files in the five or six most common word processors? From my experiences deciphering Word Perfect files, I'd say not impossible at all. Do other people agree, or would this be asking too much of our software-making friends? (And which editors are the most commonly used and should be supported?) (This message has 81 lines, including the address header.) From: "Dr. Joel Goldfield" Subject: Date: Fri, 9 Oct 87 10:52:19 EDT X-Humanist: Vol. 1 Num. 349 (349) Dear Colleagues, I thank Willard for his encouraging advocacy of our concerns and heartily agree with his observation about the need for senior colleagues to press for inclusion of our activities in job descriptions. Observing my few senior colleagues has shown me that when they begin to work with word processors and gleefully join me in pointing out the major pedagogical failings of much computer-assisted language instruction, they become much more involved in computing in the humanities, and always with an optimistic bent. This favorable disposition sometimes requires a bit of time, up to a year, but it inevitably occurs here, at least, if the faculty member starts to do something productive with computers. --Joel D. Goldfield Plymouth State College (NH, USA) From: IDE@VASSAR Subject: willard's message Date: Mon, 12-OCT-1987 11:43 EST X-Humanist: Vol. 1 Num. 350 (350) In response to Willard McCarty's note outlining discussions about the need for information exchange in the field of humanities computing, I would like to point out two current efforts by ACH to accomplish this: first, ACH has just received a grant from NEH to develop guidelines for the encoding of texts intended for research in literature and linguistics.
Such guidelines will provide consistency in machine-readable texts and enable (as well as hopefully encourage) the development of software for manipulation and analysis of such texts that does not require specialized forms of input. Second, ACH is at present applying to NEH to augment a data base of information on computers and the humanities courses and establish an on-line bibliography for computers and the humanities. Nancy M. Ide ide@vassar From: Michael Sperberg-McQueen Subject: Consolidated Computer Column for humanists Date: 12 October 1987 11:24:21 CDT X-Humanist: Vol. 1 Num. 351 (351) Reflections on Bob Kraft's idea of a consolidated column on computer uses in research, to be distributed to / for / by a variety of scholarly organizations. At first sight, I confess this idea did not fill me with enthusiasm. A general column might easily be reduced to common denominator material (look at what happened to PMLA when they decided to print only articles of 'general interest') and end up containing no information of real interest to anyone, at least not regularly. It would be work, at least for the columnist. My zeal for proselytizing has fallen off sharply of late. And does a column of this sort really belong in a journal? Reconsidering it, however, the idea looks not bad at all. If I envision it as appearing not necessarily in the journal, but rather in the newsletter, of my various professional organizations, the idea of a regular column seems less incongruous.
And if I envision it as discussing character set standards, the Association of American Publishers electronic manuscript markup tags, text encoding issues, text analysis software, special-purpose systems like the Ibycus, and giving the occasional overview of word-processing issues and problems of displaying and printing special characters on a level appropriate for reasonably competent non-beginners -- in short, if I envision its contents as similar to those of Bob Kraft's column (the ones I've seen), it becomes positively attractive. I for one can well do without more material aimed at novices, but a column for people who have outgrown novice material would fill a need. As the list above suggests, I wouldn't urge much concentration on commercial software, though it needn't be banned entirely. Instead, discussions of (a) widespread problems and the various approaches to their solution, (b) standards and standardization efforts, and (c) examples of concrete work (e.g. a report on how the Lexicon Iconographicum Mythologiae Classicae has organized its database -- without becoming too involved in details of their database management system: staying on the level of data analysis and organization) that other people can learn from. (When it left my hands, this note contained 65 lines. I am not responsible if some mailer adds error messages at the top.) From: MCCARTY@UTOREPAS Subject: More biographies Date: 12 October 1987, 14:59:34 EDT X-Humanist: Vol. 1 Num. 352 (352) ------------------------------------------------------------------------- Autobiographies of HUMANISTs Third Supplement Following are 19 more entries to the collection of autobiographical statements by members of the HUMANIST discussion group. Further additions, corrections, and updates are welcome, to MCCARTY at UTOREPAS.BITNET. W.M. 11 October 1987 From: Subject: Date: X-Humanist: Vol. 1 Num. 353 (353) *Bratley, Paul Departement d'informatique et de r.o., Universite de Montreal, C.P.
6128, Succursale A, MONTREAL, Canada H3C 3J7, (514) 343-7478 I have been involved in computing in the humanities since the early 1960s, when I worked at Edinburgh University on automated mapping of Middle English dialects. Since then I have been involved in projects for syntax recognition by computer and a number of lexicographical applications. With Serge Lusignan I ran for seven years at the University of Montreal a laboratory which helped users with all aspects of computing in the humanities. Since I am a professor of computer science, it is perhaps not surprising that my interests lie at the technical end of the spectrum. I designed, with a variety of graduate students, such programs as Jeudemo (for producing concordances), Compo (for computer typesetting), and Fatras (for fast on-line retrieval of words and phrases), all of which were or are still used internationally in a variety of universities. My main current research interest involves the design of a program for on-line searching of manuscript catalogues. The idea is to be able to retrieve incipits despite unstable spelling and other such variants in medieval texts. The project, involving partners in Belgium, Morocco, and Tunisia, is intended to work at least for Latin, Greek and Arabic manuscripts, and possibly for others as well. From: Subject: Date: X-Humanist: Vol. 1 Num. 354 (354) *Carpenter, David I am an assistant professor of theology at St. Joseph's University in Philadelphia with training primarily in the history of religions. I work on Indian traditions (Hinduism and Buddhism) and also do some work on Western medieval material. I have recently been engaged in putting a Sanskrit text into machine-readable form and would like to see what else has been done. From: Subject: Date: X-Humanist: Vol. 1 Num. 355 (355) *Dixon, Gordon Bitnet Editor-in-Chief, Literary and Linguistic Computing, Institute of Advanced Studies, Manchester Polytechnic, Oxford Road, Manchester, M15 6BH U.K.
In particular, my interest lies in the publication of good quality papers in the areas of: Computers applied to literature and language. Computing techniques. Reports on research projects. Hardware and software. CAL and CALL. Word Processing for Humanities. Teaching of computer techniques to language and literature students. Survey papers and reviews. From: Subject: Date: X-Humanist: Vol. 1 Num. 356 (356) *Gilliland, Marshall Department of English, University of Saskatchewan, Saskatoon, Saskatchewan, Canada S7N 0W0 (306) 966-5501 campus, (306) 652-5970 home I'm a professor of English whose literary specialty is American literature, and I also teach expository prose, first-year classes, and utopian literature in English. Thus far, I'm the lone member of my department to use a mainframe computer and to teach writing using a computer. Most immediately, I'm the faculty member responsible for getting a large computer lab for humanities and social science students in the college, and one of the few faculty promoting the use of computers. I maintain the list ENGLISH on CANADA01. From: Subject: Date: X-Humanist: Vol. 1 Num. 357 (357) *Hamesse, Jacqueline Universite Catholique de Louvain, Chemin d'Aristote, 1, B-1348 LOUVAIN-LA-NEUVE (Belgium) I am a member of the ALLC Committee and co-ordinator of the organisation of that association's annual conferences. I am also a professor at the Universite Catholique de Louvain and President of the Institut d'Etudes Medievales. For twenty years I have worked on the computer-assisted processing of medieval philosophical texts. At present I am studying above all the possibilities the computer offers for the collation and classification of medieval manuscripts. With Paul Bratley of the Universite de Montreal, I have just launched an international project to build a database of the incipits of medieval manuscripts (Latin, Greek, Hebrew and Arabic). From: Subject: Date: X-Humanist: Vol. 1 Num.
358 (358) *Hubbard, Jamie I teach in the area of Asian Religions at Smith College, focusing on East Asian Buddhism. I am also active in attempting (??!!) to archive Chinese materials on CD-ROM and other sundry projects (IndraNet, bulletin board/conferencing for Buddhist Studies, has been around for app. 2 yrs). From: Subject: Date: X-Humanist: Vol. 1 Num. 359 (359) *Hughes, John J. (for other electronic addresses, see bottom of front page of last issue of the "Bits & Bytes Review") 623 Iowa Ave., Whitefish, MT 59937, (406) 862-7280 From: Subject: Date: X-Humanist: Vol. 1 Num. 360 (360) Dept of History, University of York, Heslington, YORK YO1 5DD, U.K. My interests are in early medieval history, specifically Frankish history, with a special interest in Merovingian cemeteries. From: Subject: Date: X-Humanist: Vol. 1 Num. 361 (361) *Jones, Randall L. Humanities Research Center, 3060 JKHB, Brigham Young University, Provo, Utah 84602, (Tel.) (801) 378-3513 I am a Professor of German and the Director of the Humanities Research Center at Brigham Young University. I have been involved with using the computer in language research and instruction since my graduate student days at Princeton, 1964-68. My activities have included the development of language CAI, diagnostic testing with the computer, interactive video (I worked on the German VELVET program), computer-assisted analysis of modern German and English, and the development and use of electronic language corpora. I have worked closely with the developers of WordCruncher (aka BYU Concordance) to make certain that the needs of humanists are properly met (e.g. foreign character sets, substring searches, etc.). In 1985 I organized (with the good assistance of my colleagues in the HRC) the 7th International Conference on Computers and the Humanities, which was held at BYU.
I am a member of the Executive Council of the Association for Computers and the Humanities, the Chairman of the Educational Software Evaluation Committee of the Modern Language Association, a member of the Committee on Information and Communication Technology of the Linguistic Society of America, and a member of the Editorial Board of "SYSTEM". I have written articles and given lectures on many aspects of the computer and language research and instruction. From: Subject: Date: X-Humanist: Vol. 1 Num. 362 (362) *Lane, Simon Computing Service, University of Southampton, Highfield, Southampton, England. I am currently employed as a Programmer in the Computing Service at Southampton University, England, and have special responsibility for liaison with the Humanities departments within the University, and support of their computing needs. From: Subject: Date: X-Humanist: Vol. 1 Num. 363 (363) *Lessard, Greg I am a linguist (Ph.D. 1983, Laval, in differential linguistics, for a study of formal mechanisms of antonymy in English and French). I have been teaching in the Department of French Studies at Queen's since 1978 and have been involved in humanities computing for several years now, in a variety of areas: 1) computer-aided analysis of literary texts. In 1986 Agnes Whitfield and I gave a paper at the annual meeting of the "Association canadienne-francaise pour l'avancement des sciences" where we used a computer analysis to compare two novels by Michel Tremblay and Victor-Levy Beaulieu, respectively. Agnes is also in French Studies. 2) production of computer-readable texts. For the past year or so, I have participated in a group project in the Department of French Studies at Queen's which involves the entry into the mainframe of computer-readable texts by means of a Kurzweil data entry machine. 3) concordance production. J.-J. Hamm (of Queen's) and I are working on a concordance of the novel "Armance" by Stendhal. 4) linguistic analysis.
I make heavy use of the computer in my work analysing errors in student texts produced in French. 5) annotation. Diego Bastianutti (of Queen's) and I are working in the area of annotation as a teaching tool in the humanities. We gave a paper at this year's Learned Societies where we outlined our research and presented a prototype of an annotation facility based on the word processing program "PC-Write". 6) computer-aided instruction. With a group of colleagues in the languages and in computer science at Queen's, I am working on an intelligent computer-aided instruction system for French, other Romance languages, and eventually a variety of other languages as well. We are in the second year of this multi-year project, funded in part by the Ministry of Colleges and Universities of Ontario. From: Subject: Date: X-Humanist: Vol. 1 Num. 364 (364) *Logan, George M. Professor and Head, Department of English, Queen's University, Kingston, Ontario, Canada K7L 3N6; 613-545-2154 My area of literary specialization is the English Renaissance. For my research interests in computer applications to literary studies, see the biography of my colleague David Barnard. For 1986-87, I have been chairman of the Steering Group for Humanities Computing of five Ontario universities: McMaster, Queen's, Toronto, Waterloo, and Western Ontario. I am also a member of the steering group of the Ontario Consortium for Computing and the Humanities. From: Subject: Date: X-Humanist: Vol. 1 Num. 365 (365) *Ravin, Yael I have an M.A. in Teaching English as a Second Language from Columbia University and a Ph.D in Linguistics from the City University of New York. My Ph.D thesis is about the semantics of event verbs. I am a member of the Natural Language Processing Group at the Watson Research Center of IBM. My work consists of writing rules in a computer language called PLNLP for the detection of stylistic weaknesses in written documents. I am now beginning research in semantics. 
This research consists of developing PLNLP rules to investigate the semantic content of word definitions in an online dictionary, in order to resolve syntactic ambiguity. From: Subject: Date: X-Humanist: Vol. 1 Num. 366 (366) *Reimer, Stephen I am an assistant professor of English, using computers extensively both in research and in teaching. My introduction to computer use in the humanities came in the late 70s when I was beginning my dissertation and was faced with an authorship question in a set of medieval texts--I thought that the problem might be resolvable through quantitative stylistics with the help of the computer. Through John Hurd at the Univ. of Toronto, I learned the rudiments of programming in SNOBOL and learned much about concordancing algorithms; on this basis, I wrote a rather large and sloppy program to "read" any natural language text and to generate a substantial number of statistics. Producing the dissertation itself involved me with micro-computers and laser printers. And when I began teaching after graduation, I was involved in an experiment using Writers' Workbench as an aid in teaching composition. I have, this fall, moved from the U of T to the University of Alberta. Here I have been asked to act as something of a consultant for other English professors who are starting to make use of computers, and I have been assigned to a team with a mandate to establish a small computing centre to be shared by four humanities departments (English, Religious Studies, Philosophy and Classics). Finally, I am embarking on a long term project which is again concerned with authorship disputes: over the coming years I expect to consume huge numbers of cycles in an effort to sort out the tangled mess of the canon of John Lydgate. From: Subject: Date: X-Humanist: Vol. 1 Num. 367 (367) *Salotti, Paul Oxford University Computing Service, 13, Banbury Road, OXFORD OX2 6NN U.K. Tel. 
0865-273249 I work in the Oxford University Computing Service and provide support and consultancy for the application and use of databases (Ingres, IDMS, dBase, etc.) in academic research. From: Subject: Date: X-Humanist: Vol. 1 Num. 368 (368) *Smith, Tony I have recently started work as research assistant to Gordon Neal in the Department of Greek at Manchester University. Our project has a number of aims. Ultimately we hope to program a computer to perform as far as possible the automatic syntactic parsing of Classical Greek. Texts with syntactic tagging (which in the early stages can be performed manually) can then be used for pedagogic purposes, by allowing a student on a computer to ask for help with the morphology and syntax of selected words and sentences. The tagged texts would also be very useful for research purposes, allowing various kinds of statistical analysis to be carried out. The texts will be drawn from the Thesaurus Linguae Graecae database on CD-ROM, which will be accessed by a network of IBM-compatibles. The system will also offer facilities for searching through the Greek texts similar to those found on the Ibycus Scholarly Computer. From: Subject: Date: X-Humanist: Vol. 1 Num. 369 (369) *Tov, Emmanuel Prof. in the Dept of Bible, Hebrew University, Jerusalem, Israel, Tel. (02)883514 (o), 815714 (h). Together with R.A. Kraft of the U. of Penn. I am the director of the CATSS Project - computer assisted tools for Septuagint studies (for a description of the work, see CATSS volumes 1 and 2). From: Subject: Date: X-Humanist: Vol. 1 Num. 370 (370) *Wolffe, John Temporary Lecturer in History, University of York, England. At the moment my use of computers in my own research is confined largely to humble word-processing, but I have plans during the next academic year to develop some computer-based analysis of the 1851 England and Wales Census of Religious Worship.
I am also very interested in wider questions about the use of computers in the humanities, especially as these relate to the development of a coherent defense of the humanities in general and of history in particular in the face of the current political and social climate in the UK. From: Subject: Date: X-Humanist: Vol. 1 Num. 371 (371) *Wyman, John C. Library Systems Office, Bird Library, Room B106F, Syracuse Univ. Syracuse, New York 13244-1260 USA, (315) 423-4300/2573 I am the Systems Officer for the Syracuse University Library, called Bird Library, and am in charge of all of our computer and system support for the library. This includes our on-line catalog (SULIRS); access to OCLC for shared bibliographic cataloging information; and our increasing use of microcomputers for staff support. Also I'm involved in our on-line access to remote data bases, such as Dialog or BRS, for our users and staff. Finally we have a growing effort of acquiring and providing access to collections of research data for people in the social sciences, called the Research Data System of the Libraries. My interests revolve around providing access to, and usage of computers for, non-computer type people. Even, and especially, at the expense of extra programming and systems effort. Too many computer systems today are hard for the casual user to use. My background is Electrical Engineering, Numerical Analysis, Computer User Service, Library User Service, with many systems designed and programmed by me or my staff. The human interface is the most important aspect of this work. From: KRAFT@PENNDRLN Subject: Computer Column for Humanist Newsletters Date: Tuesday, 13 October 1987 0941-EST X-Humanist: Vol. 1 Num. 372 (372) I very much appreciate Michael Sperberg-McQueen's thoughtful response to the "syndicated column" suggestion.
He asks for some further information, and notes that we are nearing the season for professional society meetings such as MLA (also AAR/SBL/ASOR, APA, and doubtless others). His response also points to the need to ask a supplementary question: how many professional society publications known to HUMANISTS already attempt to deal with computer-related matters in a systematic manner? (And how many don't?) How great is the need? Michael's listing of representative issues for such a column is largely similar to what I do in my own OFFLINE column, although I also try to give the "novice" leads on where to get good information for becoming a "competent non-beginner" -- that is, I really try not to scare "novices" away, but to lead them further into the subject by defining new computer jargon (e.g. most recently "hypertext" -- "authoring systems" in a future issue), alerting them to new software releases of special interest (e.g. micro-OCP), informing them of data availability (e.g. CCAT diskettes and CD-ROM, TLG, Oxford Archive), etc. I do not find the writing of the column overly onerous since I note in my own computer file new information and ideas as they come to my attention, and simply organize those materials when the next deadline rolls around. To broaden the coverage in a "syndicated" form would doubtless involve more work, but if there were a responsible board of editorial contributors, it MIGHT not be overly demanding. I am willing to be involved, but am not begging to be editor, if we decide to try the experiment. I am convinced of the value of this approach, based on 15 issues of OFFLINE and the responses that column has produced, and will continue the OFFLINE column if nothing supersedes it. Bob From: MCCARTY@UTOREPAS Subject: Line count Date: 13 October 1987, 19:35:13 EDT X-Humanist: Vol. 1 Num.
373 (373) John Law of Newcastle (U.K.), who receives HUMANIST through a redistribution list, suggests that our desultory practice of putting a line count at the end of messages is a good idea -- but upside down. He'd like to see such a count at the beginning, as he explains in the following. If it's at all possible, would you mind indicating in your subject line the approximate number of lines in the message that follows? Thanks very much. W.M. -------------------------------------------------------------------------- The reason I'd like the count at the beginning is a simple one (which we've used here for several years): if the message is going to be a long one I want to interrupt it right away and then read it later at my leisure (laughable term, in this business). The HUMANIST messages come in with all my other messages, from user queries to meeting announcements etc. I archive all the HUMANIST material for later study, but I like (as well) to read them as they appear on the screen: if they are not longer than, say, 50 lines. Yours, John Law (Documentation Officer, Computing Services, Secretary to Arts Advisory Group, and (once) an Arts Graduate.) From: MCCARTY@UTOREPAS Subject: This time it's Swedish junk mail.... Date: 13 October 1987, 23:31:03 EDT X-Humanist: Vol. 1 Num. 374 (374) The recent (even, alas, ongoing) flood of junk mail was due to a Swedish HUMANIST's userid suddenly becoming illegal -- for whatever reason. I have removed the innocently offending person from the list, but it appears that the junk keeps coming. It will stop shortly. Hang on, my stalwart colleagues! Yours, W.M. From: "Dana E. Cartwright 3rd" Subject: Date: Wed, 14 Oct 1987 08:19:55 LCL X-Humanist: Vol. 1 Num. 375 (375) I dislike doing things by hand which a computer can do as well, more easily, and often more accurately. The counting of lines in a message seems to fall into this category of activity. 
On all IBM computers running the VM/CMS operating system, incoming mail is summarized by sender, time, date, and number of lines. One reads it in whatever order strikes one's fancy. The issue of the order in which I read my mail is a matter which I should sort out on my end. I, for example, file my electronic mail into an extensive series of notebooks, by subject. I could ask all of you to include one or more subject keywords on each of your messages, to make my filing easier or more accurate. But, I think the assigning of keywords is something which *I* should do. I put line counts in the same category. From: CHURCHDM@VUCTRVAX Subject: Optical Scanning of Texts -- Hard- and software Date: Thu, 15 Oct 87 00:03 CDT X-Humanist: Vol. 1 Num. 376 (376) We at Vanderbilt are about to submit a grant proposal for a major project involving scanning to put in machine-readable form a large body of texts almost all of which are in French. I have talked with VAR dealers about recent products from both Kurzweil and Palantir, and in both cases the dealers have told me that the machines (and the accompanying software) cannot handle the accented letters in French. From the entries I have seen in the lists of machine-readable text in HUMANIST, I find it hard to believe that somebody out there hasn't already solved this problem. Would you please send messages to me personally telling me what hardware and software solutions you have found to the problem of scanning texts with accented letters. I'll summarize the answers in a message to HUMANIST. Dan Church Vanderbilt University (CHURCHDM@VUCTRVAX) (This message contains 14 lines, including this one.) From: Mark Olsen Subject: TEXTS? Date: Thu, 15 Oct 87 13:11:10 MST X-Humanist: Vol. 1 Num. 377 (377) Before scanning in the following titles, I would like to know if they have been put on tape already: John Woolman (1720-1772) The Journal of .... Essays of ... 
Joyce, Finnegans Wake I am a little surprised that the latter does not exist in computer readable format. Woolman is an obscure Quaker writer, or I should say obscure to me. Any reference to an electronic copy would be greatly appreciated. From: CMI011@IBM.SOUTHAMPTON.AC.UK Subject: Date: Mon, 19 Oct 87 19:29:33 BST X-Humanist: Vol. 1 Num. 378 (378) A colleague (as yet un-HUMANISTized) asks me about the DEREDEC system from Quebec. I know that a few places in Canada have it, and use it, but I have not properly taken on board their feelings about it (it is, for those who have not come across it, a large French grammar analysis system). I wonder if anyone would care to comment, either to me direct or (perhaps better) to Sean O'Cathasaigh himself (despite appearances, he is in the Department of French here!) = FRI001@uk.ac.soton.ibm, about - the state and/or usefulness of DEREDEC and its subsystems for serious research - the likely learning curve for the non-Lisp hacker - the usefulness of the package for undergraduate teaching. A general question arises, which concerns me for a number of reasons, about whether in general these large sophisticated systems can ever have an impact on undergraduate teaching as we know it. Is there a place in a British 3-year degree for a *serious* look at grammar? I suppose it goes back to that issue about whether "IT" should/could affect the whole undergraduate career - if we use the computer's power to seriously look at grammar a) we need a better background in school, and b) we will have to drop something - what? It may interest HUMANISTs to know that this University offers a degree in Modern Languages with Computing. Students do a conventional language course, but largely leaving out the literature, do extra linguistics and philosophy, and do half a Computer Science degree. Are they better or worse off? sebastian rahtz. computer science, southampton, uk From: GUEST4@YUSOL Subject: Are Musicians Humanists Too?
(testing the waters) Date: Mon, 19 Oct 87 23:54 EDT X-Humanist: Vol. 1 Num. 379 (379) From the Fall 1987 issue of IEEE Expert, p. 86: From: Subject: Date: X-Humanist: Vol. 1 Num. 380 (380) Musical scoring on PCs ---------------------- After 16 years of work, according to the Stanford University News Service, Leland Smith has adapted his computer music printing system (called Score) for use with any IBM or compatible PC. Passport Designs Inc. (Half Moon Bay, CA) will distribute the program. A Stanford faculty member since 1958, Smith has also worked with computer-generated sound and has long-range visions of computer use in the music industry. "I want to be able to do almost anything you can think of with Score," he says. Users can add lyrics and graphics to their music easily with the system. "Music is a code system made up of symbols with conventional meanings," Smith says. His system contains a library of 200 musical symbols plus graphics (Mrs. Smith, an art instructor at Foothill College, has included an image library for illustrating children's music). Score can stretch symbols and notes, rotate them, move them to different lines, and transpose them with ease. Composers can zoom in for a closer focus on any section of a musical score. While Smith does not expect computer sound to replace musical instruments, he hopes to see computers involved in the electronic distribution of music, allowing quick transfer of musical scores worldwide. At present, he has to wait six months to receive music ordered from Vienna through an American distributor. In 10 years or so, he foresees every music library equipped with terminals at which students can view music on a screen, deposit coins, and receive printed copies. The cost of computer music printing has declined since the days of its use on $50,000 computers. Desktop music publishing is now the province of users equipped with PCs, printers, and Score. The program will sell for $495.
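Smith's remark that "music is a code system made up of symbols with conventional meanings" is the key to features like the transposition mentioned above: once notes are symbols, transposing a score is simple arithmetic on pitch classes. The sketch below is a generic illustration in a modern language (Python); it is not Score's actual data format, which the article does not describe.

```python
# A generic illustration of symbolic music, NOT Score's internal format:
# notes as (name, octave) pairs, transposition as modular arithmetic.

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def transpose(melody, semitones):
    """Shift each (note, octave) pair up or down by a number of semitones."""
    result = []
    for name, octave in melody:
        index = NOTE_NAMES.index(name) + semitones
        # index % 12 gives the new pitch class; index // 12 carries the octave
        result.append((NOTE_NAMES[index % 12], octave + index // 12))
    return result

# A C-major fragment moved up a whole tone (2 semitones):
print(transpose([("C", 4), ("E", 4), ("G", 4), ("B", 3)], 2))
# → [('D', 4), ('F#', 4), ('A', 4), ('C#', 4)]
```

Note that the B below middle C correctly crosses the octave boundary to C-sharp, and negative offsets work the same way, because Python's floor division carries octaves downward as well.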
From: MCCARTY@UTOREPAS Subject: Interesting software Date: 20 October 1987, 20:09:50 EDT X-Humanist: Vol. 1 Num. 381 (381) So far V E R Y F E W suggestions have reached me about interesting software as a result of the announcement on HUMANIST of the Humanities Computing Yearbook (Oxford U.P., vol. 1 forthcoming Summer 1988). By means of local sources, other grapevines, and esp. my knowledgeable co-editor, I've managed to identify nearly 100 items, a sufficient number, but I wouldn't want to overlook any worthy packages. So, if you know of software you consider worthwhile, please send a note to YEARBOOK@UTOREPAS.BITNET giving as much of the relevant information as you have. Include an electronic address if the author or vendor has one. However obvious the excellence of the software may be to you, don't assume that I've already listed it. Our field is still too disorganized for anyone to be able to claim comprehensive and systematic knowledge of its goings on, even if he or she works at it without sleep. I sleep. Thanks very much, in advance, for your help. Yours, W.M. _________________________________________________________________________ Dr. Willard McCarty / Centre for Computing in the Humanities University of Toronto / 14th floor, Robarts Library / 130 St. George St. Toronto, Canada M5S 1A5 / (416) 978-4238 / mccarty@utorepas.bitnet From: KRAFT@PENNDRLN Subject: Source Code for LIST or BROWSE Date: Thursday, 22 October 1987 0929-EST X-Humanist: Vol. 1 Num. 382 (382) I know of two very useful public domain programs for accessing text files on the IBM PC, LIST (which I use regularly) and BROWSE (which others have recommended), and would like to have the source code so that some features could be added. Does anyone know where the source code for LIST and/or BROWSE is available? Bob Kraft, CCAT From: MCCARTY@UTOREPAS Subject: The first summary and its forthcoming sequel Date: 22 October 1987, 20:49:12 EDT X-Humanist: Vol. 1 Num. 
383 (383) Those of you who are members of the ACH have likely seen the first summary of activities on HUMANIST (for the period June-July 1987) in print in the Newsletter. For those of you who aren't, the text of the article is exactly the same as the summary file sent to all of you. Some time ago I noticed that my file of contributions to HUMANIST was growing so rapidly that another bimonthly summary was inevitable. That summary is close to completion, so I'd appreciate any contributions to it, from those of you who have held interesting private discussions or received direct replies to questions and think these replies worth publishing. I may finish the summary by the end of this weekend, but perhaps not. If you have anything and anticipate sending it, please let me know directly right away. Yours, W.M. _________________________________________________________________________ Dr. Willard McCarty / Centre for Computing in the Humanities University of Toronto / 14th floor, Robarts Library / 130 St. George St. Toronto, Canada M5S 1A5 / (416) 978-4238 / mccarty@utorepas.bitnet From: Philippa MW Matheson 416 925-9931 Subject: Copyright: translations on the net Date: 23 October 1987, 01:05:02 EDT X-Humanist: Vol. 1 Num. 384 (384) I've been following some of the debate on the copyright status of contributions to HUMANIST, and gather that an informal practice obtains: if you *say* you don't want something re-published, then it doesn't go further than the original publication (i.e., its appearance in HUMANIST in the first place). Also a good deal has been said about whether contributions to computer journals count as publications from the point of view of academic credit and/or whether hard copy publishers are willing to allow "their" books to appear on computer net. But the invitations to contribute to text archives that I've seen do not seem to say "Of course if you send us something you must be sure that you/we have the right to publish it."
If I were to type out the published poems of a contemporary poet, still living, and contribute them to such an archive, and if the archive were then to send them on request to, let's say, a school teacher who printed them out in multiple copies for her/his class, surely someone has infringed copyright somewhere? Was it all right until it got into hardcopy? My reason for asking about this is that I have for some years been doing informal translations of archaeological articles (mostly to do with amphoras) from Russian books and journals for circulation among a very limited body of "amphoristes." Since they are usually things I, or members of the Amphora Project I work with in Greece, want to read, I don't attempt to exact a fee. But it seems to me the sort of thing which should be available to any interested scholar, and a Russian colleague is presently helping me a) to make the translations I've done more accurate, b) to do more, and c) to compile a bibliography of Soviet studies on "ceramic epigraphy" and amphora studies in general (which, of course, gets us into the odd excavation report, for dates and stratigraphy, and even into clay analysis studies of other objects, like roof-tiles; and we aren't above obliging with a curse tablet article, when requested). When enough material is available, I would like to offer the bibliography (which would list available "private" translations, on line or off) to other Humanists, and send on-line translations to anyone who asks. If I wanted to publish a volume of translations, I or the publishing firm would, I think, get permission from the original publishers: is electronic publication different? Or is it simply the lack of formal legislation covering the computer journals which makes me feel, as I confess I do, that I can simply go ahead (provided anyone wants the material, of course...).
Perhaps there are parallels for this: if so, I'd be glad to hear about them, and any comments you may have on the legality of such a service (and/or its utility...). --------------------- Philippa MW Matheson, 43 McKenzie Avenue, Toronto, Ont M4W 1K1; (416) 925-9931 (work), 921-1774 (home); amphoras at utorepas From: Undetermined origin c/o Postmaster Subject: Copyright Date: 23 October 1987 09:14:49 CDT X-Humanist: Vol. 1 Num. 385 (385) [This note has 62 lines beginning with this one.] First things first: first, all praise to Philippa MW Matheson and all generous souls everywhere who share the fruits of their keyboarding. So far, I have never found a way to get a formal legal opinion from (say) a university counsel, on the issue of the copyright status of a text keyboarded for personal use and distributed to friends or colleagues. If anyone out there has, please speak up. Surely one could argue that a text, even one in copyright, keyboarded for research purposes into a single computer and accessible only to the originator, would fall under the scope of 'fair use'. If I can make a single photocopy for personal use, why not a machine copy? Distribution to friends and colleagues might also be fair use -- I'd hate to have to prove in court that it wasn't. (Or that it was.) Distribution to all comers, it seems, would be much easier to class as 'publication' of the sort copyright law forbids. Of course, the presses can be much less generous in their interpretation. As well they might, when a machine-readable version can drive the typesetting of a pirated edition so easily. But perhaps they sometimes go a little too far. The editor of a critical edition of T______ wrote to the press, a few years ago, mentioning that a colleague at another school was interested in preparing a concordance of the new critical edition. Indeed, this colleague had already typed Vol.
1 in and concorded it, which demonstrated the seriousness and the technical feasibility of the request. Could copies of the typesetting tapes of the later volumes be made available to simplify preparation of the text -- and would the press be interested in publishing the concordance? Answer: a flat no, and an assertion that the concordance-maker was in violation of the press's copyright, even though the concordance made was for personal use and neither it nor the text had been distributed to anyone. Of course, in the case of Soviet materials there may be special considerations. There was a time when the USSR subscribed to the 'wrong' copyright convention and Soviet materials did not have much if any legal protection in the west (or vice versa). A few years ago there were stories about that being changed, the better to suppress Western republication of smuggled mss. But there may still be some special legal technicalities regarding Soviet materials. Some people tell me presses are more enlightened now -- and indeed many presses have given their consent to the scanning of copyright Dante commentaries for the Dartmouth Dante Project. Perhaps the rule is that everyone likes to be asked. It's not as though most scholarly presses expected to make a lot of money selling the rights to journal translations. So my advice in dealing with copyright materials would be: ask. (If they say no, what have you lost? You can always say 'Article X is unavailable for copyright reasons.') We should hammer and hammer at our publishers and ourselves and colleagues, though, until we achieve wide public acceptance of unhindered distribution of electronic texts for research purposes. No pirating, no intellectual theft, full documentation of the source of both the original and the electronic form -- get that accepted as the basic code of practice, and I think we'll be making progress. Michael Sperberg-McQueen, Univ. 
of Illinois at Chicago From: Norman Zacour 923 9483 Subject: Date: 24 October 1987, 22:25:53 EDT X-Humanist: Vol. 1 Num. 386 (386) The following has 32 lines. Subject: Copyright. I doubt very much that making a copy of someone else's work to be distributed to friends and colleagues can be defined as `fair use' either legally or morally. To duplicate a copyrighted work instead of buying it is to deny the author's rights; thinking otherwise is a measure of the anesthetizing effect of the Xerox machine on our moral sense. No one objects to having their scholarly works, say, used by others in their research - that's what they were produced for. That doesn't mean that they can be copied and handed about indiscriminately by someone else as though by right. Compensation is usually monetary, but not always; an acknowledgement of permission, a return of favour, enhancement of prestige, or merely simple courtesy, all play a part in the outcome of what Sperberg-McQueen has characterized as `being asked' - but surely, the choice is the author's, not the copier's. I gather from Philippa Matheson that archives are not mere depositories, but in some instances act as publishers - at least, if making someone's work public is still called publishing. This is muddy ground: if you are going to deposit your work in a public archive, failing specific instructions about its disposition, there is a good argument that you are releasing it to the public ipso facto. The one thing we can be sure of is that this and other such problems won't be decided by humanist discussions, but rather in the courts, on the basis of legal precedents, which means the application of legal principles developed before electronic publishing was dreamed of. As for the rest, I do not think that Philippa Matheson has much of a problem.
A lawyer specializing in copyright would probably know about international conventions touching on Russian publications, but five roubles will get you fifty that translations of Russian scholarly publications will require the formal permission of the publisher. On the other hand, the bibliography of such translations is all her own, to publish as she may wish. In doing so she might be revealing a slightly illegal operation, but if she wants to cast her bread upon the waters, who shall say her nay? May she have a rich return in mighty fishes. From: IMD7VAW@UCLAMVS (via Postman) Subject: ACH Newsletter / Undelivered mail Date: Mon, 26 Oct 87 13:29 PST X-Humanist: Vol. 1 Num. 387 (387) Your mail was not delivered to some or all of its intended recipients for the following reason(s): 5001 mailbox invalid -> HUMANIST ---------------------------------------------------------------------- ACH members, Due to an unfortunate error at the printers, some of the recent ACH Newsletters are missing pages. Let me know if you would like a new copy. Sorry for the inconvenience, but it did show that some people do read it, which I'm glad to hear. Vicky Walsh = IMD7VAW@UCLAMVS From: MCCARTY@UTOREPAS / Hugh Kenner Subject: Re: Copyright Date: 27 October 1987, 11:18:35 EST (original: Sat, 24 Oct 87 9:53:46 EDT) X-Humanist: Vol. 1 Num. 388 (388) The following somehow arrived in our postmaster's mailbox instead of HUMANIST's. -------------------------------------------------------------------------- On the other hand, there is the advice I once received from the legal eagle at a friendly publisher's: If you are confident that what you are doing is Fair Use, then do NOT ask. Asking concedes the other party's right to refuse. The likelihood of your being dragged to court over, e.g., the keyboarding incident to making a concordance is vanishingly small. The likelihood of anyone's winning such a case against you is even smaller. --Hugh Kenner.
From: CHURCHDM@VUCTRVAX Subject: SCANNING FOREIGN LANGUAGE TEXTS - Hard- and software Date: Tue, 27 Oct 87 11:17 CDT X-Humanist: Vol. 1 Num. 389 (389) Here is a summary of the responses I received in answer to my question about hard- and software for scanning text in foreign languages. Many thanks to all who replied. The new Kurzweil "desktop" scanner and the Palantir both use "smart" software to figure out problematic characters in a logical fashion (for English). This procedure makes these machines read faster and more accurately, but it also makes it impossible for them to handle foreign languages, especially those with accented letters or non-Roman alphabets. Kurzweil has promised new software in the near future that will enable the desktop scanner to handle French. Other foreign languages are down the road a bit, but this solution is not ideal because it means switching software each time one switches languages and will not accommodate mixtures of languages. The old Kurzweil scanners (e.g., Model 3) and the relatively new Model 4000 do not use the same type of software for recognition; instead they use a "training mode" that allows the user to tell the machine what the problematic characters are. It does that by scanning some text and then prompting the operator to enter characters (or combinations of up to three characters) for the unrecognized ones. While that approach may make the processing somewhat slower, it does allow for "training" the machine to recognize accented and even non-Roman characters. From: Michael Sperberg-McQueen Subject: ACH work toward Text Encoding Guidelines Date: 27 October 1987 13:51:09 CDT X-Humanist: Vol. 1 Num. 390 (390) [This message has 106 lines counting from this one.]
As Nancy Ide recently mentioned in passing in this discussion, the National Endowment for the Humanities recently funded the first phase of the ACH's project to develop guidelines for encoding texts in machine-readable form for research purposes. Since this project promises to have deep repercussions for those of us interested in computer-assisted textual studies -- and since the project is going to need a lot of work from interested humanists -- participants in these discussions may be interested in a little more information about it. (Let this also be a partial answer to those who have already contacted me privately for more information.) The Goal The point of the effort is to provide a single, rational set of guidelines for encoding machine-readable texts. Such guidelines should ideally prove useful to scholars encoding new texts, to those (including text archives) who exchange texts with others, and to software developers. If they can be made concrete enough, such guidelines could provide a standard, pre-defined interface between text-analysis software and the data they handle. If they can be made compatible with publishing-industry practice, such guidelines could make it easier to use the same text for publication (e.g. as a critical edition) and machine-based research with concordance and stylistic-analysis programs. The Phases of the Project The first phase of the project (funded by NEH) calls for a gathering of representatives from various text archives and learned societies to discuss the scope, structure, and basic plan (the 'architecture', if you will) of the guidelines. This meeting will take place next month at Vassar College in Poughkeepsie, NY. Its result is to be a document formally describing the recommended architecture for the guidelines, and an understanding among the societies represented on how to organize the next steps of the process. 
In the second phase, a working party set up by the organizations collaborating in the work will first publish the architecture as defined at the Poughkeepsie conference, request comments on it, and revise it in the light of the comments. The working party will then draft the guidelines. When they are complete, the guidelines will themselves be circulated for public comment (this is phase 3) and revised. (Inter alia, I expect them to be posted on this list, or announced here and distributed electronically to those interested in commenting.) In phase 4, the collaborating organizations will formally validate and approve the guidelines by a mechanism yet to be chosen. Then (phase 5) the guidelines will be published. Questions for those on this list Although there will be ample time later for participants in these discussions to comment on the basic architecture and on the draft guidelines, it would nonetheless be useful to hear from those on the list concerning the scope and structure that they would like to see in a set of guidelines for text encoding practice. What kinds of texts, what languages, and what kinds of research should guidelines of this kind attempt to serve? It seems intuitively clear that if we attempt to settle practice for all languages and all scripts at the outset, we may never get the guidelines finished. But which scripts MUST be included at the outset, and which can be usefully postponed for later revisions or extensions of the guidelines? Can we limit ourselves for the moment to alphabetic scripts? To left-to-right alphabetic scripts? To Latin-based scripts? Which kinds of text-analysis ought to be planned into the guidelines? Concordance-making? Linguistic analysis? Archaeological analysis? That is, should the needs of classicists working with inscriptions be addressed? How about numismatists? Analytic bibliographers? Codicologists? Textual critics? 
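To make concrete what "encoding" means in guidelines of this kind, here is a minimal sketch in modern Python (an editorial illustration, not part of the original message: the tag name "page" and the angle-bracket syntax are assumptions, not anything the guidelines had settled):

```python
import re

# A hypothetical encoded text: tags are set off by '<' and '>' delimiters.
# The tag name "page" is illustrative only, not from any actual guideline.
encoded = ("It was the best of times,<page 1> it was the worst of times,"
           "<page [2]> it was the age of wisdom.")

# Collect the tags in order of appearance.
tags = [m.group(1) for m in re.finditer(r"<([^>]+)>", encoded)]

# Strip the tags to recover the plain reading text.
plain = re.sub(r"<[^>]+>", "", encoded)

print(tags)   # ['page 1', 'page [2]']
print(plain)
```

Even this toy example shows the distinction at issue: the regular expression fixes only the HOW (a delimiter syntax for tags), while the decision to record page numbers at all is the WHAT.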
In your own work as humanists or as computer consultants helping other humanists, you may already have spent -- or will someday spend -- some time working on texts encoded by someone else. When you do, what information about the text will you (did you) want that other person to have encoded with the text? Are page numbers important? Are chapter divisions? Line breaks in the original text? Source of the text (edition used for keyboarding)? Type style and size? When you encode texts yourself, and encounter a document entitled "Guidelines for Encoding Textual Material for Literary, Historical, or Linguistic Research" (or words to that effect) -- what will you want that document to explain? Should it tell you whether to encode the original page numbers or not? Should it tell you HOW to encode the page numbers (or the type size and leading and ...) if you want to do so, but refrain from telling you what to encode and what not to encode? (E.g. "If encoding page breaks, use the form '<page n>' where 'n' is an Arabic or Roman numeral. Use the form that appears on the page. If no number appears on the page, surround the numeral in square brackets: '<page [n]>'.") Or should guidelines of this sort confine themselves to defining a syntax for tags, telling you that, whatever you tag and however you tag it, you should precede each tag by a '<' or other delimiter, and follow it with a '>' or other delimiter, etc.? In sum, how much detail should be used to specify the WHAT, and how much detail for the HOW, of text tagging? While there will be, as I say, ample time to comment on these issues later, still it would be useful to know, going into the meeting next month, what feelings are on these questions among those on this list. I will be grateful for any comments (including ones to the effect that you don't have an opinion one way or the other). Thanks. Michael Sperberg-McQueen, University of Illinois at Chicago From: Paul Bratley Subject: Student for sale Date: Tue, 27 Oct 87 18:36-0500 X-Humanist: Vol.
1 Num. 391 (391) This message contains about 20 lines, give or take a few. I have a first-rate student for sale if anyone is interested. He expects to finish a master's in computer science in February 1988 or thereabouts. (I expect him to finish in April.) He would like to work abroad, that is, outside Canada, with a preference for Europe but the USA would do. He has helped me with a project involving lexical statistics on a corpus of Zola's novels, and his master's consists of an expert system (or some approximation thereto) for deciding questions about unemployment insurance, which I direct jointly with a member of our law faculty. As you will realise, his interests are in the applications of computers in a rather vague area including the humanities, socially-useful systems (he introduced me to the works of Trotsky some years ago), and so on. His mother tongue is French, but his English is competent. If anyone is looking for a good research assistant with a solid background in computer science, I can thoroughly recommend this young man. Any offers? Paul Bratley Bitnet: 3935@umtrlvr Universite de Montreal CDNnet: bratley@iro.udem.cdn (514) 343-7478 From: MCCARTY@UTOREPAS (forwarding amsler@flash.bellcore.com, Robert Amsler) Subject: Copyright and "Fair Use" Date: 28 October 1987, 11:31:24 EST X-Humanist: Vol. 1 Num. 392 (392) The following has been rescued by the postmaster from oblivion at the wrong node. -------------------------------------------------------------------------- A good description of the policy of not asking for permission when a use is clearly ``fair use'' is given in the Chicago Manual of Style. The ``right of fair use'' is somewhat seen as the same as a ``public right to passage'' and one which should not be weakened by asking for permission from those who presumably do not have any right to prevent ``fair use''.
However, ``fair use'' of text doesn't yet have a very clear electronic counterpart. Copyright is a law designed to protect the financial interests of the originators of information. Those interests can be damaged, for instance, by not-for-profit exchanges of text which nevertheless deny the originator of the information a potential sale. So... the question is typically whether what one is doing could reasonably be accomplished by the recipients of the information buying it from the originator. Clearly quoting a paragraph or similar small portion of text is not likely to reduce the salability of the original text--but photocopying a whole article without permission and then distributing it for free to everyone could potentially reduce the revenue of the originators of the article. Note that educational and research use of text is specifically permitted in copyright law, but redistribution for non-educational purposes that could reduce purchases of the text would not be ``fair use''. From: MCCARTY@UTOREPAS Subject: ALLC -- ICCH Conference (ca. 114 lines) Date: 2 November 1987, 08:42:49 EST X-Humanist: Vol. 1 Num. 393 (393) CALL FOR PAPERS Association for Literary and Linguistic Computing Association for Computers and the Humanities 16th International ALLC Conference -- 9th ICCH Conference June 6-10, 1989 University of Toronto, Toronto Ontario, Canada The 16th International ALLC Conference and 9th International Conference on Computing and the Humanities will be held conjointly at the University of Toronto from June 6th to 10th, 1989. Papers on all aspects of computing in linguistics, ancient and modern languages and literatures, history, philosophy, art, archaeology and music are invited for presentation at the conference. 
Topics include, but are not limited to, the following: authorship studies, bibliography, computer-aided instruction, computer-assisted language learning, computerized dictionaries, concordances, content analysis, database, grammar development systems, historical simulation, humanities computing centres, lexicography, lexicology, literary statistics, machine translation, prosodic studies, quantitative linguistics, natural language processing, scholarly publishing, speech analysis, stylistics, teaching humanities computing, textbase, text enrichment, text generation, and writing instruction. The organizers are particularly interested in papers presenting the results of computer-aided work in the humanities. A 4-day software fair will be part of the conference and will include demonstrations using micro-computers or network connections back to mainframes. A published Software Fair Guide will be given to all registrants. Anyone wishing to present a paper or participate in the Software Fair should send three copies of an abstract of the paper or of a description of the software (approximately 1,000 words in either case) to Professor Ian Lancashire. This abstract should be received by October 15, 1988. The joint ALLC/ACH programme committee will then choose suitable submissions. Speakers or demonstrators of software will be informed by February 1, 1989, of the acceptance of their submissions. Final texts of papers and descriptions of software for the Software Fair Guide will be due in Toronto by May 1, 1989. Selected papers presented at the conference will be published in the ALLC Conference Series. During the conference, time will be set aside for attendees to organize poster sessions, panel discussions and parallel groups. Anyone wishing to propose a meeting on a particular theme is requested to contact Professor Ian Lancashire. Working languages for the conference will be English and French.
ACCOMMODATION
The conference will be held on the downtown campus of the University of Toronto, which is in the centre of the city and within easy walking distance of many hotels, restaurants and shops. Accommodation will be reserved at a nearby international hotel and in inexpensive student residences at Victoria College, about five minutes' walk from the conference site. The city of Toronto is served by a large number of domestic and international airlines.
DEADLINES
October 15, 1988: Proposals for papers, panel discussions, and software demonstrations (1,000 words).
February 1, 1989: Acceptance of proposals.
April 1, 1989: Early bird registration; final texts due for Software Fair Guide.
May 1, 1989: Deadline for submission of papers for published proceedings.
June 6, 1989: Start of ALLC-ICCH Conference.
June 10, 1989: Acceptance of papers for the published proceedings.
For information, registration and submissions, contact: Professor Ian Lancashire, ALLC-ICCH Conference, Centre for Computing in the Humanities, University of Toronto, Toronto, Ontario, CANADA M5S 1A5, (416) 978-4238. E-mail address: IAN@UTOREPAS.BITNET
From: MCCARTY@UTOREPAS Subject: An editorial letter on reminders and improvements (ca. 30 lines) Date: 2 November 1987, 09:06:04 EST X-Humanist: Vol. 1 Num. 394 (394) Dear Colleagues: Those of you who occasionally use HUMANIST to make calls for papers, as Ian Lancashire just did for the 1989 ALLC/ICCH conference, might consider issuing periodic reminders. The conversational rhythm of an electronic discussion group tends not to favour long-term memory, and new members may miss these important announcements, which are wisely made well in advance of their events. I'll leave it to the organizer to decide what is a judicious reminder and what a bothersome bit of pestering. You may be interested to know that HUMANIST now has close to 140 members in 10 countries. So far I have noted only 3 or 4 dropouts.
Pressure of time does not allow me to do a great deal of work with HUMANIST (the possibility of making periodic summaries, for example, daily recedes further into the realm of unfulfilled desire), but I would very much like to hear your suggestions about how HUMANIST could be improved. It has far outrun its editor's original ambitions and so become a much more useful and significant thing. Let's have the user-driven improvements continue. We are constrained by the combined limits of manpower and software, so some suggestions may be impractical or just impossible to realize, but your making them will show us what to aim for. Thank you all for your continuing participation. Yours, W.M. _________________________________________________________________________ Dr. Willard McCarty / Centre for Computing in the Humanities University of Toronto / 14th floor, Robarts Library / 130 St. George St. Toronto, Canada M5S 1A5 / (416) 978-4238 / mccarty@utorepas.bitnet From: Ramana Subject: Has our local redistribution list been put on? Date: 7 Nov 87 02:02 PST X-Humanist: Vol. 1 Num. 395 (395) Can you check to make sure that XeroxHumanists~.x has been put on your distribution list? I haven't seen any traffic on this list, but saw a reference to it in another DL, so I believe that somehow we got dropped or never added. -- Ramana From: CMI011@IBM.SOUTHAMPTON.AC.UK Subject: Date: Tue, 10 Nov 87 11:55:41 GMT X-Humanist: Vol. 1 Num. 396 (396) Most HUMANISTs in the UK will know about the 'Computers in Teaching Initiative' and I expect those over the water have come across references to it. Both groups would probably be interested in the final report on one of the first CTI projects, Project Pallas at Exeter, which aimed to bring computing to the whole Faculty of Arts in Exeter by provision of facilities in the building, staff, help with regular courses, and specific computing courses.
The summary of three years work or more on Pallas makes an interesting read for anyone involved in a similar exercise (I should admit a certain bias as I used to work for Pallas). The report isn't exactly published, but I am sure Exeter will send copies to those who are interested. Try mailing BUCKETT@EK.AC.EXETER (Dr John Buckett, the project co-ordinator) and ask him how to get a copy. Maybe he could send a machine-readable version to Toronto for redistribution. I do point out that I mention this as a 'review' not an advert - don't ask me for a copy! Under the same CTI umbrella, my own institution, with York University, has just started a project to do with excavation simulation in archaeology. We are aiming to set up systems to help train students in the strategy and management decisions involved in running an archaeological 'intervention'. As an off-shoot of this, an ex-Southampton student (Susanna Hawkins) wrote an MSc report at the LSE on the background to the CTI and the place of CAL in archaeology with reference to the aims of SYASS (our scheme). If any HUMANISTs would like to receive copies of this, and other documents relating to SYASS, they should contact me, with a note of what sort of documents they can use (the choice is LaTeX, troff/nroff 'mm' or formatted ASCII). The Hawkins report is about 30 pages formatted. In general, archaeological HUMANISTs are invited to send me their mail addresses so I can pass on any material that passes through my hands; this latter appeal is on behalf of the yearly "Computer Applications in Archaeology" conference, whose mailing list (paper and electronic) I am setting up and maintaining. If you are already on the paper list but have an e-mail, please send it to me. sebastian rahtz. computer science, southampton, uk From: Steve Younker Subject: More Junk Mail Date: Tue, 10 Nov 87 11:40:01 EST X-Humanist: Vol. 1 Num. 397 (397) By now, most of you are probably aware of a flood of junk mail coming from HUMANIST.
I believe I have stopped it for now. The cause is still under investigation. Since Willard has been out of town, I was not aware that all HUMANIST subscribers were receiving the junk files that I was getting. Only when a couple of you brought it to my attention this morning did I realize that there was a larger problem. My thanks to those of you who sent me the information. When I have an explanation for what has happened, either Willard or I will try to inform you of the cause of this inundation of mail. Please just delete the offending mail and carry on. Well, back to the salt mine! :-) Steve From: Steve Younker Subject: Strange Mail Date: Tue, 10 Nov 87 16:54:13 EST X-Humanist: Vol. 1 Num. 398 (398) I'm terribly sorry to do this to you, but this junk mail problem is more complicated than I thought. The purpose of this file is to create some MORE junk mail. I'm collaborating with our systems people in tracking down the source of the problem. So, please keep deleting the junk and sending in your normal submissions. I know the junk mail is annoying, but you should see it from this end! Thanks for your patience. Steve From: Steve Younker Subject: LISTSERV going bananas Date: Wed, 11 Nov 87 10:04:38 EST X-Humanist: Vol. 1 Num. 399 (399) I got in this morning and to my dismay I find that HUMANISTS have once again been buried under a wave of bad headers. At this point to save you all from the frustration of further nonsense, I am suspending HUMANIST for probably the next 24 hours. If I can solve the problem before then, so much the better. You will receive another note from me telling you when it is once more safe to venture forward with HUMANIST. Hope to talk to you all soon. Steve From: R.J.HARE@EDINBURGH.AC.UK Subject: TECH88 (about 40 lines) Date: 11 Nov 87 09:25:24 gmt X-Humanist: Vol. 1 Num. 
400 (400) TECH88: A Project on TECHNOLOGY, COMMUNICATION AND THE HUMANITIES, April to September 1988. An enquiry into the complex interplay between technology, communication and the humanities, bringing together representatives from higher education, the arts, commerce and industry.
Public Lectures
Five public lectures will be held during July and August 1988 under the following general headings:
15 July: Technology, Finance and Economics
22 July: Technology and Transport
29 July: Technology, the Media and the Humanities
5 August: Management, Education and Anti-Technology
12 August: Free Enquiry in a Competitive Market: The Implications of Data Banks and Targeted Advertising
Conference on Technology, Communication and the Humanities
A major inter-disciplinary conference to be held at the University of Edinburgh from 18 to 21 August 1988, mounted in association with the TeCH 88 Project, and organised by the Institute. The main themes are:
- Technology and decision making
- Technology and the dissemination of information
- Technology and the acquisition of knowledge
- Technology and creative design
- Technology and daily life
Participants who wish to deliver a paper (in English) should contact the Director of the Institute for further particulars, indicating their proposed topic by 1 March 1988.
Fellowships
Further information is available from the Director of the Institute.
Address for further information: Institute for Advanced Studies in the Humanities, University of Edinburgh, Hope Park Square, Edinburgh EH8 9NW, Scotland
From: Steve Younker Subject: Welcome Back Date: Thu, 12 Nov 87 16:56:10 EST X-Humanist: Vol. 1 Num. 401 (401) Hello folks, Well, after a pretty intensive search of various logs, we feel that we have found the perpetrator of the junk mail. As far as we can tell, this person probably does not know he was causing the problem.
He has been temporarily removed from the list. We will try an alternate method for his inclusion. What we don't know at this time is the EXACT reason for the problem. This is the subject of further research for a group of us at the UofT. I am bringing HUMANIST up this afternoon and I'm going to let it run overnight. If all goes well, it will remain up for the weekend. There is a slight possibility of further rejection notices being transmitted to all of you. However, I believe this possibility is very slight. I hope I don't have to eat my words. :-) I am going to be out of town for the weekend, so I won't be here to strangle HUMANIST if it runs riot once again. There will be another person who has the ability to stem the tide if this becomes necessary. Of course, on a weekend the trick is to notice a problem in the first place. So, please continue to show patience if the dam breaks once again. When I have a proper explanation for this problem, I will pass it on to you. Meanwhile, please carry on with your discussions. Thanks once again, Steve From: MCCARTY@UTOREPAS Subject: Post mortem, in 56 lines Date: 12 November 1987, 21:07:55 EST X-Humanist: Vol. 1 Num. 402 (402) Dear Colleagues: In view of the recent disaster on HUMANIST, which filled up readers around the world, I have added a new section to the electronic guidebook sent to all new members. This section is reproduced below. I would very much appreciate your comments, sent directly to me. I hope that you've recovered. You will appreciate my dismay when returning from a stimulating and enjoyable conference in Waterloo, Ontario (on the use of large textual databases) I was confronted with 247 messages in my reader! You see, for every bad message you get, I get at least two. -------------------------------------------------------------------- E. 
Junk Mail and other Network Problems
It is important to realize that HUMANIST is a highly complex web of individuals using a wide variety of computing systems linked together by several different electronic networks. Few of the many parts that comprise HUMANIST were designed to work together. Because HUMANIST's software, ListServ, is rarely able to distinguish between an original message from a member and an error message from a member's system or from some intervening node, it is highly susceptible to both human and mechanical mistakes. Others have commented that we have done well with human errors. Experience has shown, however, that under the present order of things serious floods of meaningless junk mail are exceedingly difficult to prevent. These floods, some say, are simply the price of belonging to the group. A flood of junk mail can be a serious matter to some. A member's reader can fill up quickly, causing much inconvenience and, perhaps, loss of meaningful e-mail. The editor and postmaster of HUMANIST do what they can to intervene once a flood has started, but even they must sleep, whereas our sometimes disobedient electronic servants do not. Individual members can help prevent a few such problems. One way is to be careful to specify reliable addresses. In some cases the advice of local experts may help. Any member who changes his or her userid or nodename should first give ample warning to the editor and should verify the new address. Accounts should not be left unwatched and unused. If you are planning to be away for more than a few days, make certain that your account can cope with a significant number of messages, or ask the editor to delete you temporarily from the list. (You will need to send him a subsequent request to be restored to the list when you return.)
If you know your system is going to be turned off or otherwise adjusted in a major way, find out when it will be out of service and inform the editor. All the precautions in the world will not prevent some floods of junk mail, however. Ultimately a more perfect network depends on our appreciative insistence that the effort be spent in improving it. Meanwhile patience is required. ----------------------------------------------------------------- Yours, W.M. _________________________________________________________________________ Dr. Willard McCarty / Centre for Computing in the Humanities University of Toronto / 14th floor, Robarts Library / 130 St. George St. Toronto, Canada M5S 1A5 / (416) 978-4238 / mccarty@utorepas.bitnet From: MCCARTY@UTOREPAS Subject: Optical scanners (39 lines) Date: 15 November 1987, 11:14:00 EST X-Humanist: Vol. 1 Num. 403 (403) The following is from Dan Church -------------------------------------------------------------------------- Here is a summary of the responses I received in answer to my question about hard- and software for scanning text in foreign languages. Many thanks to all who replied. The new Kurzweil "desktop" scanner and the Palantir both use "smart" software to figure out problematic characters in a logical fashion (for English). This procedure makes these machines read faster and more accurately, but it also makes it impossible for them to handle foreign languages, especially those with accented letters or non-Roman alphabets. Kurzweil has promised new software in the near future that will enable the desktop scanner to handle French. Other foreign languages are down the road a bit, but this solution is not ideal because it means switching software each time one switches languages and will not accommodate mixtures of languages. 
The old Kurzweil scanners (e.g., Model 3) and the relatively new Model 4000 do not use the same type of software for recognition; instead they use a "training mode" that allows the user to tell the machine what the problematic characters are. It does that by scanning some text and then prompting the operator to enter characters (or combinations of up to three characters) for the unrecognized ones. While that approach may make the processing somewhat slower, it does allow for "training" the machine to recognize accented and even non-Roman characters. From: MCCARTY@UTOREPAS Subject: Biographical supplement (38.5K) Date: 15 November 1987, 11:40:49 EST X-Humanist: Vol. 1 Num. 404 (404) Autobiographies of HUMANISTs Fourth Supplement Following are 26 additional entries to the collection of autobiographies by members of the HUMANIST discussion group. Additions, corrections, and updates are welcome, to mccarty@utorepas.bitnet. W.M. 15 November 1987 From: Subject: Date: X-Humanist: Vol. 1 Num. 405 (405) *Amsler, Robert Bell Communications Research, Morristown, N.J. Despite the fact that I feel I have almost exclusively a background in the sciences, I find that I am continually working with people from the humanities and have been doing so for the last 12 or so years. I graduated from college with a B.S. in math and went on to graduate school at NYU's Courant Inst. of Math. Sciences in Greenwich Village. There I changed from a mathematician to a computer scientist--and even more significantly, to a computational linguist. I just decided one day that it was a lot more fun to see computers printing words than numbers. From NYU I went to the University of Texas at Austin (UT), where I worked with Robert F. Simmons for a number of years. 
Texas became home for 10 years and I eventually worked on a variety of humanities computing projects there as the programming manager of the linguistics research center in the HRC (which many of us preferred to think of as the Humanities Research Center even after the University changed the name to the Harry Ransom Center). At UT I worked on machine-readable dictionaries and eventually did a dissertation entitled ``The Structure of the Merriam-Webster Pocket Dictionary'' in which I proved you can construct taxonomies out of definitions. I also worked on a few other interesting humanities computing projects including providing the programming support (sorting, typesetting and syntax-checking) for Fran Karttunen's Analytical Dictionary of Nahuatl, building a concordance for Sanskrit texts, working on pattern recognition for Incunabula, data organization for a bibliography of literature of the 18th (or was it the 17th, sigh) century, Mayan calendar generation, and in general helping to spearhead an effort in the late 1970s to get the computing center to recognize text as a legitimate use of computing resources on campus. I have an interdisciplinary Ph.D. from UT in Computer Sciences (Computational Linguistics/Artificial Intelligence), Information Science, and Anthropological Linguistics (Ethnosemantics). After school, I went to SRI International in Menlo Park, CA and worked in the AI Center and the Advanced Computer Systems Dept. there for 3 years on a variety of projects and grants involving text, information science, and AI. From SRI I came to my present job at Bell Communications Research in Morristown, NJ in the Artificial Intelligence and Information Science Research Group, where I continue to specialize in working on machine-readable dictionary research (computational lexicology) and in general on finding alternate uses for machine-readable text. I'm a member of AAAI, ACL, AAAS, ACM, DSNA, and IEEE. 
My long-term interest is in trying to understand what it will mean to us in the future to have all the world's text information accessible to computers, and what the computers will be able to figure out from that information. Most recently, my attention has turned to the need to create some standards for the encoding of machine-readable dictionaries and to data entry of the Century Dictionary. From: Subject: Date: X-Humanist: Vol. 1 Num. 406 (406) *Benson, Jim English Department, Glendon College, York University, Toronto I use the CLOC package developed at the University of Birmingham for research purposes, which include statistical interpretations of collocational output for natural language texts. CLOC also produces concordances, indexes, etc. similar to the OCP. At York, CLOC is also currently being used to produce an old spelling concordance of Shakespeare. From: Subject: Date: X-Humanist: Vol. 1 Num. 407 (407) *Bevan, Edis 014 Gardiner Building, Open University, Milton Keynes, MK7 6AA, Great Britain. (The Open University is the biggest University in Britain in terms of student numbers. Instruction is at a distance by means of broadcast materials, written texts and some local tuition. The University has on its undergraduate programme more students with disabilities than all the other higher education institutions in Britain combined.) I intend to set up a discussion group which I hope will be as international as HUMANIST. This will probably be called ABLENET, and I am discussing with Andy Boddington how we could operate as a kind of pseudo-LISTSERVE. Participating in HUMANIST would give me some insights into how such a system could operate effectively. I believe good information exchange is as much a matter of developing communicative competence amongst the users as it is in manipulating the technologies. I am told that HUMANIST is an example of good practice in this matter. 
I also believe that HUMANIST debates could be most relevant to my general research into information and empowerment. It is not just a matter of applying modern technology to the specific needs of individual disabled people, great though the benefits of this can be. The information technology revolution is creating a whole new world, and it is largely being created for able-bodied living with some afterthoughts for possible benefits for people with disabilities. Also there is no reason why disabled people of high academic capability should not be interested in the humanities and in computing in the Humanities. I intend to prepare a directory of resources for disabled people who want to initiate or carry through research projects for themselves. If they become interested in the humanities then HUMANIST could be a relevant resource for them. Furthermore, since I want to make this a truly international resource I need to look at the problems of information exchange in languages other than English. This may be relevant to your concerns with linguistic computing. From: Subject: Date: X-Humanist: Vol. 1 Num. 408 (408) *Butler, Terry I am active in supporting humanities computing at the University of Alberta. I am in the University Computing Systems department. We have the OCP program on our mainframe, TextPack (from Germany) recently installed, and a number of other utilities and programs being used by scholars. We have considerable experience in publishing and data base publishing (I am in the Information Systems unit). I have a master's degree in English Literature from this university. From: Subject: Date: X-Humanist: Vol. 1 Num. 409 (409) *Cerny, Jim University Computing, University of New Hampshire Kingsbury Hall, Durham, NH 03824. (603)-862-3058 I am the site INFOREP for BITNET purposes and part of the academic support staff in the computer center.
We have only been part of BITNET since mid-April-87, so I am working hard to find out what is "out there" and to let our user community know about it. I am especially working hard to show these possibilities to faculty from non-traditional computing backgrounds, such as in the humanities. I am publisher of our campus computer newsletter, ON-LINE, which we produce with Macintosh desktop publishing tools. We are always interested in exchanging newsletter subscriptions with other newsletter publishers/editors. As for myself, I am a wayward geographer, Ph.D. from Clark Univ., cartography as a specialization, and I teach one credit course (adjunct) per year. From: Subject: Date: X-Humanist: Vol. 1 Num. 410 (410) *Chapelle, Carol 203 Ross Hall, Iowa State University, Ames, IA 50011. (515) 294-7274 I am an assistant professor of ESL/Applied Linguistics at Iowa State University in Ames, Iowa. I am interested in the application of computers for teaching English and research on second language acquisition. My papers on these topics have appeared in TESOL Quarterly, Language Learning, CALICO, and SYSTEM. My current work includes writing courseware for ESL instruction and research, and developing a "computers in linguistics/humanities" course for graduate students at ISU. From: Subject: Date: X-Humanist: Vol. 1 Num. 411 (411) *Cooper, John I am working on a UK government sponsored project under the Computer Teaching Initiative umbrella. The project is headed by Susan Hockey, and the third member is Jo Freedman. We are developing ways in which texts in several languages and scripts can be accessed by university members (undergraduates initially, but we hope that graduates and researchers will be able to make use of the facilities) directly onto micro screens connected up with the university mainframe computers. 
They will be able to see their texts in the original scripts, and then be able to use concordance programs such as OCP and other text-oriented software to perform searches, etc., of their material. At present we are working with Middle English, Italian, Latin, Greek, and Arabic, but we are interested in incorporating any scripts and languages for which there is a demand in the university. I am working particularly on the textual side of the project, and we are using texts from the Oxford Text Archive to begin with. My particular interest is in Arabic and other languages written in the Arabic script, and I am at present working on a thesis in the field of Islamic jurisprudence. From: Subject: Date: X-Humanist: Vol. 1 Num. 412 (412) *Feld, Michael I currently teach Philosophy at University College, University of Manitoba, Winnipeg, MB R3T 2M8 (204) 474-9136. My use of computers is a newborn thing: primarily, as yet, to access data-bases, and to communicate with other scholars in my field via e-mail. My research interests center on moral epistemology and applied ethics. From: Subject: Date: X-Humanist: Vol. 1 Num. 413 (413) *Friedman, Edward A. Stevens Institute of Technology, Hoboken, New Jersey 07030 USA. 201-420-5188 I am currently a Professor of Management at Stevens Institute of Technology in Hoboken, NJ 07030, USA. Previously, as Dean of the College I had administrative responsibility for the development of the computer-intensive environment at Stevens. Every student had to purchase a computer (beginning in 1983). The first computer was a DEC Professional 350 and now it is an AT&T 6310. A great deal of curriculum development has taken place at Stevens around this program. We are currently engaged in a massive networking effort which will place more than 2,000 computers on a 10 Megabit/sec Ethernet with interprocess communications functionality.
My interest is in uses of information technology in society and in the impact of information technology on liberal arts students. I recently had a grant from the Alfred P. Sloan Foundation to complete a text on information technology for liberal arts students that will be published by MIT Press. I currently have a grant from the Department of Higher Education of the State of New Jersey to implement an undergraduate course using full-text search techniques. We are placing approximately ten volumes related to Galileo into machine-readable form. They include writings of Galileo, biographical material and commentaries. This data base will be used with Micro-ARRAS software in a history of science course on Galileo. I am working with Professor James E. McClellan of the Stevens Humanities Department and with Professor Arthur Shapiro of the Stevens Management Department on this project. I would be interested in hearing from individuals who have suggestions for experiments or observations that we might consider in this pilot project when it is implemented in the Spring Semester (Feb - May 1988). I am also a founder and Co-Editor of a journal entitled Machine-Mediated Learning, which is published by Taylor & Francis of London. The Journal is interested in in-depth articles that would be helpful to a wide audience of scholars and decision makers. Anyone wishing to see a sample copy should contact me. From: Subject: Date: X-Humanist: Vol. 1 Num. 414 (414) *Gauthier, Robert Sciences du langage, UNIVERSITE TOULOUSE-LE MIRAIL (61 81 35 49), France I am at present head of the "Sciences du Langage" Department at the "Universite Toulouse le-Mirail". I spent twenty years out of France, mainly in Africa, where I taught linguistics and semiotics. I started as a phonetician with a these de 3eme Cycle on teaching intonation to students learning French (FLE, equivalent to TEFL). I worked for various international organisations (UNESCO, USAID, AUPELF) and the French Cooperation.
I was then mainly interested in audio-visual methods of teaching sundry subjects. I got involved in research on local folktales and wrote a few articles on the subject. I have been using computers for 10 years as a means of research, filing, word-processing, and intellectual enjoyment. I learnt and used a few languages (Fortran, Basic, Logo, Prolog...) and worked on different computers. After a These d'etat on the didactical use of pictures in growing up Africa, I came home to the Linguistics department of Toulouse university. I teach Computers or Semiotics at "Maitrise" level and I have a "Seminaire de DEA" on Communication and computed meaning (an unsatisfactory translation of the ambiguous French expression: Calcul du sens). The whole university shows a keen interest in computers and we have to fill in lots of forms to give shape to projects which aim to develop the teaching and use of computers in the Humanities. Unfortunately local problems prevent the university from having an efficient program to give students some kind of competence in dealing with computers. In fact nobody seems aware of the specific problem posed by our literary students and their confrontation with courses given by specialists. As for what should be taught and how, this is either taboo or an irrelevant impropriety. In July 87, at the Colloque d'Albi, I presented a paper which tried to promote a way to teach Basic to students with a literary background, and I will try to perfect the method this year with the students attending my course on Basic and the Computer. I have just completed a stand-alone application that helps make, merge, sort and edit bibliographies. It works on the Macintosh and can be ported to the IBM PC (it was compiled with ZBasic). I am interested in hearing from persons using ExperCommon Lisp on the Macintosh for an exchange of views. From: Subject: Date: X-Humanist: Vol. 1 Num. 415 (415) *Graham, David Department of French and Spanish, Memorial University of Newfoundland St.
John's, NF CANADA A1B 3X9 (709) 737-7636 I was trained in 17th century French literature but have in the last few years become more interested in the history of emblematics in France. To this end, I am now investigating the feasibility of a computerized visual database of French emblems, and am currently exploring the use of Hypercard on a Macintosh Plus to work on this. In addition, for the last few months I have been attempting to encourage the formation of a distribution list for French language and literature specialists in Canada along the lines of ENGLISH@CANADA01 (though I understand it has not been a complete success...). Consequently, I am very interested in the use of e-mail by scholars and teachers in the humanities generally. We are at present looking into the use of computers for teaching FSL here at Memorial and so I would be interested in exchanges of views and material on that subject as well. I am not, however, personally interested in parsers etc., though I have colleagues here who are. From: Subject: Date: X-Humanist: Vol. 1 Num. 416 (416) *Hawthorne, Doug Director, Project Eli, Yale Computer Center, 175 Whitney Ave. New Haven, CT 06520, (203) 432-6680 My office is responsible in broad terms for providing the resources to support instructional computing at Yale. In addition to managing the public clusters of microcomputers available to students, I and my staff assist faculty who are searching for software to use for instruction or who are actively developing such software. In order to fulfill this role we attempt to stay abreast of recent developments and to funnel appropriate information to interested faculty at Yale. While not focussed exclusively on the humanities, we do give considerable attention to the humanists because they do not seem to be as "connected" to matters concerning computing as the scientists.
As but one example, I have been the principal organizer of a one-day conference titled "Beyond Word Processing: A Symposium on the Use of Computers in the Humanities" which will be held tomorrow (Nov. 7). I look forward to participating in the network. From: Subject: Date: X-Humanist: Vol. 1 Num. 417 (417) *Hofland, Knut The Norwegian Computing Centre for the Humanities P.O. Box 53, University N-5027 Bergen Norway Tel: +47 5 212954/5/6 I am a senior consultant at the Norwegian Computing Centre for the Humanities in Bergen (financed by the Norwegian Research Council for Science and Arts), where I have been working since 1975. The Centre is located at the University of Bergen. I have worked with concordancing, lemmatizing and tagging of million-word texts like the Brown Corpus, the LOB Corpus, and Ibsen's poems and plays. I have also worked with publication of material via microfiche, typesetters and laserwriters. We are a clearing house for ICAME (International Computer Archive of Modern English), a collection of different text corpora, and have recently set up a file server on Bitnet for distribution of information and programs. (FAFSRV at NOBERGEN, which can take orders via msgs or mail.) At the moment we are investigating the use of CD-ROM and WORM disks for distribution of material. We have worked for several years with computer applications in museums, printed catalogues and data bases both on mainframes and PCs. From: Subject: Date: X-Humanist: Vol. 1 Num. 418 (418) *Hogenraad, Robert Faculte de Psychologie et des Sciences de l'Education, Universite Catholique de Louvain 20, Voie du Roman Pays B-1348 Louvain-la-Neuve (Belgium) For some time, I have been active here in the field of computer-assisted content analysis (limited to mainframe computers, alas, for financial reasons). For example, we recently issued a User's Manual -- in French -- for our recent PROTAN system (PROTAN for PROTocol ANalyzer).
We intend some more work on our system in two directions, i.e., developing a sequential/narrative approach to content analysis, and developing new dictionaries, in French, in addition to the ones we already work with. From: Subject: Date: X-Humanist: Vol. 1 Num. 419 (419) *Hughes, John J. Bits & Bytes Computer Resources, 623 Iowa Ave., Whitefish, MT 59937; telephones: (406) 862-7280; (406) 862-3927. Editor/Publisher of the "Bits & Bytes Review." After attending Vanderbilt University (1965-1969, philosophy), Westminster Theological Seminary (1970-1973, philosophical theology), and Cambridge University (1973-1977, biblical studies), I taught in the Religious Studies Department at Westmont College in Santa Barbara, California (1977-1982). During 1980-1981, while teaching third-semester Greek at Westmont College, I attempted to use Westmont's Prime 1 to run GRAMCORD, a program that concords grammatical constructions in a morphologically and syntactically tagged version of the Greek New Testament. I had no idea how to use the Prime 1, and no one at the college had ever used GRAMCORD. Several frustrating visits to the computer lab neither quenched my desire to use the program nor dispelled my elitist belief that if students (some of whom, after reading their term papers, I deemed barely literate) could use the Prime 1 productively, then so could I. (The students, of course, immediately saw that I was as illiterate a would-be computer user as ever fumbled at a keyboard or read incomprehendingly through jargon-filled manuals.) My unspoken snobbery was not soon rewarded. After several spectacular and dismal failures (including catching a high-speed line-printer in an endless loop), I welcomed--indeed, solicited--the assistance of one and all, "literate" or not. After a good deal of help, my class and I were able to use GRAMCORD. 
Because of the system software or the way the program was installed or both, however, users had to wait 24 hours before the results of GRAMCORD operations were available. That delay did little to encourage regular use of the program, though it did illustrate the difference between batch and interactive processing. More recently, after three and a half years of research and writing, I have just completed "Bits, Bytes, and Biblical Studies." In October 1986, while researching and writing "Bits, Bytes, and Biblical Studies," I started the "Bits & Bytes Review," a review-oriented newsletter for academic and humanistic computing. This publication reviews microcomputer products in considerable detail, from the perspective of humanists, and in terms of how the products can enhance research and increase productivity. The newsletter appears nine times a year and is available to members of the Association for Computing in the Humanities at reduced rates. (Free sample copies are available from the publisher.) I am a member of the Association for Computing in the Humanities and a contributing editor to "The Electronic Scholar's Resource Guide" (edited by Joseph Raben, Oryx Press, forthcoming). During the summer of 1988, I will teach an introductory-level course on academic word processing, desktop publishing, and text-retrieval programs at the University of Leuven through the Penn-Leuven Summer Institute. I am interested in using available electronic resources and tools to study the Hebrew Scriptures, the Septuagint, and the Greek New Testament. From: Subject: Date: X-Humanist: Vol. 1 Num. 420 (420) *Julien, Jacques I am assistant professor in the Department of French & Spanish at the University of Saskatchewan, in Saskatoon. I am teaching language classes and French-Canadian literature and civilization. My field of research is French-Canadian popular song. I will have my Ph.D. thesis published in Montreal next November.
My subject was the popular singer Robert Charlebois, and I received my degree from the University of Sherbrooke in 1983. I am working on an IBM PC/XT compatible that can access the mainframe (VMS) through Kermit. Nota Bene is the word processor I use most often. I am planning to use AskSam, by Seaside Software, a Text Base Management System, and SATO, from UQAM. I may say that my research is based on computer assistance, as is my instruction. For example, I am very much interested in the software Greg Lessard is working on for interactive writing in French. Keywords that can define my work and my interests would be: French-Canadian literature and civilization, semiotics, sociology, CAI of French, stylistic analysis and Text Base Management. From: Subject: Date: X-Humanist: Vol. 1 Num. 421 (421) *Kenner, Hugh I am Andrew W. Mellon Professor of the Humanities (English) at Johns Hopkins. I co-authored the "Travesty" program in the November '86 BYTE. With my students, I do word-analysis of Joyce's Ulysses, using copies of the master tapes for the Gabler edition. From: Subject: Date: X-Humanist: Vol. 1 Num. 422 (422) *Lancashire, Ian Centre for Computing in the Humanities, Robarts Library, 14th Floor, University of Toronto, Toronto, Ontario M5S 1A5; (416) 978-8656. I am a Professor of English who became interested in applying database and text-editing programs to bibliographical indexes for pre-1642 British records of drama and minstrelsy. Somewhat earlier I had done concording for an edition of two early Tudor plays. These in turn led me in 1983 to offer a graduate course introducing doctoral students in English to research computing; and to help, my department offered to publish a textbook summarizing documentation and collecting scattered information. With the support of like-minded colleagues, especially John Hurd and Russ Wooldridge, I urged the university to set up a natural-language-processing facility.
The Vice-President of Research obliged by doing so and giving us a full-time programmer at Computing Services. I worked with him on a collection of text utilities called MTAS, which we developed on an IBM PC-AT given by IBM Canada Ltd. Then we organized a conference on humanities computing at Toronto in April 1986, and a month later IBM Canada and the university signed a joint partnership to set up a Centre for Computing in the Humanities here. Four laboratories and a staff of five later, I am still a director who enjoys every hour of the extraordinary experience of leading people where they want to go, one of whom, the creator of HUMANIST, is a gentleman scholar who has worked with me from the mid-seventies and whose talents are fully revealed in the Toronto centre. My own research? I co-edit The Humanities Computing Yearbook, am interested in distributional statistical analysis of text (content analysis with pictures), and am working with Alistair Fox and Greg Waite of the University of Otago (New Zealand) and George Rigg of Medieval Studies at Toronto on an English Renaissance textbase, with emphasis on the dictionaries published at that time. I have given a fair number of well-meaning talks about the importance of humanities computing, a few of which have been published. I am optimistic that eventually some serious scholarship will come of all this chatter. My wife is a professor of English too, and we have three children, one cat, and five microcomputers between us. From: Subject: Date: X-Humanist: Vol. 1 Num. 423 (423) *Martindale, Colin Dept. of Psychology, Univ. of Maine, Orono, ME 04469 I guess that the main way that I support computing in the humanities is by doing it. I have been working in the area of computerized content analysis for about 20 years. I have constructed several programs and dictionaries that I have used mainly to test my theory of literary evolution originally described in my book, Romantic Progression (1975). 
More recent publications are in CHum (1984) and Poetics (1978, 1986). I have tried to convince--with some success--colleagues in the humanities to use quantitative techniques and computers. With more success, I have interested grad students in psychology in using computerized content analysis to study literature and music. From: Subject: Date: X-Humanist: Vol. 1 Num. 424 (424) *Miller, Stephen External Adviser, Computing in the Arts, Oxford University Computing Service, 13 Banbury Road, Oxford OX2 6NN. 0865-273266 I would like to join HUMANIST. My role in the computing service here is to handle enquiries about computer applications in the humanities, from users outside of Oxford in the main, but also to provide an internal service if I can be of assistance. From: Subject: Date: X-Humanist: Vol. 1 Num. 425 (425) *Nash, David MIT Center for Cognitive Science, 20B-225 MIT, Cambridge MA 02139 tel. (617) 253-7355 (until Jan. 1988) I am involved in two projects which link computers, linguists, and word lists (and also text archives), namely the Warlpiri Dictionary Project (at the Center for Cognitive Science, Massachusetts Institute of Technology, and Warlpiri schools in central Australia), and the National Lexicography Project at AIAS (Australian Institute of Aboriginal Studies). The latter is a clearinghouse for Australian language dictionaries and word lists on computer media, recently begun, and funded until March 1989. Contact AIAS, GPO Box 553, Canberra ACT 2601, Australia; or the address below until next January. At the Center for Cognitive Science we use a DEC microVax, and Gnu Emacs and (La)TeX. We also use CP/M machines, and a Macintosh SE at AIAS, and have access to larger machines such as a Vax for data transfer. My training and interests are in linguistics and Australian languages. From: Subject: Date: X-Humanist: Vol. 1 Num.
426 (426) *O'Cathasaigh, Sean French Department, The University, Southampton SO9 5NH England I work in the French Department at Southampton, where I use microcomputers for teaching grammar and the mainframe for generating concordances of French classical texts. I'd be very interested in hearing from anyone who has used Deredec or its associated packages. I've thought of buying them for my Department, but have found it very difficult to get information from the authors. So a user report would be very welcome. Please contact: From: Subject: Date: X-Humanist: Vol. 1 Num. 427 (427) *O'Flaherty, Brendan My interest in humanities computing is primarily in the archaeological field. I did my undergraduate and postgraduate degrees at University College, Cork and am currently Research Fellow in the Department of Archaeology in Southampton (address: The University, Southampton SO9 5NH). My interests in computing include computer-aided learning, typesetting and databases. From: Subject: Date: X-Humanist: Vol. 1 Num. 428 (428) *Paff, Toby C.I.T., 87 Prospect St., Princeton University, Princeton, New Jersey 08544 609-452-6068 I support, along with Rich Giordano, almost every aspect of computing in the humanities, where humanities includes the broadest number of fields possible. This means in particular text processing, database work as it relates to the humanities, text analysis, and linguistic analysis. I work a good deal with Hebrew and Arabic fonts, and with faculty and students who work in that area. Occasional work crops up in Chinese, but that comes and goes in waves. I am a SPIRES programmer and support things like the university serials list. My background is, in fact, in library work, though I support almost nothing bibliographical at this point. Given the generally cooperative atmosphere at Princeton, I work with micros, minis and mainframes... CMS and UNIX both. From: Subject: Date: X-Humanist: Vol. 1 Num.
429 (429) *Ruus, Lane Head, UBC Data Library, Data Library, University of British Columbia 6356 Agricultural Road, Vancouver, B.C. V6T 1W5 (604) 228-5587 Academic background: anthropology, librarianship What I do: see the following. UBC DATA LIBRARY AS A TEXT ARCHIVE The UBC Data Library is jointly operated by the UBC Computing Centre and Library. Its basic functions are to acquire and maintain computer-readable non-bibliographic files, in all necessary disciplines, to support the research and teaching activities of the University, to provide the necessary user services, and to act as an archive for original research data that may be used for secondary analysis by others. The Data Library is committed to three basic principles: (a) expensive data files should not be duplicated among a variety of departments on campus, but should be acquired centrally and made available to all; (b) original data resulting from research, which might be subject to secondary analysis in the future, should be preserved for posterity, as are publications in other physical media. They should therefore be deposited in data archives, with the professional expertise to preserve this fragile medium for future analysis; and (c) one of the basic tenets of academic research is the citation of all sources used, so as to facilitate the peer review process. Data files should therefore be cited in publications, as are, as a matter of course, all other media of publication. Through such acknowledgement, creators of data will be encouraged to make their data available for secondary analysis. The Data Library's collection contains over 4600 files. Because of the size of the collection, all data are stored on magnetic tapes. Files vary in size from ten card images to a hundred million bytes or more. Subject matter varies from the Old Testament in Hebrew, to images from the polar-orbiting NOAA satellites.
Data files are ordered from other data archives/libraries, on request (and as our budget allows), or are deposited by individual researchers. At present, the Data Library has textual data files in the following broad subject areas: American fiction, American poetry, Anglo-Saxon poetry, Bible (New Testament, Old Testament), Canadian poetry, English drama, English fiction, English poetry, French diaries, French language (word frequency, literature, poetry), German poetry, Greek (drama, language, literature, poetry), Hebrew literature, Indians of North America - British Columbia - legends, Irish fiction (English), Latin literature. All files are accessible at all times that the UBC G-system mainframe is operating in attended mode. Text files are generally maintained in the format in which they are received from the distributor. Generally this allows the researcher maximum flexibility to choose his/her favourite analysis package (e.g. OCP), download to a microcomputer, etc. Occasionally, the Data Library will compile an index to the contents of a large, complex file, or otherwise compile a computer-readable codebook. The Data Library maintains a catalogue of its collection under the SPIRES database management system, on the UBC G-system mainframe. Each record in the database contains information as to the substantive content, size, format, and availability of data files. It also includes information as to where documentation describing the files is to be found (whether on-line disc files or printed), and the information needed to mount the tape containing the file. The Data Library also maintains, on the G-system, an interactive documentation system. The system includes documents introducing the Data Library, how to mount Data Library owned tapes, as well as documents describing how to compile a bibliographic citation for a data file, how to deposit data files in the Data Library, etc. From: Subject: Date: X-Humanist: Vol. 1 Num.
430 (430) *Tompkins, Kenneth ARHU, Stockton State College, Pomona, NJ 08240 (609) 652-4497 (work) or (609) 646-5452 (home) Fundamentally, I support computing in the Humanities by witnessing. In 1981, I set up the college Microlab so that (1) there would be a place for the whole college to learn about micros and what can be done with them; and (2) non-information science students could have a place to work. Since then, I have held yearly faculty workshops, set up over 200 computers across the campus, designed an Electronic Publishing track in the Literature Program (English Dept.), set up a college BBS, and done anything I could to make sure my colleagues have a chance to use computers in their teaching and research. I did teach a course called Computers and the Humanities which was not an unqualified success. Oh yes, I built my first micro in 1975. My role, then, is to witness, persuade, pound on tables, cajole, and to make myself heard by busy, somewhat uncaring administrators and by overworked and fearful colleagues. I am a Medievalist and I have been teaching at undergraduate colleges since 1965. I came to Stockton as one of a five-person team to start the college; after 15 months of work designing the curriculum and hiring 55 faculty, the college opened in 1971. I was Dean of General Studies until 1973, when I returned to full-time teaching. Since 1978 I have been spending my summers at Wharram Percy in the Yorkshire Wolds. Wharram Percy is a Deserted Medieval Village archaeological dig. I am now the Chief Guide; last summer I led tours for over 1100 visitors. I have co-authored a small booklet on Deserted Villages. I am very interested in how computers can be applied to archaeology. My other projects involve graphic input to computers. At present, I have built digitizing boards and hope to begin digitizing Celtic art so that these complex pictures can be broken into constituent parts. I am also interested in graphic reconstruction of medieval buildings.
From: Mark Olsen Subject: Electronic Text Date: Sun, 15 Nov 87 18:35:12 MST X-Humanist: Vol. 1 Num. 431 (431) I have one suggestion for the ACH electronic text guidelines. You might want to include codes to represent the edition, volumes and page numbers of the texts in question. I gave a paper recently and used, without really thinking, program-generated references to texts that I had collected and from the Constitution Papers published by the Electronic Text Corporation. The commentator suggested that I refer to hard copies of the texts. No problem for material I had assembled, tho' it was a pain in the ~&(*%. But I do not have an indication from ETC as to the edition, publishers or page numbers of the texts in that collection. The moral of this story? Include numbered page breaks and edition information in electronic text, or you will join me in the exceedingly frustrating task of hunting down hard-copy references to text you have on-line. Mark From: Steve Younker Subject: We can all relax now (until next time) Date: Mon, 16 Nov 87 10:49:34 EST X-Humanist: Vol. 1 Num. 432 (432) Good Morning, As promised, I now include an explanation for the flood of junk mail we all received last week. The fault had two parts. The HUMANIST distribution list had a complex but valid address for certain subscribers. (A shorter one could have been used and will now be used for sure!) This was the seed of disaster in the complex world of electronic mail and was therefore the first part of the problem. The second part of the problem was a bug in some mail-handling software at a node between the UofT and the particular subscriber in question. My opposite number at that node found the bug when I sent him a sample of the output with which you all became so familiar last week. :-) The fix is also a two-part affair: my colleague fixes his bug, and we use the shorter address. The shorter address also has the fortunate characteristic of bypassing my colleague's node.
So, even if my friend doesn't fix his software, we won't be flooded with bits and bytes, at least until some new quirk arises. The subscriber who (unknowingly) started this whole affair can now be re-instated, and HUMANIST is off and running once again. As an aside, I'd like to mention that Wisconsin, a major gateway to ARPAnet, will become extinct sometime in December. One or more replacements are in the works at this time. Since all of these new sites have the potential to use different software packages, I would not be surprised at another network burp occurring after the change-over. Whether or not this hypothetical burp hits HUMANIST remains to be seen. But I feel a warning may be in order this time. So, if you've been holding back submissions to the list, hold back no longer. It is now safe to step forward into the fray. Murphy's Law says that I will eat these words. :-) Thanks again for your patience. Steve

From: CSHUNTER@UOGUELPH
Subject:
Date: 16 November 1987, 12:35:13 EST
X-Humanist: Vol. 1 Num. 433 (433)

The following may be of interest to members of HUMANIST:

1987 RESEARCH CHALLENGES IN INFORMATION TECHNOLOGY
VISUAL DATA REPRESENTATIONS: COPING WITH OVERLOAD AND IMPROVING OUR INSIGHT
Sponsored by the University of Toronto/University of Waterloo Cooperative on Information Technology
Friday, November 27, 1987 @ Siegfried Hall, St.
Jerome's College, U of W

Schedule of Activities:
* Paul Eagles, UW Recreation: "Graphical Representation of Breeding Bird Data: The Bird Atlas"
* Colin Ware, UNB Computer Science: "Colour Sequences for Univariate Maps"
* Howard Armitage and Efrim Boritz, UW School of Accounting: "Teaching Visual Representations to Undergraduates"
* Bruno Forte, UW Applied Mathematics: "Thresholding Grey-Level Histograms by Minimum Information Loss"
* John Moore, UW Management Sciences: "Instruction for Team Sports: The Electronic Playbook"
* Phillipe Martin, UT Experimental Phonetics Laboratory: "Intonation Display for Research and Teaching"
* Gordon Andrews and Peter Myshok, UW Mechanical Engineering: "Visual Data Representations in Engineering Design"
* Peter Wood, UT Computer Systems Research Institute: "Asking Questions About Graphs: A Visual Query Language"
* David H. Farrar and John J. Irwin, UT Chemistry: "Visual Representations for Understanding Chemical Models"
* Philip Robertson, UT Computer Science: "Colour Surface Representation of Images"
* Martin Lamb and David Smith, UT Faculty of Library & Information Sciences: "Visual Representation of a Chemical Database for Teaching Purposes"
* Alan Mitchell, Information Services, City of Toronto: "Improving City Planning and Development Using Computing Graphics: The City of Toronto's Challenge"
* John Danahy, UT Landscape Architecture: "Computer Displays in Architecture"
* Ron Baecker, UT Computer Science: "Visual Representations of Computer Programs"

FEES (lunch included):
Members of the Cooperative on Information Technology, Affiliates and Subscribers: $45.00
Non-Members: $75.00
Students: $15.00
If you will require transportation to and from Waterloo -- bus fare $15.00.
Cheques should be made payable either to the University of Toronto, c/o Judy Borodin, 140 St. George Street, Room 622, Toronto, Ontario M5S 1A1, or to the University of Waterloo, c/o Bonnie J.
Kent, Sociology Dept., PAS 2061, Waterloo, Ontario N2L 3G1

From: JACKA@PENNDRLS
Subject:
Date: Monday, 16 November 1987 1244-EST
X-Humanist: Vol. 1 Num. 434 (434)

The Katholieke Universiteit Leuven (KUL) and the University of Pennsylvania are pleased to announce a Summer Institute on Computer Applications in the Humanities. The Institute will run from 18 July 1988 to 26 August 1988 at the University of Leuven in Belgium. The following courses will be taught for both undergraduate and graduate credit:

A PRACTICAL INTRODUCTION TO COMPUTING IN THE HUMANITIES (John Hughes)
COMPUTER APPLICATIONS IN THE HUMANITIES (John R. Abercrombie)
TEXTUAL ANALYSIS (John Fought)
INTRODUCTION TO THE OXFORD CONCORDANCE PROGRAM FOR RESEARCH (Susan Hockey)
STYLISTIC ANALYSIS (Nicole Delbecque)
COMPUTERS AND TRANSLATION (Frank Van Enyde)

In addition to the full-time faculty, guest speakers from other European and American institutions will give special presentations. For general information on the Institute and/or an application, write to: Peter Steiner, Chairman, Comparative Literature Department, 420 Williams Hall, University of Pennsylvania, Philadelphia, PA 19104-6305 USA. Electronic address: Steiner @ PENNDRLN. John R. Abercrombie, Assistant Dean for Computing (Humanities), University of Pennsylvania

From: MCCARTY@UTOREPAS
Subject:
Date: 17 November 1987, 00:22:54 EST
X-Humanist: Vol. 1 Num. 435 (435)

Please excuse this brief test.

From: MCCARTY@UTOREPAS
Subject: No more junk mail -- unless you object (35 lines)
Date: 17 November 1987, 00:26:52 EST
X-Humanist: Vol. 1 Num. 436 (436)

Dear Colleagues: A few of you have kindly written to me, telling me not to worry so much about the occasional floods of junk mail, and that the value of HUMANIST offsets these accidents. I very much appreciate such support for HUMANIST, but I am not persuaded that the obnoxious floods do not upset many.
So, I have finally decided to take on the job of filtering out the junk by having ListServ send me all messages intended for HUMANIST. I will then pass on the ones of human origin to ListServ for distribution to all of you. This is not much work, but it has the disadvantage of making the contributions slightly less immediate. From now on, when you send a message to HUMANIST you will receive a note from ListServ telling you that your message has been submitted to me. I'll pass it on within the day. I promise not to censor any human contribution, only the non-human obscenities. If anyone has any comments about the change of procedure, I'd be happy to receive them directly. If you want the old ways back, please say so. Yours, W.M.
_________________________________________________________________________
Dr. Willard McCarty / Centre for Computing in the Humanities
University of Toronto / 14th floor, Robarts Library / 130 St. George St.
Toronto, Canada M5S 1A5 / (416) 978-4238 / mccarty@utorepas.bitnet

From: MCCARTY@UTOREPAS (Dr Abigail Ann Young, 1-416-585-4504, YOUNG at UTOREPAS)
Subject: Forwarded from the Editor
Date: 17 November 1987, 15:23:52 EST (original: 17 November 1987, 12:31:18 EST)
X-Humanist: Vol. 1 Num. 437 (437)

[this message is roughly 35 ll, exclusive of the header lines]

I append the following paragraph from the latest issue of The EDAM Newsletter (10.1, Fall 1987), p. 7:

Data Bank at Rutgers University. A Medieval and Early Modern Data Bank at Rutgers University will provide access to numeric data, including currency, price, and wage information, from the Middle Ages and Renaissance. The Data Bank was established in 1982 by Professors Rudolph Bell and Martha Howell of the History Department at Rutgers in conjunction with Peter Spufford of Queen's [sic] College Cambridge.
Dr Spufford's contribution of 20,000 entries, originally on index cards, has been described as the "cornerstone of the Data Bank," but much additional information is being entered into the computer as work progresses on indexing data derived from continental European archives in order to produce a major resource for scholars. The goal is to have the Data Bank functioning within two years.

This is fascinating!! But it tells me almost none of the things I want to know. Are there any HUMANISTs who've heard about this before and who have more detailed information? I assume the original 20,000 index cards came from British archives. Is all the data from previously unpublished archival sources? Are prices for the same commodities reported at each period for each region? How will it be accessed? Will it (I hope) be on-line? If there is anyone out there with more information or titles of descriptive articles, etc., please let me know, and I'll post a summary to HUMANIST as appropriate. Thank you. Abigail Young young@UTOREPAS

From: MCCARTY@UTOREPAS (Prof. Yaacov Choueka)
Subject: List of Institutes in CHUM
Date: 17 November 1987, 15:45:04 EST (original: Tue, 17 Nov 87 17:01:37 +0200)
X-Humanist: Vol. 1 Num. 438 (438)

I am trying to compile a list of all Centers/Institutes/Groups, etc., that are involved with computing in the humanities, natural language processing, computational linguistics, or information retrieval, and are associated with universities or research institutions in general. The list is intended mainly for contact and mailing purposes; it will be made available to anyone who requests it once it has reasonable coverage. If you are in charge of such an institution, or just work there, or even just happen to know about it, please send the pertinent information in the following format: Name of Institution, Full Address, Tel; Person in Charge, Title, Tel, E-mail address. Thanks for your help!
Yaacov Choueka, Institute for Information Retrieval and Computational Linguistics, Bar-Ilan University, Ramat-Gan, Israel; choueka@bimacs

From: MCCARTY@UTOREPAS
Subject: HUMANIST's logs
Date: 17 November 1987, 20:14:15 EST
X-Humanist: Vol. 1 Num. 439 (439)

Through an oversight, HUMANIST's logbooks have until now not been accessible to members of the group. That fault has been corrected. You may recall that ListServ keeps monthly logbooks on the UTORONTO machine of all messages sent out by HUMANIST. These logbooks are named HUMANIST LOGyymm, where yy = the year and mm = the month. Thus the logbook for October is HUMANIST LOG8710. See your copy of the guidebook to HUMANIST for instructions on how to fetch the logs. Yours, W.M.

From: MCCARTY@UTOREPAS (Dr Abigail Ann Young, 1-416-585-4504, YOUNG at UTOREPAS)
Subject:
Date: 20 November 1987, 14:48:21 EST (original: 20 November 1987, 11:15:22 EST)
X-Humanist: Vol. 1 Num. 440 (440)

[message approx. 56 lines long w/o headings & counting this line]

Having recently returned from this year's conference at the Waterloo (Ont.) Centre for the New OED (topic: large text data-bases), I've been mulling over various details, theories, arguments, etc., which came up in the two days of the conference. One thing of great interest was the apparent tentativeness about the use of CD-ROM for distributing and using large text-bases.
Publishers seemed (a) reluctant to enter the marketplace with reference material on CD-ROM, because they felt (based on market research) that there was not a large enough demand (except perhaps among institutional users, such as gov't departments or university libraries); (b) curious about what effect the new IBM WORM drive would have; and (c) worried about the need to provide new software and new formats for data to make it really useable in electronic form. That is, publishers seem very aware that electronically publishing a book is not as simple as writing the text to a CD-ROM and selling it -- that seems to have been one idea which emerged strongly from both the publishers' and the users' points of view during work on the New OED. And the users' community (or at least that part of it represented in Waterloo) seemed to be a bit ambivalent: they wanted CD-ROMs because they could be used on micros rather than mainframes, and because they offered a security and permanence which mag tape doesn't have. But they wanted the textbases on those CD-ROMs to be structured differently from the text in the published reference works, they wanted software based on the new structuring to be provided for information retrieval, etc., and they expected the CD-ROMs to be cheap: they didn't seem to want to hear from the publishers that the latter goal was inconsistent with the first two unless there was a huge demand for the finished product. One former publisher summed it up rather well by saying that what the publishing industry was waiting for was an electronic best-seller: something with a broad enough appeal to individual users to cause them to go out and buy CD-ROM readers, and whose particular usefulness and accessibility was enhanced by the electronic medium in a way that no conventional medium could approach. I was very interested by all this, and I think I've summarized fairly the kinds of attitudes being expressed.
Many people, while not doubting the value of CD-ROM for long-term data storage, doubted its value for day-to-day use, and everyone seemed to be waiting with great interest to see what would happen with MicroSoft's CD-ROM of Webster's, Roget, and three other standard reference works. I'm curious to know what other people think about all this. Are there HUMANISTs out there waiting with bated breath for the publication of the OED on CD-ROM? Do people want and need textbases on CD-ROM rather than mag tapes? What do you think? Abigail Young, Research Assistant, REED, YOUNG at UTOREPAS

From: MCCARTY@UTOREPAS (John Bradley)
Subject: Sanskrit Word Processor needed
Date: 20 November 1987, 19:01:51 EST
X-Humanist: Vol. 1 Num. 441 (441)

I'd appreciate a little information: I've been talking to someone here at U of T who wishes to produce a document with Western European language text mixed with Sanskrit (written in the Devanagari script). I believe they want Telugu and Tamil as well. They will be using an IBM PC. Does anyone out there have a happy experience with any software and hardware for an IBM PC that will support this? We haven't been able to lay our hands on a definitive list of languages and character sets that Nota Bene will be supporting. What other choices are there? I'd appreciate a reply directed to me, but will summarize for HUMANIST if there is general interest. Thanks. ... John Bradley (U of Toronto Computing Service) Netnorth/Bitnet: BRADLEY at UTORONTO.

From: MCCARTY@UTOREPAS (ARCHIVE@VAX.OXFORD.AC.UK)
Subject: query on political manifestos
Date: 20 November 1987, 19:03:16 EST
X-Humanist: Vol. 1 Num. 442 (442)

Does anyone have machine-readable versions of any of the political manifestos produced by major British political parties since 1964?
If so, please get in touch with ARCHIVE @ UK.AC.OXFORD.VAX. Lou Burnard
--------------------------------------------------------------------------
Editor's note: for those of you on Bitnet/NetNorth/EARN (and perhaps others) that e-mail address should read ARCHIVE@VAX.OXFORD.AC.UK
--------------------------------------------------------------------------

From: MCCARTY@UTOREPAS (John J. Hughes)
Subject: Bits, Bytes, & Biblical Studies (24 lines)
Date: 20 November 1987, 21:23:16 EST
X-Humanist: Vol. 1 Num. 443 (443)

I received a call from Zondervan Publishing House yesterday, informing me that my book BITS, BYTES, & BIBLICAL STUDIES: A RESOURCE GUIDE FOR THE USE OF COMPUTERS IN BIBLICAL AND CLASSICAL STUDIES is now available, though I have not yet received a copy. HUMANISTs may be interested to learn that it is (finally!) available. The book may be ordered from me c/o Bits & Bytes Computer Resources, 623 Iowa Ave., Whitefish, MT 59937, for $29.95 + $2.50 shipping and handling, or from the publisher. Review copies may be ordered from the publisher. Contact Ed van der Maas, Zondervan Publishing House, 1415 Lake Dr. SE, Grand Rapids, MI 49506; (800) 233-3480 or (616) 698-6900, (616) 698-3461. The book is 650 pages, including glossary and indices.

From: MCCARTY@UTOREPAS (CAMERON@EXETER.AC.UK)
Subject: From Valois to Bourbon
Date: 21 November 1987, 16:12:07 EST
X-Humanist: Vol. 1 Num. 444 (444)

UNIVERSITY OF EXETER
FROM VALOIS TO BOURBON
December 14-16, 1988. First Announcement.

To coincide with the quatercentenary of the Blois assassination of the Duke and Cardinal de Guise, which in turn prompted the assassination of Henri de Valois, a residential Conference/Colloquium has been arranged for December 1988 at the University of Exeter.
Discussions on a wide variety of topics dealing with the closing months of Henri's reign will be stimulated by papers from Joseph Bergin (Manchester), Richard Bonney (Leicester), Denis Crouzet (Paris), Mark Greengrass (Sheffield), and Robert Knecht (Birmingham). It is estimated that the cost for full board, from 6.00 p.m. on Wednesday 14 December to 4.00 p.m. on Friday 16 December, will be 60 pounds, and the Conference Fee 15 pounds. Pro-rata rates are available on request. FOR FURTHER INFORMATION, WRITE TO: Sarah Moore, Dept of French and Italian, Queen's Building, The University, EXETER, EX4 4QH, (UK). Or CAMERON@UK.AC.EXETER

From: Subject: Date: X-Humanist: Vol. 1 Num. 445 (445)

Four humanities departments at the Univ. of Alberta, Canada (English, Philosophy, Classics, and Religious Studies) have established a committee to investigate the possibility of a joint computer lab for faculty and graduate student research use. The five-person committee has a small budget and one year to ascertain the needs and desires of those persons for whom the lab is intended, to view established labs at certain other institutions, and to draft a formal proposal for funding under a provincial government special initiatives program. If the proposal is approved by the cabinet of the Alberta government, the committee's budget will be renewed for a further twelve months, during which time the proposal would be implemented and the lab established. The committee would welcome any suggestions or news of particular successes and difficulties which others may have encountered while setting up similar facilities. Comments or queries may be sent to the committee through SREIMER@UALTAVM.BITNET. Stephen R. Reimer, Department of English, University of Alberta, Edmonton, AB T6G 2E5

From: MCCARTY@UTOREPAS (KRAFT@PENNDRLN)
Subject: CD-ROM, WORM, etc.
Date: 23 November 1987, 12:19:34 EST
X-Humanist: Vol. 1 Num.
446 (446)

Abigail Young's instructive report on the Waterloo discussions, and her inquiry about our attitudes, provide a good opportunity to update HUMANISTs about the activities of the Center for Computer Analysis of Texts (CCAT), in cooperation with the Packard Humanities Institute (PHI) and the Thesaurus Linguae Graecae (TLG), as well as others, on such matters. Some TLG ancient Greek materials have been available on CD-ROM for two years, in two different forms, and have now been supplemented and updated in a new release. Persons with access to the IBYCUS Scholarly Computer (SC) system, which is set up to read CD-ROMs in the TLG format, will know how valuable this type of material is with the right hardware and software. The earlier TLG CD-ROM materials also appeared in an indexed version for accessing through programs developed at Brown (Paul Kahn) and Harvard (Gregory Crane), with impressive results, although I do not have any first-hand experience with this approach. The most recent TLG CD-ROM (version "C") is set up in the provisional "High Sierra" format released last year, and it is the intention of TLG-PHI-CCAT to be "High Sierra" compatible in future releases, with the hope that standard CD-ROM software can be used to access these texts from a variety of machines. CCAT is also developing software for the IBM-type machines to work with the TLG CD-ROM and the forthcoming CCAT-PHI CD-ROM. Thus far, CCAT has tried to obtain software from other sources that would work on the new TLG CD-ROM on the IBMs, but has not found any (it is still early). Meanwhile, CCAT and PHI are producing a CD-ROM of biblical and Latin materials, plus a wide sampling of other material from various sources, to encourage researchers to test this medium of data circulation. This CD-ROM should be available at very modest cost by the end of next month (December). This disk will be compatible with the new TLG disk, and thus will run immediately on the IBYCUS SC (with updated software).
As CCAT and other developers produce software, these disks will be accessible on other hardware as well. CCAT will also put some of the texts on WORM disks to test that approach. Further details will be forthcoming, probably in December. Bob Kraft

From: MCCARTY@UTOREPAS (John Bradley)
Subject: Followup on Sanskrit WP
Date: 23 November 1987, 14:17:46 EST
X-Humanist: Vol. 1 Num. 447 (447)

I have already received several responses to my question about a word processor for Sanskrit (Devanagari). Thanks very much to all! Two correspondents suggested the Graphics Toolbox developed at Penn by Jack Abercrombie. However, one of the two warned that when he had looked at it, it used only a "lowest-common-denominator" CGA display, so the quality of the display was not as good as he'd like, with a rather basic editor. The same correspondent also mentioned some related work done elsewhere. Another correspondent pointed out that Multilingual Scribe offered Devanagari (but not Tamil). Two other people suggested two pieces of software: one by a firm called LEABUS Ltd (114 Brandon St., London SE17 1AL, tel: 01-708-2756), and the other by Gamma Productions Ltd, 609-710 Wilshire Blvd, Santa Monica, CA 90401 USA, (213) 394-8622. Another individual pointed me at James Nye's article entitled "Indic Fonts for Computer Printers" (in South Asian Library Notes and Queries 18 (Spring 1985)). Several people remarked that the Macintosh was a more natural machine for this type of work. I agree -- but our client here already has an IBM PC and wishes to use it for this work. Of the people who have responded, none seemed to have used the software they were describing -- they were (kindly) passing on what they had heard. Other people gave me a couple of other interesting leads. After I have investigated them further, I'll post another note. Thanks again to all who replied. ....
John Bradley (bitnet: BRADLEY@UTORONTO)

From: MCCARTY@UTOREPAS (CMI011@IBM.SOUTHAMPTON.AC.UK)
Subject: Browsing programs
Date: 23 November 1987, 15:00:07 EST
X-Humanist: Vol. 1 Num. 448 (448)

A quick note to thank people who sent me material about browsing programs; I will try to write a summary for HUMANIST, but this is just a 'rain check' until I sort out my mail backlog (not to mention a test of my new mailing program..). sebastian rahtz, computer science, southampton, uk

From: MCCARTY@UTOREPAS (Chuck Bush)
Subject: A software review (131 lines)
Date: 24 November 1987, 20:37:09 EST
X-Humanist: Vol. 1 Num. 449 (449)

Sonar, A Text Retrieval System for the Macintosh

We Macintosh users rarely envy our PC colleagues (and even more rarely admit it). There is only one PC program that makes me step out of my comfortable Macintosh mouse-fur slippers onto the cold tile floor of the PC.

Program Information:
SONAR Text Retrieval System
Virginia Systems Software Services, Inc.
5509 West Bay Court
Midlothian, Virginia 23113
804-739-3200
Version Reviewed: 4.0
Minimum System: Macintosh Plus; works on SE and Mac II
Copy protection: none
Suggested price: $195

From: MCCARTY@UTOREPAS (GW2@vaxa.york.ac.uk)
Subject: OED ON-LINE
Date: 24 November 1987, 20:45:30 EST
X-Humanist: Vol. 1 Num. 450 (450)

Does anyone have advance details of the forthcoming computerised OXFORD ENGLISH DICTIONARY? I'd be interested in both the hardware requirements and the software specifications. Geoffrey Wall

From: MCCARTY@UTOREPAS (JACKA@PENNDRLS)
Subject: The Graphics Library (63 lines)
Date: 25 November 1987, 14:18:53 EST
X-Humanist: Vol. 1 Num. 451 (451)

The Center for Computer Analysis of Texts is pleased to announce the availability of our Graphics Library for EGA, CGA, and Hercules displays. We are willing to provide interested colleagues with the essential routines to display foreign fonts on these adapters.
In addition, we will provide you with the following initial fonts: Arabic, Armenian, Cyrillic, Devanagari, Greek, Hebrew, Phonetics, Punic, and Roman. The routines are written in Turbo Pascal and Assembler. We will send you a demonstration program including source code. If you are interested in adding this facility to already-written programs or new ones, you will have to agree to the following conditions: 1. Not to redistribute the demo disk to others. Other colleagues may receive copies from the Center. 2. To give appropriate scholarly recognition to the Center and its staff for this work. 3. To share any developments or resulting programs with the University of Pennsylvania at no cost and under University License Agreement. If you agree to all these conditions, we will provide you with the full library and additional documentation. We will coordinate updates and share with you any and all improvements. In addition, we have an opportunity to organize a small workshop on graphics display if there is sufficient interest.

SELECTED LIST OF ROUTINES IN THE FULL LIBRARY:
Window management
Pop-up menus
Microsoft mouse support
TIFF support
HP LaserJet interfaces
Graphics display
Selected application program for graphics support

The cost of obtaining a demo disk will be $20. This nominal charge covers shipping and handling of the diskette only. JACK ABERCROMBIE, ASSISTANT DEAN FOR COMPUTING, DIRECTOR OF THE CENTER FOR COMPUTER ANALYSIS OF TEXTS, UNIVERSITY OF PENNSYLVANIA, JACKA @ PENNDRLS. Center address: CCAT, Box 36 College Hall, Philadelphia, PA 19104

From: MCCARTY@UTOREPAS (KRAFT@PENNDRLN)
Subject: CD-ROM for IBM update
Date: 25 November 1987, 14:21:15 EST
X-Humanist: Vol. 1 Num. 452 (452)

As an addendum to my earlier note about Abigail Young's query, CCAT has now received a copy of the MicroSoft DOS extension that permits the CD-ROM reader to be accessed as though it were just another disk drive on the IBM machine.
We have been successful in reading the new TLG CD-ROM in this manner, which bodes well for future software development on such large bodies of material. Our present configuration is as follows: Sony CD-ROM reader with interface/controller card for IBM PC; device driver for the IBM (from Sony or from Discovery Systems); MS DOS extension licensed for $10 per drive through Discovery Systems, 7001 Discovery Blvd, Dublin, OHIO 43017 (tel 614-761-2000). [There are other vendors similarly licensed, I am sure. Discovery Systems was most convenient for us.] Now, with the proof that the new TLG CD-ROM format can be accessed with off-the-shelf products, CCAT will focus on the search and retrieval software to increase efficiency on the DOS machines, and on quality control of data to be included on CCAT CD-ROM productions. Some of the current developments will be displayed at the combined annual meetings of the American Academy of Religion, Society of Biblical Literature, and American Schools of Oriental Research in Boston on 5-8 December 1987. Cooperation and other sorts of input are encouraged. Bob Kraft

From: MCCARTY@UTOREPAS (Robert E. Sinkewicz, (416) 926-7128, ROBERTS at UTOREPAS)
Subject: The Greek Index Project
Date: 25 November 1987, 15:24:54 EST
X-Humanist: Vol. 1 Num. 453 (453)

[This submission contains 73 lines]

From: Subject: Date: X-Humanist: Vol. 1 Num. 454 (454)

THE GREEK INDEX PROJECT OF THE PONTIFICAL INSTITUTE OF MEDIAEVAL STUDIES

The Greek Index Project has been designed as an information access system for Greek manuscripts containing works written prior to A.D. 1600. The project was initiated in 1971 by Walter Hayes and has been directed by Robert Sinkewicz since 1985. It is owned and housed by the Pontifical Institute of Mediaeval Studies. The data assembled by the project have been taken primarily from printed catalogues of Greek manuscript collections.
For this purpose a microfilm collection of such catalogues was put together with the assistance of other research institutes. Over a period of fifteen years the data were extracted from these sources and arranged in an organized retrieval system. Because of the incomplete nature of many catalogues, further research was done to identify entries for many authors or works. The system contains four primary files: an inventory of manuscripts with basic information on each one, an inventory of authors, another for works, and finally a file that provides manuscript listings for each authored work. Anonymous works are treated separately because of the special problems associated with this area. The computerized section of the project is stored in an SQL database on an IBM 4361 mainframe operating under VM/CMS. Special data-entry panels have been written to help assure accuracy and speed of data input. A set of utilities has also been written to allow a two-way transfer of data between dBase III PLUS and SQL. This enables us to use micros for data entry and correction, in addition to allowing collaborators at other sites to share our data in an electronic format. By September 1988 the entire manuscript inventory will be computerized (approximately 100,000 records). In addition, the author, title, and manuscript listing files will be available for the Late Byzantine Period (1261-1453), at least for the authors listed in the first eight fascicules of the "Prosopographisches Lexikon der Palaiologenzeit." In the fall of 1988 we will be ready to publish a first edition of the "Manuscript Listings for the Authored Works of the Palaeologan Period." A second edition will be published when the "PLP" is completed. A "Studia Minora" series is also planned.
This will be a series of shorter publications (30-40 pages each) devoted to individual authors of special interest or to other minor research tools that we have assembled, such as a handlist of Greek manuscripts from the Phillipps collection. Two to five issues will be published each year for the next four years. If funding of the project is continued, the remaining data for the earlier periods will be computerized over the next four years. In the meantime, data on authors and works for those periods are being provided to students and scholars for the cost of the data entry of the information requested ($15 per hour). The Greek Index Project is funded by the Social Sciences and Humanities Research Council of Canada. Support for our hardware and software installation has been provided by the Centre for Computing in the Humanities at the University of Toronto through a cooperative agreement with IBM Canada. Submitted by Robert E. Sinkewicz, Pontifical Institute of Mediaeval Studies, 59 Queen's Park Crescent East, Toronto, Ontario, CANADA M5S 2C4

From: MCCARTY@UTOREPAS (John J. Hughes)
Subject: Electronic OED (12 lines)
Date: 25 November 1987, 19:22:11 EST
X-Humanist: Vol. 1 Num. 455 (455)

Some HUMANIST (whose name I have lost) recently inquired about the electronic version of the Oxford English Dictionary. According to a report in the CHRONICLE OF HIGHER EDUCATION (November 18, 1987, p. B60), Oxford University Press is putting all 22,000 pages and 500,000 definitions of the 16-volume OED on three CD-ROMs. The first two ROMs will contain the basic 12 volumes and should be available by the end of 1987. The four supplementary volumes will be placed on a third ROM, for which no release date was given. A new printed edition of the OED that contains 5,000 additional words will be published "early in 1989." A complete revision of the entire work is planned for 1990.

From: MCCARTY@UTOREPAS
Subject: Job posting (61 lines)
Date: 25 November 1987, 19:38:31 EST
X-Humanist: Vol. 1 Num.
456 (456)

FACULTY OF ARTS AND HUMANITIES ........... LITERATURE AND LANGUAGE PROGRAM

ANTICIPATED VACANCY: Instructor or Assistant Professor of British Literature, starting September 1, 1988. A renewable, full-time position.

DUTIES INCLUDE: teach undergraduate courses in Brit. Lit., Intro. to Literature, Communications, and General Education. Teaching load is 24 semester hours per year, typically five 4-hour courses plus tutorials and/or independent studies, or six 4-hour courses.

RANK AND SALARY: Salary range is $20,173 - $23,819 for Instructor and $25,178 - $28,956 for Assist. Prof., plus state-mandated fringe benefits. Salary negotiable depending on qualifications and experience.

THE FACULTY: Arts and Humanities has 33 regular faculty members and degree programs in Studies in the Arts, Historical Studies, Literature and Language, and Philosophy and Religion.

STOCKTON STATE COLLEGE IS AN AFFIRMATIVE ACTION/EQUAL OPPORTUNITY EMPLOYER. WOMEN AND MINORITIES ARE ENCOURAGED TO APPLY. Send letter of application with resume to: Margaret Marsh, Chair, Faculty of Arts and Humanities, Stockton State College, Pomona, N.J. 08240

*******************************************************************

Of particular interest to HUMANIST participants is the fact that we are attempting to start an undergraduate degree track in electronic publishing within the Literature and Language Program. The person appointed to this position will be able to develop this curriculum and establish an electronic publishing lab. So far, our search has not produced many humanists with computing backgrounds and skills. Ken Tompkins -- Stockton State

From: MCCARTY@UTOREPAS
Subject: The Information Retrieval List (90 lines)
Date: 26 November 1987, 16:42:57 EST
X-Humanist: Vol. 1 Num. 457 (457)

Dear Colleagues: The following is a description of another electronic discussion group, IRList, which may be of interest to some of you.
Subscription is obtained by sending a note to the moderator, Ed Fox, at one of the several e-mail addresses listed below. Yours, W.M. -------------------------------------------------------------------------- IRList is open to discussion of any topic (vaguely) related to information retrieval. Certainly, any material relating to ACM SIGIR (the Special Interest Group on Information Retrieval of the Association for Computing Machinery) is of interest. Our field has close ties to artificial intelligence, database management, information and library science, linguistics, ... A partial list of topics suitable are: Information Management/Processing/Science/Technology AI Applications to IR Hardware aids for IR Abstracting Hypertext and Hypermedia CD-ROM / CD-I / ... Indexing/Classification Citations Information Display/Presentation Cognitive Psychology Information Retrieval Applications Communications Networks Information Theory Computational Linguistics Knowledge Representation Computer Science Language Understanding Cybernetics Library Science Data Abstraction Message Handling Dictionary analysis Natural Languages, NL Processing Document Representations Optical disc technology and applications Electronic Books Pattern Recognition, Matching Evidential Reasoning Probabilistic Techniques Expert Systems in IR Speech Analysis Expert Systems use of IR Statistical Techniques Full-Text Retrieval Thesaurus construction Fuzzy Set Theory Contributions may be anything from tutorials to rampant speculation. In particular, the following are sought: Abstracts of Papers,Reports,Dissertations Address Changes Bibliographies Conference Reports Descriptions of Projects/Laboratories Half-Baked Ideas Histories Humorous,Enlightening Anecdotes Questions Requests Research Overviews Seminar Announcements/Summaries Work Planned or in Progress You may submit material for the digest to a variety of places, depending on what network you are on and how quickly and reliably you want mail to reach me. 
We do not have to pay for mail deliveries, but they do vary in speediness and reliability. Possibilities include: If on ARPANET and can use domains, or on CSNET, use fox@vtopus.cs.vt.edu fox@vtcs1.cs.vt.edu foxea%vtvax3.bitnet@wiscvm.wisc.edu If on ARPANET and can't use domains use one of the following fox%vtopus.cs.vt.edu@csnet-relay.arpa fox%vtcs1.cs.vt.edu@csnet-relay.arpa fox%vtcs1.bitnet@wiscvm.arpa foxea%vtvax3.bitnet@wiscvm.arpa If on BITNET, use fox@vtcs1 or foxea@vtvax3 If on UUCPNET, use something like one of the following ... seismo!vtcs1.bitnet!fox ... seismo!vtvax3.bitnet!foxea As you might expect, archival copies of all digests will be kept; feel free to ask for recent back issues. Note that FTP is not yet possible, so all communication must be by EMAIL or phone or letter. The list does not assume copyright, nor does it accept any liability arising from remailing of submitted material. Further, no liability is accepted for use of such materials for information retrieval research, including distribution of test collections. I reserve the right, however, to refuse to remail any contribution that I judge to be of commercial purpose, obscene, libelous, irrelevant, or pointless. Replies to public requests for information should be sent, at least in "carbon" form, to this list unless the request states otherwise. If necessary, I will digest or abstract the replies to control the volume of distributed mail. However, PLEASE DO contribute! I would rather deal with too much material than with too little. -- Ed Fox Edward A. Fox, Assistant Professor, Dept. of Computer Science, Virginia Tech (VPI&SU), McBryde Hall Rm. 562, Blacksburg VA 24061 (703) 961-5113 or 6931 From: MCCARTY@UTOREPAS Subject: Date: 26 November 1987, 16:55:57 EST X-Humanist: Vol. 1 Num. 458 (458) This is a test. AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyZzed. From: MCCARTY@UTOREPAS Subject: News of a new LIST, in 30 lines Date: 26 November 1987, 22:01:39 EST X-Humanist: Vol. 1 Num. 
459 (459) There is a new LIST that may interest some readers of HUMANIST and ENGLISH; it is ETHICS-L and you subscribe to it with a SEND (VMS) or TELL (VM) command: TELL LISTSERV@MARIST SUB ETHICS-L Your Name The sender of the news, Jane Robinett (ROBINET@POLYTECH) says: "Discussions of ethics in computing usually generate more heat than light. This list could do a lot toward generating more light if we do more than trade war stories and opinions of the "I'm right and you're NOT" variety. Of course we can't get any work done without some war stories, since they furnish food for thought. But we shouldn't stop there. Given our experiences, we ought to be able to delineate the basic issues and hot areas in computer ethics. Some current ones have to do with: - ownership of information (both data and program files) - responsibility for program failures (Is the company responsible? the programmer? the lead programmer? the project manager?) Who's responsible for the "fix"? - how much privacy is reasonable (there are all kinds of levels here -- data bases, systems, LANs, networks, etc.) "I will be teaching a course (required for all Polytechnic CS majors) next semester which has a heavy ethics component. That's one reason I'm especially interested in this list. "A current topic around here is what happens when systems programs fail? Is anyone responsible for damage done? Or is the responsibility only for the necessary fix?" Regards, M.G. From: MCCARTY@UTOREPAS Subject: Bible for the Macintosh Date: 27 November 1987, 13:57:43 EST X-Humanist: Vol. 1 Num. 460 (460) This is just a short inquiry: Does anyone know if the King James version of the Bible is available for the Apple Macintosh? (For both CP/M and MS-DOS systems there was a company that marketed the Bible, plus some sort of searching retrieval program, for under $200 as I recall, though I never had access to a copy). Jim Cerny University Computing, University of N.H., USA. 
J_CERNY@UNHH From: MCCARTY@UTOREPAS Subject: Two small points of order Date: 28 November 1987, 01:06:47 EST X-Humanist: Vol. 1 Num. 461 (461) Dear Colleagues: First point. Apparently the note from HUMANIST concerning the Ethics list was not entirely clear about how interested souls subscribe. The command "TELL LISTSERV@MARIST..." should not be sent to HUMANIST but to the ListServ program (a pseudo-user) at the MARIST node of Bitnet. Thus, if you're on Bitnet/NetNorth/EARN, you send this command directly, not in a note. If you're connected to Bitnet through a gateway (true of HUMANISTs in the UK, for example, who are on JANET) then you need to put this command as the first and only line in a note to ListServ@Marist. Second point. Some time ago we agreed that answers to specific questions asked on HUMANIST should be sent to the questioner directly, not to HUMANIST, and that the questioner would then gather up the replies, edit them if necessary, and post the results to HUMANIST. In this way those of us who have forgotten the original question won't be bothered with several replies out of context, but still the results will be available generally, in one convenient note. (Convenient for filing or deletion, depending on the recipient.) Then, too, asking a question carries with it a certain not altogether insalubrious burden. HUMANIST sometimes reminds me of Heraclitus' stream. For those of you who have hung around in the still pools beside the rushing current and have seen this foot before, please excuse my editorial reminder. Yours, W.M. _________________________________________________________________________ Dr. Willard McCarty / Centre for Computing in the Humanities University of Toronto / 14th floor, Robarts Library / 130 St. George St. Toronto, Canada M5S 1A5 / (416) 978-4238 / mccarty@utorepas.bitnet From: MCCARTY@UTOREPAS Subject: Electronic OED (31 lines) Date: 29 November 1987, 13:05:39 EST X-Humanist: Vol. 1 Num. 
462 (462) A few of the details in the recent response from John J. Hughes (based on the report in the Chronical of Higher Education) might mislead those who are interested in the future of the OED. The form of the release of the integrated OED (i.e., 1928 version plus 4-volume Supplement plus new materials) is so far undecided. The following information might clarify the situation: There is planned to be a release of the OED (1928 version only) on CD-ROM around year-end. This comes with its own software for interactive access, and reqires an IBM PC with a CD player (any of four (I think) CD-Rom makes). The data is split across 2 CD-ROM disks. None of the materials from the 4-volume Supplement will be available in this form. There is also a possibility that OUP will release a version of the data on tape, using SGML-like tags. Software to browse and extract data will likely also be provided, allowing users to access the OED efficiently from conventional disks, either interactively or via programs. This version will likely not be ready until mid-88. Software will likely be provided for VM/CMS and UNIX, and perhaps for PCs as well. Again, none of the materials from the 4-volume Supplement will be available in this form. Finally the full OED, integrated with the Supplements and with new materials, will not be available until 1989. The first release will be in book form, with an electronic version sometime later. At this stage, the hardware and software requirements are not yet decided. Frank Tompa, Co-Director of the Waterloo Centre for the New OED From: MCCARTY@UTOREPAS Subject: How to answer questions, again Date: 29 November 1987, 14:23:42 EST X-Humanist: Vol. 1 Num. 463 (463) Dear Colleagues: Jim Cerny of Brown University (JAZBO@BROWNVM) sent the following to me in response to my recent point about the answering of ques- tions on HUMANIST. 
I reminded the membership that we had agreed to answer specific questions by responding to the questioner directly, not to HUMANIST, and as questioners to gather up the replies, edit them if necessary, and post the results to HUMANIST. I originally suggested this convention because I thought HUMANISTs would be annoyed at being sent replies to ques- tions they couldn't remember and which usually didn't concern them in any event. Cerny's challenging of the convention gives us the opportunity to rethink what we want to do. Here is most of what he said: ----------------------------------------------------------------- I understand the motivation for this editorial approach, but I would like to point out that shared iterative refinement of a question is typical both of group conferencing, including face- to-face interaction as well as telephony and most bulletin boards. For example, I just discovered a bug in the Microsoft QuickC compiler. The typical response of participants in the Microsoft conference (on BIX) is to suggest solutions, often in the framework of "have you tried X". These responses could be mailed to me instead of being posted in the conference, but they have public value. Often, these "responsive" questions contain errors that neither the original "speaker" nor the responder can correct. Or, even if the original "speaker" can correct them, it takes the voice of the multitude to convince. In addition, some of the most productive responses can be the most trivial in content, as various participants temporarily take up the burden of moderating. "Say more." "What do you mean by this?" It may be little more than assurance that others are interested and listening, confirmation that the original posting, which may have been in large part testing the waters anyway---that the original posting was appropriate and that the author should let out the stops.... 
I have been participating in electronic conferences for three or four years now, including BIX, CompuServe, and various bul- letin boards around the country. In my judgment, the current approach threatens to stifle discourse. In order for brief splinter discussions to form, everyone who might be interested would have to contact the original poster and request to be added to an ad hoc list, which may have value for no more than a few days and a few mailings anyway. Also, can we realisti- cally expect every HUMANIST to be prepared to take on the role of moderator and list server for a few days? The face-to-face analogue, I guess, would be the act of a few people stepping aside to hash out something that the group finds uninteresting. Or, might it just as well be the perking up of a few ears to see if anything interesting IS being said? or to see if one has anything to add to the conversation that seems to be develop- ing? How is this dynamism to occur if one immediately shunts aside all of the conversational shit work (as some feminists have so aptly put it)? Our goal, I believe, is, in part, to share information and, in part, to find out what information none of us have---to identify areas that need research or questions that one of us might be able to answer in a couple of months. Even such rela- tively pure information motives, however, cannot be well served by restricting group communication to the obviously information giving. Well, there it is. I believe that people should develop scan and delete skills instead of squelching discourse. ----------------------------------------------------------------- If you care at all about this, would you please let us know what you think and why? Thanks for your help. Yours, W.M. _________________________________________________________________________ Dr. Willard McCarty / Centre for Computing in the Humanities University of Toronto / 14th floor, Robarts Library / 130 St. George St. 
Toronto, Canada M5S 1A5 / (416) 978-4238 / mccarty@utorepas.bitnet From: MCCARTY@UTOREPAS Subject: An error Date: 29 November 1987, 14:27:49 EST X-Humanist: Vol. 1 Num. 464 (464) In my previous note I mistakenly said that Jim Cerny is from Brown University. He's actually from the Univ. of New Hampshire. His correct e-mail address is JAZBO@UNHH. My apologies! From: MCCARTY@UTOREPAS Subject: Compuserve Date: 29 November 1987, 19:27:39 EST X-Humanist: Vol. 1 Num. 465 (465) Can anyone advise me on the cheapest way of logging on to the US bbs "Compuserve" from my hometown of Winnipeg? When I lived in Vancouver, BC, a Compuserve gateway city, I had only to pay Compuserve's own extortionate costs; now it seems, I must also pay for longdistance costs, or a Datapac surcharge. Is there no way around this? From: MCCARTY@UTOREPAS Subject: 1st edn. of the OED in CD-ROM and 2nd edn. in hardcopy Date: 30 November 1987, 10:54:07 EST X-Humanist: Vol. 1 Num. 466 (466) I have received the following information from Tim Benbow of Oxford University Press about its publishing plans for the OED, in response to the query from Mr Wall. * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * Oxford University Press has announced that early in 1988 it will publish the original Oxford English Dictionary, 1884-1928, issued in twelve printed volumes, on two CD ROM disks. OUP states that this product is very user-friendly, much more so than other similar products on the market. These CD ROMs can run on a PC, XT or AT or an IBM clone with a 640 K memory with either a CGA or EGA device. A Hitachi, Philips, or Sony disk drive is required. The display monitor may be monochrome, but a colour monitor is preferable, as colour is used to distinguish different types of information. OUP also plan to make the original OED available on magnetic tape in a fully structured version with embedded codes, written in IBM format. 
In 1989, OUP will publish the Oxford English Dictionary, Second Edition, which is the text of the original OED, plus supplements, plus new material which has been added recently. This will be published in a printed version of 20 volumes. The database containing this material will be made available in a number of electronic forms. From: MCCARTY@UTOREPAS"Kenneth R. van Wyk" Subject: A serious warningVirus warning! Date: 30 November 1987, 15:29:05 ESTMon, 23 Nov 87 08:05:57 EST X-Humanist: Vol. 1 Num. 467 (467) I SEND THE FOLLOWING ALONG IN THE INTEREST OF STOPPING THE SPREAD OF THIS PARTICULAR PROBLEM. LET ME REPEAT: ********* THIS IS NOT REPEAT NOT A JOKE ********* The following note was distributed in the network warning of a very serious problem which can infect a site which is obtaining Public Domain software for unknown sources, especially over the network. There have been other reports of this (or similar) problem from other sites and the effects are most devistating, especially for novice users of hard-disks who don't truly understand the need for backups! Please warn others who pick up strange Public Domain stuff. (There is at least one report of a virus in a program on a BBS!) Last week, some of our student consultants discovered a virus program that's been spreading rapidly throughout Lehigh University. I thought I'd take a few minutes and warn as many of you as possible about this program since it has the chance of spreading much farther than just our University. We have no idea where the virus started, but some users have told me that other universities have recently had similar probems. The virus: the virus itself is contained within the stack space of COMMAND.COM. When a pc is booted from an infected disk, all a user need do to spread the virus is to access another disk via TYPE, COPY, DIR, etc. If the other disk contains COMMAND.COM, the virus code is copied to the other disk. Then, a counter is incremented on the parent. 
When this counter reaches a value of 4, any and every disk in the PC is erased thoroughly. The boot tracks are nulled, as are the FAT tables, etc. All Norton's horses couldn't put it back together again... :-) This affects both floppy and hard disks. Meanwhile, the four children that were created go on to tell four friends, and then they tell four friends, and so on, and so on. I urge anyone who comes in contact with publicly accessible (sp?) disks to periodically check their own disks. Also, exercise safe computing - always wear a write protect tab. :-) This is not a joke. A large percentage of our public site disks has been gonged by this virus in the last couple days. Kenneth R. van Wyk User Services Senior Consultant Lehigh University Computing Center (215)-758-4988 From: MCCARTY@UTOREPASLOU@VAX.OXFORD.AC.UK Subject: an interesting problem Date: 30 November 1987, 15:35:18 EST X-Humanist: Vol. 1 Num. 468 (468) Here's an interesting problem someone may have an answer to: what's the best way of automatically detecting the language in which something is written? We have a library here in Oxford with a large (well, very large actually) catalogue of book titles in just about every european language you can think Lou Burnard From: MCCARTY@UTOREPAS Subject: range of discussion on HUMANIST Date: 1 December 1987, 00:21:22 EST X-Humanist: Vol. 1 Num. 469 (469) I appreciate the concern about HUMANIST's self-editorial policy. There is a spirit missing from HUMANIST's discussions that has been present in other discussions I have been a member of. On the other hand, I can remain a member of HUMANIST in good conscience because it does not take much time away from my other duties, which are considered by others here to be of a higher priority. Two specific examples: I was a member of the info-c discussion on ARPA (linked with the sister discussion group on usenet). This was a free-flowing discussion with frequent cries from subscribers asking submitters to control themselves. 
There was a great deal of redun- dancy and even inanity mixed with a few nuggets of valid and even brilliant discussion. Although I enjoyed it immensely on the whole, I had to quit because I couldn't afford that many hours of extra reading per week. After a while, the returns just weren't great enough to put up with the noise. On the other end of the editorial spectrum was the arpanet RISKS digest. A "digest" means that the moderator is also an editor. All submissions are sent to him, and he exercises editorial judgement on everything submitted. Once or twice a week, as volume dictates, the collected and edited submissions are mailed in one package to all sub- scribers, with a refreshing dash of humour added. The kind of give- and-take conversations that have been referred to can still happen in this environment, because the moderator is essentially benign unless serious redundancies and/or inanities occur. (I believe that the mod- erator was getting full credit for his work in this, and it was proba- bly a part of his job description.) Nevetheless, even so edited, the volume became more than I could deal with effectively (despite the fascinating subject matter, by the way: risks to the public from com- puters and automated systems). So although I find HUMANIST occasionally on the "dead" side, I have no trouble maintaining my subscription since it does not demand too much of my time. Discussions happen in private, and if I want to get in on them I can contact the initiator. I admit, I wouldn't mind seeing a bit more activity in HUMANIST on occasion, and I think people with issues of broad interest (such as the recent discussion on the OED) should feel free to bring these issues forward. But if HUMANIST has to err, I would rather it err on the dead side, lest I be forced to resign. Let my vote be so registered. 
Sterling Bjorndahl Institute for Antiquity and Christianity Claremont Graduate School Claremont, California From: MCCARTY@UTOREPAS Subject: The Dirty Dozen ... Plus??? Date: 1 December 1987, 09:25:37 EST X-Humanist: Vol. 1 Num. 470 (470) This is just to add to the warning passed on by Stuart Hunter about Lehigh's direct experience with a virus in some publicly obtained copies of COMMAND.COM. There are apparently a number of other programs that have had work done to their genes to turn them into malignant viruses. They have come to be called "The Dirty Dozen," though there are more than a dozen. These have been described in a number of computer center newsletters in the last year or so. The most recent description I've seen was "Beware The Dirty Dozen: Software That Destroys," CAUSE/EFFECT, v. 10, n. 6, November 1987, pp. 44-45. (which is reprinted from the "Technical Update" publication at the Univ. Cincinnati Computing Center, September 1, 1987). Jim Cerny University Computing, Univ. N.H. J_CERNY@UNHH From: MCCARTY@UTOREPAS Subject: Discussions [34 ll, counting this one] Date: 1 December 1987, 09:34:18 EST X-Humanist: Vol. 1 Num. 471 (471) Well, pace Sterling Bjorndahl, I don't find HUMANIST on the dead side and I don't want to! But I think I know exactly what he's talking about. Recently it seems that queries or opinions appear and then die in electronic silence. In fact, there seems in general to be less discussion now than there was a few months ago. I don't know to what to attribute this. It could reflect a need on the part of those of us who teach or provide services to students to prepare for and then deal with the demands of a new academic session. It could be that no-one has very much to say at the moment. 
But I have wondered recently whether we were all feeling a reluctance to say much brought on by our worthy moderator's urgings towards self-editing (with the consequent responsiblity of editing and posting a resulting conversation, if any) and our new awareness of the cost factor for the Antipodes at least. I certainly find the current "full" discussion on the details about the electronic OED interesting and a nice change, even though I had already found out a lot of it at the Waterloo conference, and I wish that I'd kept my query about the Rutgers database general now too. So I am glad that Willard has passed on what others have had to say, and I think perhaps we should try out for a bit making all discussion general. We could make use of a subject line to indicate the topic of a posting, and whether it were part of an on-going discussion, thus enabling those who need to clear their readers quickly to ignore discussions which were not of interest to them. Abigail Young Research Associate, Records of Early English Drama University of Toronto young at utorepas From: MCCARTY@UTOREPAS Subject: discussions (15 lines) Date: 1 December 1987, 11:14:50 EST X-Humanist: Vol. 1 Num. 472 (472) I would like to express my approval of the contents of the recent message about discussions on HUMANIST. Although I have no interest in scanning the turgid "flames" of the digitally deranged, it seems much more than unlikely that HUMANISTs will inundate one another with drivel; and, I am content to attempt to follow the threads of the discussions on my own. Certainly it >is< easy enough to dispatch into oblivion (I have set up a macro to just that purpose) any piece of mail in which one has no interest. As it now stands, HUMANIST seems a touch too formal. From: MCCARTY@UTOREPAS Subject: CD ROMs, micro- and mainframe computing with large corpora Date: 1 December 1987, 14:24:09 EST X-Humanist: Vol. 1 Num. 
473 (473) A late contribution to the discussion provoked by Abigail Young about CDs as a medium of data distribution. [60 lines or so.] I think Dr. Young hit the nail on the head with the question "Are there people out there waiting with bated breath for the new OED on CD ROM?" Because certainly if we're not excited about the OED as a group, then we're not as a group going to be very excited about anything. Yes, I AM waiting with bated breath for an electronic OED, but I was far more excited to learn it would be available on tape than I was to hear about the CD ROM version. I like and use my PC, and I hope someday to be able to work with massive textual corpora on it, but at least for the moment I think magnetic tape is a far better medium for distribution. For one thing, I don't have a CD ROM drive, and I don't know anyone who does, except for Bob Kraft and a classicist here who has a Ibycus micro on loan but does her Greek word processing on our mainframe. Tape drives, on the other hand, will be available at any school in the country. For another, tape drives allow me to change the data -- add to it, enhance it, reduce its size -- and make another copy. CD ROM doesn't. For that reason alone, I'll wait for WORM before buying a new drive for my PC. And finally, mainframes seem to me by and large better at dealing with large quantities of data. That is changing, to be sure. But I can edit the Nibelungenlied in storage on the mainframe, and extract every occurrence of the name 'Sivrit' in a couple of seconds. My PC with its 640 Kbytes can only hold a fourth or so of the Nibelungenlied in RAM at a time. To be sure, a micro-Ibycus could also find all the occurrences of 'Sivrit' in a few seconds -- if the Nibelungenlied were on a CD ROM. But it's not, and there aren't enough Germanic philologists in the country to make it economically feasible to make one. Nor do I WANT a frozen, unalterable text of the Nibelungenlied. 
I want to be able to index it, to add parsing information or scansions to the file so I can search on them, and so on. Not to mention the need to correct typos in the transcription and add manuscript variants. For all this, we need erasable media, not CD ROMs. Magnetic tapes do have the drawback, for some users, that they are typically readable only on mainframes. (There are PC-based 9-track tape drives, but they aren't real common.) And many humanists don't like working on mainframes. Even for those users, however, the local academic computing center should be, and almost always is, in a position to read the tape and help the user download the data to a microcomputer. No, it's not always easy. And no, it's not always fast. A megabyte an hour or so. But the chances are good the academic computer center knows how to do it, and does it regularly. All the ones I've ever known as a user or staff member do. There may be centers that do NOT provide this kind of service, although I have never seen one and never heard of one. But if they exist, those centers should be DRIVEN to provide support for humanities computing, support for microcomputing, and support for data exchange between mainframes and micros. If they are not providing these services, they are not doing their job. Given the kind of support computer centers ought to be providing for humanist users, and given the kind of flexible text humanistic work seems to need, I think CD ROMs look much less promising as a means of data distribution than WORM disks and magnetic tape, and in some cases floppy disks. All of which is just one user's opinion. -Michael Sperberg-McQueen University of Illinois at Chicago (U18189 at UICVM) From: MCCARTY@UTOREPAS Subject: Subject line comments (25 lines) Date: 1 December 1987, 14:26:50 EST X-Humanist: Vol. 1 Num. 474 (474) Oh, my, the HUMANIST subject lines may get long. 
Now, in addition to the honest-to-goodness subject and the number of lines in the message, Abigail Young suggests "we could make use of a subject line to indicate the topic of a posting, and whether it were part of an on-going discussion, thus enabling those who need to clear their readers quickly to ignore discussions which were not of interest to them." Maybe we serious, dull writers can use such an augmented subject line as a place to pun? But woe is me, for, alas, my mailer does not accept long subject lines. Can it be that some people will have to read the beginning of the message to learn what we want to ignore? Will we be like this lady:

   Lizzie Borden took an axe
   And plunged it deep into the VAX
   Don't you envy people who
   Do all the things you want to do?

(Thanks to Jerry Whitnell in California for the ditty.) Maybe we'll relax a bit as our marking gets frantic and we hear the carols of the season. Marshall Gilliland U of Saskatchewan From: MCCARTY@UTOREPAS Subject: Concordance for Mac Date: 1 December 1987, 15:55:42 EST X-Humanist: Vol. 1 Num. 475 (475) Does anyone know of concordance programs for the Mac? Thanks. --Jim From: MCCARTY@UTOREPAS Subject: Text encoding guidelines -- progress report (225 lines) Date: 1 December 1987, 15:58:31 EST X-Humanist: Vol. 1 Num. 476 (476) A followup on the current status of the ACH effort to formulate guidelines for text encoding practices.

******************************************************************
* NOTE: The following encoding conventions have been used to
* represent French accents throughout this message:
*
*      /   acute accent  -  accent aigu
*      `   grave accent  -  accent grave
*
* The accent codes are typed AFTER the letter they modify, and
* are used with both upper- and lower-case letters.
******************************************************************

On November 12 and 13, 1987, 31 representatives of professional societies, universities, and text archives met to consider the possibility of developing a set of guidelines for the encoding of texts for literary, linguistic, and historical research. The meeting was called by the Association for Computers and the Humanities and funded by the National Endowment for the Humanities. The list of participants is appended to this document. The participants heartily endorsed the idea of developing encoding guidelines. In order to guide such development, they agreed on the following principles:

The Preparation of Text Encoding Guidelines
Poughkeepsie, New York, 13 November 1987

1. The guidelines are intended to provide a standard format for data interchange in humanities research.

2. The guidelines are also intended to suggest principles for the encoding of texts in the same format.

3. The guidelines should
   a. define a recommended syntax for the format,
   b. define a metalanguage for the description of text-encoding schemes, and
   c. describe the new format and representative existing schemes, both in that metalanguage and in prose.

4. The guidelines should propose sets of coding conventions suited for various applications.

5. The guidelines should include a minimal set of conventions for encoding new texts in the format.

6. The guidelines are to be drafted by committees on
   a. text documentation,
   b. text representation,
   c. text interpretation and analysis, and
   d. metalanguage definition and description of existing and proposed schemes,
   co-ordinated by a steering committee of representatives of the principal sponsoring organizations.

7. Compatibility with existing standards will be maintained as far as possible.

8. A number of large text archives have agreed in principle to support the guidelines in their function as an interchange format. We encourage funding agencies to support development of tools to facilitate this interchange.

9. Conversion of existing machine-readable texts to the new format involves the translation of their conventions into the syntax of the new format. No requirements will be made for the addition of information not already coded in the texts.

(trad. P. A. Fortier)

******************

The further organization and drafting of the guidelines will be supervised by a steering committee selected by the three sponsoring organizations. The interchange format defined by the guidelines is expected to be compatible with the Standard Generalized Markup Language defined by ISO 8879, if that proves compatible with the needs of research. The needs of specialized research interests will be addressed wherever it proves possible to find interested groups or individuals to do the necessary work and achieve the necessary consensus. Formation of specific working groups will be announced later; in the meantime, those interested in working on specific problems are invited to contact Dr. C. M. Sperberg-McQueen, Computer Center, University of Illinois at Chicago (M/C 135), P.O. Box 6998, Chicago IL 60680. - N.I., C.M.S-McQ
------------------------------------------------------------------------------
List of Participants Helen Aguera, National Endowment for the Humanities Robert A. Amsler, Bell Communications Research David T. Barnard, Department of Computing and Information Science, Queen's University, Ontario Lou Burnard, Oxford Text Archive Roy Byrd, IBM Research Nicoletta Calzolari, Istituto di linguistica computazionale, Pisa David Chestnutt (Assoc.
for Documentary Editing, American Historical Assoc.), Department of History, University of South Carolina Yaacov Choueka (Academy of the Hebrew Language), Department of Mathematics and Computer Science, Bar-Ilan University Jacques Dendien, Institut National de la Langue Francaise Paul A. Fortier, Department of Romance Languages, University of Manitoba Thomas Hickey, OCLC Online Computer Library Center Susan Hockey (Association for Literary and Linguistic Computing), Oxford University Computing Service Nancy M. Ide (Association for Computers and the Humanities), Department of Computer Science, Vassar College Stig Johansson, International Computer Archive of Modern English, University of Oslo Randall Jones (Modern Language Association), Humanities Research Computing Center, Brigham Young University Robert Kraft, Center for the Computer Analysis of Texts, University of Pennsylvania Ian Lancashire, Center for Computing in the Humanities, University of Toronto D. Terence Langendoen (Linguistic Society of America), Graduate Center, City University of New York Charles (Jack) Meyers, National Endowment for the Humanities Junichi Nakamura, Department of Electrical Engineering, Kyoto University Wilhelm Ott, Universitaet Tuebingen Eugenio Picchi, Istituto di linguistica computazionale, Pisa Carol Risher (American Association of Publishers), American Association of Publishers, Inc. Jane Rosenberg, National Endowment for the Humanities Jean Schumacher, Centre de traitement e/lectronique de textes, Universite/ catholique de Louvain a` Louvain-la-neuve J. Penny Small (American Philological Association), U.S. Center for the Lexicon Iconographicum Mythologiae Classicae, Rutgers University C.M. Sperberg-McQueen, Computer Center, University of Illinois at Chicago Paul Tombeur, Centre de traitement e/lectronique de textes, Universite/ catholique de Louvain a` Louvain-la-neuve, Belgium Frank Tompa, New Oxford English Dictionary Project, University of Waterloo Donald E. 
Walker (Association for Computational Linguistics), Bell Communications Research Antonio Zampolli, Istituto di linguistica computazionale, Pisa, Italy [end of message] From: MCCARTY@UTOREPAS Subject: Date: 1 December 1987, 16:22:58 EST X-Humanist: Vol. 1 Num. 477 (477) (Was that too long, Marshall?) Does anyone have any information on WORM drives? A non-HUMANIST colleague told me he had heard about them at an IBM-sponsored conference and that they were the best thing since sliced bread, basically. I've also heard that a disk for an IBM WORM drive would be capable of being written to only once, which would certainly make such a disk only slightly more useful than a CD-ROM, and considerably less useful than a magnetic tape. I am always suspicious of new devices which will revolutionize my life and save me time, trouble, etc. I think it is because I tended to believe the Popular Science/Mechanics picture of the future when I was a child. But a WORM drive & disk capable of multiple disk writes as well as reads sounds very, very appealing. Abigail Ann Young Research Associate, Records of Early English Drama University of Toronto young at utorepas From: MCCARTY@UTOREPAS Subject: Sonar; Mac; concordance vs. retrieval (54 lines) Date: 2 December 1987, 00:17:29 EST X-Humanist: Vol. 1 Num. 478 (478) I asked about concordance programs for the Mac. Someone sent me the review of Sonar and a couple of others have mentioned it. The review does not say anything about concording texts with Sonar, however. I have never used one of these retrieval programs. I have used WatCon and have written a concordance program for the IBM PC (for multiple versions of the same text). Is Sonar appropriate for generating concordances, that is, concordances that will be printed and distributed? Does it properly handle lines of poetry, for instance, and give columns of lines with locations?
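[A concordance of the kind asked about here is, at heart, an exhaustive pass over the text that lists every word with the lines (and line numbers) where it occurs. A minimal sketch of that pass, for illustration only -- it is not based on Sonar, WatCon, or WordCruncher, and all names in it are invented:

```python
# Minimal concordance sketch: every word, listed with the line number and
# full line where it occurs -- the sequential, exhaustive pass that
# distinguishes concording from random-access retrieval.
from collections import defaultdict

def concord(lines):
    """Return a mapping: word -> list of (line_number, line_text)."""
    entries = defaultdict(list)
    for num, line in enumerate(lines, start=1):
        for word in line.lower().split():
            word = word.strip(".,;:!?\"'()")  # crude tokenization
            if word:
                entries[word].append((num, line))
    return entries

poem = [
    "Lizzie Borden took an axe",
    "And plunged it deep into the VAX",
]
c = concord(poem)
# Print columns of lines with locations, one entry per occurrence.
for word in sorted(c):
    for num, line in c[word]:
        print(f"{word:10s} {num:4d}  {line}")
```

A real concording program would add lemmatization, stop-word lists, and cross-references, but the skeleton is the same sequential sweep.]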
I assume that WordCruncher from BYU can do such, since it is a descendant of a concording program (unless there is an equivocation on "concord" here, and please let us all know if there is). I am in the process of designing a retrieval engine and browser for the American Heritage Dictionary. When I think of retrieval programs, I think of inverted indices, hash tables, and the like. "Use this information to go find X and then let's Y it." That, to me, is a typical retrieval action, and the access is typically random. Concording, however, at least in the traditional sense, is sequential and exhaustive. One COULD use a retrieval application to concord a text, but it would be very inefficient and would probably require additional programming anyway. One would have to have a means to call the retrieval engine iteratively for every word in the text as well as the means to format and write the results someplace. Are WordCruncher and Sonar dual applications? In order to index, one has to perform much of the same processing as is required for concording (process sequentially and exhaustively, split words out of lines, stop words, lemmatize?, cross reference (See also xxx)?). Well, some of the routines are the same anyway, at least to the extent that the developer of one type of application would have a start on developing the other. It begins to sound like integrated systems a la Symphony vs. 1-2-3. Does the system that offers both really do both jobs well? Or, first I guess, are there systems that offer both? --Jim Dr. James H. Coombs Software Engineer, Research, Institute for Research in Information and Scholarship (IRIS) Brown University jazbo@brownvm.bitnet From: MCCARTY@UTOREPAS Subject: CD-ROM & WORM [88 lines] Date: 2 December 1987, 12:12:08 EST X-Humanist: Vol. 1 Num. 479 (479) The recent observations by Abigail Young and Michael Sperberg-McQueen on CD-ROM and WORM technologies call for some comment from the "pro" (and experienced) side.
I hope to keep them brief, just to pinpoint some of the issues. Michael's comments seemed to me to miss many crucial points, and did not reflect the attitudes or situation of numerous people with whom I am in regular contact. 1. The difference between CD-ROM and WORM for this discussion is negligible, as Abigail suspected. Right now, WORM drives are more expensive and less tested publicly, but it is cheaper to produce a single disk. But once you have that single WORM disk, which currently costs about $65, there is no price advantage to making multiple copies (50 copies would cost $3250). With the CD-ROM, it might cost $3000 to master, but each additional copy would cost very little (perhaps $7 each for 100). Thus it would be much cheaper to make 100 copies of a CD-ROM than 100 copies of a WORM disk at present. And the CD-ROM holds more than twice as much as the WORM disks with which we are working. So WORM is fine for limited production or in-house purposes; CD-ROM is better for larger distribution, etc. Neither can be changed once mastered, although WORM can be mastered in stages, while CD-ROM is a once-for-all mastering process. 2. Are people anxiously waiting for data distributed on CD-ROM? In my experience, YES. We have many advance orders for the CCAT CD-ROM, and more inquiries. Ted Brunner can report on the TLG experience. What sorts of people are asking? Obviously, IBYCUS SC owners (about 130 machines), who are set up to use CD-ROM as part of the package; librarians, who need massive amounts of data in a bibliographically controlled context (static is good, in this setting!); and the mass of individual scholars/students who are not in a tape-oriented environment such as Michael describes (his experience is not at all typical, even at the ideal level, of the majority of people with whom I am in contact -- people in small colleges, seminaries, or operating individually, with no access to a real mainframe or effective consultation). 3.
What is attractive to these inquirers? Several fairly obvious things. (1) Amount of material available -- e.g. all of Greek literature through the 6th century on the TLG disk! (2) Price of the material (on tape, the TLG data cost over $4000; on CD-ROM, it is about 10% of that) (3) Convenience of storage, access, etc. -- I would rather download from a CD-ROM than from a tape drive, any day. It is the old roll vs codex issue once again (microfilm vs microfiche, etc.). (4) Quality control -- what is on the CD-ROM may have errors, but at least they can be identified and controlled (and corrected in a later release); I don't have to wonder whether my dynamic file has become corrupted (as happens more than I want to admit). (5) Speed of access to large bodies of data -- even if the programs are not yet in place and it will take 20 times as long to search a large CD-ROM file on the IBM than on IBYCUS, it is at least possible to do the search (or to search multiple files, in various configurations), which is extremely difficult in any other manner short of a dedicated mini. I am rambling and apologize. Much more needs to be said, but I need to finish preparing ID tables for the CCAT CD-ROM if it is to be mastered by the end of the year! Perhaps it would not be feasible economically to put the Nibelungenlied on its own CD-ROM, but to have it as a small part of a CD-ROM with all sorts of other texts is what we are talking about! That is not only feasible, but it seems to me highly desirable, IBYCUS or not. And I can still download what I want to edit, or manipulate, etc. I lose none of that capability. But I gain by having the original fixed at hand for comparison, etc. Libraries will rapidly be CD-ROM centered, and that is as it ought to be. Hopefully computer centers will not be bypassed by this exciting and useful development! 
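[Bob Kraft's round figures above lend themselves to a quick back-of-envelope check of where CD-ROM mastering pays for itself. A small sketch, assuming his numbers (about $65 per WORM disk with no mastering cost; about $3000 to master a CD-ROM plus about $7 per copy):

```python
# Back-of-envelope comparison of the WORM vs CD-ROM costs quoted above.
def worm_cost(copies, per_disk=65):
    """WORM: no mastering cost, a flat per-disk price."""
    return copies * per_disk

def cdrom_cost(copies, mastering=3000, per_copy=7):
    """CD-ROM: one mastering charge, then cheap copies."""
    return mastering + copies * per_copy

# Find the smallest run at which the CD-ROM route becomes cheaper.
n = 1
while cdrom_cost(n) >= worm_cost(n):
    n += 1
print(n)                               # 52 -- the crossover run
print(worm_cost(50), cdrom_cost(100))  # 3250 3700, matching the figures above
```

On these assumptions the CD-ROM run becomes cheaper somewhere above fifty copies, which matches Kraft's point that WORM suits limited or in-house production while CD-ROM suits larger distribution.]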
Bob Kraft From: MCCARTY@UTOREPAS Subject: Summary of responses on KJV Bible for Macintosh (incl.[152 lines] Date: 2 December 1987, 14:41:59 EST X-Humanist: Vol. 1 Num. 480 (480) Thanks to everyone who responded to my recent inquiry about the availability of the King James Version of the Bible for the Apple Macintosh. I've tried to acknowledge or quote from all the responses (as of 01-Dec) in the summary that follows. From: Subject: Date: X-Humanist: Vol. 1 Num. 481 (481) John J. Hughes (XB.J24@STANFORD) had the most definitive answer, reflecting no doubt the research for his book "Bits, Bytes and Biblical Studies". Robin C. Cover (ZRCC1001@SMUVM1) referenced this book and Marshall Gilliland (GILLILAND@SASK) and Tim Seid (ST401742@BROWNVM) mentioned sources that Hughes lists. Hughes wrote: ---------------------------------------------------------------------- There are several companies that sell King James Versions of the Bible for Macintoshes. Here are their names, addresses, and so forth. The first program is reviewed in detail in chapter 3 of BITS, BYTES, & BIBLICAL STUDIES (Zondervan, 1987).

THE WORD Processor
Bible Research Systems, 2013 Wells Branch Parkway, Suite 304, Austin, TX 78728, (512) 251-7541
$199.95; requires 512K; includes menu-driven concording program. CP/M version available for Kaypros.

MacBible
Encycloware, 715 Washington St., Ayden, NC 28513, (919) 746-3589
$169; 128K; text files that may be read by MacWrite and Microsoft Word.

MacScripture
Medina Software, P.O. Box 1917, Longwood, FL 32750-1917, (305) 281-1557
$119.95; 128K; text files designed to be used with MacWrite.

From: Subject: Date: X-Humanist: Vol. 1 Num. 482 (482) Marshall Gilliland (GILLILAND@SASK) pointed to a very unexpected source, i.e., one of the DECUS (DEC Users Society) tapes. We are an active VAX/VMS site and we did indeed have the tape. It is on VAX System SIG Symposium tape VAX86D (from the Fall 86 DECUS meeting in San Francisco).
In uncompressed form the files take about 9000 VAX disk blocks (roughly 5 MB). It is all in upper case. Presumably could be downloaded to a PC, but don't think I will attempt that! Gilliland wrote, in part: ----------------------------------------------------------------------- If you have VAX equipment there and get DECUS tapes then ask one of your systems people for the copy of the ascii text of the KJ Bible that was on a DECUS tape not too long ago (I think in 1987). Marshall Gilliland English Dept. U. of Saskatchewan From: Subject: Date: X-Humanist: Vol. 1 Num. 483 (483) Tim Seid (ST401742@BROWNVM) pointed me to CCAT (Center for Computer Analysis of Texts) and Bob Kraft (KRAFT@PENNDRLN) from CCAT also responded. Bob Kraft also sent me several files about CCAT and its services and I've tacked CCAT's info-file at the end of this summary ... "old hands" may be aware of CCAT's electronic newsletter, ONLINE NOTES, but it was new to me and their info-file tells how to subscribe. Bob Kraft wrote: ----------------------------------------------------------------------- I have not seen my MAC person (Jay Treat) since your inquiry about the KJV arrived, but I am reasonably sure that it is already available from CCAT for the MAC, or will be very soon. We have been distributing the KJV and RSV (along with the Greek and Hebrew texts of the Bible) to IBM types for over a year now, and all these materials will be on our soon to be released CD-ROM. Most of it has been ported to the MAC as well. I will send you an order form and other information separately. Bob Kraft From: Subject: Date: X-Humanist: Vol. 1 Num. 484 (484) Ronald de Sousa (DESOUS@UTORONTO) mentioned the possibility of using DIALOG services. 
de Sousa wrote: ----------------------------------------------------------------------- You'll probably get some satisfactory answers, but in the meantime I wonder whether you knew that the cheap after-hours service of DIALOG Info Services, called "Knowledge Index", has the King James full text on line, and can be searched using the search options of that service. I seem to recall that for $200 you'd get about 8 hours of search time -- quite enough for a limited project. Of course, the same is available on DIALOG itself, with somewhat more sophisticated options. From: Subject: Date: X-Humanist: Vol. 1 Num. 485 (485) Roger Hare (R.J.HARE@EDINBURGH.AC.UK) responded from JANET that Catspaw Inc. has the King James Bible. They specialize in supporting PC-based implementations of SNOBOL and related products, as I recall. Roger Hare wrote: ----------------------------------------------------------------------- Catspaw do a version of the King James Bible for 50 dollars. My catalogue doesn't say what machine it's for, but if you have access to a mainframe perhaps you could get it onto your Macintosh via file transfers? Their address is: Catspaw Inc., PO Box 1123, Salida, Colorado 81201, USA. Roger Hare. From: Subject: Date: X-Humanist: Vol. 1 Num. 486 (486) Finally, Chuck Bush (ECHUCK@BYUADMIN) mentioned that they have the King James Bible at the Humanities Research Center at Brigham Young University and I presume he could supply more details. Chuck Bush wrote: ----------------------------------------------------------------------- At BYU we do have the text of the King James Bible in machine-readable form. The original data is on a mainframe, but we have downloaded it to PC disks etc. for those who have ordered it in other forms. I have a copy of it on a Macintosh Bernoulli cartridge from which it would be relatively easy to copy it to some other Macintosh medium--even floppies. However, this is just the TEXT. There isn't any software to access it conveniently.
Sonar is the only text retrieval software I know of for the Macintosh and I don't think it would be very satisfactory. For one thing, it couldn't give you chapter and verse references. Chuck Bush Humanities Research Center Brigham Young University From: Subject: Date: X-Humanist: Vol. 1 Num. 487 (487) Interested HUMANISTs should also consult the guide to external services of the Center for Computer Analysis of Texts (CCAT), Univ. of Pennsylvania, available from Jack Abercrombie (JACKA@PENNDRLS.BITNET). From: MCCARTY@UTOREPAS Subject: Vox populi (46 lines) Date: 2 December 1987, 20:29:00 EST X-Humanist: Vol. 1 Num. 488 (488) Dear Colleagues: My thanks to the several people who offered their views on the conversational style of HUMANIST. The majority of speakers have clearly voiced a preference for a somewhat more open manner of conversational exchange than has been the rule so far. For what it's worth, I welcome this change without reservation, since HUMANIST is by design ruled chiefly by its members rather than by its editor. Until an absolutely foolproof method of screening out junk mail is found, I will continue to have all submissions to HUMANIST sent first to me and will forward the ones of human origin to the membership. This means very little work for a very large improvement in the quality of the environment. One of the interesting (but, I guess, not surprising) characteristics of HUMANIST is the number of members who never say anything -- yet continue to put up with the large volume of mail. I imply no criticism whatsoever, for there are many noble and practical reasons for remaining silent. Nevertheless, I suspect that some members may occasionally have something to say but wonder if what they have to say is worthy. In general the advice I follow is: say it and see what happens. One possibility for the diffident is to send in a contribution with a note attached asking my advice, for whatever it's worth.
Please let me know if anything about HUMANIST bothers you or otherwise seems to need improvement. The ListServ software (written and maintained on a voluntary basis by a remarkable person who lives in Paris) we cannot fundamentally alter. It has certain characteristics that some may consider flaws but that seem to me merely features to be exploited in the best possible way. Locally HUMANIST is supported by my Centre and by the good will of our Computing Services, i.e., by two busy people. There's not much that can be done given these resources, but some changes can be made without much effort -- like the screening of junk mail. In short, lead on! Yours, W.M. _________________________________________________________________________ Dr. Willard McCarty / Centre for Computing in the Humanities University of Toronto / 14th floor, Robarts Library / 130 St. George St. Toronto, Canada M5S 1A5 / (416) 978-4238 / mccarty@utorepas.bitnet From: MCCARTY@UTOREPAS Subject: Date: 2 December 1987, 22:53:10 EST X-Humanist: Vol. 1 Num. 489 (489) Here's one for the eager punters: a colleague of mine wants to study the New Kingdom El-Amarna literature (Egypt, mid-14th C BC). Anybody care to say if someone has already typed such stuff into the computer? Apologies if it's obvious... sebastian rahtz computer science university southampton uk From: MCCARTY@UTOREPAS Subject: CD ROMs, mainframes Date: 2 December 1987, 23:23:34 EST X-Humanist: Vol. 1 Num. 490 (490) Many thanks to Bob Kraft for his cogent remarks about CD ROMs. I seem to have given a rather scrooge-like impression in my most recent posting about CD ROMs and PCs, which does not reflect my positive opinion of PCs. Yes, CD ROMs are ideal for certain kinds of data distribution, especially for (a) stable data and (b) large numbers of recipients. For humanistic research applications with those characteristics, they are also obviously good ideas.
WORM disks, or better yet erasable mass storage devices, would make many of the same advantages available for non-static data and small numbers of recipients. But neither description fits all research fields. I am less convinced that institutional support for faculty use of mainframes and microcomputers is untypical in North America. This is an empirical question, and I would like to put it up for discussion: what is the situation at the sites represented on HUMANIST with regard to

(a) support for humanities computing formally provided by the institution via centralized or specialized facilities,
(b) faculty-student computing on mainframes or minis,
(c) institutional support for microcomputing, and
(d) institutional support for mainframe-micro data transfer?

It is possible that Bob Kraft is right and my experience is untypical. But it seems also possible that Penn and CCAT get so much business from people without mainframe access because those who do have local computer centers get their help locally. It would be useful, I think, for all of us if we could get some idea of the facts in this area. The ACH Special Interest Group for Humanities Computing Resources (the sponsor of HUMANIST) did plan once to distribute a questionnaire to gather this information, but the final questionnaire design seems to have been delayed, so let's caucus informally now. Michael Sperberg-McQueen, University of Illinois at Chicago From: MCCARTY@UTOREPAS Subject: Electronic OED -- for the blind? Date: 2 December 1987, 23:34:33 EST X-Humanist: Vol. 1 Num. 491 (491) I have a blind, computerized friend, a professor of English and a professional writer, who got very excited when I passed on to him the recent messages from HUMANISTs about plans for making the OED available in electronic form. He had visions - no joke intended - of consulting it through his speech synthesizer on his PC. His enthusiasm was dampened by the planned use of colour to display certain types of information.
Does anyone happen to know if the OED has any plans for handicapped users? I suppose that there are still architects who design monumental buildings without ramps for wheelchairs, but perhaps... From: MCCARTY@UTOREPAS Subject: Al-Amarna Correspondence (in MRT format) [96 lines] Date: 3 December 1987, 09:55:14 EST X-Humanist: Vol. 1 Num. 492 (492) Sebastian Rahtz asked whether the El-Amarna letters exist in digitized format somewhere. I doubt whether many HUMANISTs are interested in west-semitized Akkadian texts, but this query (and its answer) provides an opportunity to tell a sad and familiar tale... and perhaps an opportunity for someone to come forward with better news than I have to tell... The good news (for our assyriologist friend in the UK) is that Knudtzon's edition of the El-Amarna letters is in machine-readable format. I have used the massive printed "concordances" (two tomes, each about 7 inches thick). These printouts originated at UCLA, so the best bet is to contact Giorgio Buccellati at the department of Near Eastern Studies, who might make tapes or diskettes available. UCLA has a growing corpus of MRT material for the ancient Near East, and in time it will be available publicly as part of Buccellati's hypermedia project for Mesopotamia (Computer-Aided Analysis of Mesopotamian Materials); some materials are currently available from Undena, and Buccellati passed out sample diskettes of digitized Eblaite texts at AOS. The sad tale I mentioned earlier is as follows: Du Cerf (a Paris publisher) recently released a superb volume in its series Litteratures anciennes du Proche-Orient on the El-Amarna letters. Its author/translator is William Moran of Harvard University, recognized as a (probably THE) leading Amarna scholar, who has been putting together this polished volume over the past 30-odd years.
His translations are based upon extensive museum collations of the tablets, together with restorations that can be made only by someone as familiar with the "idioms" of international diplomacy (in the 14th century B.C.E.) as Professor Moran is. So, the MRT edition we *REALLY* want is Moran's, not the 1915 edition of Knudtzon. But you won't find it published on diskette with this Du Cerf volume (which does not even have transliterated original text). According to the publishers, it would not be cost-effective to publish the original text on paper, and as for a MRT edition of the text... well... Shortsightedness like this has to stop, but who is responsible for "stopping it"? A single individual (in this case, Moran) probably can do very little to force publishers to change their ways. But how about collective bargaining: we publish such scholarly materials ONLY with publishers that are sensitive about the future of scholarship, and about the precious treasure we have in ancient literature. This means placing premium value on original texts in machine-readable form -- only thus are they truly useful and accessible to modern scholarship -- and making these texts available in the public domain. I suspect that this problem is more acute for orientalists than for classicists and other humanities-literary subspecialty areas; we have special orthographies and printing problems which are expensive and demanding. But my suggestion is that we must encourage and demand higher standards of cooperation from publishers so that valuable (priceless!) human efforts are not lost on a Macintosh diskette after it passes from the departmental secretary or word-processing pool to the publisher. Does anyone else share this point of view? Am I too idealistic? While I am in a lament mode, I might as well refer to another problem that needs attention: the problem of coding standards.
There are several efforts underway internationally to "encode" ancient Near Eastern texts in transliteration (Toronto RIM; UCLA; Rome; Helsinki; etc.), but to my knowledge there are no agreed-upon standards. In the case of purely alphabetic scripts, the problem is frustrating but not fatal, since we can use consistent-changes programs to standardize the data for archiving. In the case of syllabic (logographic, hieroglyphic) scripts -- Akkadian, Sumerian, Hittite, Elamite, Egyptian -- the plethora of transliteration schemes is more problematic. No one sends this kind of data with an SGML prologue, so the best we can hope is that the encoding is consistent and that we can unravel the format codes. If anyone knows about efforts to introduce standards for transliteration and format-coding, would you kindly let me know? I understand that the committee for encoding standards (Nancy Ide; Michael Sperberg-McQueen) recently funded by NEH will not initially address the needs of orientalists. If there are other orientalists "out there" on the HUMANIST reader list -- should we organize ourselves? Apologies to all if this is arcane, recondite, or just downright boring. I'd like to know if anyone out there shares some of my frustrations, or sees solutions. Professor Robin C. Cover 3909 Swiss Avenue Dallas, TX 75204 (214) 296-1783 From: MCCARTY@UTOREPAS Subject: E-mail to Australia Date: 3 December 1987, 09:58:21 EST X-Humanist: Vol. 1 Num. 493 (493) Can anyone tell me if e-mail to the Antipodes (i.e. Australia) carries a charge? And if so, who pays: the sender, if outside Australia, or the recipient? Thanks in advance. From: MCCARTY@UTOREPAS Subject: The Thesaurus Linguae Graecae (TLG) on CD-ROM Date: 3 December 1987, 13:36:52 EST X-Humanist: Vol. 1 Num. 494 (494) The following has been contributed by Theodore Brunner, Director of the TLG Project, from a memo circulated to all TLG customers.
Anyone wishing to arrange for a license agreement should contact Professor Brunner, Thesaurus Linguae Graecae, University of California at Irvine, Irvine, CA 92717 U.S.A., telephone: (714) 856-7031, e-mail: TLG@UCIVMSA.bitnet. The license per CD-ROM, including a copy of the printed TLG Canon, is not expensive: the initial registration fee (plus first year fee) is $200 to institutions and $120 to individuals; the annual fee is $100 to institutions, $60 to individuals; an optional one-time payment for 5 years is $500 to institutions, $300 to individuals. (All prices are in US $.) _________________________________________________________________ TLG CD-ROM CUSTOMERS: We have been receiving numerous questions related to TLG CD ROM dissemination plans and policies; here is miscellaneous information on these subjects: 1. To date, the TLG has produced two CD ROMs, disk "A" and disk "B". Disk "A" contains approximately 27 million words of TLG text, as well as an electronic version of the TLG Canon. Disk "B" contains the same 27 million words of text, the TLG Canon, and an Index to the TLG texts on the CD ROM. Disk "A" also contains miscellaneous non-TLG materials, including some Latin, Coptic, and Hebrew texts, some epigraphical materials, as well as portions of the Duke Data Bank of Documentary Papyri. The non-TLG materials were included on TLG CD ROM "A" for one reason only: this disk was produced (as was CD ROM "B") primarily for experimental purposes, i.e., to aid in the development of software resources designed to enhance utilization of the (relatively new) CD ROM data storage medium. Neither disk "A" nor disk "B" reflects the High Sierra format standard (established after both of these CD ROMs were produced). 2. In short order, the TLG will release a new CD ROM, disk "C". This disk will contain approximately 41.5 million words of TLG text, an index to this text material, and the TLG Canon.
Individuals and institutions already holding license to "A" or "B" disks are entitled to receive "C" disks free of charge. This (as provided for in the license agreement governing use of TLG CD ROMs) will be on an exchange basis, i.e., disks previously issued by the TLG must be returned to the TLG prior to the issuance of a "C" disk. TLG LICENSEES SHOULD NOT RETURN THEIR "A" OR "B" DISKS UNTIL DISK "C" IS OFFICIALLY RELEASED. [Notice will appear on HUMANIST when disk "C" is ready.]

3. Questions have been raised about the absence of non-TLG material on the "C" disk. The TLG controls and licenses only its own materials, and license agreements previously executed pertain to the TLG materials on the disks only. Current TLG CD ROM licensees may, of course, continue to use their ("A" or "B") disks throughout the course of their license period; they will not be issued "C" disks, however, until they have returned their earlier CD ROM versions to the TLG. It is the case, however, that the Packard Humanities Institute (PHI) will be releasing its own CD ROM in the very near future; this disk will contain Latin, Coptic, Hebrew, and epigraphical materials, as well as a significant portion of the Duke papyrological data bank. It can be assumed that individuals and institutions desirous of these materials can make arrangements with PHI to gain access to them on a PHI disk. Further information on this subject can be obtained by contacting John Gleason, Packard Humanities Institute, P.O. Box 1330, Los Altos, CA 94022 U.S.A.

4. We have received numerous requests for technical documentation related to the forthcoming TLG CD ROM "C". The internal organization of the text files and of the I.D. table files will be identical to the organization of these files on TLG CD ROM "A". The file directory and author table will be reorganized to reflect the High Sierra standard. More detailed documentation is currently being prepared and should be ready for distribution in the near future.
Theodore F. Brunner, Director
November 8, 1987
_________________________________________________________________

From: MCCARTY@UTOREPAS
Subject: Enlightening the publishers, encoding Semitic (65 lines)
Date: 3 December 1987, 15:00:56 EST
X-Humanist: Vol. 1 Num. 495 (495)

Three cheers for Robin Cover's idea of group pressure to bring publishers to their senses regarding the preservation and distribution of machine-readable materials. Some publishers, to their credit, are already alert to the issues involved--or so say people who should know. But there are still an awful lot of them out there who behave the way Renaissance printers did with Carolingian manuscripts: mark it up, print it, and throw it out. Anything we can do to preserve the fruits of scholarly labors, we should do. It would also be useful to have a better developed system of text archives in North America -- either a network of regional or discipline-based archives, or one central archive that would take anything (the way Oxford does). The latter would be appealing because fewer texts might fall through cracks in the system, but specialized collections would remain important because they can do more intensive work on their holdings, the way Penn's CCAT does. A central North American text archive, acting in concert with the European archives, might also be in a position to help exert the kind of group pressure on publishers that Robin Cover suggests. Making the publisher's texts usable, by documenting as far as possible the usual systems of typesetting codes found in the publishing industry, is one goal of the ACH/ACL/ALLC initiative for text-encoding guidelines. (That goal is not wholly explicit in the final document I posted here a couple of days ago, but it was discussed at length during the planning meeting at Vassar and clearly is important to a lot of people.)
The consensus of the planners at Vassar was also that transliteration practices, and conventions for the encoding of character sets, should at least be documented as far as possible in the guidelines. Many participants were leery of making specific recommendations for the representation of specific characters, since local hardware features and requirements can vary so widely. Nevertheless, the experts present agreed that it would not be insuperably difficult to provide adequate documentation for the encoding of scripts which, like Semitic scripts, provide special challenges to most commonly available hardware. That means that the guidelines can and should contain full information on practices for encoding texts of interest to Orientalists--if the Orientalists will document their existing practices. If they can also agree on common recommendations for future work, that consensus can and should also be documented. The same goes for any and all other specialized interests. These guidelines will belong to the humanities computing community as a whole, and I hope the community will work together to make them as complete and useful as we can. Again, I reiterate the invitation: anyone interested in helping formulate the guidelines, either in general or with respect to some specific question (e.g. the encoding of Akkadian, or the encoding of numismatic materials, or the encoding of manuscript variants, or the prosodic transcription of oral texts, or the encoding of hypertext materials, or ...), should please contact Nancy Ide or myself. This invitation will be periodically renewed, as details for the formal arrangement of the drafting committees are set, but if you let us know now, we will have a better idea of how much interest there is, and what kinds of special problems are on people's minds. Michael Sperberg-McQueen, University of Illinois at Chicago P.S. 
The opinions here expressed are as always mine, not necessarily those of my employer, or the ACH, or the guidelines steering committee.

From: MCCARTY@UTOREPAS
Subject: E-mail to Australia (24 lines)
Date: 3 December 1987, 19:16:39 EST
X-Humanist: Vol. 1 Num. 496 (496)

E-mail involving ACSNet (Australia, through the international gateways, or even domestically between sites, I think) has a charge for the Australian end (whether sender or receiver). It was something like 10c/message plus 2c/line about a year ago. Apparently many institutions do not (yet?) pass on the charge to individual users. The official position could presumably be got from postmaster@munnari.oz.

David Nash
Center for Cognitive Science 20B-225
MIT, Cambridge MA 02139

From: MCCARTY@UTOREPAS
Subject: Archives (75 lines)
Date: 3 December 1987, 19:20:14 EST
X-Humanist: Vol. 1 Num. 497 (497)

In response to Prof. Cover's impassioned plea, I can only say that it IS possible, with some concerted effort, to force publishers to change their ways. The American Sociological Association has recently, as of Sept 1, 1987, in fact, begun to require of all periodicals published under their aegis that any computer-readable files (both data and software) BE CITED in the bibliography. There is an effort under way now to convince other academic publishers to follow suit. There are a number of reasons for citation of computer files:

(a) computer-readable files are intellectual property in their own right, quite as much as publications in other media, e.g. on paper, film, audio-tape, canvas, etc. (This has been recognized by that most conservative institution, the American Library Association, since the late 1970s.) The authors (properly called 'principal investigators'), producers, publishers, editors, and translators of computer-readable files deserve for their labours the same acknowledgement and recognition as do the authors, composers, etc. of intellectual property in more traditional media.
(b) the citation of source materials in the bibliographies of publications acknowledges the source materials used in the research process, thus enabling one's peers to follow the same line of reasoning, using the same source materials, to (hopefully) come to the same conclusions, thus corroborating our initial reasoning -- i.e. the peer review process.

(c) once computer-readable files are cited in bibliographies, they will get picked up in the citation indices, and thus eventually come to the attention of tenure committees. Thus individual 'authors' of these things will in time receive their due academic brownie-points.

But citing computer-readable files is not enough. There must also be a mechanism for preserving them for posterity and making them available to others for secondary analysis. Researchers are reluctant to make 'their' files available to others for fear that they will not receive their due acknowledgement (the polite reason). Mandatory citation of computer files in publications should help reduce this fear. Many researchers are not aware that there in fact exists a network of local data archives/data libraries in academic institutions throughout the United States and Canada, as well as a well developed system of national data archives in Europe, most recently in Hungary, Israel, and the USSR. Granted, these data archives primarily concentrate on 'social science' data files, primarily because that is the field from which the initial impetus for their creation came. However, this orientation is not cast in stone. And most of these data archives/libraries could, with appropriate overtures, be convinced that there are other user communities that also need their services. The social scientists just happen to have been among the earliest and most vociferous.
The point is that there is already an institutional framework, staffed by knowledgeable and experienced people who, with very little effort, could provide the network of text archives that humanists seem to want -- all they want is a little prodding.
------------------------------------------------------------
Laine Ruus, University of British Columbia Data Library
userDLDB@ubcmtsg.bitnet

From: MCCARTY@UTOREPAS
Subject: E-mail to Australia
Date: 3 December 1987, 19:22:30 EST
X-Humanist: Vol. 1 Num. 498 (498)

There is a relay at ULCC (UK) called EAN which links with ACSnet -- the fact that you do not register before submitting suggests it is 'free'. You may be able to learn further from mailing an enquiry to laision@uk.ac.ean-relay. EAN can also link you to other European sites as well -- maybe to addresses 'missing' from EARN.

stephen@uk.ac.oxford.vax

From: MCCARTY@UTOREPAS
Subject: CD-ROMs
Date: 4 December 1987, 13:02:56 EST
X-Humanist: Vol. 1 Num. 499 (499)

Contributed by Bob Kraft

Just to supplement Ted Brunner's information on the TLG CD-ROM, regarding the non-TLG materials such as were included on TLG disk "A" -- the present plan is for the Packard Humanities Institute (PHI) jointly with the Center for Computer Analysis of Texts (CCAT) at Penn to produce an "experimental" CD-ROM at the heart of which will be various Latin texts (being prepared at PHI), Greek Papyri (Duke) and Inscriptions (Cornell, Princeton Institute for Advanced Study), and a variety of biblical and related materials in various languages (Hebrew, Greek, Latin, Coptic, Syriac, Aramaic, Armenian) as well as sample files from various other sources and projects (e.g. Dante Commentary project, Milton Latin project, Kierkegaard in Danish, Arabic poetry, some word lists, etc.). I call this disk a "Sampler," and it is scheduled to be ready for distribution by the end of this month (December). Again, the aim is to give scholars, software developers, etc., a body of consistently formatted (more or less!)
materials on which to work in various directions and at little cost. There will be a notice on HUMANIST when the PHI/CCAT joint CD-ROM "Sampler" is ready for distribution!

Bob Kraft for CCAT

From: MCCARTY@UTOREPAS
Subject: Enlightening the publishers, encoding Semitic (65 lines)
Date: 4 December 1987, 13:10:19 EST
X-Humanist: Vol. 1 Num. 500 (500)

Contributed by "James H. Coombs"

Michael Sperberg-McQueen has suggested that we need a text archive in North America. Is that a generally felt need? What could a text archive here offer that Oxford does not offer? Certainly, shipping would be faster and cheaper, but is there something more substantial? Or are there real hardships now? Or could our needs be addressed by some adjustments in the services that Oxford provides---such that we might better discuss our needs with Oxford instead of duplicating their efforts? If we DO need an archive in North America, who should institute and manage it? What is the proper sort of organization? And what's in it for them? Will it be a costly burden? Or are we willing to pay for materials in order to support such a facility? Would it be commercial or non-profit?

--Jim
Dr. James H. Coombs
Software Engineer, Research
Institute for Research in Information and Scholarship (IRIS)
Brown University
jazbo@brownvm.bitnet

From: MCCARTY@UTOREPAS
Subject: ACH text markup
Date: 4 December 1987, 13:17:33 EST
X-Humanist: Vol. 1 Num. 501 (501)

Contributed by James H. Coombs

Some thoughts on guidelines for text markup, in response to Michael Sperberg-McQueen's note.

1) Markup must be descriptive.

2) Delimiters should be '<' and '>' in conformance with the default of the new SGML standard.

3) Markup/tag attributes should be allowed, and attribute names should be descriptive.

4) There should be no attempt at establishing a "closed" tag set. The current AAP SGML application allows for definition of new tags, but it does not support such definition in a practical way.
The consequence is that people will use "list items," for example, when they should be using "line of poetry." Within these guidelines, it can only be healthy to provide a list of tags that people should choose from when tagging certain entities. The point of this is that we cannot predict what textual elements will be of significance for what researchers. We have to allow for the discovery of textual elements that no one has categorized previously. At the same time, there is no point in having 30 different tags for "line of poetry." The guidelines should make clear that DESCRIPTION is paramount and that the use of particular tags is secondary.

5) In so far as possible, there should be requirements for minimal tagging. It would be a mistake to fail to tag "verse paragraphs" and "book" in *Paradise Lost*, for example, and any version that does not provide such tags must be considered inadequate and, ultimately, rejected.

6) There can be no limit placed on "maximal" tagging. If a researcher needs every word tagged, we must allow for this. It is a trivial matter to ignore or strip out such tagging. Researchers with such needs cannot, at least for now, reasonably expect that others will provide such exhaustive tagging. Putting (5) and (6) together, we have a principle of base-level tagging with as much additional information as the original researchers care to provide. Where there are common needs that may not be shared by the original researcher, it may still be appropriate to require that those common needs be met. For example, the original researcher may not need to know about verse paragraphs, but we should still require that they be appropriately tagged.

7) Referential markup should be used in place of "special" characters, such as accented characters. If a particular configuration supports an acute accent, for example, in hardware, the researcher may take advantage of those facilities.
When checking the document into an archive or passing it on to others, however, the acute accent must be translated to "&aacute;" (or whatever the SGML standard specifies---don't have my copy at hand). This is off the top of my head, but enough for now. I have other ideas on this stuff, but they can come out if discussion ensues. I am interested in the project, but I don't have the time or money to travel to meetings right now. I also get the feeling from the preliminary document that you posted that people are re-inventing SGML. We already have, in SGML, a metalanguage for generating descriptive markup languages. I don't think that we need Document Type Definitions right now, but even they might turn out to be useful once SGML is established and SGML-support tools become widespread. I haven't provided any defense of descriptive markup or SGML here. We discuss the advantages of these systems in "Markup Systems and the Future of Scholarly Text Processing," *Communications of the ACM*, November 1987---written with Allen H. Renear and Steven J. DeRose. Interested in any and all comments!

--Jim
Dr. James H. Coombs
Software Engineer, Research
Institute for Research in Information and Scholarship (IRIS)
Brown University
jazbo@brownvm.bitnet

From: MCCARTY@UTOREPAS
Subject: "International Educational Computing"
Date: 4 December 1987, 16:03:15 EST
X-Humanist: Vol. 1 Num. 502 (502)

POSSIBLE COURSE 3551 - SUMMER, 1988 - R. G. RAGSDALE

The European Conference on Computers in Education is being held in Lausanne, Switzerland, July 24-29, 1988. When the World Conference on Computers in Education was held there in 1981, a substantial number of OISE students attended, some as a portion of a course offered by Bob McLean. I propose to offer a course, 3551 - International Educational Computing: An Interaction of Values and Technology, which would take place in Switzerland, around the dates of the conference.
Permission to offer the course formally depends on several factors, including the number of students likely to attend. Plans are incomplete at this time, but a projection of the plans indicates the following format, assuming that all necessary arrangements for housing, classroom, etc., can be made. The course participants will meet together July 18-22 to study previous research and theory on values and technology, methods for evaluating the effects of technology, and case studies in business and education of technology-value conflicts. The daily schedule will have more formal sessions (lecture, seminar) in the mornings and less formal sessions (group discussions) in the early evening, with afternoons free for individual study or other activities (scheduled class time for each day will be four hours, probably two and a half in the morning, one and a half in the evening). During this week, participants will select and prepare for the issue(s) they plan to study during the conference. At the conference, each participant will focus on one or more topics, such as a particular age range, subject matter area, or type of computer application. They will collect material from the formal sessions, but also from informal interviews with others attending the conference, both presenters and those who are only attending. August 1 is a Swiss national holiday (which all course participants should enjoy), so the remaining sessions will take place August 2-5, following the same schedule as the first week. During this time the results of the previous week's activities will be presented and group feedback will be obtained. Final papers will be due in mid-September. Preliminary arrangements for accommodation and classroom space have been made at Aiglon College, an English boarding school in Chesieres, Switzerland, about one hour from Lausanne by train and bus.
Room rates include the "taxe de sejour" which gives access to the recreational facilities of Villars, such as the swimming pool, ice skating, etc.

Estimated Cost

Based on 1987 prices, the airfare to Geneva is $927, room and board is 860SF (Swiss Francs) for 20 days, and the conference registration is 280SF (higher after January 31). At current exchange rates, these items total almost $2,000. A better estimate would include ground transportation, other likely expenses (chocolate, etc.), and possible price increases. It seems extremely unlikely that necessary expenses would exceed $2,500. Anyone who is interested in participating in this course should indicate this to me in writing (including, if possible, your "estimate of certainty").

From: MCCARTY@UTOREPAS
Subject: TEXT MARK-UP (73 lines)
Date: 6 December 1987, 11:02:37 EST
X-Humanist: Vol. 1 Num. 503 (503)

Contributed by Nancy Ide

I recently responded to Jim Coombs' remarks concerning the principles developed at Poughkeepsie as a basis for the development of a standard for encoding machine-readable texts. He suggested that we make our discussion "public," in the spirit of recent remarks on HUMANIST, and so I will briefly describe what has been said and put forth my reply. I indicated to Jim that much of what he says is very much in the spirit of the discussions at Poughkeepsie among the 31 participants. This should be made clearer in the minutes of the meeting, which Lou Burnard has drawn up and which will be available from him or me in a few days. Especially, we intend to make the standard extensible to accommodate the unforeseen needs of individual projects. I also indicated that the standard will *recommend* a minimum set of tags for texts, which is stated in the principles under number 5, I believe. We had a lively discussion on this topic (actually, all of the discussions were very lively!) at the Poughkeepsie meeting, with some disagreement about specifying a minimum.
This is why *recommend* is in emphasis. The feeling at the meeting was that we can *require* nothing, but we can do our best to "guide the perplexed" and provide some idea of what it makes sense to encode regardless of how the text is originally intended to be used. I should point out here that among participants in the Poughkeepsie meeting, there were two clear perspectives on the whole issue of encoding texts: one saw most encoding as a future endeavor, and the other was focused on texts already encoded. One's opinion concerning whether most texts have been encoded already or have yet to be encoded obviously affects opinion on the importance of specifying a minimum set of tags for encoded texts. Jim responded to me suggesting that we could refuse to accept texts that had been encoded without the "minimum" tags we might expect. He made all of the excellent arguments for insisting that certain tags be included *anytime* a text is encoded. But the problem here is that I am not sure who the "we" who is to do this refusing actually is. If someone does not provide the minimum tags but has encoded the collected works of some obscure author I am interested in, will I refuse to accept the text? If I am an archive, should I refuse to take the text--that is, is it better to have an inadequately tagged text or none at all? Admittedly, in some cases it may be better to start from scratch and re-enter a text, if the existing version is pitifully done. But most of the time it will be easier to go in and mark whatever I need to mark in the existing version than to re-enter the text entirely. Similarly, we cannot expect archives to ensure that their texts contain a minimum tag set. This was a point of considerable concern to the keepers of archives present at the meeting, and led to the final agreement that only the tags that are present (whatever they may be) in a text that is distributed by an archive will conform to the standard.
This requirement in itself will necessitate the writing of programs to perform translation to the new scheme, another topic addressed at some length and for which there seems to be support. However, note that the principles indicate that texts now contained in the archive need not be converted retrospectively. Naturally, although this is not required, we hope that it will occur in many cases. So, the guidelines that will be developed will recommend a minimum set of tags---especially, for those things that are easily encoded when the source text is at hand and which are also obviously of use in most types of analysis. However, it does not appear to me that it is reasonable to require such tagging. We can only hope that the recommendation is enough to inspire most researchers to provide the minimum set of tags when they encode new texts.

Nancy M. Ide
ide@vassar.bitnet

From: MCCARTY@UTOREPAS
Subject: more on mark-up (34 lines)
Date: 6 December 1987, 11:10:43 EST
X-Humanist: Vol. 1 Num. 504 (504)

Contributed by Nancy Ide

In my earlier message I neglected to summarize my reply to Jim Coombs concerning SGML. We have every expectation that the standard we devise will be an application of SGML, but until we know fully our needs it is not prudent to commit ourselves to SGML. We know, for instance, that while it is possible to define multiple parallel hierarchies in SGML it is not entirely straightforward, and such parallel hierarchies are likely to be used extensively in encoding machine-readable texts intended for literary, linguistic, and historical analysis. We hope that in any event the standard will be compatible with SGML, which, as Jim points out, is bound to become widely accepted and used. Also, Jim had some concern about our defining a meta-language, since SGML (the abstract syntax) is in fact a meta-language for describing a mark-up scheme. The concrete syntax of SGML is one mark-up scheme described by this abstract syntax.
However, our goal is to provide a meta-language in which *all* existing mark-up schemes can be described (which may prove to be impossible), and it seems to us that the abstract syntax of SGML is inadequate for this task. The abstract syntax of SGML was not intended for this purpose, it should be noted.

Nancy M. Ide
ide@vassar.bitnet

From: MCCARTY@UTOREPAS
Subject: Use of electronic communications (29 lines)
Date: 6 December 1987, 11:15:37 EST
X-Humanist: Vol. 1 Num. 505 (505)

Contributed by C. S. Hunter

Willard notes the high percentage of "silent participants" on HUMANIST. My experience with computer conferencing systems makes his note not at all surprising. At the University of Guelph we have had our CoSy conferencing system available free of charge to all faculty for some years now. Only about 40% of the faculty actually took us up on the offer of a free account on the system. Of that 40%, only 25% (or less) actively use the system more than once a week. The ratio of active to passive participants on the system is something like 1:9. The same is roughly true on the student system, where only about 10% of the registered users are actual active participants. We are now studying the phenomenon to determine what factors contribute to the individual use or non-use of computer-mediated communication among academics.

C. Stuart Hunter, University of Guelph
cshunter@uoguelph

From: MCCARTY@UTOREPAS
Subject: E-mail to Australia
Date: 6 December 1987, 11:41:45 EST
X-Humanist: Vol. 1 Num. 506 (506)

Contributed by Emmanuel Tov

IN REPLY TO THE QUESTION OF BRENDAN O'FLAHERTY (3 DEC) I CAN TELL YOU THAT MAIL FROM SYDNEY (MACQUARIE UNIV.) TO ISRAEL AND EUROPE AND THE U.S. IS FREE AS WELL AS REVERSE MAIL. EMANUEL TOV

From: MCCARTY@UTOREPAS
Subject: Text encoding
Date: 6 December 1987, 16:58:56 EST
X-Humanist: Vol. 1 Num. 507 (507)

Contributed by "James H. Coombs"

[In reply to Nancy Ide's points about SGML and related matters. The inset paragraphs quote from her messages.
-- ed.]

    We have every expectation that the standard we devise will be an application of SGML, but until we know fully our needs it is not prudent to commit ourselves to SGML.

A minor philosophical point, I guess: I don't think that we CAN know our needs fully. We need standards that accommodate needs that cannot be predicted today. The practical consequence of this observation, which I'm sure Nancy would agree with, is that one should seek a "productive" system instead of a system that satisfies everything on a list, and one should not spend a lot of time developing the list.

    We know, for instance, that while it is possible to define multiple parallel hierarchies in SGML it is not entirely straightforward, and such parallel hierarchies are likely to be used extensively in encoding machine-readable texts intended for literary, linguistic, and historical analysis.

What are "multiple parallel hierarchies"? I can guess, but I want to be sure that I understand the problem. In most documents, we have, for example, pragmatic and syntactic hierarchies. One has no difficulty marking up documents for both at the same time (although one does not normally mark up the latter descriptively). Pragmatically, we have things like

    [CHAPTER [SECTION [PARAGRAPH] [PARAGRAPH]]]

Syntactically, we might have

    [S [NP] [VP [NP]]]

So far as I know, there are no difficulties in marking up both types of hierarchies. One could argue that we really have a single hierarchy here, but, conceptually at least, we have two different domains: pragmatics and syntax. Well, this distinction is bound to be controversial, to say the least! This is probably the wrong list for a discussion about syntax vs. pragmatics, etc. I can try other examples, but I'm still guessing. And I'm still wondering what the difficulty is in encoding them under SGML.
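For concreteness, the two bracketings above might be rendered in SGML-style markup along the following lines. This is a sketch only: the element names are invented for illustration, a real SGML document would also need a DTD, and truly concurrent markup of a single text stream would call on SGML's optional CONCUR feature rather than two separate encodings.

```sgml
<!-- Pragmatic (structural) hierarchy -->
<chapter>
  <section>
    <paragraph>...</paragraph>
    <paragraph>...</paragraph>
  </section>
</chapter>

<!-- Syntactic hierarchy over the same stretch of text -->
<s>
  <np>...</np>
  <vp><np>...</np></vp>
</s>
```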
    However, our goal is to provide a meta-language in which *all* existing mark-up schemes can be described (which may prove to be impossible), and it seems to us that the abstract syntax of SGML is inadequate for this task.

What is the practical value of a metalanguage that generates all markup languages? I would think that it would be so abstract as to be of no value. I suspect that this is part of the goal of salvaging work that has been inadequately coded. I believe that we will be better off if we worry less about the past and plan more for the future. I suppose that it's true that publishers have typesetting tapes in their basements, and that we could use those tapes. I think that we have to accept that those tapes are of little value until someone converts the coding to descriptive markup. I have the typesetting tape for the American Heritage Dictionary (sorry, can't distribute it); no one wasted time trying to figure out how to use that tape as it is now. I know of several projects that are based on that tape, and all required conversions. Ideally, the tape would have been converted once and for all (and it apparently has been now). Whether it's a dictionary or a literary text, we can expect that inadequate coding will cause considerable work for anyone attempting to use the database. A metalanguage that includes procedural markup as well as descriptive markup will not help in such a case, because one still has to map procedural markup onto descriptive markup in order to be able to work with meaningful entities (definition, paragraph, etc.). Since procedural markup tends to be performed somewhat arbitrarily and does not normally provide a one-to-one relationship between entity and markup, there is no metalanguage that will help a researcher perform the necessary conversions. What we really need is a sensible and dynamic standard. I don't think that anyone would argue that that standard should be anything other than descriptively based.
Since we are going to have to convert texts to descriptive markup in order to use them anyway, why not just develop the standard and convert as necessary? Trying to save the past is just going to retard development. I haven't mentioned SGML so far. Is there a problem with SGML? I have heard complaints, and we addressed them in our article. No one expects individual scholars to master the full syntax and to generate Document Type Definitions (DTDs). What we want is accurate and consistent descriptive markup. In our experience at Brown, people have no difficulties mastering the principles of descriptive markup. We can leave the development of DTDs to experts.

--Jim
Dr. James H. Coombs
Software Engineer, Research
Institute for Research in Information and Scholarship (IRIS)
Brown University
jazbo@brownvm.bitnet

From: MCCARTY@UTOREPAS
Subject: Markup: on requirements
Date: 6 December 1987, 17:12:16 EST
X-Humanist: Vol. 1 Num. 508 (508)

Contributed by "James H. Coombs"

My thanks to Nancy Ide for moving the discussion out to HUMANIST. Things have fallen a little out of sequence, but the ideas are more important than the sequence anyway. I have also heard from Michael Sperberg-McQueen, and I hope that he will post his very informative note as well. If this discussion becomes aggravating for the majority of HUMANISTs and there is enough interest, then perhaps we can form a separate mailing list. So, here is my (unedited) reply to the issue of requirements. While we may not be able to require that people conform to a standard fully, we can refuse to accept inadequate texts. There is an atmosphere of poverty now such that we are anxious to have whatever we can get our hands on. At the extreme, even now most of us would reject a text that is all in upper case and contains errors---it turns out to be easier to do it oneself.
If we consider what things will be like or could be like in a few years, though, I think it's appropriate to say that there are certain minimal standards (or that one must comply with a standard). First, we don't accept just anything for other scholarly documents. Second, we will have more alternatives for sources. Third, we want high-quality sources so that people won't have to keep reworking or entirely redoing them. If I can't count on a text from a particular archive to meet my needs, what is my motivation for bothering with that archive; and what is the motivation for the archive's existence? I certainly would not want to see it supported by public funds. I don't think that this places an inordinate burden on individual researchers. For the most part, I'm sure that it's considerably less burdensome than ensuring that one's bibliography, for example, accords with the MLA style sheet (and what bibliography unambiguously does?). --Jim I should elaborate briefly. First, I have/had a tape of Milton's *Paradise Lost*; it was so bad that I would prefer to start from scratch. Second, I think that we have a right to expect archives to set and maintain certain standards. Perhaps they don't want to accept that responsibility right now. If not, then I think that we should be planning to develop and support a good archive. Does such an archive need several programmers for text validation and maintenance? Then they should have the support to hire them. Let's centralize the expense as much as possible. Currently, we have no idea who is entering what and how they are doing it. Even if we could get people to go to the archive, the current approach means that many people are going to have to massage texts into useful formats, and every project will have to ensure that the text is accurate. It's as if we all had to revise our copies of *Paradise Lost* and then go proofread them before we could use them. 
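One kind of mechanical text validation such an archive's programmers might run can be sketched as follows (a hypothetical check with an illustrative angle-bracket tag syntax, not any archive's actual procedure):

```python
# A hypothetical sketch of one mechanical check an archive might perform
# on a contributed text: confirm that descriptive tags open and close in
# properly nested order. The <tag>...</tag> syntax is illustrative only.
import re

TAG = re.compile(r"</?([a-zA-Z][\w.-]*)>")

def check_nesting(text):
    """Return a list of error messages; an empty list means the tags nest cleanly."""
    errors, stack = [], []
    for m in TAG.finditer(text):
        name, closing = m.group(1), m.group(0).startswith("</")
        if not closing:
            stack.append(name)
        elif not stack or stack[-1] != name:
            errors.append("unexpected </%s> at offset %d" % (name, m.start()))
        else:
            stack.pop()
    errors.extend("unclosed <%s>" % name for name in stack)
    return errors

sample = "<poem><line>Of Man's first disobedience</line></poem>"
print(check_nesting(sample))  # -> []
```

Checks of this sort are cheap to centralize at an archive, whereas leaving them to each recipient repeats the work for every project.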
Finally, I have texts that I have entered, marked up, and proofread, but I'm reluctant to check them into an archive that is inconsistent at best. Whatever professional credit I might get for the contribution---well, let's say that the effort is somewhat discredited by the state of the archive. It's like publishing a book with XYZ press instead of ABC. I would be happy to send it off to someone who provides full services and validates text, and I would be happy to make any necessary corrections. To reverse the roles, I am reluctant to acquire a text from an archive that makes no guarantees. After all, in the process of keyboarding a text, I get to read it, and the time goes quickly. It's the proofreading that is burdensome, and I still have to proofread. (Or do I get to say that I used X's text, and X is going to accept the responsibility for errors?) --Jim Dr. James H. Coombs Software Engineer, Research Institute for Research in Information and Scholarship (IRIS) Brown University jazbo@brownvm.bitnet From: MCCARTY@UTOREPAS Subject: ACL Applied Natural Language Conference (833 lines) Date: 6 December 1987, 17:24:17 EST X-Humanist: Vol. 1 Num. 509 (509) [The following is republished from IRLIST, the Information Retrieval List. -- ed.] -------------------------------------------------------------------------- The printed version of the following program and registration information will be mailed to ACL members early in December. Others are encouraged to use the attached form or write for a booklet to the following address: Dr. D.E. Walker (ACL), 445 South Street - MRE 2A379, Morristown, NJ 07960, USA, or to walker@flash.bellcore.com, specifying "ACL Applied" on the subject line. 
ASSOCIATION FOR COMPUTATIONAL LINGUISTICS SECOND CONFERENCE ON APPLIED NATURAL LANGUAGE PROCESSING 9 - 12 February 1988 Austin Marriott at the Capitol, Austin, Texas, USA ADVANCE PROGRAM Features: Six introductory and advanced tutorials Three days of papers on the state-of-the-art Distinguished luncheon speakers A panel of industry leaders Exhibits and demonstrations REGISTRATION : 7:30am - 3:00pm, Tuesday, 9 February, Joe C. Thompson Conference Center, University of Texas at Austin, 26th and Red River. EXHIBITS : 10:00am - 6:00pm, Wednesday, 10 February GENERAL SESSIONS WEDNESDAY, FEBRUARY 10, 1988 SESSION 1: SYSTEMS SESSION 2: GENERATION SESSION 3: SYNTAX AND SEMANTICS SESSION 4: MORPHOLOGY AND THE LEXICON THURSDAY, FEBRUARY 11, 1988 SESSION 5: SYSTEMS SESSION 6: TEXT PROCESSING SESSION 7: MACHINE TRANSLATION FRIDAY, FEBRUARY 12, 1988 SESSION 8: SYSTEMS SESSION 9: MORPHOLOGY AND THE LEXICON SESSION 10: SYNTAX AND SEMANTICS REGISTRATION INFORMATION AND DIRECTIONS PREREGISTRATION MUST BE RECEIVED BY 25 JANUARY; after that date, please wait to register at the Conference itself. Complete the attached ``Application for Registration'' and send it with a check payable to Association for Computational Linguistics or ACL to Donald E. Walker (ACL), Bell Communications Research, 445 South Street MRE 2A379, Morristown, NJ 07960, USA; (201) 829-4312; walker@flash.bellcore.com; ucbvax!bellcore!walker. If a registration is cancelled before 25 January, the registration fee, less $15 for administrative costs, will be returned. Full conference registrants will also receive lunch on the 10th and 11th. Registration includes one copy of the Proceedings, available at the Conference. Copies of the Proceedings at $20 for members ($30 for nonmembers) may be ordered on the registration form or by mail prepaid from Walker. TUTORIALS : Attendance is limited. Preregistration is encouraged to ensure a place and the availability of syllabus materials. 
RECEPTIONS : The Microelectronics and Computer Technology Corporation (MCC) will host a reception for the conference at its site on Wednesday evening. To aid in planning we ask that you complete the RSVP on the registration form. In addition there will be receptions at the conference hotel on Tuesday evening and Thursday afternoon. EXHIBITS AND DEMONSTRATIONS : Facilities for exhibits and system demonstrations will be available. Persons wishing to arrange an exhibit or present a demonstration should contact Kent Wittenburg, MCC, 3500 W. Balcones Center Drive, Austin, TX 78759; (512)338-3626; wittenburg@mcc.com as soon as possible. HOTEL RESERVATIONS : Reservations at the Austin Marriott at the Capitol MUST be made using the Hotel Reservation Form included with this flyer. Reservations subject to guest room availability for reservations received after 25 January 1988. Please mail to: Austin Marriott at the Capitol AIR TRANSPORTATION : American Airlines offers conferees a special 35% off full coach fare, 30% off full Y fares for passengers originating in Canada, or 5% off any published roundtrip airfare applicable to and from Austin. Call toll free 1-800-433-1790 and give the conference's STAR number S81816. If you normally use the service of a travel agent, please have them make your reservations through this number. DIRECTIONS : There is one public exit from Robert Mueller Airport in Austin; at the traffic light, turn right (onto Manor Rd.) and drive to Airport Blvd. (approx. 1/4 - 1/2 mile). Turn right on Airport Blvd., and drive to highway I-35 (approx. 1-2 miles). Turn left (south) onto I-35, heading toward town. Get off at the 11th-12th St. (Capitol) exit, and drive an extra block on the access road, to 11th St. The Marriott is on the SW corner of that intersection (across 11th St., on the right). A parking garage is attached. The Marriott at the Capitol operates a free shuttle to and from the airport. Cab fare would be approx. $6. The Joe C. 
Thompson Conference Center parking lot is on the SW corner of Red River and 26th Street; the entrance is on Red River, and a guard will point out the center (adjacent, to the west). Directions to JCT from Marriott parking garage: Turn right (S) on I-35 frontage road, turn right (W) on 10th St., turn right (N) on Red River, and drive [almost] to 26th. APPLICATION FOR REGISTRATION Association for Computational Linguistics, Second Conference on Applied Natural Language Processing, 9 - 12 February 1988, Austin, Texas NAME _________________________________________________________________ Last First Middle AFFILIATION (Short form for badge ID) ___________________________________________________________ ADDRESS _______________________________________________________________ _______________________________________________________________________ _______________________________________________________________________ _______________________________________________________________________ TELEPHONE ____________________________________________________________ COMPUTER NET ADDRESS _________________________________________________ REGISTRATION INFORMATION (circle fee) ACL NON- FULL-TIME MEMBER* MEMBER* STUDENT* by 25 JANUARY $170 $205 $85 at the Conference $220 $255 $110 *Member and Non-Member fees include Wednesday and Thursday luncheons; Students can purchase luncheon tickets at a reduced rate. 
LUNCHEON TICKETS FOR STUDENTS: $10 each; Wednesday _____; Thursday ________; amount enclosed $ ______ LUNCHEON TICKETS FOR GUESTS: $15 each; Wednesday _____; Thursday ________; amount enclosed $ ______ SPECIAL MEALS: VEGETARIAN ______ KOSHER ______ EXTRA PROCEEDINGS: $20 members; $30 non-members; amount enclosed $ ______ TUTORIAL INFORMATION (circle fee and check at most two tutorials) FEE PER TUTORIAL ACL NON- FULL-TIME MEMBER MEMBER* STUDENT by 25 January $75 $110 $50 at the Conference $100 $135 $65 *Non-member tutorial fee includes ACL membership for 1988; do not pay non-member fee for BOTH registration and tutorials. Morning Tutorials: select ONE: INTRODUCTION: Allen LEXICONS: Boguraev & SPEECH: Roucos Levin Afternoon Tutorials: select ONE: INTERFACES: Hafner LOGIC: Moore TRANSLATION: Nirenburg TOTAL PAYMENT MUST BE INCLUDED : $ ____________ (Registration, Luncheons, Extra Proceedings, Tutorials) Make checks payable to ASSOCIATION FOR COMPUTATIONAL LINGUISTICS or ACL. Credit cards cannot be honored. RSVP for MCC Reception: Please check if you plan to attend the MCC reception on Wednesday evening, February 10th. _________ Send Application for Registration WITH PAYMENT before 25 January to the address below; AFTER 25 January, wait to register at Conference: Donald E. Walker (ACL) Bell Communications Research 445 South Street, MRE 2A379 Morristown, NJ 07960, USA (201)829-4312 walker@flash.bellcore.com ucbvax!bellcore!walker APPLICATION FOR HOTEL REGISTRATION Reservations subject to guest room availability for reservations received after 25 January 1988. In the event of unanticipated demand, rooms will be assigned on a first-come, first-served basis. Please send in your reservation request as early as possible. 
NAME _________________________________________________________________ Last First Middle AFFILIATION ___________________________________________________________ ADDRESS _______________________________________________________________ _______________________________________________________________________ _______________________________________________________________________ _______________________________________________________________________ TELEPHONE ____________________________________________________________ Room Requirements Single $64 ________ Double $74 ________ Date and time of arrival _________________________________________ Date and time of departure _______________________________________ Complete if arrival after 6PM __________________________________________________________________ Credit Card Name Number Expiration Date Send Application for Hotel Reservation to: Austin Marriott at the Capitol ASSOCIATION FOR COMPUTATIONAL LINGUISTICS SECOND CONFERENCE ON APPLIED NATURAL LANGUAGE PROCESSING TUTORIALS 9 February 1988 Joe C. Thompson Conference Center, University of Texas at Austin Morning 8:30 A.M. - 12:30 P.M. ABSTRACT This tutorial will cover the basic concepts underlying the construction of natural language processing systems. These include basic parsing techniques, semantic interpretation and the representation of sentence meaning, as well as knowledge representation and techniques for understanding natural language in context. In particular, the topics to be addressed in detail will include augmented transition networks (ATNs), augmented context-free grammars, the representation of lexical meaning, especially looking at case-grammar based representations, and the interpretation of pronouns and ellipsis. In addition, there will be an overview of knowledge representation, including semantic networks, frame-based systems, and logic, and the use of general world knowledge in language understanding, including scripts and plans. 
Given the large range of issues and techniques, an emphasis will be placed on those aspects relevant to existing practical natural language systems, such as interfaces to database systems. The remaining issues will be more quickly surveyed to give the attendee an idea of what techniques will become important in the next generation of natural language systems. The lecture notes will include an extensive bibliography of work in each area. INTENDED AUDIENCE This tutorial is aimed at people who are interested in learning the fundamental techniques and ideas relevant to natural language processing. It will be useful to managers who want an overview of the field, to programmers starting research and development in the natural language area, and to researchers in related disciplines such as linguistics who want a survey of the computational approaches to language. BIOGRAPHICAL SKETCH Dr. James Allen is an Associate Professor and Chairman of the Computer Science Department at the University of Rochester. He is editor of the journal Computational Linguistics and author of the book Natural Language Understanding, published in 1987. In 1984, he received a five-year Presidential Young Investigator award for his research in Artificial Intelligence. ABSTRACT The lexical information contained explicitly and implicitly in machine-readable dictionaries (MRDs) can support a wide range of activities in computational linguistics, both of theoretical interest and of practical importance. This tutorial falls into two parts. The first part will focus on some characteristics of raw lexical data in electronic sources, which make MRDs particularly relevant to natural language processing applications. The second part will discuss how theoretical linguistic research into the lexicon can enhance the contribution of MRDs to applied computational linguistics. 
The first half will discuss issues concerning the placement of rich lexical resources on-line; raise questions related to the suitability, and ultimately the utility, of MRDs for automatic natural language processing; outline a methodology aimed at extracting maximally usable subsets of the dictionary with minimal introduction of errors; and present ways in which specific use can be made of the lexical data for the construction of practical language processing systems with substantial coverage. The second half of the tutorial will review current theoretical linguistic research on the lexicon, emphasizing proposals concerning the nature of lexical representation and lexical organization. This overview will provide the context for an examination of how the results of this research can be brought to bear on the problem of extracting syntactic and semantic information encoded in dictionary entries, but not overtly signaled to the dictionary user. INTENDED AUDIENCE This tutorial presupposes some familiarity with work in both computational and theoretical linguistics. It is aimed at researchers in natural language processing and theoretical linguists who want to take advantage of the resources available in MRDs for both applied and theoretical purposes. The issues of providing substantial lexical coverage and system transportability are addressed, thus making this tutorial of particular relevance to those concerned with the automatic acquisition, on a large scale and in a flexible format, of phonological, syntactic, and semantic information for nlp systems. BIOGRAPHICAL SKETCHES Dr. Branimir Boguraev is an SERC (UK Science & Engineering Research Council) Advanced Research Fellow at the University of Cambridge. He has been with the Computer Laboratory since 1975, and completed a doctoral thesis in natural language processing there in 1979. 
Recently he has been involved in the development of computational tools for natural language processing, funded by grants awarded by the UK Alvey Programme in Information Technology. Dr. Beth Levin is an Assistant Professor in the Department of Linguistics, Northwestern University, Evanston, IL. She was a System Development Foundation Research Fellow at the MIT Center for Cognitive Science from 1983-1987 where she assumed major responsibility for directing the MIT Lexicon Project. She received her Ph.D. in Electrical Engineering and Computer Science from MIT in June 1983. ABSTRACT: This tutorial will present the issues in developing spoken language systems for natural speech communication between a person and a machine. In particular, the performance of complex tasks using large vocabularies and unrestricted sentence structures will be examined. The first Advanced Research Projects Agency (ARPA) Speech Understanding Research project during the seventies will be reviewed, and then the current state-of-the-art in continuous speech recognition and natural language processing will be described. Finally, the types of spoken language systems' capabilities expected to be developed during the next two to three years will be presented. The technical issues that will be covered include acoustic-phonetic modeling, syntax, semantics, plan recognition and discourse, and the issues for integrating these knowledge sources for speech understanding. In addition, computational requirements for real-time understanding, and performance evaluation methodology will be described. Some of the human factors of speech understanding in the context of performing interactive tasks using an integrated interface will also be discussed. INTENDED AUDIENCE: This tutorial is aimed at technical managers, product developers, and technical staff interested in learning about spoken language systems and their potential applications. 
No expertise in either speech or natural language will be assumed in introducing the technical details in the tutorial. BIOGRAPHICAL SKETCH: Dr. Salim Roucos has worked for seven years at BBN Laboratories in speech processing such as continuous speech recognition, speaker recognition, and speech compression. More recently, he has been the principal investigator on integrating speech recognition and natural language understanding for developing a spoken language system. His areas of interest are statistical pattern recognition and language modeling. Dr. Roucos is chairman of the Digital Signal Processing committee of the IEEE ASSP society. Afternoon 1:30 P.M. - 5:30 P.M. ABSTRACT This tutorial will describe the development of natural language processing from a research topic into a commercial technology. This will include a description of some key research projects of the 1970's and early 1980's which developed methods for building natural language query interfaces, initially restricted to just one database, and later made "transportable" to many different applications. The further development of this technology into commercial software products will be discussed and illustrated by a survey of several current products, including both micro-computer NL systems and those offered on higher-performance machines. The qualities a user should look for in a NL interface will be considered, both in terms of linguistic capabilities and general ease of use. Finally, some of the remaining "hard problems" that current technology has not yet solved in a satisfactory way will be discussed. INTENDED AUDIENCE This tutorial is aimed at people who are not well acquainted with natural language interfaces and who would like to learn about 1) the capabilities of current systems, and 2) the technology that underlies these capabilities. BIOGRAPHICAL SKETCH Dr. Carole D. Hafner is Associate Professor of Computer Science at Northeastern University. After receiving her Ph.D. 
in Computer and Communication Sciences from the University of Michigan, she spent several years as a Staff Scientist at General Motors Research Laboratories working on the development of a natural language interface to databases. ABSTRACT This tutorial will survey the use of logic to represent the meaning of utterances and the extra-linguistic knowledge needed to produce and interpret utterances in natural-language processing systems. Problems to be discussed in meaning representation include quantification, propositional attitudes, comparatives, mass terms and plurals, tense and aspect, and event sentences and adverbials. Logic-based methods (unification) for systematic specification of the correspondence between syntax and semantics in natural language processing systems will also be touched on. In the discussion of the representation of extra-linguistic knowledge, special attention will be devoted to the role played by knowledge of speakers' and hearers' mental states (particularly their knowledge and beliefs) in the generation and interpretation of utterances and logical formalisms for representing and reasoning about knowledge of those states. INTENDED AUDIENCE This tutorial is aimed at implementors of natural-language processing systems and others interested in logical approaches to the problems of meaning representation and knowledge representation in such systems. BIOGRAPHICAL SKETCH Dr. Robert C. Moore is a staff scientist in the Artificial Intelligence Center of SRI International. Since joining SRI in 1977, Dr. Moore has carried out research on natural-language processing, knowledge representation, automatic deduction, and nonmonotonic reasoning. In 1986-87 he was the first director of SRI's Computer Science Research Centre in Cambridge, England. Dr. Moore received his PhD from MIT in 1979. 
ABSTRACT The central problems faced by a Machine Translation (MT) research project are 1) the design and implementation of automatic natural language analyzers and generators that manipulate morphological, syntactic, semantic and pragmatic knowledge; and 2) the design, acquisition and maintenance of dictionaries and grammars. Since a short-term goal (or even medium-term goal) of building a system that performs fully automated machine translation of unconstrained text is not feasible, an MT project must carefully constrain its objectives. This tutorial will describe the knowledge and processing requirements for an MT system. It will present and analyze the set of design choices for MT projects including distinguishing features such as long-term/short-term, academic/commercial, fully/partially automated, direct/transfer/interlingua, pre-/post-/interactive editing. The knowledge acquisition needs of an MT system, with an emphasis on interactive knowledge acquisition tools that facilitate the task of compiling the various dictionaries for an MT system, will be discussed. In addition, expectations, possibilities and prospects for immediate application of machine translation technology will be considered. Finally, a brief survey of MT research and development work around the world will be presented. INTENDED AUDIENCE This tutorial is aimed at a general audience that could include both students looking for an application area and testbed for their ideas in natural language processing and people contemplating starting an MT or machine-aided translation project. BIOGRAPHICAL SKETCH Dr. Sergei Nirenburg, Research Scientist at the Center for Machine Translation at Carnegie-Mellon University, holds an M.Sc. in Computational Linguistics from Kharkov State University, USSR, and a Ph.D. in Linguistics from the Hebrew University of Jerusalem, Israel. 
He has published in the fields of parsing, generation, machine translation, knowledge representation and acquisition, and planning. Dr. Nirenburg is Editor of the journal Computers and Translation. SECOND CONFERENCE ON APPLIED NATURAL LANGUAGE PROCESSING Conference Committee General Chair Norman Sondheimer, USC/Information Sciences Institute Secretary-Treasurer Donald E. Walker, Bell Communications Research Program Committee Bruce Ballard (Chair), AT&T Bell Laboratories Madeleine Bates, BBN Laboratories Tim Finin, Unisys Ralph Grishman, New York University Carole Hafner, Northeastern University George Heidorn, IBM Corporation Paul Martin, SRI International Graeme Ritchie, University of Edinburgh Harry Tennant, Texas Instruments Tutorials Martha Palmer, Unisys Local Arrangements Jonathan Slocum, MCC (Chair) Elaine Rich, MCC Exhibits and Demonstrations Kent Wittenburg, MCC Publicity Jeffrey Hill and Brenda Nashawaty, Artificial Intelligence Corporation ------------------------------ From: MCCARTY@UTOREPAS Subject: Reply to James H. Coombs `ACH Text markup' message (109 lines) Date: 6 December 1987, 18:22:13 EST X-Humanist: Vol. 1 Num. 510 (510) Contributed by Robert Amsler (I'll make this reply public from the start since Nancy Ide already had to double-back and make hers public afterwards. It may, however, become a suitable topic for a more extended private discussion between those with an interest in text encoding standards.) As Nancy already noted, SGML is the most likely model which will be used for the Humanities Text Standard; however, there was considerable concern at the meeting by the French delegation about the workshop endorsing SGML as the official standard to be emulated. In view of that, it was deemed essential to avoid specifically saying this in favor of the broader statement that we'd attempt to be compatible with applicable existing standards where possible. 
Specifically, this also includes character transliteration standards--which are a considerable part of a humanities text standard's encoding problems. (I can hardly wait for ISO to adopt an official standard for encoding Egyptian hieroglyphics in ASCII!) I would also however like to make a strong statement that from a computational perspective there is no need for any one format to be the only one used. What is needed is that any format must be fully documented and an information-preserving transformation of the contents of any approved standard format. This was captured in the statement that the standard would be an `interchange' format. This does beg the issue of how the transformation takes place, i.e. a program needs to be written or capable of being run on the `other' format and on hardware available to the recipient of the data, but it is important to note that an SGML-like format may appear as very formidable to users who believe they will have to type in all the special codes manually--whereas a `keyboarding' format may be just as faithful in representing the information without undue burden to the typist. I'm sure you will agree to this since your excellent CACM article notes that one of the most overlooked forms of markup is the use of traditional English punctuation and spacing conventions. Returning to your message's points, your 4th point seems to me to be exceptionally good and something that we did not explicitly get to in the Poughkeepsie meeting, i.e., ``4) There should be no attempt at establishing a "closed" tag set. The current AAP SGML application allows for definition of new tags, but it does not support such definition in a practical way. The consequence is that people will use "list items," for example, when they should be using "line of poetry." Within these guidelines, it can only be healthy to provide a list of tags that people should choose from when tagging certain entities. 
The point of this is that we cannot predict what textual elements will be of significance for what researchers. We have to allow for the discovery of textual elements that no one has categorized previously. At the same time, there is no point in having 30 different tags for "line of poetry." The guidelines should make clear that DESCRIPTION is paramount and that the use of particular tags is secondary.'' I think that achieving this latter goal, of not having 30 equivalent tags for the same text element, will be an important role of the text encoding standards subcommittees. What strikes me as needed here is the database concept of a `data dictionary' to provide definitions for all the `tags', combined with the information-science concept of a tangled hierarchical thesaurus of tags (terms) including the four major categories of `broader tag' (BT), `narrower tag' (NT), `related tag' (RT) and `use instead' (XT ?) pointers. Thus the standards subcommittees should begin work on a thesaurus of tags which defines each tag's intended domain of text entities, its relationship to other more general and more specific tags, as well as related tags and tags which should be used instead of a given tag. This means, for example, that in tagging a text feature one could use a generic tag such as `paragraph' or a more specific tag such as `summation paragraph', and that an author would have a guidebook of established possible tags telling them the options and what qualifications a text object had to have in order to qualify for the use of such a tag. I do think it is important to allow for arbitrarily deep extensions of the tagging, but any standard will have failed if every author has to resort to inventing their own tags to encode text. 
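The data dictionary plus thesaurus-of-tags idea above can be sketched as a small data structure (all tags, definitions, and pointers below are invented examples, not a proposed standard):

```python
# A minimal sketch of a "data dictionary plus thesaurus of tags": each tag
# carries a definition and BT/NT/RT/XT pointers, so an encoder can look up
# broader, narrower, related, and preferred-alternative tags. All entries
# here are hypothetical.

thesaurus = {
    "paragraph": {
        "definition": "A prose unit set off by indentation or spacing.",
        "BT": [], "NT": ["summation paragraph"], "RT": ["section"], "XT": {},
    },
    "summation paragraph": {
        "definition": "A paragraph that summarizes the preceding argument.",
        "BT": ["paragraph"], "NT": [], "RT": [], "XT": {},
    },
    "verse line": {
        "definition": "A single line of poetry.",
        "BT": [], "NT": [], "RT": ["stanza"], "XT": {},
    },
    "list item": {
        "definition": "One member of an itemized list.",
        "BT": [], "NT": [], "RT": [],
        # a `use instead' pointer: poetry should not be tagged as list items
        "XT": {"for poetry": "verse line"},
    },
}

def preferred(tag, context=None):
    """Follow a `use instead' (XT) pointer when one applies; else keep the tag."""
    xt = thesaurus.get(tag, {}).get("XT") or {}
    if context in xt:
        return xt[context]
    return tag

print(preferred("list item", "for poetry"))  # -> verse line
print(preferred("paragraph"))                # -> paragraph
```

An encoder consulting such a guidebook would look up a candidate tag, follow any `use instead' pointer, and browse the BT/NT entries to choose the right level of specificity.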
Note, this is still independent of the issue of `required minimum tags' in that the dictionary and thesaurus of tags only tell the user how a tag should be used and what alternatives exist to its use--they do not say that a tag must (or must not) be used (except in the case of the `use instead' pointers that attempt to avoid tags being used ambiguously). My model of what such a Thesaurus should look like is the ERIC Thesaurus of Descriptors. Robert A. Amsler Bellcore 435 South St., Morristown, NJ 07960 (201) 829-4278 From: MCCARTY@UTOREPAS Subject: The CD-ROM debate: erasable optical disks Date: 7 December 1987, 09:40:52 EST X-Humanist: Vol. 1 Num. 511 (511) Contributed by Sterling Bjorndahl Speaking personally, I am not going to run out and buy a CD-ROM drive for my home computer until we see what the next generation of laser technology is going to look like in terms of cost and performance. The latest I have heard on the topic is on page 12 of the December _Byte_: "Matsushita, the large Japanese parent company of Panasonic, ... will deliver a prototype of an erasable optical disk drive next year, probably in the third quarter, a company spokesperson said. It will probably be competing with products from Sony, Philips, and Kodak. Matsushita has invested heavily in the phase-change scheme, so that's probably the technology that will be incorporated in the drive it brings to market. In phase-change technology, molecules of tellurium suboxide change from an amorphous noncrystalline state to a crystalline state and back again, depending on the type of laser beam applied. But the company is also studying other approaches, including magneto-optical (which Sony is using) and dye-polymer technologies. One hurdle all the pioneers of erasable optical drives will have to leap is the slowness of the units, caused partially by the size of the optical disk head, which is much bigger than a head in a typical magnetic drive." 
CD-ROM is available now, with texts that I want to use, so I am glad that I have access to a system that can use that technology (Ibycus). And I think Bob Kraft has listed some excellent reasons for using CD-ROM technology where appropriate. However, I don't want to spend my own money on something that will limit my flexibility in the future. Thus my caution, until I can determine whether the new technology will be practical for me. I remember reading an article on erasable optical drives in a popular magazine within recent months. I thought it was Scientific American, but I can't locate the article among the issues in my magazine rack. Does anyone else know of it? I believe it was on magneto-optical technology, and I remember it mentioning data densities on the order of current CD-ROM technology rather than the current WORM technology (which seems to be less than half as dense as CD-ROM). Sterling Bjorndahl Institute for Antiquity and Christianity Claremont Graduate School Claremont, California From: MCCARTY@UTOREPAS Subject: Research opportunity (58 lines) Date: 7 December 1987, 13:23:51 EST X-Humanist: Vol. 1 Num. 512 (512) Contributed by E S Atwell Dear fellow "Computational Humanities" researcher, Do you know of any young graduates interested in corpus-based computational research on the English language? I have an opportunity for an aspiring researcher to come to Leeds for a 'taster', to work on a large collaborative project. I would be very grateful if you could forward the following details to any potential candidates you know of. Thank you for your help, Eric Steven Atwell Centre for Computer Analysis of Language and Speech AI Division, School of Computer Studies Artificial Intelligence Group, School of Computer Studies, Leeds University COMMUNAL is a large collaborative research project aiming to develop a robust Natural Language interface to Expert Systems, allowing access through natural English dialogue. 
This will require software to analyse and parse the user's input; convert it into the internal Knowledge Representation formalism; infer an appropriate response or solution; and generate output in ordinary English. At Leeds University, we will develop a powerful parser, based on a Systemic-Functional model of English grammar. The other partners in the project are: UWIST (project coordinators), the Ministry of Defence, ICL, and Longman. The appointee will be principally involved in designing, building, testing and documenting the parser software, using POPLOG prolog on a Sun Workstation. She/he will be expected to liaise with and learn from other researchers in the Centre for Computer Analysis of Language and Speech (CCALAS) and related research groups at Leeds and elsewhere; there will be opportunities for travel, to coordinate research with other partners, and to present results at international conferences. The post is for a fixed term of 18 months in the first instance, although the project may continue to a Second Phase. Starting salary is to be 8185 p.a., with an expected 7% increase in March 1988 and a further increment later. We require an appointment as soon as possible; please contact Eric Atwell via JANET (eric@uk.ac.leeds.ai) or EARN/BITNET (eric%leeds.ai@ac.uk), or by phone on (+44 532) 431751 ext.6119 or 6307 for further details of the post and how to apply; I can also give some idea of cost of living, housing etc for applicants outside the UK. From: MCCARTY@UTOREPAS Subject: Coombs' ``Markup: On Requirements'' message Date: 8 December 1987, 09:34:02 EST X-Humanist: Vol. 1 Num. 513 (513) While I have great sympathy for the goals expressed by James H. Coombs in this message, I have no optimism about the methods suggested to achieve those goals. The issue here is one of money and the existing source of such funding would be the same source of funding which currently supports research in the humanities.
If we propose that a computer archive in the humanities should have all these desirable properties, then unless a new source of funding is provided, it would have to take funds away from other types of humanities research. The alternative would be to create a self-funded archive which would have to derive funding from the sale of copies of its machine-readable data. This seems possible, perhaps funded by a surcharge something like that of the current copyright clearance center to whom most libraries send payments when they make photocopies of magazine and journal articles. However such a center would also have to be prepared to legally sue users of copyrighted data who did not pay for their copies. I have no trouble with this since, as Howard Webber recently said, if we interfere with the flow of funding back to the creators of intellectual property, we will eventually cut off the funds to develop such works. At present most texts in the humanities in machine-readable form are either the result of funded research or `donations' of humanists' time. This creates a poor man's archive. The real owners of the bulk of the humanities texts not available are the publishers, who routinely destroy the machine-readable works they print because of a variety of excuses similar to those of monks burning manuscript pages to light their candles. We need to form an archive in which major humanities publishers would be eager to deposit their machine-readable tapes--for the purpose of generating additional revenue from their computational use. I do not think attempting to Prussianize either the volunteer humanities data enterers or the existing marginally-funded archives would be a very good idea. Robert A. Amsler Bellcore Morristown, NJ 07960 From: MCCARTY@UTOREPAS Subject: Text Encoding, a reply to James H. Coombs comments Date: 8 December 1987, 09:38:34 EST X-Humanist: Vol. 1 Num. 514 (514) [This is a reply to some of James H.
Coombs comments on Nancy Ide's message] Coombs writes: ``A minor philosophical point, I guess: I don't think that we CAN know our needs fully. We need standards that accommodate needs that cannot be predicted today. The practical consequence of this observation, which I'm sure Nancy would agree with, is that one should seek a "productive" system instead of a system that satisfies everything on a list, and one should not spend a lot of time developing the list.'' Once upon a time I was doing a survey of the keywords and descriptors used to characterize articles in the Communications of the ACM. The keywords were author-supplied terms that described their article's content; the descriptors were selected by the authors from a pre-specified set of similar content descriptors supplied by the ACM. What I discovered was that as I collected more and more instances of keywords created by the authors, there was no closure whatsoever. The set of terms just kept expanding and there were large numbers of keywords which only one author used, and then only for one article. This is how I see the problem of the selection of tags for text entities in documents. That is, if the system is completely open and `productive' there will be little commonality between authors' selections--whereas if the authors are offered a wide range of approved tags to select from, then they will manage to find tags which meet their needs. ------- ``What are "multiple parallel hierarchies"? I can guess, but I want to be sure that I understand the problem. In most documents, we have, for example, pragmatic and syntactic hierarchies.'' The term was used at the Vassar meeting by David Barnard, I believe. The statement was in reference to the difficulty of software developers in providing software capable of interpreting a document written in the full SGML standard and as far as I'm aware there is still no full-SGML-capable software available.
I assumed that he was referring to the potential division of a work into OVERLAPPING tagged segments, i.e. it would be possible to have a work with tags which ended inside the span of other still running open tags, e.g., <sentence> ... <line> ... <foreign-word> ... </line> <line> ... </foreign-word> ... </sentence> ... </line> The problem here is that some entity would be broken into two parts if any entity were extracted. ``What is the practical value of a metalanguage that generates all markup languages? I would think that it would be so abstract as to be of no value.'' Who said `generates'; what we were discussing was a meta-language which `parses' all markup languages--a sort of least upper bound markup language. The thought was that we needed to accommodate all reasonable existing texts with markup information already in them. We weren't intending to require existing texts with carefully worked out markup schemes to be redone in a scheme which would offer nothing new to their markings other than a different way of noting the same information. However, your next point is well-taken... ``I suspect that this is part of the goal of salvaging work that has been inadequately coded.'' Actually we were thinking of salvaging work that had been ADEQUATELY coded before a standard was available. Rather than requiring every such work to be recoded in a new format, it was hoped that the new format could accept the existing works as is. Whether that is possible or not, as Nancy stated, is an open question since we haven't yet collected the documentation for existing collections of text and their formats. ``I believe that we will be better off if we worry less about the past and plan more for the future. I suppose that it's true that publishers have typesetting tapes in their basements, and that we could use those tapes.'' Actually, no they don't. They ordinarily don't get the tapes from the printers and if they did would only get the last version on tape before the final manual cut-and-paste corrections.
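[An aside on the overlapping-tags example above: the difficulty can be stated mechanically. The Python sketch below is illustrative only, with invented offsets and names; it treats each tagged region as a character span, and a document fits a single tree of nested tags only when every pair of spans either nests or is disjoint.]

```python
# A minimal sketch (offsets and names are illustrative, not from the
# original discussion) of the overlap problem: two markup layers over
# the same text, expressed as character spans, can partially overlap,
# and no single tree of nested tags can then represent both layers.

def partially_overlap(a, b):
    """True if spans a and b overlap without either containing the other."""
    (a0, a1), (b0, b1) = a, b
    return a0 < b0 < a1 < b1 or b0 < a0 < b1 < a1

line = (0, 13)           # a verse line
foreign_word = (4, 10)   # wholly inside the line: nests cleanly
sentence = (4, 18)       # starts mid-line, ends past it: overlaps

print(partially_overlap(line, foreign_word))  # False -- proper nesting
print(partially_overlap(line, sentence))      # True -- no tree can hold both
```

[SGML's own optional answer to this situation was the CONCUR feature, which lets concurrent document structures be marked up over one text.]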
Publishers thus routinely ignore and discard this phototypesetting data as a useless intermediate step and save the `valuable' printing plates instead. One reason is there is no common format in which to save the data for reuse. Each printer has their own variant of the hardware/software. However, regardless of that, the next is very true. ``I think that we have to accept that those tapes are of little value until someone converts the coding to descriptive markup. .... Whether it's a dictionary or a literary text, we can expect that inadequate coding will cause considerable work for anyone attempting to use the database. A metalanguage that includes procedural markup as well as descriptive markup will not help in such a case, because one still has to map procedural markup onto descriptive markup in order to be able to work with meaningful entities (definition, paragraph, etc.). Since procedural markup tends to be performed somewhat arbitrarily and does not normally provide a one-to-one relationship between entity and markup, there is no metalanguage that will help a researcher perform the necessary conversions.'' You are mixing two things here. First, while it is true one cannot go from a typesetting tape to a descriptive markup in one step, it doesn't necessarily follow that the procedural markup is useless. A case in point is dictionaries. There IS NO descriptive markup standard for dictionary entries (I'm working on developing one with a number of other computational lexicologists, but none exists right now), yet the phototypesetting tapes of dictionaries are very useful for creating a descriptive markup of their contents. Headwords are typeset in boldface, possibly outdented, certainly starting new lines; parts of speech are in italic, pronunciations are in special fonts for their phonetic characters, usually enclosed in (,)'s or similar delimiters. Etymologies prefer [,]'s.
Labels are in italics, sense numbers in boldface, definition texts in Roman type, with examples sometimes offset in <,>'s and sometimes in italics. All of these are positionally context-sensitive within the dictionary entry. Their descriptive nature can usually be unambiguously determined from the positional and font information on a phototypesetting tape. It would be a genuine aid to the people who today decode such phototypesetting tapes if they were in only ONE procedural markup language. At present they are in innumerably many different markup languages. ``What we really need is a sensible and dynamic standard. I don't think that anyone would argue that that standard should be anything other than descriptively based. Since we are going to have to convert texts to descriptive markup in order to use them anyway, why not just develop the standard and convert as necessary. Trying to save the past is just going to retard development.'' The reason is that the conversion is going to have to be done fairly often UNTIL a standard for both procedural and descriptive markup is available. We have no future without the publishers adopting a descriptive markup eventually, but until they do, we have no sensible future in hand-entry of published books when some electronic typesetting format is available. Keyboarding the OED, for instance, took several MILLION dollars! If the typesetting data had been available in machine readable form, it would probably have reduced the effort by a factor of ten. Again... Robert A. Amsler Bellcore Morristown, NJ 07960 From: MCCARTY@UTOREPAS Subject: Encoding schemes, text archive (reply to Coombs) Date: 8 December 1987, 09:46:13 EST X-Humanist: Vol. 1 Num. 515 (515) Contributed by "Michael Sperberg-McQueen" James Coombs has suggested I post my reply to his comments on text encoding. This is it; I have also appended a note on the phrase 'multiple incompatible hierarchies' which seems to be unclear.
---------- Many thanks for your note about text encoding and the ACH/ACL/ALLC initiative in particular. I agree with you in every detail, as far as I can see, and will just try to clarify a couple of things quickly, which can be discussed at length later. 1 In preparing for the Vassar meeting, the ACH committee on text encoding had come up with a plan similar to the base + extensions that you suggest. The basic idea of user extensions was universally accepted, but the word 'require' was complete anathema to a number of people at the Vassar meeting. These were (a) those in charge of large existing text archives, who wanted to make very sure the guidelines would not turn into something their funding agencies would eventually require (no single quotes here!) them to conform to; and (b) some people worried about the possibility that funding agencies and their reviewers might use the 'requirements' of any guidelines to refuse funding to anyone who deviates from the required minimal tagging, even for adequate scholarly reasons. It was agreed to 'recommend' certain minimal tags (the verse paragraphs of Milton would be a good example) for newly encoded texts, but consensus could not be reached on any more than that. This was a disappointment to me, but appears on reflection to affect not the structure of the guidelines but only the choice of words to describe it. In any case, there will be a fairly extensive pre-defined tag set, I expect, but not a closed one. 2 SGML should have been mentioned explicitly in the closing document, but at the last minute some delegates objected that such details were too low-level to deserve mention in such a statement of principles. The objection was presented as being stylistic, but may have been partly substantive.
In any case, the planning group at Vassar were unwilling to commit themselves to SGML without reservation, because it was not clear how well SGML proper could handle the multiple incompatible hierarchies necessary for a lot of textual research, and some objected to what they said was SGML's verbosity. The SGML supporters did succeed in persuading the group that SGML should be used, unless experience showed it simply could not. (We know full well experience will show no such thing.) Whether we try to formulate formal document type definitions or not remains to be seen, but given the unregulated habits of the texts we study, cleanly defined hierarchies of the sort DTDs are designed for won't be very easy or do anyone much good. (The OED people said that they use SGML syntax but have never bothered with a DTD and never missed one. The variety of entries in the dictionary, they said, is such that a type definition couldn't be written in advance anyway, and written after the fact would just be an inventory of the various forms of entries they had empirically found.) In any case, we are hoping not to re-invent SGML. In fact, some people were very interested in attempting to use SGML for the metalanguage required to describe existing encoding schemes, but I am uncertain whether SGML itself will be useful in defining the syntax and semantics of procedural markup or of old card-oriented encodings with author / play / act / scene / line references encoded in columns 73-80. But perhaps when I finally get my hands on a copy of the standard itself, I'll find out it can do all of that too. ------- [ end of extract from original note ] ----- HUMANISTs will be very interested in the article Coombs et al. have just published in the Communications of the ACM, and I encourage anyone interested in encoding texts or in using encoded texts to read it. 
Further clarifications and suggestions: 3 'multiple parallel hierarchies' (Ide) and 'multiple incompatible hierarchies' (above) seem not to be immediately clear to all. We Michael Sperberg-McQueen, Univ. of Illinois at Chicago From: MCCARTY@UTOREPAS Subject: Bibliography or bibliographer needed (40 lines) Date: 8 December 1987, 09:58:27 EST X-Humanist: Vol. 1 Num. 516 (516) Contributed by "Rosanne G. Potter" I am editing a book on Literary Computing and Literary Criticism containing essays by Richard Bailey, Don Ross, Jr., John Smith, Paul Fortier, C. Nancy Ide, Ruth Sabol, myself and others. I am looking for someone who already has, in a fairly advanced state, a bibliography on this subject, or who is an experienced bibliographer and can put one together in the months of January and February (at the latest). Anyone who meets either of these descriptions or can suggest the name of someone who could fulfill this need, please let me know. The book is completely written; a copy could be sent immediately to anyone seriously interested in the project. The current situation is that Art Evans at U of Penn Press wants to publish the book and is, in fact, planning to get it into the Fall List, but we are both waiting on two readers' reports--the UPENN board is not as enthusiastic about the possibility of a collocation between LIT CRIT and Computing as either Art or I wish--so they must be convinced by the readers' reports. Whether Penn publishes it or not, I have little doubt that I will be able to find a suitable publisher--though not likely one who will publish it as quickly--and that a bibliography will be required by some reader or board soon. (I'm surprised it hasn't happened yet.) Rosanne G. Potter Department of English Iowa State University Ross Hall 203 (515) 294-2180 (Main Office) (515) 294-4617 (My office) (515) 232-4473 (Home) From: MCCARTY@UTOREPAS Subject: Text archives, centers (92 lines) Date: 8 December 1987, 10:06:53 EST X-Humanist: Vol. 1 Num.
517 (517) Contributed by "Michael Sperberg-McQueen" Jim Coombs asks whether a North American text archive would get us anything we can't already get from Oxford, and if so how it should be funded and organized. I think it can, and this is why: 1 First, note: We *don't* need a North American text archive just as an archive or text repository. In this area, Oxford's work can hardly be faulted. They take everything, they work hard to document everything, they distribute as freely as their donors allow them to. 2 A North American center, though, could and should be set up to be funded by a number of universities. No one university in North America is likely to fund the kind of public service Oxford performs, let alone anything more. But ongoing funding from many schools could make it possible for a center to do some things that the Oxford Archive just does not have the funding or staff to do. 3 A North American center does not (thank heaven) have to compete with Oxford; it would make far better sense to work in cooperation. Oxford (in the person of Lou Burnard), it pleases me to say, agrees. 4 A center should provide a locus for cooperation in all the areas where universities now must pay large amounts of money to re-invent the wheel. 
That is: a creating new machine-readable texts (preferably according to some rational plan, as well as on demand) b documenting existing machine-readable texts both for users and for collecting libraries -- that is, the center should provide a basis for cooperative library cataloguing of machine-readable texts and the distribution of the catalogue records to the library community c upgrading existing machine-readable texts, converting them to a standard format and checking (or spot-checking) their validity d distributing all these texts to scholars e training of users and of computer advisors/consultants, via summer seminars, short-term grants to individuals to work in residence on their projects (in funding-agency terms, acting as a re-granting agency, I think this is called) f (possibly) assisting software development, either by helping establish and encourage cooperation among university-based developers or by performing development work of its own. (Frankly, I'm a little unsure how useful or feasible this is, but it's a point one often hears, so I mention it.) This is not an exhaustive list. It reflects what I know happens at Oxford, Toronto, Penn, BYU and such places. Also what ICPSR (the Inter-University Consortium for Political and Social Research) does now. 5 Funding -- it seems to me the universities should pay for this center, just as they do for ICPSR. We don't want just a consortium of humanities-computing centers, because many universities don't choose to support humanities computing that way. We want services to include enough library services that at least some university or college libraries will want to join. We want data distribution to be important enough that local data archives will want their schools to join. We want enough emphasis on humanities research that humanities departments will lobby for membership, enough benefits to computing consultants/support staff that computer centers will be in favor too. 
Who pays the membership fee will obviously depend on the internal politics of the institution, but the membership should be by institution. (Obviously it also must be possible to support the needs of independent scholars. But arrangements to that end must not allow schools to reap the benefits of having a center while evading the costs of supporting it.) The motive to join must, I think, be partly altruistic, partly financial. By joining such a consortium a school can help support humanities research in general, and get an awful lot of data free. Not joining, then, must mean the data costs money. It is not hard to figure that a consortium membership can be far cheaper than acquiring a scanner, paying maintenance on it, and running it within the school. Joining must also be preferable to buying the data from the consortium. 6 A center like this could support text-encoding standards in ways I think Oxford would find difficult. Without some deep changes in its funding, the Oxford Archive can't hope to convert its holdings to a new standard. A new center could make that part of its raison d'etre. 7 Obviously, there is no need to limit the membership of a consortium such as I have just described to North America. But I think that's where the need is, research in Europe being organized on different lines. 8 This concept of a consortium-supported general-purpose archive and center contrasts sharply with that of cooperation among humanities computing centers and with that of a set of regional or discipline-based centers, which have been propounded in recent years from some quarters. I hope those who prefer those plans to this will be persuaded to describe to us how they would prefer to see things organized. That would be extremely useful to us all. Michael Sperberg-McQueen, University of Illinois at Chicago From: MCCARTY@UTOREPAS Subject: In reply to Robert Amsler on ACH Text Markup Date: 8 December 1987, 10:11:52 EST X-Humanist: Vol. 1 Num.
518 (518) Contributed by "James H. Coombs" Robert says: I would also however like to make a strong statement that from a computational perspective there is no need for any one format to be the only one used. What is needed is that any format must be fully documented and an information-preserving transformation of the contents of any approved standard format. This was captured in the statement that the standard would be an `interchange' format. First, I am confused by the word "format." I would like to see something more specific, such as "markup language." Perhaps ACH does intend something broader than markup language though. I need a definition to know that I know what is being referred to. Does "format" here include things like the location of markup? Microsoft Word, I am told, stores markup at the end of the file. Well, that seems to me on further thought to be a markup language with something like a postfix syntax. While it's true that we can process files with a variety of markup languages, we need more than full documentation. We have full documentation of the procedural markup language for Waterloo SCRIPT, for example, but that markup language does not provide us with the information that we need; instead of telling researchers that an entity is a "verse paragraph", it tells Waterloo SCRIPT what procedures to perform at a particular point in the text stream. Perhaps "information preserving" is intended to capture this notion somewhat. Well, the information must be encoded in the text in the first instance before we can preserve it, and we want researchers to encode the information when they enter the text. An "interchange format"? Again, I'm a little confused, or I'm not convinced that primary needs are being addressed. We have standards for document interchange that preserve nothing but formatting information, e.g., font changes. 
If the primary instance of a text encodes nothing more than formatting information, then we will have information preservation, but the information that we preserve will not be the information that we really need. We will know how to print a text, but we won't know (computationally) what the individual entities are. I imagine that Robert and others agree with everything that I have said. What I am asking for is more precision and, above all, a commitment to descriptive markup. In addition, I am asking for some restrictions on the markup languages. Specifically, markup should be contiguous with the elements that are being marked up; markup should appear in the text stream. Yes, we can process files that store all of the (electronic) markup at the beginning or at the end, or part here and part there. But why should we invite this complexity? Can we reasonably expect every scholar to have access to programmers who can convert many formats into one? Or can we reasonably expect every scholar to have a separate concordance program, a separate retrieval program, etc., for every possible markup language? or for every markup language that is used in one of the texts that the scholar needs? While it may be convenient in the short term for someone to sit down and type in Microsoft Word, our acceptance of Word documents would be very expensive to many people for many years. What is the value of a standard that allows this? ``it is important to note that an SGML-like format may appear as very formidable to users who believe they will have to type in all the special codes manually--whereas a `keyboarding' format may be just as faithful in representing the information without undue burden to the typist. I'm sure you will agree to this since your excellent CACM article notes that one of the most overlooked forms of markup is the use of traditional English punctuation and spacing conventions.''
If SGML appears formidable to people, let's educate them, and let's develop software that minimizes the effort. Currently popular software seems to minimize markup effort, but it fails to record sufficient markup. Unaware of the deficiencies of their software, people say that they want more fonts, for example, and that they are not interested in descriptive markup. We need to make it clear to Microsoft, Dragonfly, and others that we need descriptive markup. I think that we want to stay away from the word "keyboarding." One of the complaints has been that people do not want to "keyboard" markup. So, if we use "keyboarding" for the act of performing scribal markup (punctuational and presentational) but not for the act of typing descriptive (and referential) markup, then we invite confusion. Clearly Robert is referring to the use of punctuation, and he must also be referring to the use of presentational markup (e.g., skipping space between paragraphs). Both of these forms of markup have deficiencies that descriptive (and referential) markup do not have. Above all, they are ambiguous; in addition, they are often much harder to parse. First, ambiguity. Periods are used to end sentences and they are used to indicate that a string of characters is an abbreviation (Mr.). Perhaps even worse, the same character is used to indicate that a word is possessive and to indicate the end of an imbedded quotation, e.g.: a) She told him, "Do not say 'dogs' house' anymore." How much time do we want to waste on developing algorithms to parse this markup? Why not use (b) instead? b) She told him, <quote>Do not say <quote>dogs' house</quote> anymore.</quote> Software can easily display (a) when it has recorded (b), but it cannot easily generate (b) when it has recorded (a). Of course it is easier for most of us to enter (a) than it is to enter (b); it is always easier to do half the job.
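[The asymmetry between (a) and (b) can be made concrete. In the Python sketch below the <quote> tag name is illustrative, not Coombs's own notation: rendering descriptive markup down to typographic quotation marks is a simple traversal, while the reverse parse founders on exactly the ambiguity described, since the apostrophe in dogs' cannot be told from a closing quotation mark.]

```python
# Render nested descriptive quotation markup (hypothetical <quote> tag)
# into typographic punctuation: double quotes at the outer level,
# single quotes when nested. The reverse direction has no comparably
# simple algorithm, because ' is ambiguous in the rendered form.

def render(marked):
    out, depth, i = [], 0, 0
    while i < len(marked):
        if marked.startswith("<quote>", i):
            out.append('"' if depth == 0 else "'")
            depth += 1
            i += len("<quote>")
        elif marked.startswith("</quote>", i):
            depth -= 1
            out.append('"' if depth == 0 else "'")
            i += len("</quote>")
        else:
            out.append(marked[i])
            i += 1
    return "".join(out)

b = "She told him, <quote>Do not say <quote>dogs' house</quote> anymore.</quote>"
print(render(b))  # She told him, "Do not say 'dogs' house' anymore."
```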
Once we start accepting this responsibility, we will start convincing software developers to support our needs, and entering (b) will not require much more than entering (a) does now. Why would anyone want to record (b)? Well, they might want to print the text with open and close quotation marks. They might want to study all of the quotations, or all of the imbedded quotations. They might want to study the use of possessives. And so on. Similar problems occur with presentational markup. Yes, if we have a one-to-one mapping between presentational markup and text element, then presentational markup records all of the information that descriptive markup does. We don't really need tags for each line of poetry in *Paradise Lost*, for example. We need know only that each line of poetry is terminated with '\n', for example. There is no conflict with SGML here, however, since SGML supports this method of marking up texts. In fact, in such a case we don't really have presentational markup at all, we have descriptive markup; the markup serves not to enhance the presentation but to identify that a stream of text is a line of poetry. You also load things a little with the phrase "undue burden," Robert. In part, I am arguing that there is a "due burden" that scholars must accept if we are to get anywhere in this whole project of using computers to assist our scholarship directly. Part of that "due burden" is the proper encoding of texts. In addition, I think that you overemphasize the costs of entering descriptive markup. You do so partially by implying that presentational markup is easier to select and perform. We argue in our article that presentational markup is considerably harder to select, and that there is no pretheoretical motivation for believing that either form of markup is easier to perform than the other. In addition, you seem to classify markup as presentational whenever it does not consist of tags.
Under our functional definition of descriptive markup, at least, the markup that you are talking about is actually descriptive markup. In any case, the sort of markup that you are talking about is provided for under the SGML standard. I thank you for your kind words on our article. Our next article will help clarify the distinctions that we make and how we are making them. For the present, it seemed more important to make people aware of the advantages of descriptive markup. I hope that my response does not seem overly microscopic. I find again and again that conceptual confusion leads to unnecessary practical problems. In order for scholars to decide what form of markup to use, they must know clearly what the competing forms of markup are and what each form has to offer. Finally, your discussion of thesauri and tag sets is interesting. I'm not sure that I have anything to add to it. Need to think about it more. Cheers! --Jim Dr. James H. Coombs Software Engineer, Research Institute for Research in Information and Scholarship (IRIS) Brown University jazbo@brownvm.bitnet From: MCCARTY@UTOREPAS Subject: archival politics (50 lines) Date: 8 December 1987, 10:18:46 EST X-Humanist: Vol. 1 Num. 519 (519) Contributed by Lou Burnard [This message was delayed as a result of finger trouble on my part - I sent it to the wrong node - LB] " If I can't count on a text from a particular archive to meet my needs, what is my motivation for bothering with that archive; and what is the motivation for the archive's existence? I certainly would not want to see it supported by public funds." (JAZBO on Friday) This is fighting talk! The only defence I can offer is that a community gets the Archive it deserves. If you guys don't have the sense to agree on a common language, why should the humble archivist be expected to do it? On the contrary, I could argue that I had a responsibility to preserve accurately the current state of affairs as a dire warning to future generations. 
I expect librarians would like to insist that all publishers produce books to the same dimensions too (makes the shelving so much easier dontcha know). I expect there were even once some librarians who did so insist. But I doubt whether they won many friends. There is a self-evident crying need "to set and maintain standards". But it has to come from the community of users. Once a standard has been defined, it is possible for an archive to indicate whether or to what degree a text is conformant to it, and that is certainly something every user has a right to expect of an archive. Once a standard exists it is also reasonable to expect an archive to seek ways of converting and enhancing nonconformant texts. But I don't think a general-purpose deposit archive has any right to decide what is or isn't acceptable until such standards have been defined. After all, most of the texts we have WERE useful to someone, at least once. Finally, may I with the greatest deference point out that an archive is emphatically not the same as a publisher. Publishers have to please their public or they go under. An archive is a mirror of its users. If all that its users wish to share is rubbish, reserving the best quality stuff for themselves, then the archive will be full of rubbish. It's up to you. Lou Burnard Oxford Text Archive From: MCCARTY@UTOREPAS Subject: Archives Date: 8 December 1987, 10:33:48 EST X-Humanist: Vol. 1 Num. 520 (520) Contributed by dartvax!psc90!jdg@ihnp4 (Dr. Joel Goldfield) Regarding Jim Coombs's questions concerning Michael Sperberg-McQueen's queries and comments, having a text archive at Oxford but not in North America as well seems adequate if a pledge is made by Oxford to supply these texts at a reasonable price (to be determined) and reasonably quickly. The only negative aspect I can think of at the moment, if these conditions are met, is that it would certainly be costly to download them via transatlantic (satellite) communication.
I would hope that telephone/modem linkage to receive this information would be cost-effective and that we wouldn't be limited to sending CD-ROMs or magnetic tape by mail. --Joel D. Goldfield Plymouth State College (NH, USA) From: MCCARTY@UTOREPAS Subject: Analysis of papyrological mss Date: 8 December 1987, 10:38:11 EST X-Humanist: Vol. 1 Num. 521 (521) Contributed by Jack Abercrombie In the interest of improving a program for papyrologists, the Center for Computer Analysis of Texts is willing to make available to colleagues a preliminary version of a program for mss analysis. Manuscripts first must be digitized and stored in TIFF format, a common file structure used in desktop publishing. The program allows one to enhance the digitized image on an EGA screen. If you have serious interest in assisting in the development work, we would be willing to send you the source code. You would have to have access to a digitizer. WRITE TO: JACKA @ PENNDRLS. John R. Abercrombie Assistant Dean for Computing, Director of the Center for Computer Analysis of Texts (University of Pennsylvania) From: MCCARTY@UTOREPAS Subject: Coombs' ``Markup: On Requirements'' Date: 8 December 1987, 10:44:17 EST X-Humanist: Vol. 1 Num. 522 (522) Contributed by Richard Giordano I really can't see what all the fuss is about. If people are serious about creating both national and international standards for data "markups", a data archive, and such related issues, I don't see why we don't work in close collaboration with at least these three organizations: the American Library Association, the Library of Congress, and the Research Libraries Group. They have the resources, know-how, and institutional connections to develop such standards, communication formats, and the like--and they have a track record in this regard that extends back over twenty years. Someone mentioned somewhere here that ALA was "a conservative bastion". I have no idea what he means by this.
Traditionally, ALA and LC have both taken the lead in the scholarly world in providing machine-readable information. The technical problems that LC has addressed have been fundamental to data processing. Rich From: MCCARTY@UTOREPAS Subject: Sending messages to HUMANIST, an editorial plea (30 lines) Date: 8 December 1987, 10:47:21 EST X-Humanist: Vol. 1 Num. 523 (523) Dear Colleagues: The new arrangement, whereby I intercept all messages to HUMANIST, seems to have worked well so far, but about one thing some confusion has arisen. Messages apparently intended for distribution sometimes are sent to me directly, that is, to mccarty@utorepas.bitnet, rather than to HUMANIST, i.e., humanist@utoronto.bitnet. My life would occasionally be made simpler if you would all adopt the convention of sending messages for distribution only to humanist@utoronto, even if you want my opinion on whether or not they should be distributed. (In that case, put a note to that effect in the message; I can easily delete the note.) If you want to write to me *as editor* of HUMANIST, then please send your message to mccarty@utorepas. Finally, beware of the distinction between UTORONTO, where HUMANIST lives, and UTOREPAS, where I electronically reside. Thanks very much for making HUMANIST a lively place. Yours, W.M. _________________________________________________________________________ Dr. Willard McCarty / Centre for Computing in the Humanities University of Toronto / 14th floor, Robarts Library / 130 St. George St. Toronto, Canada M5S 1A5 / (416) 978-4238 / mccarty@utorepas.bitnet From: MCCARTY@UTOREPAS Subject: A North-American text archive and service centre (80 lines) Date: 8 December 1987, 12:29:31 EST X-Humanist: Vol. 1 Num. 524 (524) Contributed by Ian Lancashire Michael Sperberg-McQueen argues for a national North American text archive and service centre, supported by a consortium of colleges and universities. 
He contrasts this to a consortium of humanities computing centres, which (because it involves fewer institutions) can be perceived as serving only a small percentage of faculty and students. He also challenges someone to dispute this. I'm suspicious of any proposal to centralize computing needs about one data-processing shop. Competition is the essence of being American, isn't it? The more heads at work on a problem, the better chance of finding an answer, or ideally something completely new that we didn't expect in the first place. Most of us have just won the fight for personal computing equipment and software: resources for which we're beholden to no one because they are in the marketplace, available for a price that's affordable even to students (or should I say even to faculty). Hasn't centralized computing lost the war in most universities? Do we want to perpetuate it on a national scale? The more people creating text archives, the better, because what we need are specialized collections from the scholarly editors who have previously worked only with paper books. Will the research projects set up to edit works by individual authors trust a central archive to do their work for them? Surely not. Look at the same argument for centralized software provision on a national scale. You can find clearinghouses of MS-DOS programs at North Carolina and at Wisconsin, and competitors emerge monthly from the woodwork. Our colleagues cannot agree to accept only one place for a software depository and distribution centre. They long ago rejected centralized software development because business proved it could produce far better work than any academic could. I'd rather buy my car from a car dealership that's in business for the money than from the government or from my engineering colleagues who occasionally build faster, more efficient cars for academic reasons. Few people in this field will argue with the idea of cooperation or consortia.
The question Michael poses is, should the consortium be a collection of workers or a collection of customers? Probably a consortium of humanities computing centres and facilities would be a good beginning to persuading our colleagues (wherever they are, whatever they are inputting) that a circle has more strength than a scattergram. We could at least help make the market for machine-readable texts profitable enough that companies now selling them (in the States, Electronic Text Corp. comes to mind) do well enough to subsidize (modestly, from royalties) further reliable machine-readable editing. [-30-] From: MCCARTY@UTOREPAS Subject: Text encoding (63 lines) Date: 8 December 1987, 16:52:09 EST X-Humanist: Vol. 1 Num. 525 (525) Contributed by "Michael Sperberg-McQueen" Four quick observations on text encoding provoked by the recent barrage of postings: 1 Jim Coombs is right to praise the better information content of descriptive tagging, but still we should not require descriptive markup for *all* texts. Confronted with a printed book or a manuscript, there will be cases where we don't *know* whether something is a 'chapter' or a 'section'--what we know objectively might be that there is a page break followed by centered 14-point Baskerville saying XXXX, followed by 28 points of white space, followed by text. Everything more is interpretation. If we do have an interpretation, I'm in favor of encoding it in descriptive markup. But sometimes we won't and won't want to. The Carmina Burana manuscript is a classic example of this: it has been rebound and the gatherings re-arranged at least once, and different parts of the manuscript (and different hands) may well reflect multiple attempts to impose some (mutually incompatible) structure(s) on the collection. It would be sound practice to separate an editorial judgment on the intended structure(s) of the manuscript from a codicological description of the information that leads to that editorial judgment.
The First Folio of Shakespeare, similarly, must be encoded with detailed typographic information if it is to be used for textual criticism, since the position of a word in the line, on the page, within the gathering, and within the volume, are all relevant to judging the authority of the word and its spelling. 2 Yes, the coming flood of machine-readable texts will overwhelm the material we now have in the machine, but still we must make our peace with (a) other (existing) markup schemes and (b) specifically presentational and procedural markup schemes. They will continue in use at least for a while and we must provide migration paths into the new scheme if we can. And markup restricted to font, etc. may be a useful first step in analysing any complex text, as dictionary work by Amsler and by Raimund Drewek at Zurich seems to show. 3 No, people should not have to have one concordance program for every encoding scheme in the world. (That is the current situation, though.) But many people do have large software systems built around specific formats. There is no need to cut them off, if we can develop one scheme capable of representing texts in those special formats without information loss. Given N different encoding schemes, such a universal scheme would reduce the translation problem from magnitude N * N to magnitude 2 * N. That, I believe, is a good reason to work for an "interchange format," and a good reason to accept in the interchange format whatever level of information is in the source. (Specific recommendations for minimal markup content shouldn't prevent this.) Eventually, we can always hope software developers will see that they might as well work directly with the interchange format rather than engaging in preliminary translation. But first we have to survive in the existing world dominated by existing schemes and non-schemes. 
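[Editorial note: the arithmetic behind the interchange-format argument in point 3 is easy to check. Translating directly among N encoding schemes needs a converter for every ordered pair, on the order of N * N of them, while routing everything through one interchange format needs only two converters per scheme (in and out). A small illustrative calculation:]

```python
def converters_needed(n_schemes):
    """Compare the number of converters required for direct pairwise
    translation among n schemes against translation via a single
    interchange format (one converter in, one out, per scheme)."""
    pairwise = n_schemes * (n_schemes - 1)   # one per ordered pair of schemes
    via_interchange = 2 * n_schemes          # to and from the interchange hub
    return pairwise, via_interchange

print(converters_needed(10))  # (90, 20)
```

For ten schemes the difference is already 90 converters against 20, and the gap widens quadratically as schemes multiply.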
4 The library community is interested in machine-readable cataloguing data, and some of it is also interested in collecting and cataloguing machine-readable data. But are they also interested in creating it? If so, then yes, we should surely cooperate with them. But the only useful basis for any cooperation is for each group to be clear on its own point of view. And that is what all this fuss is about. Michael Sperberg-McQueen, University of Illinois at Chicago From: MCCARTY@UTOREPAS Subject: more archival politics (this one shd run and run) Date: 8 December 1987, 16:57:06 EST X-Humanist: Vol. 1 Num. 526 (526) Contributed by Lou Burnard Disagreeing with Ian is not to be undertaken lightly. Nevertheless... "I'm suspicious of any proposal to centralize computing needs about one data-processing shop." An archive is not a data-processing shop. "Hasn't centralized computing lost the war in most universities?" Well, actually, no, it hasn't - not at those where it's been recognised that there's room for both private and public resources anyway. Some of us didn't even know there was a war going on... "The more people creating text archives, the better" Maybe we need a definition here. The more people creating *text resources* the better, of course. But the more centres competing to archive and secure those resources? I'm not so sure! How many libraries does your university need? Ours has far too many - and when it started thinking about the problems of integrating their various catalogues, it soon became apparent that no one library could impose its will on the others. So, guess what, a consortium emerged. A centralized quasi-official embodiment of the university's collective desire to bang the librarians' heads together until they started squeaking in tune. I'm all in favour of competition and the American Way (I want to see New York again too). But an archive has responsibilities which distinguish it very sharply from data producers or consumers.
Recently, an organisation called the Knowledge Warehouse came into being here in the UK. It was funded by a consortium of UK publishers as a private company and also got a grant from the British Library. The idea was to set up some sort of archival service for publishers' typesetting tapes etc. The scheme looked good on paper and had a lot of money behind it. But it doesn't seem to have been successful. The consensus amongst those I've talked to was that too few publishers wanted to play ball with an organisation which they at least perceived as a competitor. The moral I draw from this is that just as, with books, there is a place for bookshops, private collections, and great state-owned and maintained libraries, so there is a place for electronic text corpora and private collections of texts as well as for great archives. But it's important to distinguish them, because their roles and priorities are quite different. I wouldn't put a bookseller in charge of a library - nor would I expect a librarian to make much money in publishing. Lou From: MCCARTY@UTOREPAS Subject: Text Encoding Date: 8 December 1987, 19:01:37 EST X-Humanist: Vol. 1 Num. 527 (527) Contributed by "James H. Coombs" In reply to Robert Amsler's posting of 8 December 1987, 09:38:34 EST On closed vs. open tag sets, Robert concludes: This is how I see the problem of the selection of tags for text entities in documents. That is, if the system is completely open and `productive' there will be little commonality between authors' selections--whereas if the authors are offered a wide range of approved tags to select from, then they will manage to find tags which meet their needs. I would agree with Robert if I could bring myself to believe that we can develop a tag set that will genuinely meet our needs.
The AAP tag set, for example, does not provide a "poetry quotation" tag; and we can expect scholars to realize that they can use a "list type" for poetry quotations in order to meet the immediate needs of 1) tagging an entity and 2) getting the entity formatted in a particular way. To some extent, we also have to say that this approach would meet many of the needs of descriptively marking up a text (as long as the chosen list type is used only for poetry quotations---with internal consistency). Some of the advantages of descriptive markup are lost in such an approach, however. Above all, the choice of the tag is not intuitive; both the original researcher and anyone using the text later will have to perform extra work to determine what "list type 2" is used for. I don't want to go on about this too long here, so let me just appeal to people's intuitions by saying that the tag "poetry quotation" has many advantages over the tag "list type 2". (None of these advantages is computational, however; once a programmer determines that he/she should do X to "list type 2", the two tags have equal value.) If we discount the sort of advantages that I am referring to (discussed in our article---I'm not hedging), then we can solve the problem quite easily. I used the following tags:

E0 for paragraphs
E1 for poetry quotations
. . .
E2347 for passages that allude to Genesis

Of course, people will immediately say things like, "Let's all use <p> for paragraphs." The many motivations that would cause such a response are the same motivations that cause us to provide
<p>
for paragraphs in the first instance. Because we cannot predict all entities that people need to mark up, we tend to throw our hands up in the air and say one of two things: 1) Let's just fake it from here on out and provide several list types. 2) We need to keep the tag set open. 3) [another approach that I am missing??] AAP has chosen approach (1) and then said go look at the standard if you really need something else. (The information for developing AAP-compliant documents with user-defined tags is not provided in the authors' documentation.) The deficiencies of approach (1) should be immediately clear to us when someone like the AAP ignores something so basic (to humanists) as poetry quotations. Moreover, if I am really analyzing a document, I will quickly run out of AAP list types. (And I don't think that I could twist things quite so far as to use a list type for my allusions anyway.) Who is capable of providing a closed tag set that addresses these problems? Yes, the "list type" approach addresses them to some extent, but then what have we gained over "E0"? Ok, so perhaps we remember to provide a tag for allusions. But what tags will we provide for post-structuralist critics? For the next major critical theory? I agree that we should provide "a wide range of approved tags to select from," but I think it even more important to ensure that documents are marked up descriptively. (I recognize that I am close to equivocating in my use of "description." I am not fully satisfied with the functional definition that we offer in our article. Renear and I are working on this, and it gets complicated quickly. Basically, however, I want to say that "poetry quotation" is descriptive in some way that "list type 2" is not.) And a posting just arrived from Michael Sperberg-McQueen, who argues that descriptive markup is not always appropriate.
I suspect that Michael is saying that we sometimes need to describe the manuscript instead of the abstract text; in which case, we still want descriptive markup (i.e., we don't want Waterloo SCRIPT font instructions; we want something that says that X is/was in F font). In any case, I can hedge and conclude: insofar as a text is susceptible to description, it should be marked up descriptively and, further, tag sets should be 1) open and 2) descriptive in this more intuitive sense of descriptive that favors "poetry quotation" over "list type 2". --Jim Dr. James H. Coombs Software Engineer, Research Institute for Research in Information and Scholarship (IRIS) Brown University jazbo@brownvm.bitnet From: MCCARTY@UTOREPAS Subject: North American Archive(s) Issue Date: 8 December 1987, 19:06:03 EST X-Humanist: Vol. 1 Num. 528 (528) Contributed by amsler@flash.bellcore.com (Robert Amsler) How about a Humanities Archive Network (HumAN)? I think we have an opportunity to do something considerably greater than the Oxford Archive and in fact an obligation to do this because of the state of networking available in the USA. What I'd propose is a collection of sites across the country ALL offering to host the archive or provide access to its data via their computing facilities. We should be thinking of downloading information electronically as the PRIMARY means of distribution of archive data, with only rare recourse to writing the information out onto magnetic media as a dissemination method. The model I have in mind is based upon that used for the ARPANET's Network Information Center (NIC), which maintains a list of software and personnel at all the sites it serves. One can access this database by connecting to it from anywhere on the network, determine where the data you want is located, and set about its retrieval by either anonymous remote login and file-transfer-protocol (FTP) downloading of the data, or finding out whom to contact as the holding institution's network liaison. So...
data would be distributed around the country as suited the individual institutional member's computing facilities. Some institutions might opt to have copies of everything; others to themselves store nothing, but instead to keep texts they created on equipment elsewhere. Each member institution would have a designated liaison who maintained contact with the central information resource center which itself kept a complete database of what was available where, both in terms of data and computing facilities (not unlike a list of libraries, their holdings and research facilities) and also of researchers and their interests and how to reach them electronically. This part of the Humanities Archive Network would require funding, as well as the creation of the HumAN itself--though this is becoming easier and easier as more and more research communities take to setting up their own networks. I would think the NEH ought to find such a proposal well justified in terms of the potential multiplier effect it would have upon the entire field of (computational) research in scholarship. Robert A. Amsler Bellcore Morristown, NJ 07960 From: MCCARTY@UTOREPAS Subject: The Humanities Computing Yearbook (53 lines) Date: 8 December 1987, 19:12:29 EST X-Humanist: Vol. 1 Num. 529 (529) Contributed by Willard McCarty (in this case as YEARBOOK@UTOREPAS) Dear Colleagues: As some of you will know, I am gathering information about interesting and worthy software for a new serial, the Humanities Computing Yearbook, to be published by Oxford U.P. The announcement for the Yearbook follows. Please send your recommendations to me, c/o yearbook@utorepas.bitnet. Thanks very much for your help. -------------------------------------------------------------------------- The Humanities Computing Yearbook On behalf of Oxford University Press, the publishers, the Centre for Computing in the Humanities is pleased to announce a new periodical, The Humanities Computing Yearbook. 
Ian Lancashire and Willard McCarty are the co-editors. An editorial board is in process of being set up. The first volume, scheduled for publication in the summer of 1988, aims to give a comprehensive guide to publications, software, and specialized hardware organized by subject or area of application. Research and instructional work in many fields will be covered: ancient and modern languages and literatures, linguistics, history, philosophy, fine art, archaeology, and areas of computational linguistics affecting text-based disciplines in the humanities. The more notable software packages will be described in some detail. We welcome your suggestions of what we should consider. We are especially interested in discovering innovative software that may not be widely known, including working prototypes of systems in development. Electronic correspondence should be sent to YEARBOOK@UTOREPAS.BITNET, conventional mail to the Editors, The Humanities Computing Yearbook, Centre for Computing in the Humanities, Univ. of Toronto, 14th floor, Robarts Library, 130 St. George Street, Toronto, Canada M5S 1A5. Our telephone number is (416) 978-4238. Please feel free to distribute this notice. Ian Lancashire Willard McCarty From: MCCARTY@UTOREPAS Subject: Text Encoding: salvaging texts (addendum) Date: 8 December 1987, 22:00:06 EST X-Humanist: Vol. 1 Num. 530 (530) Contributed by "James H. Coombs" Oops. I should have added that I am not saying that people should throw away everything that does not accord with the standard. I am saying that the standard should not try to accommodate inadequate texts. I like (my interpretation of) what Lou Burnard says about them (implicitly?): they are "rubbish." Well, ok, so we may be better off recycling many of them instead of just throwing them out, but let's say that they are in the recycle bin and not that they are in the approved bin. --Jim Dr. James H. 
Coombs Software Engineer, Research Institute for Research in Information and Scholarship (IRIS) Brown University jazbo@brownvm.bitnet From: MCCARTY@UTOREPAS Subject: Text Encoding: salvaging texts Date: 9 December 1987, 09:03:00 EST X-Humanist: Vol. 1 Num. 531 (531) Contributed by "James H. Coombs" Robert Amsler corrects my perhaps overly vigorous condemnation of texts that have been marked up procedurally. I have no intention of entering the American Heritage Dictionary from scratch or even of working with a version that has all markup stripped away. Such markup can help one considerably in the process of deriving a descriptively marked up version. (Just to clarify, I AM working with the AHD.) I don't feel the same way about *Paradise Lost*, however. Perhaps I am being overly vigorous again, but I would rather enter that relatively tiny (compared to the AHD) and simple document myself than spend the same time negotiating for a tape, getting it loaded on the mainframe, learning the markup system, writing the programs to convert it to descriptive markup, etc. So, first point, dictionaries are unusually large and complicated. Poems, even long poems, imbue one with the poetic experience even when the task is as mindless as keyboarding (but they better be good poems too!). We have a continuum, and we have all of that old philosophical stuff about points at which one would just prefer to enter and proofread rather than negotiate, acquire, interpret, program, etc. Second and final point, my concern was with the value of a metalanguage. Correct me if I am wrong, but the fact that I would rather convert the AHD than enter and proofread it has nothing to do with our ability to develop a metalanguage that will generate(JHC)/parse(RA) both the procedural and the descriptive markup. Perhaps one CAN develop a context-sensitive grammar that will enable one to uniquely identify every element type in the AHD.
I don't know anyone who believes that they can develop that grammar more quickly than they can perform partial conversion automatically and then finish up by hand. If it's that difficult to generate the context-sensitive grammar, won't it be much more difficult to generate a metalanguage? Now, if a single grammar will work for many dictionaries (and we actually have the need to convert many dictionaries), then it may be justified to develop the grammar. Is this what you are working on, Robert? My goal was (and remains) to discourage what seems to me to be a quixotic pursuit: the development of a metalanguage that will generate(JHC)/parse(RA) all forms of markup for all documents. The fact that one may be better off with procedural markup than without it in some/many cases does not address my claim that such a metalanguage is impossible, or even my weaker claim that even if it is possible, it's not worth the effort (again, what's the gain?). --Jim Dr. James H. Coombs Software Engineer, Research Institute for Research in Information and Scholarship (IRIS) Brown University jazbo@brownvm.bitnet From: MCCARTY@UTOREPAS Subject: archives, mark-up and money (75 lines) Date: 9 December 1987, 09:04:34 EST X-Humanist: Vol. 1 Num. 532 (532) Contributed by Phillipa Mathieson Interesting how HUMANIST discussions on standards for text encoding, making publishers aware of the need for electronic texts, assessing copyrights for such texts, establishing text archives, and programs for text searching and retrieval all seem to come together. It almost sounds as if we were all aiming for the same thing: texts on-line in machine-readable format with the software to manipulate them, available to all who want them. The main question seems to be "whose money are we going to use to achieve this?"
Ian Lancashire's analogy of buying a car from commercial dealers, and the acceptance by other HUMANISTS from time to time of "intellectual copyright," and of the restrictive mechanisms needed to ensure financial returns for the owners of that copyright, alarm me. I see no reason why academic grant money intended for humanist research should not be spent on laying down guidelines for the encoding of texts and the software to read them, and for distributing the results. And I agree with Lou Burnard that "the community gets the archive it deserves." If we aren't enough of a community, or interested enough in the disinterested rewards of scholarship, to share our work with others without demanding additional financial rewards (in most cases, over and above those already granted us by salaried positions in academic institutions or as staff members of publicly funded educational projects), we don't deserve either the positions or the use of on-line materials. I recently discussed with a Toronto software firm my need to use a database program (Empress32 from Rhodnius) on a second computer with slightly different architecture from the first. We had already bought a licence to use the program on one, and we wished to use the same program for the same project on another. Their attitude was that a firm which expands and buys a new machine must pay for a second licence for the new machine. I think they saw this as a kind of tax on the profits of the firm which their software was assumed to have contributed to. When I said we had no profits, the salesman kindly tried to explain to me that they had to protect their copyright in the program by charging individual licences for individual machines: "If you wrote an article and someone else used it as the basis for his own work, without acknowledgement, and made a great success of it, you'd sue the balls off him."
This kind of commercial attitude has no place in humanist scholarship, and putting the development of archives and their software on a commercial basis will simply cheapen (in the sense of "lowering the quality"--certainly not in the financial sense) and restrict humanist activities. It is good to have an organized group establishing guidelines for text mark-up and doing so in an open forum. It would be bad to have a commercially based central text archive system which discouraged individual scholars from making available their work by maintaining an arcane set of instructions for mark-up, which only "they" really knew how to insert so that the standard software programs could use it. Michael Sperberg-McQueen's reservations about the software-development function of a central archive system are a good sign: setting up a central archive system seems to me likely to lead to the development of software for the specifications of that archive, and if you add the commercial competition angle, we'll all end up paying through the nose for the software *and* the texts, and running round nervously trying to comply with restrictive copyright requirements for texts long since free from their original publication copyright restrictions. At which point, it will again become easier to type it in yourself, and the idea of a community of scholars sharing their work will bite the dust yet again. From: MCCARTY@UTOREPAS Subject: What new information in texts of the Oxford Archive? (33 lines) Date: 9 December 1987, 09:14:37 EST X-Humanist: Vol. 1 Num. 533 (533) Contributed by Lou Burnard The Text Archive gets a fair amount of criticism for not providing more information about the texts in the catalogue ('fair' meaning both "a modest quantity" and "justifiable"). As I am now embarking on a major overhaul consequent on a local change of mainframe, I'd like to start trying a bit harder to rectify this situation.
Humanists and others who have a view can help by making some suggestions about what information they think ought (minimally) to be provided in the catalogue. I should stress that I don't have the resources to do a proper cataloguing job - not yet anyway. But some things that could be added to the current shortlist are:
1. more bibliographic info (e.g. date of first publication/composition, genre etc)
2. some sort of code for level/type of markup
3. some sort of code indicating completeness, accuracy, level of verification
4. (probably not in the catalogue, but generated for each text) a text profile, i.e. everything that a program I haven't written yet can deduce automatically about the text - size in records, tags used, character usage profile etc.
Comments? Preferences? Concrete suggestions? Lou From: MCCARTY@UTOREPAS Subject: Archives and encoding (51 lines) Date: 9 December 1987, 12:46:41 EST X-Humanist: Vol. 1 Num. 534 (534) Contributed by Richard Giordano From what I've been reading, four issues seem to be:
- if there is to be a machine-readable archive, where should it be?
- who will pay for it?
- what constitutes a coding standard?
Michael Sperberg-McQueen also includes the question of who is going to do the conversion, as well as the coding. The library community certainly will not get involved in conversion efforts. But you can be sure that the institutional structures already exist within the library community to both establish and maintain a data archive of machine readable text. They're in the business of collecting and making available information to users, and I think the best of them do a great job at it. Anyway, you can be certain that sooner or later--and probably sooner--the American Library Association is going to take up the issue. And when it does, the first thing that will come up is the establishment of a standard interchange format--much the same way that cataloging and other data is exchanged throughout the world in a standard MARC format.
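Lou Burnard's hypothetical "text profile" program above (everything a program can deduce automatically about a text) is straightforward to sketch. The following is a minimal illustration, not anything the Archive actually ran; the angle-bracket tag pattern and the output fields are my assumptions:

```python
import re
from collections import Counter

def text_profile(lines):
    """Deduce a simple profile of a text: size in records (lines),
    tags used, and a character usage profile -- the fields Burnard
    mentions.  Assumes tags are delimited by angle brackets."""
    tags = Counter()
    chars = Counter()
    for line in lines:
        tags.update(re.findall(r"<[^>]+>", line))
        chars.update(line)
    return {
        "records": len(lines),
        "tags": dict(tags),
        "chars": dict(chars),
    }
```

Run over the records of a text file, this yields exactly the kind of automatically generated profile Burnard imagines: a record count, a tag inventory, and a character-frequency table.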
As for the libraries' point of view: nothing more and nothing less than to (1) preserve information; (2) index and describe the information so that users can easily get to the source information, as well as having an idea of what the information is about; (3) make the information available to users. There might be more to it than this, but I think this pretty much covers it. It seems so obvious to me that the institutional structure exists, as well as the expertise, to establish a national archive of machine-readable texts, as well as assistance in generating a standard communications format. Libraries can also be of use in helping to establish practices by which text itself is indexed (since the indexing and retrieval of information for untrained users is at the heart of every librarian's professional education). Libraries, however, are not in a position to convert the sources into machine-readable form. Richard Giordano From: MCCARTY@UTOREPAS Subject: Salvaging Texts (20 lines) Date: 9 December 1987, 14:41:51 EST X-Humanist: Vol. 1 Num. 535 (535) Contributed by Mark Olsen Funny that James Coombs should mention *Paradise Lost* since I am currently going through the process of pulling it off a tape and formatting it for my purposes. I think that he seriously overestimates the effort required to use existing text data and underestimates the effort required to scan and correct even a simple text. The materials stored at Oxford, Packard and ARTFL in any condition can be corrected, coded and formatted much faster than starting from hardcopy. Mark From: MCCARTY@UTOREPAS Subject: A national archive for the U.S. (97 lines) Date: 9 December 1987, 19:05:35 EST X-Humanist: Vol. 1 Num. 536 (536) Contributed by Jack Abercrombie We have been following with much interest the discussion on establishing a national archive center similar in some respects to the Oxford Archive.
Many of you are not aware that some four years ago the staff of the Center for Computer Analysis of Texts (CCAT) submitted a working proposal to the National Endowment for the Humanities advocating the establishment of a US national center for textual studies (including archive). From the comments then received as the proposal was circulated by NEH as well as other comments received at the Grenell conference (1985) where our draft proposal was discussed by fifteen representatives from national and international centers, the following conclusions seemed accurate. First, a national center at that time did not have a "snowball's prayer in hell" of coming into being given the general lack of collaboration and cooperation amongst US institutions and their faculties on this very issue. Second, regional centers, an idea originally proposed to us by the late Art Hanson from Princeton University (1982), would be a better approach in that a regional/discipline specific center can concentrate on a few tasks and do them well with limited resources. With this in mind, we established the Center for Computer Analysis of Texts (1984). CCAT has focused on three specific areas both for internal and external In the spring of 1986 at the University of Toronto, we again proposed to the representatives of existing and potential centers (in US, Canada and England) that we should share information on our archival holdings, and that we should coordinate more fully our efforts to add texts to our archives as well as in software development. Of the six centers represented at that meeting, there seemed to be general agreement that it was a good idea to try to federate our efforts to avoid duplication and to cut costs in supporting international, accessible archives. Again we proposed to seek funding to make this a reality as well as to solve some other minor problems within the proposed consortium. 
Our hastily prepared proposal to begin implementing these ideas was submitted to NEH and severely criticized by some reviewers. We accepted the reality here, and have proceeded to work with other equally concerned institutions to make them aware of our archival holdings and to keep them informed on the projects taking place at CCAT (e.g. CD-ROM Project). This chronicle of frustration and also hope, we think, is instructive, because it points out that the ideal (that is, a national archive or even a federated system of archives) may not be realistic given the number and nature of the relevant participants. The reality, regional and discipline specialized centers, continues to grow in many positive ways. Unfortunately from our perspective, we would like to see more coordination than is possible as long as we work within the blinders of discipline, university, nation, etc. At the very least, centers should be sharing, as some already do, information on their archival holdings and additions to their archives whether by acquisition or data entry. (NOTE: To obtain information on CCAT's accessible archive, request an information sheet from CCAT, Box 36 College Hall, Philadelphia, PA 19104.) Centers should also foster new ways for cooperation and collaboration. Towards this end, the Center for Computer Analysis of Texts in coordination with the Computer Assisted Research Group (CARG) of the Society of Biblical Literature has begun an ambitious project to prepare an archival list of biblical and other material deemed relevant to CARG members. A first step will be to build an archival list along the lines of the information submitted by CCAT to the Rutgers Inventory Project. The second step will be acquiring copies of the texts not in CCAT's archives and placing that information in the same, consistent format (that is, the present format or a future format as is being discussed) of all the other material in CCAT's accessible archive. Prepared by John R.
Abercrombie (Assistant Dean for Computing and Director of the Center for Computer Analysis of Texts) with cooperation from Robert Kraft (Coordinator of External Affairs CCAT and Director of CARG) From: MCCARTY@UTOREPAS Subject: Correction (re: Yaacov Choueka's affiliation) Date: 9 December 1987, 19:29:44 EST X-Humanist: Vol. 1 Num. 537 (537) Contributed by "Michael Sperberg-McQueen" In the posting about the Vassar conference for planning the basic structure of the ACH/ACL/ALLC text encoding guidelines, Yaacov Choueka's affiliation was wrongly given. It should read: Institute for Information Retrieval and Computational Linguistics, and Department of Mathematics and Computer Science, Bar-Ilan University. I apologize for the error. From: MCCARTY@UTOREPAS Subject: Warning about another Christmas virus Date: 9 December 1987, 19:30:58 EST X-Humanist: Vol. 1 Num. 538 (538) Contributed by "Michael Sperberg-McQueen" We've already had several score, by now probably a few hundred copies of this turning up here; it may reach you next. If you are at a CMS site and receive a program called CHRISTMA EXEC, please (a) warn your postmaster and (b) discard the exec (or keep a copy for the postmaster to look at, but DO NOT RUN IT). This exec paints a Christmas tree on your screen and then sends itself to everyone named in either your NAMES or NETLOG files. The result is potentially serious stress on Bitnet and on your local spool system, and possibly a few system crashes here and there as the number of reader files soars and exceeds the maximum. The Christmas tree isn't all that pretty, and the joke is pretty mean. A word to the wise. Your postmaster will thank you. Michael Sperberg-McQueen, UIC From: MCCARTY@UTOREPAS Subject: archives-coding-texts Date: 9 December 1987, 19:35:31 EST X-Humanist: Vol. 1 Num.
539 (539) Contributed by Bob Kraft I really must finish up the task of insuring consistent ID coding for the dozens of texts on the forthcoming CCAT-PHI CD-ROM, or I would plunge in at length on the current Humanist discussions. Meanwhile, I will take a minute to UNDERSCORE the comments of Mark Olsen. It is hard for me to conceive of a situation in which it would be more efficient to rekey or to scan anew a text already extant in some electronically readable form. I also have *Paradise Lost* in the CCAT Archive, and formatted it into TLG Beta Code ID form last week, checking the results against a library edition. It probably took me about an hour, including making sure that every line began in upper case and that the "paragraph" type breaks in the poetry were indicated. This text will be on the CD-ROM and is available on IBM diskette to anyone who would like it for $25 (CCAT minimum charge) and who agrees (by signing the CCAT Users Contract) to use it non-commercially and responsibly. One person's "rubbish" is another's treasure. Some of the happiest hours of my weekend life have been spent in junkyards. Incidentally, CCAT texts come with a "convert" program to permit the user to change the file so that explicit book/line locators are inserted at the left margin of each line. This type of software development permits us to be consistent and frugal about coding the IDs without inconveniencing the user who might otherwise be mystified by the implicit nature of the ID system. To leave that task in the hands of others made no sense to us. We will handle "SGML-type" markup requests similarly, for existing textual materials. If people want concrete information about the issues raised in the current HUMANIST discussions, just ask. Few of the issues are hypothetical, at least to those of us already engaged in archiving, (re)coding, formatting, and distributing -- not to mention searching for funding and other types of support! 
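Kraft's "convert" program, which turns an implicit ID scheme into explicit book/line locators at the left margin of each line, can be imagined roughly as follows. This is an illustrative sketch only; the `book.line` locator format is my assumption, and CCAT's actual conventions may differ:

```python
def insert_locators(lines, book):
    """Prefix each record with an explicit 'book.line' locator at the
    left margin, making an implicit line-numbering scheme explicit
    for readers who would otherwise be mystified by implicit IDs."""
    return [f"{book}.{n} {text}" for n, text in enumerate(lines, start=1)]
```

For example, feeding in the opening lines of a book of *Paradise Lost* would produce lines beginning "1.1", "1.2", and so on, without the locators ever having been stored in the archived file itself.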
Bob Kraft From: MCCARTY@UTOREPAS Subject: Rage for chaos, or, in praise of polymorphic encoding (50 lines) Date: 9 December 1987, 21:31:31 EST X-Humanist: Vol. 1 Num. 540 (540) Contributed by Sebastian Rahtz I have just been reprieved from the gallows! I had approx 110 HUMANIST messages from the last couple of weeks in my mailbox which I hadn't really read, and I had been planning to print out the whole lot and read it at home tonight. Due to a combination of unfortunate circumstances (daily backup, me reading my mail etc), I have now lost the whole damned lot! I feel so relieved! Can I put in my trivial pennyworth, though? Let's face it, the reason I dislike SGML so much is that it's UGLY. But however important it all is, could those who care about text archives gather in a corner away from HUMANIST for a while? I was under the impression that there was a conference about it recently, so is there a need for the same people to discuss it in public..... it all reminds me of archaeology. Some years ago field archaeologists in Britain used to bicker at every opportunity about standardisation of recording methods, and all the same arguments were trotted out every time. No-one ever agreed, various people said they would set up global answers, and even now there remains a multiplicity of schemes. Why did it all fail? Because the problem was really that people didn't know why they were collecting the data in the first place.... I for one no longer believe in absolute recording; I believe that each excavation record, or each encoded text, is a reflection of its creator, not the real world. But I apologize for dipping my toe in the text-encoding water; I vote for chaos, though, when the chips are down. Why? Because I used to be an archaeologist, and therefore I am interested in historical processes not in fossilisation.
In the same way that I would have anyone who wanted walk all over Stonehenge because 20th C destruction of monuments is itself archaeology, so I wouldn't shed many tears if Lou Burnard's archive went up in flames (sorry, Lou), because the variety of texts lost is itself interesting (would we compare the loss of Lou's tapes to the destruction of Alexandria?). People who try and impose 'standards' on the world are basically misguided--variety is the spice of life. Sorry, is there some NEED to analyse all texts in the world NOW that I am not aware of? And there was I thinking scholarship was only a joke..... sebastian rahtz, computer science, southampton, UK From: MCCARTY@UTOREPAS Subject: More Markup etc. (62 lines) Date: 9 December 1987, 21:50:33 EST X-Humanist: Vol. 1 Num. 541 (541) Contributed by amsler@flash.bellcore.com (Robert Amsler) I must admit I do have many reservations about the feasibility of coming up with a universal metalanguage for all markup schemes. I think this is first an empirical problem though, not a theoretical one. We need to know what markup systems are in use and how much text is/could be available in these systems. That will determine how much effort should be made to accommodate their markup system in any future standard. The exact trade-off point between developing a parser to read text marked up in an inadequate markup language and then adding `useful' markup vs. starting over and typing the text in with the `right' markup is a hard one to specify. Dictionaries are on the `use any machine-readable copy' side by a massive amount (i.e. probably the data entry effort is ten or a hundred times the effort of the `figure out how to use what they have' effort). However there is still another issue here, the likelihood that anyone else will want to mark up the text in a manner that you would find completely satisfying.
There strikes me as being a large range of variation in descriptive markup, from noting simple text units to full interpretive tagging of historical and symbolic meaning `believed' to be associated with certain parts of a text. The inference I get from James Coombs' side is that there is somehow an easily understood common agreement as to what should be marked in a text. I am not certain I agree with that when one leaves the domain of markup which recreates the visible form of the original document and enters the interpretive tagging area. In fact, I would define `inadequate' markup as markup from which one cannot recreate the original form of the document--regardless of whether it is descriptive or procedural markup. What I've been concerned about is that one cannot tag a text with all the descriptive markup that everyone might want to be there. Could anyone imagine a historic text being published with ALL the commentary upon its meaning being interspersed in the text? We'd have to have tags with authors names on them and maybe even dates. I think perhaps what is needed is a means of integrating interpretive tags with a rather sparsely marked up version of a document. That is, having a tag set which is stored independently of the text to which it refers and which can readily be sorted into the linear sequence of the document as desired. In fact I even imagine a futuristic world in which a scholar can distribute ONLY their tag set for a well-known work, such that the recipients can study it with their copy of the original text on a variety of software and hardware systems. Some might simply elect to have the `annotated' text printed out on paper for study--others to have it loaded into a hypertext system for interactive reading on-line. Robert Amsler Bellcore Morristown, NJ From: MCCARTY@UTOREPAS Subject: Heisenbergian mark-up (46 lines) Date: 9 December 1987, 23:06:31 EST X-Humanist: Vol. 1 Num.
542 (542) Contributed by Willard McCarty Here's a brief and probably one-sided observation about textual mark-up, offered by someone interested chiefly in the themes and images of literary texts rather than in their syntactic structures or physical features. Regardless of the medium, when I mark up a text for interpretation I am doing something like reading it, that is, taking it in, attaching to its words things I know, discover, or think about, and preserving all that along with the original text. I want to mark up my own text because (a) marking-up in my sense is primarily an intellectual, not a mechanical activity, and (b) it is utterly dependent on some hypothetical construct I have or am developing. (Building this construct may owe a debt to things that can be counted, hence "objectively" tagged, but the construct cannot be verified by relating it to countable things.) At the same time I must always keep a clear distinction between the words as the author or editor has given them and my own interpretive additions; and if I'm doing this electronically with proper software, I have the liberty of easily erasing the remnants of interpretation I no longer respect. Note that I am not making a distinction here between an "objective" text and "subjective" commentary; that distinction misses the point of literary criticism altogether. So, I don't want anybody's scheme for marking up (in my sense), and I don't expect my marked up text to be of interest to anybody either. Nevertheless, if I'm successful, the final result (an essay or book) will say something valuable to others. Can it be said that there are aspects of textual mark-up that do not have to take interpretation into account at all? Sebastian Rahtz has suggested that there aren't. Willard McCarty, Univ. of Toronto (mccarty@utorepas.bitnet) From: MCCARTY@UTOREPAS Subject: Copyright-free Texts Wanted (130 lines) Date: 10 December 1987, 09:15:25 EST X-Humanist: Vol. 1 Num.
543 (543) Contributed by amsler@flash.bellcore.com (Robert Amsler) One project I and some others at Bellcore are interested in is an effort to integrate a dictionary containing citations to texts with the texts themselves. The OED is the dictionary we have in mind, though I am also working with the Century Dictionary (not yet in machine-readable form) and other dictionaries such as the Collins English Dictionary, the Merriam-Webster Seventh Collegiate Dictionary and the Oxford Advanced Learners Dictionary. `Integrate' here means to provide access to the complete textual work from the dictionary definitions and, vice versa, to provide access to the definitions from within the textual work. This is being envisioned as a form of hypertext access. The primary requirement is to obtain textual works which are cited in these dictionaries--which basically means most classical works in English. Appended to this message is a list of the most frequently cited authors and works from the OED (compiled by actually searching the OED database thanks to Frank Tompa's help).
29140 citations - Shakespeare
  1311 citations - Hamlet (1600-1)
  1034 citations - Love's Labour's Lost (1594-5)
   906 citations - 2 Henry IV (1590-91)
   877 citations - Merchant of Venice (1596-7)
   874 citations - King Lear (1605-6)
   868 citations - The Tempest (1611-12)
   865 citations - Romeo and Juliet (1594-5)
   862 citations - 1 Henry IV (1597-8)
   846 citations - Macbeth (1605-6)
   841 citations - Henry V (1598-9)
   834 citations - Othello (1604-5)
   821 citations - Merry Wives of Windsor (1599-1600)
   801 citations - Midsummer Night's Dream (1595-6)
   794 citations - King John (1596-7)
   779 citations - Richard III (1592-3)
   778 citations - Troilus and Cressida (1601-2)
   775 citations - As You Like It (1599-1600)
   705 citations - Measure for Measure (1604-5)
15499 citations - Scott, Sir Walter
   890 citations - The Heart of Midlothian (1817) [Novel]
   880 citations - The Fair Maid of Perth (1828) [Novel]
   694 citations - Guy Mannering (1815) [Novel]
   644 citations - The Antiquary (1816) [Novel]
   616 citations - Kenilworth (1821) [Novel]
   599 citations - Lady of the Lake (1810) [Poem]
   592 citations - Waverley (1814) [Novel]
   543 citations - Rob Roy (1817) [Novel]
   532 citations - Old Mortality (1816) [Novel]
   490 citations - Marmion (1808) [Poem]
   474 citations - The Monastery (1820) [Novel]
   428 citations - Ivanhoe (1820) [Novel]
   405 citations - Quentin Durward (1823) [Novel]
   344 citations - Lord of the Isles (1815) [Novel]
   328 citations - Woodstock (1826) [Novel]
11967 citations - Milton, John
  4945 citations - Paradise Lost
   648 citations - Samson Agonistes (1671) [Poem]
   640 citations - Paradise Regained (1671)
   625 citations - Comus (1634) [Poem] (A Maske presented at Ludlow Castle 1634: on Michaelmasse night etc.)
11000 citations - Chaucer
  1238 citations - Troylus (Troilus ? and Criseyde) (1382?) [8200 line poem]
   986 citations - (Translation of Boeth(ius)'s ``Consolation of Philosophy'') (1380?)
[Prose]
   877 citations - The Legend of Good Women (1382)
   663 citations - Prologue (to The Legend of Good Women)
   549 citations - The Knight's Tale
   506 citations - The House of Fame
10759 citations - Wyclif
  1166 citations - Selected Works
  1072 citations - Works
   713 citations - Sermons
   474 citations - Genesis
   420 citations - Isa
   413 citations - Matt
   315 citations - Ecclus
   306 citations - Ps
   278 citations - Luke
   265 citations - Prov
 9554 citations - Caxton
  1282 citations - The Golden Legend (1483)
   718 citations - The Foure Sonnes of Aymon (1489?)
   668 citations - The boke yf (= of) Eneydos (1490)
   639 citations - The chronicles of englond (1480)
   610 citations - The historie of Jason (1477)
   457 citations - Geoffroi de la Tour l'Andri (the knyght of the toure) (1483)
   399 citations - The historye of reynard the foxe (1481)
   399 citations - The book of fayttes of armes and of chyualrye (1483)
 8745 citations - Dryden
11041 citations - Cursor Mundi
 5385 citations -
Sebastian Rahtz says: People who try and impose 'standards' on the world are basically misguided--variety is the spice of life. This may be true, but I should point out that no one on HUMANIST is trying to impose standards or anything else on anyone. As if anyone could! And if anyone must rely on the differences between TLG and SGML methods for encoding chapter headings to give spice to intellectual life, humanities computing is in even deeper trouble than I thought. Willard McCarty says: So, I don't want anybody's scheme for marking up [...] and I don't expect my marked up text to be of interest to anybody either. [...] At the same time I must always keep a clear distinction between the words as the author or editor has given them, and if I'm doing this electronically with proper software, I have the liberty of erasing easily the remnants of interpretation I no longer respect.
Would a conventional set of markup rules restrict one's freedom more than the conventional alphabets and syntax we already use? But the crucial point is that the "proper software" you describe cannot do its work without *some* encoding scheme. We have the choice of all of us developing software independently, so as to ensure that we use different schemes and make certain that once you have marked up your text with your software you cannot concord it with my software, and vice versa, or we can try to find a framework that allows sharing and flexibility. It is not standardization but chaos that produces rigidity and destroys freedom. -Michael Sperberg-McQueen University of Illinois at Chicago From: MCCARTY@UTOREPAS Subject: "electronic paralanguage" (100 lines) Computer-mediated Communication Date: 10 December 1987, 12:23:15 EST / Sun, 6-DEC-1987 12:38 EST Janet F. Asteroff X-Humanist: Vol. 1 Num. 545 (545) From Dr Abigail Ann Young 1-416-585-4504 YOUNG at UTOREPAS This notice appeared in IRList, and I found it sufficiently interesting to pass along to HUMANIST. I apologise in advance to those who will be getting it twice! ********************************************** . . . I recently completed my dissertation on paralanguage in electronic mail, the abstract of which is appended to this posting. I found, among the 16 people I studied, many forms of "extra expression" in the form of "paralanguage." Ultimately, I documented enough differences between writing on the computer and writing through other media to identify it as "electronic paralanguage" with its own formal definition. Many people believe that face-to-face communication is the richest form of communication because of the variety of signals and channels, and as well the potential for channel redundancy. I have no problem with this assumption. I do, however, take issue with comparing other forms of communication to face-to-face and then judging any other medium as "information poor."
Some scholars of computer-mediated communication carry this negative frame of reference over to their own work. While the computer may not provide as many channels as face-to-face communication, and the channel itself may be somewhat more limited, there is considerable research to indicate that computer users have done some interesting things to convey their meaning and message. Since I am not a fan of clogging up boards with long messages, anyone interested in my work can contact me directly at Asteroff@cutcv1.bitnet and I will be happy to send you more material. The dissertation is also available through University Microfilms: Janet F. Asteroff, "Paralanguage in Electronic Mail: A Case Study." Teachers College, Columbia University, May, 1987. /Janet (Asteroff@cutcv1.bitnet) ABSTRACT PARALANGUAGE IN ELECTRONIC MAIL: A CASE STUDY Janet F. Asteroff This study explores the use of paralanguage in electronic mail communication. It examines the use of paralanguage according to the electronic mail and computing experience and technical expertise of 16 library science graduate students who fall into two groups by rank of experience, novice and advanced. These respondents used electronic mail in a non-elective and task-related situation to communicate with their instructor. This case study is based on a multi-level qualitative content analysis of the electronic mail exchanged between the respondents and the instructor, and the attitudes and experiences of the respondents about their use of electronic mail and computers. This research interprets the roles and functions of paralanguage in computer-mediated communication and explores the phenomenon as an indicator of certain kinds of expression. Paralanguage is a component of spoken, written, and electronic communication. It gives to what is being communicated a character over and above that which is necessary to convey meaning in the linguistic or grammatical sense. 
Paralanguage in electronic mail is positioned between spoken and written paralanguage in its visual and interpretive structures. Electronic paralanguage, a term developed to describe paralanguage in computer-mediated communication, is defined. Electronic paralanguage is revealed to be a component of communication which in some situations showed substantial differences by the rank of the respondent, as well as differences in individual behaviors. Novice respondents used more paralanguage in more types of messages than did advanced respondents. Electronic paralanguage also provides a robust picture of the character of communication. The use of exclamation points by novice respondents in task-related messages showed that electronic paralanguage can in certain cases be a general measure of stress and experience, as well as being a precise indicator of different kinds of positive and negative psychological stress. ------------------------------ From: MCCARTY@UTOREPAS Subject: More on ACH Standards for Markup (63 lines) Date: 10 December 1987, 16:25:16 EST X-Humanist: Vol. 1 Num. 546 (546) From Nancy Ide > In response to Jim Coombs' comments and questions: > (1) My messages may have been sent out in the wrong order, but I meant to make it clear that we fully agree with Jim's assertion that we cannot know all of our needs yet, and for that reason the standard will be extensible--and we hope to make user-defined extensions far easier to deal with than AAP does. But as Bob Amsler pointed out, a standard which specifies so little that most researchers end up inventing their own tags anyway is not of much use. Bob also mentioned that our subcommittees are going to have to work closely together to avoid redundancy and to make note of places where alternate descriptions, related to different applications, describe what may be physically the same thing--something Jim was concerned about.
I think Bob's idea of a "data dictionary" is excellent and I hope we can implement it in the course of development. (2) By "multiple parallel hierarchies" I mean something along the lines of what Jim outlined--except that his examples all use nicely nested entities. The problems arise when we have overlapping entities, for example, "<np>the <t>dog</np> ran</t>", where "<np>" and "</np>" mark the beginning and end of a noun phrase, respectively, and "<t>" and "</t>" mark the beginning and end of another entity--say, a thematic unit of some kind. The context-free syntax of SGML cannot handle this, and so special mechanisms are required to enable multiple tag sets in which overlapping entities may be specified. As I mentioned, these exist in SGML but are not entirely straightforward, from my understanding. (3) The ultimate goal of an attempt to develop some formal description of existing schemes is to facilitate the development of translation programs to translate old formats into the new one. I sympathize with Jim's feeling that we shouldn't spend so much time converting the past, but I also understand, after spending 48 hours with the keepers of the European and Middle-eastern archives which house millions of words of machine-readable text, that it is not possible to mount this effort without considering what to do about texts that already exist in machine-readable form. I should also point out that at the end of the two-day meeting in Poughkeepsie we had a discussion about establishing a North American archive, but by that time many participants had left and those who remained had little energy left to address the issue vigorously. However I understood those who spoke to say that they didn't feel the need to establish such an archive, and that in any case the Oxford model (where no guarantees of quality are made) is sufficient.
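The overlap problem Ide describes is easy to demonstrate in miniature: a context-free (SGML-style) parser in effect matches start and end tags with a stack, so properly nested tags validate and overlapping ones do not. A toy well-formedness checker (the tag names and regular-expression tag syntax here are illustrative, not SGML's actual grammar):

```python
import re

def properly_nested(markup):
    """Return True if start/end tags nest as a context-free grammar
    requires; overlapping (crossed) tags fail the check."""
    stack = []
    for is_close, name in re.findall(r"<(/?)(\w+)>", markup):
        if not is_close:
            stack.append(name)          # open tag: push
        elif stack and stack[-1] == name:
            stack.pop()                 # matching close tag: pop
        else:
            return False                # crossed or stray close tag
    return not stack                    # all tags closed
```

A noun phrase nested cleanly inside a larger unit passes the check, but a noun phrase that crosses a thematic unit fails it, which is exactly why special mechanisms for multiple concurrent tag sets are needed.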
I personally tend to agree with Jim that an archive--North American or better yet, international--should be established in which texts are "guaranteed," and which more importantly serves as a central clearinghouse. Oxford does this as well as it can now but without considerably more funding cannot more vigorously pursue the acquisition and creation of machine-readable texts nor ensure that they are both correct and tagged in a standard form. Nancy Ide ide@vassar From: MCCARTY@UTOREPAS Subject: A new HUMANIST GUIDE (268 lines) Date: 10 December 1987, 23:11:09 EST X-Humanist: Vol. 1 Num. 547 (547) Dear Colleagues: A somewhat revised version of the guide to HUMANIST follows. As always your comments are welcome, to mccarty@utorepas.bitnet. Yours, W.M. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ A Guide to HUMANIST +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ C O N T E N T S I. Nature and Aims II. How to use HUMANIST A. Sending and receiving messages B. Conventions and Etiquette C. Distributing files D. ListServ's commands and facilities E. Suggestions and Complaints From: Subject: Date: X-Humanist: Vol. 1 Num. 548 (548) I. Nature and aims From: Subject: Date: X-Humanist: Vol. 1 Num. 549 (549) Welcome to HUMANIST, a Bitnet/NetNorth/EARN discussion group for people who support computing in the humanities. Those who teach, review software, answer questions, give advice, program, write documentation, or otherwise support research and teaching in this area are included. Although HUMANIST is intended to help these people exchange all kinds of information, it is primarily meant for discussion rather than publication or advertisement. HUMANIST is an activity of the Special Interest Group for Humanities Computing Resources, which is in turn an affiliate of both the Association for Computers and the Humanities (ACH) and the Association for Literary and Linguistic Computing (ALLC). 
Although participants in HUMANIST are not required to be members of either organization, membership in them is highly recommended. HUMANIST currently has more than 170 members in 10 countries around the world. In general, HUMANISTs are encouraged to ask questions and offer answers, to begin and contribute to discussions, to suggest problems for research, and so forth. One of the specific motivations for establishing HUMANIST was to allow people involved in this area to form a common idea of the nature of their work, its requirements, and its standards. Institutional recognition is not infrequently inadequate, at least partly because computing in the humanities is an emerging and highly cross-disciplinary field. Its support is significantly different from the support of other kinds of computing, with which it may be confused. It does not fit easily into the established categories of academia and is not well understood by those from whom recognition is sought. Apart from the general discussion, HUMANIST encourages the formation of a professional identity by maintaining an informal biographical directory of its members. This directory is automatically sent to new members when they join. Supplements are issued whenever warranted by the number of new entries. Members are responsible for keeping their entries updated. Those from any discipline in or related to the humanities are welcome, provided that they fit the broad guidelines described above. Please tell anyone who might be interested to send a message to me, giving his or her name, address, telephone number, and a short biographical description of what he or she does to support computing in the humanities. This description should cover academic background and research interests, both in computing and otherwise; the nature of the job this person holds; and, if relevant, its place in the university. Please direct applications for membership in HUMANIST to MCCARTY@UTOREPAS.BITNET, not to HUMANIST itself. 
From: Subject: Date: X-Humanist: Vol. 1 Num. 550 (550) II. How to Use HUMANIST From: Subject: Textual archives and encoding (45 lines) Date: X-Humanist: Vol. 1 Num. 551 (551) A. Sending and receiving messages ----------------------------------------------------------------- Although HUMANIST is managed by software designed for Bitnet/NetNorth/EARN, members can be on any comparable network with access to Bitnet, for example, Janet or Arpanet. Users on these networks suffer only slight restrictions, which will be mentioned below. Submissions to HUMANIST are made by sending electronic mail as if to a person with the user-id HUMANIST and the node-name UTORONTO. All valid submissions are sent to every member, without exception. The editor of HUMANIST screens submissions only to prevent the inadvertent distribution of junk mail, which would otherwise be a serious problem for such a highly complex web of individuals using a wide variety of computing systems linked together by several different electronic networks. The editor will usually pass valid mail on to the membership within a few hours of submission. The volume of mail on HUMANIST varies with the state of the membership and the nature of the dominant topic, if any. Recent experience shows that as many as half a dozen or more messages a day may be processed. For this reason members are advised to pay regular, indeed frequent attention to their e-mail or serious overload may occur. A member planning on being away from regular contact should advise the editor and ask to be temporarily removed from active membership. The editor should be reminded when active membership is to be resumed. The editor also asks that members be careful to specify reliable addresses. In some cases the advice of local experts may help. Any member who changes his or her userid or nodename should first give ample warning to the editor and should verify the new address. 
If you know your system is going to be turned off or otherwise adjusted in a major way, find out when it will be out of service and inform the editor. Missed mail can be retrieved, but undelivered e-mail will litter the editor's mailbox. [Please note that in the following description, commands will be given in the form acceptable on an IBM VM/CMS system. If your system is different, you will have to make the appropriate translation.] ----------------------------------------------------------------- B. Conventions and Etiquette ----------------------------------------------------------------- Conversations or asides restricted to a few people can develop from the unrestricted discussions on HUMANIST by members communicating directly with each other. This may be a good idea for very specific replies to general queries, so that members are not burdened with messages of interest only to the person who asked the question and, perhaps, a few others. Members have, however, shown a distinct preference for unrestricted discussions on nearly every topic, so it is better to err on the side of openness. If you do send a reply to someone's question, please restate the question very briefly so that the context of your answer will be clear. [Note that the REPLY function of some electronic mailers will automatically direct a user's response to the editor, from whom all submissions technically originate, not to the original sender or to HUMANIST. Thus REPLY should be avoided in many cases.] Use your judgment about what the whole group should receive. We could easily overwhelm each other and so defeat the purpose of HUMANIST. Strong methods are available for controlling a discussion group, but the lively, free-ranging discussions made possible by judicious self-control seem preferable. Controversy itself is welcome, but what others would regard as tiresome junk mail is not. Courtesy is still a treasured virtue.
Make it an invariable practice to help the recipients of your messages scan them by including a SUBJECT line in your message. Be aware, however, that some people will read no more than the SUBJECT line, so you should take care that it is accurate and comprehensive as well as brief. If you can, note the length of your message in the subject line. The resulting line should look something like this: "The flavour of HUMANIST (22 lines)". Use your judgment about the length of your messages. If you find yourself writing an essay or have a substantial amount of information to offer, it might be better to follow one of the two methods outlined in the next section. All contributions should also specify the member's name as well as e-mail address. This is particularly important for members whose user-ids bear no relation to their names. ----------------------------------------------------------------- C. Distributing files ----------------------------------------------------------------- HUMANIST offers us an excellent means of distributing written material of many kinds, e.g., reviews of software or hardware. (Work is now underway to provide this service for reviews.) Although conventional journals remain the means of professional recognition, they are often too slow to keep up with changes in computing. With some care, HUMANIST could provide a supplementary venue of immediate benefit to our colleagues. There are two possible methods of distributing such material. More specialized reports should probably be reduced to abstracts and posted in this form to HUMANISTs at large, then sent by the originators directly to those who request them. The more generally interesting material in bulk can be sent in an ordinary message to all HUMANISTs, but this could easily overburden the network so is not generally recommended. We are currently working on a means of centralized storage for relatively large files, such that they could be fetched by HUMANISTs at will, but this means is not yet fully operational.
At present the only files we are able to keep centrally are the monthly logbooks of conversations on HUMANIST. See the next section for details. ----------------------------------------------------------------- D. ListServ's Commands and Facilities ----------------------------------------------------------------- As just mentioned, ListServ maintains monthly logbooks of discussions. Thus new members have the opportunity of reading contributions made prior to joining the group. To see a list of these logbooks, send the following command: TELL LISTSERV AT UTORONTO SENDME HUMANIST FILELIST Note that in systems or networks that do not allow interactive commands to be given to a Bitnet ListServ (I will call such systems "non-interactive"), the same thing can be accomplished by sending a message to HUMANIST with the command as the first and only line, which should read as follows: GET HUMANIST FILELIST The logbooks are named HUMANIST LOGyymm, where "yy" represents the last two digits of the year and "mm" the number of the month. The log for July 1987 would, for example, be named HUMANIST LOG8707, and to get this log on a system that supports interactive commands to HUMANIST you would issue the following: TELL LISTSERV AT UTORONTO GET HUMANIST LOG8707 On a non-interactive system, you would send HUMANIST a message with the following line: GET HUMANIST LOG8707 Note that on a non-interactive system as many of these one-line commands as you wish can be put in a message to HUMANIST. ListServ accepts several other commands, for example to retrieve a list of the current members or to set various options. These are described in a document named LISTSERV MEMO. This and other documentation will normally be available to you from your nearest ListServ node and is best fetched from there, since in that way the network is least burdened.
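[The logbook naming scheme just described is mechanical enough to sketch. The following illustration -- in Python, not part of the original guide; the function names are invented for the example -- builds the HUMANIST LOGyymm name and the corresponding one-line retrieval command in both the interactive (IBM VM/CMS TELL) and non-interactive (mail body) forms given above.]

```python
def log_name(year, month):
    """Build a ListServ monthly logbook name: HUMANIST LOGyymm."""
    # "yy" is the last two digits of the year, "mm" the month number
    return "HUMANIST LOG%02d%02d" % (year % 100, month)

def get_command(year, month, interactive=True):
    """One-line command to fetch a monthly log, in interactive
    (TELL) or non-interactive (first line of a mail message) form."""
    request = "GET " + log_name(year, month)
    if interactive:
        return "TELL LISTSERV AT UTORONTO " + request
    return request

print(log_name(1987, 7))                        # HUMANIST LOG8707
print(get_command(1987, 7, interactive=False))  # GET HUMANIST LOG8707
```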
You should consult with your local experts to discover the nearest ListServ; they will also be able to help you with whatever problems in the use of ListServ you may encounter. Once you have found the nearest node XXXXXX, type the following: TELL LISTSERV AT XXXXXX INFO ? or, on a non-interactive system: INFO ? The various documents available to you will then be listed. ----------------------------------------------------------------- E. Suggestions and Complaints ----------------------------------------------------------------- Suggestions about the running of HUMANIST or its possible relation to other means of communication are very welcome. So are complaints, particularly if they are constructive. Experience has shown that an electronic discussion group can be either beneficial or burdensome to its members. Much depends on what the group as a whole wants and does not want. Please make your views known, to everyone or to me directly, as appropriate. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Willard McCarty, 8 December 1987 Editor of HUMANIST, Centre for Computing in the Humanities, University of Toronto (MCCARTY@UTOREPAS.BITNET) From: MCCARTY@UTOREPAS Subject: Reply to Rahtz's pro-chaos message (30 lines) Date: 10 December 1987, 23:16:27 EST X-Humanist: Vol. 1 Num. 552 (552) From Robert Amsler Actually, `THE' standard for encoding text already exists, so I'm afraid it doesn't really matter whether we like it or not. I speak of SGML itself, ISO standard 8879, which was approved Sept., 1986. The AAP (Association of American Publishers) also has completed work on an application of the SGML standard, the so-called AAP standard--which itself will soon be adopted, whether humanities computing professionals care or not. The good news for chaos fans is that so far very few publishers have made much of an effort to convert to the AAP standard or to pledge to make their data available in that standard.
Some agencies of the US government are making noises about accepting electronic texts in the standard (such as NSF, NLM, etc.) and some software merchants (SoftQuad) have marketed programs which use the standard for typesetting, editing, etc. So... what remains? The AAP standard (and SGML itself) is based on the concept of `document types' having their own appropriate set of `tags'. The document types which have been created are only the most generic sort for magazine articles and books--though they contain specs for tables and mathematical equations. The humanities community has expressed no preferences so far, such as developing its own document types for things such as plays, poetry, etc. The stage is set for humanists to have a voice in the future of publishers' formats. From: MCCARTY@UTOREPAS Subject: The flavour of HUMANIST (22 lines) Date: 11 December 1987, 09:00:16 EST X-Humanist: Vol. 1 Num. 553 (553) [The following was sent me by Sebastian Rahtz. It is a quotation from the Archaeological Information Exchange and applies well to the kind of thing HUMANIST is, or at least what I think it is. It is quoted with gratitude but without permission. --ed.] "An archaeological information exchange network should avoid programmatic constraints, thereby maintaining the sense of immediacy, the ebb and flow of discourse and activity which represent the situational flux of daily life, while at the same time providing explicit points of reference in order to prevent total chaos." [Brian Molyneaux] From: MCCARTY@UTOREPAS Subject: Text Archives/Markup/TeX Date: 11 December 1987, 10:22:59 EST X-Humanist: Vol. 1 Num. 554 (554) From dow@husc6.BITNET (Dominik Wujastyk) Just a point of information, relating to the current debate about markup systems and text archives. It was mentioned by Michael Sperberg-McQueen that the First Folio has to be encoded right down to the typographical level in order to be of maximum use.
This reminded me of an on-line database of mathematical abstracts offered by the American Mathematical Society, called MathSci (if I remember correctly). The whole (large) database is encoded in TeX, and a micro implementation of TeX is sold/given to all subscribers to the system. You dial up, search the database, and download whatever you want, or can afford. Then you run your entries through TeX, which is sitting there quietly on your hard disk, and presto, you have a typeset version of your mathematical texts. You can view it on your screen using a DVI previewer, or print it out to paper on anything from a 9-pin matrix printer to a phototypesetter. The important thing in this is the different levels of encoding being represented. The TeX markup specifies the main structural elements of the datum, but the macro package that is located with the TeX implementation (AMSTeX) controls the interpretation of the tags in the database, right down to the positioning of individual characters on the output medium. Just a thought. Dominik Wujastyk dow@harvunxw.bitnet From: MCCARTY@UTOREPAS Subject: Author's query on scholars and telecommunications (25 lines) Date: 12 December 1987, 10:57:48 EST X-Humanist: Vol. 1 Num. 555 (555) From Terry Erdt For a book forthcoming from Paradigm Press, entitled The Electronic Scholar's Resource Guide, I am putting together a piece on telecommunications, which will include bulletin board systems, libraries with catalogs capable of dial-up connections, Humanet on Scholarsnet, BRS and Dialog, some forums on CompuServe, Bitnet's IRList Digest, as well as, of course, Humanist. I would appreciate any suggestions for broadening the scope of coverage as well as any information about specific resources. Terrence Erdt Erdt@vuvaxcom.Bitnet Technical Review Editor Computers and the Humanities Grad. Dept. 
of Library Science Villanova University Villanova, PA 19085 From: MCCARTY@UTOREPAS Subject: No text or archive; joy in 29 lines Date: 12 December 1987, 14:59:03 EST X-Humanist: Vol. 1 Num. 556 (556) From Marshall Gilliland * * * * * ** ** ** ** * * * * * / \ / v \ Seasons *-- --* Happy \ o @ & / / \ / | \ Greetings *-- O --* New / * \ / * \ / % * * * \ and *-- --* Year / * * * * \ / * Saskatoon * *\ / | * | * | * | * \ / O & $ @ \ *-------------------------* | | | | | _&_ % | | Q U | / | |_____| _\@/_ |___| | # | |#SASK| | # == Marshall Gilliland _________________________________________|__#__|_____________________ From: MCCARTY@UTOREPAS Subject: text encoding: a Thesaurus Linguae Graecae user's perspective. Date: 13 December 1987, 13:41:53 EST X-Humanist: Vol. 1 Num. 557 (557) Contributed by Brad Inwood (124 lines) Text mark up and coding have turned out to be THE issues for HUMANISTs to get productively excited about. It is, after all, the most natural focus for the rather loose set of interests shared by those who think of themselves as humanities types. Some observations: The debate about what constitutes an adequate set of tags in a machine-readable text is obviously a reflex of the interests (and discipline) of the researcher. It would be astounding if we could agree about acceptable minimal markup. My own view is that text archives need only hold texts which preserve streams of words, minimal and transparent editorial markup to signal emendations, restorations, etc.; flags for verse and metre; and precise and unambiguous indications of the normal reference format (Bekker pages, columns and lines for Aristotle; line numbers for Greek tragedies; book and line for Homer; or whatever is standard). Where no standard reference style descends to the level of single lines (as within chapters of modern novels) line-level reference should be imposed by the file itself (e.g. 
chapter 3, line 769: 769th line of that chapter in the electronic file) rather than imported from the text one happens to be inputting from -- even if it is what the researcher regards as the best text. If archives are to create standard reference forms where none exist for printed editions (as in this case) they should do so in a manner appropriate to their medium, not the printed medium. researchers who require more markup should bloody well add it themselves and not burden archives with worries about anything more elaborate. My own work with machine readable texts is limited to various materials in the Thesaurus Linguae Graecae text base; my own notions of minimally acceptable coding and entry format stem from this experience. without pausing too much for the rationale in each case, I would extract the following lessons: 1. preservation of information about page breaks and line ends in the original text is not worth the effort. 2. it is particularly bad if one preserves that information at the cost of retaining soft hyphens in the text which are of no semantic significance but a mere product of the typesetter's art. contrary to what everyone says, it is not trivial to strip them out in a global move; more important, software must be made to do fancy tricks not to take line ends alone as separators; to make it ignore hyphens at line end is even harder. yes, I know it can be done, but why must we bother? removing hyphens from such text is the single most difficult and time-consuming job I have had, and the one with the highest risk of introducing errors into an already proof-read text. 3. the TLG preserves page layout information in a fanatical way: e.g. it will tell you that a given line is to be indented by so many tabs, but not that it is a verse quotation. translating the tabs to spaces is easy enough, but why not just have a tag to mark verse and the particular metre? 4.
markup for standard reference style is there in the TLG, but inconsistently implemented from author to author. in the Platonic corpus, for example, Stephanus pages and columns are declared at the head of each dialogue and subsequently incremented by a flag; a special programme is needed to work out that the 79th [x] after a declared [x21] actually means Stephanus page 100. Line numbers are usually suppressed (although they are part of the standard reference format for Plato), but occasionally lines are indicated explicitly. No guidance is given as to why this is so, when the much more important page and column references are so badly handled. 5. never let anyone tell you that a decent proofreading job can be done by someone who does not know the language well or is not reading with attention to the sense. the TLG was keyboarded, not scanned, and yet broken characters in the printed edition used have turned up as the wrong character in the final corrected files (Burnet's Oxford Text of Plato is still running on the original plates and there are a lot of broken characters which no literate reader could mistake; but no one caught the rho with the missing tail: looks just like an omicron and would even have scanned as one; the best visual confirmation based on comparison with the printed source would not reveal an error). I fear that scholars are really going to have to have one round of proofreading everything, so I am pleased that some HUMANISTs feel that keyboarding Milton can be fun. 6. standard coding delimiters are needed which can be regarded as non-separators by software (I guess that means that the opening and closing delimiters must be characters NEVER used as delimiters, parentheses or punctuation marks). Otherwise the coding used to mark a conjectural supplement will break the word when the text stream is analyzed by software. 7.
if you really want the kind of information which would make an electronic text useful for serious editorial work (full apparatus, notation of font changes, change of hands, etc.) then it seems to me you need something more than an electronic text. you probably need a hyper-text system of some sort or a fantastically complex data-base-cum-text. with custom software. I start from the assumption that most users of electronic texts want a clean accurate electronic copy, well-referenced, so that they can mark it up for their own analytical purposes or search for and analyze the words in it. it is a job of an entirely different order to prepare a data base which can be used to help edit a text. the TLG's omission of textual apparatus is much lamented, and reasonably so; but in this case I think they got it right. better to get the text out and make it usable. if they had waited to settle on how to handle the apparatus in the electronic text, (a) we would still be waiting for the TLG, and (b) they would have had, in effect, to re-edit all of ancient Greek literature, not just enter and correct. the editorial talent for that job simply does not exist. the via media of just typing the apparatus which happens to be in the text you choose to keyboard or scan is a perfect example of falling between two stools: not enough to make electronic analysis possible, too much for absolute ease of use and speed of production. this is all pretty bitty, but that is how users' experience tends to come out, I guess. maybe the peon's perspective will be of some use when the theoretical issues threaten to get out of hand -- or proportion. Brad Inwood From: MCCARTY@UTOREPAS Subject: Gaudeamus igitur in about 200 lines Date: 14 December 1987, 23:49:56 EST X-Humanist: Vol. 1 Num. 558 (558) Dear Colleagues: I've never before sent you a list of everyone who belongs to HUMANIST, thinking that you could get this information yourself from ListServ at any time.
This time of year, however, motivates me to do so, I guess in celebration of an unusual, international (though monolingual), once noisy, sometimes argumentative, and to me always interesting fellowship. So, to all of us listed below -- some directly, some hidden in redistribution lists -- I wish a very happy Hanukkah and very merry Christmas. I think the Buddha's birthday is also celebrated about this time of year, and doubtless I have missed other holidays, for which forgive me. Yours, W.M. -------------------------------------------------------------------------- * * HUMANIST Discussion list - created 07 MAY 87 * CJOHNSON@ARIZRVAX Christopher Johnson OWEN@ARIZRVAX David Owen ATMKO@ASUACAD Mark Olsen ATPMB@ASUACAD Pier Baldini CNNMJ@BCVMS M. J. Connolly CHOUEKA@BIMACS Yaacov Choueka ALLEN@BROWNVM Allen H. Renear JAZBO@BROWNVM James H. Coombs ST401742@BROWNVM Timothy Seid HAMESSE@BUCLLN11 Jacqueline Hamesse THOMDOC@BUCLLN11 CETEDOC Belgium WORDS@BUCLLN11 Robert Hogenraad JONES@BYUADMIN Randall Jones ECHUCK@BYUHRC Chuck Bush H_JOHANSSON%USE.UIO.UNINETT@CERNVAX Stig Johansson BJORNDAS@CLARGRAD Sterling Bjorndahl YOUNGC@CLARGRAD Charles M. Young spqr@cm.soton.ac.uk Sebastian Rahtz FKOCH%OCVAXA@CMCCVB Christian Koch PRUSSELL%OCVAXA@CMCCVB Roberta Russell mffgkts@cms.umrcc.ac.uk Tony Smith nash@cogito.mit.edu David Nash epkelly@csvax1.tcd.hea.irl Elizabeth Dowse MCCARTHY@CUA William J. McCarthy JMBHC@CUNYVM Joanne M. Badagliacco RSTHC@CUNYVM Robert S. Tannenbaum TERGC@CUNYVM Terence Langendoen MTKUS@CUVMA Mark Kennedy RCLUS@CUVMA Robert C. 
Lehman SLZUS@CUVMA Sue Zayac cul.henry@cu20b.columbia.edu Chuck Henry cul.lowry@cu20b.columbia.edu Anita Lowry cul.woo@cu20b.columbia.edu Janice Woo humanist@edinburgh.ac.uk Humanist Group r.j.hare@edinburgh.ac.uk Roger Hare cameron@exeter.ac.uk Keith Cameron amsler@flash.bellcore.com Robert Amsler GAUTHIER@FRTOU71 Robert Gauthier DOW@HARVUNXW Dominik Wujastyk GALIARD@HGRRUG5 Harry Gaylord HUET@HUJIPRMB Emanuel Tov ST_JOSEPH@HVRFORD David Carpenter ayi017@ibm.soton.ac.uk Brendan O'Flaherty cpi047@ibm.soton.ac.uk Simon Lane fri001@ibm.soton.ac.uk Sean O'Cathasaigh ayi004@ibm.southampton.ac.uk Brian Molyneaux CONSERVA@IFIIDG Lelio Camilleri GG.BIB@ISUMVS Rosanne Potter S1.CAC@ISUMVS Carol Chapelle sano@jpl-vlsi.arpa Haj Sano nick@lccr.sfu.cdn Nick Cercone bol@liuida.uucp Birgitta Olander RPY383@MAINE Colin Martindale psc90!jdg@mnetor.uucp Joel Goldfield humanist@mts.durham.ac.uk Humanists' Group CHADANT@MUN Tony Chadwick DGRAHAM@MUN David Graham H156004@NJECNVM Kenneth Tompkins FAFEO@NOBERGEN Espen Ore FAFKH@NOBERGEN Knut Hofland collins@nss.cs.ucl.ac.uk Beryl T. Atkins g.dixon@pa.cn.umist.ac.uk Gordon Dixon KRAFT@PENNDRLN Robert Kraft JACKA@PENNDRLS Jack Abercrombie jld1@phx.cam.ac.uk John L. Dawson PKOSSUTH@POMONA Karen Kossuth sdpage@prg.oxford.ac.uk Stephen Page T3B@PSUVM Tom Benson BALESTRI@PUCC Diane P. Balestri RICH@PUCC Richard Giordano TOBYPAFF@PUCC Toby Paff d.mitchell@qmc.ac.uk David Mitchell BARNARD@QUCDN David T. Barnard LESSARDG@QUCDN Greg Lessard LOGANG@QUCDN George Logan ORVIKT@QUCDN Tone Orvik WIEBEM@QUCDN M. G. Wiebe weinshan%cps.msu.edu@relay.cs.net Donald Weinshank GILLILAND@SASK Marshall Gilliland JULIEN@SASK Jacques Julien FRIEDMAN_E@SITVXA Edward A. Friedman JHUBBARD@SMITH Jamie Hubbard ZRCC1001@SMUVM1 Robin C. Cover GX.MBB@STANFORD Malcolm Brown XB.J24@STANFORD John J. 
Hughes ACDRLK@SUVM Ron Kalinoski DECARTWR@SUVM Dana Cartwright bs83@sysa.salford.ac.uk Max Wood A79@TAUNIVM David Sitman lb0q@te.cc.cmu.edu Leslie Burkholder ECSGHB@TUCC George Brett DUCALL@TUCCVM Frank L. Borchardt DYBBUK@TUCCVM Jeffrey Gillette SREIMER@UALTAVM Stephen R. Reimer TBUTLER@UALTAVM Terry Butler USERDLDB@UBCMTSG Laine Ruus EGC4BFD@UCLAMVS Kelly Stack IBQ1JVR@UCLAMVS John Richardson IMD7VAW@UCLAMVS Vicky Walsh IZZY590@UCLAVM George Bing U18189@UICVM Michael Sperberg-McQueen qghu21@ujvax.ulster.ac.uk Noel Wilson BAUMGARTEN@UMBC Joseph Baumgarten J_CERNY@UNHH Jim Cerny CLAS056@UNLCDC3 John Turner FELD@UOFMCC Michael Feld CSHUNTER@UOGUELPH Stuart Hunter AMPHORAS@UTOREPAS Philippa Matheson ANDREWO@UTOREPAS Andrew Oliver ANNE@UTOREPAS Anne Lancashire BRAINERD@UTOREPAS Barron Brainerd ERSATZ@UTOREPAS Harold Chimpden Earwicker IAN@UTOREPAS Ian Lancashire INWOOD@UTOREPAS Brad Inwood MCCARTY@UTOREPAS Willard McCarty ROBERTS@UTOREPAS Robert Sinkewicz STAIRS@UTOREPAS Mike Stairs WINDER@UTOREPAS Bill Winder YOUNG@UTOREPAS Abigail Young ZACOUR@UTOREPAS Norman Zacour humanist@utorgpu.utoronto Humanist Redistribution List S_RICHMOND@UTOROISE S. Richmond BRADLEY@UTORONTO John Bradley DESOUS@UTORONTO Ronald de Sousa ESWENSON@UTORONTO Eva V. Swenson LIDIO@UTORONTO Lidio Presutti PARROTT@UTORONTO Martha Parrott PAULIE2@UTORONTO Test Account 42104_263@uwovax.uwo.cdn Glyn Holmes 42152_443@uwovax.uwo.cdn Richard Shroyer IDE@VASSAR Nancy Ide a_boddington@vax.acs.open.ac.uk Andy Boddington aeb_bevan@vax.acs.open.ac.uk Edis Bevan may@vax.leicester.ac.uk May Katzen catherine@vax.oxford.ac.uk Catherine Griffin dbpaul@vax.oxford.ac.uk Paul Salotti john@vax.oxford.ac.uk John Cooper logan@vax.oxford.ac.uk Grace Logan lou@vax.oxford.ac.uk Lou Burnard stephen@vax.oxford.ac.uk Stephen Miller susan@vax.oxford.ac.uk Susan Hockey v002@vaxa.bangor.ac.uk Thomas N. 
Corns udaa270@vaxa.cc.kcl.ac.uk Susan Kruse wwsrs@vaxa.stir.ac.uk Keith Whitelam ej1@vaxa.york.ac.uk Edward James gw2@vaxa.york.ac.uk Geoffrey Wall jrw2@vaxa.york.ac.uk John Wolffe chaa006@vaxb.rhbnc.ac.uk Philip Taylor srrj1@vaxb.york.ac.uk Sarah Rees Jones cstim@violet.berkeley.edu Tim Maher f.e.candlin@vme.glasgow.ac.uk Francis Candlin CHURCHDM@VUCTRVAX Dan M. Church ERDT@VUVAXCOM Terry Erdt fwtompa@watdaisy.uucp Frank Tompa DDROB@WATDCS Don D. Roberts VANEVRA@WATDCS James W. Van Evra WALTER@WATDCS Walter McCutchan drraymond@watmum.waterloo Darrell Raymond makkuni.pa@xerox.com Ranjit Makkuni xeroxhumanists~.x@xerox.com Humanists at Xerox ELI@YALEVM Doug Hawthorne YAEL@YKTVMH2 Yael Ravin DANIEL@YORKVM1 Daniel Bloom YFAN0001@YORKVM1 Gerald L. Gold YFPL0004@YORKVM1 Shu-Yan Mok YFPL0018@YORKVM1 Paul Kashiyama CS100006@YUSOL Peter Roosen-Runge GL250012@YUVENUS Jim Benson * * Total number of users subscribed to the list: 168 From: MCCARTY@UTOREPAS Subject: An idea about biographies; Supplement 5 (440 lines) Date: 15 December 1987, 23:40:40 EST X-Humanist: Vol. 1 Num. 559 (559) Dear Colleagues: At some point in the near future, if anyone would care for such a thing, I have it in mind to do a proper job on the biographies. Apart from the editing and formatting, this would involve collecting a revised biographical statement from each of you, if you'd care to supply one. These might be written or rewritten according to a suggested list of things to be mentioned -- to make them *slightly* less chaotic without taking the play out. The revised collection would be for circulation only on HUMANIST. What do you think? Please let me know if the idea strikes you as worthy of effort. What do you think should be on the list of things to be mentioned? Meanwhile, the next supplement follows. Yours, W.M. 
-------------------------------------------------------------------------- Autobiographies of HUMANISTs Fifth Supplement Following are 23 additional entries to the collection of autobiographical statements by members of the HUMANIST discussion group. Further additions, corrections, and updates are welcome, to mccarty@utorepas.bitnet. W.M. 16 December 1987 From: Subject: Date: X-Humanist: Vol. 1 Num. 560 (560) *Atwell, Eric Steven Centre for Computer Analysis of Language and Speech, AI Division, School of Computer Studies, Leeds University, Leeds LS2 9JT; +44 532 431751 ext 6 I am in a Computer Studies School, but specialise in linguistic and literary computing, and applications in Religious Education in schools. I would particularly like to liaise with other researchers working in similar areas. From: Subject: Date: X-Humanist: Vol. 1 Num. 561 (561) *Benson, Tom {akgua,allegra,ihnp4,cbosgd}!psuvax1!psuvm.bitnet!t3b (UUCP) t3b%psuvm.bitnet@wiscvm.arpa (ARPA) Department of Speech Communication, The Pennsylvania State University 227 Sparks Building, University Park, PA 16802; 814-238-5277 I am a Professor of Speech Communication at Penn State University, currently serving as editor of THE QUARTERLY JOURNAL OF SPEECH. In addition, I edit the electronic journal CRTNET (Communication Research and Theory Network). From: Subject: Date: X-Humanist: Vol. 1 Num. 562 (562) *CETEDOC (CENTRE DE TRAITEMENT ELECTRONIQUE DES DOCUMENTS) CETEDOC, LLN, BELGIUM THE CETEDOC (CENTRE DE TRAITEMENT ELECTRONIQUE DES DOCUMENTS) IS AN INSTITUTION OF THE CATHOLIC UNIVERSITY OF LOUVAIN AT LOUVAIN-LA-NEUVE, BELGIUM. ITS DIRECTOR IS PROF. PAUL TOMBEUR. From: Subject: Date: X-Humanist: Vol. 1 Num. 563 (563) *Chadwick, Tony Department of French & Spanish, Memorial University of Newfoundland St. John's, A1B 3X9; (709)737-8572 At the moment I have two interests in computing: one is the use of computers in composition classes for second language learners, the second in computerized bibliographies.
I have an M.A. in French from McMaster and have been teaching at Memorial University since 1967. Outside computers, my research interests lie in Twentieth Century French Literature. From: Subject: Date: X-Humanist: Vol. 1 Num. 564 (564) *Coombs, James H. Institute for Research in Information and Scholarship, Brown University Box 1946, Providence, RI 02912 I have a Ph.D. in English (Wordsworth and Milton: Prophet-Poets) and an M.A. in Linguistics, both from Brown University. I have been Mellon Postdoctoral Fellow in English and am about to become Software Engineer, Research, Institute for Research in Information and Scholarship (IRIS). I have co-edited an edition of letters (A Pre-Raphaelite Friendship, UMI Research Press) and have written on allusion and implicature (Poetics, 1985; Brown Working Papers in Linguistics). Any day now, the November Communications of the ACM will appear with an article on "Markup Systems and the Future of Scholarly Text Processing," written with Allen H. Renear and Steven J. DeRose. I developed the English Disk on the Brown University mainframe, which provides various utilities for humanists, primarily for word processing and for staying sane in CMS. I wrote a Bibliography Management System for Scholars (BMSS; 1985) and then an Information Management System for Scholars (IMSS; 1986). Both are in PL/I and may best be considered "aberrant prototypes," used a little more than necessary for research but never commercialized. I am currently working on a system with similar functionality for the IBM PC. Last year, I developed a "comparative concordance" for the multiple editions of Wordsworth's Prelude. I am delayed in that by the lack of the final volume of Cornell's fine editions. A preliminary paper will appear in the working papers of Brown's Computing in the Humanities User's Group (CHUG); a full article will be submitted in January, probably to CHUM. I learned computational linguistics from Prof. 
Henry Kucera, Nick DeRose, and Andy Mackie. Richard Ristow taught me software engineering management or, more accurately, teaches me more every time I talk to him. I worked on the spelling corrector, tuning algorithms. I worked on the design of the grammar corrector, designed the rule structures, and developed the rules with Dr. Carol Singley. Then I started with Dr. Phil Shinn's Binary Parser and developed a language independent N-ary Parser (NAP). NAP reads phrase structure rules as well as streams of tagged words (see DeRose's article in Computational Linguistics for information on the disambiguation) and generates a parse tree, suitable for generalized pattern matching. Finally, at IRIS, I will be developing online dictionary access from our hypermedia system: Intermedia (affix stripping, unflection, definition, parsing, etc.). In addition, we are working on a unified system for accessing multiple databases, including CD-ROM as well as remote computers. From: Subject: Date: X-Humanist: Vol. 1 Num. 565 (565) *Dawson, John L. University of Cambridge, Literary and Linguistic Computing Centre Sidgwick Avenue, Cambridge CB3 9DA England; (0223) 335029 I have been in charge of the Literary and Linguistic Computing Centre of Cambridge University since 1974, and now hold the post of Assistant Director of Research there. The LLCC acts as a service bureau for all types of humanities computing, including data preparation, and extends to the areas of non-scientific computing done by members of science and social science faculties. Much of our work remains in the provision of concordances to various texts in a huge range of languages, either prepared by our staff, by the user, or by some external body (e.g. TLG, Toronto Corpus of Old English, etc.) Some statistical analysis is undertaken, as required by the users. Recently, we have begun preparing master pages for publication using a LaserWriter, and several books have been printed by this means. 
My background is that of a mathematics graduate with a Diploma in Computer Science (both from Cambridge). I am an Honorary Member of ALLC, having been its Secretary for six years, and a member of the Association for History and Computing. My present research (though I don't have much time to do it) lies in the comparison of novels with their translations in other languages. At the moment I am working on Stendhal's "Le Rouge et le Noir" in French and English, and on Jane Austen's "Northanger Abbey" in English and French. I have contributed several papers at ALLC and ACH conferences, and published in the ALLC Journal (now Literary & Linguistic Computing) and in CHum. From: Subject: Date: X-Humanist: Vol. 1 Num. 566 (566) *Giordano, Richard I am a new humanities specialist at Princeton University Computer Center (Computing and Information Technology). I come to Princeton from Columbia University where I was a Systems Analyst in the Libraries for about six years. I am just finishing my PhD dissertation in American history at Columbia as well. From: Subject: Date: X-Humanist: Vol. 1 Num. 567 (567) *Johnson, Christopher Language Research Center, Room 345 Modern Languages, University of Arizona Tucson, AZ 85702; (602) 621-1615 I am currently the Director of the Language Research Center at the University of Arizona. Masters in Educational Media, University of Arizona; Ph.D. in Secondary Education (Minor in Instructional Technology), UA. I have worked in the area of computer-based instruction since 1976. I gained most of my experience on the PLATO system here at the University and as a consultant to Control Data Corp. Two years ago I moved to the Faculty of Humanities to create the Language Research Center, a support facility for our graduate students, staff, and faculty. My personal research interests are in the areas of individual learning styles, critical thinking skills, middle level education and testing as they apply to computer-based education. 
The research interests of my faculty range from text analysis to word processing to research into the use of the computer as an instructional tool. From: Subject: Date: X-Humanist: Vol. 1 Num. 568 (568) *Johansson, Stig Dept of English, Univ of Oslo, P.O. Box 1003, Blindern, N-0315 Oslo 3, Norway. Tel: 456932 (Oslo). Professor of English Language, Univ of Oslo. Relevant research From: Subject: Date: X-Humanist: Vol. 1 Num. 569 (569) Academic Computing Services, 215 Machinery Hall, Syracuse University Syracuse, New York 13244; 315/423-3998 I am Associate Director for Research Computing at Syracuse University and am interested in sponsoring a seminar series next spring focusing on computing issues in the humanities. I hope that this will lead to hiring a full-time staff person to provide user support services for humanities computing. From: Subject: Date: X-Humanist: Vol. 1 Num. 570 (570) *Langendoen, D. Terence Linguistics Program, CUNY Graduate Center, 33 West 42nd Street, New York, NY 10036-8099 USA; 212-790-4574 (soon to change) I am a theoretical linguist, interested in parsing and in computational linguistics generally. I have also worked on the problem of making sophisticated text-editing tools available for the teaching of writing. I am currently Secretary-Treasurer of the Linguistic Society of America, and will continue to serve until the end of calendar year 1988. I have also agreed to serve on two working committees on the ACH/ALLC/ACL project on standards for text encoding, as a result of the conference held at Vassar in mid-November 1987. From: Subject: Date: X-Humanist: Vol. 1 Num. 571 (571) *Molyneaux, Brian Department of Archaeology, University of Southampton, England. I am at present conducting postgraduate research in art and ideology and its relation to material culture. I am also a Field Associate at the Royal Ontario Museum, Department of New World Archaeology, specialising in rock art research. 
I obtained a BA (Hons) in English Literature, a BA (Hon) in Anthropology, and an MA in Art and Archaeology at Trent University, Peterborough, Ontario. My research interest in computing in the Humanities includes the analysis of texts and art works within the context of social relations. From: Subject: Date: X-Humanist: Vol. 1 Num. 572 (572) *Olofsson, Ake I am at the Department of Psychology, University of Umea, in the north of Sweden. Part of my work at the department is helping people to learn how to use our computer (VAX and the Swedish university Decnet) and International mail (Bitnet). We are four system-managers at the department and have about 40 ordinary users, running word-processing, statistics and Mail programs. From: Subject: Date: X-Humanist: Vol. 1 Num. 573 (573) *ORVIK, TONE POST OFFICE BOX 1822, KINGSTON, ON K7L 5J6; 613 - 389 - 6092 WORKING ON BIBLE RESEARCH WITH AFFILIATION TO QUEEN'S UNIVERSITY'S DEPT. OF RELIGIOUS STUDIES; CREATING CONCORDANCE OF SYMBOLOGY. HAVE WORKED AS A RESEARCHER, TEACHER, AND WRITER, IN EUROPE AND CANADA; ESPECIALLY ON VARIOUS ASPECTS OF BIBLE AND COMPARATIVE RELIGION. INTERESTED IN CONTACT WITH NETWORK USERS WITH SAME/SIMILAR INTEREST OF RESEARCH. From: Subject: Date: X-Humanist: Vol. 1 Num. 574 (574) *Potter, Rosanne G. Department of English, Iowa State University, Ross Hall 203, (515) 294-2180 (Main Office); (515) 294-4617 (My office) I am a literary critic; I use the mainframe computer for the analysis of literary texts. I have also designed a major formatting bibliographic package, BIBOUT, in wide use at Iowa State University, also installed at Princeton and Harvard. I do not program, rather I work with very high level programming specialists, statisticians, and systems analysts here to design the applications that I want for my literary critical purposes. I am editing a book on Literary Computing and Literary Criticism containing essays by Richard Bailey, Don Ross, Jr., John Smith, Paul Fortier, C. 
Nancy Ide, Ruth Sabol, myself and others. I've been on the board of ACH, have been invited to serve on the CHum editorial board. From: Subject: Date: X-Humanist: Vol. 1 Num. 575 (575) *Renear, Allen H. My original academic discipline is philosophy (logic, epistemology, history), and though I try to keep that up (and expect my Ph.D. this coming June) I've spent much of the last 7 years in academic computing, particularly humanities support. I am currently on the Computer Center staff here at Brown as a specialist in text processing, typesetting and humanities computing. I've had quite a bit of practical experience designing, managing, and consulting on large scholarly publication projects and my major research interests are similarly in the general theory of text representation and strategies for text based computing. I am a strong advocate of the importance of SGML for all computing that involves text; my views on this are presented in the Coombs, Renear, DeRose article on Markup Systems in the November 1987 *Communications of the ACM*. Other topics of interest to me are structure oriented editing, hypertext, manuscript criticism, and specialized tools for analytic philosophers. My research in philosophy is mostly in epistemic logic (similar to what AI folks call "knowledge representation"); it has some surprising connections with emerging theories of text structure. I am a contact person for Brown's very active Computing in the Humanities User's Group (CHUG). From: Subject: Date: X-Humanist: Vol. 1 Num. 576 (576) *Richardson, John Associate Professor, University of California (Los Angeles), GSLIS; (213) 825-4352 One of my interests is analytical bibliography, the description of printed books. At present I am intrigued with the idea that we can describe various component parts of books, notably title pages, paper, and typefaces, but the major psycho-physical element, ink, is not described. 
Obviously this problem involves humanistic work but also a fair degree of sophistication with ink technology. I would be interested in talking with or corresponding with anyone on this topic... From: Subject: Date: X-Humanist: Vol. 1 Num. 577 (577) *Taylor, Philip Royal Holloway & Bedford New College; University of London; U.K; (+44) 0784 34455 Ext: 3172 Although not primarily concerned with the humanities (I am principal systems programmer at RHBNC), I am frequently involved in humanities projects, particularly in the areas of type-setting (TeX), multi-lingual text processing, and natural language analysis, among others. From: Subject: Date: X-Humanist: Vol. 1 Num. 578 (578) *Whitelam, Keith W. Dept. of Religious Studies, University of Stirling, Stirling FK9 4LA Scotland; Tel. 0786 3171 ext. 2491 I have been lecturer in Religious Studies at Stirling since 1978 with prime responsibility for Hebrew Bible/Old Testament. My research interests are mainly aimed at exploring new approaches to the study of early Israelite/Palestinian history in an interdisciplinary context, i.e. drawing upon social history, anthropology, archaeology, historical demography, etc. I have been constructing a database of Palestinian archaeological sites, using software written by the Computing Science department, in order to analyse settlement patterns, site hierarchies, demography, etc. The department of Environmental Science has recently purchased Laser Scan and offered me access to the facilities. This will enable me to display settlement patterns, sites, etc. in map form for analysis and comparison. I am particularly interested in corresponding/discussing with others working on similar problems, particularly in Near Eastern archaeology. I have also been involved in exploring the possibilities of setting up campus-wide text processing laser printing facilities. It looks as though we shall be able to offer a LaTeX service in the New Year. 
We are also planning to offer a WYSIWYG service, such as Ventura on IBM or a combination with Macs for the production of academic papers. Again I have a particular interest in the use of foreign fonts, e.g. Hebrew, Akkadian, Ugaritic, Greek, etc. My teaching and research on the Hebrew Bible leads to a concern with developing computer-aided text analysis, although I have had little time to explore this area. We have OCP available on our mainframe VAX but my use of this has been very limited. I see this as an important area of future development in teaching and research along with Hebrew teaching. From: Subject: Date: X-Humanist: Vol. 1 Num. 579 (579) *Wilson, Noel Head of Academic Services, University of Ulster, Shore Road Newtownabbey, Co. Antrim, N. Ireland BT37 0QB; (0232)365131 Ext. 2449 My post has overall responsibility for the central academic computing service, offered by the Computer Centre, to the University academic community. Within this brief, my Section is responsible for the acquisition/development and documentation of CAL and proprietary software. We currently provide a program library in support of courses and research which contains approx. 400 programs; of these approx. 80 are in-house developments, 50 proprietary systems and the remainder obtained from a variety of sources incl. program libraries (eg CONDUIT - Univ. of Iowa). We have only very recently addressed computing within the Faculty of Humanities; academic staff in the Faculty have used computers in a research capacity and are now turning towards the various u'grad. courses. Presently we hold a grant of 79,000 pounds from the United Kingdom Computer Board for Universities and Research Councils, for the development of CAL software in support of Linguistics and Lexicostatistics. Within this project we are attempting to develop courseware to support grammar teaching in French, German, Spanish and Irish (details of existing materials appropriate to u'grad. teaching would be most welcome!). 
We also are investigating the creation of software to support an analysis of text (comparative studies) - in this area we are looking at frequency counts assoc. with words/expressions/words within registers etc. - again help would be appreciated. I am happy to provide further details on any of the above points and wish to keep informed of useful Humanities-related CAL work elsewhere. We currently use the Acorn BBC micro. but are also moving in the direction of PC clones. From: Subject: Date: X-Humanist: Vol. 1 Num. 580 (580) *Wood, Max Computing Officer, 403 Maxwell Building, The University of Salford The Crescent, Salford, G.M.C. ENGLAND; 061-736-5843 Extension 7399 We are involved in a project to introduce the use of computing in teaching here in the Business and Management Department of Salford University and I am keen to extend links to other Business schools both here in the U.K. and indeed in the U.S.A. Obviously therefore I would like to join your forum so as to possibly exchange ideas news etc. My background is essentially in computing and I mainly supervise the computing resources available to our Department, and have formulated much of the teaching systems we currently use. From: Subject: Date: X-Humanist: Vol. 1 Num. 581 (581) *Wujastyk, Dominik I am a Sanskritist with some knowledge of computing. Once upon a time (1977-78) I learned Snobol4 from Susan Hockey at Oxford, where I did undergraduate and later doctoral Sanskrit. More recently, I have been using TeX on my PC AT (actually a Compaq III), and in the middle of this summer I published a book _Studies on Indian Medical History_, which was done in TeX and printed out on an HP LJ II, and sent to the publisher as camera ready. It all went very well. I have received the MS DOS Icon implementation from Griswold at Arizona, but have not spent time on it. 
I am trying to teach myself Icon at the moment, just to learn enough to knock out occasional routines to convert files from wordprocessor formats to TeX, and that sort of thing. (Probably reinventing the wheel.) At the present time I am editing a Sanskrit text on medieval alchemy, and doing all the formatting of the edition in LaTeX. Before I ever started Sanskrit, I did a degree in Physics at Imperial College in London, but that is so long ago that I don't like to think about it! From: Subject: Date: X-Humanist: Vol. 1 Num. 582 (582) *Young, Charles M. Dept. of Philosophy, The Claremont Graduate School I am a member of the American Philosophical Association's committee on Computer Use in Philosophy. One of my pet projects is to find some way of making the Thesaurus Linguae Graecae database (all of classical Greek through the 7th century C.E.) more readily available to working scholars. From: Subject: Date: X-Humanist: Vol. 1 Num. 583 (583) *END* From: MCCARTY@UTOREPAS Subject: National text archive (45 lines) Date: 16 December 1987, 15:24:57 EST X-Humanist: Vol. 1 Num. 584 (584) From C. Faulhaber (U.C. Berkeley, ked@coral.berkeley.edu) via Tim Maher 1) Text Archives. What is needed is some sort of alliance between the computing types and the professional librarians. It seems to me that there is a much better chance of getting a national text archive if it can be integrated into an ongoing concern. I list three candidates, in decreasing order of feasibility: a) RLG: Through their PRIMA project they are actively interested in providing access to new information resources. b) The organization at the U. of Michigan which already maintains data bases for use in the social sciences. c) OCLC: They have relatively less experience than RLG in providing services for research institutions but are aggressively expanding their range. 2) Citation dictionaries: John Nitti (Medieval Spanish Seminary, 1120 Van Hise Hall, U. 
of Wisconsin, Madison 53720) has been working on just such a dictionary (Dictionary of the Old Spanish Language) since ca. 1970, although the original plan was to draw the citations from texts transcribed specifically for that purpose and publish in standard format on OED lines. With optical disk technology, the possibility now exists to combine DOSL and the texts themselves. In fact, we are contemplating the possibility of combining these 2 elements with my own Bibliography of Old Spanish Texts serving as a data base front end in order to search through texts on the basis of, e.g., date, author, subject. Prof. Charles Faulhaber Dept. of Spanish and Portuguese Univ. of California, Berkeley. ked@coral.berkeley.edu From: MCCARTY@UTOREPAS Subject: Info (30 lines) Date: 17 December 1987, 15:53:31 EST X-Humanist: Vol. 1 Num. 585 (585) From Mark Olsen A student here is doing a project on the discourse of John Woolman and is looking for computer readable versions of texts by other 18th century American Quakers for comparisons. I would appreciate any info concerning the availability of these texts before scanning them in. A second, stranger request has come through. I have a faculty member who is studying a 19th century manuscript. Parts of it were crossed out and she is wondering if there is the possibility of using computer enhancement of the images to improve readability. She has tried blowing-up the images, but has not gotten much. Any ideas? I must admit that I know nothing about image processing except what I read about concerning the space shots. Maybe I should try JPL (snicker). Thanks in advance, Mark Olsen I don't know how many lines of text this has, but it doesn't conform to any known mark-up standard. From: MCCARTY@UTOREPAS Subject: Christmas gift for HUMANISTs (50 lines) Date: 17 December 1987, 19:51:24 EST X-Humanist: Vol. 1 Num. 
586 (586) From Sebastian Rahtz The following Christmas gift for HUMANISTs is prompted by a description Lou Burnard sent me of the Vassar 'text encoding standards' meeting, and by the subsequent HUMANIST discussion (no I don't have permission to 'publish' this) Incidentally, a recent contribution to HUMANIST implied that text-encoding standards were a central issue to all HUMANISTs. May I stand up for the archaeologists, musicians, art-historians, linguists and philosophers amongst us to say that there is more to humanities computing than text! equality for all. Sebastian Rahtz (spqr@uk.ac.soton.cm) A cold coming we had of it, just the worst time of the year for a journey, and such a long journey: the ways deep and the weather sharp, a hard time we had of it. at the end we preferred to travel all night, sleeping in snatches, with the voices singing in our ears, saying that this was all folly. but there was no information, and so we continued and arrived at evening, not a moment too soon finding the place; it was (you may say) satisfactory. all this was a long time ago, I remember, and I would do it again, but set down this set down From: MCCARTY@UTOREPAS Subject: Test message Date: 21 December 1987, 15:01:37 EST X-Humanist: Vol. 1 Num. 587 (587) This is a test. Please neither do nor conclude anything because of its appearance. From: MCCARTY@UTOREPAS Subject: Whereabouts of R.G. Ragsdale Date: 21 December 1987, 19:29:37 EST X-Humanist: Vol. 1 Num. 588 (588) From Christian Koch On December 4 an announcement was sent out over HUMANIST regarding a proposed course to be offered in connection with the European Conference on Computers in Education to be held next summer in Lausanne, Switzerland. The proposal was by R.G. Ragsdale and the course in question was International Educational Thanks! Christian Koch Oberlin College From: MCCARTY@UTOREPAS Subject: ICEBOL (106 lines) Date: 22 December 1987, 10:50:41 EST X-Humanist: Vol. 1 Num. 
589 (589) From David Sitman ICEBOL3 April 21-22, 1988 Dakota State College Madison, SD 57042 ICEBOL3, the International Conference on Symbolic and Logical Computing, is designed for teachers, scholars, and programmers who want to meet to exchange ideas about non-numeric computing. In addition to a focus on SNOBOL, SPITBOL, and Icon, ICEBOL3 will feature introductory and technical presentations on other dangerously powerful computer languages such as Prolog and LISP, as well as on applications of BASIC, Pascal, and FORTRAN for processing strings of characters. Topics of discussion will include artificial intelligence, expert systems, desk-top publishing, and a wide range of analyses of texts in English and other natural languages. Parallel tracks of concurrent sessions are planned: some for experienced computer users and others for interested novices. Both mainframe and microcomputer applications will be discussed. ICEBOL's coffee breaks, social hours, lunches, and banquet will provide a series of opportunities for participants to meet and informally exchange information. Sessions will be scheduled for "birds of a feather" to discuss common interests (for example, BASIC users group, implementations of SNOBOL, computer generated poetry). Call For Papers Abstracts (minimum of 250 words) or full texts of papers to be read at ICEBOL3 are invited on any application of non-numeric programming. Planned sessions include the following: artificial intelligence expert systems natural language processing analysis of literary texts (including bibliography, concordance, and index preparation) linguistic and lexical analysis (including parsing and machine translation) preparation of text for electronic publishing computer assisted instruction grammar and style checkers music analysis. Papers must be in English and should not exceed twenty minutes reading time. Abstracts and papers should be received by January 15, 1988. Notification of acceptance will follow promptly. 
Papers will be published in ICEBOL3 Proceedings. Presentations at previous ICEBOL conferences were made by Susan Hockey (Oxford), Ralph Griswold (Arizona), James Gimpel (Lehigh), Mark Emmer (Catspaw, Inc.), Robert Dewar (New York University), and many others. Copies of ICEBOL 86 Proceedings are available. ICEBOL3 is sponsored by The Division of Liberal Arts and The Business and Education Institute of DAKOTA STATE COLLEGE Madison, South Dakota For Further Information All correspondence including abstracts and papers as well as requests for registration materials should be sent to: Eric Johnson ICEBOL Director 114 Beadle Hall Dakota State College Madison, SD 57042 U.S.A. (605) 256-5270 Inquiries, abstracts, and correspondence may also be sent via electronic mail to: ERIC @ SDNET (BITNET) ------- End of Forwarded Message From: MCCARTY@UTOREPAS Subject: The reason for a silent HUMANIST from 15 to 21 December Date: 23 December 1987, 21:46:36 EST X-Humanist: Vol. 1 Num. 590 (590) Dear Colleagues: You may have assumed that the silence of HUMANIST from about 15 to 21 December was due to a mass unplugging of terminals and departing for seasonal festivals, but this is not entirely so. A new version of ListServ (the software that runs HUMANIST), just installed about then, ran amok, wrote about 1,000,000 lines in the system log here, and so provoked our postmaster into disconnecting it -- and with it HUMANIST. The messages that seemed to be sent out during that period went into limbo, where apparently they still sit. These may suddenly appear in your readers, perhaps even in duplicate or triplicate, or they may not show up at all. Against the latter possibility, I am sending you copies of the limbo'd messages in two batches, with my best wishes for your good health and prosperity in the new year. Yours, W.M. _________________________________________________________________________ Dr. 
Willard McCarty / Centre for Computing in the Humanities University of Toronto / 14th floor, Robarts Library / 130 St. George St. Toronto, Canada M5S 1A5 / (416) 978-4238 / mccarty@utorepas.bitnet From: MCCARTY at UTOREPAS Subject: An idea about biographies; Supplement 5 (440 lines) Date: 15 December 1987, 23:40:40 EST X-Humanist: Vol. 1 Num. 591 (591) Dear Colleagues: At some point in the near future, if anyone would care for such a thing, I have it in mind to do a proper job on the biographies. Apart from the editing and formatting, this would involve collecting a revised biographical statement from each of you, if you'd care to supply one. These might be written or rewritten according to a suggested list of things to be mentioned -- to make them *slightly* less chaotic without taking the play out. The revised collection would be for circulation only on HUMANIST. What do you think? Please let me know if the idea strikes you as worthy of effort. What do you think should be on the list of things to be mentioned? Meanwhile, the next supplement follows. Yours, W.M. -------------------------------------------------------------------------- Autobiographies of HUMANISTs Fifth Supplement Following are 23 additional entries to the collection of autobiographical statements by members of the HUMANIST discussion group. Further additions, corrections, and updates are welcome, to mccarty@utorepas.bitnet. W.M. 16 December 1987 From: Subject: Date: X-Humanist: Vol. 1 Num. 592 (592) *Atwell, Eric Steven Centre for Computer Analysis of Language and Speech, AI Division, School of Computer Studies, Leeds University, Leeds LS2 9JT; +44 532 431751 ext 6 I am in a Computer Studies School, but specialise in linguistic and literary computing, and applications in Religious Education in schools. I would particularly like to liaise with other researchers working in similar areas. From: Subject: Date: X-Humanist: Vol. 1 Num. 
593 (593) *Benson, Tom {akgua,allegra,ihnp4,cbosgd}!psuvax1!psuvm.bitnet!t3b (UUCP) t3b%psuvm.bitnet@wiscvm.arpa (ARPA) Department of Speech Communication, The Pennsylvania State University 227 Sparks Building, University Park, PA 16802; 814-238-5277 I am a Professor of Speech Communication at Penn State University, currently serving as editor of THE QUARTERLY JOURNAL OF SPEECH. In addition, I edit the electronic journal CRTNET (Communication Research and Theory Network). From: Subject: Date: X-Humanist: Vol. 1 Num. 594 (594) *CETEDOC (CENTRE DE TRAITEMENT ELECTRONIQUE DES DOCUMENTS) CETEDOC, LLN, BELGIUM THE CETEDOC (CENTRE DE TRAITEMENT ELECTRONIQUE DES DOCUMENTS) IS AN INSTITUTION OF THE CATHOLIC UNIVERSITY OF LOUVAIN AT LOUVAIN-LA-NEUVE, BELGIUM. ITS DIRECTOR IS PROF. PAUL TOMBEUR. From: Subject: Date: X-Humanist: Vol. 1 Num. 595 (595) *Chadwick, Tony Department of French & Spanish, Memorial University of Newfoundland St. John's, A1B 3X9; (709)737-8572 At the moment I have two interests in computing: one is the use of computers in composition classes for second language learners, the second in computerized bibliographies. I have an M.A. in French from McMaster and have been teaching at Memorial University since 1967. Outside computers, my research interests lie in Twentieth Century French Literature. From: Subject: Date: X-Humanist: Vol. 1 Num. 596 (596) *Coombs, James H. Institute for Research in Information and Scholarship, Brown University Box 1946, Providence, RI 02912 I have a Ph.D. in English (Wordsworth and Milton: Prophet-Poets) and an M.A. in Linguistics, both from Brown University. I have been Mellon Postdoctoral Fellow in English and am about to become Software Engineer, Research, Institute for Research in Information and Scholarship (IRIS). I have co-edited an edition of letters (A Pre-Raphaelite Friendship, UMI Research Press) and have written on allusion and implicature (Poetics, 1985; Brown Working Papers in Linguistics). 
Any day now, the November Communications of the ACM will appear with an article on "Markup Systems and the Future of Scholarly Text Processing," written with Allen H. Renear and Steven J. DeRose. I developed the English Disk on the Brown University mainframe, which provides various utilities for humanists, primarily for word processing and for staying sane in CMS. I wrote a Bibliography Management System for Scholars (BMSS; 1985) and then an Information Management System for Scholars (IMSS; 1986). Both are in PL/I and may best be considered "aberrant prototypes," used a little more than necessary for research but never commercialized. I am currently working on a system with similar functionality for the IBM PC. Last year, I developed a "comparative concordance" for the multiple editions of Wordsworth's Prelude. I am delayed in that by the lack of the final volume of Cornell's fine editions. A preliminary paper will appear in the working papers of Brown's Computing in the Humanities User's Group (CHUG); a full article will be submitted in January, probably to CHUM. I learned computational linguistics from Prof. Henry Kucera, Nick DeRose, and Andy Mackie. Richard Ristow taught me software engineering management or, more accurately, teaches me more every time I talk to him. I worked on the spelling corrector, tuning algorithms. I worked on the design of the grammar corrector, designed the rule structures, and developed the rules with Dr. Carol Singley. Then I started with Dr. Phil Shinn's Binary Parser and developed a language independent N-ary Parser (NAP). NAP reads phrase structure rules as well as streams of tagged words (see DeRose's article in Computational Linguistics for information on the disambiguation) and generates a parse tree, suitable for generalized pattern matching. Finally, at IRIS, I will be developing online dictionary access from our hypermedia system: Intermedia (affix stripping, unflection, definition, parsing, etc.). 
In addition, we are working on a unified system for accessing multiple databases, including CD-ROM as well as remote computers. From: Subject: Date: X-Humanist: Vol. 1 Num. 597 (597) *Dawson, John L. University of Cambridge, Literary and Linguistic Computing Centre Sidgwick Avenue, Cambridge CB3 9DA England; (0223) 335029 I have been in charge of the Literary and Linguistic Computing Centre of Cambridge University since 1974, and now hold the post of Assistant Director of Research there. The LLCC acts as a service bureau for all types of humanities computing, including data preparation, and extends to the areas of non-scientific computing done by members of science and social science faculties. Much of our work remains in the provision of concordances to various texts in a huge range of languages, either prepared by our staff, by the user, or by some external body (e.g. TLG, Toronto Corpus of Old English, etc.) Some statistical analysis is undertaken, as required by the users. Recently, we have begun preparing master pages for publication using a LaserWriter, and several books have been printed by this means. My background is that of a mathematics graduate with a Diploma in Computer Science (both from Cambridge). I am an Honorary Member of ALLC, having been its Secretary for six years, and a member of the Association for History and Computing. My present research (though I don't have much time to do it) lies in the comparison of novels with their translations in other languages. At the moment I am working on Stendhal's "Le Rouge et le Noir" in French and English, and on Jane Austen's "Northanger Abbey" in English and French. I have contributed several papers at ALLC and ACH conferences, and published in the ALLC Journal (now Literary & Linguistic Computing) and in CHum. From: Subject: Date: X-Humanist: Vol. 1 Num. 598 (598) *Giordano, Richard I am a new humanities specialist at Princeton University Computer Center (Computing and Information Technology). 
I come to Princeton from Columbia University where I was a Systems Analyst in the Libraries for about six years. I am just finishing my PhD dissertation in American history at Columbia as well. From: Subject: Date: X-Humanist: Vol. 1 Num. 599 (599) *Johnson, Christopher Language Research Center, Room 345 Modern Languages, University of Arizona Tucson, Az 85702; (602) 621-1615 I am currently the Director of the Language Research Center at the University of Arizona. Masters in Educational Media, University of Arizona; Ph.D. in Secondary Education (Minor in Instructional Technology), UA. I have worked in the area of computer-based instruction since 1976. I gained most of my experience on the PLATO system here at the University and as a consultant to Control Data Corp. Two years ago I moved to the Faculty of Humanities to create the Language Research Center, a support facility for our graduate students, staff, and faculty. My personal research interests are in the areas of individual learning styles, critical thinking skills, middle-level education, and testing as they apply to computer-based education. The research interests of my faculty range from text analysis to word processing to research into the use of the computer as an instructional tool. From: Subject: Date: X-Humanist: Vol. 1 Num. 600 (600) *Johansson, Stig Dept of English, Univ of Oslo, P.O. Box 1003, Blindern, N-0315 Oslo 3, Norway. Tel: 456932 (Oslo). Professor of English Language, Univ of Oslo. Relevant research From: Subject: Date: X-Humanist: Vol. 1 Num. 601 (601) Academic Computing Services, 215 Machinery Hall, Syracuse University Syracuse, New York 13244; 315/423-3998 I am Associate Director for Research Computing at Syracuse University and am interested in sponsoring a seminar series next spring focusing on computing issues in the humanities. I hope that this will lead to hiring a full-time staff person to provide user support services for humanities computing. From: Subject: Date: X-Humanist: Vol.
1 Num. 602 (602) *Langendoen, D. Terence Linguistics Program, CUNY Graduate Center, 33 West 42nd Street, New York, NY 10036-8099 USA; 212-790-4574 (soon to change) I am a theoretical linguist, interested in parsing and in computational linguistics generally. I have also worked on the problem of making sophisticated text-editing tools available for the teaching of writing. I am currently Secretary-Treasurer of the Linguistic Society of America, and will continue to serve until the end of calendar year 1988. I have also agreed to serve on two working committees on the ACH/ALLC/ACL project on standards for text encoding, as a result of the conference held at Vassar in mid-November 1987. From: Subject: Date: X-Humanist: Vol. 1 Num. 603 (603) *Molyneaux, Brian Department of Archaeology, University of Southampton, England. I am at present conducting postgraduate research in art and ideology and its relation to material culture. I am also a Field Associate at the Royal Ontario Museum, Department of New World Archaeology, specialising in rock art research. I obtained a BA (Hons) in English Literature, a BA (Hon) in Anthropology, and an MA in Art and Archaeology at Trent University, Peterborough, Ontario. My research interest in computing in the Humanities includes the analysis of texts and art works within the context of social relations. From: Subject: Date: X-Humanist: Vol. 1 Num. 604 (604) *Olofsson, Ake I am at the Department of Psychology, University of Umea, in the north of Sweden. Part of my work at the department is helping people to learn how to use our computer (VAX and the Swedish university Decnet) and International mail (Bitnet). We are four system-managers at the department and have about 40 ordinary users, running word-processing, statistics and Mail programs. From: Subject: Date: X-Humanist: Vol. 1 Num. 605 (605) *ORVIK, TONE POST OFFICE BOX 1822, KINGSTON, ON K7L 5J6; 613 - 389 - 6092 WORKING ON BIBLE RESEARCH WITH AFFILIATION TO QUEEN'S UNIVERSITY'S DEPT. 
OF RELIGIOUS STUDIES; CREATING CONCORDANCE OF SYMBOLOGY. HAVE WORKED AS A RESEARCHER, TEACHER, AND WRITER, IN EUROPE AND CANADA; ESPECIALLY ON VARIOUS ASPECTS OF BIBLE AND COMPARATIVE RELIGION. INTERESTED IN CONTACT WITH NETWORK USERS WITH SAME/SIMILAR INTEREST OF RESEARCH. From: Subject: Date: X-Humanist: Vol. 1 Num. 606 (606) *Potter, Rosanne G. Department of English, Iowa State University, Ross Hall 203, (515) 294-2180 (Main Office); (515) 294-4617 (My office) I am a literary critic; I use the mainframe computer for the analysis of literary texts. I have also designed a major formatting bibliographic package, BIBOUT, in wide use at Iowa State University, also installed at Princeton and Harvard. I do not program, rather I work with very high level programming specialists, statisticians, and systems analysts here to design the applications that I want for my literary critical purposes. I am editing a book on Literary Computing and Literary Criticism containing essays by Richard Bailey, Don Ross, Jr., John Smith, Paul Fortier, C. Nancy Ide, Ruth Sabol, myself and others. I've been on the board of ACH, have been invited to serve on the CHum editorial board. From: Subject: Date: X-Humanist: Vol. 1 Num. 607 (607) *Renear, Allen H. My original academic discipline is philosophy (logic, epistemology, history), and though I try to keep that up (and expect my Ph.D. this coming June) I've spent much of the last 7 years in academic computing, particularly humanities support. I am currently on the Computer Center staff here at Brown as a specialist in text processing, typesetting and humanities computing. I've had quite a bit of practical experience designing, managing, and consulting on large scholarly publication projects and my major research interests are similarly in the general theory of text representation and strategies for text based computing. 
I am a strong advocate of the importance of SGML for all computing that involves text; my views on this are presented in the Coombs, Renear, DeRose article on Markup Systems in the November 1987 *Communications of the ACM*. Other topics of interest to me are structure oriented editing, hypertext, manuscript criticism, and specialized tools for analytic philosophers. My research in philosophy is mostly in epistemic logic (similar to what AI folks call "knowledge representation"); it has some surprising connections with emerging theories of text structure. I am a contact person for Brown's very active Computing in the Humanities User's Group (CHUG). From: Subject: Date: X-Humanist: Vol. 1 Num. 608 (608) *Richardson, John Associate Professor, University of California (Los Angeles), GSLIS; (213) 825-4352 One of my interests is analytical bibliography, the description of printed books. At present I am intrigued with the idea that we can describe various component parts of books, notably title pages, paper, and typefaces, but the major psycho-physical element, ink, is not described. Obviously this problem involves humanistic work but also a fair degree of sophistication with ink technology. I would be interested in talking with or corresponding with anyone on this topic... From: Subject: Date: X-Humanist: Vol. 1 Num. 609 (609) *Taylor, Philip Royal Holloway & Bedford New College; University of London; U.K; (+44) 0784 34455 Ext: 3172 Although not primarily concerned with the humanities (I am principal systems programmer at RHBNC), I am frequently involved in humanities projects, particularly in the areas of type-setting (TeX), multi-lingual text processing, and natural language analysis, among others. From: Subject: Date: X-Humanist: Vol. 1 Num. 610 (610) *Whitelam, Keith W. Dept. of Religious Studies, University of Stirling, Stirling FK9 4LA Scotland; Tel. 0786 3171 ext.
2491 I have been lecturer in Religious Studies at Stirling since 1978 with prime responsibility for Hebrew Bible/Old Testament. My research interests are mainly aimed at exploring new approaches to the study of early Israelite/Palestinian history in an interdisciplinary context, i.e. drawing upon social history, anthropology, archaeology, historical demography, etc. I have been constructing a database of Palestinian archaeological sites, using software written by the Computing Science department, in order to analyse settlement patterns, site hierarchies, demography, etc. The department of Environmental Science has recently purchased Laser Scan and offered me access to the facilities. This will enable me to display settlement patterns, sites, etc. in map form for analysis and comparison. I am particularly interested in corresponding/discussing with others working on similar problems, particularly in Near Eastern archaeology. I have also been involved in exploring the possibilities of setting up campus-wide text processing and laser printing facilities. It looks as though we shall be able to offer a LaTeX service in the New Year. We are also planning to offer a WYSIWYG service, such as Ventura on IBM or a combination with Macs for the production of academic papers. Again I have a particular interest in the use of foreign fonts, e.g. Hebrew, Akkadian, Ugaritic, Greek, etc. My teaching and research on the Hebrew Bible leads to a concern with developing computer-aided text analysis, although I have had little time to explore this area. We have OCP available on our mainframe VAX but my use of this has been very limited. I see this as an important area of future development in teaching and research along with Hebrew teaching. From: Subject: Date: X-Humanist: Vol. 1 Num. 611 (611) *Wilson, Noel Head of Academic Services, University of Ulster, Shore Road Newtownabbey, Co. Antrim, N. Ireland BT37 0QB; (0232)365131 Ext.
2449 My post has overall responsibility for the central academic computing service, offered by the Computer Centre, to the University academic community. Within this brief, my Section is responsible for the acquisition/development and documentation of CAL and proprietary software. We currently provide a program library in support of courses and research which contains approx. 400 programs; of these approx. 80 are in-house developments, 50 proprietary systems and the remainder obtained from a variety of sources incl. program libraries (eg CONDUIT - Univ. of Iowa). We have only very recently addressed computing within the Faculty of Humanities; academic staff in the Faculty have used computers in a research capacity and are now turning towards the various u'grad. courses. Presently we hold a grant of 79,000 pounds from the United Kingdom Computer Board for Universities and Research Councils, for the development of CAL software in support of Linguistics and Lexicostatistics. Within this project we are attempting to develop courseware to support grammar teaching in French, German, Spanish and Irish (details of existing materials appropriate to u'grad. teaching would be most welcome!). We also are investigating the creation of software to support an analysis of text (comparative studies) - in this area we are looking at frequency counts assoc. with words/expressions/words within registers etc. - again help would be appreciated. I am happy to provide further details on any of the above points and wish to keep informed of useful Humanities-related CAL work elsewhere. We currently use the Acorn BBC micro. but are also moving in the direction of PC clones. From: Subject: Date: X-Humanist: Vol. 1 Num. 612 (612) *Wood, Max Computing Officer, 403 Maxwell Building, The University of Salford The Crescent, Salford, G.M.C. 
ENGLAND; 061-736-5843 Extension 7399 We are involved in a project to introduce the use of computing in teaching here in the Business and Management Department of Salford University and I am keen to extend links to other Business schools both here in the U.K. and indeed in the U.S.A. Obviously therefore I would like to join your forum so as to possibly exchange ideas, news, etc. My background is essentially in computing and I mainly supervise the computing resources available to our Department, and have formulated much of the teaching systems we currently use. From: Subject: Date: X-Humanist: Vol. 1 Num. 613 (613) *Wujastyk, Dominik I am a Sanskritist with some knowledge of computing. Once upon a time (1977-78) I learned Snobol4 from Susan Hockey at Oxford, where I did undergraduate and later doctoral Sanskrit. More recently, I have been using TeX on my PC AT (actually a Compaq III), and in the middle of this summer I published a book _Studies on Indian Medical History_, which was done in TeX and printed out on an HP LJ II, and sent to the publisher as camera ready. It all went very well. I have received the MS DOS Icon implementation from Griswold at Arizona, but have not spent time on it. I am trying to teach myself at the moment, just to learn enough to knock out occasional routines to convert files from wordprocessor formats to TeX, and that sort of thing. (Probably reinventing the wheel.) At the present time I am editing a Sanskrit text on medieval alchemy, and doing all the formatting of the edition in LaTeX. Before I ever started Sanskrit, I did a degree in Physics at Imperial College in London, but that is so long ago that I don't like to think about it! From: Subject: Date: X-Humanist: Vol. 1 Num. 614 (614) *Young, Charles M. Dept. of Philosophy, The Claremont Graduate School I am a member of the American Philosophical Association's committee on Computer Use in Philosophy.
One of my pet projects is to find some way of making the Thesaurus Linguae Graecae database (all of classical Greek through the 7th century C.E.) more readily available to working scholars. From: Subject: Date: X-Humanist: Vol. 1 Num. 615 (615) *END* From: MCCARTY at UTOREPAS Subject: National text archive (45 lines) Date: 16 December 1987, 15:24:57 EST X-Humanist: Vol. 1 Num. 616 (616) From C. Faulhaber (U.C. Berkeley, ked@coral.berkeley.edu) via Tim Maher 1) Text Archives. What is needed is some sort of alliance between the computing types and the professional librarians. It seems to me that there is a much better chance of getting a national text archive if it can be integrated into an ongoing concern. I list three candidates, in decreasing order of feasibility: a) RLG: Through their PRIMA project they are actively interested in providing access to new information resources. b) The organization at the U. of Michigan which already maintains data bases for use in the social sciences. c) OCLC: They have relatively less experience than RLG in providing services for research institutions but are aggressively expanding their range. 2) Citation dictionaries: John Nitti (Medieval Spanish Seminary, 1120 Van Hise Hall, U. of Wisconsin, Madison 53720) has been working on just such a dictionary (Dictionary of the Old Spanish Language) since ca. 1970, although the original plan was to draw the citations from texts transcribed specifically for that purpose and publish in standard format on OED lines. With optical disk technology, the possibility now exists to combine DOSL and the texts themselves. In fact, we are contemplating the possibility of combining these 2 elements with my own Bibliography of Old Spanish Texts serving as a data base front end in order to search through texts on the basis of, e.g., date, author, subject. Prof. Charles Faulhaber Dept. of Spanish and Portuguese Univ. of California, Berkeley.
ked@coral.berkeley.edu From: MCCARTY at UTOREPAS Subject: Info (30 lines) From Mark Olsen Date: 17 December 1987, 15:53:31 EST X-Humanist: Vol. 1 Num. 617 (617) A student here is doing a project on the discourse of John Woolman and is looking for computer readable versions of texts by other 18th century American Quakers for comparisons. I would appreciate any info concerning the availability of these texts before scanning them in. A second, stranger request has come through. I have a faculty member who is studying a 19th century manuscript. Parts of it were crossed out and she is wondering if there is the possibility of using computer enhancement of the images to improve readability. She has tried blowing-up the images, but has not gotten much. Any ideas? I must admit that I know nothing about image processing except what I read about concerning the space shots. Maybe I should try JPL (snicker). Thanks in advance, Mark Olsen I don't know how many lines of text this has, but it doesn't conform to any known mark-up standard. From: MCCARTY at UTOREPAS Subject: Christmas gift for HUMANISTs (50 lines) From Sebastian Rahtz Date: 17 December 1987, 19:51:24 EST X-Humanist: Vol. 1 Num. 618 (618) The following Christmas gift for HUMANISTs is prompted by a description Lou Burnard sent me of the Vassar 'text encoding standards' meeting, and by the subsequent HUMANIST discussion (no, I don't have permission to 'publish' this). Incidentally, a recent contribution to HUMANIST implied that text-encoding standards were a central issue to all HUMANISTs. May I stand up for the archaeologists, musicians, art-historians, linguists and philosophers amongst us to say that there is more to humanities computing than text! equality for all. Sebastian Rahtz (spqr@uk.ac.soton.cm) A cold coming we had of it, just the worst time of the year for a journey, and such a long journey: the ways deep and the weather sharp, a hard time we had of it.
at the end we preferred to travel all night, sleeping in snatches, with the voices singing in our ears, saying that this was all folly. but there was no information, and so we continued and arrived at evening, not a moment too soon finding the place; it was (you may say) satisfactory. all this was a long time ago, I remember, and I would do it again, but set down this set down From: MCCARTY at UTOREPAS Subject: Offline 16 (20 lines) From Bob Kraft Date: 18 December 1987, 14:08:49 EST X-Humanist: Vol. 1 Num. 619 (619) My bimonthly OFFLINE column for Religious Studies News has just been sent off to the printer for the January or February issue of RSNews. It consists of a report on the computer aspects of the recent annual meetings of the Society of Biblical Literature, American Academy of Religion, and American Schools of Oriental Research, held jointly in Boston on 5-8 December 1987. If any HUMANISTS would like a pre-publication electronic copy of OFFLINE 16, I am willing to send it upon request. Happy Holidays! Bob Kraft From: MCCARTY@UTOREPAS Subject: Hypermedia bibliography Date: 23 December 1987, 22:31:33 EST X-Humanist: Vol. 1 Num. 620 (620) Anyone wishing a copy of a recent bibliography of items on hypermedia, compiled at IRIS (Brown Univ.), should send a note to me requesting it. The bibliography, which recently appeared on IRLIST, comes in three parts, each approximately 500 lines long. W.M. From: MCCARTY@UTOREPAS Subject: Library of Congress: markup and MRTs? Date: 29 December 1987, 13:53:52 EST X-Humanist: Vol. 1 Num. 621 (621) From James H. Coombs In a note posted on 8 Dec 1987, Richard Giordano states, Traditionally, ALA [American Library Association] and LC [Library of Congress] have both taken the lead in the scholarly world in providing machine-readable information. The technical problems that LC has addressed have been fundamental to data processing. Could you provide more information, e.g., citations of articles?
I know that LC is considering SGML, but they seem to be much more of a follower than a leader in this effort at least. I also believe that the LC is more interested in microfilm than in electronic media for the preservation of materials printed on paper that is not acid free. I was somewhat distressed when I first read this (wish I knew where I read it too), but apparently microfilm lasts longer than computer tape and requires less maintenance. (Still might be the wrong decision.) So, I've missed out on what the LC is doing for Machine Readable Texts [MRTs] and the like. Any information appreciated. Thanks. --Jim P.S. Well, the same for ALA and RLG [Research Libraries Group]. What are they doing? Dr. James H. Coombs Software Engineer, Research Institute for Research in Information and Scholarship (IRIS) Brown University jazbo@brownvm.bitnet From: MCCARTY@UTOREPAS Subject: music-encoding standards? (50 lines) Date: 29 December 1987, 13:56:42 EST X-Humanist: Vol. 1 Num. 622 (622) From James H. Coombs I'm glad to see Humanist up again! In a posting of 17 December, Sebastian Rahtz says: Incidentally, a recent contribution to HUMANIST implied that text-encoding standards were a central issue to all HUMANISTs. May I stand up for the archaeologists, musicians, art-historians, linguists and philosophers amongst us to say that there is more to humanities computing than text! equality for all. Just so, Sebastian! ANSI X3V1.8M/87-17---Journal of Technical Developments discusses the application of SGML to music (Work Group, Music Processing Standards). According to an article in TAG (The SGML Newsletter), the goal is to describe music not only for documentation and hard copy preparation but also to be included in technical documentation and played in a real time rendition simultaneously while viewing a particular part of the document. Dr. 
Goldfarb referred to the inclusion of music in a technical document, and therefore to the concept of time, as "technical documentation in four dimensions." (vol. 1, no. 3, page 10) --Jim Dr. James H. Coombs Software Engineer, Research Institute for Research in Information and Scholarship (IRIS) Brown University jazbo@brownvm.bitnet From: MCCARTY@UTOREPAS Subject: texts wanted (19 lines) Date: 1 February 1988, 11:44:12 EST X-Humanist: Vol. 1 Num. 623 (623) -------------------------------------------- From Mark Olsen before we scan and keyboard some texts by walter pater, does anyone have or know of the following texts on disk: *marius the epicurian* and *gaston de la tour*. i am not sure that the latter exists in print form, so we might have to keyboard it from mss. any other texts by pater would also be useful. thanks. mark From: MCCARTY@UTOREPAS Subject: OS/2 (50 lines, and not of code) Date: 1 February 1988, 13:41:31 EST X-Humanist: Vol. 1 Num. 624 (624) -------------------------------------------- From Jack Abercrombie Last week I attended an IBM sponsored course on OS/2, IBM's new micro operating system. I came away with some personal observations that I haven't seen in most published descriptions. First, OS/2 is the IBMer's ultimate answer to microcomputing: mainframe computing brought to you in a smaller box. For me, the idea of batch processing in the background while I work in the foreground interactively is an exciting opportunity that I have missed since I left the mainframe and the minicomputer worlds for the micro environs. Nevertheless, I worry that the processor is too small to handle the multi-tasking. I am also concerned that most humanists don't need that kind of power. In almost all cases, they require better peripherals and not a box that can juggle several tasks at once. My second observation is that OS/2 will not be as difficult to teach as some have led us to believe.
Most users, I think, will not start off using OS/2 fully and will continue to work in the provided DOS mode. When these users start to move to batch processing, there is limited knowledge needed to make the machine work reasonably well. The reason for this is that in the installation of OS/2 the configuration file sets up the entire system for a user. As long as the user does not fool with the default settings, they should be able to work in a batch mode. Of course, they will not be able to control the hardware as well as they possibly could if they understood how to set speeds of processing, memory allocation, etc. My last observation concerns the hardware. It is clear to me that OS/2 needs and eats memory. To avoid swapping memory to disk, which slows your processing time down and can lead to other nasty problems, buy OS/2 with more than enough memory, especially if you plan to do more than two or three tasks at a time. Also, I have strong doubts that OS/2 multi-tasking will work reasonably fast on low-end machines such as an IBM AT or System 2 (Model 50). I think we will eventually find IBM suggesting that if you really want to do multi-tasking, you do it on an 80, 90, 100, and other models to be announced. Of course, you could always use the mainframe, n'est-ce pas? From: MCCARTY@UTOREPAS Subject: Ibycus computer users Date: 2 February 1988, 09:03:37 EST X-Humanist: Vol. 1 Num. 625 (625) -------------------------------------------- From Sterling Bjorndahl I would like to get a head count, if I may, as to how many Ibycus computer users there are reading HUMANIST. I am interested in setting up an online discussion forum for Ibycus users (especially the microcomputer version). If we are few enough, and we are all on BITNET/NETNORTH/EARN, we can set it up very "cheaply" using CSNEWS@MAINE's CSBB bulletin board utility (to subscribe on CSNEWS you must send an interactive message - hence the network limitation).
The advantage of CSNEWS is that we won't have to mess with LISTSERV software :-). I am interested in hearing from all interested parties. Sterling Bjorndahl Institute for Antiquity and Christianity Claremont, CA BJORNDAS@CLARGRAD.BITNET From: MCCARTY@UTOREPAS Subject: OS/2 (45 lines) Date: 2 February 1988, 09:07:53 EST X-Humanist: Vol. 1 Num. 626 (626) -------------------------------------------- From Wayne Tosh I found the observations on OS/2 interesting, particularly the one concerning whether humanists even need multi-tasking, given the nature of most of their activities (such as word-processing). On the one hand, such a blanket dismissal is always a bit troubling. On the other, we do have in my own department a colleague who is pushing for the purchase of a 286-class machine to support the multi-tasking environment of Desqview. While I myself like the idea of popping from one application to another quickly, I wonder whether most of my colleagues wouldn't rather have more (cheaper) workstations. They have found the learning of word-processing (PC-WRITE) a steep enough process that most are, for the moment, still unwilling to go on to database and spreadsheet software, for instance. So I wonder whether it isn't premature to be spending our limited funds on a 286 machine in order that, as this colleague puts it, "Everyone can have a chance to sit down and play (sic) with it (Desqview)." One measure of the prematureness of this proposal is, I think, my colleagues' unenthusiastic reception of a menuing interface which I recently put at their disposal. If they feel, as they seem to, that it is too much to read a few lines of options from which to choose in order to execute some program or other, then I doubt that they will take readily to a shell like Desqview and the juggling of several processes at once. Are you aware of any discussion on this subject? Wayne Tosh, Director Computer Instructional Facilities English Dept--SCSU St. 
Cloud, MN 56301 612-255-3061 WAYNE@MSUS1.bitnet From: MCCARTY@UTOREPAS Subject: RE: texts wanted (19 lines) Date: 2 February 1988, 09:09:30 EST X-Humanist: Vol. 1 Num. 627 (627) -------------------------------------------- From Wayne Tosh One possible source of further information might be Mark Emmer Catspaw, Inc. P. O. Box 1123 Salida, CO 81201 Mark publishes irregularly the newsletter "A SNOBOL's Chance" and markets his implementation of SNOBOL4+ for the PC, in addition to Elizabethan texts and the King James Bible on disk. Wayne Tosh, Director Computer Instructional Facilities English Dept--SCSU St. Cloud, MN 56301 WAYNE@MSUS1 From: MCCARTY@UTOREPAS Subject: Re: OS/2 and multitasking (56 lines) Date: 2 February 1988, 09:16:21 EST X-Humanist: Vol. 1 Num. 628 (628) -------------------------------------------- From Hans Joergen Marker Jack Abercrombie had some comments on OS/2, and although I share his generally skeptical view on the matter, there is a point in his comment with which I disagree. It is: "I am also concerned that most humanist(s) don't need that kind of power. In almost all cases, they require better peripherals and not a box that can juggle several tasks at once" This may or may not hold true for most humanists, but it is not true in the field of history. Many historians may feel that they don't need much computing power because the software to make use of the increased power is not available yet. But in order to make the computer an adequate research tool for the historian, and not just an expanded typewriter/calculator, what we need is exactly a multitasking software environment. (On the lines of what Manfred Thaller describes as the historical workstation.) In this concept, calculation of ancient measures and currency, geographical references, and searches for appropriate quotations are handled by background applications, leaving the historian free to take care of his actual job of making history out of the bits and pieces of information on the past.
I feel that in historical research we often have a problem with making one person's research useful for the next person doing research in a related field. Most historians feel that they have to understand for themselves how the different units of a particular system of measurement relate to each other, and in that way we all remain on the same level of abstraction. It is my hope that through the use of software as the means of communicating the results of research, a qualitatively different way of making historical research will be made possible. An example: If I know that the Danish currencies of the early 17th century relate in a certain way to each other, I provide not only the article with tables and stuff like that, but also a piece of software that does the actual conversions. This approach would naturally be more useful if a general framework existed in which the different pieces of software fitted in and combined into an integrated unit: the historical workstation. Given the existence of a historical workstation, future research can take two major paths, either utilising the tools provided in it for traditional historical research aims, or refining or expanding the tools provided. In this concept, software development becomes an integrated part of historical research. The term for this could be "historical informatics". Hans Joergen Marker From: MCCARTY@UTOREPAS Subject: OS/2 (38 lines) Date: 2 February 1988, 09:22:36 EST X-Humanist: Vol. 1 Num. 629 (629) -------------------------------------------- From Sebastian Rahtz The note on OS/2 was interesting; it seems curious that IBM are writing a new operating system to do what non-micro users have had for years. Why should I get excited about OS/2? Because it will allow me to run MS-DOS programs in batch mode? wow. Since MS-DOS has at least one root as a cut-down Unix, it seems perverse to build it up again in a new direction - why not just use Unix?
My regular daily machines are a Sun 3/50, and a Masscomp 5600; both of these have a single chip (68020) doing the work which provides enough power for me and a number of other people, in the context of a mature operating system (Unix) which already gives me a vast selection of tools for my work. If I had any money, a Sun 3/50 of my own would set me back about 5000 pounds, which I don't regard as a quantum leap above a fully configured PS/2 (such as a model 80 with 8 Mb of memory etc). Of course this is a trivial point, and IBM aren't going to give up on OS/2, and it will all be successful, yawn yawn. But let's not kid ourselves that it adds anything to our desktop facilities; now if you gave me a machine with half a dozen transputers in, and a language to let me play with them, there would be an intellectual stimulus in the challenge of co-ordinating my new friends... Let me hear praise for OS/2 from someone who has used both that and a decent Sun workstation, and then I'll start being convinced. yrs a dinosaur From: MCCARTY@UTOREPAS Subject: Text encoding (36 lines) Date: 2 February 1988, 14:00:26 EST X-Humanist: Vol. 1 Num. 630 (630) The following is extracted from a note from Paul Fortier, who is not a member of HUMANIST, but who suggested that we might air this on HUMANIST in order to get reactions. Replies may be sent to Paul at FORTIER@UOFMCC.BITNET or to me (IDE@VASSAR.BITNET). ----------------------------------- It seems to me that the ACH text encoding guidelines should have, parallel to the printed version, a program version which will run on as many machines as possible, at least all micros. The user would load this program when she/he wants to begin inputting a new text, and the program would interrogate the user on the features of the text: language, genre, author/anon., date, edition used, and on and on and on, right down to how accents are coded in languages that use them.
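The interrogation program proposed here could be sketched as follows. The field list and header syntax are invented for illustration only; they are not part of any actual guideline:

```python
# Sketch of the proposed interrogation program: ask the user about the
# features of a text being input, and emit a header record to place at
# the top of the file. Field names and header format are illustrative.

FIELDS = ["language", "genre", "author", "date", "edition", "accent coding"]

def build_header(answers):
    """Format feature/answer pairs as header lines for the file top."""
    lines = ["*** TEXT HEADER ***"]
    for field in FIELDS:
        lines.append(f"* {field}: {answers.get(field, 'unknown')}")
    return "\n".join(lines)

def interrogate():
    """Interactively ask the user about each feature of the text."""
    return {field: input(f"{field}? ") for field in FIELDS}
```

The point of the sketch is only the shape of the idea: every archive text would begin with a uniform, machine-generated block answering the same questions.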
I had thought this would be a useful way of encouraging users to have an explicit header record on text files so that archives could pass them on, etc. This is rarely suggested in the literature, possibly since most of us old-timers wrote such information with a felt-nib pen on the top of the Hollerith cards in the first box of the file, and never really thought to put it in the text file when we switched up to better technology. A second advantage to this approach is that it could also at the same time be used to fill in tables for filter and markup minimization routines (like Chesnutt's program for printer-drivers) automatically. That way people who wanted a five or ten-character code for an 'e' grave accent could have it, and I could input e`. From: MCCARTY@UTOREPAS Subject: OS/2 Date: 2 February 1988, 14:04:12 EST X-Humanist: Vol. 1 Num. 631 (631) -------------------------------- From Mark Olsen Jerry Pournelle got it right: OS/2 -- Yesterday's Software Real Soon Now. Unfortunately, the MS-DOS-OS/2 kludge is the only serious game in town for day-to-day IBM micros. Why would anyone want multi-users on a 286/386 box, anyway? If running applications in the background is all we really want OS/2 for, then use something like DoubleDOS (which is being given away for $29.97). OS/2 will win, in spite of it all, because of those three magic letters: IBM. Mark From: MCCARTY@UTOREPAS Subject: OS/2 (38 lines) Date: 2 February 1988, 14:05:02 EST X-Humanist: Vol. 1 Num. 632 (632) -------------------------------- From Jack Abercrombie Let me clear up a point concerning OS/2. You can only run MS-DOS applications in the foreground and not in the background in batch mode. There is a linker system for converting DOS applications into OS/2 applications. It appears that the linker works best for "C" programs rather than TURBO PASCAL, though we have yet to try it out since we lack sufficient memory to run OS/2 on our System 2 machines and IBM AT's.
Another aspect of OS/2 that another reader raised is that there is insufficient third-party software for applications. I think then you might ask me what we plan to do given this situation and also the fact that we are going to install a System 2 (80) on the network for general access to large text bases from remote locations. Jack, what operating system do you plan to run? The answer is UNIX! OS/2 is not there yet and won't be in place for our type of application until 1989. Furthermore, we have a number of SUNs and APOLLOs that also are UNIX based, as well as VAX computers, so that the sensible thing for us over the short-term and perhaps the long-term is a UNIX operating system. Of course, we are willing to review this decision at a later date. JACK ABERCROMBIE ASSISTANT DEAN FOR COMPUTING (HUMANITIES) DIRECTOR OF THE CENTER FOR COMPUTER ANALYSIS OF TEXTS UNIVERSITY OF PENNSYLVANIA From: MCCARTY@UTOREPAS Subject: OS/2. multi-tasking, and all that (49 lines) Date: 2 February 1988, 14:06:22 EST X-Humanist: Vol. 1 Num. 633 (633) -------------------------------- From Jim Cerny At the risk of getting somewhat tangential to the interests of most HUMANIST subscribers, I can't resist making a few observations about multi-tasking. I have been using VAX/VMS systems (from 8650 size to VAXstation 2000 size) and a Macintosh for quite a while. As multi-tasking, or the promise of it, comes to Macs and IBM PCs, I occasionally try to extrapolate from our VAX/VMS usage to imagine how people will use multi-tasking on desk-top machines. "Our" covers various kinds of users. There are myself and the other staff in our Large Systems Support Group who are relatively expert in VMS usage and who are to varying degrees involved in VMS system management. There are faculty users. There are student users. Assorted others. The big multi-tasking use I see is background printing. Then, for some users there is the need to run batch jobs.
For faculty in the definitely non-humanist number-crunching areas, there are spells when long batch jobs get run again and again. For staff involved in system management there are various periodic (daily, weekly, monthly) maintenance tasks to run in background. But overall it is background printing that is needed on the large machines and which I see as the primary extra task(s) needed on the desktop machines. When I look long and hard at the most sophisticated things we do as computer support staff, it is to "spawn" one or more additional processes to do something while leaving the original process suspended. That is multi-tasking, but not very demanding. It is what Switcher has provided on the Macintosh, except (and this is a big except) for the appropriate memory and process management to keep one process from straying and clobbering another one. Jim Cerny, University Computing, University of NH. From: MCCARTY@UTOREPAS Subject: OS/2 (39 lines) Date: 2 February 1988, 16:08:59 EST X-Humanist: Vol. 1 Num. 634 (634) -------------------------------- From dow@husc6.BITNET (Dominik Wujastyk) Regarding the OS/2 debate, I am definitely in need of a multitasking operating system, although I do not think of myself as a particularly computer-intensive worker. I.e., I mainly process text, not program or anything. I use TeX all the time for formatting everything I write, except letters. While PCTeX on a 12 MHz Compaq port. III is quite fast, as TeX goes (about the same as a medium-loaded Vax), I still have to twiddle my thumbs while it chugs away. I cut my stuff up into 10--20 page pieces, which helps a bit, but the TeX processing still seems an intrusive nuisance when one is concentrating on the ideas IN the text. Even worse is the fact that I cannot print in background mode. TeX output is put on paper as a graphics image, so on a matrix printer -- which is what I have at home for drafting -- it is *very* slow by any normal standards.
This wouldn't matter so much if I could print in the background, but with PC DOS I can't. Some printer buffers and spoolers can help, and I have used this route to alleviate the problem to some extent, but it is still not the answer, because a page of graphics is a LOT more information than a page of ASCII character codes. My ideal would be to be able to have a wordprocessor in the foreground, sending text to TeX running as another job, with my previewer putting the pages up on the screen in another window simultaneously (or as soon as TeX had finished them). And, of course, background printing. Now THAT would be cooking! Dominik From: MCCARTY@UTOREPAS Subject: OS/2, Multitasking, and all that (21 lines) Date: 3 February 1988, 00:04:51 EST X-Humanist: Vol. 1 Num. 635 (635) -------------------------------- From ked@garnet.Berkeley.EDU (Charles Faulhaber) Multitasking I use a Sun 3/50 and right now have 9 windows open, in five of which processes are running. I use it primarily as a writing tool (so far), but have found it immensely useful to have two files open simultaneously in order to compare 2 versions of a text or to cut and paste from one file to another or to access my mainframe account while working on the Sun. I was a reasonably experienced UNIX user, but I find no comparison between "old" UNIX and a windowing environment. From: MCCARTY@UTOREPAS Subject: Dictionaries; OS/2 and restraint (39 lines) Date: 3 February 1988, 00:11:00 EST X-Humanist: Vol. 1 Num. 636 (636) -------------------------------- From goer@sophist.uchicago.edu(Richard Goerwitz) In NELC at the University of Chicago there are several projects underway that one might generally call dictionary-making. We have, of course, the Chicago Assyrian Dictionary, and the Hittite Dictionary. We also have a couple of people doing lexical work in related areas. Not all of this work is exactly state of the art. The CAD has been done mostly without any electronic help.
The Hittite Dictionary is being done with TRS-80 machines. Others are using dBase on MS-DOS machines. I am wondering whether there are any established approaches one can use to text-base construction. dBase is not exactly a linguist's dream. Are there better approaches available, either in theory or "off the shelf"? Let me add a parting word about another topic: OS/2. I'd hate to see the discussion get too out of hand until we know what we are talking about. After all, not too many folks have seen OS/2 yet. And even fewer have gotten to play with it. As for speculation about whether the majority of scholars will want to work in a multitasking environment, I don't think there's much way of knowing. We just don't have software that is built to take advantage of it in a way that will attract scholars in the humanities in large numbers. Restraint!! -Richard From: MCCARTY@UTOREPAS Subject: Printing in the background Date: 3 February 1988, 00:13:35 EST X-Humanist: Vol. 1 Num. 637 (637) ---------------------------- From goer@sophist.uchicago.edu(Richard Goerwitz) In response to grumblings about not being able to "print in the background," let me point out that in MS/PC-DOS, printing is inherently multitasking. You can run the DOS print command in the background. If you have a word-processor that doesn't print in the background, print to disc (most wp's have this feature). Then print the file using this DOS print command. A good print spooler will speed this process up a lot. (A print spooler is a program that intercepts DOS printer interrupts, sending the file being printed into RAM, where it waits for opportune moments to be fed out to the printer. A good spooler will work fast, yet shut down quickly when the user demands computer processing time. Good examples of MS-DOS spoolers include the PD programs MSPOOL and SPOOL.) If background down/uploads are needed, use Mirror, a Crosstalk clone that does background work like this.
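The spooler described here -- accept the print data at once, then feed it out as the printer allows -- is at heart a producer/consumer queue. A minimal sketch of that idea in modern terms (`send_to_printer` is a stand-in for real printer I/O, not an actual DOS interrupt handler):

```python
import queue
import threading

# Sketch of the print-spooler idea: the application hands pages to the
# spooler and continues at once; a background thread feeds the printer.
# send_to_printer is a placeholder for real (slow) printer output.

spool = queue.Queue()
printed = []

def send_to_printer(chunk):
    printed.append(chunk)  # stand-in for actual printer I/O

def spooler():
    """Background worker: drain the queue until a None sentinel arrives."""
    while True:
        chunk = spool.get()
        if chunk is None:
            break
        send_to_printer(chunk)

worker = threading.Thread(target=spooler)
worker.start()

# The "application" queues its pages and is immediately free to go on.
spool.put("page 1")
spool.put("page 2")
spool.put(None)   # sentinel: no more pages
worker.join()
```

The queue decouples the fast producer (the application) from the slow consumer (the printer), which is exactly the service a DOS spooler provided by buffering printer output in RAM.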
If more serious background work is necessary, use a program called Double Dos. As one recent poster pointed out, it can be had for under $30. -Richard From: MCCARTY@UTOREPAS Subject: Deadline for the ACH Newsletter is soon! (22 lines) Date: 3 February 1988, 00:16:05 EST X-Humanist: Vol. 1 Num. 638 (638) Vicky Walsh, editor of the Newsletter of the Association for Computing in the Humanities, reminds me that the deadline for submitting material to be considered for the next issue is 19 February. Any member of the ACH -- one of our major sponsors -- is welcome to submit material for the Newsletter. As difficult as it may be to believe, some computing humanists cannot be reached by electronic mail; indeed, some even actively refuse to become connected. So, the ACH Newsletter does reach people whom you cannot contact through HUMANIST. Vicky can be reached by e-mail; she is imd7vaw@uclamvs.bitnet. Yours, Willard McCarty From: MCCARTY@UTOREPAS Subject: Multitasking (41 lines) Date: 3 February 1988, 09:02:25 EST X-Humanist: Vol. 1 Num. 639 (639) ---------------------------- From Sebastian Rahtz I find the claim that all the average person wants multi-tasking for is background printing incredible! Like Charles Faulhaber, I often have half a dozen windows open when I am using a Sun, and all I am doing is writing something, like him. Is it so difficult to imagine how I can have one window for my mail (brooding on what to reply), one for playing Mazewar, one for editing my file, one for running it through LaTeX, another for previewing, another for a database process that's getting some data I want? I do not know about you people and your computers, but they are by no means fast enough for me - I often want to start a new job while the computer is tediously processing another.
Take a reasonable job of editing a book: it took me about 40 minutes on a Sun 3 to process from scratch the whole of a conference proceedings I just finished (3 passes through LaTeX and 1 through BibTeX - don't tell me to use a silly Mac, I have my standards..); what am I supposed to do while this burbles away? read a book? no, I want to write a letter, edit a chapter that's just been processed, run a program etc; I WANT multi-tasking. I suspect that those who think of multi-tasking as 'batch processing' haven't used a proper windowing system... or to be more 'academic', let's take a project being worked on here, an archaeological database that extracts details of pots and sends an image of each in PostScript to a NeWS process; if we did this normally, the database would suspend, draw a picture and then resume; with each pot in a separate window, I can play with the generated images while the database is working, and I can keep a number of images on my desk. Sebastian Rahtz From: MCCARTY@UTOREPAS Subject: Paul Fortier's sensible remarks about textual encoding (21 ll.) Date: 3 February 1988, 09:04:22 EST X-Humanist: Vol. 1 Num. 640 (640) ---------------------------- From Grace Logan Paul Fortier says such sensible things! I would just like to enthusiastically support his suggestions, especially the part about header information being easily (or even automatically) entered. I have been in computing long enough to have gone back to texts as much as ten years later, and I fervently wish that Paul's recommendations had been in force when they were input. Having the kind of information Paul talks about at the top of every file would have saved me so much time! From: MCCARTY@UTOREPAS Subject: Multitasking and all that (16 lines) Date: 3 February 1988, 09:07:38 EST X-Humanist: Vol. 1 Num. 641 (641) This is really a rather interesting discussion. I recall something that Gaston Bachelard says in _The Psychoanalysis of Fire_, that "Man is a creature of desire, not of need."
Let us not ever put shackles on our imagination, especially not here! Willard McCarty, mccarty@utorepas From: MCCARTY@UTOREPAS Subject: Multitasking according to IBM (20 lines) Date: 3 February 1988, 20:14:07 EST X-Humanist: Vol. 1 Num. 642 (642) ---------------------------- From Jack Abercrombie Just a quick note on IBM's concept of multi-tasking. What was discussed in the seminar I attended was not having multiple windows open at the same time, though they made it clear that that is a direction IBM hopes to move in with the release of the Presentation Manager at the end of 1988. No. They presented a mainframe batch processor, and not a true windows environment. JACK ABERCROMBIE From: MCCARTY@UTOREPAS Subject: Fortier-style texts (19 lines) Date: 3 February 1988, 20:16:22 EST X-Humanist: Vol. 1 Num. 643 (643) ---------------------------- From Sebastian Rahtz It seems obviously sensible that texts should have a "Fortier heading" explaining what they are about, but I don't really think a specific program for adding this stuff is a very good idea. Surely you text-encoding standard gurus have dealt with the idea of the format of a text header, however it is created? If not, shame on you... sebastian rahtz From: MCCARTY@UTOREPAS Subject: Data headers and SGML (not too vehement) (41 lines) Date: 3 February 1988, 20:17:59 EST X-Humanist: Vol. 1 Num. 644 (644) ---------------------------- From David Durand There has already been a lot of discussion of SGML on this list which does not have to be re-opened. However, it is worth noting that a document type definition or SGML prologue is detailed documentation of a file format, with the additional advantage that one is required to use mnemonic names to indicate the special information in the text. That is not to say that an SGML prologue gives you all the information you might want, just that it gives much that is essential, and requires (hopefully) meaningful names for all indicated information.
Some other points are worth noting: the creation of the structure definitions for a file provides a very useful discipline to control the consistency of entered data, despite its time-consuming creation and seeming obstruction of the straightforward process of data entry. I think that in some ways the SGML debate is like the programming community's debate over structured programming. It all seems like such a bother, in part because it is an attempt to reduce the total effort of all users of the data at the expense of some extra effort on the part of the preparers. Finally, it is worth remembering that SGML is optimized for interchange, and that fairly simple tools can be used to convert to and from SGML and special purpose formats to allow more efficient searching or data retrieval or scansion or whatever. Well, a simple comment about format headers has turned into a small rant on the virtues of standard markup. In closing I'd like to say that I don't necessarily think that SGML is perfect, just that it has addressed the right questions in the right KIND of way. Certainly, it could have been ten times simpler and still done the job. From: MCCARTY@UTOREPAS Subject: Multitasking and windows (36 lines) Date: 3 February 1988, 20:26:26 EST X-Humanist: Vol. 1 Num. 645 (645) ---------------------------- From Sebastian Rahtz I used to play with Microsoft Windows; apart from the speed etc, we can assume (I hope) that OS/2 will not *look* that different. What was wrong with it was not that it was cripplingly slow, but that the area of the screen you could carve into windows was too small. The 25 x 79 screens on our PCs are TOO SMALL to work well on. Bring on at least A4-size screens if not bigger.... there's no point in saying this, mind; it's like asking for a better keyboard. Do any punters in HUMANIST-land have an extended edition OS/2 with the micro-DB2 grafted underneath? Does it exist yet?
Now there IS an interesting development, if it works as it might, with references to data being passed through a relational database manager instead of sequential file access. Somewhere recently I read an interview with Laurence Rowe (of INGRES fame), who saw the future as a hypercard interface to INGRES; I like this - let's stop seeing our hard disks as collections of named files and see them instead as a giant relational database reflecting the relationships of all the data we possess. Our applications need then only pass on an abstract, file-independent query to the OS, and get back an answer. hoorah, high-level coding rules OK. I expect Bill Gates and his boys thought of all this ages ago. Does anyone have experience with Microsoft Bookshelf? sebastian rahtz From: MCCARTY@UTOREPAS Subject: Contributions to the ACH Newsletter (20 lines) Date: 3 February 1988, 20:30:31 EST X-Humanist: Vol. 1 Num. 646 (646) Good news: Nancy Ide tells me that contributors to the ACH Newsletter don't have to be members of that organization. So, if you've got something to say to North American computing humanists, or something to ask them, the Newsletter is also open to you, even if (God forbid) you are not a member of the ACH. Again, the editor is Vicky Walsh, her address imd7vaw@uclamvs.bitnet, the deadline for the next issue 19 February. Yours, Willard McCarty From: MCCARTY@UTOREPAS Subject: Printing in the background (24 lines) Date: 3 February 1988, 20:45:39 EST X-Humanist: Vol. 1 Num. 647 (647) ---------------------------- From dow@husc6.BITNET (Dominik Wujastyk) Richard Goerwitz didn't take the point about background printing of graphics data. The DOS PRINT.COM only works with a stream of plain ASCII characters, not graphics data. The other solutions he mentions, e.g. DoubleDos, may well work, although all the "simple" multitasking efforts on the market that I have tried all had some fatal flaw.
Background spooling can work, as I said, but for more than a very small amount of graphics data a PC will run out of memory very quickly. A test file of the words "this is a test" ended up as a file of 991 bytes, to give a concrete example. This is, of course, for a printer that does not support downloaded fonts, so the whole bitmap for every character is there. Dominik Wujastyk From: MCCARTY@UTOREPAS Subject: 9 windows at once (18 lines) Date: 4 February 1988, 16:38:01 EST X-Humanist: Vol. 1 Num. 648 (648) ---------------------------- From Ronnie de Sousa Re Charles Faulhaber's nine windows open at once: if you are just writing, you don't need OS/2 for that. All you need is a decent scholar-oriented word processor like NOTA BENE, in which you can also open nine files at once, and even automatically compare two of them (finding the next point of discrepancy and then the next point of agreement). ...Ronnie de Sousa, Toronto From: MCCARTY@UTOREPAS Subject: Looking for Adam Smith on-line (21 lines) Date: 4 February 1988, 16:43:48 EST X-Humanist: Vol. 1 Num. 649 (649) ---------------------------- From Malcolm Brown Does anyone have either "Theory of Moral Sentiments" or "Wealth of Nations" by Adam Smith on-line and available? If so, please send a note to me (GX.MBB@STANFORD.BITNET) thanks! Malcolm Brown Stanford University From: MCCARTY@UTOREPAS Subject: OS/2, multitasking, multiple windows, and more (31 lines) Date: 4 February 1988, 17:07:51 EST X-Humanist: Vol. 1 Num. 650 (650) Last month we had a discussion about some of our needs for software. Now we seem to be having another about hardware. We all know how silly and moronic the ruling things of the present tend to be, being nevertheless very useful, but what about the future? What gizmos would we as humanists like to have? Perhaps our collective influence is usually minuscule, but I suspect that if we imagine well, what we imagine may stir someone with the means.
Multitasking would appear to be one thing we want, with a multitude of windows, and not just for wordprocessing. Diverging back to software, perhaps the problem of small screens can be solved by having "rooms" as well as "windows." (Who has heard of the work being done at PARC on "rooms"? Would one of our members there like to report on this?) Who has had experience with current multitasking shells, e.g., DESQVIEW, MicroSoft Windows? Does this experience suggest anything about future systems? What else? Willard McCarty mccarty@utorepas.bitnet From: MCCARTY@UTOREPAS Subject: Headings for documents (25 lines) Date: 5 February 1988, 09:07:16 EST X-Humanist: Vol. 1 Num. 651 (651) ---------------------------- From amsler@flash.bellcore.com (Robert Amsler) Sebastian Rahtz expressed my sentiment exactly. There already exist plenty of programs in which you can create this header information; they are called `text editors'. The problem is there doesn't (yet) exist any statement of what the lines of such text should contain. Rather than a hopeless quest to write software for every PC on the market, it would be more sensible to describe what the attributes should be for a machine-readable text to be acceptable. I really think the archives have some obligations here to nag their contributors to provide this information, since if they don't get it, then it will result in multiple recipients of the archive's data having to do without it or individually nagging the originating author -- or maybe that is a good idea. Maybe we should gather together the names of all the people who created undocumented machine-readable text and ALL send them letters asking for the information. From: MCCARTY@UTOREPAS Subject: OS/2 and the Mac Date: 5 February 1988, 09:13:27 EST X-Humanist: Vol. 1 Num.
652 (652) ---------------------------- From elli%ikaros@husc6.BITNET (Elli Mylonas) All the discussion on OS/2 centers around comparisons of this operating system with mainframes and other higher-end machines. Background processing and a semblance of multitasking are available NOW on the Macintosh, using the Multifinder system (version 4.2). It is possible to print in the background, or to download files in the background, while working away at something else in the main window. It is also possible to have more than one application open at the same time, although only one is active. I know that this requires more memory than the average Mac has, but even a memory upgrade costs less than the machines against which OS/2 is being measured (an extra 1MB for the Mac Plus -- $175, upgrade to 2.5MB for the SE -- $460). It is surprising how fast one can come to depend on the multiple window, multiple application environment that Multifinder offers. Mac users, even those who are not expert users, start to make use of it immediately, and without realizing they are doing something fundamentally different. This is primarily due to the consistency of the Mac user interface, to which Multifinder adheres. So, to answer those who say that humanists only do word processing, and do not need to do 2 things at once: all the humanists who are given the opportunity to do so make use of it, if it is not hard to learn. After all, compiling an index or pulling cross-references requires CPU time, while the writer just sits and waits. Furthermore, few people do *just* word processing. They have their references in a database, they may look at images or maps they have online, and they may be logged in on their local networked machine reading HUMANIST. Not to mention more mundane chores like looking up an address in their electronic phonelist, or cleaning out their files.
I do not want to say that the Mac with Multifinder is the solution to everyone's computing needs, but it is available now, on an inexpensive machine. We cannot all have Suns, and we do not all have that kind of networking, so as to be able to use workstations off a central server. Elli Mylonas elli@wjh12.harvard.edu From: MCCARTY@UTOREPAS Subject: File Documentation (45 lines) Date: 5 February 1988, 10:12:29 EST X-Humanist: Vol. 1 Num. 653 (653) ---------------------------- From Bob Kraft OK, I'm ready to get serious and gather the combined wisdom of the collected HUMANISTs on what you want by way of information about a text file. This is timely, because I am in the final stages of attempting to document the materials included on the CCAT part of the new PHI/CCAT CD-ROM (see the OFFLINE 17 list). This documentation will be included with each disk -- ideally (and in the future), it would be on the CD-ROM itself, but in this instance it was not yet ready. In any event, the categories I have used are as follows: (1) Edition of the text used (if applicable), or background information about the text; (2) Encoding information -- who deserves credit for creating the electronic form and/or for making it available? (3) Coding information -- what special symbols are used, how are non-English characters represented, etc. -- often with reference to appended charts; (4) Verification status -- how well verified is the text (if known)? (5) Special Notes -- e.g. to whom should questions or corrections be addressed (where does quality control reside), are there any special issues to consider in using the text (e.g. restrictions of any sorts, relation to similar texts, future plans for revising the format). I have not thought it necessary to stipulate the size of each file (some files are anthologies -- e.g. 
Arabic, Sanskrit -- while others are homogeneous), although that might be useful information especially for persons who plan to offload material from the CD-ROM for individual treatment. Are there other important pieces of information you think should be included in such documentation? I should look at the format of the Rutgers Inventory to see whether the librarian's needs are covered as well. Speak now .... Bob Kraft for CCAT From: MCCARTY@UTOREPAS Subject: SGML editing (23 lines) Date: 5 February 1988, 10:24:48 EST X-Humanist: Vol. 1 Num. 654 (654) ---------------------------- From David Nash We are about to draw deep breaths and plunge into converting the Warlpiri dictionary master files to SGML. We have been inspired to do this by most of what we know about SGML. Does anyone want to talk us out of it? Has anyone experiences to share of SoftQuad's Author/Editor SGML-based software for the Macintosh? Are there any alternatives on the market? Less importantly, is ISO 8879 (on SGML) available in machine-readable form? -DGN From: MCCARTY@UTOREPAS Subject: Windows and rooms (36 lines) Date: 5 February 1988, 10:47:16 EST X-Humanist: Vol. 1 Num. 655 (655) ---------------------------- From BobKraft Although I get the feeling that many people don't want to keep hearing about the IBYCUS SC -- it's that special scholarly micro-workstation that has been working with CD-ROMs since 1985 (!!), among other things -- you should at least know that the SC (for "Scholarly Computer") has ten obvious "windows" (or perhaps better, "rooms") that are accessed through the number pad keys and can be used to access and work with various files conveniently. Actually there are more than 10, but the 10 are obvious. The SC does not "multi-task" in the sense of being able to run programs in each room at the same time.
Only one program can be actually running, in the foreground or in the background, but the memory for each of the rooms (to the limits of available RAM) is readily accessible at a keystroke. Thus I can write my 9 different articles at the same time while using window/room 0 to pull materials off the CD-ROM. Why mention this? Because if we want to discuss what scholars think they need, and how they might want to use various types of proposed options, it is good to know what some scholars have, and to find out how their "needs" and hopes/wants change once they have what they thought they wanted. What do the IBYCUS SC users see as the next level of wants in relation to their windowing/rooming environment? Bob Kraft, CCAT From: MCCARTY@UTOREPAS Subject: Windows, multitasking, and programming environments (33 ll.) Date: 5 February 1988, 11:46:50 EST X-Humanist: Vol. 1 Num. 656 (656) ---------------------------- From Randall Smith <6500rms@UCSBUXB.BITNET> I have been experimenting with Windows and XTREE PRO, trying to find a suitable environment for programming in C. I have not yet been successful in getting Windows to work smoothly, but it does not seem to be too slow, though I am running on a 12 MHz clone with a high-speed hard disk. I am expecting version 2.0 shortly, and I hear that it is much faster. As far as size of screen, I am using one of the super EGA cards (the Vega Deluxe), which gives me a resolution of 752x410. This is *much* better than the standard resolution and provides much more room to put things on the screen. My goal is to be able to perform a compilation according to a make file in one window and do another task or two while that compiling is continuing. Also, I am trying to get my TLG search software to run in a window. I know that this will slow it down, but I would rather have it take ten minutes during which I can use the computer for something else than five minutes during which the machine is lost to me.
I will pass along more information on this if I can get it to work. Has anyone tried anything like this with Windows 386 or Desqview? By the way, XTREE PRO with a mouse and its new editor is not a bad DOS manager. I recommend giving it a try. I will be happy to provide a Logitech mouse driver for it if anyone is interested. Randall Smith From: MCCARTY@UTOREPAS Subject: multi-tasking (18 lines) Date: 5 February 1988, 20:56:37 EST X-Humanist: Vol. 1 Num. 657 (657) ---------------------------- From Wayne Tosh > > Rahtz makes some very good points concerning multi-tasking and > multiple windows as he is able to realize them on his Sun > workstation. Would that we all had NOW such a large-screen > environment! My objection is to colleagues who want to spend > limited (English department) funds on small-screen 286-based > machines--who wants to do multitasking while peering through a > keyhole? From: MCCARTY@UTOREPAS Subject: Tennessee Williams' plays on-line? (17 lines) Date: 5 February 1988, 21:03:30 EST X-Humanist: Vol. 1 Num. 658 (658) ---------------------------- From Rosanne Potter Does anyone have copies of The Glass Menagerie, A Streetcar Named Desire and/or Cat on a Hot Tin Roof on-line? Or know of their existence in an archive? Please respond to me at POTTER@UK.AC.OXFORD.VAX [for those outside JANET that's potter@vax.oxford.ac.uk -- W.M.] Thanks. From: MCCARTY@UTOREPAS Subject: documenting texts (26 lines) Date: 5 February 1988, 21:07:31 EST X-Humanist: Vol. 1 Num. 659 (659) ---------------------------- From Lou Burnard Just before Xmas I sent an enquiry to Humanist, requesting feedback on just what minimal information people would like to see recorded about texts in the Oxford Text Archive catalogue. I know the message didn't get lost because I happened to meet one Humanist in person a week or so later (always an inexplicable pleasure to see those acronyms fleshed out in a suit) who gave me his views using that curious old technology known as speech.
Alas, that represented exactly 50% of the response my enquiry provoked, i.e. I got one (1) other reply. What I want to know, apart from the answer to my original query, which Bob Kraft has just posed again, is (a) is the response rate to queries placed on Humanist always so low? (b) or was it a boring question? I had considered mailing an enquiry to all other enquirers, but forbore! Lou From: MCCARTY@UTOREPAS Subject: SGML for the dictionary Date: 5 February 1988, 21:14:43 EST X-Humanist: Vol. 1 Num. 660 (660) ---------------------------- From Nancy Ide I would like to suggest that NASH at MIT consider holding off on the conversion of the dictionary to SGML. I expect they will be defining document types and new tags for this application, and it may be that the effort will duplicate that of the ACH/ACL/ALLC Text Encoding Initiative. We will have a very large group at work on tagging schemes for dictionaries, and while this work will not be well enough along for at least 18 months to provide a concrete scheme for actual use, the wait might be worthwhile. We expect our scheme to be based on, or even an extension of, SGML and the AAP implementation of SGML for typesetting, and so they will get what they need, plus compatibility, without the trouble of developing the tags on their own. Nancy Ide ide@vassar From: MCCARTY@UTOREPAS Subject: Hardware wars (30 lines) Date: 5 February 1988, 21:18:57 EST X-Humanist: Vol. 1 Num. 661 (661) ---------------------------- From David Graham This discussion clearly has the potential to degenerate very quickly into one of those depressing and unproductive flame wars about hardware that periodically rage through the Usenet comp.sys.* groups. [As someone recently wrote there, "Oh no, not another of those 'Your favorite microprocessor is sh*t' discussions".]
Instead of flaming one another's preferences and arguing about whether or not Multifinder is 'true multitasking' (I can see that one coming), may I suggest that we listen to Willard McCarty's suggestion to restrict the discussion to accounts of actual experience, and resist (insofar as possible) the temptation to evangelize? I can't afford a Sun either (I can't even afford a memory upgrade for my Mac), and it doesn't help matters to have the feeling that HUMANIST's Sun users are looking down their noses from a great height. One of the reasons I joined HUMANIST was that I thought we were all in this together, as Humanities people with an interest in computing, and because I thought that HUMANIST would provide a forum for some interesting discussions. So far I haven't been disappointed (though frequently reminded of my ignorance), but if we're going to waste time and bandwidth flaming each other, I'll stop reading. Am I being thin-skinned? Is this hopelessly idealistic? David Graham dgraham@mun.bitnet From: MCCARTY@UTOREPAS Subject: PhD exams in Computing and the Humanities? (89 lines) Date: 5 February 1988, 21:21:35 EST X-Humanist: Vol. 1 Num. 662 (662) ---------------------------- From Sterling Bjorndahl Here is another twist on the issue of academic credit in the Humanities for work with computers. Is it academically legitimate for a PhD student to write one of his or her exams in the general area of "Computers and blank" where 'blank' is his or her field of study? In the case I am thinking of, the topic would be something like "Computer Assisted Research and the Study of the New Testament and Christian Origins," including early Christian literature and movements. I might even be willing to broaden it further and include all of the biblical corpus. Some arguments pro: One can develop a Forschungsbericht, and in our field at any rate that seems to be a kind of magic that makes something a legitimate field of study.
Admittedly this history of the investigation is not that old, but it is at least as old as is structuralism in the study of this corpus of literature - if not older! One could do a very nice job, I think, of looking at various computer-assisted projects, evaluating their methods and results, identifying diachronic changes as machines and methods became more sophisticated, and analyzing the difference that the computer made to each investigation. One could then attempt to generalize about the role of computers in this area of study, and extrapolate as to how the role will change in the future. I must admit that this is the only aspect of such an exam that I can imagine at this time. We typically have four exams in our field, each exam being four hours long and consisting of from two to four questions. Could one write for four hours on such a Forschungsbericht? Probably not. But one would probably find that one hour is insufficient. What else could one write about? There are also very good arguments against allowing such an exam. The computer does function, after all, more like a "tool" than a "method," and we seldom allow exams in "tools." We would be unlikely to allow an exam in lexicons, say, or synopses of the Gospels. I have already discussed this question privately with a couple of people, and Bob Kraft has made the most eloquent statement of the issues to date. The following paragraphs are from his response to my query. From Bob Kraft: > If my category for the computer is that it is a "tool" in some ways > similar to typewriters, indices, concordances, scrapbooks, cards, > etc., etc., I resist focusing on it by itself, although I am open to > the idea of examining the student on the uses of research tools > (including, but not only, computers). If I see it as an "approach" > similar to archaeological method, then it would seem to be an > appropriate subject in itself.
In between these two models might fall > the "library science" model, which encompasses a special set of tools > in a fieldwork environment. Would I permit a PhD exam on library > methodology? I would hesitate, despite the fact that there are > courses, programs, etc. > > Yes, we teach graduate level courses in humanistic computing, and > there are examinations in them. We also have courses in archaeological > methods. And there are courses and programs in library science. I > don't think that fact is determinative of what is appropriate to a PhD > exam. The issue that I need to explicate is why I am not very > uncomfortable about the archaeology model. Partly because discussion > of "archaeological method" has developed in a partially > confrontational context vis-a-vis "historical-philological method," in > a way that clearly required exposure of assumptions, justifications > for valuation of certain types of evidence, etc. It involves more than > knowledge of how to use a tool or set of tools efficiently (although > this "more" is not necessarily inherent in the category!). I'm not > sure that, in isolation, a similar case can be made for "computer > methodology," but I am open to being persuaded. Finally, I wonder if this would be a non-issue if this were an information science PhD rather than a New Testament/Christian Origins PhD? Someone studying computers _per se_ could very well be able to examine their application in a particular field of the humanities. Does this make a difference? Am I really asking a cross-disciplinary question? Sterling Bjorndahl From: MCCARTY@UTOREPAS Subject: SGML standard document: ref. and ordering info. (22 lines) Date: 5 February 1988, 21:25:58 EST X-Humanist: Vol. 1 Num. 663 (663) ---------------------------- From David Durand In response to a number of requests, here is the reference for the SGML standard document: American National Standards Institute.
"Information Processing -- Text and Office Systems -- Standard Generalized Markup Language (SGML)" ISO 8879-1986, ANSI, New York, 1986. I called them in New York (at: (212)-354-3300) and got the following ordering information: It is very important that you mention that you want document number ISO 8879-1986. Apparently the name may not be sufficient. $58.00, plus $6.00 shipping and handling. Mail to: ANSI From: MCCARTY@UTOREPAS Subject: Public-domain UNIX relational database program? (41 lines) Date: 5 February 1988, 21:49:45 EST X-Humanist: Vol. 1 Num. 664 (664) A colleague in Toronto, Frank Davey (English, York), is looking for a relational database program in the public domain. Any suggestions would be very welcome. In the following he describes the intended application. Willard McCarty ---------------------------------- From Frank Davey We are looking for a programme that will let us compile bibliographic entries for searches that will be useful for research into the history of Canadian publishing as an institution. We'd like to be able to search for combinations of key fields, to answer questions such as: between 1900 and 1914, what publishers published fiction by women? or, between 1860 and 1900, what cities were the places of publication for 1st books of poetry? On the other hand, we don't want necessarily to have to establish in advance the sorts of questions we want to be able to ask, and we'd like to be able to add fields (such as which book -- first, second, etc. -- an item represents in a writer's career) if we hadn't thought of it first time around. That feature would be particularly useful if a graduate student wanted to modify the database slightly so that it could answer a new set of research questions. My understanding is that a relational database would allow one to do exactly this, as well as allow an immense variety of utterly different projects.
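[Davey's examples translate directly into relational queries, and fields can indeed be added after the fact. A minimal sketch in SQL, via Python's sqlite3; the table layout, column names, and sample rows are illustrative only, not taken from any actual catalogue. -- ed.]

```python
import sqlite3

# Hypothetical schema for the bibliographic database Davey describes.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE books (
    title TEXT, author TEXT, author_gender TEXT,
    genre TEXT, publisher TEXT, city TEXT, year INTEGER)""")
con.executemany("INSERT INTO books VALUES (?,?,?,?,?,?,?)", [
    ("Anne of Green Gables", "L. M. Montgomery", "F",
     "fiction", "Page", "Boston", 1908),
    ("The Imperialist", "S. J. Duncan", "F",
     "fiction", "Copp Clark", "Toronto", 1904),
])

# "Between 1900 and 1914, what publishers published fiction by women?"
rows = con.execute("""SELECT DISTINCT publisher FROM books
    WHERE genre = 'fiction' AND author_gender = 'F'
      AND year BETWEEN 1900 AND 1914""").fetchall()
print(sorted(r[0] for r in rows))

# A field not thought of the first time around (e.g. which book an item
# represents in a writer's career) can be added later without rebuilding:
con.execute("ALTER TABLE books ADD COLUMN career_rank INTEGER")
```

The point of the sketch is only that ad hoc questions need not be anticipated when the schema is designed, which is exactly the flexibility Davey asks for.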
A useful feature of the Empress database programme is that it can be output through the Standard Generalized Markup codes that Softquad is developing for Apple (I think that Mac programme is called Author/Editor). From Frank Davey From: MCCARTY@UTOREPAS Subject: mark-up (37 lines) Date: 6 February 1988, 16:12:46 EST X-Humanist: Vol. 1 Num. 665 (665) ---------------------------- From Mark Olsen Other than a minimal amount of textual identification -- titles, editions, etc. -- coding of texts will depend on the applications and intentions of the collector. Rather than impose an SGML or some such thing, why not have a header that clearly identifies each element of the codes being used? I have several texts from the Oxford Archive that have extensive codes with no explanation of what they mean or how they were determined. This could be appended to the data file as part of the contributed text. I suspect that we gather text for the immediate application at hand -- I know I work that way -- without realizing that someone 20 years later needs some footprint to follow the trail. The general rule might be that if the character did NOT appear in the original printed edition or mss, then it is a code that must be defined. That definition should form the bulk of a header. The poor response rate from HUMANISTS recently lamented by Lou Burnard and Bob Kraft might be due to the nature of e-mail. If I can't fire off a quick response, I file the note, to be lost forever in an ever growing HUMANIST NOTEBOOK. Stored there, out of the way, they do not form an annoying pile which threatens to overwhelm my desk. Free from the threat of avalanche, I can forget the note with the good faith that I will get to it "real soon now." With a clear desk and a clean conscience, I continue on my way, safe in the knowledge that "out of sight is truly out of mind." Mark From: MCCARTY@UTOREPAS Subject: Document Characteristics (63 lines) Date: 6 February 1988, 16:13:45 EST X-Humanist: Vol. 1 Num.
666 (666) ---------------------------- From amsler@flash.bellcore.com (Robert Amsler) Minimally the description should make it possible to identify exactly what material was used as the source of keyboarded data such that someone else will be able to find another (or, if the source was unique, the specific) copy of the source to recheck the input for accuracy. Thus, the first goal is ``How can I tell someone where to check my data against the original from which it was made''. The next goal is to describe those attributes which will enable someone to appreciate how the data was captured. To describe the methods by which it was put down in the computer. Specifically, what transliterations were used; what aspects of the original were not captured (e.g. original hyphenation, original page boundaries, etc.); whether data is as-is or has been corrected in some way for possible aberrations in the original (e.g. black smudge in printing obscured letters here, but context implies it said ...; misspelling or incorrect numbers corrected by (a) checking with dictionary or (b) through incorporating errata notes from material into the copy, etc.); method by which disjoint parts of materials were entered (e.g. footnotes entered all in special footnote file, or entered at point at which footnote number appeared in text, or entered at bottoms of each page, etc.); physical arrangement of text which was captured vs. which was not -- i.e. how is the blank space in the original document being dealt with (a problem here is that original text with variable width letters must be distorted in some fashion to be keyboarded on computers with only fixed-width character displays). What is being done to represent different fonts (both font size and italic/bold/small-caps/Roman, etc.)?
Thus, the goal here is to answer the question ``How can I tell someone what steps to follow once they find the original source material to result in an exact matching copy of this machine-readable file should they also accurately type it in''. A sub-part of this last answer should include how to distinguish the original source material from any contributions of the data enterer; that is, if the data enterer created what business folks are fond of calling ``added value'' by further clarifying the text in some way (e.g. adding line/verse/chapter/etc. numbers; adding definitions from another work; providing translations of foreign quotes, or even interpreting the meaning, etc. of the material), this added value should be capable of being distinguished from the original such that the original text and the added material could be separated again. Added value sources, where such exist, should also be identified as in the first step, and where needed, their method of capture itself should be described as in the second step (this forming a type of recursion that hopefully finishes). From: MCCARTY@UTOREPAS Subject: Ph.D. in humanities computing (29 lines) Date: 6 February 1988, 16:16:25 EST X-Humanist: Vol. 1 Num. 667 (667) ---------------------------- From Norman Zacour Given what I have seen so far, I daresay that someone will soon ask to do a Ph.D. on the use of computers in medieval historical research. Why should I find such an idea uncomfortable? We teach medieval history; we teach a great deal about computers; we want our medievalists to apply the use of computers to their scholarship as much as possible; and finally, such a study might be quite interesting in itself. But is it worth a Ph.D.? Does any interesting book describing the work of academics warrant the doctorate?
I suspect that I find all this problematical precisely because I think of the doctorate as a disciplinary qualification, and while I am used to disciplines such as computer science and medieval history, there is no point in pretending that the question of historians' using computers is in itself a disciplinary qualification. It's a great idea, and I hope that we will soon have some good works on the manner in which computers have stimulated scholarship and modified techniques of study and research. This is the kind of thing that practitioners write, not students. But on the other hand, when I see what we accept now... Norman Zacour@Utorepas From: MCCARTY@UTOREPAS Subject: A colloque (78 lines) Date: 6 February 1988, 16:33:08 EST X-Humanist: Vol. 1 Num. 668 (668) ---------------------------- From Robert Gauthier This year, from July 7th to July 13th, the theme of the Colloque international d'Albi, in the south of France, will be Pictures and Texts. Workshops on visual semiotics (French B.D., films, posters...) and textual analysis will be scheduled in the morning and early afternoon. A daily conference will take place in the late afternoon. The trend of the whole colloque will be to link formal analysis with either current ideologies and axiologies or psychic human traits. Didactical aspects will not be neglected, and the interdisciplinary approach will be sustained by the participation of semioticians, psychologists, linguists, philosophers, sociologists and communication experts. Among others, world-renowned specialists like Courtes J., Ducrot O. ... have announced their participation. [The French announcement follows, in translation. -- W.M.] The Albi Colloquium proposes to put the emphasis on the study of the image and, beyond it, of the imaginary. Given the growing importance of the visual in our individual and social experience, we must ask how it functions and what function it serves in our socio-cultural universe.
Even when no words accompany it, the image is a text to be read, whose meaning is governed by particular rules. As a text, the image is obviously a vehicle for values: it is never anything but a pretext for a particular axiology; it aims to convince beyond what it represents. Conversely, the text often presents itself in imaged form, to the point of producing a "reality" effect of meaning. We are then at the starting point of the imaginary, of the oneiric: the endless inventiveness of images, with or without words. Our objective will be to propose teaching strategies usable in schools and lycees, arising from the confrontation between scientific theories and the practical approaches of teachers. We will draw on the linguistic and literary study of texts (poems, short stories, administrative texts, etc.) and of audio-visual messages (films, advertising, illustrations, comic strips, etc.), so as to construct the systems of values that semantic analysis can bring out. The chosen theme will allow collaborative work among linguists, literary scholars, philosophers, historians, sociologists and communication specialists. In workshops we will analyse text and image, studying in particular "the character" in narrative as a site of investment (values, ideologies, fantasy, etc.). We will examine the relations between the current cultural environment and the prerequisites required for the comprehension of texts. With the aim of helping pupils prepare for the baccalaureat examination, we will reflect on the theme in question within the framework of the exercise known as the "groupement de textes", starting from the example of literary descriptions, etc. In short, the aim is to develop analytical tools for both text and image. Image specialists will be able to use the resources offered by the Musee Toulouse-Lautrec in Albi and its temporary exhibitions.
Participation form, to be sent to: G. MAURAN, 19 rue du Col du Puymorens, 31240 L'UNION, France. NAME ............ ADDRESS ............ PROFESSION ............ TEL ............ Fees: 300F (students: 100F); fees + lunch: 550F/350F; fees + half board: 900F/700F; fees + full board: 1150F/950F. Rooms in the guest house, 12 rue de la Republique, Albi. Guests may arrive on Wednesday from 6 p.m.; you will be met at the station. Time of arrival ............ Do you want reduced train-fare? ............ Children's holiday-centre (lunch and tea): 300F. From: MCCARTY@UTOREPAS Subject: Preparing electronic manuscripts (28 lines) Date: 6 February 1988, 16:39:41 EST X-Humanist: Vol. 1 Num. 669 (669) ---------------------------- From Tom Benson 814-238-5277 This is a question about manuscript presentation and text editing/formatting, rather than about research per se. As such, it may be too elementary for this list, and if so, my apologies. I am preparing a book-length manuscript for a publisher who has asked that it be prepared in machine-readable form according to the markup system of the University of Chicago Press's GUIDE TO PREPARING ELECTRONIC MANUSCRIPTS. The explanation of what the text should look like is straightforward enough, but it results, if I understand it, in a situation where the only text that can be printed out is a marked-up one -- which is clumsy to read, at the very least. Is there a reasonable way to prepare such a text so that one would have a form marked up as the Press advises and at the same time a "normal" looking text for reading, reviewing, and revising? The two manuscript preparation systems to which I have easiest access are XEDIT and SCRIPT on the university's mainframe VM/CMS system, and DW4 on an IBM PC.
If anyone out there has experience working with the Chicago format, I'd be grateful for suggestions -- including the suggestion that I should just go ahead and do it their way and not worry about having "normal" looking output at any stage before the final printed book. Tom Benson Penn State University T3B@PSUVM From: MCCARTY@UTOREPAS Subject: Who uses CD-ROMs? Date: 6 February 1988, 23:08:16 EST X-Humanist: Vol. 1 Num. 670 (670) ---------------------------- From David Nash Beryl T. Atkins (Collins Publishers, 11 South Street, Lewes, Sussex, England BN7 2BT) cannot receive HUMANIST at the moment, and would like to ask you all a question: "What I want to ask them is: how many of them actually use CD ROMs in their daily work & research? [Collins] are hesitating about CD ROM publication of concordances because they don't believe enough people use CD ROMs. And they say, rightly, that one CD ROM drive in the University library isn't going to make people in departments buy their own research material." I would prefer that you reply to MCCARTY@UTOREPAS.bitnet rather than me directly, but either way I'll amalgamate replies and pass them on to Atkins. -DGN From: MCCARTY@UTOREPAS Subject: SGML and word processors (29 lines) Date: 6 February 1988, 23:10:07 EST X-Humanist: Vol. 1 Num. 671 (671) ---------------------------- From goer@sophist.uchicago.edu (Richard Goerwitz) It shouldn't be too hard to get just about any word processor to output SGML or U of Chicago or whatever marked text, as long as one is willing to create an appropriate printer table. Nota Bene printer tables are pretty easy to customize. In fact, I've customized my NB 2.0 so that it outputs Hebrew, Greek, and Syriac -- which turned out to be an easier job than I had anticipated. I would assume that any major word-processor would be sufficiently customizable that one could have it output SGML markers rather than printer codes.
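[A printer table of the kind Goerwitz describes is, in effect, a lookup from formatting attributes to the strings emitted around the affected text; pointing the entries at tag strings instead of printer control codes yields descriptive markup. A toy sketch of the idea; the tag names and table format are invented for illustration and are not Nota Bene's actual table format. -- ed.]

```python
# Map each formatting attribute to the strings emitted before and after
# the affected text. Substituting SGML-style tags for printer control
# codes turns the formatter's output into tagged text.
TAG_TABLE = {
    "underline": ("<emph>", "</emph>"),
    "bold": ("<hi rend=bold>", "</hi>"),
}

def emit(attribute, text):
    """Wrap text in the open/close strings for a formatting attribute."""
    open_s, close_s = TAG_TABLE[attribute]
    return open_s + text + close_s

print(emit("underline", "De rerum natura"))
# -> <emph>De rerum natura</emph>
```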
Really, though, shouldn't the makers of major academic word processors create SGML, UofC, and other appropriate tables for us? Or is such a suggestion a bit premature? Richard Goerwitz From: MCCARTY@UTOREPAS Subject: Re: Preparing electronic mss. (28 lines) Date: 6 February 1988, 23:11:51 EST X-Humanist: Vol. 1 Num. 672 (672) ---------------------------- From dow@husc6.BITNET (Dominik Wujastyk) I have recently changed to XyWrite II plus precisely because the underlying text file is very close indeed in format to the type of markup that the Chicago guide recommends. At the interface level, XyWrite is as polished as any major word processor. Footnotes are hidden, underlining and bold show as such on the screen, etc, etc. It is also fast and programmable. Dominik From: MCCARTY@UTOREPAS Subject: Re: Preparing electronic mss. (51 lines) Date: 6 February 1988, 23:17:56 EST X-Humanist: Vol. 1 Num. 673 (673) ---------------------------- From Allen H. Renear Tom, you should not let your publisher bully you into text processing practices with which you are uncomfortable or which do not support you as an author. Many of us have argued in many places for the AAP/SGML style tags presented in the Chicago Guide -- but the last thing such tags should be is a burden on the author. Descriptive markup is a fundamentally correct approach to text processing: it should simplify and enhance *all* aspects of scholarly writing and publishing. First, talk to your publisher about exactly how they plan to process your tagged manuscript. It may turn out that they only want to get a plain ascii file with as much descriptive markup as possible. In that case you should be able to use Script GML. This will allow you to get nicely formatted copy for proofing and good support from your computer center. I suspect this is the situation.
I always demand descriptive markup for typesetting projects -- but it makes less difference to me whether the tags are GML, Scribe, troff -ms, TEX, AAP or homegrown, as long as they describe the editorial objects of the document rather than specify formatting procedures. But if your publisher says that they really must have the tags described in the Chicago Guide you still have several options available. For instance, you can define Script macros that parallel the Chicago Guide tags, have each one end in a ">", and then use Script's ".dc" command to change the control word indicator to "<". Presto, your source file will have Chicago's AAP/SGML style tags and yet can be formatted by the Script formatter. You should have your Computer Center help you with this; it's their job. (I'm assuming your Script is Waterloo Script). In any case you will be using a general editor (Xedit) to prepare the files. This leaves something to be desired of course, but that's where we are today. For the direction in which text processing should be moving look at Softquad's Author/Editor. This is an AAP/SGML based editor for the Mac. I thought this much of my reply to Tom would be of general interest to the list. Anyone who wants further details should contact me directly. Allen Renear Computing and Information Services Brown University From: MCCARTY@UTOREPAS Subject: Use of CD-ROMs Date: 7 February 1988, 14:41:41 EST X-Humanist: Vol. 1 Num. 674 (674) Beryl Atkins has asked, "how many of them actually use CD ROMs in their daily work & research? [Collins] are hesitating about CD ROM publication of concordances because they don't believe enough people use CD ROMs. And they say, rightly, that one CD ROM drive in the University library isn't going to make people in departments buy their own research material." 
From our point of view as researchers, I suspect that we almost unanimously want Collins and others to produce the CD-ROMs despite the fact that very few now use the technology "daily", so that we can make up our minds whether or not to buy the readers and disks. After all, our private and departmental funds are very limited, and few of us will put out the cash unless we can be sure that we'll make significant use of this technology. Because they earn their living at some peril, however, the publishers want us to clamour for CD-ROM publishing so that they can minimize their risks. So, how can we answer Beryl's question? I suggest that we say (1) what CD-ROMs we would buy if we already had readers, and (2) what minimum selection of CD-ROMs would drive us to buy a reader. This is my list:
(1) desirable CD-ROMs:
  (a) the CCAT/PHI disk (soon available; see OFFLINE 17)
  (b) the New OED (when available & depending on software provided)
  (c) the TLG (if I didn't already have access to an Ibycus & the TLG in my office)
  (d) the Thesaurus Linguae Latinae (from the PHI), when it becomes available
  (e) a disk of 16th & esp. 17th cent. English lit.
(2) minimum selection: (a) & (e), or better (a), (d) & (e)
None of the above, I'd guess, are likely to be published by Collins, so this reply may not encourage them. I have great difficulty, however, imagining what I would use on CD-ROM that I don't use regularly in any form because it's not available electronically. Willard McCarty mccarty@utorepas.bitnet From: MCCARTY@UTOREPAS Subject: Preparing electronic mss. Date: 7 February 1988, 19:16:30 EST X-Humanist: Vol. 1 Num. 675 (675) ---------------------------- From dow@husc6.BITNET (Dominik Wujastyk) While we are on this subject, I have just been given _Goedel, Escher and Bach_ which was apparently produced by the author himself using a text processor called TV-Edit. Anyone heard of it?
Dominik From: MCCARTY@UTOREPAS Subject: CD-ROMs (41 lines) Date: 7 February 1988, 19:21:47 EST X-Humanist: Vol. 1 Num. 676 (676) ---------------------------- From Norman Zacour [The following was sent to me as a private message; I'm submitting it to HUMANIST with the author's blessing and with a few very minor changes. -- W.M.] For what it is worth, I do not use, I have not used in the past, nor shall I ever use in the future, CD-ROMs. When libraries have the machinery installed, why bother duplicating everything at home? I am assuming that the great advantage to the scholar is the rapidity of access of large reference works - dictionaries, concordances, and the like. Have you counted recently the number of such references you consult in a year? How about 15 for a good guess? Is the expenditure worth it? I have at my fingertips dozens of language dictionaries, bibles, bible concordances, and medieval reference works in canon law etc etc that won't get CD-ROMMED in my lifetime. But the real point is that I won't get around to consulting most of them in my lifetime either. Beyond about a dozen helps that I lean on extensively (all of which I have in my office) I consult other such works only very occasionally. [Do you know about] the Domesday project, an extensive project undertaken by the BBC a couple of years ago, now on CD-ROMs? It [is] quite breathtaking, and as an aid to teaching school-children about England through the medium of pictures and graphs it's unbeatable. As an aid to scholarship, it's a bust. Nevertheless, it is possible that many people will buy CD-ROM machines for their home for uses other than scholarship (my colleague John Benton, of Cal Tech, for example - they are wonderful for movies) who would then use them as aids to scholarship also. How extensive that kind of market would be is anybody's guess. From: MCCARTY@UTOREPAS Subject: CD-ROM use (22 lines) Date: 7 February 1988, 19:30:46 EST X-Humanist: Vol. 1 Num. 
677 (677) ---------------------------- From Mark Olsen The Humanities Computing Facility is currently in the process of purchasing two CD-ROM players in order to experiment with the technology and access material that comes online in future. The biggest problem I see is getting university administrations to catch up to the technology. Budget requests etc. take time. The ASU library has a dozen laser disc installations running with PCs devoted to a couple of information services. These are not 4.5 inch disks, but that is only because the services they subscribe to have not converted to CD format. From personal experience, it is almost impossible to get on these systems during weekdays ... they are very popular. Mark From: MCCARTY@UTOREPAS Subject: Who uses CD-ROMs? Date: 7 February 1988, 19:35:26 EST X-Humanist: Vol. 1 Num. 678 (678) ---------------------------- From Sterling Bjorndahl I do. The Institute for Antiquity and Christianity has two Ibycus micros with the TLG texts - and we will get the PHI and CCAT CD's too. I would say there are a half dozen of us who use the CD regularly, with a few more occasional users. Sterling From: MCCARTY@UTOREPAS Subject: CD-ROM use (28 lines) Date: 8 February 1988, 08:58:22 EST X-Humanist: Vol. 1 Num. 679 (679) ---------------------------- From Randall Smith <6500rms@UCSBUXB.BITNET> This is a reply to David Nash's question on behalf of Beryl T. Atkins concerning CD-ROM use. The Classics Department at University of California at Santa Barbara has its own CD-ROM system, and we use it regularly to do text searches on TLG materials. We also plan to obtain Latin texts on CD as soon as they are available, and since we have the equipment, we would be interested in other items, such as journals, book collections, etc., which might become available on CD's, as long as the price is reasonable.
Also, several members of the Classics Department have purchased computers with an eye to purchasing CD units as soon as the necessary CD's containing Greek and Latin text are available for home use. As far as we are concerned, there is plenty of interest in CD's, as long as the price is kept reasonable. Randall M. Smith From: MCCARTY@UTOREPAS Subject: CD-ROM query (63 lines) Date: 8 February 1988, 16:00:17 EST X-Humanist: Vol. 1 Num. 680 (680) ---------------------------- From (M.J. CONNOLLY [Slavic/Eastern]) I speak only from the Macintosh world, where the release of two important products (9 March?) will rapidly change the CD-ROM scene and the size of the prospective audience: CD-ROM drives and driver software, and the new version of HyperCard (to handle CD-ROM files currently inaccessible to version 1.0.1). One reasonably expects the Microsoft Bookshelf to run in the Macintosh environment then (announcement perhaps also to come in early March) and also the OED. Some large corporations, of course, have not waited, and these produce their own drivers and discs for internal use. Databases like 4th Dimension should have no difficulty with CD-ROM, once the appropriate driver is in the System folder. I see a market that will take off very soon. We have been tinkering with CD-ROM for our new Instructional Development Lab, but await the 'official' releases and know that there are a number of vendors out there ready to pull the wraps off once the Apple( (617)552-3912 cnnmj@bcvax3.bitnet From: MCCARTY@UTOREPAS Subject: OS/2 (35 lines) Date: 8 February 1988, 19:15:32 EST X-Humanist: Vol. 1 Num. 681 (681) ---------------------------- From Dan Church Given the fact that most of us don't seem to have regular access to mainframes or advanced workstations, most of the discussion of OS/2 and multi-tasking along the lines of "I can already do that on my... [Fill in the blank with the name of your favorite mini or mainframe.]" appears to me to be beside the major point. 
Even granted that a Macintosh with enough memory and MULTIFINDER can already do most of what we would like to be able to do with OS/2, most of us who use PC's or clones can't afford to junk them and run out to buy an SE. So what about us? I suggest that we start by reading the editorial in the latest (January/February 1988) issue of _Turbo Technix_, the new technical magazine put out by Borland and sent free to anyone who has purchased a Borland product. The editorial by Jeff Duntemann entitled "DOS, The Understood" argues that DOS will outlast OS/2 because a) it can be made to fake most of OS/2's features seamlessly, b) OS/2 was designed primarily for the 80286, a "dead-end processor", c) a 386 machine with DOS and programs such as WINDOWS/386 or PC MOS-386 is already everything OS/2 claims to be, and d) we will never be able to do as much with OS/2 as with DOS because it is designed around a kernel that is a black box highly resistant to probing by hackers. This editorial strikes me as one of the most sensible discussions of the supposed advantages of OS/2 I've read so far. I would have quoted the whole thing for you if it hadn't appeared on the same page as the warning that no part of the magazine may be reproduced without permission. But I'd be willing to bet that you could get a reprint of it by writing to Borland Communications, 4585 Scotts Valley Drive, Scotts Valley, CA 95066, U.S.A. From: MCCARTY@UTOREPAS Subject: documenting texts (36 lines) Date: 8 February 1988, 20:07:06 EST X-Humanist: Vol. 1 Num. 682 (682) ---------------------------- From Lou Burnard 1. Mark Olsen rightly complains that the texts he received from the Text Archive were inadequately documented. Alas, he does not say whether or not he intends, having (presumably) gone to the trouble of identifying what all those mysterious tags actually represent, to pass the information back to us... 2. Such information should (in theory) be available from the depositor of the text. 
In this connexion, may I ask what the general feeling is about publishing names and addresses of depositors? We have this information, necessarily, for all A and X category texts, but it is not in the catalogue, so as to save space. Should it be? Should we also indicate a (possibly many years out of date) contact address for all U category texts? How do actual or potential depositors feel about this? How do actual or potential punters feel? 3. I have just finished a document (about 10 pages) which describes in some detail the various English language dictionaries available from the text archive. Please send a note to archive@uk.ac.oxford.vax if you would like a copy. Lou Burnard Oxford Text Archive P.S. Sorry Rosanne, we're fresh out of Tennessee Williams. Would Tom Stoppard do? From: MCCARTY@UTOREPAS Subject: Dan Brink's tour (18 lines) Date: 8 February 1988, 20:09:41 EST X-Humanist: Vol. 1 Num. 683 (683) ---------------------------- From Dan Brink I am planning an eastern tour to check out computer conferencing systems in early April. NJI, Guelph, UMich are on the tour so far, and maybe NYIT. Any suggestions of other good places to try to visit would be appreciated. *****P L E A S E R E S P O N D T O Dan Brink ATDXB@ASUACAD *****& N O T T O H U M A N I S T From: MCCARTY@UTOREPAS Subject: Producing GML markup with Xedit (34 lines) Date: 8 February 1988, 20:16:44 EST X-Humanist: Vol. 1 Num. 684 (684) ---------------------------- From Michael Sperberg-McQueen In addition to the good suggestions of Allen Renear, it should also be mentioned that Waterloo GML can be modified in the two ways salient for Tom Benson's problem: the tags defined by the U of C style can be defined, as GML tags, and added to the set of GML tags provided by Waterloo, and (if the publisher thinks it important, or the author finds it makes the file easier to read) '<' and '>' can be used as tag delimiters instead of ':' and '.'.
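As a rough sketch of the kind of modification being described (the tag name, the macro body, and the exact control-word operands here are all invented for illustration; the real syntax of .GT, .GA, .DM and .DC should be checked against the Waterloo Script reference manual):

```
.* Hypothetical example: define a new GML tag PUBNOTE and map it
.* to a user-written macro. Operand syntax is illustrative only.
.gt pubnote add pubnote
.dm pubnote /.sk 1/.in +5/
.* Change the GML tag delimiters from ':' and '.' to '<' and '>':
.dc gml <
.dc mcs >
```

With the delimiters so changed, the input file would contain tags like <pubnote> rather than :pubnote., which some publishers and authors find easier to read.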
The advantages of adding new GML tags instead of new Script macros are that you can use existing Waterloo tags and their underlying macros where appropriate, and you can use GML tags in the middle of a line instead of only in column 1, which makes it easier to have a clean, readable input file. The consultants on your CMS system ought, as Allen Renear suggests, to help you with the adaptations. They may, however, need to be told to look at the .GT and .GA control words in the reference manual to see how to define new GML tags and map them either to existing Script macros or to ones you and they define, and to change the GML delimiters with ".DC GML <;.DC MCS >" -- even good Script consultants may not know these ins and outs of GML. Michael Sperberg-McQueen, University of Illinois at Chicago From: MCCARTY@UTOREPAS Subject: CD-ROMS & Players (30 lines) Date: 8 February 1988, 20:22:25 EST X-Humanist: Vol. 1 Num. 685 (685) ---------------------------- From Dr Abigail Ann Young My attitude to this is similar, I think, to what Norman Zacour said earlier. I tend to look on the CD's themselves and the equipment necessary to use them as something a library, rather than an individual, would acquire. I use the Thesaurus Linguae Latinae, and DuCange's Lexicon of Mediaeval Latin, but I don't own them: the Library has them readily available, and if occasionally I have to wait for a few minutes because someone else is using a volume I want to consult, well, it's just not that great a hardship! I also consult the PG and PL of Migne, and the more modern critical editions of ancient and mediaeval church fathers and teachers in the library, except for a few volume(s) of authors I regularly work with.
I can no more imagine buying a CD-ROM of the TLL for myself than I can imagine buying the printed TLL for myself, but I think that that sort of thing ought to be available in the University Library System on CD-ROM as well as in print: it seems to be the librarians' dream medium: no matter how many people use it, the "book" can't be hurt! From: MCCARTY@UTOREPAS Subject: Using CD-ROM for textual research (43 lines) Date: 8 February 1988, 20:25:23 EST X-Humanist: Vol. 1 Num. 686 (686) ---------------------------- From Robin C. Cover In response to the inquiry of David Nash on the use and popularity of CD-ROM's, I would suggest (at least) that he find out how many IBYCUS micro-computers there are in use. Even if used institutionally, they constitute available drives that could be used with other CD-ROM disks. Secondly, I would add that our institution has done some market research concerning the potential popularity of CD-ROM drives *provided that* tantalizing software and databases were available. It was determined that CD-ROM is a viable market (we can get OEM prices for close to $400, and the prices will probably drop). So, we are planning a hypertext CD-ROM product for biblical studies, the first version of which is due (maybe) late this year. In response to Norman Zacour, who says he will never buy a personal CD-ROM unit, and could not really conceive of its use: would you be seduced if we could provide you with original biblical texts linked by hot-key to the major lexica, grammars (etc) together with programs to do interactive concording on the texts, and searching (grep/Boolean) of these texts to boot? Finally, the only dark cloud I see with respect to CD-ROM is the advent of read/write optical-magnetic disk, which already is available. It has 30 millisecond disk access time, which is a considerable improvement over the 500 millisecond time of CD-ROM, which hinders performance.
If these drives drop to within the $1000 range during the next year or so, I think many of us would want to support this medium rather than CD-ROM; the removable (90 megabyte, 650 megabyte) disks would be optimal for our other storage problems as well. Professor Robin C. Cover ZRCC1001@SMUVM1.bitnet From: MCCARTY@UTOREPAS Subject: Text encoding initiative (16 lines) Date: 8 February 1988, 20:27:32 EST X-Humanist: Vol. 1 Num. 687 (687) ---------------------------- From Sebastian Rahtz Nancy Ide's throwaway remark "even an extension of SGML" fills me with horror. Isn't SGML complicated enough for you text-encoding people? Why create something non-standard for humanists, why not go with the crowd NOW? I say good luck to the dictionary chap that wanted to use SGML. Much as I hate S*, it's not that bad. From: MCCARTY@UTOREPAS Subject: CD-ROM and videodisk (26 lines) Date: 8 February 1988, 20:29:17 EST X-Humanist: Vol. 1 Num. 688 (688) ---------------------------- From Randall Jones Norman Zacour's recent note about the Domesday Project has prompted me to offer a clarification concerning a misconception that apparently exists among some of us. The Domesday project is N O T on CD-ROM, rather it is on videodisc, a medium that is similar to CD-ROM only in that it is optical storage that uses laser technology. Videodisc stores analog video images and can also store digital information, but for most applications it is the video material that is important. There are digital motion video programs now available, but they are still quite experimental and very expensive. Randall Jones Humanities Research Center 3060 JKHB Brigham Young University Provo, Utah 84602 From: MCCARTY@UTOREPAS Subject: The Sun also rises on HUMANIST (22 lines) Date: 8 February 1988, 20:42:03 EST X-Humanist: Vol. 1 Num. 689 (689) ---------------------------- From Sebastian Rahtz David Graham feels that Sun users are looking down on him, and urges us not to start a hardware war on HUMANIST. Yes, I agree!
But it's not "evangelizing" to say that multiple tasks in multiple *visible* windows is an excellent working environment. I don't think our Sun is an expensive luxury, any more than an Ibycus would be if we could afford it ..... Why don't you declare the correspondence on multi-tasking over, Willard? Sebastian Rahtz From: MCCARTY@UTOREPAS Subject: PhD exams in computing Date: 8 February 1988, 20:43:34 EST X-Humanist: Vol. 1 Num. 690 (690) ---------------------------- From Sebastian Rahtz I would have thought that the idea of a "computer methodology" was a non-starter; after all, the virtue of the computer is that it is a *general purpose* tool. Could the exam subject not be "a quantitative approach to New Testament studies", making it comparable to "structuralist", "Marxist" etc approaches, if quantification is the issue addressed by these NT 'n' computing courses? We are about to start an MSc course in Archaeological Computing here; the punters will do the programming/database/graphics sorts of things you might expect. I also have a friend whose PhD revolves around a statistical approach to Roman pottery. I'd hate to defend her doing a PhD on "programming in archaeology" though. Sebastian Rahtz From: MCCARTY@UTOREPAS Subject: Relational database software in the public domain Date: 8 February 1988, 20:45:07 EST X-Humanist: Vol. 1 Num. 691 (691) ---------------------------- From Sebastian Rahtz I sympathise with the request for a PD database, but, really, you cannot expect to get EVERYTHING for free! People will recommend PC-FILE (supposed to be good stuff) for a PC, but wouldn't it be worth spending a few 100 [pounds,dollars] on a commercial product with support and a manual, if it's going to be used a lot? Creating SGML-conformant output shouldn't be hard from any reputable database. But if you are on a mainframe, what about Famulus77? It's not PD, but it's cheap; it's not relational, but it would do what you asked for.
Lou Burnard will tell you all about it on request, I am sure. As an example of a commercial product, PC-Ingres cost us 250 pounds for a site license. OK, so it may not be appropriate, but for that kind of money for the whole site, going outside PD isn't impossible. (That was an academic price, mind you!) Sebastian Rahtz From: MCCARTY@UTOREPAS Subject: Texts? Date: 8 February 1988, 20:46:44 EST X-Humanist: Vol. 1 Num. 692 (692) ---------------------------- From Mark Olsen I have a request for the Consolatione de philosophiae by Boethius. The user would be particularly interested in the Loeb edition, (1952?). Any information on this would be greatly appreciated. If we can not find it, we might have to scan it. Thanks Mark From: MCCARTY@UTOREPAS Subject: SGML, markup, and texts (106 lines) Date: 8 February 1988, 20:49:40 EST X-Humanist: Vol. 1 Num. 693 (693) ---------------------------- From Stephen DeRose Well, I've been watching HUMANIST with interest for some time, and I guess it's time to dive in. First, on the issue of data format and headers: SGML provides the features I have so far seen requested on HUMANIST. An SGML file is pure ASCII, and contains text, tags, and entities. Tags are mnemonic names for parts of the text, marked off by angle brackets (e.g., "
<p>" for paragraph). Entities name things that can't otherwise be coded in straight ASCII (perhaps "&aleph;"). That's all there need be to SGML, unless you want to get fancy. A "prolog" in a well-defined format defines the document's particular tag-set, entities, and any non-default syntax. Because it is all printable characters, you don't lose data going through mailers, dealing with IBM mainframes, etc. Because the tags are descriptive rather than procedural, you need not encode the specifics of your word-processor, printer, current style sheet, display characteristics, etc. etc. A block quote is still a block quote regardless of any of these factors. Also, because the tags are mnemonic and pure ASCII, even with *no* word-processor a human can read an SGML file. The objections I hear to SGML are usually: 1) "It doesn't have the tags I need." This shows a widespread misunderstanding of SGML. SGML is not a tag-set at all, but a way of *specifying* tag-sets, entity-names, and their syntax. A well-known tag-set called "AAP" (for it is from the Association of American Publishers) is *one* instance of an SGML-conforming tag-set; but saying it "is" SGML is like saying that a particular user program "is" Pascal. 2) "It takes up too much space." But just about any mnemonic for (say) paragraph is sure to be shorter than 2 RETURNs and 5 spaces, or procedural commands to skip a line and change the indentation level, etc., etc. One can also define abbreviations (say, for "&aleph;"), gaining the brevity of transliteration without losing the other advantages, all within the easy part of the SGML standard. So, for example, if one is doing a lot of Hebrew, one defines a "<hebrew>" tag, within the scope of which a defined transliteration scheme is used. 3) "Typing pointy brackets and mnemonics is a pain." SGML says nothing whatsoever about what you have to type.
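To illustrate points (1) and (2), a tiny fragment might look like the following; the tag names, the entity, and the transliteration scheme are all invented for this example (they are not the AAP tag-set), merely to show the flavour of a descriptive, pure-ASCII file:

```
<!-- hypothetical tags; a document prolog would declare them -->
<quote>
<p>The first letter of the Hebrew alphabet is &aleph;.
<p>Inside the scope of the next tag, a declared transliteration
scheme applies: <hebrew>br'shyt br' 'lhym</hebrew>
</quote>
```

Even with no SGML software at hand, a reader can see what is quotation, what is paragraph, and where the transliterated Hebrew begins and ends.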
Any word-processor with "style sheets" at least allows SGML-like mnemonic descriptors -- and how you specify them is as varied as the programs themselves. Also, it seems obvious that even *typing* a mnemonic is less pain than the usual task of translating the mnemonic into a long series of formatting codes which are specific to some particular word processor. 4) "Slashes (or whatever) are better than pointy brackets." This is of course insignificant. One can change the default, but in any case the choice of tag punctuation is a trivial matter. Globally changing "
<p>" to ":p." is a problem of a very different magnitude from locating all paragraphs given only a file with miscellaneous carriage returns and spaces, some of which sometimes happen to mark paragraphs. It's the difference between artificial intelligence and a text-editor "change" command. 5) "SGML isn't WYSIWYG." This is simply false; just as with typing, the display can be anything. MS Word using style sheets (which is a very poor but real example of a descriptive tagging system) is no less "WYSIWYG" than MS Word using (shudder!) rote formatting all the time. Of course, the true, ultimate "WYSIWYG" word-processor is the pencil. 6) "SGML isn't efficient enough for purpose X." Usually, X is some specialized kind of information retrieval. One must consider Fisher's Fundamental Theorem from Genetics: "the better adapted a system is to a particular environment, the less adaptable it is to new environments." To draw an analogy from my own domain, one can always design a specialized grammatical theory for a single language, which is more effective for that language than any of the general theories. But linguists are trained to avoid this, because such analysis usually contributes nothing to the work of students of other languages. It is true also in Computer Science: if one optimizes a program for one machine/language/task, it will be vastly more difficult to adapt it for a new or extended one. An SGML file can be trivially converted to other forms for special purposes. Consider that the SGML version of the entire OED can be searched in a small number of seconds. On another topic, it's interesting to watch the multi-tasking debate. There is so much about OS/2 and Windows. Discussions with Microsoft indicate it has little consciousness of the problems of writing systems. Even accents are not handled adequately.
Someone called the Mac "silly" -- that's fine for him, but since I can do almost everything I want (and almost everything I have heard Humanists and HUMANISTs express desire for) in a multitude of languages, off-the-shelf, with *any* monitor and *any* video card, with standard commercial software on the Mac, in an interface style that IBM is working hard to copy, I'm willing to use a "silly" machine. Steven J. DeRose Brown University and the Summer Institute of Linguistics D106GFS at UTARLVM1.bitnet From: MCCARTY@UTOREPAS Subject: CD Rom caveat Date: 8 February 1988, 20:53:21 EST X-Humanist: Vol. 1 Num. 694 (694) ---------------------------- From Sterling Bjorndahl M.J. Connolly's enthusiastic response about cd/rom, cd/i, etc., reminded me of a caveat. I raised this on humanist some time ago already and aroused zero response. Warning: readable/writeable optical media are being developed. Some of these use laser-magnetic technology; others use laser-phase change technology, and there are yet other technologies being investigated. My own feeling is that this will send the read-only media the way of the eight-track audio cartridge. ROM works fine for the short and medium term, and I'm very glad I have access to it, and I expect to see more of it now that it is a practical and functioning technology. However, if I were a major commercial publisher I would think twice about investing my own money *heavily* in the read only technologies. If the hardware developers can get the read/write heads to move fast enough (for I read that this is a major design problem at this point), we may all have 500MB drives hooked up to our micros as a matter of course, and these may be as easy to use as modern floppies (if I understand the technology correctly). At that point we won't need or want the ROM devices, unless perhaps we are running a text archive.
Sterling Bjorndahl BJORNDAS@CLARGRAD.BITNET From: MCCARTY@UTOREPAS Subject: Seductive biblical hot keys for that 5% (27 lines) Date: 9 February 1988, 00:03:55 EST X-Humanist: Vol. 1 Num. 695 (695) ---------------------------- From Norman Zacour To Robin Cover and his seductive biblical hot keys I can only respond with Luke 4:5-8. With a CD Rom he should have no trouble finding it. Seriously, however, the technology available for rapid and thorough consultation of reference works is quite admirable, and will become more so; its role, however, is the role that the indices of scriptoria, archives, and libraries played in the past. They are really institutional in nature, useful - indeed essential - in their place. But since I spend about 95% of my working time reading, thinking, writing and swearing, and only 5% (if indeed that) looking things up, I cannot get excited about moving a proxy library into my apartment. I think that what I'm talking about is a sense of proportion. I also have a sneaking suspicion, somewhat confirmed by Cover's last paragraph, that the latest obsession can quickly become the latest obsolescence, unavoidable in this day and age, perhaps, but preferably to be borne at the institutional level. From: MCCARTY@UTOREPAS Subject: SGML/AAP tag text processing today (36 lines) Date: 9 February 1988, 00:09:24 EST X-Humanist: Vol. 1 Num. 696 (696) ---------------------------- From Allen H. Renear Michael Sperberg-McQueen's approach to defining SGML/AAP tags in Script GML is the correct one of course. Within seconds of posting my note I realized what an embarrassing kludge I was about to exhibit to the world and fired off notes to McCarty (to excise the offending bits) and Benson (to keep him on the right path). But, alas, CORNELLC went down and my northward mail queued up &c. &c. Anyway, both Script and GML allow the delimiters -- both beginning and ending -- to be reassigned; and tailoring the GML delimiters, as Michael noted, makes the most sense.
*If* you feel you really must change delimiters at all. As Sperberg-McQueen hints delimiters are trivial; stick with the sensible ":" and "." of GML and just define AAP GML tags. If your publisher says they *have* to be "<>"s you can change them at the end. There are some general morals here though. One of them is that SGML/AAP style text processing is indeed possible today, apart from any special SGML software. And it can be supported by powerful formatters and programmable editors. Another is that using SGML/AAP style tags is easy, in fact, nothing could be easier. Of course we do want software that will support our tag based text processing more actively than general editors and formatters do. And that's coming. Consider, again, Softquad's Author/Editor -- it creates an SGML/AAP file, but as you simply choose document components from a context sensitive menu (it shows you only the relevant components for that point in the document) not only do you not bother with delimiters, you don't even bother with tags per se -- that's all handled by the editor. This is the sort of stuff we can hope to see more of. From: MCCARTY@UTOREPAS Subject: Zacour's sense of proportion (33 lines) Date: 9 February 1988, 00:20:16 EST X-Humanist: Vol. 1 Num. 697 (697) Norman Zacour has responded with a certain lack of enthusiasm to the various comments about CD-ROMs and what they offer. I also spend a great deal of time thinking, swearing, etc., and less time looking things up, but I think my percentages are not quite his. Having spent the last 3 years or so tinkering with database software and learning to depend on it for gathering, arranging, and retrieving the textual evidence I use, I am less resistant to the vision (from "an high mountain" to be sure, but I smell no sulphur) of wonderfully vast amounts of source material. At the fingertips, in one's own study, this material will tend to be used much more than if it's only in the Library. 
Perhaps that's good, perhaps not; anyway, for the kind of work I do, the easier the sources are to get to, the better. Zacour has a more basic point with which I have no trouble whatsoever. Forgive me, but I sometimes, in some contexts, wonder what happened to the "humanities" in "humanities computing." I am reminded of our ancient colleague Archimedes, who is supposed to have said that if he were given a place to stand and a lever long enough, he would be able to move the world. I suppose that he would have, too. Where would we be now? Spinning beyond the orbit of Pluto? Willard McCarty mccarty@utorepas.bitnet From: MCCARTY@UTOREPAS Subject: CD-ROMS and what not (64 lines) Date: 9 February 1988, 09:42:19 EST X-Humanist: Vol. 1 Num. 698 (698) In response to Robin Cover's posting, asking whether we would be tempted to buy a CD-ROM given advanced hypertext systems, let me point out that these systems are still vaporware! And when they do come out, most will not even be able to act like a simple concordance, allowing you to look up things by root (or some generalized dictionary entry). Most will not even allow a sophisticated pattern-matching set (say regular expressions). I guess I should let everyone in on the fact that we have been corresponding privately about this, and that the system you are working on actually has these capabilities in the works. I wish you the best of luck in your endeavors. When I begin to see electronic products that can offer me keyword searches, regular expressions, and textual variants, I most certainly will purchase them! Until then, I'll have to keep the hardcopy at hand constantly anyway (hard to use the LXX, BHK, etc. with no textual apparatus), so the expense is hard for me to justify, given my graduate student's budget! 
Let me point out that, even if such systems do not rival their hardcopy counterparts in comprehensiveness, those who can afford them will probably find them useful for browsing, or for quick location of scattered references in various texts. One thing that is troublesome about hardcopy is that it takes oodles of time to flip through several texts at once, trying to locate little things here and there, all the while staying parallel in each work (this is often the case in biblical work, where one has the original, and several of the versions lying out on one's desk at once, not to mention reference works and commentaries). I believe that some will find the computer something of a time-saver in cases like this. In all, though, I must agree with Norman Zacour, who notes that most of one's time is spent flipping through mental pages - not actual books. I might also point out that at this point in my life - word processing excepted - computers have probably cost me more time than they've saved. I've spent a lot of time learning MS-DOS, UNIX, and some programming languages. I've also spent a long time developing a few programs that really aren't terribly sophisticated by commercial standards (the economic realities here dictate that I do other things than program all day). Worst of all, I've spent countless hours learning individual programs, from the tiniest utility to the biggest applications programs - all of which are constantly growing, mutating, maturing, and dying. It is an incredible time investment, and so far it has cost me far more than it's paid off. For me to take the plunge for new software right now, it's going to have to be pretty slick, pretty easy to use, pretty reliable, and supremely useful, not to mention stable in its interface, and permanent in its availability! The kinds of projects you are working on have as good a chance of fitting this bill as any other I've heard of. Again, good luck. -Richard L.
Goerwitz goer@sophist.uchicago.edu !ihnp4!gargoyle!sophist!goer From: MCCARTY@UTOREPAS Subject: CD-ROM use (32 lines) Date: 9 February 1988, 11:17:20 EST X-Humanist: Vol. 1 Num. 699 (699) ---------------------------- From Keith W. Whitelam We are in the process of ordering a CD-ROM reader so that the technology can be assessed. The hope is to equip a micro lab with readers for use by Arts departments in text analysis. Departmental funds are scarce, particularly in the Arts, so Computing Science are going to provide the hardware and help in the assessment. Incidentally, does anyone know if it is possible to network a CD-ROM reader so that the texts can be accessed by more than one micro, or are we faced with the problem of providing readers for all the machines? The hypertext CD-ROM for biblical studies, mentioned by Robin Cover, is precisely the kind of development that we are looking for. As a biblical specialist, such a development offers immense possibilities for research. The great advantage of CD-ROMS is surely not simply to look up a few passages but to provide a large text database for searches, and hence analysis, of a particular text or texts. The caveat introduced by Sterling Bjorndahl concerning a breakthrough in read/write optical disks is a major problem. With limited resources, do we await the breakthrough, in technology and more importantly pricing, or do we provide a research and teaching facility based on CD-ROMS? As a footnote, do IBYCUS market their micro in Britain? From: MCCARTY@UTOREPAS Subject: Are CDROMs all that marvellous? (43 lines) Date: 9 February 1988, 11:45:12 EST X-Humanist: Vol. 1 Num. 700 (700) ---------------------------- From Susan Hockey Before we get too excited about the prospect of CDROMs all over the place, shouldn't we be asking what we can do with them, or (to put it the right way round) can they answer the questions which we want to ask?
There are two approaches to CDROMs for text: (1) The TLG and the planned PHI CDROMs which are intended for use with Ibycus. These contain only sequential text files - the apparent speed of searching on Ibycus is because of a hardware chip between the disk and the CPU which acts as a filter and only passes to the CPU hit records. Any other software which reads this disk on a PC is bound to be much slower, probably too slow for anybody to want to use interactively on anything but a short text. (2) Everybody else's which use indexes for searching, and are supplied to the user with packaged software for their use. For these it is in the supplier's interest not to let the user reproduce the basic text. To speed up access times these systems often hold the indexes on more conventional disk. Most CDROMs which are available now in this category contain bibliographic data, but there are plans for others holding text. Therefore it seems to me that (1) can only answer questions which are defined as a sequential search (I admit it does this very well) and (2) can only answer questions which somebody else (i.e. the compiler of the indexes) has decided need to be answered. Neither (1) nor (2) address the problem of retrieving too much information (for human digestion) other than at the level of collocations, nor do they provide the user with much opportunity to do any further analysis with the text using other software. Apart from one or two attempts to index the TLG material I don't know of any CDROMs for textual scholarly use apart from the ones intended for Ibycus. (I don't count the OED CDROM here.) I would like to know what use HUMANISTS want to make of CDROMs. I don't want to hear how marvellous it is have all this text available. I want to hear what specific questions can be answered now and what scholarly activities can be aided by CDROMs. 
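[An editorial aside: Hockey's two access models can be sketched in miniature. The Python below is a modern, purely hypothetical illustration (invented data and names; nothing resembling the actual Ibycus hardware filter or any supplier's index format): model (1) scans every record and so can answer arbitrary queries at a cost proportional to the whole corpus, while model (2) consults a precompiled index that is fast but can only answer the questions its compiler anticipated.]

```python
# Toy contrast of the two CD-ROM access models described above.
# Hypothetical data and names; nothing here reflects any real disk layout.

records = [
    "menin aeide thea",
    "andra moi ennepe mousa",
    "thea glaukopis athene",
]

def sequential_search(records, term):
    """Model (1): scan every record, passing through only the hits.
    Answers arbitrary (even substring) queries, but touches the whole corpus."""
    return [i for i, r in enumerate(records) if term in r]

def build_word_index(records):
    """Model (2): a supplier-built inverted index of whole words,
    compiled once, shipped with the disk."""
    index = {}
    for i, r in enumerate(records):
        for word in r.split():
            index.setdefault(word, []).append(i)
    return index

index = build_word_index(records)

# The index answers whole-word lookups instantly...
assert index["thea"] == [0, 2]
# ...but a question it was not built for (a substring) needs the scan.
assert sequential_search(records, "ennep") == [1]
```

The trade-off is exactly the one Hockey names: (1) is general but slow without special hardware, (2) is fast but frozen at indexing time.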
I also want to hear what questions people would like to ask which can't be answered now and whether they think existing CDROM systems could answer them. Susan Hockey SUSAN@VAX.OX.AC.UK From: MCCARTY@UTOREPAS Subject: Teleconferencing at Rochester Institute of Technology (204 ll.) Date: 9 February 1988, 16:09:45 EST X-Humanist: Vol. 1 Num. 701 (701) ---------------------------- From Doug Hawthorne REPLY TO 02/09/88 08:31 FROM UNKNOWN: In a remarkable coincidence I received the following, fairly lengthy description of the use of teleconferencing in a history course at Rochester Institute of Technology just before reading Dan Brink's query. Other readers of HUMANIST may find in the RIT experience some ideas applicable to their own teaching. Doug Hawthorne From the Handicap Digest # 235: Written by: patth@dasys1 (Patt Haring) Modern American History on VAX Notes Computer Conferencing in RIT Classes by Professor Norman Coombs Professor of history Rochester Institute of Technology 1 Lomb Memorial Drive Rochester N. Y. 14607 Bitnet address: bitnet%"nrcgsh@ritvaxd" At Rochester Institute of Technology (RIT), a truly modern version of Modern American history is being taught with VAX Notes, Digital's new electronic conferencing package. This class is part of an on-going experiment using a computer conference to replace the standard classroom lecture/discussion format. Results have been extremely positive to date. Using VAX Notes, professors and students have the opportunity to transcend the boundaries of time and space. Since no one has to be at the same place at the same time to participate in the conference, VAX Notes provides a maximum of schedule flexibility for everyone concerned. This approach is particularly useful for off-campus students trying to balance busy schedules that include work, family and school. VAX Notes is also a convenient and easy conference program to use even for professors and students who have very limited computer experience. 
VAX Notes is a software package that is compatible with the VMS software environment and works with standard editors, such as EDT, EVE and WPS. It can be called from the ALL-IN-1 Office and Information System menu and is available on all Digital VAX systems, from MicroVAX to high-end VAX computers. A VAX Notes conference is overseen by a moderator (in this case, the class professor), who posts a variety of topics within a particular conference. At RIT the data is entered on the professor's Apple II Plus via modem. Students can enter their responses on a variety of terminals, personal workstations and PCs located in labs in class buildings or dorms or, in the case of a PC, at home, and the response to each topic is automatically attached to that subject. This allows several discussions to be held simultaneously within a conference. Each topic and response is assigned a title and a number, allowing the user to follow each discussion in a logical and orderly fashion. VAX Notes keeps a notebook in the user's main directory that tracks which topics and responses a user has read. Each time the user participates in the conference, VAX Notes automatically begins at the first item which has been added since the user last took part. This allows the participant to keep up with the discussion without having to remember which notes had already been viewed and without having to find his or her place. The Modern American History course was structured so students gained information from textbook readings and from watching video tapes. These were available in the library where students could use them at their own convenience. The VAX Notes conference took the place of a classroom discussion on these materials. Each week I posted a set of three to five topics on the current material, consisting of several questions. 
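[An editorial aside: the bookkeeping Coombs describes - a per-user notebook recording what has been read, so each session resumes at the first unseen item - is a simple data structure. The Python below is a modern sketch with invented names, not the VAX Notes implementation: the notebook remembers, per topic, how many responses the user has already seen.]

```python
# Toy model of a conference notebook: for each topic, remember how many
# responses the user has seen, and resume at the first unseen one.
# Invented names; a sketch of the idea, not VAX Notes itself.

class Notebook:
    def __init__(self):
        self.seen = {}  # topic title -> number of responses already read

    def unread(self, conference):
        """Return (topic, response) pairs added since the last session."""
        new_items = []
        for topic, responses in conference.items():
            start = self.seen.get(topic, 0)
            for reply in responses[start:]:
                new_items.append((topic, reply))
            self.seen[topic] = len(responses)
        return new_items

conference = {"Topic 1: New Deal": ["reply A", "reply B"]}
nb = Notebook()
assert nb.unread(conference) == [("Topic 1: New Deal", "reply A"),
                                 ("Topic 1: New Deal", "reply B")]

# A later session picks up only what was added in between.
conference["Topic 1: New Deal"].append("reply C")
assert nb.unread(conference) == [("Topic 1: New Deal", "reply C")]
```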
Students logged on to VAX Notes and attached their responses electronically to the relevant topic by inputting them on terminals located either on campus or at home. I checked the postings for new entries several times each day and added comments of my own when appropriate. There were two sections of Modern American History given in the Summer and Fall semesters. The material in each was identical and both sections took the same objective exams. The traditional class met twice weekly for discussions with the professor and served as a control group for the experiment. Both in the Summer and Fall, the students in the computer conferencing section scored a higher mean grade than those in the traditional class. In the Fall, the control group scored a mean grade of 78.7 while the computer students averaged 82.0. This probably does not mean that computer conferences are better than class discussion. Rather, the use of the computer frightened away the below average student. Also the computer section did have a grade point average slightly above that of the control class. Students and professors also used electronic mail (VAXmail) extensively. The professor and some students felt that there was more than the usual interaction between professor and student. It is unusual for a professor to single out a student in public and call him in for a discussion, but VAXmail made it easy to develop one-to-one conversations quite frequently. One student remarked that, as a result of being able to use this facility, this professor was the most helpful she had encountered at college. The questionnaire mentioned previously asked the students to rate professor helpfulness and availability compared to normal class settings. The computer students ranked that item 4.8 out of 5 and also scored electronic mail as the major factor in that process. Electronic mail has been of especial aid to me because I am blind. 
Students use electronic mail to submit written materials to me; this replaces the need for human readers. Finding the computer a useful communication tool made it easy for me to envision its possibility in teaching. Because the project ran on a minimal budget, the equipment used, an old-fashioned Apple II Plus with an Echo II speech synthesizer, could not emulate a VT100 terminal. This did not interfere with the use of VAX Notes, but it has limited the use of some of the system's possibilities. Obviously upgrading the access equipment would open even more computing potentials. During the Winter quarter of 1986-87, Mr. Stanley Bissell used VAX Notes in a telecourse on microcomputers. This will open new avenues of communication with distance learners. Plans are also underway to adapt the Modern American history course for a class of deaf students. The National Technical Institute for the Deaf (NTID) is on the RIT campus. I plan to use captioned television, textbooks and the VAX Notes program on the computer to work directly with the NTID students. This will remove the need for interpreters, note takers and tutors and bring teacher and student closer together. Not only can computer conferencing bridge the gap between the teacher and distance learner, it can transcend the gulf of physical handicaps in the teaching process. RIT Sidebar Founded in 1829, Rochester Institute of Technology (RIT) is a privately endowed, co-educational university that enrolls about 14,000 full- and part-time students in its undergraduate and graduate programs. RIT's modern campus occupies 1,300 acres in suburban Rochester, N.Y. There are nine colleges at RIT. Its primary emphasis is on career education. A pioneer in cooperative education, RIT offers programs that allow students to alternate academic course work with paid experience in business and industry. With a program that began in 1912, RIT has the fourth oldest and fourth largest cooperative education program in the nation. 
In addition to the traditional curriculum, RIT offers world renowned programs at the School for American Craftsmen and the National Technical Institute for the Deaf. As part of RIT's emphasis on career education, the Institute believes that the availability of computing resources is critical to the education of students, the instructional and research efforts of faculty, and effective administration. Computing resources rely heavily on Digital Equipment Corp. equipment. Through its department of Information Systems and Computing (ISC), the Institute maintains relatively open access to computing facilities for all students, faculty and staff. All RIT students, regardless of major, must demonstrate computer literacy before graduation. ISC has a VAX cluster with a VAX 8650 computer and three VAX-11/785 systems running the VMS operating system. Among others, we use APL, COBOL, LISP, PASCAL, BASIC, DSM (MUMPS), PL/I, C, Fortran, MACRO ASSEMBLER, ALL-IN-1, VAX NOTES, C.A.S., DAL, DECgraph, GKS and ReGis. ISC also has a VAX-11/785 computer which runs ULTRIX. The College of Engineering has a VAX-11/782 system, and NTID has a VAX-11/750 computer. The Center for Imaging Science has a VAX 8200 computer and a GPX II system (a color MicroVAX graphics workstation). The Computer Science department has a VAX-11/780 computer running ULTRIX and two VAX-11/780 computers running Berkeley 4.2 UNIX. The American Video Institute has a VAX-11/780 computer and several Pro IVIS videodisk systems. All of these systems and several others are linked via two Ethernet networks which are bridged together, an Infotron IS4000 data switch and most recently with an AT&T ISN data switch. ISC has about 300 Digital terminals, including GIGI, VT220, VT240 and VT241, located in five major user centers distributed around campus. There are several hundred other terminals and microcomputers which access our system both on campus and off-campus. 
They include approximately 800 Rainbow personal workstations, 30 Pro 350 systems, 25 DECmate personal workstations, and IBM PCs, Macintoshes, Apple IIs and others. Patt Haring UUCP: ..cmcl2!phri!dasys1!patth Big Electric Cat Compu$erve: 76566,2510 New York, NY, USA MCI Mail: Patt Haring; GEnie-PHaring (212) 879-9031 FidoNet Mail: 1:107/701 or 107/222 From: MCCARTY@UTOREPAS Subject: Text Encoding Initiative (22 lines) Date: 9 February 1988, 16:11:45 EST X-Humanist: Vol. 1 Num. 702 (702) ---------------------------- From Nancy Ide Sebastian, We certainly hope that the set of specific tags that we develop for use in encoding machine readable texts intended for literary and linguistic research does not *extend* SGML. I may have been inaccurate in what I said on HUMANIST, since we have every expectation that the tag sets we develop will, like the AAP tag set for electronic manuscript markup, be an *application* of SGML. But without beginning the actual research, we cannot yet be sure that SGML will serve all of our needs (although we expect it to--the argument about SGML's adequacy has already gone around once on HUMANIST). Nancy Ide ide@vassar From: MCCARTY@UTOREPAS Subject: TVedit and how-far-have-we-come (52 lines) Date: 9 February 1988, 16:16:38 EST X-Humanist: Vol. 1 Num. 703 (703) ---------------------------- From Michael Sperberg-McQueen A recent inquiry on Humanist concerned the program TVedit, reportedly used by Douglas R. Hofstadter in writing and typesetting his book Goedel Escher Bach. I had never heard of this, but by chance ran into a reference to it yesterday. In their survey (1971) of early interactive editors, Andries van Dam and David E. Rice describe TVedit as "one of the earliest (1965) time-sharing, CRT-based text editors." 
It was developed at Stanford University, and appears from the description to be what we would now call a full-screen editor, more restricted in its command language but more sophisticated in its user interface than Stanford's later editor Wylbur (itself a very good program, the best line editor I've ever used), which did not acquire full-screen capacities until much later. No details are given but the program appears to have run under a time-sharing system called Thor, about which I know nothing. The article also contains a brief description of what must be the first implementation of hypertext on a machine (the "Hypertext Editing System" at Brown) and the earliest hierarchically structured editor I've encountered, used both for outline processing and as a syntax-directed editor for PL/I and other programming languages ("Emily," developed at Argonne National Labs by a Stanford grad student, and named for Emily Dickinson). The amateurs of history among us will be amused or intrigued to read of editors "supplied ... as part of a time-sharing system for those few lucky enough to have access to one." Can it be so recently that even mainframe time-sharing was so rare? And who among Bitnet users can remain unmoved by the authors' closing evocation of a day when, they hope, manuscripts will not need to be rekeyboarded every time authors jump from one system to another, when "files can be cooperatively worked on in real-time, regardless of geographic location or editing system, and still at reasonable costs." Let us pause for a moment of silence and all give thanks for the networks, which have given us facilities that only sixteen years ago were a visionary dream. Michael Sperberg-McQueen, University of Illinois at Chicago From: MCCARTY@UTOREPAS Subject: Multitasking, Desqview, Windows (62 lines) Date: 9 February 1988, 16:20:31 EST X-Humanist: Vol. 1 Num. 704 (704) ---------------------------- From Mike Stairs It is time for me to join the fray! 
Up to this time I have been quietly reading Humanist without actively participating. My office is next to Willard's so I figured he could respond for the CCH as a whole. Willard asked about Deskview and Windows etc. I just acquired both packages along with a mouse and 4 meg extended memory and 1 meg expanded memory. Both these packages make using PCs almost bearable (I am hoping to get a 386 machine very soon). Though some of my computing needs may differ from the average humanist I have some idea of what they probably need and also what they think they need. There seems to be a general fear of new technology and a desire to make do with what is presently available rather than venture into the *new wave*. This is a waste of computing power that is there for YOU if you dare! I came from the Computer Science department to this position but have a strong background in philosophy. I would argue that machines could never think but that they are tools for achieving that which was previously impossible or very time consuming. I don't want to start a discussion on the first claim but the second point is important. If Humanists are afraid of available tools they are losing their own very valuable time. Why do a concordance that takes 5 hours when with a faster machine it may take 1 hour? It seems a false economy indeed if cost is the motivating factor. Analysis that would not be undertaken with a slow machine could now be possible. I'm not advocating that everyone rush out and buy the fastest machine on the market today but people should try not to be so apprehensive of the changing technology. All this said, what are the present alternatives to DOS? Both DeskView and Windows are superior to DOS standing alone. They are not replacements to DOS but rather enhancements. They allow you to use a mouse, for this reason alone you should be convinced. The use of a mouse is a great time saver. But this is just a bonus, the true joy of these systems is their multitasking ability. 
Anyone who has used TeX will be happy that they can have their favorite word processor, previewer and TeX all running at the same time. You can use the spooler and work on something else too. You could even be downloading a huge document with your modem at the same time. It is like having a room full of computers all accessing the same files, but capable of doing completely different tasks at the same time. You could even view multiple tasks on the same screen. OS/2 will not be able to do this with DOS tasks so there is no point in using it. You are not forced to buy (expensive) expanded memory to use DeskView. It can swap programs on and off your hard disks as they become activated. It is like a large ramdisk (though it is a bit slower). I realize that I have rambled on a bit too much. The point is that it is frustrating to see people afraid of new technology which could save them a lot of time and work in the long run. I guess the question is whether it is worth the effort to keep up. If the answer is "no" there will be many important answers that Humanists are searching for that will never be found. I know my time is too valuable to waste it when there are alternatives to the present methodologies. Mike Stairs Site Coordinator Centre for Computing in the Humanities University of Toronto From: MCCARTY@UTOREPAS Subject: CD-ROM / SGML (36 lines) Date: 9 February 1988, 20:08:03 EST X-Humanist: Vol. 1 Num. 705 (705) ---------------------------- From David Nash Last night here I attended a talk by Bill Gates (founder etc. of the MicroSoft corporation) at the monthly meeting of the Macintosh group of the BCS (Boston Computer Society). Followers of recent threads on this group may be interested to note that the up-beat note on which he ended his hour was an assured expression of confidence that SGML-based documents on CD-ROM are about to appear in great numbers. 
He cited examples outside academia, such as large manuals, parts catalogues, and didn't even mention MicroSoft Bookshelf (the CD-ROM now available with a dictionary, Zip Code directory, almanac, etc.) He noted that the people at Boeing regard the paper copies of the full manuals for their large aircraft as rare immoveable objects (which are consulted in some kind of rack, as I read his gesture). He said that US Government publications are now prepared in SGML, and predicted (?) that in a few years the machine-readable versions of (most?) US Government documents would be more readily available than printed versions. I noted that he didn't explain "CD-ROM", but felt he had to expand on what "SGML" is. If you believe what he was saying, it sounds as if CD-ROM readers will soon be around as much as, say, microfiche readers. -DGN From: MCCARTY@UTOREPAS Subject: Early editors (40 lines) Date: 9 February 1988, 20:09:18 EST X-Humanist: Vol. 1 Num. 706 (706) ---------------------------- From David Durand If we're going to talk about the history of editing and advanced text-handling on computers I have to put in a plug for Douglas Engelbart's amazing work. He implemented the first integrated office with central file handling, outlining, text-handling + arbitrary cross referencing in the early 60's. I'm not sure when the first version came online, but I think it was before 1965. A few articles: still very worthwhile reading, as the fundamental issues have not changed since then, though the technology certainly has: ::BOOKLONG: ::CONFPROC: There are a number of later ones not included here. Citations taken from Paul Kahn's distribution of the IRIS Hypermedia Bibliography which was started by Steven Drucker and is now maintained by Nicole Yankelovitch. From: MCCARTY@UTOREPAS Subject: Nuancing Susan Hockey on CD-ROMs (89 lines) Date: 10 February 1988, 09:12:53 EST X-Humanist: Vol. 1 Num. 
707 (707) ---------------------------- From Bob Kraft There is much to say about the current CD-ROM discussion, but for the moment I would like to nuance some of the information in Susan Hockey's recent contribution, to help avoid possible misunderstanding on a couple of details: (1) Currently three different TLG CD-ROMs have been produced, dubbed "A", "B", and "C". The TLG "A" disk was issued in the summer/fall of 1985 in a provisional, pioneering (pre High Sierra) format that simultaneously was made available on the new IBYCUS Scholarly Computer. Subsequently, CCAT (with Packard Foundation funding) developed experimental prototype programs to access the "A" disk from IBM type machines. The TLG "B" disk appeared soon after, prepared as a version of the "A" materials (or at least of the Greek texts on the "A" disk, which also had some Latin and other texts) with lexical indices by Brown University (Paul Kahn) in cooperation with Harvard (Gregory Crane). This disk does not run on IBYCUS, and ran very fast on the machines for which it was intended (initially an IBM RT, I think). The software was adapted for an IBM PC environment by Randall Smith, and, I think, for an Apple Macintosh environment as well, although I am sketchy on the exact details. Gregory Crane now uses it on a Mac, if I am not mistaken. The TLG CD-ROM "B" uses a format developed at MIT, also pre-High Sierra. TLG CD-ROM "C" has been mastered and is about to be released. It uses the (provisional) High Sierra format and can, at a very rudimentary level, be accessed by any machine equipped to read that format; CCAT is developing software to use the "C" disk from IBM type machines (employing the recently released DOS Extension), and IBYCUS has redesigned its access program to read the "C" disk. 
Complications are caused by the internal ID-locator formatting of the materials on the disk, which must be decoded for ease and efficiency of access, but there is nothing peculiarly "IBYCUS oriented" about this disk. The TLG "C" disk also includes indexing at some level, although the IBYCUS access program does not make use of this feature. (2) A "sister CD-ROM" to the TLG "C" disk has been prepared by the Packard Humanities Institute (PHI) in cooperation with CCAT. For details see OFFLINE 17. This disk is intended to be in the same "High Sierra" format as the TLG "C" disk, and to employ the TLG "Beta coding" for ID-locators. The IBYCUS software can read the PHI/CCAT CD-ROM, which has been mastered and is about to be released for distribution. CCAT also has "experimental" software for reading this disk from IBM type machines. This disk does not contain indices. (3) It is not difficult to treat the CD-ROM as a source from which to offload particular texts or portions of texts, so that a user's favorite software can then be applied to the offloaded material. This is true on IBYCUS or on IBM, and is perhaps the most obvious use of a CD-ROM for the "average" user who is not yet in a position to make more sophisticated direct use of the materials. For people who want to have access to large bodies of texts, even if only for offloading, the economic and storage advantages are tremendous. I can give concrete figures, as examples, if anyone cares. Finally, to Susan's question -- what research benefit does my access to CD-ROM (via IBYCUS, I confess) offer? Apart from the obvious searching and selecting, which can include searching the entire corpus of available Greek (or Latin, etc.) 
material for a given word, phrase, combination of words, etc., I use the CD-ROMs as a reference point for quality control (correction of errors; no small matter, at this point!), as a convenient source for offloading (no need to hunt for the tape, mount it, etc.), and as a gigantic pool in which to hunt for identifying papyrus scraps, unknown quotations/allusions, etc. (a special kind of searching). Basically, they are at present for me repositories and bases for excerpting and sophisticated index-type searches. I have not yet tried to do anything imaginative with the variety of non-linear or extended-linear files on the PHI/CCAT disk -- e.g. morphological analysis, parallel texts, text with variants, word lists. But they are there for the experimenting! Bob Kraft, CCAT From: MCCARTY@UTOREPAS Subject: IBYCUS in Europe (17 lines) Date: 10 February 1988, 09:16:55 EST X-Humanist: Vol. 1 Num. 708 (708) ------------------ From Bob Kraft In response to Keith Whitelam's footnote, it is my understanding that John Dawson at Cambridge has agreed to act as a representative of IBYCUS for European (or at least British) distribution. Bob Kraft, CCAT From: MCCARTY@UTOREPAS Subject: Re: Are CDROMs all that marvellous (43 lines) Date: 10 February 1988, 09:18:03 EST X-Humanist: Vol. 1 Num. 709 (709) ------------------------- From Charles Faulhaber Scholarly activities aided by CD-ROMS: 1) Textual criticism. To be able to check the usus scribendi of an author, an age, or an entire corpus would be an enormous boon for the establishment of texts. It simply cannot be done now except in the very simplest of cases. 2) Historical linguistics: First attestations of word usage are important for lexical studies of all sorts, but so are last attestations. At this point it is impossible to say "This is the last time word x is used." With the TLG we can say this with a very high degree of certainty. I'm with the CD-ROM people (but it doesn't have to be CD-ROM). 
Access in any reasonable fashion to the entire machine-readable corpus of a given literature will revolutionize the study of that literature. I would like to see some thought given to software for semantic access, probably some sort of thesaurus approach. This may already exist, but my ignorance is vast .... From: MCCARTY@UTOREPAS Subject: CD-ROMs (51 lines) Date: 10 February 1988, 09:25:46 EST X-Humanist: Vol. 1 Num. 710 (710) ------------------------- From ghb@ecsvax (George Brett) Hullo from mid-south. My work with micro concordances is minimal... I have watched Randall Jones do a presentation of BYU Concordance. I may be off course, so please accept my apologies early on. Now, about this cd-rom business and concordances and hypertext. Have you seen a HyperCard stack named Texas? This stack has the ability to index a text file at a rate of 3Mb per hour. The results of the indexing are two "lists" and the text file. The first list is a word list that is alphabetically sorted, numbered by frequency, and can be searched. Once a word is selected from list one you are presented with list two. List two shows the word you have selected in a one-line context situation. Each line has the selected word centered on the screen with text to the left and right on the same line. When you find a line that appears to meet your criteria you select that line and then Texas presents you with the paragraph in the original text file containing the word. (verbally that's the best I can get at the moment.) If you have access to a Mac with HyperCard I would be most willing to mail you a copy of this package. The author has expressly mentioned that it is intended for CD-ROM application. I would be interested to hear what someone more familiar with concordance packages would have to say. 
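[An editorial aside: the word-list-plus-context display Brett describes is a classic keyword-in-context (KWIC) concordance. The Python below is a modern sketch of the technique with invented names and toy data, not the Texas stack itself: list one is a frequency-ranked word list, list two shows each occurrence centred in a one-line context.]

```python
# Minimal KWIC concordance: a frequency-sorted word list, then each chosen
# word shown centred in a one-line context, as in the Texas stack's two lists.
# Invented names; a sketch of the technique, not the actual software.

from collections import Counter

def word_list(text):
    """List one: words with frequencies, most frequent first."""
    return Counter(text.lower().split()).most_common()

def kwic(text, keyword, width=3):
    """List two: one-line contexts with the keyword in the middle."""
    words = text.lower().split()
    lines = []
    for i, w in enumerate(words):
        if w == keyword:
            left = " ".join(words[max(0, i - width):i])
            right = " ".join(words[i + 1:i + 1 + width])
            lines.append(f"{left} [{w}] {right}")
    return lines

text = "the cat sat on the mat and the cat slept"
assert word_list(text)[0] == ("the", 3)
assert kwic(text, "cat") == ["the [cat] sat on the",
                             "mat and the [cat] slept"]
```

Selecting a context line and jumping back to the full paragraph, as Texas does, is then just a matter of keeping the word's position in the original file alongside each line.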
Cordially -- -- george George Brett (not the baseball player) Manager, Academic Services UNC-Educational Computing Service ghb@ecsvax.UUCP or ghb@ecsvax.BITNET or ECSGHB@TUCC.BITNET >work: UNC-ECS POB 12035 RTP NC 27709-2035 919/549-0671 >home: 1408 Alabama Ave. Durham, NC 27705 919/286-9235 From: MCCARTY@UTOREPAS Subject: CD-ROMs and the problem of defining a "corpus" Date: 10 February 1988, 12:23:32 EST X-Humanist: Vol. 1 Num. 711 (711) --------------------------- From Dr Abigail Ann Young I'm not acquainted in detail with the CD-ROM (or any other version!) of the TLG, although I've used the printed TLL a great deal, so I may be misinformed about what I'm going to say as it pertains to the TLG as opposed to the TLL. But unless the TLG project did as the DOE project has done, and edited from MSS. all the hitherto UNpublished writings in Classical Greek, Koine, patristic, & Byzantine Greek, you cannot use it to determine the earliest occurrence or the latest occurrence of a word in Greek. Similarly, I cannot now use the TLL to find the earliest or last occurrence of a given word in Latin. I can find the earliest or last occurrences of words in PRINTED works/MSS. I think it is important for those of us who work with languages like Latin to remember that, even with the best CD-ROMs, software, et al., we are still restricted in our analyses to a comparatively small portion of the actual surviving texts because only a comparatively small portion of those texts have been edited and printed. (And I wouldn't want to think about the problem of how many of them have been edited well!) We can perhaps analyse and study the available corpus of Latin or Greek literature on CD-ROM much better than in any other form, but we should be careful not to confuse the available corpus with the entire corpus. It is a problem I often experience in my own work, because people are apt to say of a Latin word, "That form doesn't exist!" or "No, it doesn't mean that!" on the basis of the TLL. 
But the TLL only includes edited and published material, which means that it is entirely possible for other forms and other senses of known forms to exist in many, many MSS. texts of which the TLL is unaware. In our desire to use the latest technology to expand our understanding of a language or a literature, we mustn't forget how much work remains to be done in the locating, editing, and publishing (in whatever form, print or electronic) of writings still in MSS. or on papyrus, etc. I doubt even a Kurzweil and a 386 machine could help much with that! Abigail Ann Young REED From: MCCARTY@UTOREPAS Subject: CDROMMERIE (24 lines) Date: 10 February 1988, 12:28:34 EST X-Humanist: Vol. 1 Num. 712 (712) -------------------------------------- From David Nash Beryl T. Atkins thanks you for the (first 15) responses on CD-ROM. " Have only scanned the Humanist responses so far but wonder if it's worth making the point that what we're thinking of putting onto CDROM now would be a corpus of general English, with a front end allowing selection (& downloading to one's own computer) of node-word concordances of flexible length. It's really a research linguist's tool. We have also of course thought of putting out dictionaries on CDROM, I suspect these too would be for the researcher rather than the sporadic user or the dic freak browsing through etymologies at midnight." -DGN From: MCCARTY@UTOREPAS Subject: Long-term storage of electronic texts (37 lines) Date: 10 February 1988, 16:19:11 EST X-Humanist: Vol. 1 Num. 713 (713) ----------------------------------- From Ian Lancashire A journal called Electronic Library intermittently publishes interesting articles about electronic methods of storing text. The general impression given by these is that computer-readable media, including CD-ROM, offer much less of a storage life than acid-free paper or (best) microform. 
Since an alarming percentage of existing library collections is already irrevocably lost to this slow fire, and since most of us are using electronic storage for our own work, may I pose a question? What is the most secure method of storing machine-readable text, and how many years does that method offer us? The best solution for mass storage I know of has been termed videomicrographics by Dennis Moralee ("Facing the Limitations of Electronic Document Handling", Electronic Library 3.3 [July 1985]: 210--17). This proposes very high-resolution page images stored with conventional micrographics and then -- here's where mother CPU comes in -- output electronically by scanning when (and only when) the user asks for the page(s). This method is, in theory, technology-independent and reliable. I hope my colleagues and students will begin to make good use of concordances and indexes of vast archives of literature on CD-ROM and the like, but a better use of our efforts might be to save some fragments of the past 150 years from the fires of acid paper. From: MCCARTY@UTOREPAS Subject: In defense of write-only (59 lines) Date: 10 February 1988, 16:27:44 EST X-Humanist: Vol. 1 Num. 714 (714) ------------------------------------ From (M.J. CONNOLLY [Slavic/Eastern]) I restricted my reply about CD-ROM usage to the read-only formats because the question seems to have come from the publishing quarter. I do, however, agree with Bjorndahl that the issue of read/write deserves attention and his caveat is well put. One important consideration, which humanists must appreciate, is the matter of _authority_ in data. When I read something in a reference work I can be reasonably certain that what I see at line m on page n is the same as what another reader has in another copy of the same edition. Not so, however, if those contents were in a read/write format. Short of edit-trails, which few will be likely to implement or respect, we could never be sure whose Dante we are really reading.
The same difference applies in music with LPs and CDs on the one hand versus the far more volatile and editable tape media on the other hand. Why is the recording industry in the US ashiver at the prospect of digital audio tape decks? Granted, they could care less whether the Mozart in your living room is echt, but they do see that they forfeit control in a read/write scheme. For many of our needs, that same control rigidity serves as a blessing. I should not feel able to place scholarly reliance on a work presented in a medium where anyone else may have been before me, dickying around at leisure. If you've used someone else's Home stack in HyperCard or an editable spell-checker in a public facility, you'll understand the feeling on a minor scale. This is also why an MIS officer at your institution will prefer to maintain his own database and job mailing labels and reports when you need them, rather than letting you have a copy of the database with which to make your own. Without linked editing of the copies (then virtually a shared file) changes to A do not appear in B and v.v. The second issue with read/write involves availability and standards. CD-ROM is here, and the manufacturers are in near uniform agreement about the standards for the emerging alternate formats. (See also what I wrote about the intended upgrade paths.) Read/write is still not here, and I see no guarantee that we shall be spared the CBS:RCA-color-television duel or the VHS:Beta wars or the interactive laserdisc skirmishes whenever speedy products appear at affordable prices. For the meantime we should not put important work on hold to wait for the next stage of the technology -- there is always an exciting next stage. Yet even when we do have useful read/write, the need for read-only will not have passed. Prof M.J. Connolly Slavic & Eastern Languages Boston College / Carney 236 Chestnut Hill MA 02167 ( U . S . A .)
(617)552-3912 cnnmj@bcvax3.bitnet From: MCCARTY@UTOREPAS Subject: CD-ROMS *are* marvellous (73 lines) Date: 10 February 1988, 18:48:56 EST X-Humanist: Vol. 1 Num. 715 (715) ------------------------------------ From Sterling Bjorndahl Susan Hockey asks if CD ROMS are marvellous. All I can say is that the Ibycus system provides a relatively cheap way for people to be able to do linear searches, as she describes. Cheap both in terms of money (if your department buys it) and time. And even though it is just a linear search it has revolutionized the way we do research here at Claremont. It is becoming unacceptable here for us as graduate students to write a paper without checking through and digesting several centuries worth of Greek literature. Let me give an example. I wanted to find out something about in what context a man and a woman could recline on the same couch in a meal setting. This was in an examination of the Gospel of Thomas saying 61, in which Jesus says "Two will rest on a couch; the one will die, and the other will live." Salome replies: "Who do you think you are, big shot? You've been up on my couch and eaten from my table!" (implying: which of us two is going to die, you or I?). This is an amusing Cynic-type chreia which goes on to be elaborated in sayings 62-67 in a rather gnostic fashion. My question was: isn't it a little unusual for Jesus to be on a couch with Salome at all? I am not from a classics background - a situation shared by many in my field - and so I didn't know where to turn to find out about men and women eating together on the same couch. I found a couple of old books dealing with home and family in Greece and Rome, but they were rather popular in orientation and didn't cite many sources to back up their assertions. I knew a little bit about hetaerae, of course, but didn't know if I knew enough to compare Salome's behaviour with that particular role. 
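The "linear search" Bjorndahl credits with revolutionizing his research is, mechanically, a single sequential pass over the text that reports each literal hit with its surrounding context. A minimal, hypothetical sketch in Python (not the actual Ibycus/TLG software, and using plain ASCII rather than encoded Greek):

```python
# Hypothetical sketch of an Ibycus-style sequential scan: one linear pass
# over the text, emitting a keyword-in-context (KWIC) line for each
# literal match. Not the actual TLG/Ibycus software.

def kwic(text, keyword, width=30):
    """Return keyword-in-context snippets for every literal match."""
    hits = []
    start = 0
    while True:
        i = text.find(keyword, start)
        if i == -1:
            break
        left = text[max(0, i - width):i]
        right = text[i + len(keyword):i + len(keyword) + width]
        hits.append(left + "[" + keyword + "]" + right)
        start = i + 1  # resume the scan just past this hit
    return hits

# Example: the saying quoted above.
sample = "Two will rest on a couch; the one will die, and the other will live."
for line in kwic(sample, "couch", width=12):
    print(line)
```

Even this naive one-pass structure, applied to a whole corpus, is what replaces flipping through the printed index of every volume: the machine does the menial scan, and the scholar does the digesting.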
However, with the TLG texts on CD-ROM I was able to pick a few key Greek words to search for. In half an hour I had twenty pages of references, in context, in which those words appeared together. Over the next few days I was able to digest them and put them into categories of public/private, married or non-married couple, and I paid some attention to time frame, social status, and geographical setting. This resulted in about twenty texts that made a rather nice appendix to a paper (although it suffers from a lack of archeological data, and of course it cannot claim to have *every* relevant text - just enough to argue a case). I really don't know how else I could have done this. I would not have taken the time to search through the index of every volume in the Loeb Classical Library series. It may be that some bright classics person reading this will be able to point me to a monograph dealing with this subject, but as I say, it's not my field and I don't know my way around it very well yet. I couldn't find any secondary literature that helped very much (if you know of some, I'd be grateful to hear about it). So even though it's not CD-I, even though it's just a linear search on a literal string, it has made quite a difference to me and to several other people here. It has allowed us to work conveniently with texts that we may have never before touched. It certainly does not do our work for us - we would not want that. Our work is the digesting and analyzing. Flipping through indices is menial labour that we don't mind letting the machine do. I believe that it is IBM's unofficial motto that reads: "Machines should work. People should think." The more texts I can have at my fingertips, the more I'm going to enjoy making connections that may never have been made before. Sterling Bjorndahl BJORNDAS@CLARGRAD.BITNET From: MCCARTY@UTOREPAS Subject: CATH 88, call for papers (59) Date: 10 February 1988, 18:52:18 EST X-Humanist: Vol. 1 Num.
716 (716) ---------------------- From May Katzen --------------------------------------------------------------- --------------------------------------------------------------- HUMANIST readers are reminded that the deadline for submitting abstracts for the CATH 88 conference is 29th February 1988. --------------------------------------------------------------- CALL FOR PAPERS FOR CATH 88 CONFERENCE Computers and Teaching in the Humanities: Re-defining the Humanities? University of Southampton, UK 13th-15th December 1988 The theme this year is the interface between the computer and the humanities disciplines and whether or not traditional assumptions and methods are being supported or challenged by the use of new technologies in higher education. The conference will be mainly devoted to workshops and seminars, and proposals are invited for contributions to workshops. Abstracts of at least 500 words should be sent to: Dr May Katzen Office for Humanities Communication University of Leicester Leicester LE1 7RH UK 011 44 533 522598 (from North America) Please note that the deadline for receipt of papers is 29TH FEBRUARY 1988. ------------------------------------------------------------- May Katzen Office for Humanities Communication University of Leicester From: MCCARTY@UTOREPAS Subject: academics and apartheid Date: 10 February 1988, 19:00:24 EST X-Humanist: Vol. 1 Num. 717 (717) ------------------------------------------ From Sebastian Rahtz It's nothing to do with computers at all, but could I take the opportunity to suggest to anyone reading this that they consider joining the World Archaeology Congress? For a mere $20 you can be part of a body which is more or less unique (so far as I know) in its commitment to THE academic subject (archaeology - didn't you know?), to Third World integration into the subject, and to a vision unbiased by geographical or historical constraints. It is founded on the success of the 1986 'anti-apartheid' World Archaeological Congress.
If you have never heard of it, find out now by writing to W A C, Department of Archaeology, University, Southampton S09 5NH, UK. I might add that I disagree with much of what WAC has to say about the past and the relationship between ethnic groups and historic remains. But since WAC is the most anarchic body I have yet encountered in academia, and is seriously trying to bring together academia and politics, it gets my vote every time. join WAC or be square.... Sebastian Rahtz Computer Science University Southampton S09 5NH From: MCCARTY@UTOREPAS Subject: Large corpora and interdisciplinary poaching (45 lines) Date: 10 February 1988, 19:02:50 EST X-Humanist: Vol. 1 Num. 718 (718) Sterling Bjorndahl in his most recent note tells a story I'm sure many of us could repeat in essence: how access to some large textual corpus, such as the TLG, allows a scholar to get a grip on the material of an academic discipline outside his or her training. I certainly can, but I'll spare you most of the details, since Sterling's are as good. Like Sterling I'm not a classicist; like Sterling I found myself forced to poach on the classicists' turf in order to serve research in another field, and I also used the TLG on an Ibycus. I know I didn't find every reference to the Greek words for "mirror" (footnote prowling turned up several the experimental TLG disk didn't produce), but never mind. I found enough to make a sound argument, at least sound enough to get the thing accepted in a classicists' journal. Having so much evidence was, yes, marvellous for all the obvious reasons. My point, again, is to underscore what Sterling noted about the benefit of access to large corpora for us interdisciplinary poachers. Ok, in the good old days, anyone lucky enough to get into a good school would have read classics and, perhaps, have known just where to look for "mirrors," though I doubt the comprehensiveness of *his* knowledge. 
And yes, the CD-zealot isn't guaranteed a sound argument simply by having mounds of evidence. Intelligence is required. But it IS marvellous for a poorly educated Miltonist to be able to survey 21,000,000 words of Greek so as to be able to make a plausible reconstruction of what his intellectual master was talking about. And all this merely by doing a sequential search, with no built-in morphological analysis, no tools for disambiguation. Are you all aware of what Yaacov Choueka and his colleagues have done with the Global Jewish Database, ca. 80 million words of rabbinic Hebrew? If only the Ibycus could run that software! Willard McCarty mccarty@utorepas.bitnet From: MCCARTY@UTOREPAS Subject: Scanning query (21 lines) Date: 10 February 1988, 23:19:00 EST X-Humanist: Vol. 1 Num. 719 (719) ------------------------------------------ From goer@sophist.uchicago.edu (Richard Goerwitz) Can anyone recommend a decent, inexpensive scanning service that can handle a typeset English document whose only catch is some variation in type size and some extra accented characters? -Richard L. Goerwitz goer@sophist.uchicago.edu !ihnp4!gargoyle!sophist!goer From: MCCARTY@UTOREPAS Subject: TEXAS 0.1, in response to George Brett (49 lines) Date: 10 February 1988, 23:23:30 EST [George Brett, 10 February 1988, 09:25:46 EST, writes: "Have you seen a HyperCard stack named Texas? This stack has the ability to index a text file at a rate of 3Mb per hour. The result of the indexing are two "lists" and the text file." And goes on to give a clear description of what Mark Zimmerman's software will do for the Mac user. "I would be interested to hear what someone more familiar with concordance packages would have to say."] X-Humanist: Vol. 1 Num. 720 (720) ------------------------------------------ From David Nash >From ghb@ecsvax (George Brett) Well, I'm not familiar with a range of concordance packages, but as nice as TEXAS 0.1 is (and at a very nice price), it calls for some additional features to make it a good tool.
There are two handicaps I found in the first minutes: (1) A context word cannot be selected from the line- or full-context displays, and then used in turn in the index. One has to return to the word-list screen and type in the desired word. (2) A slice of display (whether index, line-context or page context) cannot be saved to an (ASCII) file. Both these would be no problem if the Clipboard were available from TEXAS, but unfortunately it isn't. I believe Zimmerman would like to hear further suggestions for its improvement. I would use an option of reverse lexicographic indexing, for instance, and of suppressing material marked in a certain way (e.g., all material inside square brackets). The approach deserves encouragement, I think. And I like it that he added the HyperCard interface to a previous package. (Zimmerman's browsing package was written in C and used first with Unix.) -DGN From: MCCARTY@UTOREPAS Subject: Text archiving (44 lines) Date: 10 February 1988, 23:26:26 EST X-Humanist: Vol. 1 Num. 721 (721) ------------------------------------------ From David Durand Ian Lancashire's posting about the longevity of various forms of machine-readable archival storage brought a few thoughts to mind. The first thought is agreement with the plea to use the greatest possible speed in moving information from pages printed on acid paper to some more secure form. It would truly be a sad irony if the last 200 years of written production, an accumulated weight (literally) of knowledge and effort, were to be lost to the very technology of inexpensive printing that helped it to be created. However, I think there is a common misconception about the safety of digitally stored data that needs to be addressed. Granted that the longevity of most digital media currently in existence probably cannot be guaranteed to be greater than 30 years, there is a significant difference between digital and analog media.
The crucial difference is that digital media do not degrade gradually as they are stored, and they can be speedily duplicated without loss of content in a way that other media cannot. In addition, given the speed of technological change, each copying operation is likely to pack more and more information into one storage unit. This means that over time, the copying process becomes easier and easier. Compare the operation of copying the TLG on magnetic tape to the operation of printing a new copy of the CD-ROM from the master stamping plate. Another factor is that once data is captured in character-based form (i.e. not page images) it can be manipulated much more flexibly than printed, video, or microform media, which can only be read by human beings. All of these technologies may be needed in the interim process of converting past knowledge into permanent electronic form. We have to work with what is available, after all. But the fact remains that a computer file is both more easily duplicated without error and more useful for research than an image-based representation of the same texts. Perhaps character recognition will eventually solve the problems of converting photographs of badly typeset Victorian books into ASCII files, but it seems best to go with the most flexible format available. From: MCCARTY@UTOREPAS Subject: Final SGML posting (64 lines) Date: 10 February 1988, 23:28:53 EST X-Humanist: Vol. 1 Num. 722 (722) ------------------------------------------ From David Durand This posting is a joint product with Allen Renear of Brown. A point of terminology: The word `tagset' refers to the set of content entities indicated in a computer-stored document. The issue of what tagset to use is independent of the decision to use SGML or not. The point that must be kept in mind about SGML is that SGML is *not* a tagset -- it is a language for rigorously declaring a tagset and its associated grammar.
An SGML-conforming file consists of two components: (1) the prologue, a header which declares the tags and describes their grammar, and (2) the tagged text itself. The AAP standard describes a tagset -- and gives the associated SGML declarations. That is, it is a tagset that conforms to the SGML standard in having, among other things, an SGML declaration. Many of us believe that the important thing is that SGML provides a standard for declaring and then using tagsets. But attempts to standardize tagsets themselves are much riskier projects. For one thing, a great deal of ingenuity and innovation goes into a tagset. For another, a tagset may correspond to a work of scholarship in itself, if the tagged information involves a new interpretation of the text. Consider, for example, the introduction of paragraphing into a manuscript text, or the scansion of a work previously regarded as ametrical. Tagsets often represent an improved understanding of a text's structure or of the relative importance of different aspects of that structure. They may be innovative, they may be controversial. Not the sort of thing which is ripe for standardization. I understand that University Microfilms is accepting, as an experiment, some dissertations in electronic form -- provided they are marked up in the AAP tagset. Well, my dissertation is rigorously marked up -- but there are no AAP tags corresponding to my *primitive_term_introduction* tag, or my *counterexample* tag. In some cases I add detail to what seems to be an AAP analogue: my definition is composed of a *definiendum* and a *definiens* -- so far so good. But my definiens contains *clauses* and my definiendum a *predicate* tag. In more awkward cases there may be semantic incongruities in tagsets that no amount of granularity adjustments will resolve. Such tagsets will be fundamentally incommensurable. Standard tagsets support comparative work, so we should explore them and create them.
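The two-part structure described above, a prologue declaring the tags and their grammar followed by the tagged text itself, can be made concrete with a minimal sketch. The element names here are invented for illustration and are not drawn from the AAP tagset:

```sgml
<!-- Hypothetical minimal SGML document: prologue plus tagged text.
     The element names are invented for illustration only. -->
<!DOCTYPE letter [
  <!ELEMENT letter - - (salute, body)>  <!-- grammar: a salute, then a body -->
  <!ELEMENT salute - - (#PCDATA)>
  <!ELEMENT body   - - (para+)>         <!-- one or more paragraphs -->
  <!ELEMENT para   - - (#PCDATA)>
]>
<letter>
<salute>Dear colleague,</salute>
<body>
<para>The prologue above declares the tagset; this is the tagged text.</para>
</body>
</letter>
```

A validating SGML parser can check the instance against the declared grammar (here, that every letter contains a salute followed by a body of paragraphs), which is precisely what declaring a tagset, rather than merely using one, buys you.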
But we should not hold out too much hope for them if we expect them to be adequate for all applications. Extensibility within a uniform structure is essential. The important thing is that 1) the manuscript be tagged, 2) the tags and their grammar be declared in a prologue, and 3) the tags be glossed as to their meaning. SGML provides a way of doing 1) and 2) -- without restricting anyone to any particular tagset. The problem of 3) -- standardizing a semantic gloss -- is still up for grabs -- but I doubt if much more can be done than to describe their meaning as clearly as possible in English. From: MCCARTY@UTOREPAS Subject: CD-ROM Bibliography Date: 11 February 1988, 16:12:15 EST X-Humanist: Vol. 1 Num. 723 (723) ------------------------------------------ From Bob Kraft A footnote to my nuancing [Lou B. wonders about the use of "nuance" as a verb; is this unusual?] of Susan Hockey's comments on CD-ROM: For a more extensive and balanced treatment of the state of text-oriented CD-ROM development as of a year ago, see the article by Linda W. Helgerson entitled "CD-ROM: the Basis for Scholarly Research in Higher Education" in the periodical publication called _CD Data Report_ 3.2 (December 1986) 15-18 (Langley Publications, 1350 Beverly Road, Suite 115-324, McLean, VA 22101). This "Profile" report covers TLG, the Brown-Harvard Isocrates Project (TLG disk "B"), the new Perseus Project of Gregory Crane at Harvard, CCAT, and PHI. It also includes a bibliography and set of relevant addresses. Finally, Randall Smith of the University of California in Santa Barbara should be urged to say something in some detail about his work with the TLG "B" disk on IBM-type machines, building on the Brown-Harvard development. He is 6500RMS@UCSBUXB.BITNET and should not be bothered too much or his graduate program work will suffer. But he has done important work in this important area.
Bob Kraft, CCAT From: MCCARTY@UTOREPAS Subject: PhD exams in humanities computing ( Date: 11 February 1988, 16:20:34 EST X-Humanist: Vol. 1 Num. 724 (724) ------------------------------------------ From Nancy Ide I sympathize with the problem of putting HUMANIST mail aside for several days with good intentions to get to it later. If I actually do get to it eventually, I find another problem arises: the topic I'd like to comment on has been abandoned in the current discussions. With this as a preface I'd like to say something about Sterling Bjorndahl's question concerning PhD exams in humanities computing. Bob Kraft made a good point: I, too, would be reluctant to consider a PhD exam that focusses on the computer-as-tool, and I too don't see the fact that courses are offered in the field as reason enough to argue that a PhD exam is appropriate--such a course in the field of English literature is effectively an extension of the methods course that every graduate student is required to take, but which no one would consider the basis of a PhD exam. However, I can see an exam in computer-as-method, or, put more broadly, quantitative methods--especially where the focus is on the relationship between quantitative methodology and more traditional methods. In literary studies, at least, understanding the critical approach that quantitative methods embody and fitting them into the critical context is a substantial task. I have argued before that courses in literary computing should include ample consideration of the critical approach that the methodology embodies, and that especially for beginning graduate students, such consideration is essential in order to prevent the inappropriate or, at least, ill-considered use of computers for literary research. I can also see PhD exams in areas that support humanities computing that involve a substantial theoretical component.
For instance, an exam that focussed on certain relevant areas within computer science (such as formal language and parsing theories, or data base theory and design) or linguistics or both would make sense. The answer to the question about offering PhD exams in humanities computing, then, is no in my opinion--not if the focus is tools or applications. However, there are substantially theoretical aspects to humanities computing that could justify such an exam. My remarks are made in the context of what I know best--literary and linguistic studies. I hope that those of you who are in other fields can determine whether what I say makes sense in relation to your own disciplines. Nancy Ide ide@vassar --note that my address has been and may continue to be incorrectly specified in message headers as ide@vas780 or ide@vaspsy. Please note that these addresses are incorrect and mail should be sent to ide@vassar. The problem should clear up in a few weeks. From: MCCARTY@UTOREPAS Subject: Time as a commodity (20 lines) Date: 11 February 1988, 16:23:31 EST X-Humanist: Vol. 1 Num. 725 (725) ------------------------------------------ From Brian Molyneaux Mike Stairs makes some interesting comments about the value of time - at least, his time - but I wonder if there is not some advantage in the physical activity of working, in that spending more time with the undivided object - a text, an image - may help one conceive of new ways of seeing it. when I get the money, I'm going to get my 386 with as much memory as I can cram into it, and shove in the best time-saving and time-segmenting software I can get - but I will still spend a lot of time turning pages and dreaming - and, gosh, I might even pick up a pen now and then, for old time's sake. From: MCCARTY@UTOREPAS Subject: The Packard Humanities Institute (63 lines) Date: 11 February 1988, 16:29:43 EST X-Humanist: Vol. 1 Num. 726 (726) [John M. 
Gleason of The PHI has kindly supplied the following information about his Institute, which has just become a member of HUMANIST. I am publishing his blurb here because it speaks to questions about the PHI's work that have arisen recently. --W.M.] The Packard Humanities Institute 300 Second Street, Suite 201 Los Altos, California 94022 USA Telephone (415) 948-0150 Bitnet xb.m07@stanford All of our activity currently involves collecting and analyzing bodies of text for eventual inclusion in a CD-ROM: 1. We are collecting all Latin writings through some undecided cutoff date. We issued in December 1987 PHI Experimental CD-ROM #1, which contained: 4 million Latin words processed by PHI. These include most of the authors of the Republic. For example, Cicero is complete. Several of these texts have not been available before in machine-readable form, e.g. Quintilian, Celsus, Seneca the Elder. IG 1 and 2, produced at Cornell University under a grant from The David and Lucile Packard Foundation. A number of miscellaneous texts produced by the Center for the Computer Analysis of Texts at the University of Pennsylvania. Many of these were previously included in the Pilot CD-ROM of the Thesaurus Linguae Graecae. Biblical texts include the Septuagint, New Testament, Hebrew Old Testament, Authorized and Revised Standard Versions. Other texts include Arabic, Syriac, Coptic, Aramaic, French, Danish and English. Experimental CD-ROM #1 will be ready for distribution by the end of February 1988. The cost will be very low. 2. PHI is working with outside scholars to produce complete morphological analyses of the Hebrew Old Testament and the Greek New Testament. Various other projects are being considered and even dabbled at, but the Latin CD-ROM should occupy us for quite a while. Main PHI personnel: From: MCCARTY@UTOREPAS Subject: Conference on Sentence Processing (212 lines) Date: 11 February 1988, 19:32:53 EST X-Humanist: Vol. 1 Num.
727 (727) ------------------------------------------ From Terry Langendoen CUNY FIRST ANNUAL CONFERENCE ON HUMAN SENTENCE PROCESSING MARCH 24 - 27, 1988 --------------------------- CORPORATE SPONSORS: BELL COMMUNICATIONS RESEARCH INTERNATIONAL BUSINESS MACHINES CORPORATION --------------------------- - PROGRAM - All meetings (except reception) will be held at the CUNY Graduate Center, 33 West 42nd Street, in the Auditorium at the Library level. THURSDAY, March 24, 7:00-9:00 pm Evening wine reception at Roosevelt House, Hunter College, 47-49 East 65th Street (between Park and Madison). FRIDAY MORNING, March 25, 9:30-12:30 David Swinney, Janet Nicol, Joan Bresnan, Marilyn Ford, Uli Frauenfelder, and Lee Osterhout--Coreference Processing During Sentence Comprehension. Kenneth Forster--Tapping Shallow Processing with a Sentence Matching Task. Susan M. Garnsey, Michael K. Tanenhaus, and Robert M. Chapman--Evoked Potentials and the Study of Sentence Comprehension. Wayne Cowart--Notes on the Biology of Syntactic Processing. Merrill Garrett--Semantic and Phonological Structure in Word Retrieval for Language Production. Jose E. Garcia-Albea, Susana Del Viso, and Jose M. Igoa--Movement Errors and Levels of Processing in Sentence Production. FRIDAY AFTERNOON 2:30-6:30 Tutorial Sessions SATURDAY MORNING, March 26, 9:30-1:00 Stuart Shieber--An Architecture for Psycholinguistic Modeling. Amy Weinberg--Minimal Commitment, Deterministic Parsing and the Theory of Garden Paths. Aravind Joshi--Processing Crossed and Nested Dependencies: An Automaton Perspective on the Psycholinguistic Results. Eric Sven Ristad--Complexity of Linguistic Models. Joseph Aoun and Samuel S. Epstein--A Computational Treatment of Quantifier Scope. Mark Steedman--Context and Composition. Mark Johnson--Parsing as Deduction: The Use of Knowledge of Language. SATURDAY AFTERNOON 3:00-6:00 Special Session on Ambiguity Resolution Michael K.
Tanenhaus, Greg Carlson, Julie Boland and Susan Garnsey--Lexical Parallelism. Paul Gorrell--Establishing the Loci of Serial and Parallel Effects in Syntactic Processing. Lyn Frazier--Ambiguity Resolution Principles. Martin Chodorow, Harvey Slutsky, and Ann Loring--Parsing Nondeterministic Verb Phrases. Mitch Marcus--Whence Deterministic Parsing? SUNDAY MORNING, March 27, 9:30-12:30 Steven P. Abney--Parsing and Psychological Validity. Thomas G. Bever, Caroline Carrithers, and Brian McElree--The Psychological Reality of Government and Binding Theory. Maryellen MacDonald--Facilitation Effects from NP-Trace in Passive Sentences. Charles Clifton, Jr.--Filling Gaps. Emmon Bach--Understanding Unnatural Language: Dutch and German Verb Clusters. Anthony S. Kroch--Grammatical and Processing Effects on the Appearance of Resumptive Pronouns. PREREGISTRATION FORM Name _____________________________________________________________ Address _____________________________________________________________ _____________________________________________________________ _____________________________________________________________ University or corporate affiliation (if not shown above): _____________________________________________________________ Fees may be paid by mail with check or Post Office money order payable to: CUNY Sentence Processing Conference Please circle: STUDENT* NON-STUDENT Advance registration $ 5.00 $15.00 Walk-in registration $10.00 $20.00 *To verify student status, please have a faculty member sign below: ________________ _____________________________________________ Name (printed) Faculty signature Return this form to: Human Sentence Processing Conference Linguistics Program CUNY Graduate Center 33 West 42nd St. New York, N.Y. 10036 __ Check here to be put on a list for crash space. (You will need to bring a sleeping bag. Assume no smoking unless specially arranged).
Any special needs (we will try to accommodate): ___________________________________________________________________ A small demonstration area will be open on Friday and Saturday from 12-2pm and for two hours after the afternoon sessions. An IBM-XT with 512K internal memory and a Symbolics 3620 (Rel.7.1) can be made available for running demonstrations. Please check the appropriate box if you wish to do a demo on one of these machines and provide a brief description of your program. IBM-XT ________________________________________________________ _________________________________________________________ Symbolics _________________________________________________________ 3620 _________________________________________________________ FOR FURTHER INFORMATION From: MCCARTY@UTOREPAS Subject: Job advertisement (24 lines) Date: 11 February 1988, 22:10:46 EST X-Humanist: Vol. 1 Num. 728 (728) ------------------------------------------ From Vicky Walsh UCLA Humanities Computing Facility has a job open for a programmer/analyst with experience in the Humanities. I am looking for someone with lots of micro experience, UNIX knowledge, use, or experience, knowledge of networking, but especially a technical person with a good grounding in the humanities and what humanists do for a living. If you are interested, or know anyone who is, contact me via bitnet (IMD7VAW@UCLAMVS) or send a letter and resume to: Vicky A. Walsh, Humanities Computing, 2221B Bunche Hall, UCLA, 405 Hilgard Ave, LA, CA 90024-1499. PH: 213/206-1414. I will be at Calico later this month, if anyone would like to talk to me about the job. Vicky Walsh From: MCCARTY@UTOREPAS Subject: CD what? Date: 12 February 1988, 09:16:24 EST X-Humanist: Vol. 1 Num. 729 (729) ------------------------------------------ From Sarah Rees Jones Quiet voice from the slip-stream, "What EXACTLY is CD-ROM?"
From: MCCARTY@UTOREPAS Subject: TEXAS 0.1 from science@nems.arpa (66 lines) Date: 12 February 1988, 09:56:52 EST X-Humanist: Vol. 1 Num. 730 (730) ------------------------------------------ From David Nash [Lightly edited -DGN] From: science@nems.ARPA (Mark Zimmermann) Subject: Re: For HUMANIST: TEXAS 0.1 Date: 11 Feb 88 06:57 EST tnx for comments/suggestions re TEXAS! Note that the version number is 0.1 -- not meaning to imply completeness.... [Index or context displays can be copied to the Clipboard:] I deliberately made the buttons that cover those fields a few pixels too small to entirely shield the text area. Try moving the HyperCard 'hand' cursor to the very edge (left or right) of the index or context pseudo-scrolling display, and it should turn into an I-beam, so that you may select that text and copy/paste it ad lib. (Alternatively, hit the "tab" key and the entire contents of the field gets selected.) Sorry that I didn't document that. [...] As for the problem in pasting into the "Jump to..." dialog box, that's HyperCard's fault, and (I hope!) will be fixed in a new release of that program soon. Also, you are of course free to shrink the buttons yourself if you want a different arrangement -- that's what HyperCard is for, user flexibility.... [...] To unlock the stack so that you can move buttons around, resize them, change fonts, etc., hold down the command key before you pull down the HC 'File' menu -- that lets you choose "Protect Stack..." and raise the user level above Typing. Saving to an ASCII file is easy under MultiFinder -- and you can certainly copy and accumulate arbitrary chunks of ASCII text and export them via scrapbook/clipboard/etc. Unfortunately, the current version of HyperCard isn't friendly enough to allow continuously-open desk accessories, so unless you have enough extra memory to run MultiFinder with HC, the procedure is a trifle inconvenient. 
That's why I provided the "Notes Card", which of course you're free to make multiple copies of, modify at will, etc. HyperCard limits text fields to 32kB each, which is another inconvenience.... I will think about how to make jumping around in the Index better ... would like to allow a special (perhaps option-click?) on Context or Text words to jump the Index to that point, but haven't figured out a good implementation yet. Proximity searching (subindexing as in my UNIX-type C progs for browsing) is a higher priority at this time, for me ... really need to be able to cut down on the search space when indices get big.... As mentioned on the TEXAS v.0.1 stack, all my C source code is available if anybody wants to do any of these extensions -- send a formatted Mac disk, self-addressed stamped envelope, and $2 or less to me to get it. (I need to subsidize my son who does the disk dup'ing.) [...] ~z From: MCCARTY@UTOREPAS Subject: CD-ROM and related technology at Penn Date: 12 February 1988, 16:57:35 EST X-Humanist: Vol. 1 Num. 731 (731) ------------------------------------------ From Jack Abercrombie Bob K. neglected to report on all forms of CD-ROM and related technology available within CCAT. We have two additional WORM drives installed in a computer lab on campus. These WORMs (400 megs of storage) are used heavily in the following ways: 1. Masters for gang copying of the New Testament, LXX, and other texts are stored on the WORMs. Our staff loads individual diskettes with duplicates of parts of the texts from the WORMs over a network. 2. The South Asia Department and other departments at Penn have archival texts stored on the WORMs for use by faculty and graduate students. The students can search and concord texts off the WORMs or copy the texts to their diskettes for use in another installation. Incidentally, we have a major effort this year in transferring our Asian language holdings into electronic form much as we did last year for modern Russian poetry. 3. 
The WORMs also hold digitized pictures of material for papyrology, language instruction, and textual research in which pictorial information is closely linked to ASCII texts. A user can access the WORMs and search both texts and pictures together. Jack Abercrombie Director of CCAT, Assistant Dean for Computing, University of Pennsylvania From: MCCARTY@UTOREPAS Subject: Ibycus UK Date: 12 February 1988, 22:45:38 EST X-Humanist: Vol. 1 Num. 732 (732) ------------------------------------------ From John L. Dawson My company (Humanities Computing Consultants Ltd) has the agency for distributing Ibycus machines in Britain, so I am in close touch with everyone in Britain who has an Ibycus (all of them micro versions). From: MCCARTY@UTOREPAS Subject: TLG CD-ROM B usage (50 lines) Date: 12 February 1988, 22:47:31 EST X-Humanist: Vol. 1 Num. 733 (733) ------------------------------------------ From Randall M. Smith <6500rms@UCSBUXB.BITNET> This is a partially rewritten copy of a letter which I sent directly to Susan Hockey. I think it will serve as a brief description of the work which I have been doing. I wasn't sure if people in HUMANIST would be interested in all of these details, but I will gladly share them: At the Classics Department at the University of California at Santa Barbara, we are using the TLG CD-ROM #B, which was indexed by Greg Crane at the Harvard Classics department and Paul Kahn at IRIS at Brown. We are using it on an AT clone with software which I have been adapting from Greg Crane's UNIX software. It is true that sequential searching on the AT is abysmally slow, but the index searching is very fast. We can find a word with a few occurrences on the entire disc in less than a minute, and displaying the Greek text on the screen takes another minute or two. While this is not as fast as a program like the BYU concordance program, this is working with a database over 200MB in size. 
Since this is not a commercial system, we do have free access to the text, and pointers to the correct place in the text are provided by the index system. This seems to me the ideal compromise since we can revert to a (slow) sequential search if we ever need to. We are currently working to include relational searches, which can be implemented fairly easily using the information provided by the indices. It should be fairly quick, though it may not be exactly "interactive." Also, we are using a general-purpose CD filing system (the Compact Disc Filing System, developed by Simson Garfinkel at the MIT Media Lab) which is rather slow; I am sure that we could speed operations immensely by replacing this with a CD interface designed specifically for this application. Because of my experience with the indices I am all in favor of indices being provided on the TLG and PHI CD-ROM's. I am told that the new TLG CD-ROM #C has an index system which provides the number of occurrences of a word in each author, but no references into the text to retrieve the context. I suppose that this eliminates scanning an author who does not use a given word, but it is still a lengthy process to search the entire disc. In any case, we are using our system to search for occurrences of words; we recently looked for _monadikes_ in conjunction with _stigmes_, though we had to do the correlations by hand. (We actually used the same technique which we are writing into the software.) We also ran a search for words built on the stem _vaukrar-_ for a professor in the history department. One of our professors is working on a project which will involve searching for specific verb stems in conjunction with certain endings. Also, students in Greek Prose Composition have used the computer to verify the existence of various forms or stems in Attic Prose authors. We have not yet tried to do any sort of thematic searching or anything that sophisticated, but we have been quite pleased so far. 
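[The speed difference Smith describes comes from inverted indexing: each word maps to a list of its occurrence points, so a lookup avoids a sequential pass over the 200MB text, and relational (proximity) searches fall out of the same position lists. A toy sketch in Python, with invented sample data; this does not reflect the actual TLG index format. --WM]

```python
# Minimal sketch of index-assisted retrieval: build an inverted index
# mapping each word to the positions where it occurs, then answer
# lookups without rescanning the text. The sample text is invented.
def build_index(text):
    index = {}
    for pos, word in enumerate(text.split()):
        index.setdefault(word.lower(), []).append(pos)
    return index

text = "the index finds the word without scanning the whole text"
index = build_index(text)

# Direct lookup: all positions of "the", with no sequential pass.
print(index["the"])  # -> [0, 3, 7]

# A relational search (two words within n positions of each other)
# uses only the two position lists, as Smith's letter suggests.
def near(index, w1, w2, n):
    return [(a, b) for a in index.get(w1, [])
                   for b in index.get(w2, []) if abs(a - b) <= n]

print(near(index, "word", "scanning", 2))  # -> [(4, 6)]
```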
I would be pleased to answer any questions that anyone may have about our software. (If people are interested in the nitty-gritty of the index system I would be happy to post a brief description of how it works.) I would also be interested in other people's ideas about or experience with indices on CD-ROM's. Randall M. Smith (6500RMS@UCSBUXB.BITNET) Department of Classics University of California Santa Barbara, CA 93106 From: MCCARTY@UTOREPAS Subject: Job advertisement (39 lines) Date: 12 February 1988, 22:50:30 EST X-Humanist: Vol. 1 Num. 734 (734) ------------------------------------------ From Susan Kruse KING'S COLLEGE LONDON University of London COMPUTING CENTRE The post of ASSISTANT DIRECTOR HUMANITIES AND INFORMATION MANAGEMENT is available within the Centre. This senior post, supported by 6 specialist staff, carries particular responsibility for developing and supporting computing applications in the Humanities Departments, an area of the Centre's work to which the College attaches great importance. The successful applicant will play a crucial role as the Centre carries through a major expansion of its computing and communications facilities at all levels from the micro to the mainframe. The College's central computing provision will be based on a large VAX Cluster. Applicants should possess a degree in an appropriate discipline and be able to demonstrate a substantial degree of relevant computing and managerial experience. The appointment will be made at a suitable point, depending on age, qualifications and experience, on the academic-related grade 5 scale (1st March 1988): £21,055 - 24,360 per annum including London Allowance. Further particulars and application forms may be obtained from the Assistant Personnel Officer, King's College London, Strand, London WC2R 2LS. The closing date for the receipt of applications is 2nd March 1988. From: MCCARTY@UTOREPAS Subject: who's to blame (57 lines!) Date: 12 February 1988, 23:06:38 EST X-Humanist: Vol. 1 Num. 
735 (735) ------------------------------------------ From Sebastian Rahtz OK, let's see who dominates HUMANIST; I just took my HUMANIST backlog (I keep most of it) for the last six months or so, and picked out those whose name I could easily identify (i.e. those with decent mail headers!), which gave me 121 messages. The following appeared once: ked@coral.berkeley.edu Robert Amsler Tom Benson John Bradley Dan Brink Robin C. Cover Jim Cerny Dan Church Stephen DeRose Charles Faulhaber Robert Gauthier David Graham Doug Hawthorne Susan Hockey Randall Jones May Katzen Bob Kraft Ian Lancashire Joanne M. Badagliacco George M. Logan Brian Molyneaux Elli Mylonas Stephen Page Rosanne Potter Sarah Rees Jones Laine Ruus David Sitman Mike Stairs Wayne Tosh Vicky Walsh Keith Whitelam Ronnie de Sousa the following twice: George Brett Allen H. Renear Charles Faulhaber M.J. CONNOLLY Malcolm Brown Jack Abercrombie Randall Smith Terry Langendoen and thrice: Richard Goerwitz Lou Burnard Dr Abigail Ann Young and four times Sterling Bjorndahl Norman Zacour but the winners are: 5 Dominik Wujastyk 5 David Durand 5 Nancy Ide 5 David Nash 5 Michael Sperberg-McQueen 5 Robert Amsler 6 Mark Olsen 6 James H. Coombs 6 Bob Kraft I have left out myself and Willard....... sebastian rahtz PS please don't bother checking this; I just spent 2 minutes with 'grep' and 'sort' while waiting for something. Lou Burnard promises a proper analysis From: MCCARTY@UTOREPAS Subject: Utility of CD-ROMs (30 lines) Date: 14 February 1988, 14:54:44 EST X-Humanist: Vol. 1 Num. 736 (736) ------------------------------------------ From amsler@flash.bellcore.com (Robert Amsler) The question of the utility of CD-ROMs involves both the issue of what type of data one would want on CD-ROMs, which affects how many people would buy the CD-ROM, and the issue of how they would want to access that data. I believe this was mentioned several months ago as an issue for CD-ROM. 
I would expect CD-ROM access to be such that I could easily download any portion of the available information for further manipulation, editing, etc. CD-ROMs lacking such accessibility would not be totally impractical, but would only be attractive to a subset of the total potential users. If CD-ROM publishers decide that they wish to prohibit this kind of access and only provide the equivalent of contemporary database access, to selected items without downloading permission--then I am more pessimistic about the future of CD-ROM apart from a storage medium incorporated into specific hardware-based systems intended solely for the use of that CD-ROM. From: MCCARTY@UTOREPAS Subject: What is CD-ROM? (23 lines) Date: 15 February 1988, 12:54:42 EST X-Humanist: Vol. 1 Num. 737 (737) ------------------------------------------ From M.J. CONNOLLY (Slavic/Eastern) For Sarah Rees Jones: 'Tain't exactly the most up-to-date presentation any more, but your question will perhaps find a ready answer in Laub, Leonard: "What is CD ROM?" From: MCCARTY@UTOREPAS Subject: This is a test (11 lines) Date: 15 February 1988, 16:12:57 EST X-Humanist: Vol. 1 Num. 738 (738) AaBbCcDdEeFfGgHhIiJjKkLlMmNnOoPpQqRrSsTtUuVvWwXxYyZzzzzzzzzzzzzzzzzzzzz! From: MCCARTY@UTOREPAS Subject: A file-server for HUMANIST (104 lines) Date: 16 February 1988, 16:25:42 EST X-Humanist: Vol. 1 Num. 739 (739) Dear Colleague: You may recall that some months ago we began trying to set up a file-server here so that HUMANISTs could download files of common interest without having to trouble anyone other than themselves. At long last, due to the untiring efforts of Steve Younker, our Postmaster, we seem to have succeeded. Here are instructions on how to use this file-server. 
Two separate but very similar operations will usually be involved: (1) getting the list of available files, and (2) getting a particular file. Note that in the following what you type is in caps; semicolons and periods are not part of the commands to be typed; and addresses expressed as USERID AT NODENAME may have to be entered as USERID@NODENAME.

A. If you are on Bitnet/NetNorth/EARN and use an IBM VM/CMS system: - for (1) send the interactive command: TELL LISTSERV AT UTORONTO GET HUMANIST FILELIST HUMANIST - for (2) send the command: TELL LISTSERV AT UTORONTO GET fn ft HUMANIST where fn = filename, ft = filetype (of the file you've chosen)

B. If you are on Bitnet/NetNorth/EARN and use a Vax VMS system, you may be able to use the following interactive procedure: - for (1) type: SEND/REMOTE UTORONTO LISTSERV you should get the prompt: (UTORONTO)LISTSERV: then type: GET HUMANIST FILELIST HUMANIST - for (2) repeat the above but substitute GET fn ft HUMANIST, where fn and ft are as above.

C. If you are on Bitnet/NetNorth/EARN but don't use an IBM VM/CMS system, or if you are not on Bitnet, etc.: - for (1) use your mailer of whatever kind, e.g., MAIL, to send an ordinary message to LISTSERV AT UTORONTO and include as the one and only line: GET HUMANIST FILELIST HUMANIST This should be on the first line of the mail message. In other words, there should be no blank lines preceding this line. - for (2) repeat the above but substitute for the first line GET fn ft HUMANIST, where fn and ft are as above.

D. As an alternative to B, use whatever command you have to send a file, e.g., SENDFILE, to LISTSERV AT UTORONTO, the first and only line of this file being again, for (1): GET HUMANIST FILELIST HUMANIST and for (2) GET fn ft HUMANIST

At the moment the offerings are not extensive, and you've probably all seen what's there, but we'd be pleased if you would test the procedures anyhow. If you have enduring material of possible interest to HUMANISTs, please consider submitting it for storage on our file-server. 
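[For case C above, the request is nothing more than a one-line message body. A trivial sketch in Python; the helper name is ours, not LISTSERV's, and actually mailing the result is left to whatever mailer you use. --WM]

```python
# Build the one-line mail body for a LISTSERV file request, as in
# option C above: the GET command must be the very first line, with
# no blank lines before it. Function name is illustrative only.
def listserv_request(fn=None, ft=None):
    if fn is None:
        # Case (1): ask for the list of available files.
        return "GET HUMANIST FILELIST HUMANIST"
    # Case (2): ask for one file, where fn = filename, ft = filetype.
    return "GET %s %s HUMANIST" % (fn, ft)

print(listserv_request())            # the FILELIST request line
print(listserv_request("fn", "ft"))  # fn/ft are the placeholders above
```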
Send me a note describing what you have and I'll let you know. Since space on the server is not infinite, we need to exercise some restraint. Yours, Willard McCarty (mccarty@utorepas) Steve Younker (postmstr@utoronto) From: MCCARTY@UTOREPAS Subject: More on software viruses (138 lines) Date: 16 February 1988, 19:57:52 EST X-Humanist: Vol. 1 Num. 740 (740) ---------------------------- From Y. Radai, with thanks Issue 74 of the Info-IBMPC digest contained a description of a "virus" discovered at Lehigh University which destroys the contents of disks after propagating itself to other disks four times. Some of us here in Israel, never far behind other countries in new achievements (good or bad), are suffering from what appears to be a local strain of the virus. Since it may have spread to other countries (or, for all we know, may have been imported from abroad), I thought it would be a good idea to spread the word around. Our version, instead of inhabiting only COMMAND.COM, can infect any executable file. It works in two stages: When you execute an infected EXE or COM file the first time after booting, the virus captures interrupt 21h and inserts its own code. After this has been done, whenever any EXE file is executed, the virus code is written to the end of that file, increasing its size by 1808 bytes. COM files are also affected, but the 1808 bytes are written to the beginning of the file, another 5 bytes (the string "MsDos") are written to the end, and this extension occurs only once. The disease manifests itself in at least three ways: (1) Because of this continual increase in the size of EXE files, such programs eventually become too large to be loaded into memory or there is insufficient room on the disk for further extension. (2) After a certain interval of time (apparently 30 minutes after infection of memory), delays are inserted so that execution of programs slows down considerably. 
(The speed seems to be reduced by a factor of 5 on ordinary PCs, but by a smaller factor on faster models.) (3) After memory has been infected on a Friday the 13th (the next such date being May 13, 1988), any COM or EXE file which is executed on that date gets deleted. Moreover, it may be that other files are also affected on that date; I'm still checking this out. (If this is correct, then use of Norton's UnErase or some similar utility to restore files which are erased on that date will not be sufficient.) Note that this virus infects even read-only files, that it does not change the date and time of the files which it infects, and that while the virus cannot infect a write-protected diskette, you get no clue that an attempt has been made by a "Write protect error" message since the possibility of writing is checked before an actual attempt to write is made. It is possible that the whole thing might not have been discovered in time were it not for the fact that when the virus code is present, an EXE file is increased in size *every* time it is executed. This enlargement of EXE files on each execution is apparently a bug; probably the intention was that it should grow only once, as with COM files, and it is fortunate that the continual growth of the EXE files enabled us to discover the virus much sooner than otherwise. From the above it follows that you can fairly easily detect whether your files have become infected. Simply choose one of your EXE files (preferably your most frequently executed one), note its length, and execute it twice. If it does not grow, it is not infected by this virus. If it does, the present file is infected, and so, probably, are some of your other files. (Another way of detecting this virus is to look for the string "sUMsDos" in bytes 4-10 of COM files or about 1800 bytes before the end of EXE files; however, this method is less reliable since the string can be altered without attenuating the virus.) 
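[Radai's second method amounts to a simple signature scan: look for the marker string at a fixed offset in COM files, or near the end of EXE files. A minimal modern sketch in Python; only the "sUMsDos" signature and the offsets come from the report above, and the sample byte strings are invented. --WM]

```python
# Signature scan in the spirit of Radai's second detection method:
# the virus leaves "sUMsDos" in bytes 4-10 of infected COM files and
# roughly 1800 bytes before the end of infected EXE files.
SIGNATURE = b"sUMsDos"

def looks_infected(data, kind):
    if kind == "com":
        return data[4:11] == SIGNATURE   # bytes 4-10 inclusive
    if kind == "exe":
        return SIGNATURE in data[-2000:]  # scan near the end of the file
    return False

# Invented sample data for illustration.
clean_exe = b"MZ" + b"\x00" * 300
infected_com = b"\x90" * 4 + SIGNATURE + b"\x00" * 50

print(looks_infected(clean_exe, "exe"))     # -> False
print(looks_infected(infected_com, "com"))  # -> True
```

As Radai notes, this check is weaker than the grow-on-execution test, since a variant virus need only alter the marker string to evade it.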
If any of you have heard of this virus in your area, please let me know; perhaps it is an import after all. (Please specify dates; ours was noticed on Dec. 24 but presumably first infected our disks much earlier.) Fortunately, both an "antidote" and a "vaccine" have been developed for this virus. The first program cures already infected files by removing the virus code, while the second (a RAM-resident program) prevents future infection of memory and displays a message when there is any attempt to infect it. One such pair of programs was written primarily by Yuval Rakavy, a student in our Computer Science Dept. In their present form these two programs are specific to this particular virus; they will not help with any other, and of course, the author of the present virus may develop a mutant against which these two programs will be ineffective. On the other hand, it is to the credit of our people that they were able to come up with the above two programs within a relatively short time. My original intention was to put this software on some server so that it could be available to all free of charge. However, the powers that be have decreed that it may not be distributed outside our university except under special circumstances, for example that an epidemic of this virus actually exists at the requesting site and that a formal request is sent to our head of computer security by the management of the institution. Incidentally, long before the appearance of this virus, I had been using a software equivalent of a write-protect tab, i.e. a program to prevent writing onto a hard disk, especially when testing new software. It is called PROTECT, was written by Tom Kihlken, and appeared in the Jan. 13, 1987 issue of PC Magazine; a slightly amended version was submitted to the Info-IBMPC library. 
Though I originally had my doubts, it turned out that it is effective against this virus, although it wouldn't be too hard to develop a virus or Trojan horse for which this would not be true. (By the way, I notice in Issue 3 of the digest, which I received only this morning, that the version of PROTECT.ASM in the Info-IBMPC library has been replaced by another version submitted by R. Kleinrensing. However, in one respect the new version seems to be inferior: one should *not* write-protect all drives above C: because that might prevent you from writing to a RAMdisk or an auxiliary diskette drive.) Of course, this is only the beginning. We can expect to see many new viruses both here and abroad. In fact, two others have already been discovered here. In both cases the target date is April 1. One affects only COM files, while the other affects only EXE files. What they do on that date is to display a "Ha ha" message and lock up, forcing you to cold boot. Moreover (at least in the EXE version), there is also a lockup one hour after infection of memory on any day on which you use the default date of 1-1-80. (These viruses may actually be older than the above-described virus, but simply weren't noticed earlier since they extend files only once.) The author of the above-mentioned anti-viral software has now extended his programs to combat these two viruses as well. At present, he is concentrating his efforts on developing broad-spectrum programs, i.e. programs capable of detecting a wide variety of viruses. Just now (this will give you an idea of the speed at which developments are proceeding here) I received notice of the existence of an anti-viral program written by someone else, which "checks executable files and reports whether they include code which performs absolute writes to disk, disk formatting, writes to disk without updating the FAT, etc." (I haven't yet received the program itself.) Y. 
Radai Computation Center Hebrew University of Jerusalem RADAI1@HBUNOS.BITNET From: MCCARTY@UTOREPAS Subject: IBM Regional Conference in Princeton, NJ (126 lines) Date: 16 February 1988, 20:05:46 EST X-Humanist: Vol. 1 Num. 741 (741) ---------------------------- From Jack Abercrombie IBM CONFERENCE ON ACADEMIC COMPUTING sponsored by Princeton University, University of Pennsylvania, and IBM Corporation I have selected the following presentations that relate to computing in the humanities from the IBM Regional Conference to be held in Princeton, N.J. on 18th-19th March. This regional IBM conference is different from other such conferences this year in that the emphasis here is on humanities computing rather than computing in the hard sciences or engineering. On the 18th March, colleagues will present their work in panel discussions and seminar sessions. On the 19th, there will be a general demonstration of software. Any readers interested in further details on the conference may write to me directly. Thank you. ********************************************************************* FOREIGN LANGUAGE INSTRUCTION R. Allen Computers and Proficiency-Based Language Acquisition: (Univ. of PA) The Case of Arabic P.A. Batke CAI on Micros at Duke University (Duke Univ.) D.F. Sola TECHNOLOGY TRANSFER: Databased Foreign Language Writing (Cornell Univ.) R.M. Wakefield The Development of Listening Comprehension in (Univ. Minn.) German with IBM AT, InfoWindow and Laserdisk Video CONCURRENT PANELS (10:00-noon) NOT LISTED HERE: Economics, Psychology, Technology-based curriculum development, and Information Sharing. HISTORY W.O. Beeman Linking the Continents of Knowledge (Brown Univ.) D.W. Miller The Great American History Machine (Carnegie-Mellon Univ.) J.E. Semonche Encountering the Past: Computer Simulations in (Univ. of N.C.) U.S. history PANEL DISCUSSION on LANGUAGE & LITERATURE J.S. Noblitt (Cornell University), P.A. Batke (Duke University), R.L. 
Jones (Brigham Young University), G.P. Landow (Brown University) USE OF CAD IN ARCHITECTURE/CLASSICAL ARCHAEOLOGY D.G. Romano The Athena Polias Project: (Univ. of PA) Archaeology, Architecture, Athletics and AutoCAD CONCURRENT PANELS (1:30-3:00) NOT LISTED HERE: Mathematics, Instructional Video Disc/CD ROM. ARCHAEOLOGY H. Dibble Interactive Computer Graphics, Imaging and Databasing: (Univ. of PA) Applications to a Stone Age Archaeological Project in France T.D. Price Archaeological Concepts for Undergraduates: (Univ. of Wisc.) The Case of the Mysterious Fugawi R. Saley Two Archaeologically-Based Computer Applications (Harvard Univ.) TEXTUAL ANALYSIS J.R. Abercrombie Teaching Computer-Assisted Research Techniques to (Univ. of PA) Future Scholars R.L. Jones The Creation of a Literary Data Base: Two Approaches (BYU) I. Lancashire The Art and Science of Text Analysis (Univ. of Toronto) MUSIC R.B. Dannenberg Real Time Music Understanding (Carnegie-Mellon Univ.) F.T. Hofstetter To be announced (Univ. of Del.) R. Pinkston A Practical Approach to Software Synthesis (Univ. of Texas) on the IBM PC CONCURRENT PANELS (3:30-5:00) NOT LISTED HERE: Biology, Examples of Instructional Computing. From: MCCARTY@UTOREPAS Subject: A query and a (very brief) return to Hypertext (29 lines) Date: 16 February 1988, 21:38:32 EST X-Humanist: Vol. 1 Num. 742 (742) ---------------------------- From Stephen R. Reimer First, I am interested in obtaining a copy of Wilhelm Ott's "TUSTEP" program: can anyone advise me on how to proceed? Secondly, I noticed a series of quotations from Ted Nelson in the February BYTE magazine (p. 14): in case anyone missed it, I offer just a small sampling. "Xanadu /his hypertext system/ is not a conventional project. . . . This is a religion." "The objective is to save humanity before we send it into the garbage pail. We must remove the TV-induced stupor that lies like a fog across the land . . . /and/ make the world safe for smart children." 
Our children, however, are threatened not only by the influence of TV, but also by the influence of database software: "Compartmentalized and stratified fields produce compartmentalized and stratified minds." These quotations are from a speech given at the Software Entrepreneur's Forum in Palo Alto recently. (He also offered a reflection on publishers and CD-ROMs: "Information lords offering information to information peons.") Is this man sane? Is BYTE misrepresenting him? Stephen Reimer, University of Alberta (SREIMER@UALTAVM) From: MCCARTY@UTOREPAS Subject: Updated report on the Israeli virus (111 lines) Date: 17 February 1988, 09:13:41 EST X-Humanist: Vol. 1 Num. 743 (743) [Yisrael Radai has sent me a revised version of his report on the recent MS-DOS virus discovered in Israel. It is intended to replace what I circulated yesterday on this subject. --WM] ---------------------------- From Y. Radai, with thanks Issue 74 of the Info-IBMPC digest contained a description of a "virus" discovered at Lehigh University which destroys the contents of disks after propagating itself to other disks four times. Some of us here in Israel, never far behind other countries in new achievements (good or bad), are suffering from what appears to be a local strain of the virus. Since it may have spread to other countries (or, for all we know, may have been imported from abroad), I thought it would be a good idea to spread the word around. Our version, instead of inhabiting only COMMAND.COM, can infect any executable file. It works in two stages: When you execute an infected EXE or COM file the first time after booting, the virus captures interrupt 21h and inserts its own code. After this has been done, whenever any EXE file is executed, the virus code is written to the end of that file, increasing its size by 1808 bytes. 
COM files are also affected, but the 1808 bytes are written to the beginning of the file, another 5 bytes (the string "MsDos") are written to the end, and this extension occurs only once. The disease manifests itself in at least three ways: (1) Because of this continual increase in the size of EXE files, such programs eventually become too large to be loaded into memory or there is insufficient room on the disk for further extension. (2) After a certain interval of time (apparently 30 minutes after infection of memory), delays are inserted so that execution of programs slows down considerably. (The speed seems to be reduced by a factor of 5 on ordinary PCs, but by a smaller factor on faster models.) (3) After memory has been infected on a Friday the 13th (the next such date being May 13, 1988), any COM or EXE file which is executed on that date gets deleted. Moreover, it may be that other files are also affected on that date (in which case use of Norton's UnErase or some similar utility to restore files which are erased on that date will not be sufficient). Note that this virus infects even read-only files, that it does not change the date and time of the files which it infects, and that while the virus cannot infect a write-protected diskette, you get no clue that an attempt has been made by a "Write protect error" message since the possibility of writing is checked before an actual attempt to write is made. It is possible that the whole thing might not have been discovered in time were it not for the fact that when the virus code is present, an EXE file is increased in size *every* time it is executed. This enlargement of EXE files on each execution is apparently a bug; probably the intention was that it should grow only once, as with COM files, and it is fortunate that the continual growth of the EXE files enabled us to discover the virus much sooner than otherwise. 
From the above it follows that you can fairly easily detect whether your files have become infected. Simply choose one of your EXE files (preferably your most frequently executed one), note its length, and execute it twice. If it does not grow, it is not infected by this virus. If it does, the present file is infected, and so, probably, are some of your other files. (Another way of detecting this virus is to look for the string "sUMsDos" in bytes 4-10 of COM files or about 1800 bytes before the end of EXE files; however, this method is less reliable since this string can be altered without attenuating the virus.) If any of you have heard of this virus in your area, please let me know; perhaps it is an import after all. (Please specify dates; ours was noticed on Dec. 24 but presumably first infected our disks much earlier.) Fortunately, both an "antidote" and a "vaccine" have been developed for this virus. The first program cures already infected files by removing the virus code, while the second (a RAM-resident program) prevents future infection of memory and displays a message when there is any attempt to infect it. One such pair of programs was written primarily by Yuval Rakavy, a student in our Computer Science Dept. In their present form these two programs are specific to this particular virus; they will not help with any other, and of course, the author of the present virus may develop a mutant against which these two programs will be ineffective. On the other hand, it is to the credit of our people that they were able to come up with the above two programs within a relatively short time. My original intention was to put this software on some server so that it could be available to all as a public service. 
However, the powers that be have decreed that it may not be distributed outside our university unless an epidemic of this virus actually exists at the requesting site and a formal request is sent in writing to our head of computer security by the management of the institution. Incidentally, long before the appearance of this virus, I had been using a software equivalent of a write-protect tab, i.e. a program to prevent writing onto a hard disk, especially when testing new software. It is called PROTECT, was written by Tom Kihlken, and appeared in the Jan. 13, 1987 issue of PC Magazine; a slightly amended version was submitted to the Info-IBMPC library. Though I originally had my doubts, it turned out that it is effective against this virus, although it wouldn't be too hard to develop a virus or Trojan horse for which this would not be true. I have ordered (on a trial basis) a hardware write-protection mechanism which I heard of. If it works, I'll post an evaluation of it to the digest. Of course, this is only the beginning. We can expect to see many new viruses both here and abroad. In fact, two others have already been discovered here. In both cases the target date is April 1. One affects only COM files, while the other affects only EXE files. What they do on that date is to display a "Ha ha" message and lock up, forcing you to cold boot. Moreover (at least in the EXE version), there is also a lockup, *without any message*, one hour after infection of memory on any day on which you use the default date of 1-1-80. The identifying strings are "APRIL 1ST", "VIRUS" and "sURIV". (These viruses may actually be older than the above-described virus, but simply weren't noticed earlier since they extend files only once.) The author of the above-mentioned anti-viral software has now extended his programs to combat these two viruses as well. At present, he is concentrating his efforts on developing broad-spectrum programs, i.e.
programs capable of detecting a wide variety of viruses. Yisrael Radai Computation Center Hebrew University of Jerusalem RADAI1@HBUNOS.BITNET From: MCCARTY@UTOREPAS Subject: Our new Republic of Letters (31 lines) Date: 17 February 1988, 09:28:34 EST X-Humanist: Vol. 1 Num. 744 (744) Jean-Claude Guedon, a new HUMANIST, has sent me the following observation on our discussion group. A "beginner's mind," as a teacher of mine once used to say, is open to perceptions that experience and specialization tend to attenuate. So I'm grateful for his beginner's glimpse. "After sampling some of the discussions going on and looking at the various interests of the members, I am rather glad I am part of [HUMANIST]. In a sense, groups like the Humanist, based on efficient means of communication, rebuild a situation not unlike that of the old "Republic of Letters". The comparison appears even more convincing if we remember that many members of this informal republic were the equivalent of modern functionaries and could, therefore, take advantage of the mail systems developed for kings and princes. Our only challenge - and it is not a small one - is to be as good as they once were!" Jean-Claude Guedon Institut d'histoire et de Sociopolitique des sciences et Litterature comparee Universite de Montreal From: MCCARTY@UTOREPAS Subject: A new text-analysis package for MS-DOS (40 lines) Date: 17 February 1988, 09:40:02 EST X-Humanist: Vol. 1 Num. 745 (745) The following is republished from another Canadian ListServ discussion group, ENGLISH. My thanks to Marshall Gilliland, the editor of that group, who is also a HUMANIST. -- WM -------------------------------------------------------------------------- From There is a review of a new program for DOS micros in the January issue of RESEARCH IN WORD PROCESSING NEWSLETTER, and it may interest some of you. The reviewer is Bryan Pfaffenberger, author of several books on personal computing. 
The program is TEXTPACK V, version 3.0, on four 5 1/4" floppies, from Zentrum fur Umfragen, Methoden, und Analysen e.V. Postfach 5969 D-6800 Mannheim-1, West Germany Cost: $60.00 (US), postpaid This is the micro version of a mainframe software package for content analysis. Pfaffenberger says in his summary that it "is probably the most powerful content analysis program available . . .[and is] a family of related text analysis programs that include procedures for generating simple word frequency analysis, creating key-word-in-context concordances, compiling indexes, and comparing the vocabularies of two texts. Although far from user-friendly in the Macintosh sense, Textpack's programs are well-conceived, fast, and powerful. The documentation is cryptic and dry, but a reasonably proficient PC user can manage it. Textpack V can, in sum, do just about everything that a literary scholar or political scientist would want to do with a computer-readable text. Academic computing centers take note: when the humanists and social scientists start knocking on your door and talking about text analysis, you'll do well to have a copy of the MS-DOS version of Textpack V around." M.G. From: MCCARTY@UTOREPAS Subject: Activities on HUMANIST, second report (449 lines) Date: 17 February 1988, 12:50:56 EST X-Humanist: Vol. 1 Num. 746 (746) Dear Colleagues: Today HUMANIST has for the first time reached a (momentary) total of 200 listed members. Since 7 of these are actually redistribution lists, this numerological fact is somewhat fuzzy. Nevertheless, an occasion to celebrate, if you need one. To mark the event I am circulating Lou Burnard's report on our online activities for about the last six months. It has been submitted to the Newsletter of the ACH.
Yours, Willard McCarty HUMANIST So Far: A Report on Activities, August 1987 to January 1988 by Lou Burnard, Oxford University Computing Service This is the second in a series of reports on HUMANIST, the Bitnet/NetNorth/EARN discussion group for computing humanists that is sponsored jointly by the ACH, the ALLC and the University of Toronto's Centre for Computing in the Humanities. The first report, published in the ACH Newsletter, vol. 9, no. 3 (Fall 1987) and circulated on HUMANIST itself, covered the initial two months. This one reviews the subsequent six months of strenuous activity. At the time of writing, participants in HUMANIST number nearly 180 and are spread across 11 countries (see table 1). Members are largely, but by no means exclusively, from North American academic computing centres. Table 2 shows that fewer than half of these participants actually create the messages that all, perforce, are assumed to read; out of over 600 messages during the last six months, nearly 500 were sent by just eight people, and out of 180 subscribers, 107 have never sent a message. In this, as in some other respects, HUMANIST resembles quite closely the sort of forum with which most of its members may be presumed to be most familiar: the academic committee. Personality traits familiar from that arena - the aggressive expert, the diffident enquirer, the unsuppressible bore - are equally well suited to this new medium: both young turks and old fogies are also to be found. Some of the rhetorical tricks and turn-taking rules appropriate to the oral medium find a new lease of life in the electronic one; indeed it is clear that this medium approximates more closely to orature than to literature. Its set phrases and jargon often betray an obsession with informal speech, and a desire to mimic it more *directly*, re-inventing typographic conventions for the purpose.
As in conversation too, some topics will be seized upon while others, apparently equally promising, sink like stones at their first appearance; the wise HUMANIST, like the good conversationalist, learns to spot the right lull into which to launch a new topic. Perhaps because the interactions in an electronic dialogue are necessarily fewer and more spaced out (no pun intended) than those in face-to-face speech, misunderstanding and subsequent clarifications seem to occur more often than one might expect. However, the detailed functional analysis of electronic speech acts is an interesting but immense task, which I regretfully leave to discourse analysts better qualified than myself. (Needless to say, HUMANIST itself reported at least two such studies of "electronic paralanguage" during the period under review). For the purposes of this survey I identified four broad categories of message. In category A (for Administrative) go test messages, apologies for electronic disasters, announcements -but not discussion- of policy and a few related and oversized items such as the beginner's Guide to HUMANIST and the invaluable "Biographies". (On joining each member submits a brief biographical statement or essay; these are periodically gathered together and circulated to the membership.) Messages in category A totaled 57 messages, 18% of all messages, or 25% by bulk. In category C (for Conference) go announcements of all other kinds - calls for papers, job advertisements, conference reports, publicity for new software or facilities etc. The figures here totaled 39 messages, 12% of all messages, 20% of all linage. As might be expected, categories A and C are disproportionately lengthy and not particularly frequent. I do not discuss them much further. In category Q (for query) go requests for information on specified topics, public answers to these, and summaries of such responses. These amounted to 20% of all messages but (again unsurprisingly) only 10% of all lines.
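[Editor's note: the category shares quoted above can be recomputed from the raw message and line counts given in Table 4 of this report. A minimal sketch; note that the published table evidently worked from marginally different totals, so the final decimal places differ slightly from those printed.]

```python
# Message and line counts per category, as reported in Table 4 below.
COUNTS = {
    "A": (57, 3867),    # administrative
    "C": (39, 3078),    # conference announcements etc.
    "D": (156, 6707),   # discussion
    "Q": (64, 1616),    # queries, answers, summaries
}

def shares(counts):
    """Return each category's share of messages and of lines, in percent."""
    total_msgs = sum(m for m, _ in counts.values())
    total_lines = sum(l for _, l in counts.values())
    return {tag: (100 * m / total_msgs, 100 * l / total_lines)
            for tag, (m, l) in counts.items()}
```

Rounding the result for category A gives 18% of messages and 25% of lines, matching the "18% of all messages, or 25% by bulk" quoted above.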
I have been unable, as yet, to gather any statistics concerning the extent of private discussions occurring outside the main HUMANIST forum, though it is clear from those cases subsequently summarised that such discussions not only occur but are often very fruitful. What proportion of queries fall on stony ground is also hard, as yet, to determine. In category D (for discussion) I place those messages perhaps most typical of HUMANIST: general polemic, argument and disputation. Overall, these messages account for nearly 50% of the whole, (44% by line) and thus clearly dominate the network. With the curious exception of November, the relative proportions of D category messages remain more or less constant within each month. As table 5 shows, the relative proportions of other types of message are by no means constant over time. Of course, assigning a particular message to some category is not always a clear-cut matter. Correspondents occasionally combine a number of topics - or kinds of topic - in a single message. Moreover, the medium itself is still somewhat unreliable. Internal evidence shows that not all messages always get through to all recipients, nor do they always arrive in the order in which they were despatched or (presumably) composed. This report is based only on the messages which actually reached me here in Oxford; concerning the rest I remain (on sound Wittgensteinian principles) silent. I am equally silent on messages in categories A and C above, which are of purely transient interest. Space precludes anything more than a simple indication of the range of topics on which HUMANISTs have sought (and often obtained) guidance.
In category Q over the last six months I found messages asking for information on typesetters with a PostScript interface, on scanners capable of dealing with microform, about all sorts of different machine readable texts and about software for ESL teaching, for library cataloguing, for checking spelling, for browsing foreign language texts, and for word processing in Sanskrit. HUMANISTs asked for electronic mail addresses in Greece and in Australia, for concordance packages for the Macintosh and the Atari ST, for address lists and bibliographies; they wondered who had used the programming language Icon and whether image processing could be used to analyse corrupt manuscripts; they asked for details of the organisational structure of humanities computing centres and of the standards for cataloguing of computer media. Above all however, HUMANISTs argue. Back in August 1987 HUMANIST was only a few months old, yet many issues which have since become familiar to its readership were already on the agenda. Where exactly are the humanities as a discipline? What is their relation to science and technology? Correspondents referred to the infamous "Two cultures" debate of the late fifties, somehow now more relevant to the kind of "cross-disciplinary soup we are cooking", but rather than re-flaying that particular dead horse, the discussion moved rapidly to another recurrent worry: did the introduction of computing change humanistic scholarship quantitatively or qualitatively? Does electronic mail differ only in scale and effectiveness from the runner with the cleft stick? Do computers merely provide better tools to do tasks we have always wanted to do? The opinion of one correspondent ("if computers weren't around, I doubt very much if many of the ways we think about texts would have come to be") provoked another into demanding (reasonably enough) evidence.
Such evidence as was forthcoming, however, did concede the point that "it could all be done without computers in some theoretical sense, but certainly not as quickly". Reference was made to a forthcoming collection of essays which might settle whether or not it was chimerical to hope that computers will somehow assist not just in marshaling the evidence but in providing interpretations of it. A second leitmotiv of HUMANIST discussions was first heard towards the end of August, when an enquiry about the availability of some texts in machine readable form provoked an assertion of the moral responsibility the preparers of such texts should accept for making their existence well known and preferably for depositing them in a Text Archive for the benefit of all. A note of caution concerning copyright was also first sounded here, and it was suggested that those responsible for new editions should always attempt to retain control over the rights to electronic distribution of their material. With the start of the new academic year, HUMANIST became more dominated by specific enquiries, and by a comparatively low-key wrangle about whether or not product announcements, software reviews and the like should be allowed to sully its airspace. Characteristically, this also provided the occasion for some HUMANISTs to engage in an amusing socio-linguistic discussion of the phenomenon known as "flaming", while others plaintively asked for "less chatter about the computer which is only a tool and more about what we are using it for". It appeared that some far-flung HUMANISTs actually have to pay money proportionate to the size of the mailings they accept, recalling an earlier remark about the uniquely privileged nature of the bulk of those enjoying the delights of this new time-waster, which was (as one European put it) "surely *founded* for chatter".
In mid October, a fairly pedestrian discussion about the general lack of recognition for computational activities and publications suddenly took off with the re-emergence of the copyright problems referred to above. If electronic publication was on a par with paper publication, surely the same principles of ownership and due regard for scholarly labours applied to it? But did this not militate against the current easy camaraderie with which articles, gossip and notes are transferred from one medium to another? as indeed are those more substantial fruits of electronic labours, such as machine readable texts? For one correspondent such activities, without explicit permission, were "a measure of the anesthetizing effect of the xerox machine on our moral sense". For another, however "asking concedes the other party's right to refuse". In mid-November, after a particularly rebarbative electronic foul-up, minimal editorial supervision of all HUMANIST submissions was initiated. Other than some discussion of the "conversational style" appropriate to the network, this appears to have had little or no inhibitory effect on either the scale or the manner of subsequent contributions. An enquiry about the availability of some Akkadian texts led to a repeated assertion of the importance to scholarship of reliable machine readable texts. Conventional publishers were widely castigated for their short-sighted unwillingness to make such materials available (being compared on one occasion to mediaeval monks using manuscripts for candles, and on another to renaissance printers throwing away Carolingian manuscripts once they had been set in type). HUMANISTs were exhorted to exert peer pressure on publishers, to pool their expertise in the definition of standards, to work together for the establishment of a consortium of centres which could offer archival facilities and define standards.
More realistically perhaps, some HUMANISTs remarked that publishers were unlikely to respond to idealistic pressures and that a network of libraries and data archives already existed which could do all of the required tasks and more were it sufficiently motivated and directed. At present, said one, all we have is "a poor man's archive" dependent on voluntary support. Others were more optimistic about the possibility of founding a "North American text Archive and Service Center" and less optimistic about the wisdom of leaving such affairs to the laws of the marketplace. One intriguing proposal was that a national or international Archive might be managed as a giant distributed database. Following the highly successful Vassar conference on text encoding standards in mid November, a long series of contributions addressed the issue of how texts should be encoded for deposit in (or issue from) such an archive. No one seems to have seriously dissented from the view that descriptive rather than procedural markup was desirable, nor to have proposed any method to describe such markup other than SGML, so that it is a little hard to see quite what all the fuss was about - unless it was necessary to combat the apathy of long established practice. One controversy which did emerge concerned the desirability (or feasibility) of enforcing a minimal encoding system, and the extent to which this was a fit role for an archive to take on. "Trying to save the past is just going to retard development" argued one, while another lone voice asserted a "rage for chaos" and praised "polymorphic encoding" on the grounds that all encoding systems were inherently subjective ("Every decoding is another encoding" to quote Morris Zapp). Anxiety was expressed about the dangers of bureaucracy. Both views were, to the middle ground at least, equally misconceived.
In the first case, no-one was proposing that past errors should dictate future standards, but only that safeguarding what had been achieved was a different activity from proposing what should be done in the future. In the second case, no-one wished to fetter (or to "Prussianize") scholarly ingenuity, only to define a common language for its expression. There was also much support for the commonsense view that conversion of an existing text to an adequate level of markup was generally much less work than starting from scratch. Clearly, however, a lot depends on what is meant by "generally" and by "adequate": for one HUMANIST an adequate markup was one from which the "original form of a document" could be re-created, thus rather begging the question of how that "original form" was to be defined. To insist on such a distinction between "objective text" and "subjective commentary" is "to miss the point of literary criticism altogether" as another put it. One technical problem with SGML which was identified, though not much discussed, was its awkwardness at handling multiply hierarchical structures within a single document; one straw man which was repeatedly shot down was the apparent verbosity of most current implementations using it. However, as one correspondent pointed out, the SGML standard existed and was not going to disappear. It was up to HUMANISTs to make the best use of it by proposing tag sets appropriate to their needs, perhaps using some sort of data dictionary to help in this task. At the end of 1987 it seemed that "text markup and encoding have turned out to be THE issue for HUMANISTs to get productively excited about". Yet the new year saw an entirely new topic sweep all others aside. A discussion on the styles of software most appropriate for humanistic research soon focused on an energetic debate about the potentials of hypertext systems.
It was clear to some that the text-analysis features of existing software systems were primitive and the tasks they facilitated "critically naive". Would hypertext systems, in which discrete units of text, graphics etc. are tightly coupled to form an arbitrarily complex network, offer any improvement on sequential searching, database construction, concordancing visible tokens and so forth? Participants in this discussion ranged more widely than usual between the evangelical and the ill-informed, so that rather more heat than light was generated on the topic of what was distinctively new about hypertext, but several useful points and an excellent bibliography did emerge. A hypertext system, it was agreed, did extend the range of what was possible with a computer (provided you could find one powerful enough to run it), though whether or not its facilities were fundamentally new remained a moot (and familiar) point. It also seemed (to this reader at least) that the fundamental notion of hypertext derived from a somewhat primitive view of the way human reasoning proceeds. The hypertext paradigm does not regard as primitive such mental activities as aggregation or categorisation (this X is a sort of Y) or semantic relationships (all Xs are potentially Yd to that Z), which lie at the root of the way most current database systems are designed. Nevertheless it clearly offers exciting possibilities - certainly more exciting (in one HUMANIST's memorable phrase) than "the discovery of the dung beetle entering my apartment". Considerations about the absence of software for analysing the place of individual texts within a larger cultural context led some HUMANISTs to ponder the rules determining the existence of software of any particular type. Was there perhaps some necessary connexion between the facilities offered by current software systems and current critical dogma?
One respondent favoured a simpler explanation: "Straightforward concordance programs are trivial in comparison to dbms and I think that explains the situation much better than does the theory of predominant literary schools". It seems as if HUMANISTs get not just "the archives they deserve" but the software that's easiest to write.

-----------Tables for the Humanist Digest------------------------

Table 1: HUMANIST subscribers by country

 |Country     |Subscribers|Redistribution lists included|
 |------------|-----------|-----------------------------|
 |?           |          2|                            0|
 |Belgium     |          3|                            0|
 |Canada      |         54|                            1|
 |Eire        |          1|                            0|
 |France      |          1|                            0|
 |Israel      |          4|                            0|
 |Italy       |          1|                            0|
 |Netherlands |          1|                            0|
 |Norway      |          3|                            0|
 |UK          |         37|                            4|
 |USA         |         73|                            2|
 |------------|-----------|-----------------------------|
 |Total       |        180|                            7|

Note: Each "redistribution list" appears as one member of HUMANIST but stands for a number of people. These are passive members, i.e., they only receive messages. The number of such members is not known to the compiler of this report and so does not figure in these tables.

Table 1a: Subscribers per node

 |Subscribers at node|Number of such nodes|
 |-------------------|--------------------|
 |                  1|                  70|
 |                  2|                  17|
 |                  3|                  11|
 |                  4|                   2|
 |                  5|                   1|
 |                  7|                   1|
 |                  8|                   1|
 |                 13|                   1|

Table 2: Messages sent per subscriber

 |Messages sent|Number of subscribers|Total messages|
 |-------------|---------------------|--------------|
 |            0|                  107|             0|
 |            1|                   31|            31|
 |            2|                   10|            20|
 |            3|                    9|            27|
 |            4|                    3|            12|
 |            5|                    2|            10|
 |            6|                    3|            18|
 |            7|                    3|            21|
 |            8|                    2|            16|
 |           10|                    1|            10|
 |           12|                    1|            12|
 |           14|                    1|            14|
 |           17|                    1|            17|
 |           18|                    1|            18|
 |           20|                    1|            20|
 |           71|                    1|            71|
 |-------------|---------------------|--------------|
 |Totals       |                  177|           316|

Table 3: Messages by origin

 |Country|Total messages|
 |-------|--------------|
 |?      |             8|
 |Canada |           130|
 |Israel |             6|
 |UK     |            40|
 |USA    |           132|

Table 4: Messages by type

 |Type|Messages|% messages|Line count|% lines|
 |----|--------|----------|----------|-------|
 |A   |      57|    17.981|      3867| 25.306|
 |C   |      39|    12.303|      3078| 20.143|
 |D   |     156|    49.211|      6707| 43.891|
 |Q   |      64|    20.189|      1616| 10.575|

Table 5: Messages by type within each month

 |Month|Type|Messages|% in month|Lines|% in month|
 |-----|----|--------|----------|-----|----------|
 |AUG87|A   |      10|    23.256| 1230|    32.031|
 |SEP87|A   |       7|    17.500|  105|     9.722|
 |OCT87|A   |       9|    30.000|  428|    36.992|
 |NOV87|A   |      16|    34.783|  863|    48.840|
 |DEC87|A   |      10|    11.494| 1178|    25.732|
 |JAN88|A   |       2|     4.000|    5|     0.256|
 |AUG87|C   |       3|     6.977| 1712|    44.583|
 |SEP87|C   |       6|    15.000|  208|    19.259|
 |OCT87|C   |       1|     3.333|   93|     8.038|
 |NOV87|C   |      13|    28.261|  526|    29.768|
 |DEC87|C   |       6|     6.897|  218|     4.762|
 |JAN88|C   |       6|    12.000|  112|     5.744|
 |AUG87|D   |      22|    51.163|  694|    18.073|
 |SEP87|D   |      17|    42.500|  577|    53.426|
 |OCT87|D   |      13|    43.333|  518|    44.771|
 |NOV87|D   |       4|     8.696|  131|     7.414|
 |DEC87|D   |      52|    59.770| 2649|    57.864|
 |JAN88|D   |      37|    74.000| 1678|    86.051|
 |AUG87|Q   |       8|    18.605|  204|     5.313|
 |SEP87|Q   |      10|    25.000|  190|    17.593|
 |OCT87|Q   |       7|    23.333|  118|    10.199|
 |NOV87|Q   |      13|    28.261|  247|    13.978|
 |DEC87|Q   |      18|    20.690|  520|    11.359|
 |JAN88|Q   |       5|    10.000|  155|     7.949|

*****END***** From: MCCARTY@UTOREPAS Subject: Lou unkindly cut off when peeked under VM/CMS (22 lines) Date: 17 February 1988, 22:53:32 EST X-Humanist: Vol. 1 Num. 747 (747) As more than one person has complained to me today, Lou Burnard's report on HUMANIST when PEEKed in the reader of an IBM VM/CMS system appears to be unkindly cut off in mid stride. Actually the whole thing has arrived, but one "feature" of PEEK is that by default it will show only the first x lines of a long file.
To read the whole thing you should first RECEIVE it. Then you can read it with XEDIT, or download it and print it out. I suggest the latter, since I find it a pleasure to read, and I find such pleasures hard to sustain on screen. Perhaps that's just a sign of age, however. Willard McCarty mccarty@utorepas.bitnet From: MCCARTY@UTOREPAS Subject: CD-ROMs, on-line dictionaries, and NLP (32 lines) Date: 18 February 1988, 10:28:56 EST X-Humanist: Vol. 1 Num. 748 (748) ---------------------------- From Joe Giampapa I would like to direct this message to all people who use CD-ROMs, on-line dictionaries, or are familiar with natural language processing (NLP). I am trying to do research in NLP, but am frequently confronted by the stark reality that my programs (as well as those of others) are merely "toys" without an on-line dictionary. Is there anybody out there who has had this problem and has managed to get around it on a modest budget and hardware configuration (i.e., <=$2k, <10Meg, on a VAX 8650 or Symbolics Lisp Machine)? The "solution" I came up with was using a CD-ROM dictionary (like Microsoft's Bookshelf), paired with a database of linguistic information for lexical items (i.e., subcategorization) on the hard disk. However, I still have not seen one working CD-ROM dictionary, wonder how their addressing works, and how feasible my idea is. I would appreciate any insight into my dilemma, and references of people (or departments) to contact who are doing similar work. Thank you in advance. Joe Giampapa giampapa@brandeis.bitnet From: MCCARTY@UTOREPAS Subject: Contact Date: 18 February 1988, 19:33:18 EST X-Humanist: Vol. 1 Num. 749 (749) ---------------------------- From Mark Olsen Does anyone know if the Centre for Research in Education, University of East Anglia, can be reached via BITNET or other e-mail system? Thanks, Mark From: MCCARTY@UTOREPAS Subject: Hypertext and Stephen Reimer's note (28 lines) Date: 19 February 1988, 00:00:03 EST X-Humanist: Vol. 1 Num.
750 (750) ---------------------------- From Jean-Claude Guedon Stephen Reimer asked us whether Ted Nelson was sane or misrepresented by Byte Magazine. I do not think the question is very interesting in itself, but it is significant of the way in which ideas are staged (not to say marketed) nowadays. It reminds me of the way the term "postmodernism" was bandied about in a conference I attended last November. This said, it would be useful to examine what is really new in the expression "hypertext". To me, hypertext is quite old. Pascal may have produced a good example of it in his "Pensees". Diderot certainly implemented a form of hypertext in the Encyclopedie when he injected "renvois" to various articles at different places within an article, thus allowing the reader to "drift" according to the inclinations of his thought. Indeed, it would be fun to store the Encyclopedie in hypertext mode on CD-ROM with electronic referrals following the indications of the original volumes. Anybody interested in pursuing the idea (and finding the financing to do it)? Best to all. Jean-Claude Guedon (Guedon@umtlvr.bitnet) From: MCCARTY@UTOREPAS Subject: Hypertext, on-line dictionaries (36 lines) Date: 19 February 1988, 00:04:12 EST X-Humanist: Vol. 1 Num. 751 (751) ---------------------------- From Randall Smith <6500rms@UCSBUXB.BITNET> This is partly in response to Joe Giampapa's question about on-line dictionaries for natural language processing and partly a description of a hypertext system which HUMANISTs may be interested in knowing about. Greg Crane at the Harvard University Classics Department is working on a model Greek hypertext system called "Perseus." This model, when complete, will have a Greek text linked to an _apparatus criticus_, dictionary, grammar, metrical scansion, and commentary for teaching purposes. As far as I know this work is being done using Lightspeed C on a Macintosh (probably a Mac II).
One of the things it will incorporate is an on-line version of the intermediate Liddell and Scott Greek Lexicon. I know that he just received the electronic version of this lexicon, though I have no idea how it is stored, indexed, etc. Also, he is using a program written by Neal Smith at Cincinnati which does morphological parsing of Greek words. Even though this does not directly involve natural language processing, some of the techniques which Greg is using may be helpful. He can be reached at: Department of Classics 319 Boylston Hall Harvard University Randall Smith From: MCCARTY@UTOREPAS Subject: References for SGML wanted (16 lines) Date: 19 February 1988, 11:54:03 EST X-Humanist: Vol. 1 Num. 752 (752) ---------------------------- From Leslie Burkholder During the course of the many exchanges on SGML, someone posted some references to introductions to SGML. Could that person, or someone else, send me these references? Thanks. lb0q@andrew.cmu.edu From: MCCARTY@UTOREPAS Subject: Lou Burnard's article (30 lines) Date: 19 February 1988, 11:55:56 EST X-Humanist: Vol. 1 Num. 753 (753) ---------------------------- From PRUSSELL%OCVAXA@CMCCVB I have some observations to make on Lou Burnard's article that I suspect are not unique to my experience. In the article, he alludes to the "private discussions occurring outside the main HUMANIST forum." I have not (until now) contributed to the main HUMANIST forum. I have, however, joined a special interest group (IBYCUS); sent information to and received help from individual members, institutions, and programs; and re-established communication with long lost colleagues on both sides of the Atlantic. HUMANIST has also served as a real-time example in faculty seminars I give on computer networks. HUMANIST's value to me may not be reflected in Lou's impressive statistics, but it goes far beyond the discussions carried on amongst the "top seven." 
Roberta Russell Oberlin College From: MCCARTY@UTOREPAS Subject: Asides, private discussions, and undocumented uses of HUMANIST Date: 19 February 1988, 11:59:11 EST X-Humanist: Vol. 1 Num. 754 (754) I for one would be very interested to hear from people who, like Roberta Russell, have used HUMANIST in ways that do not show up in the public forum. I wonder if the membership would not very much enjoy a report now and then about what HUMANIST has provoked or assisted offline? Willard McCarty mccarty@utorepas.bitnet From: MCCARTY@UTOREPAS Subject: SGML reference (28 lines) Date: 19 February 1988, 13:19:01 EST X-Humanist: Vol. 1 Num. 755 (755) ---------------------------- From amsler@flash.bellcore.com (Robert Amsler) I came across what appears to be an excellent introduction to SGML in the Univ. of Texas at Austin library earlier this week. It is: SGML and Related Issues by Joan Margaret Smith, 1986. It is a British National Bibliography Research Fund Report available from: (1) The British Library Publications Sales Unit, Boston Spa, Wetherby, West Yorkshire, LS23 7BQ, UK or (2) Longwood Publishing Group, Inc., 51 Washington Street, Dover, New Hampshire, 03820, USA. It had both an ISBN number (0-7123-3082-8) and an ISSN number (0264-2972; 22). It was a report on the events leading up to the creation of the ISO SGML standard and seemed quite readable. I don't know what the price is. From: MCCARTY@UTOREPAS Subject: An undocumented use of HUMANIST (23 lines) Date: 19 February 1988, 14:23:57 EST X-Humanist: Vol. 1 Num. 756 (756) ---------------------------- From Lou Burnard One of the many which I failed to find space for in my recent account was the case of a colleague of mine who, having occasion to visit New York at a particularly crucial point during some electronic discussions on JANET, was provided by HUMANIST with an introduction to a BITNET site in NY from which he could continue his negotiations.
On the off-chance, he sent four pleas for help in the morning and received three offers of a temporary BITNET account the same day. New Yorkers are of course famous for their hospitality, but this was beyond the call of duty. It's also worth bearing in mind the next time people start droning on about the dehumanising effect of the computer on personal interaction! Lou From: MCCARTY@UTOREPAS Subject: SGML (me too!) Date: 19 February 1988, 18:34:09 EST X-Humanist: Vol. 1 Num. 757 (757) ---------------------------- From Francois-Michel Lang I too would be greatly interested in some introductory references to SGML. If somebody out there (Bob Amsler?) could send them to me too, I'd be very grateful. Thanks. --Francois Lang Francois-Michel Lang Paoli Research Center, Unisys Corporation lang@prc.unisys.com (215) 648-7469 Dept of Comp & Info Science, U of PA lang@cis.upenn.edu (215) 898-9511 From: MCCARTY@UTOREPAS Subject: Reference on SGML tags for literary documents (20 lines) Date: 20 February 1988, 01:52:36 EST X-Humanist: Vol. 1 Num. 758 (758) ---------------------------- From Peter Roosen-Runge I recently came across a Master's thesis which gave me a good introduction to the key ideas of SGML and what's involved in applying them to the creation of a set of tags for literary documents. An implementation in SCRIPT/VS is discussed, there's a 90-page reference manual as an appendix which gives a clear description of all the elements defined in the "standard" proposed by the author, and she's provided a sample markup and formatted output for a scene from Hamlet. This is rather outside my field, so I can't assess how complete or useful the proposed set of tags would be, but I found the thesis a great help in understanding the recent HUMANIST discussion of SGML issues. The thesis is from 1986 but was recently published as a technical report: Fraser, Cheryl A.
An Encoding Standard for Literary Documents External Technical Report ISSN-0836-0227-88-207 Department of Computing & Information Science, Queen's University: January 1988 From: MCCARTY@UTOREPAS Subject: CD-ROMs, Hypertext, and GUIDE (43 lines) Date: 20 February 1988, 12:28:52 EST X-Humanist: Vol. 1 Num. 759 (759) ---------------------------- From Ken Tompkins I note that the same sort of intensity and conviction has pervaded the discussion of CD-ROMs that was involved in the remarks on Hypertext. Here, we are just in the early stages of considering wider purchases of CD-ROM technology -- we have a modest application in our library accessing the READER'S GUIDE. The various questions posed by readers asking if faculty will use the technology to such an extent that wide purchase can be justified seem terribly important. I suspect that we will recommend an area in our library where various CD-ROM disks will be stored for faculty and student access. I doubt the College will support individual purchases. A recent piece in PC-WEEK suggests that there may still be life in magnetic media. At IBM's Almaden Lab in San Jose, scientists discovered that a 3 1/2-inch disk could store 10 gigabytes. The only limit found in the project was in recording-head technology. Clearly, optical disk technology offers substantial benefits -- stability, etc. -- but it seems equally clear that magnetic media cannot be so easily counted out. One final question. Are any readers experimenting with GUIDE, a hypertext application for PC-AT's? I bought a copy a week ago and am rather impressed with its pedagogical possibilities. To learn how to use it, I'm developing a set of files on a Shakespearean sonnet. Do other readers have practical experiences with the program? If so, I'd like to hear about them. Ken Tompkins Stockton State College From: MCCARTY@UTOREPAS Subject: Tenure-track job in electronic publishing & literature (62 ll.) Date: 20 February 1988, 12:33:08 EST X-Humanist: Vol. 1 Num.
760 (760) ---------------------------- From Ken Tompkins The following position is now open; if you know of anyone fitting the description who would like to develop an Electronic Publishing Track from the ground up, PLEASE HAVE THEM RESPOND to the address below. Also please note my request at the bottom. ************* Faculty Position Open ************** The Literature and Language Program (Department) of Stockton State College seeks an Instructor or Assistant Professor of Applied Communications for a renewable tenure-track position starting September 1, 1988. The candidate should have a Ph.D. or ABD (for Instructor) in the appropriate field. Curricular expertise plus experience in the field of Electronic/Desktop Publishing or related area required. College-level teaching experience strongly preferred. Literature specialization may be British, American, or non-Western. The function of this position is to develop an Applied Communications concentration emphasizing Electronic Publishing and/or Publication Design within the Literature and Language Program. Current salary range: $20,713 -- $28,956 depending on qualifications and experience plus State-mandated fringe benefits. Screening begins February 1, 1988. Send letter of application with CV and direct three letters of reference to: Margaret Marsh Chair, Faculty of Arts and Humanities Stockton State College Pomona, N.J. 08240 AA/EOE Employer -- Minorities and Women encouraged to apply. ********************************** My request: We have advertised this position in the major listings applicable to the Literature component but so far have received no responses. Can members of HUMANIST suggest other outlets more oriented to the electronic publishing component? If you can, please reply directly to me. This is an opportunity to teach literature and to develop a college-wide facility for electronic publishing as well as to establish a concentration for students.
From: MCCARTY@UTOREPAS Subject: Amendment to Ken Tompkins's item (28 lines) Date: 20 February 1988, 13:13:46 EST X-Humanist: Vol. 1 Num. 761 (761) ---------------------------- From David Nash >From Ken Tompkins [...] A recent piece in PC-WEEK suggests that there may still be life in magnetic media. At IBM's Almaden Lab in San Jose, scientists discovered that a 3 1/2-inch disk could store 10 gigabytes. The only limit found in the project was in recording-head technology. February 1988 BYTE (page 11) reports what must be the same story, but has it that the 3.5" disks can hold "10 gigabits". This would be a factor of 8, I guess, less than 10 gigabytes, and really to be compared to the ca. 1 gigabyte hard disks on the market now. "Several industry watchers said they'll get more excited when they see devices that can read those disks..." -DGN From: MCCARTY@UTOREPAS Subject: Why CD-ROM? (24 lines) Date: 20 February 1988, 15:01:32 EST X-Humanist: Vol. 1 Num. 762 (762) ---------------------------- From Mark Olsen The excitement over CD-ROM is in the first two letters. It is a technology that is in place, with existing and rapidly expanding production facilities, and -- if audio CD is any example -- the prospect of rapidly descending prices. Standards in the computing business, as in many others, have less to do with technical excellence than with economic and marketing considerations. You can go out right now and buy a 500-meg HD, but this will always be a limited-market (read: expensive) item. The CD has the benefit of economy of scale that will probably not be rivaled by specialized magnetic media. The IBM-PC standard is a lasting tribute to the power of economics and marketing to ignore technical advances and to retard future technical developments. Mark From: MCCARTY@UTOREPAS Subject: Parsing programs (37 lines) Date: 20 February 1988, 16:36:54 EST X-Humanist: Vol. 1 Num.
763 (763) ---------------------------- From Richard Goerwitz I'm interested in working up a parsing engine geared for the Hebrew Old Testament. Can anyone recommend any relevant articles? This really isn't a query about machine translation, as I am only interested in distinguishing various grammatical categories. It isn't even about machine-assisted translation. My reason for wanting to do this is that I am interested in doing grammatical searches of various sorts on the Old Testament text. Probably articles on Arabic or other Semitic languages will be applicable. I don't know. Maybe someone out there *does*. -Richard L. Goerwitz goer@sophist.uchicago.edu !ihnp4!gargoyle!sophist!goer P.S. Most of what I have done so far is in Icon. But if anyone wants to send sample code, please feel free to do so. I don't mind reading C, SNOBOL4, or some PASCAL (though Icon is definitely my home-field). From: MCCARTY@UTOREPAS Subject: Silent use of HUMANIST (30 lines) Date: 21 February 1988, 11:01:53 EST X-Humanist: Vol. 1 Num. 764 (764) ---------------------------- From R.J.Hare@EDINBURGH.AC.UK Like Roberta Russell, I too have used HUMANIST as an example of the intelligent use of computers to disseminate information, etc. in a series of seminars we give at Edinburgh to the annual intake of English Literature post-graduates. HUMANIST is an impressive example of such usage, and the only way I intend to change this type of usage in the future is to increase the range of people we demonstrate it to. Roger Hare. From: MCCARTY@UTOREPAS Subject: Parsing engine for Hebrew (49 lines) Date: 21 February 1988, 11:04:14 EST X-Humanist: Vol. 1 Num. 765 (765) ---------------------------- From Robin C. Cover In response to the inquiry of Richard Goerwitz on parsing engines for Hebrew: I am also interested, but the last time I investigated this topic I found disappointingly little information. We should ask the Israelis, I'm sure, and in particular Yaacov Choueka.
An important question is whether we want a parser WITH a dictionary or WITHOUT a dictionary; the latter, I'm afraid, would be one big program. Some progress has been made on Greek parsers (Gregory Crane, Harvard; Neel Smith, Bowdoin College; Tony Smith, Manchester) but apparently not for Hebrew. If other HUMANISTS have better news, I'm all ears. One possibility would be to check with Gerard Weil. A student of his once wrote a dissertation on parsing Hebrew by computer, but to my knowledge it was not perfected (Moshe Azar, "Analyse Morphologique Automatique du Texte Hebreu de la Bible," 2 volumes; Nancy 1970). A more profitable lead might be to ask Yaacov Choueka, whose work on lemmatization and disambiguation is highly regarded, and is implemented (I understand) in the software of the Responsa Project, where much of this mammoth corpus of Hebrew is parsed on the fly; see J.J. Hughes in _Bits and Bytes Review_ 1/7 (June, 1987) 7-12. (Also, bibliography in Y. Choueka, "Disambiguation by Short Contexts," _CHum_ 19 (1985) 147-157.) But I doubt that this is portable or obtainable code. I suspect you want a completely rule-based parser, working with a grammar but not a dictionary. If you don't mind a little help from a dictionary, see Walter Claassen's article "Towards a Morphological Analysis of Biblical Hebrew." There is bibliography on this topic in the volume published by the CIB in 1981, _Centre: Informatique et Bible_ (Brepols, 1981); cf. pp. 105-106 on morphological analysis, and pp. 138-140 on lemmatization by computer. Professor Robin C. Cover ZRCC1001@SMUVM1.bitnet From: MCCARTY@UTOREPAS Subject: Bitnet/EARN connection to Univ. of East Anglia (32 lines) Date: 21 February 1988, 18:33:02 EST X-Humanist: Vol. 1 Num. 766 (766) ---------------------------- From Ian Lambert I picked up a message during the last 3 days from someone wanting to know of the Centre for Research in Education at the University of East Anglia. Specifically they were seeking a BITNET or EARN connection.
I logged into our database of hosts here in Canterbury and found only the two references to UEA, and thought that this at least would give our HUMANIST colleague a contact to work from. The numbers are the numerical ids. We have alpha ids here but I am aware that the majority of them are UKC specific, and therefore unlikely to work outside our internal network. The only links we have to the University of East Anglia apparently are: 000008006002 their central VAX; and 000008006003 the Computer Centre. Perhaps their postman can help. Ian From: MCCARTY@UTOREPAS Subject: Icon (52 lines) Date: 22 February 1988, 09:10:38 EST X-Humanist: Vol. 1 Num. 767 (767) ---------------------------- From Richard Goerwitz It occurs to me that there are probably a few people reading Humanist mailings who run all kinds of programs, and yet do not program themselves. For those who would like to, there is an easy way -- learn Icon. Icon is a very high-level, general-purpose programming language that shines when put to text and string-handling tasks. Fortunately, it is also a modern "structured" language, so unlike BASIC, FORTRAN, SNOBOL, etc., it *forces* the programmer into a more structured, procedurally-oriented approach. For me Icon served as an excellent starting point. Before this, I had learned a little assembler (80x86) and some BASIC, but had not really been able to launch into more substantial programming tasks. Though Icon has a very rich and complex syntax, one can master its basic features in two or three weeks. Just a few lines of Icon code, moreover, can do what would take many lines of, say, Pascal or C (especially one with no string-handling facilities). Almost as soon as one begins writing Icon programs, one begins writing useful and even very powerful programs. After learning Icon, one can then move into lower-level languages. It's a little hard to get used to having to tell the compiler how much storage to allot every variable (i.e.
to operate without "garbage collection") but overall, it seems much easier to begin programming in C after Icon than before. I know because I tried. C seemed incredibly cryptic when I had my first run at it, and I gave up. I guess after learning basic techniques with Icon, it all fell into place. I say this not because I'm connected with the creators of Icon in any way. I just wanted to offer a little information that might be useful to those who feel walled off from their machines because of their inability to program them themselves -- either at all, or in a language suitable to the sorts of tasks they want to perform. For the interested, Icon is free (!). Send a note to the Icon project at icon-project@arizona.edu. There's also a book out called *The Icon Programming Language* by Ralph and Madge Griswold (Prentice Hall: New Jersey, 1983). -Richard L. Goerwitz goer@sophist.uchicago.edu !ihnp4!gargoyle!sophist!goer From: MCCARTY@UTOREPAS Subject: East Anglia successfully reached by Arizona (15 lines) Date: 22 February 1988, 09:16:01 EST X-Humanist: Vol. 1 Num. 768 (768) ---------------------------- From John Roper (S200@cpc865.uea.ac.uk) Just to let everybody know that Mark Olsen successfully contacted the University of East Anglia. There is obviously a lot of activity behind the scenes if that request was any guide. From: MCCARTY@UTOREPAS Subject: Assessment of the Oxford Text Archive (39 lines) Date: 22 February 1988, 09:48:59 EST X-Humanist: Vol. 1 Num.
769 (769) ---------------------------- From Judith Proud Lou Burnard's recent requests for ideas and opinions concerning various aspects of the Text Archive and its catalogue (Humanist 9th Dec, 8th Feb and passim) are not (necessarily) an early indication of mid-life crisis, mounting self-doubt or basic insecurity but largely the result of recent funding granted by the British Library to enable an assessment of the past workings and current status of the Oxford Text Archive and the formulation of a realistic policy for its future. This funding has led to my appointment to the project for the period of one year at the end of which I shall be producing a report containing our various findings and recommendations. As Lou has already started to do, with somewhat disappointing results, we would like during the course of the project to throw out a number of general requests for information, opinions and ideas from Humanists who have used the Archive in the past or decided not to use it for particular reasons. Just as important for our research, however, and perhaps of more general interest to most Humanists, are the broader issues involved in the use of Text Archives, a vast area that includes a number of topics that have already been touched on in earlier Humanist discussions and which we hope will continue to be discussed energetically and productively. This is just a preliminary announcement to introduce you to the project, but any initial thoughts or comments would of course be very welcome. Judith Proud Oxford Text Archive From: MCCARTY@UTOREPAS Subject: Thanks and some comments on ICON (31 lines) Date: 22 February 1988, 13:20:55 EST X-Humanist: Vol. 1 Num. 770 (770) ---------------------------- From Mark Olsen First, let me thank all those who offered help in contacting East Anglia -- I received no less than 7 messages, showing that we are a humane lot. Richard Goerwitz's comments on Icon are interesting, for what he says and what he does not say. 
Icon is a sophisticated string-processing language that is a marked improvement on SNOBOL and certainly easier to use for many applications than either C or BASIC. But Goerwitz treats it as a "learning" language rather than as a serious application language. In spite of my positive evaluation of Icon, I too have looked at it and used it in only limited applications. This is because the PD implementation does not have a number of important features that are necessary for serious programming, such as random disk access and a decent debugger, not to mention combined editor/compilers like Turbo Pascal. The basic components of the language, as defined by Griswold and his group, are admirable, but will probably not see very wide use until a software house picks up the language and gives it higher levels of support. Mark From: MCCARTY@UTOREPAS Subject: Hebrew parsing programs (34 lines) Date: 22 February 1988, 13:23:16 EST X-Humanist: Vol. 1 Num. 771 (771) ---------------------------- From John Gleason The Packard Humanities Institute is currently involved in correcting an automated morphological analysis of the Hebrew text of the Old Testament. The original automated analysis was done by: Richard E. Whitaker 300 Broadway Pella, IA 50219 (515) 628-4360 I believe Whitaker did the analysis on an Ibycus system, but I don't know if it was the mini or the micro. He's a knowledgeable Semiticist, and can also give you lots more information than I can on what's being done in this area. The correction of the automated analysis is being done by: J. Alan Groves Westminster Theological Seminary Box 27009 Philadelphia, PA 19118 (215) 572-3808 From: MCCARTY@UTOREPAS Subject: Hebrew Bible parsing (58 lines) Date: 22 February 1988, 13:26:01 EST X-Humanist: Vol. 1 Num. 772 (772) ---------------------------- From Bob Kraft A couple of footnotes to the informative reply by Robin Cover to Richard Goerwitz' query about Hebrew Bible parsing.
I will admit that I'm not sure why it wouldn't be suitable to start with "analyzed or lemmatized texts" from which a lot of flexibility could easily be constructed, but I will leave that for Richard and Robin to enlighten me. My footnotes do in fact refer primarily to analyzed text. Sorry. But they are part of the broader discussion. (1) Several "morphologically analyzed" forms of the Hebrew Bible exist in various stages of verification and availability. Two that we at CCAT were unable to obtain for use in the Septuagint Project (CATSS) are by the late Gerard Weil and his French team, and by Francis Andersen and Dean Forbes in Australia. Early in the game, the results from the Maredsous (Belgium) project were also unavailable, so we commissioned Richard Whitaker to create programs for automatic analysis on the older IBYCUS System in the IBYX language (similar to C). As we made progress on this project, negotiations with the Maredsous project became more favorable, so now the results from the Whitaker programs and from Maredsous are being collated and corrected/verified as necessary by the Westminster Seminary team under Alan Groves. Our desire is to make this material available for scholarly research, and if there is some way that Whitaker's code could be of help, I am willing to investigate the question further along those lines. I know nothing beyond what Robin communicated about the other players in this game, except that Dean Forbes did the programming for the Andersen analysis and could be approached about his code as well. (2) The discussion of Greek analysis programs should make note of the pioneering efforts of David Packard, whose morphological analysis program written for IBM mainframes has been around for a very long time and is described in an article from one of the Linguistic Computing conferences at Pisa.
This program is mounted in various centers, the most recent of which is Manchester (Robin referred to Tony Smith there), which has permission to serve as a center for others to access the program electronically. (3) Analysis programs for other languages also exist, and are sometimes available in one way or another. We were able to obtain DeLatte's Latin program (Liege) for use on projects through CCAT, and doubtless others have developed or obtained similar programs. Perhaps HUMANIST would be a good place to create an inventory of information on such matters, for accessing through the fileserver? Bob Kraft (CCAT) From: MCCARTY@UTOREPAS Subject: XT vs. Mac (31 lines) Date: 22 February 1988, 14:21:42 EST X-Humanist: Vol. 1 Num. 773 (773) ---------------------------- From Maurice Charland I have a modest sum to spend on microcomputers for our PhD program in Communication. My university supports PC-style machines, but does not support Macs. Also, current XT-clone prices are such that they come in approx. $1000 less than Macs. Given this, are there compelling arguments that would favour purchasing Macs? The machines will be used by PhD students and faculty. With the exception of word processing, the possible applications of these machines are undetermined. I do not, in the near future, expect much in the way of computer-based textual analysis (a hot topic currently on HUMANIST). While I gather that, in the abstract, Apple's designs are far better than IBM's, I do wonder whether this difference makes a difference for most applications in the social sciences and humanities. What do fellow HUMANISTS think? Thanks, Maurice Charland Communication Studies Concordia University From: MCCARTY@UTOREPAS Subject: Using the KDEM 4000 for non-Roman alphabets (55 lines) Date: 22 February 1988, 14:23:38 EST X-Humanist: Vol. 1 Num. 774 (774) ---------------------------- From Robin C.
Cover I would like to accumulate some wisdom on the use of the Kurzweil Data Entry Machine (KDEM 4000) for digitizing complex textual materials, especially non-roman scripts. We recently acquired a KDEM 4000, but found the documentation somewhat spartan, and the support personnel (including engineers) have been less than enthusiastic about supplying additional information. By trial-and-error, we have learned some tricks that allow us to tweak performance for tasks that press the scanner to its limits, and perhaps into service for which it was not designed. But I suspect there are rich storehouses of "departmental lore" held at various universities where the KDEM has been used in precisely this way. It would be helpful to know, if we can find out, exactly what kind of intelligence the KDEM 4000 has, how it operates (at the algorithmic level), how its performance can be optimized for scanning multiple-font materials. I will appreciate cooperation from any institutions who would be willing to contribute to this task: documenting undocumented features of optimal KDEM performance. Perhaps veteran operators could be asked to contribute a paragraph or two, or suggestions in list format, describing their most critical discoveries in working with the KDEM. I will be glad to compile these suggestions for redistribution if they are sent to me personally, but I would also like to know via public postings if HUMANISTS at other KDEM sites think this is a worthwhile enterprise. Maybe the KDEM 4000 is "just as smart (or stupid) as it is, and not much more can be said." Finally, does anyone know whether OCR technology is currently being developed by major companies? I understand that Palantir is increasing the sophistication of its scanners by adding more font libraries (including foreign language fonts), but this is hardly a godsend for our applications. 
Much optical scanning technology (as with Recognition Corporation) seems to be focused on bit-mapped images and sophisticated compression algorithms for mass storage, but with less emphasis upon character recognition per se. I'd be delighted to hear that some kind of commercial application is driving development of *intelligent* optical character recognition devices. Wouldn't libraries want this technology? Professor Robin C. Cover ZRCC1001@SMUVM1.bitnet 3909 Swiss Avenue Dallas, TX 75204 (214) 296-1783 From: MCCARTY@UTOREPAS Subject: Teaching with hypertext/media conference (45 lines) Date: 22 February 1988, 18:59:20 EST X-Humanist: Vol. 1 Num. 775 (775) ---------------------------- From elli@husc6.BITNET (Elli Mylonas) I am posting this for a friend. Please pass it on to local bulletin boards, and to others who may be interested. --Elli Mylonas CALL FOR PAPERS It has long been predicted that the advent of hypermedia will have a dramatic impact on education. Now, particularly since the introduction of Apple's Hypercard in August, 1987, hypermedia is becoming widely available to educators for the first time. What effect is hypertext having on pedagogy? "Teaching with Hypertext" will bring together teachers from a wide variety of disciplines, both within and outside of academia, to consider how easily available, cheap hypermedia is influencing them and their students. Desirable topics for papers include examples of applications of hypertext in both formal classroom settings, and independent or less structured learning environments; sound and video applications with hypertext; the impact of hypermedia on form and pace of curriculum; and the relationship between traditional hard-copy learning resources and hypermedia. 
Please send a 500-word abstract by Friday, March 15 to: Neel Smith Department of Classics Bowdoin College Brunswick ME 04011 or dsmith@wjh12.harvard.edu From: MCCARTY@UTOREPAS Subject: HUMANIST's preoccupations (31 lines) Date: 22 February 1988, 19:01:34 EST X-Humanist: Vol. 1 Num. 776 (776) ---------------------------- From Sebastian Rahtz I found Lou Burnard's report fascinating; pity he didn't supply it marked up in SGML, but wotthehell.... anyway, can I add two further observations to his remarks on trends? The first is serious: HUMANISTs are predominantly literary, and there is a high proportion of classicists amongst us (I include biblical scholars in that), far more than any normal average. For one reason or another, the discussion does not often cover history, music, archaeology, fine art, languages etc; is this because these subjects are covered in other ways or because literary types are naturally argumentative? I suggest that HUMANIST could become a ghetto one day... The second point is trivial, and prompted by a lunch-time conversation with a fellow HUMANIST here (there aren't many of us in Southampton...); why aren't more contributions funny? OK, the world's a miserable place, and we are all desperately trying to get on in the rat race, but computers are not THAT important.... sebastian rahtz From: MCCARTY@UTOREPAS Subject: ICON (37 lines) Date: 22 February 1988, 19:03:56 EST X-Humanist: Vol. 1 Num. 777 (777) ---------------------------- From Sebastian Rahtz I was interested in Richard Goerwitz's promotion of Icon (sorry, Richard, my mailer *refuses* to talk to you, by the way), and Mark Olsen's rebuttal. I am afraid I cannot share Richard's claim that Icon is easy to learn; I enjoy using it, as I used to use Snobol and after that it seems like the ideal language, but when I tried to teach it to 1st year students last year, they floundered. It is TOO rich, has too many features, for easy comprehension (sounds like ADA!).
But equally, I think Mark is being unfair; why does he *want* random-access files for the tasks Icon was designed for? Its tracing facilities are good enough for the casual punter, who never uses a debugger anyway (well, I never have and never want to, nor do I know any 8x86 assembler). It ISN'T a system language; it's a utility, exploratory, prototyping language. I do concur that a Turbo-style environment would be lovely, but there are still people out there who don't use PCs, y'know... to me, the major criticism of Icon as a daily language is that it can't create pure executables in the current version (nor is such a thing planned, so far as I know). This makes it awkward to give programs to friends. Mark could always add random-access functions himself (at least in the Unix version) by creating a personalised interpreter. sebastian rahtz From: MCCARTY@UTOREPAS Subject: Job announcement (44 lines) Date: 22 February 1988, 19:05:42 EST X-Humanist: Vol. 1 Num. 778 (778) ---------------------------- From Nancy Ide Job Announcement Instructor in Computer Literacy Vassar College The Department of Computer Science seeks applicants for an Instructor in Computer Literacy beginning September 1988. Candidates holding a doctorate will be given highest priority, though others who demonstrate appropriate experience or credentials will be considered. Prior teaching experience in computer literacy is strongly desirable. Candidates should have a broad familiarity with computer literacy issues and an in-depth knowledge of microcomputing in a Macintosh environment, including Pascal programming, word processing, spreadsheets, graphics and similar software applications. Experience with a VAX/VMS computing system is a plus. The teaching load is five courses per year. Vassar is a coeducational liberal arts college of 2,250 students located in the Hudson Valley, approximately ninety minutes north of New York City.
The Department of Computer Science consists of five full-time faculty members. Vassar is a charter member of the Carnegie-Mellon InterUniversity Consortium for Educational Computing and has participated in numerous national projects in educational computing for liberal arts colleges. Salary is dependent upon qualifications. Candidates should send a resume, transcript(s), three letters of recommendation, and a letter stating teaching interests to: Dr. Martin Ringle, Chairman, Department of Computer Science, Vassar College, Poughkeepsie, NY 12601. Application closing date is March 30, 1988. An Equal Opportunity/Affirmative Action Employer. Women and minority candidates are encouraged to apply. From: MCCARTY@UTOREPAS Subject: More on ICON (46 lines) Date: 22 February 1988, 19:53:42 EST X-Humanist: Vol. 1 Num. 779 (779) ---------------------------- From Richard Goerwitz Mark Olsen is correct in saying that a decent debugger would be nice for Icon. I, however, have found little trouble using it without one. I'm not claiming to be a hot-shot programmer. In fact, I would say quite the opposite. Still, I can often crank out huge amounts of nearly error-free Icon in a few hours. It's really amazing how easy programming in Icon is. I say this because, if I had been thinking about learning to program, I would probably have been turned away from Icon by Mark's comments. This would, at least in my case, have been a very drastic error. Version 7 of Icon has much better debugging facilities. As for random disc access, please note the new seek function in version 7. Clearly Icon does not offer low-level hardware control, but you can now at least go to whatever line you want in a file. Combination editor-compiler packages are nice, I think. However, most of the time compilers are not set up this way. Turbo Pascal and a few other packages stand out as notable exceptions. The lack of such facilities would not discourage me from learning a particular programming language!
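The kind of random access Richard has in mind - jumping straight to a given line with a seek - can be sketched in Python rather than Icon; the sample text and helper names below are invented for illustration and stand in for nothing in Icon's own library:

```python
import io

def line_offsets(f):
    """One sequential pass, recording the byte offset at which each line starts."""
    offsets = [0]
    for line in f:
        offsets.append(offsets[-1] + len(line))
    return offsets[:-1]  # drop the offset that points past end-of-file

def read_line(f, offsets, n):
    """Random access: seek straight to line n (0-based) without rescanning."""
    f.seek(offsets[n])
    return f.readline()

text = io.BytesIO(b"alpha\nbeta\ngamma\n")
offsets = line_offsets(text)
print(read_line(text, offsets, 2))  # prints b'gamma\n'
print(read_line(text, offsets, 0))  # prints b'alpha\n'
```

The one-time index costs a full scan, but after that any line can be fetched in a single seek - the same trade a concordance program makes.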
I say this not to get into any dispute with Mark. Actually, he points out some very important things to keep in mind when looking at Icon. I just wanted to point out that, having taught myself to program, I have found Icon an excellent choice. You might argue that I am just so fantastically intelligent that any way I had approached this task would have worked out. This, however, would be completely false, since I am just an average Joe who wanted to start doing things for himself.... -Richard L. Goerwitz goer@sophist.uchicago.edu !ihnp4!gargoyle!sophist!goer From: MCCARTY@UTOREPAS Subject: Sebastian's points (32 lines) Date: 22 February 1988, 20:06:03 EST X-Humanist: Vol. 1 Num. 780 (780) Sebastian, my friend, I recall that the last time we tried to be funny, or perhaps were funny, our unfortunate colleagues in New Zealand, who were paying to read our flames about flaming, objected. (They are gone now, I hope only temporarily, and they are gone *because of* the cost.) The real reason for so little humour amongst us, however, may be that being funny, or delightfully imaginative, is very hard! Being serious, without wit or irony, is much easier. It may also be only appropriate to the kind of forum we have, though I hope not. As for the literary bias of HUMANIST, I disagree. If literary types were in the majority of those who hold forth, then I'd guess that we'd have heard more, for example, on the subject of software for problems of literary scholarship. We do hear a fair bit about linguistic problems, however. It would be very interesting to have a Burnardian analysis of the backgrounds of our members, at least as far as the biographies would reveal them. Is anyone willing to do that? (Some of you will remember that such an analysis was one of the original aims of the Special Interest Group for Humanities Computing Resources, of which HUMANIST was the first public expression.)
Willard McCarty mccarty@utorepas [bitnet] From: MCCARTY@UTOREPAS Subject: Iconography (40 lines) Date: 23 February 1988, 00:14:01 EST X-Humanist: Vol. 1 Num. 781 (781) ---------------------------- From Mark Olsen Don't get me wrong. I like Icon and would use it over SNOBOL4+, save that the support levels for a PD language have to be minimal. One can't expect the kind of support and development for Icon that a commercially supported product receives. What this really boils down to is a plea for a commercially supported version of Icon, one that can be used for writing larger, more complex applications. I'll have to look at version 7 -- I think I'm back at 6.n -- to see what improvements have been added. Sebastian is quite right that I am being unreasonable about wanting random access disk files (and other goodies) in a language that is designed to be something else. This is because I am simply too damned lazy to try to implement the kinds of things that SNOBOL and Icon have built in -- associational arrays, pattern matching, etc. -- in another language in order to get standard services like random disk i/o and faster i/o. The mere thought of the number of pointers required to implement an Icon table gives me gas pains. So I write most of my analysis programs in SNOBOL4 and wish that SOMEBODY ELSE would do the dirty work. As for a debugger, I had a student writing a program in Icon who managed to bomb the system when he tried to call a procedure with several large parameters. Only after MUCH experimentation was he able to detect the problem. More effective debugging would help there. There is enough good about Icon to lament the fact that it is supported only by volunteer effort. I suspect that I am not the only one in the world who uses these "prototyping" languages for more serious applications rather than try to write similar programs in other languages.
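For readers who have not met an "associational array": the table Mark describes is what a Python dictionary (like an Icon or SNOBOL4 table) gives for free, with all the pointer bookkeeping hidden. A minimal sketch, with an invented sample text:

```python
from collections import defaultdict

def word_table(text):
    """The 'associational array' idiom: a table keyed by word, counting occurrences."""
    table = defaultdict(int)       # missing keys spring into existence at 0
    for word in text.lower().split():
        table[word] += 1
    return dict(table)

counts = word_table("the cat saw the dog")
print(counts)  # {'the': 2, 'cat': 1, 'saw': 1, 'dog': 1}
```

Implementing the same thing in, say, Pascal means writing the hash table - the pointers Mark would rather not think about - by hand.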
Mark From: MCCARTY@UTOREPAS Subject: Dangers of Ghetto Mentality (32 lines) Date: 23 February 1988, 00:15:18 EST X-Humanist: Vol. 1 Num. 782 (782) ---------------------------- From ROBERT E. SINKEWICZ I was very pleased to read Sebastian's comments and I quite agree that HUMANIST is leaning very heavily in the direction of becoming a ghetto for those whose primary interest is in text-oriented computing. I would guess that this is in part a reflection of the way in which the humanities have been segregated in specialized departments that very often do not talk to one another. I believe that HUMANIST would be failing in its purpose if it does not contribute to overcoming this sort of compartmentalization of knowledge. To take only one example, as best I can recall there have been no more than four or five references to the research possibilities of relational databases since last August. And yet relational databases and their statistical counterparts are becoming increasingly common in historical studies. Where are all the historians, archaeologists, and even the librarians??? Perhaps Toronto is unusual in having several major research projects that are heavily computerized and adopt a more interdisciplinary approach to their work. For example, REED - Records of Early English Drama (not just the texts but any and all information about their historical, social and economic context); ATHENIANS - a who-was-who in old Athens; DEEDS - statistical analysis of the medieval charters of Essex county; GIP - Greek Index Project (everything you want to know about Greek manuscripts anywhere in the world). And this is only the short list. Where are all the other HUMANISTs who are doing interesting things with information databases, building such interesting tools as PROLOG expert systems to analyze data? I have nothing against texts. I read a lot of them, have edited a few of them (Byzantine texts), but I do insist that texts have contexts - historical, social, economic, etc.
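The relational possibilities Robert points to can at least be gestured at in a few lines. The charter table below is an invented miniature of a DEEDS-style dataset, sketched with Python's built-in SQLite rather than any system the Toronto projects actually use:

```python
import sqlite3

# An invented miniature of a charter database: year of grant and grantor.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE charter (year INTEGER, grantor TEXT)")
con.executemany("INSERT INTO charter VALUES (?, ?)",
                [(1180, "de Vere"), (1185, "de Vere"), (1202, "Bigod")])

# A typical historian's question: how many grants survive per grantor?
rows = list(con.execute(
    "SELECT grantor, COUNT(*) FROM charter GROUP BY grantor ORDER BY grantor"))
print(rows)  # [('Bigod', 1), ('de Vere', 2)]
```

The point is that the question is posed declaratively; the same table answers many such questions without a new program for each, which is exactly what concordance-style software cannot do.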
Robert Sinkewicz (ROBERTS@UTOREPAS) Pontifical Institute of Mediaeval Studies University of St. Michael's College Toronto From: MCCARTY@UTOREPAS Subject: Solomon's solemn silence (31 lines) Date: 23 February 1988, 00:26:13 EST X-Humanist: Vol. 1 Num. 783 (783) ---------------------------- From Norman Zacour I cannot speak about those interested in "music, archaeology, fine art etc.", but if we lack contributions about computing and history it may be because most historians who deal with personal computers are primarily interested in word processors and data processors, both highly developed in the business world and therefore leaving little for the academic to get worked up about. It's the newness of some things that leads to excitement: one can appreciate and welcome the advent of WordPerfect 5.0, which with its competitors will allow the production of oceans of books, but it lacks the explosive excitement of invention, like that of Mr. Crapper in the 19th century. Rather it is all very serious business. It was Mark Twain, though I wish it had been I, who, in discussing our profession, said that many things do not happen as they ought, most things do not happen at all, and it is for the conscientious historian to correct these defects. It's a terrible burden, being conscientious, and leaves little room for frivolity. I do not know where Twain said it, and so I follow notices to HUMANIST about marking up text, hoping that when the entire corpus of biblical, Greek and Latin literature has been thoroughly annotated, perhaps then... Norman Zacour (Zacour@Utorepas) From: MCCARTY@UTOREPAS Subject: Thanks; Dictionaries (83 lines) Date: 23 February 1988, 00:31:35 EST X-Humanist: Vol. 1 Num. 784 (784) ---------------------------- From Richard Goerwitz Thanks for the many replies on Hebrew parsing programs. I will have a fair number of leads to follow up over the next few weeks. I promise to post results - should any be forthcoming.
This is perhaps as good a time as any to clear up an apparent misunderstanding. Some individuals took it to mind that I was opposed to the idea of a dictionary. This was probably based on my aversion to lemmatized or otherwise analyzed text. The two are not the same. I'd like to explain why. Let me back up and explain first why I want to find out about parsing methods. Basically, I would like to do more than simply look for patterns (grepping, say) - more even than looking for keywords, as in a lemmatized concordance. I want to locate syntactic and other sorts of patterns. There's no sense offering Hebrew examples here. Too many people on HUMANIST have other areas of expertise. Let me point out, though, that "grammatical" searches are something almost anyone involved in linguistics or some form of language study would find useful. To facilitate such searches, some folks have apparently lemmatized texts (i.e. added dictionary entry keys to them). Others have actually separated out morphemes or labeled words as to their syntactic category. This is a good thing to do. I make no bones about it. However, one must keep in mind that programs which use these texts are not really parsing. They may be doing some parsing. The real grammatical analysis, though, has already been done by the editor of the text. In my mind, this offers several distinct disadvantages. First of all, what if the editors change their minds about something? They have to re-edit the entire text, and then redistribute it. Worse yet, what if a user doesn't agree with the editors' notions of morpheme boundaries, of grammatical categories, or of syntactic structure? He basically has to ignore the information provided, or else try to weed out whatever he can use. For Semiticists, let me offer some examples (I'm talking mostly Hebrew, some Aramaic). Do we call a participle an adjective or a noun? Do we include a category "indirect object" (yes, says Kutcher; no, say I).
Are infinitives absolute to be classified as adverbs, nouns, verbs, or what (sometimes one or another of these classes will be better)? The problem we are getting into here is that in Hebrew, a word's class will depend partly on morphology, and partly on its external syntax. It's not like Latin, where the morphology will pretty much tell us a word's class. Nor is it like English, where external syntax or the lexicon will usually tell us this information. Whether we adopt a traditional, morphological approach to classification (the "classical" tradition - which likes to impose Greek or Latin categories on languages where it is completely inappropriate), or a broader one, will be terribly subjective. My feeling, therefore, is that if I can, I'd like to find out if a true parser is possible for biblical Hebrew. Sure, I'll use analyzed text if I need to. I want to know, however, if the other alternative is - even dimly - feasible. Now, about the dictionary: I have no objection to one. Certainly native speakers of any language will memorize a great deal. So why shouldn't our parsing programs? If I have to lexicalize tense distinctions like "go" and "went," why not let my parser lexicalize them, too? I also have no objection to front ends to parsers, which have lists of the most frequent words handily pre-analyzed. Using such a module cuts down the amount of work a parser has to do, while not crippling it in any way. In sum, then, I don't at all object to tables, dictionaries, etc. Nor do I object to analyzed text. It's just that in the latter case, I'd like to try to see whether I can get along without it. Again, if anyone knows of any relevant articles on Semitic languages (like, say, the recent one on Arabic in Computers in Translation), please drop me a line! Many thanks again. -Richard L.
Goerwitz goer@sophist.uchicago.edu !ihnp4!gargoyle!sophist!goer From: MCCARTY@UTOREPAS Subject: Hypertext, on-line dictionaries (80 lines) Date: 23 February 1988, 00:35:00 EST X-Humanist: Vol. 1 Num. 785 (785) ---------------------------- From elli@husc6.BITNET (Elli Mylonas) >This is partly in response to Joe Giampapa's question about >on-line dictionaries for natural language processing and partly a >description of a hypertext system which HUMANISTS may be >interested in knowing about. Since the Perseus Project was mentioned by Randall Smith in a recent posting, I would like to clarify and add to his description of the project. >Greg Crane at the Harvard University Classics Department is >working on a model Greek hypertext system called "Perseus." This >model, when complete, will have a Greek text linked to an >_apparatus criticus_, dictionary, grammar, metrical scansion, and >commentary for teaching purposes. The Perseus Project is collecting visual and textual data on Classical Greece and putting it together in such a way that it may be used for teaching and research. The project is taking place both at Harvard and at Boston University. The textual data will consist of Greek texts, translations, meter, app. crit. and notes. The latter two items may not be provided for every text, but will certainly be available for a basic canon. (Don't jump on me, we will be asking professors what they consider canonical for inclusion.) The system will also include a Greek lexicon (the Intermediate Liddell-Scott Lexicon) and a classical encyclopedia. All this will be linked together to allow access from one part of the database to the others, and to allow a user to look up a word while in a text, or to see a picture or a map that is relevant. By the way, all our texts will be encoded using content markup in accordance with the SGML standard, so that they may be as machine independent as possible.
>As far as I know this work is >being done using Lightspeed C on a Macintosh (probably a Mac II). We are working on Macintoshes, it is true. We are developing material on Mac IIs, but most of it will run on a Mac Plus, and the parts that won't will tell you that they don't. We are using HyperCard, however, so that we can avoid writing software as much as possible, and concentrate our efforts on the content and how to organize it. The coding that must be done in-house is being done with Lightspeed. >One of the things it will incorporate is an on-line version of >the intermediate Liddell and Scott Greek Lexicon. I know that he >just received the electronic version of this lexicon, though I >have no idea how it is stored, indexed, etc. We do have the online version of the intermediate L&S. It was keyboarded in the Philippines and approximately half of its cost was defrayed by PHI. It is at the moment in alpha code with very little markup, although we are in the process of marking it up. The first task is to get as much of the morphological information out of it as possible, in order to feed that to the parser (see below). We plan on storing this in descriptive markup also. However, we are aware of the difficulty of parsing meaningful content elements out of a complex document which contains almost only format information. >Also, he is using a >program written by Neal Smith at Cincinnati which does >morphological parsing of Greek words. Even though this does not >directly involve natural language processing, some of the >techniques which Greg is using may be helpful. The morphological parser, appropriately named Morpheus, was begun by Neel Smith while he was at Berkeley, but later rewritten in C and finished by Greg Crane at Harvard. It is now finished, but still needs information for its rules in order to be useful. This will be supplied from the Middle Liddell. Morpheus has been tested in beginning Greek classes at Chicago and Claremont.
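The pre-analyzed front end Richard Goerwitz described earlier - a table of frequent forms consulted before the parser proper runs - might be sketched like this; the forms, tags, and fallback rule are all invented for illustration, and stand in for nothing in Morpheus:

```python
# Invented front-end table of frequent, pre-analyzed forms.
LEXICON = {
    "went": ("go", "verb-past"),
    "the": ("the", "article"),
}

def naive_parse(form):
    """Stand-in for a real morphological parser (one invented rule)."""
    if form.endswith("s"):
        return (form[:-1], "noun-plural")
    return (form, "unknown")

def analyze(form):
    """Consult the pre-analyzed table first; parse only on a miss."""
    return LEXICON.get(form) or naive_parse(form)

print(analyze("went"))   # ('go', 'verb-past') -- table hit, no parsing done
print(analyze("books"))  # ('book', 'noun-plural') -- parsed on a table miss
```

Because the table is consulted first, the common, irregular forms (like "went") never reach the parser, while anything unlisted still gets analyzed - which is exactly the division of labour Richard described.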
I have given a *very* brief description of a large project. Many details and facts have been glossed over and left out. If anyone would like more information, please drop me a note so I can go into more detail, or send you our written information. --Elli Mylonas Research Associate, Principal Investigator Perseus Project elli@wjh12.harvard.edu From: MCCARTY@UTOREPAS Subject: References for SGML wanted & Author/Editor (17 lines) Date: 23 February 1988, 00:36:22 EST X-Humanist: Vol. 1 Num. 786 (786) ---------------------------- From elli@husc6.BITNET (Elli Mylonas) I would like to recommend an article that explains why descriptive markup like that prescribed by the SGML standard is to be recommended. It describes other forms of markup and compares them to descriptive markup. It was written by three HUMANISTs, but I think that they will not mention it, so I will. Coombs, J. H., Steven J. DeRose and Allen H. Renear, "Markup Systems and the Future of Scholarly Text Processing," Communications of the Association for Computing Machinery, Nov. 1987, pp. 933-947. An addendum: I will be getting a copy of SoftQuad's Author/Editor program, and will post a review to the mailing list as soon as I have had a chance to take a look at it. --Elli Mylonas elli@wjh12.harvard.edu From: MCCARTY@UTOREPAS Subject: A ghetto of what kind? (40 lines) Date: 23 February 1988, 00:41:11 EST X-Humanist: Vol. 1 Num. 787 (787) In replying to Sebastian's warning about the "literary" ghetto that we may find ourselves in, Bob Sinkewicz says, I have nothing against texts. I read a lot of them, have edited a few of them (Byzantine texts), but I do insist that texts have contexts - historical, social, economic, etc. This I applaud. This was the substance of my point some weeks ago, when I speculated that New Criticism and similar movements in and outside academia have caused us to take a very narrow view of what humanistic software we might want.
Perhaps, as someone said, concordance software is easier to write than database software, but that does not adequately explain our more general preoccupation with The Text as an isolated object of study, perhaps even of idolatry. It's not a *literary* preoccupation that immures us in a narrower space than some would wish to inhabit but a particular and historically provincial view of what text is all about. Any literary critic worth the name (I say with no humour or subtlety) will be interested in some context or other, and this context necessarily has a history, or several histories. Database software is what one uses to keep track of such things. I'm not saying that people interested in the workings of the language on the page are wrongheaded but that the relative poverty of other kinds amongst computing humanists is not a good sign. Or have I read the innards of my chicken incorrectly? Willard McCarty mccarty@utorepas From: MCCARTY@UTOREPAS Subject: HUMANIST's ghetto (36 lines) Date: 23 February 1988, 09:05:02 EST X-Humanist: Vol. 1 Num. 788 (788) ---------------------------- From Joe Giampapa I would like to call attention to an outstanding individual who began "humanities computing" way before computing made it out of the realm of the more technically oriented sciences. George Cowgill, a faculty member of the Brandeis Department of Anthropology, began a computerized reconstruction of either Tenochtitlán or Teotihuacán (or both?) years ago, using stacks of punch cards. From what I hear and have read, he has achieved considerable success with his project, and has contributed greatly to the advancement of archaeological methods of site reconstruction. I do not know what state his project is in now -- it might be "completed" (if ever one of these proportions could be completed) -- but it is at least off those darn punch cards. I am sure he would be able to provide a clearer and more thorough explanation of his work, if anyone asked.
For those interested in contacting him, they can send mail to him directly at cowgill@brandeis.bitnet If, in the weird event that someone cannot get mail to him, I can serve as a message forwarder. Joe Giampapa giampapa@brandeis.bitnet or garof@brandeis.csnet or giampapa@cogito.mit.edu From: MCCARTY@UTOREPAS Subject: Cumulative Kurzweil lore (28 lines) Date: 23 February 1988, 09:45:47 EST X-Humanist: Vol. 1 Num. 789 (789) ---------------------------- From Susan Hockey In summer 1985 there was a proposal to form a user group of academic Kurzweil users. I think this proposal came from the University of South Carolina. I have heard nothing more of it since and would be interested to know if it ever got off the ground and, if not, whether there is enough interest now. The cumulative wisdom of five years' Kurzweil usage at Oxford can be found in my article 'OCR: The Kurzweil Data Entry Machine', in Literary and Linguistic Computing, 1 (1986), 61-67. This article describes the Kurzweil Data Entry Machine, not the Kurzweil 4000, but the recognition algorithms are the same. The main differences between the two machines are in the user interface and the fact that training and production are separate tasks on the KDEM, but not on the Kurzweil 4000. Susan Hockey SUSAN@VAX.OXFORD.AC.UK From: MCCARTY@UTOREPAS Subject: Italian computational musicology (388 lines) Date: 23 February 1988, 09:54:03 EST X-Humanist: Vol. 1 Num. 790 (790) [Lelio Camilleri, HUMANIST and Professor of Computer Music, Conservatory of Music L. Cherubini, in Florence, Italy, has submitted the following report in response to the recent concerns about the narrow specialization of HUMANIST. Bene! -- WM] ----------------------------------------------------------------- From Lelio Camilleri MUSICOLOGY AND THE COMPUTER IN ITALY A report on the current activity and the educational implications (Forthcoming in MUSLETTER, I, 3) Lelio Camilleri Conservatorio di Musica L.
Cherubini Piazza delle Belle Arti 2 I-50122 Firenze E-mail address: CONSERVA@IFIIDG.BITNET INTRODUCTION The Italian situation of computer-assisted research in musicology is still in development, even though there are some centers which have carried out permanent research activity in this field. In fact, although in Italy several centers work on computer music and an Italian Association for Computer Music (Associazione Italiana di Informatica Musicale, AIMI) has been founded, the various activities are mainly focused on composition, sound synthesis or hardware/software development rather than on musicology, music theory or analysis research. The year 1987 registered an increasing interest in the use of the computer for musicological purposes. A two-day seminar, sponsored by the Istituto di Studi Rinascimentali of the University of Ferrara, was held in March to discuss the use of the computer to create databases or information retrieval systems for musical studies. Another important international meeting was held in Bologna, co-sponsored by the University of Bologna and the Center for Computer Assisted Research in the Humanities, Menlo Park, during the International Musicological Society Conference. In this meeting (Selfridge-Field forthcoming) the participants discussed two main subjects: musical data and music analysis. The research activity on music theory and musicology in Italy can be divided into two parts: - research in music theory, analysis and music analysis/theory software development; - realization of musical databases. RESEARCH IN MUSIC THEORY AND ANALYSIS The research work in music theory and analysis can be summarized in the following way: research projects using the computer as a tool for testing musical theories or hypotheses (these works are mainly based on the notion of musical grammar), and research focused on the realization of music analysis software and its use in musicological work. The works of Baroni et al.
(1978, 1983, 1984) and of the present author (Camilleri 1985) can be classified under the first aspect of computer use in musicology. The research work of the group of Baroni and Jacoboni, University of Bologna and Modena, is the first of its kind in Italy and one of the most interesting projects. It deals with the definition of a grammar for melody and is devoted to the examination of four different but related melodic repertoires. The first and most advanced project is concerned with the melodies of the Bach Chorales. The second project analyzes all four parts of the J.S. Bach Chorales. Finally, the third and the fourth respectively deal with a repertoire of one hundred French melodies taken from a collection of popular chansons published in 1760, and Legrenzi's cantatas. The ultimate goal of these research projects is to identify the structural principles of what we define as a melody. The methodology used is based on the formalization of a grammar, or set of rules, which should describe, by means of concepts like kernel, transformation and primitive phrase, the hierarchical structure of a melody. This grammar is then implemented in a computer program to generate melodies whose stylistic pertinence to the original model serves to verify the theoretical assumptions. The assumption is that there are general rules for melody found in all the repertoires, as well as rules belonging to a well-defined musical style. My work, carried out at the Conservatory of Music of Firenze and the Musicological Division of CNUCE, starts from similar methodological grounds. The goal of this project is to define some high-level structural features of melody which should also have psychological implications. The main hypotheses concern the perfection and imperfection of phrases, the existence of a kernel, and the concept of melodic contour hierarchically related to a particular structural level. The popular melodies of northern Europe have been chosen as the model.
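The generate-and-test methodology described here can be illustrated with a deliberately toy rewrite grammar; the rules and note alphabet below are invented for illustration and are not Baroni's or Camilleri's:

```python
import random

# Invented toy rules: a phrase is a kernel, optionally followed by an ornament.
RULES = {
    "phrase": [["kernel"], ["kernel", "ornament"]],
    "kernel": [["C", "E", "G"], ["C", "D", "E"]],
    "ornament": [["F", "E"], ["A", "G"]],
}

def generate(symbol, rng):
    """Expand a symbol by its rewrite rules until only note names remain."""
    if symbol not in RULES:
        return [symbol]          # a terminal: an actual note
    melody = []
    for part in rng.choice(RULES[symbol]):
        melody.extend(generate(part, rng))
    return melody

melody = generate("phrase", random.Random(0))
print(melody)  # a 3- or 5-note melody beginning on C
```

The verification step in the research is then human: generated melodies are compared against the repertoire, and rules are revised until the output is stylistically plausible.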
In this work, too, the computer has been used to check the various theoretical hypotheses by means of the implementation of the formalized rules in a program and the subsequent automatic generation of melodies. Other research projects related to this kind of computer use are those of Camilleri and Carreras for the realization of an expert system, based on ESE, for musical segmentation and tonal harmonic analysis, and of F. Giomi and M. Ligabue (Ligabue 1985, 1986) for the analysis of jazz improvisation. The first research project is still in the developmental stage, concerning the realization of a theoretical model based mainly on the work of Schenker, Lerdahl and Jackendoff, and Keiler. It is also based on the research work of one of the authors on musical cognitive processes (Camilleri 1987). At the present time, a model of musical segmentation (Camilleri forthcoming) is nearing completion and the testing stage is starting. The work of F. Giomi and Ligabue, two associates of the Musicological Division of CNUCE at the Conservatory of Music of Firenze, is based on methodologies similar to those of Baroni's and Camilleri's work. A system of rules to model harmonic/melodic jazz improvisation has been formalized and implemented in a software tool. The software is integrated with the TELETAU system and provides sound output by means of the TELETAU sound facilities. The jazz software also supplies an interactive part in which the user can specify a harmonic path, melodic scale and other musical parameters so as to investigate the various aspects of jazz improvisation. Software for music analysis has been realized at the Musicological Division of CNUCE (Camilleri et al. 1987), Florence. The programs currently available at the Musicological Division of CNUCE and the Conservatory of Music L.
Cherubini, Firenze, fall into two categories: those which use quantitative and statistical techniques to supply information on the surface structure of musical pieces (recurrence analysis, thematic analysis) and those which allow a deeper analysis of a piece's structure and reveal the hierarchical relations among its parts (Schenkerian analysis, pitch-set analysis). The two sets of programs may be considered complementary in that they produce information which allows a more complete understanding of the piece from different points of view. Some of these programs are integrated with the TELETAU system. The TELETAU system (Nencini et al. 1986) can also be viewed as a tool for musicological work. It is a mainframe-based system which supplies sound output by a special MIDI/RS232 interface. The system's features of musicological interest include a musical encoding language and several commands to decompose and process the musical data in a very flexible way. TELETAU is also accessible through the BITNET-EARN network. The Laboratorio di Informatica Musicale, University of Milano, is pursuing research on the description of musical processes by means of Petri nets, and a study of orchestration. The description of musical processes uses the notions of hierarchy, concurrency and causality (Camurri, Haus, Zaccaria 1986). The other research work aims at describing, by Petri nets, the rules of orchestration of a particular composer. Sound output is provided to verify the correctness of the orchestration realized. Two software realizations for analytical purposes have been carried out by L. Finarelli, University of Bologna, and W. Colombo, University of Milano, as dissertation work. The software developed by L. Finarelli is based on some elementary analysis procedures, like those of the first category of the Musicological Division of CNUCE software, which should serve to complete a sort of score database. The goal of the system realized by W.
Colombo is to develop a set of programs for tonal harmonic analysis based on Schoenberg's theory of regions. The software allows the user to scrutinize the harmonic skeleton of a piece and to bring out the belonging of a chord succession to a particular region. The two works just mentioned use micro or personal computers. Finally, P. de Berardinis (1983, 1984, 1985), Studio di Sonologia Computazionale "E. Varese", Pescara, has realized analysis software for atonal music which uses the pc-set theory of A. Forte. The software package runs on the Apple II computer. A kind of computer application to music theory can be found in the work of the research group of the Department of Computer Science, University of Salerno (D'Ambrosio, Guercio, Tortora 1984). They have realized a formalism to represent musical texts and rules in a grammatical fashion, which is implemented in a package of programs. This approach has the goal of building a system with the capacity to elaborate musical texts automatically. Another work of musicological interest is that of Barbieri and Del Duca (1986), who use the computer to demonstrate the microtonal tuning systems used by Vicentino, Colonna, Sabatini, and Buliowski. MUSICAL DATABASE REALIZATION The realization of musical databases is the aim of several projects, some of which also concern other artistic fields such as poetry and theater, and educational or printed musical sources. The first project started four years ago at the University of Bologna and is carried out by the people working on the research on the grammar for melody at the same university. This project deals with the realization of a system which allows one to handle information about the Emilian libretti of the XVII and XVIII centuries (Baroni et al. 1987). The data of each libretto - composer, librettist, title, year, place, performers - are encoded on a personal computer.
An information retrieval system makes historical discoveries and cross-analysis of the data possible. The data of about 4000 libretti have been encoded so far. Another project is carried out in collaboration among several institutions: the Universities of Ferrara, Rome, and Pisa, and the University of California at Berkeley. It deals with the Italian lyric poetry of the Renaissance in musical and literary prints. The aim is to build a data base of 100,000 poetic texts, for publication information and for the retrieval of "lost" poetry from music part-books. The Italian part of the project is more concerned with the literary aspects. Furthermore, we should mention the library of encoded musical pieces of the Musicological Division of CNUCE, about 1000 pieces by different authors. The library is also accessible to remote users on the EARN-BITNET network, and access is made easy by a query system. A large project, financed by funds from the Ministry of Cultural Affairs, was started in 1987, concerning musical sources located in Veneto, a region of northern Italy. The project covers several fields, such as the cataloguing of printed music, textbooks on singing education, and a thematic catalogue of Venetian music. Two other projects are the multiple indexing of performance details and text incipits of all comic operas produced in Naples from 1700 to 1750, carried out at the University of Milan under the direction of Professor Degrada, and the encoding and cataloguing of musical collections using technologies such as OCR and CD-ROM, a collaboration among several institutions and the Laboratorio di Informatica Musicale in Milano. CONCLUSION The situation of computer applications to musicology in Italy is evolving. As I mentioned above, the use of computers in university music departments and in the conservatories is still not well established, even though the number of people (scholars, students, and researchers) taking an interest in this field is increasing.
One may also hope that the realization of software for musicological purposes will become more and more concerned with the design of packages which can be used by many researchers. Software designers should aim to create more than experimental software used only by the research group that built it. An interesting issue related to the problem of spreading and promoting the use of computers in musical studies is its educational implications. At present, the use of computers in music education is mainly concerned with the study of computer music itself or the study of sound processing. The established music-education activities which use the computer are the Computer Music course at the Conservatory of Music L. Cherubini in Florence, the only one in an Italian conservatory; the computer music summer courses held at the Centro di Sonologia Computazionale in Padova; and a few others which also deal with electronic music. I held a three-day seminar on Computer and Music Education last October at the Centro di Ricerca e Sperimentazione per la Didattica, and I will hold a summer course on Computer and Musicology next September in Florence. In my opinion, a very important question is: how could the methodological approach of, say, a musical grammar be used to teach the theory of melody to students by computer? Is it possible to integrate the methodology and the tools developed with, say, the curriculum of music history and music analysis courses? My answer is yes, and I think this is a promising path to follow in the near future. REFERENCES SELFRIDGE-FIELD, E., forthcoming. "Computer-Based Approaches to Musical Data and Music Analysis: a Technical Exchange", in Proceedings of the XIV IMS Conference. BARBIERI, P., DEL DUCA, L., 1986. "Renaissance and Baroque Microtonal Music Research in Computer Real Time Performance", in Proceedings of the 1986 International Computer Music Conference, P. Berg (ed.), San Francisco, Computer Music Association.
BARONI, M., JACOBONI, C., 1978. "Proposal for a Grammar of ...". BARONI, M., 1983. "The Concept of Musical Grammar", Music Analysis, II, 2. BARONI, M. et al., 1984. "A Grammar for Melody. Relationships between Melody and Harmony", in Musical Grammars and Computer Analysis, M. Baroni and L. Callegari (eds.), Firenze, Olschki. BARONI, M. and JACOBONI, C., 1983. "Computer Generation of ...". BARONI, M. et al., 1987. Libretti of Works Performed in Bologna, 1600-1800, Modena, Mucchi. CAMILLERI, L., 1985. "Un Sistema di Regole per Generare Semplici Melodie Tonali", Quaderni di Informatica Musicale, V. CAMILLERI, L., 1987. "Towards a Computational Theory of Music", in The Semiotic Web '86, T. Sebeok and J. Umiker-Sebeok (eds.), Berlin, Mouton de Gruyter. CAMILLERI, L., forthcoming. "Psychological and Theoretical Issues of Musical Segmentation". CAMILLERI, L., CARRERAS, F., GROSSI, P., NENCINI, G., forthcoming. "A Software Tool for Music Analysis", Interface. CAMURRI, A., HAUS, G., ZACCARIA, R., 1986. "Describing and Performing Musical Processes", in Human Movements Understanding, Tagliasco and Morasso (eds.), Amsterdam, North-Holland. DE BERARDINIS, P., 1983. "Il Microcomputer nell'Analisi Strutturale della Musica Atonale", Quaderni di Informatica Musicale, I. DE BERARDINIS, P., 1984. "Analisi Strutturale della Musica Atonale (II)", Quaderni di Informatica Musicale, IV. DE BERARDINIS, P., 1985. "Analisi Strutturale della Musica Atonale (III)", Quaderni di Informatica Musicale, V. D'AMBROSIO, P., GUERCIO, A., TORTORA, G., 1984. "Music Theory by Means of Relational Grammars", Internal Report, Dipartimento di Informatica e Applicazioni, Universita' di Salerno. LIGABUE, M., 1985. "Un Sistema di Regole per l'Improvvisazione nel Jazz", in Atti del VI Colloquio di Informatica Musicale, Milano, Unicopli. LIGABUE, M., 1986. "A System of Rules for Computer Improvisation", in Proceedings of the 1986 International Computer Music Conference, P. Berg (ed.), San
Francisco, Computer Music Association. NENCINI, G., et al., 1986. "TELETAU - A Computer Music Permanent Service", in Proceedings of the 1986 International Computer Music Conference, P. Berg (ed.), San Francisco, Computer Music Association. From: MCCARTY@UTOREPAS Subject: Kurzweil lore? (26 lines) Date: 23 February 1988, 10:08:14 EST X-Humanist: Vol. 1 Num. 791 (791) ---------------------------- From PROF NORM COOMBS I am a kind of Kurzweil user. Not primarily for data entry, although I have done some and expect to do more. I am a totally blind history prof at the Rochester Institute of Technology. I use a KRM to do some of my reading. On occasion I have connected through a bi-directional terminal to our VAX and read the material simultaneously by "ear" and also into a text file. If there is an "accumulated body of wisdom", especially about text entry, I surely could benefit by learning more than I know... which is not much. Looking forward to more info. Norman Coombs From: MCCARTY@UTOREPAS Subject: KDEM and scanning (29 lines) Date: 23 February 1988, 18:49:59 EST X-Humanist: Vol. 1 Num. 792 (792) ---------------------------- From Bob Kraft Like Oxford, CCAT has a KDEM III (not the newer 4000). There are a number of ways to trick the machine into reading more efficiently, although I do not know whether they will work as well on the 4000. One easy and obvious (once you think about it) tactic is to let the machine tell you what it reads and develop an artificial coding in that manner, to be changed to the desired coding later through tailoring. Thus for Hebrew, the resh looks to the KDEM like a "7", the beta looks like a "2", and the dalet is enough like a "T" to get by, etc. Of course, ambiguities must be avoided, but it really helps the scanner to guess correctly. And there are more such tricks, if this is really what Robin Cover is after.
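The tailoring step Bob Kraft describes -- let the scanner emit whatever familiar shapes it reads easily, then substitute the coding you actually want -- can be sketched in a few lines of Python. The scanner-code/target-code pairs below are illustrative guesses drawn loosely from the examples in the message, not CCAT's actual tables.

```python
# Sketch of the post-scan "tailoring" pass: the KDEM reads each letter as
# the character it most resembles, and a later substitution pass converts
# that artificial coding into the desired target scheme.  The mappings
# here are hypothetical, for illustration only.
SCAN_TO_TARGET = {
    "7": "R",  # resh scanned as "7"
    "2": "B",  # beta scanned as "2"
    "T": "D",  # dalet scanned as "T"
}

def retailor(scanned: str) -> str:
    """Replace each artificial scanner code with the intended code."""
    return "".join(SCAN_TO_TARGET.get(ch, ch) for ch in scanned)

print(retailor("72T"))  # -> "RBD"; unmapped characters pass through unchanged
```

Kraft's warning about ambiguity amounts to requiring that the table be one-to-one: if two different letters were both scanned as "7", no substitution pass could recover the original text.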
May I use this occasion to renew my scanner question -- has anyone developed the ability to go directly from microform (film, fiche) to electronic form, without a hardcopy intermediate? I still have not found a concrete lead to such technology. Bob Kraft (CCAT) From: MCCARTY@UTOREPAS Subject: Work on on-line dictionaries (16 lines) Date: 23 February 1988, 18:51:38 EST X-Humanist: Vol. 1 Num. 793 (793) ---------------------------- From Terry Langendoen I understand that Prof. Herbert Stahlke, Dept. of English, Ball State University, has a project for making large on-line dictionaries available in a variety of environments. I don't have an email address for him, but I'm sure he'd be receptive to inquiries by regular post or phone. From: MCCARTY@UTOREPAS Subject: CALL (Romance essays) Date: 23 February 1988, 18:53:29 EST X-Humanist: Vol. 1 Num. 794 (794) ---------------------------- From Joanne Badagliacco Does anyone out there know of any programs to teach foreign languages - particularly essay writing in Romance languages? German and Russian would also be interesting. Please reply to Joanne Badagliacco (JMBHC@CUNYVM). From: MCCARTY@UTOREPAS Subject: Ghetto mentality (58 lines) Date: 23 February 1988, 19:00:26 EST X-Humanist: Vol. 1 Num. 795 (795) ---------------------------- From ked%garnet.Berkeley.EDU@jade.berkeley.edu (Charles Faulhaber) All right, I have under development a relational data base for the primary sources of medieval Spanish literature, i.e., MSS and early printed books. It is written in Advanced Revelation, an MS-DOS-based dbms like DBase but considerably more flexible: variable-length fields, no limits on the number of fields or the number of records. The only real limits are that no record can be more than 64K and no field can be more than 64K. The Bibliography of Old Spanish Texts (BOOST) has evolved from a system based on FAMULUS at the U. of Wisconsin which originally had 19 data elements.
The current version has 7 related files (biography, uniform title, libraries, MSS, copies [printed books], contents [information about specific copies of a given text in a given MS or ed.], and secondary bibliography) with some 300 data elements. Still to come are a subject file and major additions to some of the other ones. E.g., for biography we want to add standard prosopographical information. Advanced Revelation is a window-based system which is designed to allow for the integration of information from a number of different files in any given window as well as for customizable sorts of all kinds. We eventually plan to use the system as a data base front end for a corpus of machine-readable texts in medieval Spanish. Many of these already exist, having been transcribed for the purposes of the Dictionary of the Old Spanish Language at the U. of Wisconsin directed by John Nitti and Lloyd Kasten. We are focussing on 1992, since our corpus represents the literary culture which Spain took to America; and we hope that storage developments and compression algorithms will have developed sufficiently so that in addition to the data base and the texts themselves we will also be able to include a good selection of digitized facsimiles of significant MSS, probably on a CD-ROM disk. (I would be interested in more information about the GIP [Greek Index Project].) Charles B. Faulhaber (ked@garnet.bitnet) From: MCCARTY@UTOREPAS Subject: ICON and SPITBOL... (30 lines) Date: 23 February 1988, 19:04:06 EST X-Humanist: Vol. 1 Num. 796 (796) ---------------------------- From Richard Giordano I haven't used ICON yet, but I have been programming extensively in different versions of SNOBOL for just about ten years. If ICON is anything like SPITBOL, it is true that you can learn to write complicated string-handling and symbol manipulation programs in little more than a couple of weeks.
As many people know, SNOBOL is a powerful tool for text analysis and retrieval, and ICON seems to be even more powerful than SNOBOL, particularly because it's a structured language. I really have to agree with Richard Goerwitz's praise. I'm a little puzzled over the comments I've read regarding indexing files with ICON. I use IBMPC SPITBOL, and I have had no problem indexing records, and retrieving them. In fact, I created a pretty large database system written entirely in IBMPC SPITBOL. Maybe I am puzzled because I am mis-reading the comments. Richard Giordano Computing and Information Technology Princeton University RICH@PUCC From: MCCARTY@UTOREPAS Subject: ICON and commercial support (49 lines) Date: 23 February 1988, 19:06:00 EST X-Humanist: Vol. 1 Num. 797 (797) ---------------------------- From dow@husc6.BITNET (Dominik Wujastyk) I don't agree at all with Mark Olsen's claim that Icon is fettered by its public domain status. Of course it would be nice if a company took it up and added goodies. But I probably couldn't afford them anyway. But look for a moment at TeX. This is a public domain product that has more support than any commercial product I can think of. Have you ever tried, for example, to get help from IBM? Or Lotus? Or Microsoft? Sometimes one strikes lucky, and will get a helpful person. But TeX has a huge and thriving community of users, including many very talented programmers, who provide a steady stream of excellent add-on programs and macros. In the first place there is TeXhax, which is a digest of questions, answers and macros produced by Malcolm Brown, which appears sometimes as often as thrice weekly! Then there is TeXmag, another on-line magazine with news and more macros. There is the USENET news area comp.text which is full of TeXiana, and there is TUGboat, a tri-annual journal of the highest quality, which is again full of help and interest. The staff of the TeX Users Group are also on hand with help and advice.
As I say, I have not met support like this *anywhere* else, and the commercial software I have used over the years, especially from the big companies, has been abysmally supported. It would be interesting to try and analyse just why it is that TeX has attracted the support that it has. In the first place it is a superb program. I make it my general policy to prefer the PD software to the commercial as far as possible, and there are few areas where this has not led me to have better programs and better support for them. Dominik Wujastyk From: MCCARTY@UTOREPAS Subject: The Greek Index Project (17 lines) Date: 23 February 1988, 19:07:01 EST X-Humanist: Vol. 1 Num. 798 (798) ---------------------------- From dow@husc6.BITNET (Dominik Wujastyk) Robert Sinkewicz recently mentioned a project at Toronto called GIP. Could someone who knows about it please send me information about it? From: MCCARTY@UTOREPAS Subject: Kurzweil and OCRs not pundits? (14 lines) Date: 23 February 1988, 19:08:18 EST X-Humanist: Vol. 1 Num. 799 (799) ---------------------------- From dow@husc6.BITNET (Dominik Wujastyk) Am I right in thinking that no Kurzweil or other OCR machine has yet succeeded in reading Devanagari? Dominik Wujastyk From: MCCARTY@UTOREPAS Subject: Parsers for Semitic languages (39 lines) Date: 24 February 1988, 00:33:54 EST X-Humanist: Vol. 1 Num. 800 (800) ---------------------------- From Jack Abercrombie Just a quick footnote to the query on parsers for Semitic languages. As noted, Dick Whitaker wrote a parsing program for biblical Hebrew. His work was funded by the CATSS Project, and its result, a parsed Hebrew text, was then collated against the Maredsous text. (Whitaker's program incidentally can be modified to work with most Semitic languages.) Both the Maredsous biblical text and that of G. Weil were parsed by hand, not by program.
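For readers who have not seen one, the kind of parsing program under discussion can be suggested by a deliberately naive sketch: strip candidate prefixes and suffixes and check the remainder against a stem lexicon. This is not Whitaker's method (the message does not describe it), and the transliterated affixes and stems below are invented for illustration; real Hebrew morphology is far richer.

```python
# Toy morphological parser in the spirit of the programs discussed above.
# All linguistic data is hypothetical: a handful of invented prefixes,
# suffixes, and transliterated stems.
PREFIXES = ["w", "h", "b", "l"]            # invented one-letter prefixes
SUFFIXES = ["im", "wt"]                    # invented plural-style endings
LEXICON = {"dbr": "word", "mlk": "king"}   # invented stem lexicon

def parse(form: str):
    """Return (prefix, stem, suffix) analyses whose stem is in the lexicon."""
    analyses = []
    for pre in [""] + PREFIXES:
        if not form.startswith(pre):
            continue
        rest = form[len(pre):]
        for suf in [""] + SUFFIXES:
            if not rest.endswith(suf):
                continue
            stem = rest[:len(rest) - len(suf)] if suf else rest
            if stem in LEXICON:
                analyses.append((pre, stem, suf))
    return analyses

print(parse("hdbrim"))  # -> [('h', 'dbr', 'im')]
```

A real system would also need vowel handling, stem changes under affixation, and disambiguation among competing analyses, which is why hand-parsed texts such as the Maredsous text remained valuable as checks.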
As far as I know, only a few individuals have admitted to writing a parsing program for Hebrew or any Semitic language, besides Whitaker and of course Dean Forbes. Some eight years ago, Nahum Sarna informed me that his son had written such a program for Hebrew. About five years ago, one of my students wrote an automatic parsing program for Hebrew. Another student took this same work and modified the data files for Arabic. Neither version has survived, though Roger Allen and I have saved the algorithm for other purposes and it would be easy to reconstruct it if we were seriously interested. Lastly, Everhard Ditters has also written a parsing program for Arabic according to information I received from him last year. Jack Abercrombie, Assistant Dean for Computing & Director of CCAT From: MCCARTY@UTOREPAS Subject: SPITBOL, ICON, Brain Damage (24 lines) Date: 24 February 1988, 08:53:10 EST X-Humanist: Vol. 1 Num. 801 (801) ---------------------------- From Lou Burnard Like Richard Giordano I've been a happy spitbol hacker for more years than I care to remember. I've also been greatly tempted by the evident superiority of icon in terms of language design structure etc. My question is, am I alone in finding it very difficult to re-think problems in icon-ic (as opposed to snobol-ic) terms? i dont get this problem programming in other high level languages because they dont have built in string operations so similar to those of spit/snobol, but whenever i start trying to write a piece of icon code i find myself wanting to introduce gotos and call snobol functions. maybe its just my celebrated mid-life crisis. or does the snobol style actually damage the brain in the way dijkstra warned us it would? lou From: MCCARTY@UTOREPAS Subject: Griswold and ICON Date: 24 February 1988, 08:55:41 EST X-Humanist: Vol. 1 Num. 802 (802) ---------------------------- From R.J.Hare@EDINBURGH.AC.UK Why doesn't someone try and contact Griswold et al.
at Arizona and invite them to contribute to the current discussion of Icon? I have tried and failed (for reasons which are not clear to me) to contact them by electronic mail from here, but I'm sure that they would be interested in the current discussion (if they aren't already aware of it). As an indication of the power and versatility of Icon, I might say that the first program I wrote using Icon was an editor (yeah, I know context editors are passe, but it seemed a useful starting exercise for a language whose strong points include powerful string processing capabilities), and I was amazed that I was able to get a more or less working editor, albeit a very simple one, after only a few days' 'spare-time' effort, particularly as I hadn't done any *real* string processing before. I'm a fan, and if I had more time I'd be a better one. [Note that a copy of this message is being sent to Ralph Griswold. W.M.] From: MCCARTY@UTOREPAS Subject: Icon and the PD Date: 24 February 1988, 09:01:42 EST X-Humanist: Vol. 1 Num. 803 (803) ---------------------------- From Mark Olsen I would like to agree with Dominik Wujastyk's comments regarding PD software. There is some good stuff out there. But I'm not sure that the example of TeX is appropriate as, from what I understand, one purchases it through one of several vendors, each of whom might add something to the TeX standard and give more or less support to particular printers/environments/etc. I know we have looked at TeX for the PC and found the price at about $250, with options for additional printer support. This is not PD software, at least as I define it. Developers have taken a good design, ported it to particular machines and added other support. Software marketed by the major manufacturers and small houses can all attract good user groups (and Lord knows we need a KDEM_SIG) and other levels of "informal" support.
I have also had some good experiences with smaller developers, who take an intense interest in the product and in customer satisfaction. I see the PD as a testing ground for design ideas, the most successful of which appear in later incarnations as either "shareware" or commercial ventures. I don't really want to depend on the spare time of some individuals to improve products and design reasonable upgrade paths. I would rather shell out the price for WordPerfect or PC-Write (remember, it ain't PD), both of which are good products, than try to find some PD product that is only a third as effective. I think that Icon is probably at the point in its life where it could be reasonably picked up by a small developer, like Catspaw or the developer of SPITBOL for the PC, and given real support at a fair price. I know academics are broke, but that does not mean that we should really try to get something for nothing. Mark From: MCCARTY@UTOREPAS Subject: WordCruncher texts (36 lines) Date: 24 February 1988, 09:02:58 EST X-Humanist: Vol. 1 Num. 804 (804) ---------------------------- From Randall Jones Electronic Text Corporation, home of the text retrieval program known as WordCruncher, has recently announced a number of indexed texts that can be used with WordCruncher. These include the Riverside Shakespeare; ten volumes from the Library of America collection, viz. Franklin, Jefferson, Melville, Twain, Emerson, Thoreau, Hawthorne, Whitman, Henry James, London; a collection of approximately 50 writings on the U.S. Constitution (including the Federalist Papers); and the King James Bible. Texts in progress include the Oxford Shakespeare, additional volumes from the Library of America, other English Bibles, the Hamburg edition of the complete works of Goethe, and the Pfeffer spoken German corpus. Because of agreements with the publishers these texts cannot be sold simply as electronic texts, but can only be used with WordCruncher.
For additional information as well as suggestions for texts to be made available for the future send me a BITNET note or write or call: Electronic Text Corporation 5600 N University Ave Provo, Utah 84604 801-226-0616 Randall Jones, Director Humanities Research Center Brigham Young University Provo, Utah 84602 801-378-3513 From: MCCARTY@UTOREPAS Subject: Line-counts; the fileserver (36 lines) Date: 24 February 1988, 09:04:54 EST X-Humanist: Vol. 1 Num. 805 (805) Following the suggestion of a HUMANIST in the U.K., I have almost faithfully been adding line-counts to the subject lines of messages. I have not counted the lines myself but used the built-in line-counter provided me by the operating system I use, CMS. This counter registers the total number of lines in a message, including the header. It's not much work to do this, and I'll continue to do it without protest, but I want to make sure that some of you are finding this count useful. Please let me know >>only if you are<<. On another matter, I propose that henceforth announcements of conferences be posted once to all of us, then transferred to the fileserver for future reference. I also invite anyone with relevant bibliographies, catalogues (such as the SNAPSHOT of the Oxford holdings, Lou), or technical reports, to submit them for storage on the server together with a very brief notice that will be sent to all HUMANISTs when the material is ready for access. The biographies will be kept there, and I encourage anyone whose life has changed in a way that he or she wants to announce, to fetch the relevant file, make the update, and send me the results. Shortly we will make efforts to see what can be done to store public-domain packages on the server. One HUMANIST, David Sitman, has already sent me a number of very useful suggestions about the operation of the server. I invite others to do the same. 
Willard McCarty mccarty@utorepas From: MCCARTY@UTOREPAS Subject: Jean Talon Project (44 lines) Date: 24 February 1988, 10:05:10 EST X-Humanist: Vol. 1 Num. 806 (806) ---------------------------- From (M.J. CONNOLLY [Slavic/Eastern]) From my involvement with the Domesday Project, as user, adapter and evaluator, I have also been kept informed about a similar Canadian project, namely the Jean Talon project, which seeks to have the next Canadian census follow the Domesday mode of collection and publication (Domesday was a stock-taking of British life in the '80s). This type of project requires the combining of data and text from all disciplines, visual and cartographic material to accompany same, and well-designed programs to integrate same and present it in an interface that can be used by journalists, officials, businessmen, but primarily by teachers, researchers, and students. It is an excellent example of what recent HUMANIST discussions have been seeking: TEXT in CON-TEXT. The moving force behind Jean Talon is: Mr Victor B. Glickman, Regional Director Statistics Canada 25 St Clair Avenue East Toronto Ontario M4T 1M4 One might like to see how HUMANIST and Jean Talon could link up or, at very least, interact. The project is most ambitious, but Glickman seems to be covering all the bases for funding sources, project organization, and learning from others' mistakes. I can see the need for a good dose of 'humanistic' input here, though. Prof M.J. Connolly Slavic & Eastern Languages Boston College / Carney 236 Chestnut Hill MA 02167 ( U . S . A .) (617)552-3912 cnnmj@bcvax3.bitnet From: MCCARTY@UTOREPAS Subject: Text retrieval & related programs for Macintoshes (26 lines) Date: 24 February 1988, 11:18:30 EST X-Humanist: Vol. 1 Num. 807 (807) FROM John J. Hughes Help needed from HUMANISTS! Other than SONAR (Virginia Systems Software Services, Inc.) 
and SQUARE NOTE (Union Software), what commercial and noncommercial text retrieval, concording, and text analysis programs are available for Macintoshes? Thank you in advance for your help. John J. Hughes Bits & Bytes Computer Resources 623 Iowa Ave. Whitefish, MT 59937 (406) 862-7280 XB.J24@Stanford.BITNET From: MCCARTY@UTOREPAS Subject: CD-ROMs (21 lines) Date: 24 February 1988, 13:08:53 EST X-Humanist: Vol. 1 Num. 808 (808) ---------------------------- From Joe Giampapa I am asking this question out of curiosity ... It sounds like a few HUMANISTS are using CD-ROM technology for saving their texts. For those who are, may I ask what the process is that you use to do so? Specifically, what company(ies) are involved (or is it on site?), what do you do for the CD-ROM reader, and what is the cost of all this? Joe Giampapa giampapa@brandeis.bitnet From: MCCARTY@UTOREPAS Subject: CD-ROM, etc. (31 lines) Date: 24 February 1988, 13:11:19 EST X-Humanist: Vol. 1 Num. 809 (809) ---------------------------- From O MH KATA MHXANHN I noticed this morning in the February issue of Micro/Systems the following jetsam among "Random Gossip & Rumors": Later this year, Sony, Sharp, and Verbatim are expected to introduce high-capacity, removable, rewritable, optical disk drives that may challenge Winchester hard disks as primary storage devices. Sony's drive is expected to store 650 MB, have a 120-millisecond access time, use an SCSI interface, and cost about $1,000. I wonder if the digital cognoscenti of this group regard this information as (a) likely to be true (b) optimistic (c) vaporously induced phantasmagoria (d) a lengthy typographical error (e) all of the above (f) none of the above. From: MCCARTY@UTOREPAS Subject: SNOBOL4/brain damage (42 lines) Date: 24 February 1988, 13:14:09 EST X-Humanist: Vol. 1 Num. 
810 (810) ---------------------------- From Richard Goerwitz Since SNOBOL4 and Icon are languages particularly suited to Humanistic uses, I don't think that this will be too much of a sidetrack. In response to Lou Burnard's query about switching from SNOBOL/SPITBOL to Icon, let me say that I actually sent a letter on this to Griswold (the creator of Icon) at arizona.edu. I also sent a similar letter to Mark Emmer, who markets a SNOBOL4 implementation for MS-DOS. Let me comfort you: Apparently lots of folks have trouble re-thinking SNOBOL4 problems in Icon. The reason is partly that Icon is much more procedurally oriented. More than this, though, its string-scanning functions are much lower-level and better integrated into the rest of the language. What this means is that you'll need to write actual procedures called ARB, ARBNO, RARB, etc. To call them at run-time, you'll need to use the "call-by-string" feature available in Icon versions 6 and up. This will be a pain until you get used to it. Once you get used to it, Icon becomes a much more powerful tool to work with. I guess the trouble with SNOBOL4 and Icon is that SNOBOL4 looks a little like Icon in some respects - just enough to be dangerous! If you want advice at any point, try emailing icon-project@arizona.edu. They are busy with version 7 now, but generally they return mail within two days. Apparently they enjoy a show of interest & questions. -Richard L. Goerwitz goer@sophist.uchicago.edu !ihnp4!gargoyle!sophist!goer From: MCCARTY@UTOREPAS Subject: Latin analysis software (21 lines) Date: 24 February 1988, 14:51:28 EST X-Humanist: Vol. 1 Num.
811 (811) ---------------------------- From Bob Kraft Jean Schumacher at CETEDOC in Belgium (NETDOC@BUCLLN11) reminds me that CETEDOC has been involved in analysis of Latin texts for about two decades and is willing to explore ways in which its programs and experience can be of assistance to others -- for example, files might be sent electronically to CETEDOC for analysis and the results returned in the same way. CETEDOC also supports the suggestion that an inventory list of such programs and services would be helpful. Bob Kraft (CCAT) From: MCCARTY@UTOREPAS Subject: Snobol vs. Icon (21 lines) Date: 24 February 1988, 18:34:13 EST X-Humanist: Vol. 1 Num. 812 (812) ---------------------------- From amsler@flash.bellcore.com (Robert Amsler) I believe icon is not the clearcut step forward some would have us believe. It solved a number of problems computer scientists had with snobol as a language--but it seems unclear to me that these were problems that non-computer scientists were experiencing. I believe snobol is excellent for small programs that either do complex extractions or rewrites of data. As the programs get larger, the programming language theory defects in snobol may become unreasonable; I certainly wouldn't advocate it as the best language in which to build complex systems with thousands of lines of code--but frankly some of the changes between snobol and icon are NOT steps toward making the language easier to use. From: MCCARTY@UTOREPAS Subject: Re: Random Gossip and Rumors (about mass-storage media) Date: 24 February 1988, 18:41:04 EST X-Humanist: Vol. 1 Num. 813 (813) ---------------------------- From amsler@flash.bellcore.com (Robert Amsler) Optimistic, or possibly even ``run it up the flagpole and see if anyone salutes''. The events are very desirable goals from the vendors' perspectives, but their current capabilities put them further away from realizing these than the rumor would seem to suggest.
If you replaced "later this year" with "within 3 years" I'd find it more believable. If you changed the price to $3000 (which might mean they wouldn't make it) it would also be more reasonable. So, how is this... Within 3 years Sony will be marketing a $3000, 1 Gigabyte erasable optical disc. From: MCCARTY@UTOREPAS Subject: Random gossip and rumors about magneto-optical technology, cont. Date: 25 February 1988, 09:28:59 EST X-Humanist: Vol. 1 Num. 814 (814) ---------------------------- From Robin C. Cover The "rumor" alluded to by McCarty is probably second generation, at least. There was a report in PCWeek to the same effect: that Sony's magneto-optic drive would cost $1000 this summer, have a data-transfer rate of 680K per second, and offer 650 megabytes of read/write disk space with a 120-millisecond access time. It sounded too good to be true so we contacted Sony...raise that to $7500. But apparently Sharp, Verbatim and Sony did demo such things at Comdex, and I have heard of another vendor releasing a magneto-optic read/write drive in "first quarter" this year, allegedly having 45 megabytes per side on the disk, 30-millisecond access time, and costing around $1500. More dreaming? If anyone knows anything reliable about magneto-optical, I'd like to hear it. Robin C. Cover ZRCC1001@SMUVM1.bitnet From: MCCARTY@UTOREPAS Subject: Icon/Snobol (26 lines) Date: 25 February 1988, 09:34:44 EST X-Humanist: Vol. 1 Num. 815 (815) ---------------------------- From R.J.Hare@EDINBURGH.AC.UK Those interested in the Icon versus Snobol aspect of the Icon discussions may be interested in the papers: Griswold, R.E., "Implementing SNOBOL4 Pattern Matching in Icon", Computer Languages, Volume 8, pages 77-92 (funny - I don't have the date for that one, only the volume). Griswold, R.E. and Griswold, M.T., "High Level String Processing Languages: COMIT, SNOBOL4 and Icon", Abacus, Volume 3, No 4, pages 32-44, Summer 1986.
If anyone were that interested, I could mail them a bibliography of about 30 Icon related items... Roger Hare. From: MCCARTY@UTOREPAS Subject: Hypertexpert systems (16 lines) Date: 25 February 1988, 10:08:06 EST X-Humanist: Vol. 1 Num. 816 (816) ---------------------------- From Sheldon Richmond Is there anyone who could tell me something about HypertExpert Systems produced by Knowledge Garden Inc? Thanks. Sheldon Richmond From: MCCARTY@UTOREPAS Subject: CD-ROM, etc. (16 lines) Date: 25 February 1988, 10:10:20 EST X-Humanist: Vol. 1 Num. 817 (817) ---------------------------- From M.J. CONNOLLY (Slavic/Eastern) Can't say too much on specs, but (a), likely to be true, is closest. Removable 50MB optical read/write, e.g. Verbatim, may be here as early as the summer. Watch for the usual 50>100>200Mb progression in capacity. Too soon to speculate on pricing, SCSI most likely. The rest is silence. From: MCCARTY@UTOREPAS Subject: TeX and the public domain (64 lines) Date: 25 February 1988, 10:50:57 EST X-Humanist: Vol. 1 Num. 818 (818) ---------------------------- From dow@husc6.BITNET (Dominik Wujastyk) No, TeX really *is* in the public domain, in two important senses of the phrase. First, yes you can buy personal TeX from Lance Carnes, for $249. What you pay for is not the programming that went into TeX (what would it cost to hire Knuth for ten years!), but for the considerable work that went into rewriting the Pascal source so that it would compile and run efficiently on a PC or AT. And Carnes has done an excellent job. PCTeX runs as fast on a good AT as on a medium loaded Vax. Plus you get the TeXbook, LaTeX manual, and macros (LaTeX, AMS, Spivak's Vanilla) and installation, TFM files etc. etc. In other words, you are paying for little more than the cost of putting the package together, testing it, making it easy to run and install, and documenting it. In fact, Carnes is doing the job for the PC community that a sysop does on a mainframe machine. 
The full WEB source code of TeX is available from Carnes either by mail or by downloading from the PCTeX bulletin board. Secondly, TeX for Unix, VMS, Tops-20, and most other systems is available for the cost of the tape, or the FTP. I expect that TeX is already available on your mainframe somewhere, if you have not already discovered it. In a purer sense of PD, there is Monardo's Common TeX. This is the translation of TeX into C. This is yours for the downloading from several sources. There is even a ready-compiled PC version, complete with the plain.fmt, available on several bulletin boards in the Boston area (e.g., Channel One, (617) 354 8873, or Viking Magic (617) 354 2171), and very probably across the country. There is another PD version, again based on a C translation (done automatically this time), and the source of that too is easily available. You would need to compile it yourself, though. Both these versions are also excellent, but you have no documentation or help in getting them up and going. Really, you have to have used TeX before on some other system, so that you know where everything should go, and how to use TeX. You also have to get your own fonts and drivers (also available in PD from Nelson Beebe@science.utah.edu). The point is, someone with the know-how and a modem could put together a perfectly good version of TeX on a PC or AT without buying anything at all. It would just take a bit of time and effort. Incidentally, Monardo is offering a C version of Metafont for beta testing, and the final version will be released into the PD very soon. The other sense of PD that I have in mind is that TeX's algorithms are freely available. I know of two commercial companies who use parts of TeX in their systems and I am sure there are others. Anyone writing a serious text processing program or word processor today is going to look very carefully at TeX's source.
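One concrete example of what such a program might borrow: TeX hyphenates words with Liang's pattern scheme, in which patterns carry inter-letter digits and an odd maximum value marks a permissible break point. A minimal sketch in (modern) Python, with a two-pattern toy table invented for this one word; real TeX ships thousands of patterns per language, plus an exception dictionary layered on top:

```python
def make_patterns(raw):
    """Parse patterns like 'hy3ph' into a letters -> inter-letter values table."""
    table = {}
    for pat in raw:
        letters = ""
        values = [0]            # one value slot before/after each letter
        for ch in pat:
            if ch.isdigit():
                values[-1] = int(ch)
            else:
                letters += ch
                values.append(0)
        table[letters] = values
    return table

def hyphenate(word, table):
    """Mark break points where the maximum pattern value is odd."""
    w = "." + word.lower() + "."              # dots anchor word boundaries
    vals = [0] * (len(w) + 1)
    for i in range(len(w)):                   # take the max value contributed
        for j in range(i + 1, len(w) + 1):    # by every matching pattern
            if w[i:j] in table:
                for k, v in enumerate(table[w[i:j]]):
                    vals[i + k] = max(vals[i + k], v)
    out = []
    for idx, ch in enumerate(word):
        # odd value => break allowed; keep two letters at each end (simplified)
        if 2 <= idx <= len(word) - 2 and vals[idx + 1] % 2 == 1:
            out.append("-")
        out.append(ch)
    return "".join(out)

patterns = make_patterns(["hy3ph", "hen3a"])
print(hyphenate("hyphenation", patterns))     # hy-phen-ation
```

The pattern tables are what make the scheme practical; the control flow, though, is just this.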
XyQuest have used TeX's hyphenation algorithms in their excellent word processor, XyWrite III plus. And Graham Asher of Informat, a London typesetting company, has made TeX the heart of a major professional typesetting system that his company uses and sells. (It is used to set several of Gower Publications' scientific journals.) Dominik Wujastyk From: MCCARTY@UTOREPAS Subject: New item on the file-server Date: 25 February 1988, 13:02:28 EST X-Humanist: Vol. 1 Num. 819 (819) The file-server now contains a desultory bibliography on the Icon programming language. It has been submitted by Roger Hare of Edinburgh, who warns that it is simply what he has been able to gather in the last 12 months or so. Caveat emptor! (and let him or her remember that it's free). Contributions to this bibliography can be sent to me. The file is named ICON BIBLIOG. Willard McCarty mccarty@utorepas From: MCCARTY@UTOREPAS Subject: Mega-storage devices, and dreams, and things Date: 26 February 1988, 12:09:53 EST X-Humanist: Vol. 1 Num. 820 (820) ---------------------------- From Eva Swenson I have found that promises of bigger and better eventually come true, with emphasis on "eventually". It is just a matter of time. The promise, of course, may be fulfilled with side effects even though none of these were promised. And not all the side effects are desirable. So what else is new? From: MCCARTY@UTOREPAS Subject: Snobol - Icon transition (17 lines) Date: 26 February 1988, 12:13:04 EST X-Humanist: Vol. 1 Num. 821 (821) ---------------------------- From dow@husc6.BITNET (Dominik Wujastyk) In the Icon program library that I received with the MSDOS version of Icon 6 from Arizona, there was a file containing Icon procedures which duplicate many of the built-in functions of Snobol, like LEN, BREAK, ARB and so on. I hope everyone knows this already. Dominik From: MCCARTY@UTOREPAS Subject: Producing a CD-ROM at the PHI (48 lines) Date: 26 February 1988, 12:27:47 EST X-Humanist: Vol. 1 Num.
822 (822) ---------------------------- From John Gleason Here's what The Packard Humanities Institute and, as far as I know, the Thesaurus Linguae Graecae do to produce a CD-ROM:

1. Imbed in the text files an extensive system of markers, to permit access by author, section, etc.
2. Process these files through a humungous program that constructs various directories of these markers.
3. Send the resulting tapes to a contractor who processes them into CD-ROM-type files in the "High Sierra" format which is proposed as standard by the National Information Standards Organization. The contractor we have used is: Publishers Data Service Corporation (PDSC), 2511 Garden Road, Bldg. C, Monterey, CA 93940, (408) 372-2812.
4. The contractor then sends his tapes to a company which makes a master CD-ROM with a few copies for testing. Our contractor uses: Digital Audio Disk Corporation, 1800 North Fruitridge Ave., Terre Haute, IN 47804, (812) 466-6821.
5. After testing the sample CD-ROMs, we order as many additional copies as we need.

There are lots of detailed steps that I've omitted, but that's the general outline. John M. Gleason, The Packard Humanities Institute, xb.m07@stanford From: MCCARTY@UTOREPAS Subject: Computer scientists and SNOBOL (45 lines) Date: 26 February 1988, 12:29:25 EST X-Humanist: Vol. 1 Num. 823 (823) ---------------------------- From Richard Giordano I remember a few comments by computer scientists regarding SNOBOL (I think they are in Wexelblat's HISTORY OF PROGRAMMING LANGUAGES), which may (or may not, I don't know) have been 'solved' in ICON. Aside from the whole GOTO question, there were two issues: the ambiguity regarding the use of a space (it has two meanings); and there appeared to be severe inefficiencies in execution time. I also think there might have been some discontent with SNOBOL's YES/NO/UNCONDITIONAL GOTO structure as well.
One overwhelming criticism of the language is that its strength is its greatest weakness--SNOBOL's terse structure makes its programs very difficult to document. SNOBOL code written by one programmer looks like randomly-strewn characters to the next. I am not at all sure that I agree with Robert Amsler's comments on SNOBOL's limitations as a language for systems requiring thousands of lines of code. Who among us writes thousands of lines of code anyway? That sort of thing is developed by teams, teams that are managed. The selection of a programming language in a team effort has much more to do with management decisions and concerns than it has to do with the intrinsic merits of this or that particular language. This explains, in part, why there are so many FORTRAN and COBOL shops around. Computer scientists, and their opinions, usually count for little or nothing in this equation. I also think that Amsler underestimates SNOBOL's power. It can do far more than "complex extractions or rewrites of data". I've written complex parsers in SNOBOL, as well as programs that do pretty complicated manipulations, and analyses, of symbols. I agree with Amsler that the problems non-computer scientists were experiencing were different from those experienced by computer scientists. My problem with SNOBOL is that it doesn't really have the data processing punch that PL/I has, particularly when you want to pass parameters. So big deal. I do have an objection to Amsler's implicit message that computer scientists know best. They don't, but that's a different story... Rich Giordano Princeton University From: MCCARTY@UTOREPAS Subject: Stemmatic analysis of Latin mss (43 lines) Date: 26 February 1988, 12:30:41 EST X-Humanist: Vol. 1 Num. 824 (824) ---------------------------- From Norman Zacour It has been suggested that I ask fellow Humanists for some advice about a small programming project that I have become engaged (immersed? bemired?) in.
It has to do with the mechanical problems surrounding the making of critical editions of (in this case) Latin texts. As the mss. multiply and the variants begin to pile up, the difficulty of keeping them all in order increases in geometrical proportion; they have to be numbered and numbered again as new mss. are used, new variants found. One answer is to embed the variant note in the text itself, right beside its lemma, with the risk, ultimately, of introducing every kind of error when it has to be retyped and numbered preparatory to submission to a publisher. I have written a program to read a text file with such embedded notes of variants, as well as embedded "fontes" (references to sources such as the Bible, ecclesiastical authorities, etc). The program copies the variant notes into a separate file, fishes out the lemma and puts it with the note, copies the fontes into a second file, copies the text itself, now newly formatted and shorn of notes, into a third file, counts the line numbers of the new text file, and numbers both the variant notes and the fontes with the number of the lines in the newly formatted text to which they refer. It is purely a mechanical operation, but is designed to preserve a semblance of sanity and latinity. Depending on the speed of one's machine it will handle a 100,000 byte text in from 15 to 35 seconds. The next step is to have the program examine the variants themselves, classify them according to type, the purpose being to determine family relationships among the mss. involved, and so to fashion the preliminaries of a stemma. Is there anyone out there doing this sort of thing? If the response warrants it, I shall be happy to post a summary on HUMANIST so that others can keep abreast of what is going on. Norman Zacour (Zacour@Utorepas) Pontifical Institute of Medieval Studies Toronto From: MCCARTY@UTOREPAS Subject: Help on research project by novice (53 lines) Date: 26 February 1988, 12:34:01 EST X-Humanist: Vol. 1 Num. 
825 (825) ---------------------------- From PROF NORM COOMBS I am a historian by training and only arrived recently, by the back door, in the field of communications. I have been using a computer conference system to teach distance learners. They learn material from watching videos and reading texts, but then they check into the college VAX system weekly to join a class discussion. The conference is asynchronous which allows maximum time flexibility. Most students found they needed very little computer knowledge. Those with a PC and modem who could access the system from home or work came to "love" it. We did have a lot of participation, and I felt the students shared more personal items than they do in my classroom meetings. Now I have a record of last fall's class. There were 56 topics with 35 to 45 replies to each. The students also wrote essay exams and submitted them by computer. It seems to me, though I am not a communications specialist, that this body of data could be useful. I intend to analyze it during the spring and hope to find useful results by summer to share with a wider scholarly community. Here are some of the items I had intended to examine. (I expect to run the data through Writer's Workbench.) 1. compare conference replies with essay exams to see effect of different situation on writing. e.g. formal vs informal. Sophistication. Length of sentence and length of word. 2. Compare first and last week replies in conference for any changes in style. 3. Chart how much the discussion merely answered the professor's topic questions and how much involved interaction with other participants' comments. As I confessed previously, I am new in studying communications, and I would certainly appreciate any good hints about what might be done with this data and what kinds of tools could be helpful. Any ideas are welcomed. I will be away for a week and have had my Humanist mail stopped for the duration.
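Items 1 and 2 above reduce, at their simplest, to counting: average words per sentence and characters per word, compared across the two bodies of text. A purely illustrative sketch in (modern) Python of that bare minimum; Writer's Workbench computes far more, of course:

```python
import re

def style_stats(text):
    """Crude surface measures: words per sentence, characters per word."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    return {
        "words_per_sentence": len(words) / len(sentences),
        "chars_per_word": sum(len(w) for w in words) / len(words),
    }

# Comparing an informal conference reply with exam prose would then be
# a matter of calling style_stats on each and comparing the numbers.
```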
Therefore direct mailings would be best as they will get through. NRCGSH@RITVAX.BITNET Norman Coombs From: MCCARTY@UTOREPAS Subject: Date: 26 February 1988, 14:50:37 EST X-Humanist: Vol. 1 Num. 826 (826) Subject ODA SGML list (46 lines) Announcing the creation of an ODA/SGML mailing list. The ISO 8613 ODA/ODIF has recently been made into an International Standard. This, and the growing interest in both ODA and SGML, is cause enough for a mailing list. In case you don't know, ODA stands for Office Document Architecture, ODIF for Office Document Interchange Format, and SGML for Standard Generalized Markup Language. If you are interested in these standards, or like myself are trying to use them in future products, then this mailing list is for you. Let's trade horror stories :-) To join the mailing list send mail to: oda-request@trigraph or ...utzoo!trigraph!oda-request Thanks, Michael Winser -- ...utscri!trigraph!michael Michael Winser michael@trigraph.UUCP Trigraph Inc. 5 Lower Sherbourne St. #201 (416) 363-8841 Toronto, Ontario M5R 3H8 ---------------------------------------- From: MCCARTY@UTOREPAS Subject: British natural language processing? (25 lines) Date: 26 February 1988, 14:53:22 EST X-Humanist: Vol. 1 Num. 827 (827) ---------------------------- From Grace Logan I have received an urgent request from my home university for information "about the Natural Language Processing which the British Government are busy implementing." Now I would very much like to help out the folks back home, but Natural Language Processing is not something with which I have a great deal of experience. I wonder if my fellow HUMANISTs have more of an idea than I do about where to start to collect information on the topic. I should be very grateful. Grace Logan From: MCCARTY@UTOREPAS Subject: TeX and the public domain (19 lines) Date: 26 February 1988, 15:28:59 EST X-Humanist: Vol. 1 Num.
828 (828) ---------------------------- From ked%garnet.Berkeley.EDU@jade.berkeley.edu (Charles Faulhaber) One of the slickest WYSIWYG text formatters around is Arbortext's Publisher, currently available only for the Sun. Essentially it is a user-friendly front end for TeX with SGML style templates for various kinds of documents. It gives the end user all of the power of TeX without having to know anything about the language at all. It still needs work but it is head and shoulders above any text formatting system I know. From: MCCARTY@UTOREPAS Subject: Stemmatic analysis of Latin mss. (14 lines) Date: 26 February 1988, 15:31:50 EST X-Humanist: Vol. 1 Num. 829 (829) ---------------------------- From ked%garnet.Berkeley.EDU@jade.berkeley.edu (Charles Faulhaber) The TUSTEP program written by Wilhelm Ott (Tuebingen) is designed to handle these kinds of problems. Francisco Marcos Marin (Madrid) also has a mainframe IBM program for this. From: MCCARTY@UTOREPAS Subject: ODA SGML list (38 lines) Date: 26 February 1988, 15:33:58 EST X-Humanist: Vol. 1 Num. 830 (830) Announcing the creation of an ODA/SGML mailing list. The ISO 8613 ODA/ODIF has recently been made into an International Standard. This, and the growing interest in both ODA and SGML, is cause enough for a mailing list. In case you don't know, ODA stands for Office Document Architecture, ODIF for Office Document Interchange Format, and SGML for Standard Generalized Markup Language. If you are interested in these standards, or like myself are trying to use them in future products, then this mailing list is for you. Let's trade horror stories :-) To join the mailing list send mail to: oda-request@trigraph or ...utzoo!trigraph!oda-request Thanks, Michael Winser -- ...utscri!trigraph!michael Michael Winser michael@trigraph.UUCP Trigraph Inc. 5 Lower Sherbourne St.
#201 (416) 363-8841 Toronto, Ontario M5R 3H8 ---------------------------------------- From: MCCARTY@UTOREPAS Subject: Snobol comments (53 lines) Date: 26 February 1988, 21:57:46 EST X-Humanist: Vol. 1 Num. 831 (831) ---------------------------- From amsler@flash.bellcore.com (Robert Amsler) Curious. I was deliberately striving to say that computer scientists DON'T know all that much about ease of use--or rather that concerns about formal syntax, theoretical power and such might get in the way of concerns about simplicity of learning, etc. Another good example of this is BASIC. BASIC is a fine language for simplicity of understanding--which was its intended goal; however, I cringe at the thought of someone writing a complex system in BASIC. I am a computational linguist. It is QUITE reasonable for some massive projects to be carried out in academia. They could even be done by a lone individual. I do feel that today someone should not undertake such a project in Snobol. When one gets to doing things like writing mechanical translation systems or creating symbolic algebra interpreters or developing text understanding systems--the need is for a suitable contemporary language. These projects ALSO should not be done in Fortran or C or Pascal. They probably should be done in LISP or Prolog. They MAY be done in Fortran or C or Pascal at commercial shops because they expect to sell the resultant software to people who can only run in those environments--but in research labs or academic institutions this isn't even typically the case. Finally, I didn't intend the somewhat stronger statement that Snobol is only good for rewrites or extractions. These happen to be the two broadest classes of tasks I can think of for its use. Parsing IS a rewrite system, incidentally. So, parsing isn't something I'd have excluded even IF I had meant it in the strongest sense.
If I were to try and imagine what tasks are not rewrites or extractions I'd probably have to say numerical computations. A concordance is a rewriting task; automatic indexing is an extraction task. These seem perfectly suited to Snobol, being rather simple procedures. Ah, there is one other class of programs--generation systems. Snobol would also be excellent there. The main point of my message was that I was trying to describe the justifications I could see for someone using Snobol RATHER THAN any other language for some tasks. If one assumed that someone didn't know any programming language and was prepared to learn a language: under what circumstances should the language they learn be Snobol rather than Icon, C, Fortran, Basic, Lisp, Pascal, etc.? I believe there are reasons this is true. I also believe there are tasks in which they should not learn Snobol, but should instead learn Icon, etc. From: MCCARTY@UTOREPAS Subject: Correction of address for SGML list (14 lines) Date: 28 February 1988, 12:42:21 EST X-Humanist: Vol. 1 Num. 832 (832) On 26 February I circulated a notice about a ListServ list devoted to SGML. Interested people not using the UNIX network were told to subscribe by sending a request to "oda-request@trigraph". The address should have been "oda-request@trigraph.uucp". Willard McCarty mccarty@utorepas From: MCCARTY@UTOREPAS Subject: More on computer scientists and SNOBOL (83 lines) Date: 28 February 1988, 12:47:03 EST X-Humanist: Vol. 1 Num. 833 (833) ---------------------------- From Joe Giampapa This note is in response to Richard Giordano's note, "Computer scientists and SNOBOL". He made a couple of points which I would not agree with. Part of Giordano's argument was as follows: Who among us writes thousands of lines of code anyway? That sort of thing is developed by teams, teams that are managed.
The selection of a programming language in a team effort has much more to do with management decisions and concerns than it has to do with the intrinsic merits of this or that particular language. This explains, in part, why there are so many FORTRAN and COBOL shops around. Computer scientists, and their opinions, usually count for little or nothing in this equation. I do not see the connection between the "thousands of lines of code" and the "teams that are managed" statements. Thousands of lines of code could be written by one individual, or written by a team. Even if written by a team, it does not change the problem of a large program's readability and manageability, as embodied by that language. The criteria by which programming languages are selected, I agree, are not always reduced to the concerns of an optimal language. Future system support, the availability of coders and implementation of a language on a system, and the cost of maintaining such a system are important managerial concerns. However, it is ridiculous to state that in a way which factors out the opinions of computer scientists (or, pragmatically, the program coders). There is no way a manager can make effective project decisions without computer know-how. Neither is it intelligent to expect cost-effective work from programmers who have to work with a language which was chosen without their concerns in mind. Managers and programmers have to work together in the best interests of the project -- not in the mutually exclusive interest of one side. It disappoints me to feel I have to make such an obvious claim. "This explains, in part, why there are so many FORTRAN and COBOL shops around." I do not know much about COBOL, but I know that FORTRAN is around because it has had support of magnitude unrivaled by any other language. Most user libraries for scientific computing were written and maintained in FORTRAN, and FORTRAN has the best optimizing compilers of any language.
It makes no sense to rewrite everything in C (which was designed for writing programmer-optimized code). Finally, I would like to look at Giordano's closing paragraph: I do have an objection to Amsler's implicit message that computer scientists know best. They don't, but that's a different story... It is good that HUMANIST discussions express the author's emotional involvement in his/her side. However, we should be a little more conscious of when our emotions overtake our arguments and have no bearing on the matter. Whether Rich Giordano thought that Amsler had a vested interest in making exaggerated claims about computer scientists, or not, or whether Amsler did in fact make those claims (I do not know --- how was the originally provocative letter phrased?) are beyond the issue of Giordano's preferred style of responding to them. If Rich Giordano wanted to direct those comments to Amsler, he should have done so directly. Not only did Giordano miss the target (Amsler replies that he is a computational linguist), but he succeeded in alienating computer scientists with his irrelevant comment. As a computer science student and professional, I cannot allow such a baseless claim to go unanswered. HUMANIST is an open forum, with the message-poster assuming full responsibility for her/his statements. It is also academic in nature, permitting the unedited expression of opinions and insights. It would be a travesty to see it reduced to the rubble of a forum for expressing crass generalizations and defaming any profession or area of study. Joe Giampapa giampapa@brandeis.bitnet From: MCCARTY@UTOREPAS Subject: Icon and Snobol (31 lines) Date: 28 February 1988, 19:34:34 EST X-Humanist: Vol. 1 Num. 834 (834) ---------------------------- From Richard Goerwitz Richard Giordano's note admittedly exaggerates some points, as I see it, and does not fully explain the facts surrounding others. But I do understand the points he was trying to make.
If taken in context - as responses to equally one-sided remarks made by previous posters - they stand as an interesting corrective. It therefore seemed a bit out of place to see his ideas labeled as a case of emotions run wild, or as "crass generalizations", a "travesty", and various and sundry uncomplimentary things. The Humanist is indeed an open forum. This, however, does not mean that we have to demean each other when we get a little out of line.... -Richard L. Goerwitz goer@sophist.uchicago.edu !ihnp4!gargoyle!sophist!goer P.S. I retain the subject heading, since this does actually come as a followup to a discussion concerning two computer languages that are well suited to the needs of Humanists, namely Icon and Snobol. From: MCCARTY@UTOREPAS Subject: Call for papers, 4th Waterloo New OED Conference (69 lines) Date: 28 February 1988, 23:49:55 EST X-Humanist: Vol. 1 Num. 835 (835) ---------------------------- From Maureen Searle UNIVERSITY OF WATERLOO CENTRE FOR THE NEW OXFORD ENGLISH DICTIONARY 4TH ANNUAL CONFERENCE CALL FOR PAPERS - CALL FOR PANELISTS INFORMATION IN TEXT October 27-28, 1988 Waterloo, Canada This year's conference will focus on ways that text stored as electronic data allows information to be restructured and extracted in response to individualized needs. For example, text databases can be used to: - expand the information potential of existing text - create and maintain new information resources - generate new print information Papers presenting original research on theoretical and applied aspects of this theme are being sought. Typical but not exclusive areas of interest include computational lexicology, computational linguistics, syntactic and semantic analysis, lexicography, grammar defined databases, lexical databases and machine-readable dictionaries and reference works. Submissions will be refereed by a program committee.
Authors should send seven copies of a detailed abstract (5 to 10 double-spaced pages) by June 10, 1988 to the Committee Chairman, Dr. Gaston Gonnet, at: UW Centre for the New OED University of Waterloo Waterloo, Ontario Canada, N2L 3G1 Late submissions risk rejection without consideration. Authors will be notified of acceptance or rejection by July 22, 1988. A working draft of the paper, not exceeding 15 pages, will be due by September 6, 1988 for inclusion in proceedings which will be made available at the conference. One conference session will be devoted to a panel discussion entitled MEDIUM AND MESSAGE: THE FUTURE OF THE ELECTRONIC BOOK. The Centre invites individuals who are interested in participating as panel members to submit a brief statement (approximately 150 words) expressing their major position on this topic. Please submit statements not later than June 10, 1988 to the Administrative Director, Donna Lee Berg, at the above address. Selection of panel members will be made by July 22, 1988. The Centre is interested in specialists or generalists in both academic and professional fields (including editors, publishers, software designers and distributors) who have strongly held views on the information potential of the electronic book. PROGRAM COMMITTEE Roy Byrd (IBM Corporation) Michael Lesk (Bell Communications Research) Reinhard Hartmann (Univ. of Exeter) Beth Levin (Northwestern University) Ian Lancashire (Univ. of Toronto) Richard Venezky (Univ. of Delaware) From: MCCARTY@UTOREPAS Subject: Re: Snobol comments (26 lines) Date: 29 February 1988, 08:55:52 EST X-Humanist: Vol. 1 Num. 836 (836) ---------------------------- From Hans Joergen Marker Robert Amsler's note on Snobol made me curious as well. I don't understand what kind of tasks are so much better handled in Prolog and Lisp than in C -- so that these languages are usable for solving problems in humanities and C is not. Mind you I am asking out of ignorance. 
I am only experienced in a few languages, and, having learned C, I have no intention of learning another. I feel that anything I want to do is done very comfortably in C, especially when you make use of some of the many available libraries. Perhaps I should recommend a book: Herbert Schildt: Artificial Intelligence using C. It was that book which convinced me that AI methods might actually be of practical use after all. From: MCCARTY@UTOREPAS Subject: British natural language (21 lines) Date: 29 February 1988, 09:00:06 EST X-Humanist: Vol. 1 Num. 837 (837) ---------------------------- From Sebastian Rahtz I believe the British Government 'natural language processing' that Grace Logan refers to may be the Prime Minister saying "the National Health Service is safe in our hands" on election, and then proceeding to dismantle it as fast as she can go. I understand a number of other governments have been doing similar research. sebastian rahtz From: MCCARTY@UTOREPAS Subject: Public domain programs (36 lines) Date: 29 February 1988, 09:03:15 EST X-Humanist: Vol. 1 Num. 838 (838) ---------------------------- From Sebastian Rahtz I sympathise with Dominik W. in his views on the excellent support for TeX vs. the minimal support from big companies. But mega-fan though I am of TeX, I cannot really agree with his premise that the whole system is there for the taking by the new punter. Agreed, there ARE PD versions for the PC, Atari etc, and Unix/Vax/IBM etc people can get the tape for a minimal cost; but: a) the printer drivers are NOT generically in the public domain as TeX itself is; I know Beebe etc give away their drivers, but they might not one day b) the work required to build oneself a working TeX environment is considerable, unless you spend money. I just about maintain a TeX system on our machine in conjunction with our systems programmer, and we keep our heads above water only because we support about 10 users.
Just because it costs nothing to buy doesn't mean it's free of maintenance charges! I'd support the suggestion that HUMANISTs out there who just want to typeset their papers and books in conventional ways spend some real cash and buy a system like Publisher and get on with what they are paid for, writing & thinking, not typesetting! I except people like Dominik whose Devanagari needs are not catered for in standard software.... sebastian rahtz From: MCCARTY@UTOREPAS Subject: Ghettoes (41 lines) Date: 29 February 1988, 09:05:22 EST X-Humanist: Vol. 1 Num. 839 (839) ---------------------------- From Sebastian Rahtz I haven't yet had time to take up Willard's gauntlet and find out what HUMANISTs are interested in (I will have a go at it), but I did quickly see what words they use in their biographies: these are the 'interesting' words that occur more than 10 times:

Philosophy 11     programming 15    database 27
Text 11           Information 16    IBM 27
medieval 11       Linguistics 16    Greek 30
YORK 11           mainframe 17      text 33
History 11        literary 18       texts 37
Hebrew 11         courses 18        French 38
Language 12       student 18        language 38
Macintosh 12      history 19        teaching 50
technology 12     VAX 21            Computer 54
Latin 12          Computers 21      English 56
concordance 13    applications 23   Humanities 58
music 13          writing 24        computers 63
German 14         teach 24          Computing 69
colleagues 15     languages 25      humanities 72
science 15        Science 26        computing 93
electronic 15     literature 26     computer 96

Doesn't prove a thing, does it? Except maybe about history. Sebastian here's a *joke* for British HUMANISTs: Q. what was the catchphrase in the Egyptian telephone company privatisation? A. Tell El-Amarna From: MCCARTY@UTOREPAS (John B. Haviland) Subject: Conference on Honorifics (164 lines) Date: 1 March 1988, 09:32:49 EST X-Humanist: Vol. 1 Num.
840 (840) -------------------------------------------------------------------- Reed College, Anthropology and Linguistics Portland State University, Department of Modern Languages announce A Conference on |HONORIFICS| to be held in Portland, Oregon April 8-10, 1988 |Precursors and topics:| Since classic early work on the social effects of pronoun choice, the respectful (or abusive) elaboration of vocabulary, and proposed universal features of the linguistic expression of politeness, there has been significant interest in the pragmatics of honorifics. Despite some programmatic optimism, these varying phenomena have not been systematically related. Nor have such studies been integrated with more recent research on natural conversation and discourse, on contextual cues, or on the pragmatics of inference, in which honorific phenomena are significant components. Similarly, detailed treatments of Asian, American, and Australian languages have shown the necessity for integrating honorific categories into the central core of grammatical description in individual cases, but work on the general theoretical significance of such categories for syntax and morphology has only recently begun. This conference proposes to direct these two complementary but divergent currents towards a more unified and general theory of honorifics in language. The prospects for significant results seem substantially heightened by a conference organized around both detailed ethnolinguistic and morpho-syntactic presentations, complemented by an explicitly comparative and theoretical workshop. We have solicited contributions which deal with the following topical areas: |A. Strategies for encoding honorific (and deprecatory) categories| in morphology, syntax, and the lexicon of different languages, as well as the consequences--both typological and interactive--of such strategies: Here we have in mind different possibilities for \grammaticalizing\ pragmatic aspects of language.
A summary of the basic typological facts of honorific phenomena awaits more detailed descriptions of a wide range of particular languages. |B. Levels of honorifics within language structure:| their encoding within subject, object, speaker, and addressee categories; or in pronouns, classificatory elements; in lexical or phonological alternants; or in kinesic, paralinguistic, and gestural cues. In the best studied cases, honorific devices seem to range from speech levels and marked vocabulary (often, if not always, buttressed by marked kinesics as well) as in the cases of Javanese and Balinese, Australian "mother-in-law" languages, or Samoan respect vocabularies; to lexical alternation complemented by thoroughgoing syntacticization of honorific categories within verbal morphology (as in Korean or Japanese); to classification and gender-like systems incorporating honorific components (as in languages of the Americas, such as Nahuatl or Mixtec) or systems of "politeness affixes"; to familiar pronominal alternations, for example in the languages of Europe, or in a more elaborate form in the highly developed systems of address terms found in South Asia. What warrant, conceptual or analytic, is there for treating these phenomena together, either at the level of language use or language structure? |C. Honorifics (and their relatives) as tools, or fuel for linguistic argumentation and theory:| convincing evidence that honorific categories bear on syntactic theory derives from work on Japanese and Korean, but no doubt can be found elsewhere as well, once the syntactic and morphological facts are sufficiently well described. Recent work on the 'iconicity' (or principled motivation) of morphological encoding may need revision on the basis of fuller treatment of such ill-understood categories as honorifics.
In a somewhat similar way, the seemingly inescapably indexical and situated nature of honorific usage seems to pose a challenge to current semantic theories in a way that will be useful to explore. |D. Historical change and evolution of honorific devices in language:| we have in mind such matters as: the evolution of terms of respect and disrespect; the gradual 'downgrading' of markers on honorific scales; the history of social arrangements (revolutions, or religious conversion, for example), and corresponding reflexes in the linguistic expression of honorific categories; and so on. The literature contains suggestive proposals connecting pronominal elaboration, for example, with particular political institutions or ideological movements--connections that might well be explored in the light of current ethnography. |E. Sociolinguistic and ethnographic aspects of honorific use|, in relation to the place of language in wider social theory. Rich ethnographic description of deference and respect, in natural conversation and interaction, is a necessary precursor to a more widely applicable pragmatic account. Moreover, as socially potent linguistic tokens, honorifics serve as particularly striking instances of language as social action, moves in the discourse of power. |F. The interaction of honorifics with related but perhaps distinct phenomena:| candidates include politeness, respect, deference, formality, and their opposites: rudeness, insult, abuse; joking relationships, avoidance relationships; formality, informality, casual and non-casual interaction. Just as, at the level of language structure, honorific categories often attach to vehicles with different grammatical characters (pronoun and agreement systems, verbal or nominal classifiers, for example), in their social circumstances their use may be linked to specific situations, relationships, or contexts.
The components of such interaction must be disentangled for a candidate theory to succeed. |G. Case studies of honorific systems|: we will solicit studies from various sociocultural traditions and contexts, with an emphasis on the wider ethnographic significance of honorifics, and their placement within both a general linguistic or ethnographic framework, and an adequate social theory. For further information, please contact John Haviland Linguistics and Anthropology Reed College Portland, OR 97202 (503)771-1112, 771-1197 From: MCCARTY@UTOREPASHumanists' Group Subject: Magneto-optical disks (33 lines) Date: 1 March 1988, 09:39:01 EST X-Humanist: Vol. 1 Num. 841 (841) ----------------------------------------------------- [Taken from ``The Olympus Pursuit'', Vol. 7 No. 1 1988] Magneto-optical discs can now hold a prodigious amount within their 5 1/4-inch format. Now a magneto-optical disc drive offers the means to read, write and erase the data in these information warehouses. .. many have agreed that the future of data storage lies in the magneto-optical disc, a system that can hold as much as 500 floppy discs ... .. After four years of work, Olympus researchers have answered the demands with a new magneto-optical disc drive ... .. Momentum gathered with the development of a laser-optical pickup system. Still other contributions came from development of a new high-speed servo-motor, a linear motor, an error-correction controller and central processing unit. Since the field was new to Olympus, the project took longer than expected. The results, however, set the performance standards in an industry looking toward an estimated disc market of $2,200 million by 1990 ... For further information, contact the Optical Memory Department, Olympus, Tokyo. From: MCCARTY@UTOREPASDavid Sitman Subject: Nota Bene list (53 lines) Date: 1 March 1988, 09:43:02 EST X-Humanist: Vol. 1 Num.
842 (842) --------------------------------------------------------- Dear Colleagues, We have started a new electronic mail special interest list at Tel Aviv University, the Nota Bene discussion list. This list is intended as a forum for discussing issues related to the word processor/textbase, Nota Bene, exchanging tips, asking questions, trading programs, etc. This discussion list is made possible by the list-serving software, LISTSERV. Here is how it works: A list of the members is kept in TAUNIVM. Whenever anyone sends electronic mail (or a file) to NOTABENE@TAUNIVM, the mail (or file) is automatically distributed to all list members. All requests to join the list, leave the list, etc., should be sent to LISTSERV@TAUNIVM, and NOT to NOTABENE@TAUNIVM. To join the list, or to correct your name on the list, send the SUB command: SUB NOTABENE Your Name For example, if I wanted to add my middle initial to my name (I don't), I would use the command: SUB NOTABENE David B. Sitman Note that you send your name, not your computer code. To leave the list, send LISTSERV@TAUNIVM the command: UNS NOTABENE LISTSERV will accept the commands either as interactive messages or as mail. For example, from VM/CMS in BITNET you can send the command: TELL LISTSERV AT TAUNIVM UNS NOTABENE From VAX/VMS, you could use: SEND LISTSERV@TAUNIVM SUB NOTABENE My Name From any computer you can send electronic mail. Make sure that the command that you want to send to LISTSERV is in the mail body, not in the header (e.g., not in the SUBJECT field). The LISTSERV command processor ignores mail headers. We have also made available some programs written for Nota Bene use by Itamar Even-Zohar. Explanations of what these files are and how they can be obtained will appear shortly in the Nota Bene discussion list. David Sitman From: MCCARTY@UTOREPASROBERT E. SINKEWICZ (416) 926-7128 ROBERTS at UTOREPAS Subject: SQL and Humanities Research (23 lines) Date: 1 March 1988, 09:51:40 EST X-Humanist: Vol. 1 Num.
843 (843) --------------------------------------------------------- I am preparing a report on SQL Database Software and Humanities Research. It will be based in the first instance on our experiences in the Greek Index Project. If there are other installations of SQL software, large or small, mainframe, mini or micro being used by humanities projects, I would be interested in hearing of their existence. When my report is ready I will post it on the HUMANIST file storage area for anyone who may be interested. Robert Sinkewicz Greek Index Project Pontifical Institute of Mediaeval Studies ROBERTS@UTOREPAS From: MCCARTY@UTOREPAS Subject: The Global Jewish Database and its software (133 lines) Date: 1 March 1988, 12:38:41 EST X-Humanist: Vol. 1 Num. 844 (844) Dear Colleagues: Some time ago I made reference to the "Global Jewish Database," and in particular to its full-text retrieval software. We were, as you'll recall, discussing the specific things that humanists want to do with large amounts of text, e.g., as provided on a CD-ROM. Since I've received a few enquiries about this Database, and since my Centre is one of the few places outside Israel connected to it, I thought I'd send along the following brief description. Willard McCarty mccarty@utorepas The Global Jewish Database is a 70-million-word electronic corpus in Hebrew accessed by means of a full-text retrieval system running on an IBM mainframe at Bar-Ilan University in Israel. The largest part of this database (53 million words, 253 volumes, 53,000 documents) comes from the "Responsa literature," a collection of rabbinical answers to questions about all aspects of Jewish life and culture. This literature spans the millennium from the tenth to twentieth centuries and originates from more than 50 different countries. It thus comprises a very rich storehouse of information on Jewish law, history, philosophy, ethics, customs and folklore, and is of interest for both religious and secular scholarship. 
The other parts of the database include the Hebrew Bible (fully vocalized, with marks of cantillation), the Babylonian Talmud, the Midrash literature, Maimonides' Code, and various medieval biblical commentaries. By arrangement with the Institute for Information Retrieval and Computational Linguistics at Bar-Ilan, the Database is accessible to anyone with a PC and a modem via a telecommunications network such as Telenet or Datapac. To date connections have been established at the Rabbinical Court of London, at the Institute for Computers in Jewish Life (Chicago, Illinois), and the Centre for Computing in the Humanities, Univ. of Toronto, besides 25 connections at various research centers in Israel. The connection in Toronto is the only one to a research institution outside Israel; so far it has been used both by members of the local Jewish community and by academics. Computing humanists without interest in the contents of the Database will, however, likely be interested in its software. Although this full-text retrieval software was developed at the Institute in Bar-Ilan especially for the Hebrew Database, it has recently been adapted for use with English language texts. It allows searching for words and phrases with positive and negative proximity operators on the word, sentence, and paragraph levels, and for combining different queries with boolean operators. Besides a full pattern-matching component (with left, middle and right truncations, wild cards, and boolean constraints on the occurrence of strings in the pattern), it embodies a sophisticated morphological component that allows the user to retrieve all grammatically derivative forms of a word automatically by specifying merely its lemma. 
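The lemma-based retrieval just described is easy to picture with a small sketch. The Python below is purely illustrative (the lemma table and toy corpus are invented; the Bar-Ilan software itself of course works quite differently and at vastly greater scale): querying the lemma "see" pulls back every inflected surface form.

```python
# Illustrative lemma-based retrieval: a query names a lemma, and the
# system expands it to all derivative surface forms before searching.
# The lemma table and corpus here are invented for demonstration.

LEMMAS = {
    "see": {"see", "sees", "saw", "seen", "seeing"},
    "be":  {"be", "is", "am", "are", "was", "were", "been"},
}

def retrieve(lemma, corpus):
    """Return (position, word) pairs for every form of `lemma`."""
    forms = LEMMAS.get(lemma, {lemma})
    return [(i, w) for i, w in enumerate(corpus) if w in forms]

corpus = "he saw the saw and had seen it before she sees".split()
hits = retrieve("see", corpus)
# hits == [(1, 'saw'), (3, 'saw'), (6, 'seen'), (10, 'sees')]
```

Note that the homographic noun in "the saw" (position 3) is retrieved along with the verbs; weeding such occurrences out is a separate step, done interactively in the real system.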
Through the technique of "short-context disambiguation" the system also permits most unwanted occurrences of a word to be eliminated before full retrieval: occurrences are grouped and listed with their nearest verbal neighbours, either to the left or right, and from this list the user selects which are to be retrieved. Since in general the number of different neighbours that are also relatively frequent turns out to be quite small, the user can disambiguate a large number of occurrences quickly. For example, "see" causes "see", "sees", "saw", and "seen" to be retrieved, while the difference between "he saw" and "the saw" allows the homographic noun to be identified and eliminated immediately. Experiments have shown that native speakers have a high degree of proficiency in making such choices on the basis of very limited contexts. By statistically analyzing some of the retrieved relevant documents, a local feedback module can suggest to the user new words to be included in his query. A local thesaurus element allows the user to define and later edit families of related terms to be considered equivalent and to be automatically retrieved whenever the family name is mentioned in the query. Specialized communications software with error-correction and compression modules handles all necessary protocol conversions and ensures that the rapid response time of the software is practically unaffected by the relatively slow 1200-baud speed of transmission across telephone lines. Experience with the Database from North America, for example, suggests that a 10-second response is not uncommon. Research for the Database was initiated and conducted from 1966 to 1975 by Prof. Aviezri Fraenkel and developed since then at the Institute for Information Retrieval and Computational Linguistics, Bar-Ilan University, Ramat-Gan, Israel 52100, by its Head, Prof. Yaacov Choueka, and colleagues. For information contact Prof.
Choueka at that address; telephone: 03-53718716. The following references may be of interest: Choueka, Yaacov. "Computerized Full-Text Retrieval Systems and Research in the Humanities: The Responsa Project." CHum 14 (1980): pp. 153-169. ... R. Attar, N. Dershowitz, and A.S. Fraenkel, "KEDMA--Linguistic tools for retrieval systems", J. of the Assoc. Comput. Mach., 25 (1978) pp. 52-66. ... "Linguistic and Word-Manipulation Components in Textual Information Systems." In The Application of Mini- and Micro-Computers in Information, Documentation and Libraries. Ed. C. Keren and L. Perlmutter. Elsevier, 1983. pp. 405-17. ..., S.T. Klein and E. Neuwitz. "Automatic Retrieval of Frequent Idiomatic and Collocational Expressions in a Large Corpus." ALLCJ 4.1 (1983) pp. 34-38. ... and Serge Lusignan. "Disambiguation by Short Contexts." CHum 19 (1985) pp. 147-157. See also "List of Publications (In English)" Institute for Information Retrieval and Computational Linguistics (The Responsa Project), Bar-Ilan Univ., August 1987. *****END***** From: MCCARTY@UTOREPASRichard Giordano Subject: Computer scientists etc..... (66 lines) Date: 1 March 1988, 12:49:12 EST X-Humanist: Vol. 1 Num. 845 (845) --------------------------------------------------------- My original message regarding SNOBOL and large systems was not meant to attack computer scientists, nor did I even suggest that Amsler is a computer scientist. I should by way of explanation mention that I am completing a dissertation on the early history of data processing, and the emergence of computer science in the early 1960s. Some of what I have observed in my research guides the way that I approach this issue of large systems, management, and the choice of programming languages. Giampapa is absolutely correct when he states that the selection of programming languages is not always reduced to concerns of the optimal language.
He then mentions such factors as "future programming support, the availability of coders and the implementation of the language on a system, and the cost of maintaining such a system are important managerial concerns." From what I've gathered, they are not merely important concerns--they are absolutely critical in the choice of a language. Equally important is the language that the shop is already using. You'd be surprised what has been programmed in COBOL and FORTRAN, especially after ALGOL was introduced. You'd be even more surprised to learn which features in languages (like the use of based variables in PL/I) are forbidden because of what managers considered their intrinsic difficulty to learn. A fundamental difference between computer scientists and data processors is that the managers are mainly interested in getting the job done with the least amount of present or future disruption. From what I can tell, programmers and managers don't work together in selecting a language, and programmers (coders) are really something like a high-tech proletariat (but let's not take the analogy too far). Anyway, if a shop is programming in FORTRAN or COBOL, it would take an awful lot of convincing to switch to some other language. Anyhow, this is how commercial shops work--and I stand by that. Academic environments may not function in exactly the same way. But let's face it, academic shops are small potatoes in the grand scheme of things. As for computer scientists... Geeze, I wasn't trying to insult anyone, and I am surprised over the vehemence of Giampapa's reply. In fact, upon re-reading my message, I still don't think I've said anything to "alienate computer scientists." But Giampapa uses a word that is critical in understanding a computer scientist and a practitioner--that word is "professional." Nothing is intrinsically wrong with that, but what about those who do computing, but who are not computer scientists?
The answer, as it has panned out throughout the sixties and seventies, is that the others are outside of the core information circuits, outside of the means by which they self-consciously think of themselves as computer scientists, outside of the profession. C'mon, that's the nature of professions, and we all know it. What makes this relevant? Sometimes practitioners, trained in something other than computer science, come up with solutions that may not (or may) have anything to do with how computer scientists do things. What's to make one solution better than the other? Most of us on the HUMANIST are not computer scientists, and many of the solutions to our problems may seem 'unprofessional'--I'm not putting words in anyone's mouth here. That brings me to my original statement that computer scientists are not always right. In one environment they may be right, but the world's a big place, and there is often more than one way to skin a cat. From: MCCARTY@UTOREPASSusan Hockey Subject: PostScript non-standard fonts? (19 lines) Date: 1 March 1988, 12:52:25 EST X-Humanist: Vol. 1 Num. 846 (846) --------------------------------------------------------- Does anybody know of any PostScript versions of non-standard fonts which are either public domain or available on a site-licence? We are particularly interested in Greek, Hebrew, Arabic and Cyrillic as well as an extended Latin alphabet including Old English characters. Susan Hockey SUSAN@VAX.OXFORD.AC.UK From: MCCARTY@UTOREPASJoe Giampapa Subject: Apology (21 lines) Date: 1 March 1988, 20:08:45 EST X-Humanist: Vol. 1 Num. 847 (847) --------------------------------------------------------- To Rich Giordano and HUMANISTs: I would like to apologize for my reply to Rich's [2-26 12:33] letter. Also, I would like to retract the last paragraph of that letter -- it clearly renders me a hypocrite. I should not have responded with such strong language to something of which I did not know the full context.
With sincere regrets, Joe Giampapa giampapa@brandeis.bitnet From: MCCARTY@UTOREPASJoe Giampapa Subject: Prolog and Lisp (72 lines) Date: 1 March 1988, 20:15:24 EST X-Humanist: Vol. 1 Num. 848 (848) --------------------------------------------------------- Joergen Marker recently asked about what tasks Prolog and LISP are better suited for. Here is a brief description and attempted explanation off the top of my head. If people would like a more thorough explanation and some examples of code from each, I will have to get back to them after digging up some archived programs and notes. Prolog first: (see also "Programming in Prolog" by Clocksin and Mellish, Springer-Verlag) Prolog creates and manages a database of facts. The command structure analogous to other languages' "procedures" and "functions" is the Prolog "predicate", which basically evaluates to "true" or "false". The arguments to the predicate are the objects you are looking for. A common warm-up exercise is to create a Prolog genealogy (note that constants begin with lower-case letters, variables with upper-case, and goals are joined by commas):

human(mary, female).
human(john, male).
human(bill, male).
parent_of(mary, bill).
parent_of(john, bill).
offspring(bill).
parent(X) :- human(X, _), parent_of(X, Y), human(Y, _), offspring(Y).

?- parent(Z).
Z = mary
etc. A package that comes with Prolog is DCGs (definite clause grammars), which correspond roughly to context-free grammars. They are a sort of "macro" for Prolog, which use the above-described structure for doing parsing. Here is an example of a DCG grammar which will parse a Pascal program. The lower case names are literals which the parser looks for in the program text, the upper case names are further DCG expansion rules. Pascal --> program NAME ( DEVICE-LIST ) ; const CONSTANTS type TYPES var VARIABLES begin BODY end. NAME --> [_] ; means accept anything . . . BODY --> .... With some limited success, you can achieve versatility in parsing English (or any other "natural language") sentences. LISP is an nth-order language, which means you can derive any language from it (including Prolog).
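For readers without a Prolog system at hand, the genealogy above can be mimicked in a rough Python sketch (my own illustration, not Giampapa's code): the facts become tuples in sets, and the parent rule becomes a generator that joins them the way Prolog's conjunction of goals does.

```python
# Rough Python analogue of the Prolog genealogy above: facts are stored
# as tuples, and the parent rule is a generator that joins them.

human     = {("mary", "female"), ("john", "male"), ("bill", "male")}
parent_of = {("mary", "bill"), ("john", "bill")}
offspring = {"bill"}

def parent():
    """Yield every X for which the Prolog query parent(X) would succeed."""
    for x, _gender in human:                   # human(X, _)
        for px, child in parent_of:            # parent_of(X, Y)
            if px == x and child in offspring: # offspring(Y)
                yield x

parents = sorted(set(parent()))
# parents == ['john', 'mary']
```

The comparison also shows what the Prolog version buys you: in Prolog the rule is a declarative statement and the interpreter supplies the search; in Python the loops that do the searching must be written out by hand.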
Prolog is only 1st-order and cannot be designed to write a LISP interpreter (that is, without extreme hacks and "impure" Prolog constructs). For this reason, you can write anything you want in LISP. However, LISP code usually does not conceptualize some constructs as elegantly as Prolog does. I hope this helps out. I can fill in any specific "holes" people have questions about. Joe Giampapa giampapa@brandeis.bitnet From: MCCARTY@UTOREPASJohn J. Hughes Subject: More on the Global Jewish Database (28 lines) Date: 1 March 1988, 20:19:12 EST X-Humanist: Vol. 1 Num. 849 (849) --------------------------------------------------------- Perhaps those HUMANISTs who inquired about IRCOL's Global Jewish Database and the Responsa Project might find the detailed review of those projects in the June 1987 _Bits & Bytes Review_ (pp. 7-12) helpful. That review was written with the cooperation and assistance of Yaacov Choueka, so it is both up-to-date and accurate. I'll be glad to send a free copy to any interested HUMANISTs. They may contact me as XB.J24@Stanford or c/o Bits & Bytes Computer Resources 623 Iowa Ave. Whitefish, MT 59937. John J. Hughes From: MCCARTY@UTOREPASamsler@flash.bellcore.com (Robert Amsler) Subject: Programmers (47 lines) Date: 1 March 1988, 23:29:30 EST X-Humanist: Vol. 1 Num. 850 (850) --------------------------------------------------------- There are two usages of the word programmer which need to be clarified. In the Data Processing/Business world, the term `programmer' is used for the basic generic worker, akin to `secretary', who does what they are told the way they are told to do it. These people do indeed work in `shops' and in horrible languages on business machines (i.e. IBM's) a lot. They make the bank payroll programs (and the phone system and its billings) work. The other use is that commonly employed in academia, which refers to anyone who writes programs.
Typically such people ALSO originate the task the program is trying to accomplish (in the DP world this would mean they are more like ``system analysts'' than ``programmers''). These people have varied backgrounds and may hold advanced degrees in all sorts of fields. The terms `hacker' and `wizard' are used to refer to the most revered members of the group--hacker meaning someone who figures out how to do things without being told how and wizard being someone who KNOWS how things can be done (having perhaps figured them out via hacking). The discussion about programmers and computer scientists seems plagued by a misunderstanding as to which group of `programmers' is being discussed. A language such as SNOBOL or LISP or PROLOG would have very little use among the members of the ``programming shop'' community. However, among the members of the latter group, these languages might be at least equal in their use to those of FORTRAN, COBOL, etc. It would be inappropriate to describe the ``programmers'' in EITHER environment as not amounting to much. On the business side, the world would cease to run if those programmers stopped doing their daily routine programming tasks. On the academic side, nearly all the advances in computer science have come from university environments. This includes time-sharing, computer graphics, several computer languages, database management, information science, etc. Robert Amsler From: MCCARTY@UTOREPASBob Kraft Subject: CD-ROM Configurations (43 lines) Date: 2 March 1988, 12:53:00 EST X-Humanist: Vol. 1 Num. 851 (851) --------------------------------------------------------- A week ago, Joe Giampapa asked for information about using CD-ROM technology, and John Gleason (PHI) responded to part of Joe's question by outlining the steps in preparing a laser disk. The costs of such mastering fluctuate, but seem to be in the neighborhood of $2000 to $3000 at present (John would know more accurately).
That, of course, does not include the countless hours of formatting the files, preparing the ID tables, etc., prior to sending it off for mastering. As for hardware and software to access a CD-ROM from an IBM type microcomputer, one needs a reader (we use the Sony, which is also a standard component of the micro-IBYCUS Scholarly Computer) and an interface/controller card. The cost here is around $750, unless it has dropped recently, and doubtless varies from manufacturer to manufacturer or dealer to dealer. To use the new "High Sierra" format TLG "C" disk or the PHI/CCAT demonstration disk, one also needs the "DOS Extension" software from MicroSoft that is available from various sources at minimal cost (we paid $10). Finally, for the aforementioned CD-ROMs, software to decode the IDs in the text, etc., is necessary to make things work effectively. CCAT is preparing such software, and plans to make it available as a basis for further cooperative development and/or use at fairly nominal cost (hopefully under $100). Of course, for those fortunate enough to have purchased a complete IBYCUS SC system ($4000, including Sony CD-ROM reader), all the necessary hardware and software comes with the package and works efficiently and impressively from the start. It will take some time before people with IBMs or other machines can come anywhere close to the IBYCUS standard. Bob Kraft (CCAT) From: MCCARTY@UTOREPASJoe Giampapa Subject: More on Prolog (70 lines) Date: 2 March 1988, 21:03:05 EST X-Humanist: Vol. 1 Num. 852 (852) --------------------------------------------------------- I have received a few questions relating to my posting on Prolog, yesterday. Here they are: 1. "Has Prolog been standardized?" As far as I can tell, the most "standard" description of the language is that given by Clocksin and Mellish, "Programming in Prolog" (Springer-Verlag). All implementations must satisfy the minimum requirements as stated there. 
Some implementations may have more features, but that book is the bare-bones. I think people at the University of Edinburgh would be better authorities on what is the language "standard". 2. "Can it 'deal well' with non-Western-European languages?" I assume that this makes reference to the difficulties imposed by strings of characters in a non-Roman character set. What one would have to do is include in one's Prolog code routines which would map from an ASCII character set (or whatever your machine supports) to one's own character fonts. The Prolog interpreter does not care what symbols it manipulates, so you could conceivably run Prolog in a special character display package. The degree of "well"ness or efficiency depends on how well this has been implemented at your site, and the sophistication of your tools. I do not know of any "alternate character set" packages which will keep your system "standard" in a portable way, if you do need them. 3. "Will it really be worth it to make the effort of learning Prolog?" Some tasks are well suited for Prolog, and the concise representation it can give you would be well worth while. Whether it is "worth it" depends on the nature of your work. It might well be worth some time and effort to become more familiar with it for future reference. 4. "Where is Prolog's strong point?" In a simplistic way, the strength of a Prolog system is in the way it searches for items in its database. As long as it can find something to satisfy its goals, it will proceed down to the next level, look there, and continue. If it cannot find something to satisfy its goal, it will "backtrack", which is a little complicated to explain. In practice, if you are parsing a sentence which can be parsed in more than one way, Prolog will automatically return the alternate parse trees. 5. "Could you offer a sample of [...] something that all of us might find useful - and explain what all the little nick-nacks mean?
Then could you do the same for Lisp in another posting?" OK. What would be useful to all? A complete parsing program with full documentation? Something else? Send me some suggestions and I will pick something from them. Finally, I would like to throw the door wide open for additional commentary. Some people know Prolog, and have sent me notes in response to my posting. Please, publicly correct me and add to what I say. From: MCCARTY@UTOREPAS Subject: Request for announcements of conferences; plan for same (20) Date: 3 March 1988, 00:51:30 EST X-Humanist: Vol. 1 Num. 853 (853) Would anyone with an announcement of an upcoming conference or call for papers please send me the notice so I can post it on our file-server? My request applies to all such notices, whether they have been circulated on HUMANIST or not, except for notices of conferences that have already taken place. My plan is to keep current announcements on the server and to distribute to all of you only a brief summary of the upcoming events. Comments, objections, shouts of relief? Willard McCarty mccarty@utorepas From: MCCARTY@UTOREPASTom Flaherty Subject: Software for "roots"? (33 lines) Date: 3 March 1988, 09:18:29 EST X-Humanist: Vol. 1 Num. 854 (854) --------------------------------------------------------- I have recently been bequeathed the "family tree" for my mother's side of my family. My uncle worked on it quite vigorously for about 40 years, so it now contains thousands of names/dates and a great deal of narrative information. I would like to organize all of this stuff (the amount of paper is incredible) into a "database" that will make it less forbidding and more accessible. Also, should I ever have the time and energy, I would like to be able to fill in a blank or two in the historical information, and I am now obligated to update the near end of the tree with current events -- births, deaths, etc. So, my question is: Does anyone know of any *good* system for genealogical record keeping? 
Ideally, it should provide for a virtually infinite number of entries and links AND be able to hold textual information (biographies) in some organized way. Probably, one of the off-the-shelf database packages could be used. I just don't know enough about them to choose the most likely products to investigate. Is this a hyper/card/text application? I have a PC clone but would like to hear about systems for any hardware. My main concern is to get the material into a "permanent" system. There is room in a life to do this sort of thing only once. Any advice will be greatly appreciated. Thanks. --Tom Flaherty flaherty@ctstateu.bitnet

From: MCCARTY@UTOREPAS (Francois-Michel Lang) Subject: Software for roots (24 lines) Date: 3 March 1988, 19:23:00 EST X-Humanist: Vol. 1 Num. 855 (855) ---------------------------------------------------------

This may sound like an odd idea, but I know that the Mormon church has investigated precisely this problem and invested quite a lot of time and money in developing genealogical software. I have never seen or read about any of their methods, but I spoke with some LDS (Latter-Day Saints) Church members at a Logic Programming conference in Salt Lake City a couple of years ago, and learned of the LDS Church's interest in this sort of thing. That might be a place to start... Francois-Michel Lang Paoli Research Center, Unisys Corporation lang@prc.unisys.com (215) 648-7469 Dept of Comp & Info Science, U of PA lang@cis.upenn.edu (215) 898-9511

From: MCCARTY@UTOREPAS (Mark Olsen) Subject: Genealogy (24 lines) Date: 3 March 1988, 19:27:57 EST X-Humanist: Vol. 1 Num. 856 (856) ---------------------------------------------------------

Tom Flaherty's request for info on genealogy systems may be of more general interest, since historians frequently have to represent family linkages on large amounts of data. The best system for this seems to be GENESYS, developed by Mark Skolnick et al.
You might consult "A Computerized Family History Data Base System," *Sociology and Social Research* 63 (April 1979): 506-523. There is a good description of the application of this system in the Saguenay project by Gérard Bouchard, "The Processing of Ambiguous Links in Computerized Family Reconstruction," *Historical Methods* 19 (Winter 1986): 9-19. Nominal record linkage and family reconstruction are areas where historians (bless our souls) have been developing computer methods in innovative and interesting ways, just in case we were getting too worried about the "literary ghetto" some HUMANISTS are afraid of.

From: MCCARTY@UTOREPAS (Michael Sperberg-McQueen) Subject: Prolog and logic, Prolog and parsing (76 lines) Date: 3 March 1988, 19:33:27 EST X-Humanist: Vol. 1 Num. 857 (857) ---------------------------------------------------------

Two points worth stressing about Prolog for humanists who think they may be interested in learning it:

1 Prolog, as its name implies, is an attempt to make it possible, or nearly possible, to write 'programs' that are nothing more than sets of statements expressed in first-order predicate calculus. While the relations between Prolog and symbolic logic as you may have learned it in Philosophy 150 are not always obvious, they are there, and for people interested in symbolic logic Clocksin and Mellish's Chapter 10 ('The Relation of Prolog to Logic') makes fascinating reading. It clarifies the nature of the inferences a Prolog system can make, points out the various ways in which Prolog's version of logic loses nuances that can be important to the logical form of propositions in the predicate calculus, and develops a method for reducing predicate-calculus statements to Prolog clauses. (An appendix gives Prolog programs that will do the transformation for you, but it's worth doing it manually on some samples first.) That is, the logical underpinnings of Prolog as a system are almost as interesting as what you can do with it as a language.
***If you think symbolic logic is or can be beautiful, I think you'll like Prolog.*** It should be noted, though, that it can be hard to switch from the procedural, step-by-step style of analysis one acquires from other programming languages, to the non-procedural, declarative interpretation of Prolog programs. There turn out to be lots of things I know how to do step by step that I cannot define readily in predicate calculus. And since Prolog programs have both a declarative and a procedural interpretation (that is, they *are* programs that are supposed to *do* things), working with Prolog can induce some intellectual dizziness.

2 Prolog provides a convenient way to parse expressions using Chomsky-like rewrite rules, and many implementations provide facilities for working with a special notation for such rewrite rules. (This is the 'definite-clause grammar' notation mentioned by Joe Giampapa.) I am less enthusiastic about this than most people seem to be, because these facilities enforce a specific left-to-right parsing strategy that seems on the whole better suited for things like Pascal programs or SGML document component hierarchies than for natural language texts. (And even for such unnatural grammars I am having trouble finding a definite clause notation that correctly parses an SGML document -- maybe my fault and not that of the notation, but still frustrating. Has anyone else done this sort of thing with better success? Write me if you have.) But even if one ignores the built-in 'grammar' notations and writes one's own parser, Prolog handles a lot of the details more conveniently (*NOT* faster!) than other languages. To parse a lot of text, I'd almost surely want to write a parser in some other language, for speed. But only after working out the parsing strategy with Prolog.
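The point about rewrite rules and alternate parses can be made concrete outside Prolog as well. Below is a minimal sketch of a backtracking rewrite-rule parser, written in Python as a modern stand-in for the Prolog original; the toy grammar and the example sentence are invented for illustration. Like a definite-clause grammar, it tries each rule for a symbol in turn and backtracks to enumerate every parse of an ambiguous sentence:

```python
# A minimal backtracking parser over Chomsky-style rewrite rules.
# Each nonterminal tries its rules in order; generators give us
# Prolog-style backtracking, so every alternate parse is produced.

GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"], ["Det", "N", "PP"], ["Pro"]],
    "VP":  [["V", "NP"], ["V", "NP", "PP"]],
    "PP":  [["P", "NP"]],
    "Det": [["the"]],
    "N":   [["man"], ["telescope"]],
    "Pro": [["i"]],
    "V":   [["saw"]],
    "P":   [["with"]],
}

def parse(symbol, words, start):
    """Yield (tree, end) for every way `symbol` can span words[start:end]."""
    for rhs in GRAMMAR[symbol]:
        if len(rhs) == 1 and rhs[0] not in GRAMMAR:        # lexical rule
            if start < len(words) and words[start] == rhs[0]:
                yield (symbol, rhs[0]), start + 1
        else:                                              # phrase rule
            yield from expand(rhs, words, start, (symbol,))

def expand(rhs, words, pos, tree):
    if not rhs:                       # matched the whole right-hand side
        yield tree, pos
        return
    for subtree, nxt in parse(rhs[0], words, pos):         # backtracking point
        yield from expand(rhs[1:], words, nxt, tree + (subtree,))

def parses(sentence):
    words = sentence.split()
    return [tree for tree, end in parse("S", words, 0) if end == len(words)]
```

For "i saw the man with the telescope" the parser yields two complete trees, one attaching the prepositional phrase to the verb phrase and one to the noun phrase, which is exactly the behaviour Prolog gives you for free through backtracking.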
***For developing a parser, Prolog has a lot of advantages.*** Finally, a note on products: Borland's Turbo Prolog is very nice, seems to run fast, and has a convenient (though complicated) multiwindow interface. But they achieved the speed by leaving out some of the key features of Prolog, qua logical system. You may not feel that you've thrown your money away by buying Turbo Prolog, but if you are interested in logic, then you will eventually want a fuller implementation. (I have not compared them all, but have been happy with Arity on the PC and the Waterloo Core Prolog interpreter on our IBM mainframe.) Michael Sperberg-McQueen, University of Illinois at Chicago

From: MCCARTY@UTOREPAS (David Nash) Subject: Software for roots (28 lines) Date: 3 March 1988, 19:41:06 EST X-Humanist: Vol. 1 Num. 858 (858) ---------------------------------------------------------

I would like to second the inquiry of Tom Flaherty (message of 3 March 1988, 09:18:29 EST). For MS-DOS and the Mac, the main contenders I know about are: (1) Quinsept's Family Roots (Mac version reviewed favourably in Feb. 1988 MacWorld pp. 213-4, but not mentioning that the complete equivalent of the MS-DOS version is not yet available); (2) Personal Ancestral File from the Church of the Latter-Day Saints. I would like to hear any information about the latter, not having yet tried to pursue it through Salt Lake City. There was a newsletter "Genealogical Computing" published in Virginia; it may still exist. -DGN

From: MCCARTY@UTOREPAS (Charles Faulhaber, cbf%faulhaber.Berkeley.EDU@jade.berkeley.edu) Subject: More roots to follow.... (15 lines) Date: 3 March 1988, 19:45:22 EST X-Humanist: Vol. 1 Num. 859 (859) ---------------------------------------------------------

I have seen advertised (but know nothing about) a program which I believe was originally developed for the Mormon Church; and I suspect that inquiries to their genealogical society in Salt Lake City might be fruitful.
From: MCCARTY@UTOREPAS (Norman Zacour) Subject: More roots (20 lines) Date: 3 March 1988, 23:32:15 EST X-Humanist: Vol. 1 Num. 860 (860) ---------------------------------------------------------

I used to see advertisements for a product called Family Tree, which

From: MCCARTY@UTOREPAS (Norman Zacour) Subject: Of the tracing of roots there is no end (18 lines) Date: 3 March 1988, 23:34:59 EST X-Humanist: Vol. 1 Num. 861 (861) ---------------------------------------------------------

Now there is a program called Roots II, recently advertised in PC Magazine. "Organize your family tree and print camera-ready family books containing charts, text and indexes. Store, retrieve and display thousands of family facts with biographical sketches and source documentation. Lightning-fast searches and sorts. 250 page manual. Free brochure. Price: $195 (US)." Commsoft, 2257 Old Middlefield Way, Ste. A, Mountain View, CA 94043 (ph. 415-967-1900).

From: MCCARTY@UTOREPAS (RD_MASON@VAX.ACS.OPEN.AC.UK) Subject: Computer-mediated communication (22 lines) Date: 4 March 1988, 09:24:39 EST X-Humanist: Vol. 1 Num. 862 (862) ---------------------------------------------------------

I wonder if I might use this forum to announce a conference hosted by the Open University, U.K., on computer-mediated communication in distance education. It is to take place Oct. 8 - 11 and will consist of a workshop on the Open University's large-scale use of the conferencing system CoSy on an Information Technology course with 1500 distance students, and a colloquium with invited educators and researchers involved in a variety of educational applications of CMC. Because of limited accommodation here, numbers will be limited to 100 participants. If you are interested and want more details, please send a mail message to RD_Mason@uk.ac.ou.acsvax. [that's rd_mason@vax.acs.ou.ac.uk for those on Bitnet &c.
-- WM]

From: MCCARTY@UTOREPAS (Hans Joergen Marker) Subject: Prolog and Lisp (24 lines) Date: 4 March 1988, 09:31:39 EST X-Humanist: Vol. 1 Num. 863 (863) ---------------------------------------------------------

In my earlier note I posed the question "what kind of tasks are so much better handled in Prolog and Lisp than in C....?". In Joe Giampapa's answer I see no reference to C. An important question (at least to me) is the performance of the generated code. What I find in the answer are examples of specific program syntax, which naturally could not be used unaltered in C, but which on the other hand could be replaced by structures, pointers and functions in a quite legible way. The performance of a C solution to a specific problem would be considerably better (I suspect) than the solution to the same problem in Lisp or Prolog. So the question remains: "Are there problems out there that you can't solve in C, but only in other languages?"

From: MCCARTY@UTOREPAS (Tom Flaherty) Subject: Overwhelmed by the roots-software (32 lines) Date: 4 March 1988, 09:35:08 EST X-Humanist: Vol. 1 Num. 864 (864) ---------------------------------------------------------

I am overwhelmed! To my posting to HUMANIST requesting information about computerizing a family tree, I have received 23 replies (in less than 24 hours). Most of them have suggested sources of programs, a few have asked me to share my findings with them. Given the apparent interest, I will compile a list of the suggested software and sources thereof and post it to HUMANIST. I have only had time to glance at the responses so far, but I can report that the "Personal Ancestral File" software from the Church of Latter Day Saints was the most frequently suggested program. (Why didn't I think of them? It seems so obvious now.) It may be some time before I have the opportunity to sort this out and report back, but I do want to express my appreciation to all of those who have responded or may yet do so.
It seems that HUMANISTs really are just that. Many Thanks. --Tom p.s. Please do continue to send me ideas. I will include all I receive in my "list."

From: MCCARTY@UTOREPAS (John C. Hurd) Subject: Selecting a programming language (71 lines) Date: 4 March 1988, 09:51:19 EST X-Humanist: Vol. 1 Num. 865 (865) ---------------------------------------------------------

The remark about lack of humor on the network a while ago and the current discussion of the merits of various programming languages reminded me of the appended analysis. It includes APL, one of my favorite languages, but omits SNOBOL4(+), my very favorite. John Hurd HURD@UTOREPAS

Selecting a Programming Language Made Easy
Daniel Solomon & David Rosenblueth
Department of Computer Science, University of Waterloo
Waterloo, Ontario, Canada N2L 3G1

With such a large selection of programming languages it can be difficult to choose one for a particular project. Reading the manuals to evaluate the languages is a time consuming process. On the other hand, most people already have a fairly good idea of how various automobiles compare. So in order to assist those trying to choose a language, we have prepared a chart that matches programming languages with comparable automobiles.

Assembler - A Formula I race car. Very fast, but difficult to drive and expensive to maintain.

FORTRAN II - A Model T Ford. Once it was king of the road.

FORTRAN IV - A Model A Ford.

FORTRAN 77 - A six-cylinder Ford Fairlane with standard transmission and no seat belts.

COBOL - A delivery van. It's bulky and ugly, but it does the work.

BASIC - A second-hand Rambler with a rebuilt engine and patched upholstery. Your dad bought it for you to learn to drive. You'll ditch the car as soon as you can afford a new one.

PL/I - A Cadillac convertible with automatic transmission, a two-tone paint job, white-wall tires, chrome exhaust pipes, and fuzzy dice hanging in the windshield.

C - A black Firebird, the all-macho car.
Comes with optional seat belts (lint) and optional fuzz buster (escape to assembler).

ALGOL 60 - An Austin Mini. Boy, that's a small car.

Pascal - A Volkswagen Beetle. It's small but sturdy. Was once popular with intellectuals.

Modula II - A Volkswagen Rabbit with a trailer hitch.

ALGOL 68 - An Aston Martin. An impressive car, but not just anyone can drive it.

LISP - An electric car. It's simple but slow. Seat belts are not available.

PROLOG/LUCID - Prototype concept-cars.

Maple/MACSYMA - All-terrain vehicles.

FORTH - A go-cart.

LOGO - A kiddie's replica of a Rolls Royce. Comes with a real engine and a working horn.

APL - A double-decker bus. It takes rows and columns of passengers to the same place all at the same time. But it drives only in reverse gear, and is instrumented in Greek.

Ada - An army-green Mercedes-Benz staff car. Power steering, power brakes and automatic transmission are all standard. No other colors or options are available. If it's good enough for the generals, it's good enough for you. Manufacturing delays due to difficulties reading the design specification are starting to clear up.

From: MCCARTY@UTOREPAS (Mary Peterson) Subject: Hyper-roots (19 lines) Date: 4 March 1988, 14:41:21 EST X-Humanist: Vol. 1 Num. 866 (866) ---------------------------------------------------------

I've read the materials about software for family trees, etc., and I still think HyperCard on the Macintosh is the best choice for this application. Mary Peterson University of New Hampshire M_PETERSON@UNHH

From: MCCARTY@UTOREPAS (Joe Giampapa) Subject: Reply to Hans Joergen Marker (27 lines) Date: 4 March 1988, 14:43:00 EST X-Humanist: Vol. 1 Num. 867 (867) ---------------------------------------------------------

I would say not. C gives one amazing control over a computer system. The other languages stress "conceptual control" to the program designer.
Lisp and Prolog hide the pointers and lower-level features from the programmer, directing concentration on the higher-level objects and constructs themselves. C allows the clever programmer to do practically anything in the most efficient way as the programmer sees fit (but gives enough rope to hang inexperienced programmers). I have seen some pretty fast Lisp systems, whose time-lag behind C systems is not that noticeable. I have not seen too many fast Prolog systems. Joe Giampapa giampapa@brandeis.bitnet

From: MCCARTY@UTOREPAS (Leslie Burkholder) Subject: Prolog (56 lines) Date: 4 March 1988, 14:45:34 EST X-Humanist: Vol. 1 Num. 868 (868) ---------------------------------------------------------

(1) Are there problems you can't solve in C but only in other programming languages, eg Prolog or Lisp?

From: MCCARTY@UTOREPAS (Michael Sperberg-McQueen) Subject: Computable solutions (57 lines) Date: 4 March 1988, 14:47:49 EST X-Humanist: Vol. 1 Num. 869 (869) ---------------------------------------------------------

Hans Joergen Marker asks whether there are problems that simply cannot be solved in certain languages (e.g. in C), but which require other languages. Computer scientists on the list may wish to correct me if I am wrong, but I am fairly sure the answer is no. If your language allows you to say "Subtract X from Y and if the result is negative, branch to location Z" (or the logical equivalent), and the problem you want to solve can be solved in some other computer language, then you can solve it in your language. This follows, does it not, from Turing's discussion of computable numbers. Of course, no one is claiming that working with a language this primitive will be any fun, or that the program will be readable.
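The "subtract and branch if negative" primitive mentioned above is essentially the one-instruction machine now usually called SUBLEQ, and it is easy to watch it compute. A minimal sketch in Python, used here as a modern stand-in; the memory layout, the use of cell 11 as a scratch zero, and the halt-on-negative-jump convention are all our own choices for illustration:

```python
# The whole instruction set: SUBLEQ a b c means
#   mem[b] = mem[b] - mem[a]; jump to c if the result is <= 0,
# otherwise fall through to the next triple.  A negative jump halts.
def run_subleq(mem, pc=0):
    while pc >= 0:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]
        pc = c if mem[b] <= 0 else pc + 3
    return mem

# A three-instruction program that adds cell 9 into cell 10,
# using cell 11 as a scratch zero (layout invented for this sketch):
ADD = [ 9, 11,  3,   # Z = Z - A   (Z becomes -A; continue at 3)
       11, 10,  6,   # B = B - Z   (i.e. B becomes B + A)
       11, 11, -1,   # Z = Z - Z   (restore Z to 0, then halt)
        7,  5,  0]   # data: A = 7, B = 5, Z = 0

result = run_subleq(list(ADD))   # cell 10 now holds 7 + 5 = 12
```

Anything computable can, in principle, be compiled down to chains of this one instruction, which is exactly the point above: the difference between such a machine and Prolog is convenience, not power.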
There is an analogous theorem in sentential logic, which demonstrates that the operators of sentential logic ('and,' 'or,' 'not,' 'if,' 'if and only if,' and so on) are all superfluous: every sentence of sentential logic can be expressed using only one operator, the 'Sheffer stroke' (named for its inventor), which means 'not both.' If we write the Sheffer stroke with '|', then 'A|B' means 'A and B are not both true,' and we can paraphrase the other operators thus:

   not A          A|A
   A or B         (A|A)|(B|B)
   if A then B    A|(B|B)
   A and B        (A|B)|(A|B)

But although an interesting result (and perhaps profoundly significant in symbolic logic), Sheffer's notation is not nearly as convenient for practical logic as is the conventional notation. And so it is not used. The difference between Prolog and Lisp on the one hand and languages like C or assembler on the other is similarly one of notation, not power. It is easier, many find, to think in Lisp or Prolog terms and let the scut work of translation into machine terms be handled by the compiler or interpreter. The logical structure of the program is easier to display -- and easier to implement because you don't have to write all your own procedures for handling unusual data objects. To be sure, it's possible to display the logical structure of a solution in Pascal or C or Assembler, too -- but it's likely to be harder to change it, since you will have to change your underlying procedures. For this reason some AI shops develop programs in Prolog, and then translate the finished product into C for the production version. The simple rule: use an 'AI Language' to optimize programmer productivity; use a lower-level language to optimize machine time.

From: MCCARTY@UTOREPAS (Robert Amsler, amsler@flash.bellcore.com) Subject: Snobol as an automobile (20 lines) Date: 4 March 1988, 14:49:01 EST X-Humanist: Vol. 1 Num. 870 (870) ---------------------------------------------------------

How about.....

snobol4 - A Winnebago camper.
Needs lots of space, very comfortable inside when exploring the countryside; but built neither for speed nor for tight parking.

ICON - A modern version of the Winnebago, shipped as a kit. Claims to get good gas mileage, but older Winnebago owners seem unconvinced.

From: MCCARTY@UTOREPAS (Jacques Julien) Subject: French Date: 4 March 1988, 14:51:53 EST X-Humanist: Vol. 1 Num. 871 (871) ---------------------------------------------------------

OVERTURE (alla Water Music, but a bit slower and somewhat pomposo)

I have been watching the river flow for a while now and it is very interesting. I would describe the show as a colourful pageant of boats, some of them quite impressive, fast and powerful, and a very small population of surfers. My residence is too far away from the Maritimes and all that I can think of to join the stream is a bottle, a good and cosy one, let's say Chianti Ruffino. It has been on my desk for years. So, I take away the candle I used to decipher my manuscripts at night and I kick the vessel into the channel.

FIRST MOVEMENT (French aria, double dotted)

I operate in a rare, monstrous, extravagant area called French. And it is not even the good Burgundy of blue, white and red French, but its sparkling and lighter version: French-Canadian! I have a feeling of always looking at the dark face of the moon. In fact, and to keep on with the same staging, French seems to be as repulsive to the Computing Hegemony as garlic is/was? to Dracula. It reacts in the same way: horror or evasion, but it can never get to swallow the *!!!* bulb.... For example, what do we do with accents on mainframes?

SECOND MOVEMENT: The merry widow at her simultaneous windows

I would like to list certain items on which I am sure the network can be very helpful. As stated in my *hagiography*, I am working in French-Canadian literature and popular culture (songs). The tools I am looking for are:

1. Database. One, or more.
Relational, must be open to data stricto sensu and to added text like: annotations, commentaries, full transcript of lyrics.

2. Stylistic analysis device. I am thinking of Deredec and its sub-products, which I have not tried yet. In the long term, I would like to build an analysis that would integrate (not simply place side by side) lyrics AND music. When I read the report from the Conservatorio di Musica L. Cherubini, I tried to catch the next plane to Italy, but planes heading for sunny countries never land on my iceberg.

3. Access to large collections of texts in French from France and from North America, literature and references like dictionaries.

4. CAI. "What do you do for a living, besides watching the river flow?" Well, I must confess, I teach French. That is why I am interested in software dealing with interactive writing in ..... French!

CONCLUSION (coda/cauda, and no venenum)

I appreciate HUMANIST very much. It is a welcome network, much needed and improving with use. Do not send me too many bottles back: I do not want to block the channel. Julien@Sask

From: MCCARTY@UTOREPAS (Charles Young, youngc@clargrad) Subject: Displaying and printing Classical Greek (19 lines) Date: 4 March 1988, 15:06:44 EST X-Humanist: Vol. 1 Num. 872 (872) ---------------------------------------------------------

Over the past year or so I have maintained a list of packages that claim to support the printing and display of classical Greek. It has finally occurred to me that other HUMANISTS might be interested in the list.... [This list has been posted to the file-server. It should be available by this coming Monday, under the name GREEK SOFTWARE. -- W.M.]

From: MCCARTY@UTOREPAS (Joe Giampapa) Subject: Addendum (46 lines) Date: 4 March 1988, 15:12:23 EST X-Humanist: Vol. 1 Num. 873 (873) ---------------------------------------------------------

I would like to post an addendum to what I said previously about computer languages.
Every once in a while, the question pops up in computer circles about "what IS the best language" to program in. Most times it does not attempt to be so cut-and-dried, yet the variants do not really stray too much from this. In the search for the optimal answer, the question almost never gets answered the way it was originally posed. The tendency I have observed is that when programmers are faced with a project, and several languages to choose from for doing the project, their "decision algorithm" proceeds roughly as follows: In short, then, I think the "ultimate answer" to the question is "whatever the programmer wants to use", ... or "42". Joe Giampapa giampapa@brandeis.bitnet

From: MCCARTY@UTOREPAS (Dan Bloom) Subject: c.s. graduates are people too (44 lines) Date: 4 March 1988, 15:14:14 EST X-Humanist: Vol. 1 Num. 874 (874) ---------------------------------------------------------

My apologies in advance for several things: 1) Unlike most of you, I am not erudite. 2) I may not be timely or on topic. 3) My degree was in computer science. Given the above: Yes, many people of the computer faith do tend to consider themselves above such considerations as ease of use, and prefer elegance over readability/usability. Such will be the case of most who started computing in the long-gone dusty era of punched cards and 256K mainframes, where elegance, compactness of code, and speed of execution were paramount. Others of the lofty profession, who take themselves less seriously, such as myself, consider the computer a rather advanced tool. As a tool, it must suit the user's purpose and not the designer's. However, as with any advanced tool, it requires a learning curve, both on the part of the user and of the creators. There also seems to be a mindset in the microcomputer industry wherein they feel obligated to recreate every error made in the development of mainframes.
In conclusion, if you consider the above to have presented a thesis of any sort, I have put forth the proposition that not *ALL* people who are in the computer industry are inhumane, pretentious soothsayers; some of us are people too. (I have not really taken any offence: in general I agree with most of what has been said in reference to the above and quite enjoy the different view of the field). And in retort, if this network is any indication, Humanists seem to have an obsession with what should be, not what is. Hope I haven't taken too much time....Dan (Improbable) Dan Bloom Senior Consultant Academic Computing Services York University

From: MCCARTY@UTOREPAS (Stephen J. DeRose) Subject: C vs. Prolog (43 lines) Date: 4 March 1988, 15:18:25 EST X-Humanist: Vol. 1 Num. 875 (875) ---------------------------------------------------------

In reply to Hans Joergen Marker's note:

> So the question remains: "Are there problems out there that you can't
> solve in C, but only in other languages?"

No, there are no such problems. In fact, there are no problems which can be dealt with in only a particular subset of programming languages. The more formal theorem for this is known in CS circles as "Church's Hypothesis" (among other names). All serious computer languages are "functionally complete", and so inter-translatable. Thus, the more relevant questions are: 1) How *easy* is it to learn/use language X? 2) How *fast* can I program problem Y in language X? 3) How *efficient* will my code in language X be? And here we have major differences. For example, right now I'm putting the finishing touches on a program to handle an annotated natural-language dictionary of about 50,000 words. It takes about 3,000 lines of C, because of the need to provide detailed control of storage allocation and data structures. I think I could write the same functionality in about 10,000-15,000 lines of Assembler, or in 750-1,000 lines of Icon or Prolog.
It is roughly true that a programmer can write the same number of (working) lines of program per day, regardless of language. So it makes sense to use the most compact language available for the particular problem at hand. Unfortunately, in this case I had to use C rather than Icon or Prolog, because the latter two do not deal with memory as efficiently by themselves as I can by myself, and I can't afford 8 Meg of RAM for my Mac. Steven J. DeRose

From: MCCARTY@UTOREPAS (Dominik Wujastyk, dow@husc6.BITNET) Subject: Family tree software (50 lines) Date: 4 March 1988, 19:06:40 EST X-Humanist: Vol. 1 Num. 876 (876) ---------------------------------------------------------

I have used both the Mormon product, Personal Ancestral File (PAF), and the Pine Cone one (is it FTetc, for Family Tree Etc?). Both for DOS. I am doing this note from memory, since I am at present in the USA, and all my notes and manuals are back at home in the UK. FTetc (if that's it) was slicker and somewhat faster, since it was compiled. The PAF (Pers. Anc. File) was Basic code, and needed fiddling with to set the correct defaults for use on a hard disk. This was clearly described in the manual, but seemed unnecessary in these days of setup menus. PAF has a companion program that can store biographical data about individuals; FTetc includes this in the main prog, but if I remember rightly, PAF allows larger files for this. FTetc had one huge advantage: it allowed you to print a BIG chart piecemeal on several sheets of ordinary computer paper, for gluing together. PAF can handle 135-col paper (I think) but that's it. One bit of the tree at a time. The manual of PAF is written in the style of an obsessive. Everything is hyper-neat and repeated several times. I found dealing with PAF (program and documentation) worried me at some deep level: was the author still sane?
Nevertheless, he wrote me a nice letter when I sent a query, and probably fits into his community very well (no offence intended in any quarters). By comparison, FTetc is just another good shareware prog. The capabilities are very similar. I had a lot of data in PAF before I heard of FTetc, and I am reluctant to change over. But if I were starting today I think I'd go for FTetc. Of course a lot depends on where the programs go, what upgrades are made, maintenance etc. Imponderables. Dominik.

From: MCCARTY@UTOREPAS Subject: Summary of conference notice on the file-server (28 lines) Date: 4 March 1988, 19:16:12 EST X-Humanist: Vol. 1 Num. 877 (877)

* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
International Conference on Symbolic and Logical Computing
Dakota State College
Madison, South Dakota
April 21-22, 1988

The third International Conference on Symbolic and Logical Computing (ICEBOL3) will present papers and sessions on many aspects of non-numeric

From: MCCARTY@UTOREPAS (Sheldon Richmond) Subject: Computer language intertranslatability (49 lines) Date: 4 March 1988, 19:19:08 EST X-Humanist: Vol. 1 Num. 878 (878) ---------------------------------------------------------

Are computer languages intertranslatable? There is a discussion in the Talmud that God understands every language, but Angels only understand the Holy Language of the Torah. So, if you want to speak to the Angels, you have to speak in the Holy Language, but God doesn't care what language you speak. Computers are somewhere between Angels and God. Mathematically, computers are God; practically, they are neither. Turing argued that every computer is formally identical: every general-purpose computer is, in effect, a Universal Turing Machine. So every computing language ideally is formally identical. The operative terms are 'ideally' and 'formally'. In practice, not every computing language permits recursion--i.e.
statements which call themselves, or functions which define operations in terms of themselves. This is important not only for convenience and for performing certain algorithms, but for AI simulation. So, then, in the real world, computer languages are not completely intertranslatable. The upshot is that, depending on what one wants to do with computers, one will have to use different languages, and different hardware/software systems. The technology of computers has not done away with the Tower of Babel, or the requirement for multilingualism. Though every few years, new Holy Languages for our computer/angels -- PASCAL, C, PROLOG -- are produced. In reality, computers are neither angels nor God. Different languages are required for different purposes, no one language can do all, and some languages are more suited to some tasks than other languages. The proper attitude toward computer systems and languages is the one that states 'when in Rome do as the Romans do'. You can't expect that one system of manners or etiquette will please all people regardless of cultural background. So, rather than search for the Holy Language, or just use one language regardless of task, choose the language that is most pleasing to the crowd you will be hanging around with for the task at hand. Use the language that the crowd working on the project one is interested in for the moment uses -- and in that way you will be included in the chit-chat and problem/solution sharing talk.

From: MCCARTY@UTOREPAS (Mark Olsen) Subject: Server security (82 lines) Date: 4 March 1988, 20:06:10 EST X-Humanist: Vol. 1 Num. 879 (879) ---------------------------------------------------------

Security for Kermit3 SERVER operation

Kermit3.EXE (2.30) has a number of new features, many of which enhance REMOTE operation. It is now relatively easy to SEND and GET files, etc. (between, say, home and school) by setting one machine in SERVER mode before leaving for the other location.
The problem with Kermit3 SERVER operation is security: somebody chancing on your remote machine and erasing your files, putting an infected COMMAND.COM in it, etc. The following is a solution to this problem, offering you a fairly high degree of safety. This technique makes use of Kermit3's ENABLE and DISABLE commands. It requires the following: a file much like SECRET, below, which is run on the REMOTE (HOST) machine before it is put in SERVER mode; and a file much like PASSWORD (but using your own secret filename!), which must be available on the machine you are using as a terminal.

Host file SECRET:

   DEFINE SRV OUTPUT ATS0=1\15,SET PARITY NONE,SET BAUD 1200,DO SR2
   DEFINE SR2 CWD \XX\XX,DELETE PASSWORD,DISABLE ALL,DO SR3
   DEFINE SR3 ENABLE FIN,SERVER,TAKE PASSWORD,SERVER,DO SR2

Terminal file PASSWORD:

   ENABLE ALL

To use this system:

1) Set up the values you want on the machine which is to run in SERVER mode, then add the commands:

   TAKE SECRET
   DO SRV

2) Later, from the second machine, call the number of the SERVER, and you will be connected, but with all services DISABLED; enter the commands:

   SEND PASSWORD
   FIN

This will cause the host computer to TAKE PASSWORD and reenter SERVER mode. Since PASSWORD contains the command ENABLE ALL, you are now in business. When you are through, you must be sure to DISABLE all services; to do this, type:

   FIN

This will cause the host to rerun SR2, disabling all services and erasing PASSWORD. Notes: Be sure to arrange the host machine so that Kermit3 is looking at an empty subdirectory. Do not use the word PASSWORD! Change every occurrence to a word known only to you. If you find strange files in your \XX\XX subdirectory, it is probably best to erase them, to fend off infection. No guarantees; good luck! *****END***** From: MCCARTY@UTOREPAS Vicky A. Walsh Subject: Unix "user-friendly" shell (20 lines) Date: 4 March 1988, 21:39:57 EST X-Humanist: Vol. 1 Num.
880 (880) --------------------------------------------------------- Rumor has it that at the Unix conference UNIFORUM, held this last February in Dallas, TX, someone discussed a user-friendly shell for Unix that works with the Macintosh. Did anyone attend this meeting and/or can they provide any information about the shell? American Management Systems is the company name associated with this project. I'd be grateful for any information and/or experience on this. Thanks. Vicky Walsh From: MCCARTY@UTOREPAS Bob Krovetz Subject: Computer language intertranslatability (18 lines) Date: 4 March 1988, 21:43:11 EST X-Humanist: Vol. 1 Num. 881 (881) --------------------------------------------------------- I've heard on more than one occasion that you cannot write a Lisp interpreter in Prolog. Is this really true? If so, why? -bob Krovetz@umass.bitnet or Krovetz@umass.edu (internet) From: MCCARTY@UTOREPAS amsler@flash.bellcore.com (Robert Amsler) Subject: Choices of programming languages (67 lines) Date: 4 March 1988, 23:29:08 EST X-Humanist: Vol. 1 Num. 882 (882) --------------------------------------------------------- Something strange seems to be creeping into the discussion: the quest for the ``best'' programming language devoid of knowing what computer one is running on. To say a programming language is INHERENTLY slow is somewhat strange; like saying that electric-powered vehicles are inherently slower than gasoline-powered vehicles (electric trains do much better than most cars). First, some basic CS. There are computers. Computers have very elemental things called `instruction sets', which are primitive `languages' often referred to as assembly code or machine language, in which one can talk to the computer. These languages are the fastest things the computers can execute. Videogames, for instance, are often written in this level of language to absolutely positively optimize what can be done as rapidly as possible. However, such languages are awful most of the time.
They keep saying things like `load and carry contents of register XXX to Register YYY', which bears as much relationship to making a concordance of a text as the wiring diagram of your TV set has to how the on-off switch works. So... people write `higher level' programming languages which will run on the same computer. But how can they do that? Simply by telling the computer what to do with the statements in the higher level language to translate them into the original language the computer understood. This introduces some inefficiency, for a couple of reasons. One is that the user of the higher level language doesn't necessarily know whether what he is asking for is efficient for the computer he is asking it of. Most people want computers to run their favorite languages. This may or may not be easy to do on some computers---``Mr. Spock, can we program the tricorder to become a TV set receiver?'' ``Yes, captain, but it will take a little time and won't work very well for long'' ``I don't care if it is efficient''... That sort of thing. So... we get BASICs and FORTRANs and PASCALs and LISPs and PROLOGs and lots of languages for lots of machines. Each is an IMPLEMENTATION written by someone with a varying degree of attention to how efficient it will be (and hence someone could write a LISP for a certain machine which runs faster than someone else's PASCAL, or vice versa). The speed of a language is thus first and foremost a matter of what computer it is running on. Then, it is a matter of how efficiently it has been implemented for that computer. Now that computers are things the size of postage stamps (all that other stuff it comes surrounded with is just for the sake of your bulky human fingers and poor input/output capabilities) the possibility of a computer chip that can run your favorite language is very real. (For instance, TI just announced a chip to run LISP for the Macintosh II.)
Every time they change the chip, they change the possible speed of the computer; and saying that any language is slower is very problematic, since you have to know on what computer it has been implemented and at what level of design (i.e. as software or hardware). From: MCCARTY@UTOREPAS Subject: Notice of ALLC/AIBI conference posted to file-server (25 lines) Date: 5 March 1988, 10:13:52 EST X-Humanist: Vol. 1 Num. 883 (883) Association for Literary and Linguistic Computing 5 - 9 June 1988 XVth International ALLC Conference The Fifteenth International ALLC Conference is being held at Jerusalem, Israel, and will be immediately followed by the Second International Conference of the Association Internationale Bible et Informatique from June 9 until June 13, 1988. The major topics of the Conference to be covered will be: textual databases and corpora; mechanised morphology, lexicography, and dictionaries; statistical linguistics, stylistic analysis and authorship studies; encoding and formatting techniques; critical editions, collations and variants; computational linguistics; and data entry, typesetting and text processing. From: MCCARTY@UTOREPAS Subject: Summary of posting to the fileserver (19 lines) Date: 5 March 1988, 22:59:03 EST X-Humanist: Vol. 1 Num. 884 (884) UNIVERSITY OF EXETER FROM VALOIS TO BOURBON December 14-16 1988. To coincide with the quatercentenary of the Blois assassination of the Duke and Cardinal de Guise, which in turn prompted the assassination of Henri de Valois, a residential Conference/Colloquium has been arranged for December 1988 at the University of Exeter. From: MCCARTY@UTOREPAS Jean-Claude Guedon Subject: French (26 lines) Date: 5 March 1988, 23:03:16 EST X-Humanist: Vol. 1 Num. 885 (885) --------------------------------------------------------- En reponse a Jacques Julien: La question des diacritiques sur les ordinateurs est effectivement troublante.
Il n'aurait pas ete tres difficile en effet de prevoir un nombre suffisant de signes diacritiques pour, au moins, prevoir l'utilisation des ordinateurs par des francophones, hispanophones et autres germanophones ou italophones, etc. [In English: In reply to Jacques Julien: the question of diacritics on computers is indeed troubling. It would not have been very difficult to provide a sufficient number of diacritical signs to allow for, at the very least, the use of computers by speakers of French, Spanish, German, Italian, and so on.] And this is why I write the beginning of this message in French, just to remind all who might forget it that although English is a useful language as a kind of lingua franca, it should limit its role to this functional level and not impose itself as if it were THE language of the world, be it computerized or otherwise. This is not meant as an aggressive statement, but simply as a reminder of the marvellous variety that characterizes humanity. Cheers Jean-Claude Guedon From: MCCARTY@UTOREPAS Subject: Languages on HUMANIST (30 lines) Date: 5 March 1988, 23:38:20 EST X-Humanist: Vol. 1 Num. 886 (886) As a member of HUMANIST I'm grateful for Jean-Claude Guedon's reminder of monolingual perils. I, too, rejoice in variety and difference. Doctrinally imposed uniformity is dangerously prevalent these days, and (may I hasten to add) it is promulgated by both sexes, by many if not all nationalities, and in many if not all languages. As editor of HUMANIST (for what it's worth) I welcome notes in all languages, whether or not I can read them. Many HUMANISTs read if not write French, not a few must know some German and Italian, and so forth. So, let me suggest that if anyone is moved to write in a language other than English, let him or her do so, provided, let us say, that a translation into English is appended. After all, a lingua franca (or lingua anglica) is a fine thing, nicht wahr? Would it be reasonable to establish some kind of convention for diacritics, say that the appropriate symbol follow the letter it belongs with? Comments or suggestions? Willard McCarty mccarty@utorepas From: MCCARTY@UTOREPAS Subject: Two pleas and a request from your editor (41 lines) Date: 6 March 1988, 16:44:56 EST X-Humanist: Vol. 1 Num. 887 (887) Plea the first.
When you want to fetch any file from HUMANIST's file-server, you must communicate by whatever means with LISTSERV, not with HUMANIST. Thus, in interactive mode, for example, you would TELL LISTSERV at UTORONTO GET HUMANIST FILELIST, *not* TELL HUMANIST.... &c.; and if embedding your request in a message, this message would be sent to LISTSERV, not to HUMANIST. When you send requests to HUMANIST, they just come to me, which means that either I have to request the file and send it to you or that I have to write to you and say something helpful. Right now, for example, I have 101 messages in my reader, and it's Sunday afternoon.... Plea the second. Several of you, intending a message for HUMANIST, send it to me directly, knowing that I must deal with it anyhow. True enough, but this procedure can cause two problems: (1) occasionally I cannot tell if the message is meant for me only or for everyone; I usually decide it's meant for everyone, but this may not always be the case; and (2) at such time as I decide no longer to interpose myself between incoming and outgoing mail, HUMANIST messages sent to me will get delayed. Actually, I will be away from about mid May to mid July, and during this time we may decide to return to the automatic mode rather than to ask someone to assume my daily duties with HUMANIST. So, *please* send HUMANIST mail only to HUMANIST. Request. Would all of you who redistribute HUMANIST mail to others send me a brief description of how this is done and, if you have it, a list of those to whom the mail is sent, or a count of the number of people? Thank you all for making HUMANIST such an interesting creature. Willard McCarty mccarty@utorepas From: MCCARTY@UTOREPAS Norman Zacour Subject: Diacritically speaking... (23 lines) Date: 6 March 1988, 17:09:38 EST X-Humanist: Vol. 1 Num.
888 (888) --------------------------------------------------------- One way to prevent linguistic "imposition" might be to provide a glossary of technical computer terms in French, German and Italian. Reading manuals in one's own language, whatever that might be, is difficult enough; what sort of garbage words (interface), compressed descriptions (cut-and-paste), diverse borrowings (macro, default, root, library, directory), slightly out-of-focus terms (routine), to say nothing of out-and-out neologisms are likely to cause us trouble in a language other than our own? At the moment, for quite selfish reasons, I could use a good short glossary of English-French and English-German. If HUMANISTs can't contribute to its making, who can? Shall we dance? Norman Zacour (Zacour@Utorepas) From: MCCARTY@UTOREPAS Richard Goerwitz Subject: English and French (32 lines) Date: 7 March 1988, 10:48:39 EST X-Humanist: Vol. 1 Num. 889 (889) --------------------------------------------------------- I think most of us realize that the use of English as a lingua franca for things like international air-traffic control, radio, etc. is a rather arbitrary choice, based on practical political and economic considerations. We had Latin at one time, then French, now English. What next, Japanese? With computers, the phenomenon runs a bit deeper than this. English doesn't use a lot of diacritics, and can be represented comfortably using a 7-bit coding scheme. Note also that entire languages like Prolog are tuned to an English-like syntactic scheme. Prolog does not work well with languages that have few word-order constraints or lots of discontinuous morphemes. I suppose that with a little fussing, we could all post to the HUMANIST in French or German, or some other W. European (left-to-right, alphabetic, syntactically rigid) language. That would be a lot of fun. -Richard L.
Goerwitz goer@sophist.uchicago.edu !ihnp4!gargoyle!sophist!goer From: MCCARTY@UTOREPAS Richard Goerwitz Subject: Help for Penn OT users (44 lines) Date: 7 March 1988, 10:50:45 EST X-Humanist: Vol. 1 Num. 890 (890) --------------------------------------------------------- I've been playing with the OT and NT texts I got from Bob Kraft now for a year and a half, and they have served me quite well. I've been able to determine with great precision many things about the OT that would have taken months to do by hand. Many thanks! One problem keeps coming up with these texts, however: They are coded using TLG-style (i.e. "Betacode") counters. So, instead of marking each chapter and verse reference explicitly (e.g. chapter 10, verse 1 [in Betacode language ~~x10y1]), they merely tell us to increment (e.g. "increase chapter counter by one, verse counter by one" [in Betacode ~~xy]). This means that you can't take verses here and there out of context. I had a friend who uses LBASE (a nice language-database package allowing grammatical searches) complain to me that, on account of this coding problem, he could not slice out separate documents for LBASE to analyze (he wanted to work on the supposed "priestly" document only). So I wrote him a program in Icon that does this. Basically, the program allows one to a) collect a corpus by excising verses and chapters from a larger work, and b) mark them explicitly as to chapter and verse (while still remaining within the definition of the TLG Betacode level-marking scheme). Now the program is just sitting around, and I was wondering if any HUMANISTs wanted it. NB: It's written in Icon, so it's not going to work on any system that doesn't have Icon installed. I haven't tested it under v6, though it should work fine. -Richard L. Goerwitz goer@sophist.uchicago.edu !ihnp4!gargoyle!sophist!goer From: MCCARTY@UTOREPAS Robin C. Cover Subject: In search of a search engine (106 lines) Date: 7 March 1988, 10:54:04 EST X-Humanist: Vol. 1 Num.
891 (891) --------------------------------------------------------- I'm looking for a search engine which probably does not exist, but I would like advice from those more knowledgeable about text retrieval systems. It is a text retrieval system optimized for literary-critical and linguistic study. The major requirements for the search engine are as follows: (1) Literary texts should be "understood" by the system in terms of the individual document structure, as indicated by markup elements. The user should be able to specify within a search argument that proximity values, positional operators, comparative operators and logical operators govern the search argument and the textual units to be searched IN ACCORDANCE WITH THE HIERARCHICAL STRUCTURE OF THE DOCUMENT. That is, if a document is composed of books, chapters, pericopes, verses and words, then expressions within the search argument must be able to refer to these particular textual units. If another document (or the *same* document, viewed under a different hierarchical structure) contains chapters, paragraphs, sub-paragraphs, (strophes), sentences and words, then expressions in the search argument should be framed in terms of these textual units. To borrow a definition of "text" from the Brown-Brandeis-Harvard CHUG group: the text retrieval system must be capable of viewing each of its documents or texts as an "ordered hierarchy of content objects" (OHCO). (2) The database structure must be capable of supporting annotations (or assigned attributes) at the word level, and ideally, at any higher textual level appropriate to the given document. Most record-based retrieval systems cannot accommodate the word-level annotations that textual scholars or linguists would like to assign to "words."
More commonly, if such databases can be modified to accommodate annotations at the word level, the record-field structure is thereby contorted in ways that introduce new constraints on searching (inability to span record boundaries, for example). Preferably, even the definition of "word" ought not to be hard-coded into the system. Hebrew, for instance, contains "words" (graphic units bounded by spaces) which contain three or four distinct lemmas. Minimally, the database must support annotations at the word level (e.g., to account for the assignment of lemma, gloss, morphological parse, syntactic function, etc) and these annotations must be accessible to the search engine/argument. Though not absolutely required, it is desirable that attributes could be assigned to textual units above "word," and such attributes should be open to specification in the search argument. Linguists studying discourse, for example, might wish to assign attributes/annotations at the sentence or paragraph level. (3) The search engine should support the full range of logical operators (Boolean AND, OR, NOT, XOR), user-definable proximity values (within the SAME, or within "n" textual units), user-definable positional operators (precedence relations governing expressions or terms within the search argument) and comparative operators (for numerical values). The search argument should permit nesting of expressions by parentheses within the larger Boolean search argument. Full regular-expression pattern matching (grep) should be supported, as well as macro (library/thesaurus) facilities for designating textual corpora to be searched, discontinuous ranges or text-spans within documents, synonym groups, etc. Other standard features of powerful text retrieval systems are assumed (set operations on indices; session histories; statistical packages; etc). Most commercial search engines I have evaluated support a subset of the features in (3), but do very poorly in support of (1) and (2). 
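[Editorial sketch, not part of Cover's posting: the "ordered hierarchy of content objects" in requirement (1) can be modelled, very roughly, as a tree of typed textual units; all names here are illustrative and not drawn from any product mentioned. A query scoped to a unit such as "verse" then walks only nodes of that type.]

```c
/* OHCO sketch: each node names its textual unit (book, chapter,
   verse, word, ...) and holds its ordered children, so a search can
   be scoped to any level of the hierarchy. */
#include <string.h>
#include <stddef.h>

struct node {
    const char *unit;          /* e.g. "chapter", "verse", "word" */
    const char *text;          /* content, for leaf nodes */
    struct node **children;    /* ordered child units */
    size_t nchildren;
};

/* Count descendants (including the node itself) of a given unit
   type -- e.g. how many "word" nodes a "chapter" contains. */
size_t count_units(const struct node *n, const char *unit)
{
    size_t total = (strcmp(n->unit, unit) == 0) ? 1 : 0;
    for (size_t i = 0; i < n->nchildren; i++)
        total += count_units(n->children[i], unit);
    return total;
}
```

The same walk, given a different hierarchy over the same text, would naturally scope searches to the alternative units, which is what requirement (1) asks of the query language.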
The text retrieval systems which claim to be "full text" systems actually have fairly crude definitions of "text," and attempt to press textual data into rigid record-field formats that do not recognize hierarchical document structures, or are not sufficiently flexible to account for a wide range of document types. Three commercial products which attempt to support (1) are WORDCRUNCHER, Fulcrum Technology's FUL-TEXT and BRS-SEARCH. I know of no systems which intrinsically support requirement (2), though LBASE perhaps deserves a closer look, and a few other OEM products promise this kind of flexibility. It may be possible to press FUL-TEXT or BRS-SEARCH into service since both have some facility for language definition. Another promising product is the PAT program being developed by the University of Toronto in connection with the NOED (New Oxford English Dictionary). But I may have overlooked other commercial or academic products which are better suited for textual study, or which could be enhanced/modified in some fashion other than a bubble-gum hack. It is not necessary that a candidate possess all of the above features, but that the basic design be compatible with extending the system to support these functional specs, and that the developers be open to program enhancements. Ideally, such a system would work with CD-ROM, though this is not an absolute requirement. I would like good leads of any kind, but particularly products that could be leased/licensed under an OEM agreement...for microcomputers, I should add. Thanks in advance to anyone who can suggest names of commercial packages or academic software under development which meet the major requirements outlined above, or which could be *gently* bent to do so. I will be glad to post a summary of responses if others are interested in this question. Professor Robin C. Cover ZRCC1001@SMUVM1 3909 Swiss Avenue Dallas, TX 75204 (214) 296-1783 From: MCCARTY@UTOREPAS Robin C.
Cover Subject: The languages of HUMANIST (25 lines) Date: 7 March 1988, 10:58:17 EST X-Humanist: Vol. 1 Num. 892 (892) --------------------------------------------------------- In response to Willard's suggestion that contributions in French, German, Italian (etc.) be encouraged on HUMANIST, I concur wholeheartedly. It's not clear why those who feel more comfortable writing in non-English languages ought to be required to supply an English translation; isn't that giving with one hand and taking back with the other? If there is pride among HUMANISTs that we *are* humanists, then let's reflect upon that very long tradition in humanities education which requires that we be able to read great literature in any of the world's languages. That should prepare us to deal with postings on HUMANIST. If we learn computer languages but fail to treasure human languages, have we broken with our past? Professor Robin C. Cover ZRCC1001@SMUVM1.bitnet From: MCCARTY@UTOREPAS amsler@flash.bellcore.com (Robert Amsler) Subject: A writing convention for diacritics (32 lines) Date: 7 March 1988, 10:59:28 EST X-Humanist: Vol. 1 Num. 893 (893) --------------------------------------------------------- I came up with this one recently while encoding phonetic symbols. Basically... {,} are used around any character which is to have diacritics associated with it. {,} is taken to mean ``combine together the symbols inside the {,}'s'', so {ae} is a ligature, {,c} a c-cedilla, {o:} an o-umlaut, etc. You may note I said {o:} for an o-umlaut, rather than {:o}. That is because the position of the punctuation dictates whether it goes ABOVE or BELOW the character. Punctuation appearing BEFORE the letter goes BELOW; punctuation appearing AFTER the letter goes ABOVE. This allows one also to represent symbols such as {:a:}, which is an `a' with diaeresis above and below.
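[Editorial sketch, not part of Amsler's posting: his before/below, after/above rule is mechanical enough to decode in a few lines of C. This sketch handles a single base letter with optional marks; two-letter ligatures like {ae} would need separate treatment.]

```c
/* Decoder for the {,} convention: punctuation before the letter is a
   diacritic below, punctuation after the letter is a diacritic above.
   The surrounding braces are assumed already stripped from the token. */
#include <ctype.h>
#include <string.h>

struct glyph {
    char base;   /* the letter itself */
    char above;  /* diacritic above, or '\0' if none */
    char below;  /* diacritic below, or '\0' if none */
};

struct glyph decode(const char *tok)
{
    struct glyph g = { '\0', '\0', '\0' };
    size_t n = strlen(tok);
    for (size_t i = 0; i < n; i++) {
        if (isalpha((unsigned char)tok[i]))
            g.base = tok[i];
        else if (g.base == '\0')
            g.below = tok[i];   /* before the letter: goes below */
        else
            g.above = tok[i];   /* after the letter: goes above */
    }
    return g;
}
```

On this rule "o:" decodes to base `o` with `:` above (o-umlaut), ",c" to `c` with `,` below (c-cedilla), and ":a:" to `a` marked both above and below.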
Note that I am not necessarily claiming this is the best final form for special symbols, but it is an easily keyboarded and read system which I find useful for rapid keying of data. Bob Amsler Bellcore From: MCCARTY@UTOREPAS Jeffrey William Gillette Subject: What can't I do with C? (92 lines) Date: 7 March 1988, 11:01:20 EST X-Humanist: Vol. 1 Num. 894 (894) --------------------------------------------------------- At least this is the case on my IBM PC compatible computer. Let me begin to defend my (admittedly provocative) assertion by claiming a distinction between the C programming language and all the extra "goodies" manufacturers throw into their C compiler packages. According to "The C Programming Language" by Brian Kernighan and Dennis Ritchie (sometimes referred to as the "C Bible" because its authors are also the creators of C), C is a general-purpose programming language. It has been closely associated with the UNIX system. ... The language, however, is not tied to any one operating system or machine. ... C is a relatively "low level" language. ... C provides no operations to deal directly with composite objects... C itself provides no input-output facilities: there are no READ or WRITE statements, and no wired-in file access methods. What I earlier referred to as extra "goodies", more generally known as library functions, are the system-specific and machine-specific functions to which Kernighan and Ritchie claimed the C language is not tied. On most computers these library functions are written in assembly language. In fact, since K & R created C as a language without input-output facilities, many of these standard library functions could not possibly be written in the C language! Often we think of input as that which we type into a computer, and output as that which the computer displays on its screen or prints to the printer. In technical terms this is not quite correct. I/O, properly speaking, refers to everything not a part of core memory (or ROM/RAM).
On IBM compatible machines (i.e. computers that use Intel microprocessors), when a key is typed the corresponding key code appears in a special door (or "port"). It does not enter core memory until the processor explicitly takes it from the port and places it into some memory location. It is precisely this facility of reading a port (and its converse - writing a code to a port that will send it to the printer) that the C language lacks. Because C cannot read from or write to ports, on my IBM compatible machine I cannot write a C program that will get a character from the keyboard, read a byte from my disk drive, print a line of text, dial a modem, send an instruction to the math co-processor, or a myriad of other tasks I want to perform many times a day. By now some provoked C enthusiast will complain that my definition of 'C' is too restrictive. The question should instead be, Pilot is a rather restrictive programming language that is optimized for creating Computer Assisted Learning drills. Given enough time, however, I could program a definite clause grammar parser in Pilot (though I've no idea why I should want to). In fact, since Pilot has the same type of assembly language escape hatch used in C, I could probably reproduce MS-DOS in Pilot! Similarly, dBase III+ is not generally thought of as a word processor, but its creators are fond of claiming that the dBase programming language can be used to write a word processor. Perhaps we should all put our C compilers on the shelf and take up dBase. Or let us cast aside UNIX and MS-DOS in favor of Pilot! After all, Jeffrey William Gillette dybbuk at tuccvm.bitnet From: MCCARTY@UTOREPAS Sterling Bjorndahl Subject: Anglophone imperialism? (45 lines) Date: 7 March 1988, 11:05:25 EST X-Humanist: Vol. 1 Num. 895 (895) --------------------------------------------------------- It is good that Willard will accept contributions in languages other than English.
However, I must disagree with his request that a translation always be appended. I am very embarrassed by the typical native English speaker's lack of facility in foreign languages, and I fear that Willard's request will only serve to condone that attitude among us - we who are, after all, HUMANists. I think that a policy of self-editing is sufficient here. If people want to send messages in Portuguese or Japanese, they will know that they will be communicating with only a very select audience. The world knows what we English speakers are like. I doubt very much that our mailers will be filled with messages we can't understand. On a few occasions when I had time to kill, I signed up to BITNET's RELAY network - an interactive computer forum which functions somewhat like amateur radio in terms of human interaction. The main population which uses the RELAY facility consists of undergraduate computer science students involved in casual conversations. On several occasions, the link between North America and Europe went down. During that time, several people in Europe would begin a conversation among themselves in German or Dutch. When the link came back up, parts of those conversations were transmitted to the North American side of the network. More than one person on this side castigated the Europeans for using their own language on the network. Granted, they thought that these were simply other North American students showing off their foreign language ability. But the outrage in their "voices" that anyone would use anything other than English on BITNET (they had forgotten about EARN) made me both angry at their chauvinism and sad for the North American educational system. That many of these people will be granted a university degree without ever having had to learn another natural language is, well, inhuman.
Sterling Bjorndahl Institute for Antiquity and Christianity From: MCCARTY@UTOREPAS David Owen Subject: French messages Date: 7 March 1988, 11:08:34 EST X-Humanist: Vol. 1 Num. 896 (896) --------------------------------------------------------- Having advised, from a technical not a linguistic perspective, various instructors about the use of computer conferencing for conversation practice in French and Spanish instruction, I think I can say that the use of special symbols to indicate accents etc. probably does more harm than good. It makes messages harder to write (and thus less likely to get written), and troublesome to decipher. Such special marks are extremely useful, nay essential, when the text is to be printed and the marks are re-interpreted by the formatter, but for purposes such as HUMANIST, I vote that we ignore them. David Owen OWEN@ARIZRVAX OWEN@RVAX.CCIT.ARIZONA.EDU From: MCCARTY@UTOREPAS Hans Joergen Marker Subject: C vs. Prolog - answer to Steven J. DeRose (32 lines) Date: 7 March 1988, 11:10:08 EST X-Humanist: Vol. 1 Num. 897 (897) --------------------------------------------------------- I have to accept your point about the number of program lines needed to accomplish a specific task in a given language, still bearing in mind that I remain practically ignorant of the workings of Prolog and Lisp. On the other hand, your statement that "It is roughly true that a programmer can write the same number of (working) lines of program per day, regardless of language" would naturally be dependent on the programmer. When I started this argument I was actually trying to find out whether it would be worth my while from a productive point of view to take a closer look at Prolog or Lisp. I am still not convinced. I am still very happy with C. (Perhaps it is my very well hidden macho instincts, though in Europe we would rather symbolise that with a Porsche; Firebirds aren't that common over here.)
I think that your note confirms my point of view: C can get the job done, and because of the control you have over the machine using C, it will even get the job done where using other languages is impractical. Hans J%rgen Marker. From: MCCARTY@UTOREPAS Hans Joergen Marker Subject: Languages on HUMANIST - use of other languages than English on HUMANIST (31 lines) Date: 7 March 1988, 11:14:27 EST X-Humanist: Vol. 1 Num. 898 (898) --------------------------------------------------------- fra Hans J%rgen Marker Jeg kan selvf%lgelig ikke have noget imod at folk skriver p} HUMANIST i sprog jeg ikke forst}r. Hvem kan det? Men det giver dobbelt arbejde for afsenderen at skulle overs{tte sine tanker for at f} dem forst}et af de andre deltagere. Hvorfor l{rer i andre ikke dansk? from Hans Joergen Marker Naturally I cannot be against people writing on HUMANIST in languages that I don't understand. Who can? But it doubles the effort for the sender to be obliged to translate his thoughts into English to make them understandable to the other participants. Why don't the rest of you learn Danish? [Editor's note: In case you haven't guessed already, the rather strange looking words (e.g., "p}") have resulted from the computer's automatic translation of accented characters.] From: MCCARTY@UTOREPAS John Dawson Subject: Genealogical display (40 lines) Date: 7 March 1988, 11:19:04 EST X-Humanist: Vol. 1 Num. 899 (899) --------------------------------------------------------- I don't know of a package for genealogical display, but I have a suggestion which I found to reduce the problem of displaying my family tree considerably. Instead of constructing a tree with all the family members of the same generation in a horizontal line, try showing the tree turned through 90 degrees so that one generation appears in a *vertical* line.
If you arrange it so that information is typed in narrow columns, and so that no one text line contains text relating to more than one person, the result is quite easy to edit and keep up-to-date. Obviously, the most recent generation can either be the left-most or the right-most column, and it is easy to add a complete new generation. A small example follows (J24 is the son of J48 and J49; J12 is the son of J24 and J25; etc.):

------------------
J48              )
???? HICKLING    )
J's ggg-gf       )-|
                   |  ------------------
                   |-(J24              )
                   |  (???? HICKLING   )
                   |  (J's gg-gf       )-|
J49              )-|                     |
------------------ |                     |  ------------------
                                         |-(J12               )
                                         |  (HENRY HICKLING   )
                                         |  (Traveller? in 1915)-|
------------------                       |                       |
J50              )-|                     |
                   |-(J25              )-|
                   |  ------------------
                   |

From: MCCARTY@UTOREPAS (Eamon Kelly)
Subject: Writs (23 lines)
Date: 7 March 1988, 12:35:51 EST
X-Humanist: Vol. 1 Num. 900 (900)
---------------------------------------------------------
Does anyone know a simple guide to the names and types of writs issued by the English (or Irish) Chancery in the 13th and 14th centuries, in particular ones dealing with appointments and grants?

Hopefully yours,
Elizabeth Dowse
Dept of Medieval History
Trinity College Dublin
Ireland

PS. I have already tried all the various Guides to the Public Records.

From: MCCARTY@UTOREPAS ANDREWO@UTOREPAS
Subject: langues autres que l'anglais (17 lines)
Date: 7 March 1988, 13:00:12 EST
X-Humanist: Vol. 1 Num. 901 (901)
---------------------------------------------------------

From: MCCARTY@UTOREPAS (Dr Abigail Ann Young)
Subject: On chancery writs (31 lines)
Date: 7 March 1988, 16:56:35 EST
X-Humanist: Vol. 1 Num. 902 (902)
---------------------------------------------------------
[My mailer returned my attempts to send this reply direct to the enquirer, so this is with due apologies to everyone not interested in the history of English law]

Re your query to HUMANIST, I think some writs are discussed in Pollock and Maitland's 2 vol History of English Law.
I've also found the legal writers of the 18th and early 19th century very helpful, because the forms of many writs and other legal instruments stayed quite constant from the mediaeval period until the reform of the judicature in the 1870's: I've used Littleton, Coke and Blackstone for help in understanding 15th and 16th century actions, for instance. What writs in particular are you looking at? I hope this is helpful.

Abigail Ann Young
Records of Early English Drama
University of Toronto

From: MCCARTY@UTOREPAS (John J Hughes)
Subject: Portable computers (55 lines)
Date: 7 March 1988, 16:59:28 EST
X-Humanist: Vol. 1 Num. 903 (903)
---------------------------------------------------------
HUMANISTS interested in the latest side-by-side reviews of IBM-PC-compatible laptop, "luggable," and portable computers should see Nora Georgas, "Planes, Trains, & Automobiles: 12 Portables for the Road," _PC Magazine_ 7:6 (March 29, 1988): 93-143. (No, despite the review's title, Steve Martin's latest movie does not figure in this review!) The Toshiba T1000 ($1,999) and the GRiDCase 1530 ($4,695) were the editors' picks. The GRiDCase wins these "competitions" time after time. According to the various reviews I've read over the last few years, the GRiDCase is probably the most rugged MS-DOS-compatible portable on the market.

The battery-operated GRiD 1530 is a 12.5-MHz, 80386 machine. It weighs less than 13 pounds, measures 11.5-by-15-by-3 inches, has a 72-key keyboard, is EMS compatible, and comes standard with two 1.4-megabyte 3.5-inch floppy drives and 1 MB RAM--all housed in a svelte matte-black, magnesium-alloy case. The system will accept up to 512K of user-installable ROM (two 256K slots) in a pop-up panel at the top of the keyboard. GRiD Systems will burn ROMs for customers. The picture of the hi-res gas plasma screen in the review demonstrates the GRiDCase's astounding resolution, contrast, and clarity.
Compared to the GRiDCase's gas plasma screen, the Compaq Portable 386's gas plasma screen looks "muddy." All of this is great, but who has $4,695 to nearly $7,000 for this machine?! I wonder if I could convince GRiD Systems to loan me one to field test for a year or so? . . .

From: MCCARTY@UTOREPAS (Dan Bloom)
Subject: Translations (32 lines)
Date: 7 March 1988, 17:13:20 EST
X-Humanist: Vol. 1 Num. 904 (904)
---------------------------------------------------------
1) It seems obvious that if someone submits a request/statement in a particular language, the audience it is intended for is those that are conversant in that language and the topic at hand.

2) We are on an international network, therefore multilingual. Therefore, one must anticipate communications in many languages, and indeed it should be encouraged, although people such as myself who know about 1.25 languages (.75 English, .50 three other languages) may have to reply in English to another language's request.

Which brings me to my final point: it should be noted that a request for information will get the greatest response, quantitatively, from a request in as many languages as possible (the greatest subset of the people on the network), and English is an obvious choice as one. But certainly there should be no requirement for a translation into English. Let the user beware, so to speak ...Dan

Dan Bloom
Senior Consultant
Academic Computing Services
York University

From: MCCARTY@UTOREPAS (Sebastian Rahtz)
Subject: linguae bellissimae (14 lines)
Date: 7 March 1988, 18:32:44 EST
X-Humanist: Vol. 1 Num. 905 (905)
---------------------------------------------------------
estne facile loquare linguae latinae... [roughly: "is it easy to speak the Latin language..."] (i think its been too long since i did latin...)

From: MCCARTY@UTOREPAS (Keith Cameron)
Subject: Screen messages in French (26 lines)
Date: 7 March 1988, 18:35:21 EST
X-Humanist: Vol. 1 Num.
906 (906)
---------------------------------------------------------
I agree with David Owen that there is no absolute need for accents when writing French on the screen, although the absence of an acute or a grave can cause a temporary ambiguity and retard comprehension. I suggest that ' be used before the vowel for the acute, i.e. il a 'et'e, and the ` for the grave - o`u as distinct from ou. It is rare that the diaeresis, the circumflex or the cedilla affect meaning. If a text however is to be published, I have found that a number placed after the vowel is efficient, as it allows for a subsequent global edit to adapt the text for printing, e.g. 1=acute, 2=grave, 3=circumflex, 4=diaeresis, 5=cedilla.

Keith Cameron

From: MCCARTY@UTOREPAS (Leslie Burkholder)
Subject: Prolog and Lisp (19 lines)
Date: 7 March 1988, 18:37:31 EST
X-Humanist: Vol. 1 Num. 907 (907)
---------------------------------------------------------
For those interested in comparisons of Prolog and Lisp, here are two references:

Cohen, "The Applog language", in DeGroot and Lindstrom, Logic Programming: Functions, Relations, and Equations (Prentice-Hall 1987).

Warren, Pereira, and Pereira, "Prolog - the language and its implementation compared to Lisp", ACM SIGPLAN Notices 12 (1977) and ACM SIGART Newsletter #64 (1977).

LB

From: MCCARTY@UTOREPAS amsler@flash.bellcore.com (Robert Amsler)
Subject: Multilingual messages (27 lines)
Date: 7 March 1988, 18:39:11 EST
X-Humanist: Vol. 1 Num. 908 (908)
---------------------------------------------------------
While I can see some benefits to a multi-lingual mailing list, there is also the issue of what we are saying to those readers who do not understand the language in which a given message appears.
If you speak two languages, one of which will be understood by everyone in a room and one of which will only be understood by 60% of the people in the room, what does it mean that you decide to speak SOLELY in the language which is only understood by 60% of the people in the room? One might say one was being rude to those who cannot understand that language. If we look at international organizations in which several languages are acceptable, such as the United Nations, there is a strict adherence to a policy of translation into each of the languages. If you look at multi-lingual journals, there is often a policy of requiring that an abstract in each of the approved languages accompany the article, which appears in only one language.

From: MCCARTY@UTOREPAS (Bill Winder)
Subject: Prologs (57 lines)
Date: 7 March 1988, 18:41:04 EST
X-Humanist: Vol. 1 Num. 909 (909)
---------------------------------------------------------
I have followed with interest the programming language debate, especially on the Prolog question. I do like logic, and that is why I turned from a shaky acquaintance with Lisp to an enthusiastic acceptance of Prolog. Turbo Prolog was all I could afford at the time, and it has proved sufficient, up to a point. I still like Prolog, but Turbo Prolog has some tragic flaws, which may ruin our relationship. In particular, there are a number of bugs, especially with the I/O. More important, however, is the question of having a more developed logic. What functions are needed in Tpro to make it more standard or more powerful? Obviously, a typed language has particular constraints, but I have found that it just means copying sections of code and renaming predicates for different types: there is no rethinking of the problem because of typing, just more keyboarding; at least such has been my experience. I'm willing to put up with more work in exchange for the speed of Tpro.
(In a recent Turbo Technix article (the new Borland magazine), Tpro was shown to be faster than Turbo C for at least some functions, such as calculating the mean of a set of numbers.) The fact that it is compiled is not a problem, since you can build an interpreted level if you so desire (i.e. a Prolog interpreter can be built out of a compiled program....). That might seem counterproductive, but the advantage is that the interpreted level will be tailored to the specific needs of the application. For the moment, therefore, I can't find a solid argument against Tpro. This may be because I have never sufficiently used a full implementation of Prolog. Has anyone run across a damning piece of evidence against Tpro? I need but a single, convincing argument in order to abandon Tpro (even with its very pleasant development environment) and upgrade to Arity or Mprolog.

(Note on Sheffer's bar: though it is true that Sheffer is given credit for it, I believe that Peirce actually proved the reduction --and used the bar, or equivalent-- some 30 years before Sheffer (I would have to check my figures, but 30 sounds right....). Don Roberts could certainly set me straight if I misunderstood Peirce's approach, and the meaning of the cut in the existential graphs. The bar is a binary connector and Peirce's cut is n-ary: it could be written in Prolog as cut([var1,var2,...]), whereas the bar would be written as bar(var1,var2). Both mean "neither, nor", only for the cut, the "nor" is iterated over all variables of the list. [N.B. Peirce's cut has nothing to do with the Prolog cut operator].)

Bill Winder
Utorepas

From: MCCARTY@UTOREPAS (Richard Goerwitz)
Subject: Courtesy (33 lines)
Date: 7 March 1988, 22:11:42 EST
X-Humanist: Vol. 1 Num. 910 (910)
---------------------------------------------------------
I enjoyed Hans Joergen Marker's Danish posting immensely. The language apparently has a lot of German cognates, and I probably would have gotten the gist without a translation.
I am only concerned that the effort I would have spent deciphering it would have exceeded by far the effort it would have taken him to write it in English (his English is, of course, excellent). Maybe in cases like this - non-international W. European languages - we should encourage people to post only if they do not feel comfortable writing in something like German or French or English.

-Richard L. Goerwitz
goer@sophist.uchicago.edu
!ihnp4!gargoyle!sophist!goer

P.S. I mean no slight against languages that are not generally seen as "international." This implies nothing about their intrinsic richness or character. It just means that your average reader is not going to be as likely to understand them. Fortunately, in cases like Dutch, Danish, etc., the resemblance to German is strong enough to make things much easier.

From: MCCARTY@UTOREPAS (Mark Olsen)
Subject: Full text; languages (46 lines)
Date: 7 March 1988, 22:16:27 EST
X-Humanist: Vol. 1 Num. 911 (911)
---------------------------------------------------------
I looked at a package called Concept Finder by an outfit called MMIMS (566A South York Road, Elmhurst, Illinois, 60126 (312) 941-0090) which does, or claims to do, much of the sophisticated full text applications described. I was supposed to do a review for *CHum*, but, to be perfectly honest, the program ran so poorly on the PC/XT class machines and was so poorly documented that we decided to wait for a future revision. We're still waiting. This is too bad, since on paper it looks GREAT. Searches on annotation, text and references, full boolean support, proximity searching and so on. Very impressive. But the system could take up to 2 minutes for searches on very small samples of text. Even worse, once it found the references, it took 20 seconds to write a single screen. The problem SEEMS to be that it is written in a version of MUMPS that is VERY poorly implemented on the IBM-PC machines.
The company was considering optimizing the system, but for $1200.00 (retail) I need more than a promise of something that might, someday, run decently. I hope they do, as the overall design and approach are very interesting. I might have a draft of the old review (I'll check my files) written before we decided to let it die until a future revision. A second product is the full text retrieval system which has been advertised by AIRS, Baltimore, who also market MARCON II. The advertising suggests the same kind of power as Concept Finder, but I have not seen the product in action. I would be interested in hearing about any other full text systems that HUMANISTS may be familiar with.

On the language issue, would it be safe to assume that a message posted in English, French, or German could be read by the vast majority of HUMANISTS? If this is the case, we might ask those whose primary language is something other than these to provide a translation (or summary) of the posting. Settling on three or four languages which most of us can read will reduce our dependence on English without creating our own electronic Tower of Babel.

From: MCCARTY@UTOREPAS (Ian Lambert)
Subject: Family tree (23 lines)
Date: 7 March 1988, 22:18:39 EST
X-Humanist: Vol. 1 Num. 912 (912)
---------------------------------------------------------
[Note: this is the second attempt to broadcast the following message; the previous try seems to have been lost somewhere.... W.M.]

Further to Dominik's message today, I use FTetc, but have one problem with it. It seems only to allow a single marriage, despite the documentation. A second husband is defined as "brother-in-law" to his wife! Similarly there seems some difficulty in entering the child of an unmarried mother! I don't know if PAF allows for these?

Ian
iwml@ukc.ac.uk

From: MCCARTY@UTOREPAS (Eric Johnson)
Subject: Prolog, Lisp, Snobol4 (20 lines)
Date: 7 March 1988, 22:25:54 EST
X-Humanist: Vol. 1 Num.
913 (913)
---------------------------------------------------------
For those interested in AI programming, Michael G. Shafto has written ARTIFICIAL INTELLIGENCE PROGRAMMING IN SNOBOL4 (Ann Arbor,

From: MCCARTY@UTOREPAS (James H. Coombs)
Subject: Query expressions (32 lines)
Date: 8 March 1988, 09:00:35 EST
X-Humanist: Vol. 1 Num. 914 (914)
---------------------------------------------------------
I would like to know what people think about the following sorts of expressions. I am finding that people here think I am just giving them a hard time when I tell them that my Intermedia application will accept wild-card characters (a la (3)). So, there are at least these:

1) Boolean. E.g., 'own' AND ('house' OR 'car')
2) Contexts. E.g., 'own' WITHIN 5 'sell'
3) Regular. E.g., '[Ss]ee*'

Thanks. --Jim

Dr. James H. Coombs
Software Engineer, Research
Institute for Research in Information and Scholarship (IRIS)
Brown University
jazbo@brownvm.bitnet

From: MCCARTY@UTOREPAS (Richard Goerwitz)
Subject: Speaking to 60% in the room (27 lines)
Date: 8 March 1988, 09:04:16 EST
X-Humanist: Vol. 1 Num. 915 (915)
---------------------------------------------------------
Perhaps if we allowed freer use of languages other than English, we would be allowing more people to get online. If a writer feels comfortable writing in English, then for heaven's sake, write in it! For those that don't, or who want to broaden our horizons a bit, better to post in some other language than not at all (i.e. better to post to 60% of the folks online here than to 0%). Looking at my English, I wonder if I might have been better off posting in some other language!

-Richard L. Goerwitz
goer@sophist.uchicago.edu
!ihnp4!gargoyle!sophist!goer

From: MCCARTY@UTOREPAS (James H. Coombs)
Subject: Re: searching for a search engine (182 lines)
Date: 8 March 1988, 09:10:57 EST
X-Humanist: Vol. 1 Num.
916 (916)
---------------------------------------------------------
Robin Cover posted a very interesting query about full-text retrieval. I haven't had much time to think about it, but a couple of possibilities come to mind.

1) Battelle's BASIS? Someone built a DM on top of BASIS? DM was recommended to me by someone from Datalogics and someone else from McGraw Hill. Datalogics is planning to use DM for the Random House Dictionary. I think it may require a MicroVax, but I would be very happy to hear otherwise.

2) ARRAS, ???? being used now by ARTFL. I don't have more information on it. ARTFL is a French literature project; others will be able to give addresses, etc. ARRAS probably requires a mainframe. The new system runs on workstations, I believe. Probably requires Unix. Anyway, worth a phone call. The software was developed with a group at U Chicago.

3) *I'm* looking for a relational database management system that supports full-text fields. I would require some of the same things that Robin asks for. Among other things, I should be able to supply a function that returns tokens when the dbms indexes the contents of a text field. (Thus, the application gets to decide what a word is or, more generally, what units to index.)

Hastily, comments on Robin's requirements:

> (1) Literary texts should be "understood" by the system in terms of the
> individual document structure, as indicated by markup elements.

Are you imposing any storage requirements? For example, I can parse entries in the American Heritage Dictionary (AHD) based on the markup. I can then generate a relational database 1) containing the dictionary or 2) containing search keys for retrieval. In either case, the structure of the database is the same.
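The parse-then-load pipeline described here -- markup in, relational rows out, structural questions answered as ordinary queries -- can be sketched briefly. This is a minimal sketch in modern Python; the mini-markup, entries, and field names are invented for illustration and are not the real AHD format.

```python
# A minimal sketch of the parse-then-query pipeline: marked-up entries
# are parsed into relational rows, then queried structurally.
# The <hw>/<pos>/<pron> mini-markup is invented, NOT the real AHD markup.
import re

entries = [
    "<hw>record</hw><pos>n</pos><pron>REK-erd</pron>",
    "<hw>record</hw><pos>v</pos><pron>ri-KORD</pron>",
    "<hw>house</hw><pos>n</pos><pron>hows</pron>",
]

def parse(entry):
    """Turn one marked-up entry into a relational row (hw, pos, pron)."""
    def field(tag):
        return re.search(r"<{0}>(.*?)</{0}>".format(tag), entry).group(1)
    return (field("hw"), field("pos"), field("pron"))

# The "relational database": one row per (headword, part of speech).
rows = [parse(e) for e in entries]

# Query: which words have a pronunciation dependent on part of speech?
prons = {}
for hw, pos, pron in rows:
    prons.setdefault(hw, set()).add(pron)
pos_dependent = sorted(hw for hw, ps in prons.items() if len(ps) > 1)
print(pos_dependent)  # ['record']
```

Once the rows exist, the hard part is exactly what is said above: designing the table structure, which requires already knowing the structure of the data.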
I can then get answers to questions such as "Give me a list of all words whose pronunciations are dependent on their part of speech" and "Give me a list of all music terms appearing in the definitions of verbs" and "For each author of a citation, tell me how many times his/her work is used to illustrate the use of each part of speech; display the results with the authors with most citations first." The point is that I can use a relational dbms to capture the structure of the data. Designing that database is not trivial, however, because determining the structure of the data is not trivial. I also have to write the program that parses the entries and generates the files ready for importing into the dbms. Furthermore, this applies to the AHD. It might be possible to determine a universal dictionary structure for all current dictionaries, but what about future dictionaries? Wouldn't we need a generative grammar? Having such a grammar would not be the same as having a structure that would be adequate for all dictionaries. Yes, it would save a ton of work, but I would still have to define a new database structure, wouldn't I? (To a large part, I am assuming that the structure of higher levels of text is not as constrained as sentences are.)

Similarly for literary texts. What is the structure of the text? How could a system know all of the possibilities ahead of time? People are out there analyzing in new ways, discovering new entities. People are out there creating new kinds of text. What do we do? SGML. That's a start at least, and a big one. Does it necessarily give you all dominance and precedence relationships? Would an SGML prolog for the AHD tell me that senses inherit the usage labels of their parents? I don't think so. It would tell me where a usage label is permitted, but it would not tell me how to determine exactly what properties apply to what entities. (This *could* be done, but it's not the goal of SGML to provide such information.
E.g., items in a list may be enumerated, but that does not cause the tokens within those items to inherit that enumeration---the property applies to the parent but not to the child.) So, I think that Robin will require a meta-markup language that is richer than SGML. In fact, SGML does not even specify a rigorous way to state that is the markup for a poetry quotation. The person who defines that tag includes a comment, but the comment is not parsed and validated.

Evidence? (from ISO 8879-1986(E))

A. Annex B. Basic Concepts. (although "This annex does not form an integral part of this International Standard.") Paragraph B.4.2.1. Content Models. For each element in the document, the application designer specifies two element declaration parameters: the element's GI and a content model of its content. The model parameter defines which subelements and character strings occur in the content. For example, the declaration for a textbook might look like this: Here "textbook" is the GI whose content is being defined, and "(front,body,rear)" is the model that defines it. The example says that a textbook contains the GIs "front", "body", and "rear". (The GIs probably stand for "front matter", "body matter", and "rear matter", but this is of interest only to humans, not to the SGML parser.) See what I mean?

B. But what's a "Quotation"? And if we know what that is, how do we know that it's the same thing as a "Quote" or a "quote"? What if I define a poetry quotation element? and so on.

Well, I confess to not having provided evidence in support of the assertion that SGML does not enable one to determine inheritance of properties. Was I wrong? I've only shown that people may have trouble determining how to relate the concept "poetry quotation" to the entities tagged . They will have to read the prolog or the documentation and supply the query engine with the tag, or they will have to inform the query engine that when they say "poetry quotation", they mean "all entities tagged ".
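The inheritance problem can be made concrete with a small sketch. The nested structures and function name below are invented stand-ins for a parsed dictionary entry (nothing here is SGML itself): a usage label such as "slang" may sit at the entry level, the part-of-speech level, or the sense level, and it is the query engine, not the markup, that must propagate it downward.

```python
# A hedged sketch of usage-label inheritance. The tuples are an
# invented stand-in for a parsed entry, not real SGML parser output.
entries = [
    # (headword, entry-level labels,
    #  [(pos, pos-level labels, [(sense, sense-level labels), ...]), ...])
    ("grok", {"slang"}, [("v", set(), [("to understand deeply", set())])]),
    ("run",  set(),     [("v", set(), [("to move fast", set()),
                                       ("to flee", {"slang"})]),
                         ("n", set(), [("a short jog", set())])]),
]

def slang_verb_senses(entries):
    """Verb senses that are slang, once parent labels are inherited."""
    hits = []
    for hw, entry_labels, pos_blocks in entries:
        for pos, pos_labels, senses in pos_blocks:
            if pos != "v":
                continue
            for sense, sense_labels in senses:
                # Inheritance, done by the query engine: a sense carries
                # its own labels plus those of all its ancestors.
                if "slang" in (sense_labels | pos_labels | entry_labels):
                    hits.append((hw, sense))
    return hits

print(slang_verb_senses(entries))
# [('grok', 'to understand deeply'), ('run', 'to flee')]
```

The union of ancestor label sets is precisely the knowledge that the markup declares nowhere; someone must build it into the query engine.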
I want to half take it back. SGML does not provide information about property inheritance, but one can achieve the same effect by listing the parent elements that might have the property. So, I can't say something like "Give me a list of all slang verbs"; instead, I have to say "Give me a list of all verbs where some sense of the verb is slang or the head is slang". (Usage labels in the AHD can occur at sense divisions, part of speech divisions, or before all divisions.) But this means that *I* have to know a lot about the structure of the document and the resulting database. The system has no way of knowing that children inherit usage labels. Are literary documents trivial in comparison? I believe they are much more complicated, but I can't come up with a satisfactory example.

> (2) The database structure must be capable of supporting annotations
> (or assigned attributes) at the word level, and ideally, at any higher
> textual level appropriate to the given document.

This should not be a problem. I guess you want the application to make it easy to associate user text with blocks of subject text. I suppose that you also want to assign keywords to your text to make it easier to retrieve, but that you don't want to sink to the level of telling the dbms to create a table that contains X and that is associated with Y. I would think that you might want a table for linguistic information, and another for something else, but perhaps not.

So do I understand your system at all, Robin? Have I been overly pessimistic? How willing are you to get your hands dirty? Should a scholar be a database designer? Should this be a system that is all primed for literature? Might it not be much good for anything else? Around here, you know, people don't believe in anything as powerful as Boolean expressions. Regular expressions and set-level operations are for power users only. I wonder how many Humanists want/need such capacities.
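The three kinds of expression raised earlier in this exchange (Boolean, contexts/proximity, regular) can be sketched over a simple token list. The function names and toy sentence below are invented for illustration; a real engine would work over an index, not a list.

```python
# A toy matcher for the three query styles: Boolean, proximity
# ('WITHIN n'), and regular expressions. Names and data are invented.
import re

tokens = "i own a house and will sell the car".split()

def has(word):
    return word in tokens

def boolean_query(a, b, c):
    # e.g. 'own' AND ('house' OR 'car')
    return has(a) and (has(b) or has(c))

def within(a, n, b):
    # e.g. 'own' WITHIN 5 'sell' -- positions at most n tokens apart
    # (simplified: considers only one occurrence of each word)
    pos = {w: i for i, w in enumerate(tokens)}
    return a in pos and b in pos and abs(pos[a] - pos[b]) <= n

def regular(pattern):
    # e.g. 's.*' -- return the tokens matching a regular expression
    return [t for t in tokens if re.fullmatch(pattern, t)]

print(boolean_query("own", "house", "car"))  # True
print(within("own", 5, "sell"))              # True
print(regular("s.*"))                        # ['sell']
```

Even this toy shows why proximity and wild-card searching are not exotica: each is a few lines over an indexed token stream.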
(I will post another note with that query, since few will get this far.) --Jim

Dr. James H. Coombs
Software Engineer, Research
Institute for Research in Information and Scholarship (IRIS)
Brown University
jazbo@brownvm.bitnet

From: MCCARTY@UTOREPAS
Subject: Logbooks and how to get them (26 lines)
Date: 8 March 1988, 10:27:01 EST
X-Humanist: Vol. 1 Num. 917 (917)

The logbooks into which all HUMANIST messages are automatically put are not where you've been told they should be. According to the "Guide to HUMANIST" you should be able to see them listed in HUMANIST FILELIST among all the other files on our newly inaugurated file-server. Due to a temporary problem with the ListServ software, however, they do not appear there. You can still retrieve them, using the standard GET command, but you have to know what they are called. As the Guide explains, all logbooks are named HUMANIST LOGyymm, where yy = the last two digits of the year, and mm = the number of the month. Thus the log for February 1988 is called HUMANIST LOG8802. On behalf of our disobedient servant, I apologize for any inconveniences.

Willard McCarty
mccarty@utorepas

From: MCCARTY@UTOREPAS (Randall Smith) <6500rms@UCSBUXA.BITNET>
Subject: C (and dBase) as languages (50 lines)
Date: 8 March 1988, 10:48:22 EST
X-Humanist: Vol. 1 Num. 918 (918)
---------------------------------------------------------
I had been refraining from the discussion on various programming languages because I believe that there is no "best" language; the "best" language varies from task to task, even from person to person. However, I could not let Jeffrey Gillette's comments on C slip by without comment. I agree that the language itself has no I/O capability apart from its libraries, although I am not sure why that is so important. My guitar has no I/O capability without its strings, but this does not concern me very much since I always use it with strings.
I pick the proper weight strings based on the type of music I intend to play, the condition of my fingers, etc. This is true of C as well. The libraries, as packaged by Microsoft at least, are easy to use and relatively fast and efficient; they are also written for the different memory models which the segmented architecture of Intel chips requires. Furthermore, I have written several libraries in assembly language which perform operations that were not included by Microsoft. I can also choose (for a price, of course) from a wide variety of commercial libraries to perform other functions. Libraries of this type are also available for Turbo Pascal and other languages. This gives one great flexibility in choosing the libraries which one needs without being burdened by extra baggage.

I have no doubt that dBase could be used to write a word processor, or even a text parser. I can also play bass guitar on my regular guitar by tuning the strings down, but this is not an elegant solution. The difference between programming in C and dBase is elegance. Certainly other languages will be more elegant than C for specific tasks, but I appreciate C's flexibility and general applicability. I also find that different word processors are better for different types of writing. However, as a PhD student in Classics, I do not have the time to learn a multitude of word processors or a multitude of languages, as much as I would like to. Therefore, I have to pick a language, and a word processor, which can perform all my tasks. Once I have chosen, I find it easier to find a way to solve a new problem in the old language than to start a new one from scratch.

Randall Smith

From: MCCARTY@UTOREPAS (Hartmut Haberland)
Subject: Re: English and French (21 lines)
Date: 8 March 1988, 11:04:07 EST
X-Humanist: Vol. 1 Num.
919 (919)
---------------------------------------------------------
Som svar paa Goerwitz' og Cover's bidrag til diskussionen, og som lille forsoeg paa at goere alvor ud af de smukke forslag som hidtil er blevet fremsat (paa engelsk, vel at maerke), vil jeg starte med at bidrage noget paa dansk - et sprog, som i parentes bemaerket, ikke er mit modersmaal, men et af mine daglige arbejdssprog. Problemet er selvfoelgelig: hvem kan laese det her? Og hvis jeg har et oenske om at blive hoert: er der nogen der lytter? Der maa vel findes en eller anden Kierkegaard-forsker rundt omkring i verden som kan laese dansk. Mon hun eller han findes paa HUMANIST? Med venlig hilsen og i haab om et svar, Hartmut Haberland (ruchh@neuvm1, ruchh@vm.uni-c.dk)

[Editor's translation: As an answer to Goerwitz's and Cover's contributions to the discussion, and as a small attempt to act on the fine proposals put forward so far (in English, be it noted), I will start by contributing something in Danish - a language which, noted in parentheses, is not my mother tongue but one of my daily working languages. The problem, of course, is: who can read this? And if I wish to be heard: is anyone listening? Surely there must be some Kierkegaard scholar somewhere in the world who can read Danish. Might she or he be on HUMANIST? With kind regards, and in hope of an answer, Hartmut Haberland]

From: MCCARTY@UTOREPAS (Birgitta Olander)
Subject: Languages on HUMANIST (20 lines)
Date: 8 March 1988, 11:07:26 EST
X-Humanist: Vol. 1 Num. 920 (920)
---------------------------------------------------------
Det sl}r mig pl|tsligt att t ex vi i Norden kan ha en egen liten mail-grupp i HUMANIST, liksom andra humanister som har ett exotiskt spr}k gemensamt.
---------
It occurs to me that humanists in the Nordic countries, for example, might have our own mail-group within HUMANIST. The same is true for others with an exotic language in common. But is it desirable?

Birgitta Olander, LIBLAB
Dept of Computer Science, Linkoping University, Sweden

From: MCCARTY@UTOREPAS (Sebastian Rahtz)
Subject: Flaws in Turbo Prolog (19 lines)
Date: 8 March 1988, 13:38:59 EST
X-Humanist: Vol. 1 Num. 921 (921)
---------------------------------------------------------
Surely the fact that a new predicate cannot be dynamically defined in Turbo Prolog is enough to rule it out of court as a full Prolog? I cannot read in from my user "loves" "sebastian" and "wagner" and 'assert' a relationship between them. It's been a while since I used Turbo Prolog (and that was only for a few days before I found this flaw) - would anyone care to correct me?
sebastian rahtz

From: MCCARTY@UTOREPAS (Sebastian Rahtz)
Subject: Languages (23 lines)
Date: 8 March 1988, 13:40:01 EST
X-Humanist: Vol. 1 Num. 922 (922)
---------------------------------------------------------
i think people are getting too serious about this! we dont want 'HUMANIST-recognised languages', and items in Hebrew rejected by Willard. Why not just keep it free-form as it is? People should write in the language they feel will be read by the intended audience. Shame on anyone who isn't prepared to have a go at any language that comes along! anyway, its all academic, since our terminals are almost all (I assume) ASCII without accents etc, so most interesting languages havent a hope of coming over as their author intends them. why SHOULD Greek people transliterate for us, just because computers were developed by arrogant Westerners? no compromises, please.

sebastian rahtz

From: MCCARTY@UTOREPAS (Hans Joergen Marker)
Subject: Multilingual messages (37 lines)
Date: 8 March 1988, 13:40:53 EST
X-Humanist: Vol. 1 Num. 923 (923)
---------------------------------------------------------
From Hans Joergen Marker

I would like to express my agreement with Robert Amsler where he speaks about the disadvantages of somebody using a language understood by only a fraction of the audience. On the other hand I disagree that it should be regarded as rudeness if somebody spoke, say, Spanish in a predominantly English audience. My point is rather a practical one. Over the last couple of days I have given up hope that the rest of you will learn Danish, but imagine for one moment that you spoke a native language unknown to the majority of the attendants of most international conferences. Then you would first of all be forced to acquire some capability in the major language (at present English, but to keep your imagination vivid, dig up my earlier note on this subject and imagine for a while that it was Danish).
Secondly, if you are attending bilingual conferences you have the choice between learning the third language or losing half of the conference. If your natural language is one of the minor languages, to the degree that it is improbable to meet anybody abroad who speaks the same language as you, you will always be at the disadvantage of being forced to know one language more than the others. This means that you will always appear a bit more restricted, linguistically speaking, than other people not having that difficulty. Now, before you start crying for the unfortunate Scandinavians and other people from the minor nations of the world: there are advantages in being a native speaker of a flexible language like Danish. Though in order not to arouse your envy I shall refrain from counting my blessings here. Yours, Hans Joergen Marker. From: MCCARTY@UTOREPAS (Peter Houlder) Sender: uknet-admins-request@UKC.AC.UK Resent-Date: Tue, 8 Mar 88 13:04:36 GMT Resent-From: Sebastian Rahtz From: Peter Phillips Date: Mon, 7 Mar 88 10:13:11 GMT Subject: Immortality (53 lines) Date: 8 March 1988, 13:41:52 EST X-Humanist: Vol. 1 Num. 924 (924) --------------------------------------------------------- Peter, I've just received some information regarding a little boy, whom users of the NET might be interested in helping. If you think it would be OK, could you mail a copy to all net users, or post it in the news? Here is the text of the letter I received. ====== David is a 7 year old boy who is dying from cancer. Before he does, he has a dream of one day being in the Guinness Book of Records as the person who has had the most postcards sent to them. If you would like to help David achieve his dream, all you have to do is send a postcard to David as soon as possible. Send to: David, c/o Miss McWilliams, St Martin de Porres Infant School, Luton, Bedfordshire.
Don't forget to sign your name. - -- Pete Phillips, TEL : 0443-204242 Ext: 6552 Quality Control Laboratory, TEL : 0443-202641 (Direct Line) East Glamorgan Hospital CIX : peteqc Church Village, UUCP: ukc!egh-qc!pete SOUTH WALES CF38 1AB ------- End of Forwarded Message From: MCCARTY@UTOREPAS (John Roper) Subject: Languages (31 lines) Date: 8 March 1988, 14:53:19 EST X-Humanist: Vol. 1 Num. 925 (925) --------------------------------------------------------- I was astounded that the subject of language caused such a discussion. I live in a multilingual society here in the UK. Almost everybody locally speaks English English and "Norfolk". In other areas many other languages are popular, such as those from the Indian subcontinent. Professionally my jargon changes depending on whether I am speaking to my computer, a computer scientist or somebody from Art History. If you wish to communicate with another individual, you use the language most easily understood unambiguously by both of you. On occasions when speaking with a Finnish colleague, we converse in pidgin French/German. In the HUMANIST context, with a broadly North American audience who are obviously parochial in outlook, American English does look like the obvious language to use if you wish to communicate easily to a maximum audience. Nobody, on the other hand, should be barred from using their natural language or be expected to translate their thoughts into another. However, I suspect the potential audience for a Russian offering would be strictly limited. John Roper (S200@CPC865.UEA.AC.UK) From: MCCARTY@UTOREPAS (Sebastian Rahtz) Subject: Query expressions (33 lines) Date: 8 March 1988, 14:55:03 EST X-Humanist: Vol. 1 Num. 926 (926) --------------------------------------------------------- > I would like to know what people think about the following sorts of > expressions. I am finding that people here think I am just giving them . . > 3) Regular.
E.g., '[Ss]ee*' I can see why people are not happy with this, because in the example as given the query allows for both Seeds and seeds to be retrieved. But the 'man in the street' thinks that should happen anyway (well, my students do, anyway), so s/he gets aggrieved at your sharp practice. The example [ABC]ee* is equally upsetting because there is no old-world equivalent - we are not used to expressing things in parallel, as it were. As for [A-Za-z]* ..... I find the example of the regular expression fine, but that's because I am a Unix user; otherwise it could easily bother me. But I suppose the moral is that if people are going to have to use regular expressions, then the least we can do is have a universal syntax, and the Unix one seems good to me - it irritates me like mad that the two versions of SQL I use have different wild-card characters! sebastian rahtz From: MCCARTY@UTOREPAS cbf%faulhaber.Berkeley.EDU@jade.berkeley.edu (Charles Faulhaber) Subject: Searching for a search engine (18 lines) Date: 8 March 1988, 14:56:17 EST X-Humanist: Vol. 1 Num. 927 (927) --------------------------------------------------------- Anecdotal evidence: When I talk about the glories of text searching to non-computer users, they remain singularly unenthusiastic. When I suggest that such searches could be semantic in nature rather than string-oriented, their ears perk up. Technically I do not know the best way to accomplish this, although I suspect that some sort of thesaurus would make it possible. From: MCCARTY@UTOREPAS (Roberta Russell) Subject: An international computer glossary Date: 8 March 1988, 14:57:14 EST X-Humanist: Vol. 1 Num. 928 (928) --------------------------------------------------------- HUMANISTs interested in a multi-lingual glossary of computer terms will find one in the 1987 edition of the Directory of Computer Assisted Research in Musicology, published by the Center for Computer Assisted Research in the Humanities, Menlo Park, California.
It gives equivalent terminology in English, French, German and Italian (sorry, no Danish or Norwegian......) Roberta Russell Oberlin College From: MCCARTY@UTOREPAS (Brian Molyneaux) Subject: Talking at one another (19 lines) Date: 8 March 1988, 20:11:19 EST X-Humanist: Vol. 1 Num. 929 (929) --------------------------------------------------------- Why not simply let everyone type in the language of their choice? I am guessing that all 'humanist' participants will be able to correctly assess what their offerings will look like and what their readership will be in any given language. 'Standards' have a way of creeping into all kinds of open communication - the next thing you know, someone might complain about Sebastian Rahtz's jokes....... Brian Molyneaux (AYI004@UK.AC.SOTON.IBM) From: MCCARTY@UTOREPAS (K.P.Donnelly@EDINBURGH.AC.UK) Subject: Languages and diacritics (35 lines) Date: 8 March 1988, 20:13:49 EST X-Humanist: Vol. 1 Num. 930 (930) --------------------------------------------------------- Is anyone out there using the new ISO standard, IS 8859/1? This is an 8-bit extension to ASCII which includes the accented characters needed for practically all languages with Latin-based alphabets, in both upper and lower case, as well as other useful things such as "pound", "cent", "half", "superscript 2", and "degrees". It has been an ANSI standard for some time, and is basically the same as the "DEC multinational character set" on VT220 terminals, which will no doubt become the de facto standard like the VT52 and VT100 before them. So it looks like being the answer to the problem of diacritics. The only snag is that there are all sorts of obstacles in the way of 8-bit working at present. Mail messages containing 8-bit characters work OK within the computer I am using, but when I tried bouncing them around the JANET network in the UK, or even between computers with different operating systems on the local Edinburgh University network, the eighth bit got stripped.
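That stripping is easy to reproduce. The following sketch (a Python illustration, not anything from the JANET mailers; the sample phrase is invented) clears the eighth bit of each ISO 8859/1 byte, as a 7-bit mail link would:

```python
# Illustration: what a 7-bit mail link does to ISO 8859/1 text.
# Clearing the high bit maps each accented letter onto an unrelated
# ASCII character: 0xE9 (e-acute) becomes 0x69 'i', 0xE5 (a-ring)
# becomes 0x65 'e'.
def strip_eighth_bit(data: bytes) -> bytes:
    return bytes(b & 0x7F for b in data)

original = "café på gaden".encode("latin-1")
mangled = strip_eighth_bit(original).decode("ascii")
print(mangled)  # -> "cafi pe gaden"
```

The accents are not merely lost; each accented letter turns into a different, wrong ASCII letter, which is why the damage is unrecoverable at the receiving end.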
What is happening in "foreign" countries? Is it chaos at present? It looks from Joergen Marker's message as if the Danish "national version" of 7-bit ASCII is being used there. Does this cause confusion within the country as well as outside, with curly brackets appearing on screens and printers? Kevin Donnelly From: MCCARTY@UTOREPAS ("Jeffrey C. Sedayao" and Ben Shneiderman) Subject: Common LISP/INGRES interface; course on hypertext (53 lines) Date: 8 March 1988, 20:28:50 EST X-Humanist: Vol. 1 Num. 931 (931) Extracted from IRLIST-L, with thanks. Announcing the availability of CLING, a Common LISP INGRES interface. CLING is a Common LISP package that permits a user to manipulate and query an RTI Ingres database. Databases can be created and destroyed, and tuples appended and retrieved, all with Common LISP functions. Versions for Sun Common LISP (Lucid) and Franz Allegro Common LISP are available. CLING can be retrieved via anonymous FTP from postgres.berkeley.edu. Jeff Sedayao ..ucbvax!postgres!sedayao sedayao@postgres.berkeley.edu ------------------------------ . . . The University of Maryland University College Center for Professional Development presents HYPERTEXT: A NEW KNOWLEDGE TOOL A 3-day course taught by Ben Shneiderman, Charles Kreitzberg, Gary Marchionini, and Janis Morariu, May 9-11, 1988. This course presents hypertext systems and concepts in order to facilitate the development of hypertext applications. Participants will learn and use available systems, understand implementation problems, recognize which applications are suitable, and design knowledge to fit hypertext environments. [More information available on the file-server, s.v. HYPRTEXT COURSE.] From: MCCARTY@UTOREPAS (Michael Sperberg-McQueen) Subject: Languages (46 lines) Date: 8 March 1988, 20:33:02 EST X-Humanist: Vol. 1 Num.
932 (932) --------------------------------------------------------- I apologize to Hartmut Haberland for replying in English, but the answer to his question is yes, there are HUMANISTs other than those in Scandinavia who read Danish. (Not all of us are even Kierkegaard scholars!) Surely no one should feel apologetic for posting notes or notices on HUMANIST in languages other than English, any more than they should feel apologetic for publishing articles in other tongues. The obvious advantages of having one's note more broadly understood will suffice to encourage those who can, to write in 'common' languages; we certainly don't need, as a group, to create any further rules or apply any further pressure. It would be a shame to lose the potential contributions of all those who read English but are shy about writing it. (I feel that way about my Danish; why shouldn't someone feel that way about their English?) If a sender wishes a note to have a broader distribution (or at least a broader readership) than is possible in the original language, why should not other HUMANISTs supply the translation? May I propose that, *if* HUMANIST needs conventions governing the language of contributions (and I am not sure we do), we adopt these: 1. contributions may be made in any language chosen by the sender; 2. a translation into another language may be appended to any message by the sender, if desired; 3. if the sender wishes for a translation but cannot supply it personally, a request for translation into a more commonly understood language may be made part of the message; any HUMANIST able and willing to undertake the translation is then encouraged to do so (and to sign the translation). I would gladly have said all this in Danish, but I cannot write Danish that well. Michael Sperberg-McQueen, University of Illinois at Chicago From: MCCARTY@UTOREPAS (John J. Hughes) Subject: Help needed re SPIRES and PRISM (29 lines) Date: 8 March 1988, 20:35:09 EST X-Humanist: Vol. 1 Num.
933 (933) --------------------------------------------------------- Dear HUMANISTs, Someone involved in producing specialized musicology databases recently wrote and asked what I know about humanities databases . . . that are (a) stored in SPIRES (or a SPIRES-type database such as PRISM) and (b) can be accessed or searched remotely by users other than the database's creators or administrators. Can anyone help with this request? Thanks. John John J. Hughes XB.J24@Stanford From: MCCARTY@UTOREPAS (Sebastian Rahtz) Subject: Relational databases (47 lines) Date: 8 March 1988, 20:43:44 EST X-Humanist: Vol. 1 Num. 934 (934) --------------------------------------------------------- I'd appreciate a fuller explanation of what Jim Coombs means by a relational database that "supports full text fields"; it sometimes seems to me that people fail to differentiate between the database back-end and the front end. If Jim has to define a database structure in which every word in a sentence is held in a separate tuple with information about it, it will not be directly usable, but someone can write a front-end that makes it look sensible. And what's wrong with "every word a tuple", aside from possible considerations of efficiency? My point is that I think Jim wants a full-text front-end, not a full-text database. Sebastian Rahtz PS Since Jim re-opened the SGML quagmire, can I pose this one to the community? Is an em-dash "punctuational markup" (Coombs et al. in ACM Comm Nov 87), or is it presentational markup? I write an aside---as indeed I may do at any point---and indicate it with em-dashes, but if I were a Frenchman or a German or a Tamil speaker, might I not use a different typesetting convention? Ergo, em-dashes must be replaced by descriptive markup, to indicate "parenthetical aside", must they not? Or is the choice of em-dash, brackets, footnote or whatever a genuine function of the writer's meaning, in which case it _is_ punctuational markup like a full-stop.
Consider also this: a short list such as "I like a) cats b) food c) sex" appropriately appears in-line, and if I were an SGML purist I'd have done it with descriptive markup. If I went back and expanded that list so that each item was more fully described, it should probably be expanded to a full 'enumerated list'. But my intention remains the same: to list my favourite things. So is the change from 'inline list' to 'full list' up to me or my designer? Is it a change in intent or presentation? There seems a horrid possibility that the SGML purist would tell me that the software should examine my list and make its own decision based on the length of the list. Would anyone care to comment? From: MCCARTY@UTOREPAS (Dr Abigail Ann Young) Subject: Languages, natural and others (81 lines) Date: 8 March 1988, 20:47:34 EST X-Humanist: Vol. 1 Num. 935 (935) --------------------------------------------------------- 1. What can you do in C you can't do in .... When it became obvious that someone at our project was going to have to settle down and write some computer programmes to manipulate strings of codes, and I was elected, I wrote them (or tried to write them!) in BASIC, mostly because it was there, already bought and paid for with our IBM PC. They were very clumsy programmes, and they took a long, long, looong time to run. Even after I learned enough to write less clumsily, they still took a long time to run. Then I heard about a "real" programming language, which was very hard, called C, in which one could write programmes which ran FAST, and I persuaded my seniors to buy a C compiler and two books on the subject on the strength of that one feature!! But it worked; it turned out to have been an intelligent, efficient choice, and despite the fears born of ignorance, it wasn't very hard to learn.
The C versions of the old BASIC programmes ran infinitely faster; as our needs have changed and the coding strings were modified, the C programmes proved quite simple to change. And I became very fond of C (one's favourite programming language, like one's favourite word processor, is chosen on the basis of indefinable criteria, like one's choice of chocolate over vanilla ice cream). Like Latin, my favourite natural language, it has an elegant, structured, precise syntax. And it encourages even a novice to produce the most elegant solution to a given problem. Now, this is a very subjective judgement, and I'm sure there are plenty of other languages which do the same thing; I just happened to learn C first. I suspect that in the end most choices of a language are equally subjective. I do worry about my affection for C in light of the automobile definitions posted. 2. Unilingual vs. bi- or multi-lingual communications I just want wholeheartedly to support those who have spoken in favour of encouraging HUMANISTs to post in languages other than English, and to do so with whatever diacritical marks they feel appropriate. I wonder how those who decried the need for accents would feel if asked to return to the days of ALL UPPER CASE TERMINALS AND 'DISPLAYS' -- AFTER ALL, IT DOESN'T INTERFERE WITH COMPREHENSION, DOES IT? I don't want to be part of a North American unilingual ghetto (after all, I live in a bilingual country!), although to be fair, I don't think anyone was seriously suggesting that HUMANIST be North American (i.e., US Websterian) English only. But those of us who do not live in Europe or Asia tend to fall into the trap of thinking that the relatively small number of "internationally used" scholarly languages - English, French, German - are all there is.
We don't very often encounter even the other Romance languages (except Spanish in some parts of the States, I suppose) in scholarly discourse, far less Scandinavian and Slavic languages, and as for the languages of Asia! And we don't know those languages, for the most part. (The exceptions are, of course, the people who teach those other languages.) I think what is really worrying the people who expressed doubts is the fear of missing out on an interesting conversation! I will certainly be running that risk if the conversation goes outside of English and French, but I would rather take the chance than create an atmosphere in which people feel they must communicate in a foreign language (e.g., English or French only) or be socially ostracised by HUMANIST. An alternative, of course, is to return to a dead language (such as Latin) for the language of scholarly discourse, thereby preferring no one current natural language over another.... Ego huic proposito studeam! Abigail Young young at utorepas From: MCCARTY@UTOREPAS (Richard Giordano) Subject: The COBOL of natural languages.... (20 lines) Date: 8 March 1988, 20:50:53 EST X-Humanist: Vol. 1 Num. 936 (936) --------------------------------------------------------- Maybe one solution in a multilingual environment is to translate a message into Esperanto. Weren't most of us in Esperanto clubs in high school? More seriously, I do agree with Sebastian Rahtz, who argues for the most open environment we can possibly achieve on HUMANIST. Richard Giordano RICH@PUCC From: MCCARTY@UTOREPAS (Richard Goerwitz) Subject: hvem kan laese det her (39 lines) Date: 8 March 1988, 20:52:09 EST X-Humanist: Vol. 1 Num. 937 (937) --------------------------------------------------------- Well, I'm not involved in Kierkegaard research, but I know a Germanic language when I see one ("svar" or not!). Please, don't shy from posting. Reading your posting was just plain good fun (I'm in Semitic languages, and so it offered an interesting diversion...).
It's sad to see Americans labeled as "parochial," as John Roper so baldly called us. We are linguistically isolated here not on account of some great moral failure, but on account of a thing called the Atlantic. The size of our country doesn't help, either. Admittedly, we are not the most open-minded of societies. Now that we are not dominating the world economy - in fact, we seem to be in a bit of trouble - we will surely be looking outward more to see what the rest of the world has to offer. Perhaps some day we will come up to Mr. Roper's standards of pluralism! -Richard L. Goerwitz goer@sophist.uchicago.edu !ihnp4!gargoyle!sophist!goer From: MCCARTY@UTOREPAS Subject: First I plead, then I agitate, question, and comment (42 lines) Date: 8 March 1988, 21:00:22 EST X-Humanist: Vol. 1 Num. 938 (938) The volume of mail generated by HUMANIST is getting large, nicht wahr? Yesterday, in fact, I brought down a machine-to-machine data channel here by sending out about 6 to 8 messages in rapid succession to our 210+ membership. So, I've been told that I must not play such pranks again during the day. The problem is that I must send out several during the day or there will be nothing left of my evening. So, I'm thinking of ways to make my labour more efficient. Thus follows an agitated exclamation, and a question to you; finally, a comment. Please, *please* address all messages to HUMANIST to this address: HUMANIST@UTORONTO >>not<< HUMANIST@UTOREPAS or MCCARTY@UTOREPAS. As I explained earlier, the latter error leads to a bad habit that soon will cause grief. The former causes the local postmaster work, since the message goes into his account first (where it may rest for days, or minutes, depending), and then it causes me extra work, because I must peel away the commentary and various bits of garbage that get attached to the message. I may have to drag out the garbage can soon and designate it my dead letter office.
--- Is the volume of mail getting too much for you, or are you enjoying it? If the former, then may I suggest that we do our best to stick to a single topic until it (the topic!) collapses? Any other suggestions? --- In the matter of languages, vox populi vox Dei, and cheers to the populus! From the very beginning we have tried with what strength we have to make HUMANIST truly international. I'm glad to see that you all agree. Willard McCarty mccarty@utorepas From: MCCARTY@UTOREPAS Subject: Prophetic warning or survival of the fittest? (19 lines) Date: 9 March 1988, 00:31:27 EST X-Humanist: Vol. 1 Num. 939 (939) [The following is extracted from a recent resignation from HUMANIST. W.M.] I find the volume of communication on HUMANIST too much to bear. After skipping one day of checking my mail, I find 71 messages waiting for me, all of them HUMANIST. I simply don't have time for this much chat and reading. I'm sorry, but please remove my name from the HUMANIST distribution list. From: MCCARTY@UTOREPAS (Norman Zacour) Subject: Speaking in tongues (19 lines) Date: 9 March 1988, 00:36:02 EST X-Humanist: Vol. 1 Num. 940 (940) --------------------------------------------------------- I'm sure that Michael Sperberg-McQueen has summed up the opinion of all of us quite neatly, and his suggestion is most agreeable, viz., that those of us who wish might ask other HUMANISTs to supply a translation - just so long as no one has to translate Sebastian Rahtz's Latin! Might there be some relationship between the vigour of this discussion and Willard McCarty's plaintive cry for help? Norman Zacour (Zacour@Utorepas) From: MCCARTY@UTOREPAS dow@husc6.BITNET (Dominik Wujastyk) Subject: Volume of HUMANIST (46 lines) Date: 9 March 1988, 00:38:55 EST X-Humanist: Vol. 1 Num.
941 (941) --------------------------------------------------------- I'm afraid I am finding the volume of recent HUMANIST mail oppressive, and I am getting very free with the "d" key, which means that if someone says something interesting after about the second para, I don't get to see it. I feel like a relative newcomer to HUMANIST (although in the world of computing a month is a decade), so what I have to say below may have been thrashed out already. If so, forgive me. I also subscribe to TeXhax, as I expect other HUMANISTs do, and although it too can get pretty voluminous, I feel much better about it, and not oppressed. For those who don't know, Malcolm Brown collects about 20k of letters into a single document, adds a header with a list of the subject headers, date, issue number, and occasional editorial comments, and sends it out. It appears on average once or twice a week. It feels much like receiving a magazine or the latest issue of a journal: a little thrill of pleasure in anticipating what people are now saying, and what is new. I also find it *much* easier to skip stuff that is not of interest, because of the "contents page" at the beginning, and because I read it with an editor or lister, which is much faster than paging through mail. In contrast, I always feel I'm wading through HUMANIST. I suppose I could just create my own HUMANIST magazine by saving fifteen messages in a file before reading it, but it still wouldn't have the contents header (some Icon buff could knock out a prog to do that, no doubt). But does anyone else share the leaning I have for the TeXhax type of thing? A certain spontaneity would perhaps go -- perhaps not a bad thing --- oops, delete delete delete ... Dominik From: MCCARTY@UTOREPAS Subject: A sick joke? (20 lines) Date: 9 March 1988, 09:34:08 EST X-Humanist: Vol. 1 Num.
942 (942) I have received an angry message asserting that there is no little boy dying of cancer who wants to get his name in the book of records by receiving more postcards than anyone else. I have no way of verifying either the original request or this allegation, but the fact that a hoax of such a kind has occurred before may lend weight to the latter. So, I leave it to your judgment how to respond. Willard McCarty mccarty@utorepas From: MCCARTY@UTOREPAS Subject: Too much! and more (81 lines) Date: 9 March 1988, 21:37:05 EST X-Humanist: Vol. 1 Num. 943 (943) Dear Colleagues: We seem to have reached a turning point with HUMANIST. Of the dozen or so respondents to my note about the volume of mail, only one expressed approval of the quantities we have all received in the last few days. Some have said, simply, "Too much!" Some have added a wistful regret that HUMANIST might die or that they might soon be forced to drop out. Some have made suggestions about how the problem might be averted. For all of this I'm grateful. What a fascinating social experiment! There are two problems, yours (receiving and reading the mail) and mine (reading, processing, and sending the mail). I know mine (about 2 hours a day doing nothing but); I can imagine yours. What other solution to both can there be but to reduce the daily volume of mail? The number of messages you receive could be reduced by me sorting and bundling the messages according to topic, removing the sometimes voluminous headers, and sending these bundles out. One of you suggested that there might be "digesting software" that would automate this process for a VM/CMS system. If so, *please* let me know about it. If someone is willing to write the exec according to the specifications I could supply, *please* let me know -- and may he or she be blessed forever! Alone, unaided by automatic means, I certainly cannot do the digesting. We could subdivide HUMANIST into two or more separate lists by topical area. 
The flaw with this plan seems to me that it is based on a misunderstanding of what HUMANIST is. We want, do we not, a discussion group that can range freely over topics of all sorts, dwell on them as long as we want, then move on to something else? If a coherent subgroup (e.g., those devoted to issues of textual encoding) want to set up a discussion group, Steve Younker and I will gladly help in any way we can. But I don't think doing that will solve the problem. I did suggest earlier that we try to stay on a single topic until it is exhausted, but this notion was criticized for its obvious flaws -- and so the critic missed the point, I think. Take the obvious analogy, a large, free-wheeling seminar: there do we not in one way or another attempt to stay on topic? Don't we usually regard two or more simultaneous conversations on different topics as distraction? and focus as the way to turn random babble into something powerful and illuminating? In short, I see no real solution that does not involve some kind of self-discipline mixed with courtesy. We have done well in that regard, I've been told, but we've passed some sort of threshold that's forcing the issue once more. Everyone says that HUMANIST is a good thing, but it is only what we make it from day to day. So far we've not had to think very much about what's relevant and what's not. Now I think we do. So, what I propose is this: whether or not someone comes up with a technological aid to handling quantities of mail, that we discuss exclusively computing in the humanities in the professional sense, that we try as much as possible to stay with one topic at a time (requests for information, conference announcements, and the like excepted), and that we exercise conversational restraint on ourselves according to our best judgment. Lest we fall into the common error of becoming verbose about our verbosity, please direct all comments about this to me. I'll fairly summarize what people say and post the results. 
I'm open to arguments that we should do other than I have suggested. Willard McCarty mccarty@utorepas From: MCCARTY@UTOREPAS (James H. Coombs) Subject: Relational dbms; FE vs BE (87 lines) Date: 9 March 1988, 21:42:13 EST X-Humanist: Vol. 1 Num. 944 (944) --------------------------------------------------------- Sebastian Rahtz writes: > I'd appreciate a fuller explanation of what Jim Coombs means by a > relational database that "supports full text fields"; it sometimes > seems to me that people fail to differentiate between the database > back-end and the front end. I mean a relational database management system, not a database. As far as I'm concerned, that means a backend. I suppose that I'm being sloppy, though. I use the SQL frontend to Ingres, and I use my own frontend to Ingres. Strictly speaking, the SQL frontend is part of Ingres, which is a relational database management system. OK, writing that helped me see that you are correct. I don't want an Ingres frontend to handle the full text, because I would then have to handle it from my own frontend. I already have that problem with SQL (Ingres' SQL frontend permits the dynamic construction of SQL statements; the ESQL interpreter does not; presumably the backend doesn't know the difference, so I blame it on the interpreter). > If Jim has to define a database structure in which every word in a > sentence is held in a separate tuple with information about it, it will > not be directly usable, but someone can write a front-end that makes > it look sensible. And what's wrong with "every word a tuple", aside from > possible considerations of efficiency? Well, there are at least three things wrong. 1) *I* have to write the frontend (as well as my own server/backend so that I can trick Ingres into opening two databases at once). 2) Even if someone else were to do it, we would have many solutions to the same problem. It's a common problem and should be solved once for all databases (for the majority?).
3) "Every word a tuple" is ambiguous, I believe. It could mean that the tables contain first-level indices, which the frontend generates. Or it could mean that the tables contain the entire text, not just indices into the text. For the simple version of my application, I use the first approach; it requires less storage and is more efficient. The second approach---where the database contains the entire text---introduces complications with spacing and markup. Do we have a tuple for every punctuation mark? A tuple for spaces between words? How do we handle presentational markup? (I guess we would have to rule it out.) In addition, we would have to have a separate table to define the range of the definition text, for example (start=54; end=82). So far, range searches in Ingres have been relatively slow. (In part, I'm bringing up efficiency again, but there's also an issue of complexity.) I thought about the tokenized approach. The biggest problem, to my mind, is that it doesn't properly capture the state of the universe that I'm modeling. The value of a definition is the text in the definition. It's similar to words, whose values are character strings. We typically don't decompose words into one tuple per character. If we wanted to search for individual characters, then I guess we would have to (with current dbms). Notice, however, that dbms developers provide pattern-matching facilities so that we can decompose words within fields. (We can use these same facilities to search for words within phrases, but the performance is unacceptable.) > My point is that I think Jim wants a full-text frontend, not a full-text > database. I will just repeat that point about the multiplicity of front ends. I'm writing the frontend, so I want Ingres to do the work (just as I want Ingres to determine access paths and optimize queries).
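The first approach, tables holding first-level word indices that the frontend generates, can be sketched in a few lines. (This uses sqlite3 purely as a modern stand-in for Ingres; the table and column names are invented for illustration.)

```python
# Sketch of the "first-level index" approach: the frontend tokenizes the
# text and stores one (word, document, position) tuple per word, leaving
# the full text itself outside the table.  sqlite3 stands in for Ingres;
# the schema is invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE word_index (word TEXT, doc_id INTEGER, pos INTEGER)")

def index_text(doc_id: int, text: str) -> None:
    """Tokenize crudely on whitespace and store one tuple per word."""
    for pos, word in enumerate(text.lower().split()):
        conn.execute("INSERT INTO word_index VALUES (?, ?, ?)",
                     (word, doc_id, pos))

index_text(1, "the quick brown fox")
hits = conn.execute(
    "SELECT doc_id, pos FROM word_index WHERE word = ?", ("fox",)
).fetchall()
print(hits)  # -> [(1, 3)]
```

The complications Coombs lists (punctuation, spaces, presentational markup) are exactly what this crude whitespace tokenizer ignores, which is why the second approach, storing the entire text in tuples, is so much harder.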
In addition, I want it to be possible for someone to write a hypercard frontend (TCP/IP is becoming widespread, so they can use my server, which has as much as possible of the intelligence in it---e.g., user asks for "ceilings"; we don't find it; we try "ceiling"---NO PLANS for other frontends, just a design philosophy). I hope this clarifies things. I suppose I get confused because I have my own frontend and backend as well as Ingres' backend. --Jim From: MCCARTY@UTOREPAScbf%faulhaber.Berkeley.EDU@jade.berkeley.edu (Charles Faulhaber) Subject: Help re SPIRES and PRISM (18 lines) Date: 9 March 1988, 21:45:57 EST X-Humanist: Vol. 1 Num. 945 (945) --------------------------------------------------------- Talk to Tony Newcomb (Dept. of Music, UC Berkeley) who is engaged in developing a full text data base of Italian poetry with music between 1400 and 1600. It runs on SPIRES. It is being done in collaboration with Italian scholars (see the recent note on computer activities in Italian musicology), and if not accessible now from Italy is fully intended to be so accessible. From: MCCARTY@UTOREPASAllen Renear Subject: Markup (49 lines) Date: 9 March 1988, 21:47:32 EST X-Humanist: Vol. 1 Num. 946 (946) --------------------------------------------------------- I just can't resist markup... Sebastian Rahtz wonders whether an em-dash is punctuational or presentational markup. I'd say an em-dash, like a comma or full-stop, is punctuational markup. But a "soft" hyphen -- one that indicates that a word is broken across lines -- would be a good example of something that may look like punctuation, but is in fact presentational markup. Notice how the number of soft hyphens varies as the design varies (eg with line length, font, hyphenation patterns &c.) But the number of em-dashes varies primarily with authorial decisions. Can punctuational markup be replaced by descriptive markup? Absolutely. It can be replaced by descriptive -- or referential -- markup. Should it? 
Probably not in general. But sometimes it is useful to do so. And it is interesting to speculate on what the advantages would be. The basic idea is, of course, that the *role* the punctuation plays would be formalized. Then we could easily switch between British and American full-stop/quotation mark conventions; French, German, or Anglo-American quotation symbols; have our search programs look for words only in direct quotations; and have our text editors or formatters handle conventions for representing nesting quotations -- allowing us to cut and paste without worrying about varying singles and doubles. In my typesetting days I always insisted on descriptive markup for quotations. Of course I did let the authors and editors indicate with different markup which were to be displayed and which inline -- so things are not so simple. On this topic -- markup purity -- notice that AAP has, like Waterloo GML, for quotation and for long quotation. Sebastian also wonders what happens as an inline list gets longer during editing. Should the display/inline decision be up to you the author or your designer? Your designer -- no question about it. But this is just a provocative way of saying it is a *design decision*. "SGML purists" don't really want to remove any power or authority from authors, they just want you to call a list a *list*. Format it displayed or inline, wysiwyg or batch; have the decisions made by you, your designer, or your software. The SGML *purist* removes himself from these decisions -- after having made them all possible. From: MCCARTY@UTOREPASHans Joergen Marker Subject: 8-bit character sets (35 lines) Date: 9 March 1988, 21:50:18 EST X-Humanist: Vol. 1 Num. 947 (947) --------------------------------------------------------- Answer to Kevin Donnely: Yes we have chaos over here. In the mainframe world texts are usually corrupted when transmitted from one computer to another. 
The Danish I wrote was appearing on my screen with beautiful Danish vowels, and would have appeared on my printer in the same way. But if I try sending a message on the network with my second christian name spelled correctly "J%rgen" (the second letter is an o with a slash over it) the message will be rejected with something like "ILLEGAL FIELD". Speaking of microcomputers, Danes and Norwegians are still wondering why IBM was so preoccupied with providing a y 'umlaut' that it was impossible to provide an %. The Danish-Norwegian IBM-character set uses the cent and Yen signs for the % in lower and upper case. This gives rise to a number of interesting malfunctions of programs, screens and printers. In Denmark this group of errors is popularly known as the Yencent errors, giving a play on words with the common Danish family name Jensen. About the y 'umlaut' the latest theory is that it should be used among some remote New York tribes, utilising the letter as an interesting abbreviation for the city name. Hans Joergen Marker From: MCCARTY@UTOREPAS Hartmut Haberland Subject: Languages, HUMANIST, and mail pollution (53 lines) Date: 9 March 1988, 21:53:31 EST X-Humanist: Vol. 1 Num. 948 (948) --------------------------------------------------------- This may be my last contribution to Humanist. Although I enjoy it thoroughly, I spent almost an hour this morning reading, discarding, forwarding items I got through the net. This is simply more time than I can spare, and reading the message of the poor person who found 71 items in the mail after only one day's absence from the terminal gave me the shivers. It is great to look into one's pigeonhole in the morning and you see `030 RDR' or `027' RDR etc., all that mail, and is it really meant for you? but then the difficult bit comes: to sort 'em out.
(You may have noticed that I write this in English, more on this below.) Of course, it is a myth that people in general and Americans in particular are pathologically monolingual. Some are, but far fewer than one realizes. (I would like to see statistics on that, by the way. I strongly believe that the pure, absolute, no-way-out monolingual is a very rare species, probably overrepresented in huge and wealthy countries, but virtually absent from large parts of the world.) Of course, one should use as many languages as one can manage to read and, possibly write. I am just afraid that although most Humanists can recognize a language when they see it (in order to carry Goerwitz' point a bit further), most of them will feel quite comfortable writing English anyway. One of the problems is diacritics. I admit it is fun to write Greek on an ASCII terminal (ever seen how they do it? it's amazing, kind of mixture between transliterated and phonetic, like this: Agaphte Hartmout, kharhka (or xarhka, being interpreted as Xi or Chi according to context) poly (or polu) pou elaba to gramma-sou. To diktyo mas leitourgei ... etc. etc.) It is much less fun to write Danish on an ASCII terminal, especially since you never know what goes through and what not (curly brackets, yes, but also dollar signs instead of exclamation marks etc.). When I get letters from Germany in German, the a and o umlaut always come through as ae and oe, but u umlaut as a 'bolle-a' (a with a circle on top), and the Eszett (long s) as a tilde ... So all in all, this is not so much a matter of principle but of convenience. I am certainly glad to receive mail in all languages I can handle (English, French, Danish, German, Norwegian, Greek, Swedish, Dutch, Italian), but I can't promise that I will answer in the same language (and if you write to me in Finnish, I have to fetch my dictionary first). This letter being very long, it is an excellent contribution to information pollution.
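Both complaints, Marker's cent/yen substitution and the mangled umlauts in mail from Germany, are code-page mismatches: one byte, two national character sets. A minimal demonstration using Python's codec tables for IBM code pages 437 (US) and 865 (Danish/Norwegian), which preserve the PC character assignments of the period:

```python
# Byte 0x9B is the cent sign in IBM's US code page (437) but the
# Danish/Norwegian o-slash in code page 865; 0x9D is likewise
# yen vs. capital O-slash -- the source of the "Yencent" errors.
for byte, note in [(0x9B, "lower case"), (0x9D, "upper case")]:
    us = bytes([byte]).decode("cp437")
    dk = bytes([byte]).decode("cp865")
    print(f"0x{byte:02X}: cp437={us!r}  cp865={dk!r}  ({note})")
```

A Danish ø typed on a cp865 machine and displayed on a cp437 machine (or printed through a cp437 printer font) thus surfaces as ¢, exactly the malfunction described above.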
I won't say much more now, but I urge everybody who feels the same to think about possibilities to restrict the information flow from HUMANIST. It's great fun, but sometimes you have other things to do than those which are fun. So much for today. Thanks for listening (who got this far, I wonder). Hartmut Haberland (rucch at neuvm1) From: MCCARTY@UTOREPAS Sebastian Rahtz Subject: America the Golden (17 lines) Date: 9 March 1988, 21:55:02 EST X-Humanist: Vol. 1 Num. 949 (949) --------------------------------------------------------- And there was I thinking that America had a wider racial and linguistic mix than Norfolk! Would Richard Goerwitz care to comment on that complete lack of Spanish contributions to Humanist from his country? On his side of the Atlantic there are far more Spanish speakers than ours! Sebastian Rahtz From: MCCARTY@UTOREPAS Andrew Oliver Subject: Electronic Henry of Avranches (16 lines) Date: 9 March 1988, 21:57:27 EST X-Humanist: Vol. 1 Num. 950 (950) --------------------------------------------------------- A colleague at the centre for medieval studies at the University of Toronto is working on the Latin verse of the 13th century poet, Henry of Avranches. He wishes to know if any of Henry's poetry exists in electronic form, and if so, where and under what conditions he might obtain it. From: MCCARTY@UTOREPAS John J. Hughes Subject: Mac text retrieval programs (45 lines) Date: 9 March 1988, 22:02:09 EST X-Humanist: Vol. 1 Num. 951 (951) --------------------------------------------------------- Dear HUMANISTs, A week or so ago, I asked for help in locating text retrieval and/or concording programs for Macintoshes. Alas, the replies were few--very few. This leads me to conclude that there aren't many such programs for Macs. In fact, Sonar is still the only commercial text retrieval program for the Mac that I have found. And Mark Zimmermann's BROWSER, aka TEXAS 0.1, is the only noncommercial such program. That's a sorry state of affairs.
(I'll have more to say on these two programs after I review and compare them.) Although there are many text retrieval programs for IBM PCs, and although I have working copies of most of them (I'm reviewing several of the "most powerful" ones for the next issue of the _Bits & Bytes Review_, including the ones mentioned above for the Mac), I am discovering that most of them are not designed for scholarly use or are frightfully slow or treat a whole file as a single record (!) or are crippled in their search functions or suffer from some serious design flaw or .... So far in my investigations, WordCruncher still seems to be "state of the art." As much as I appreciate that program, I know that "we" can do better. For example, Yaacov Choueka's/IRCOL's text retrieval software sounds like a significant "step up," and according to recent correspondence with Yaacov, IRCOL is giving some thought to creating an MS-DOS version of that software. (That's neither a rumor nor a promise.) Perhaps interested HUMANISTs should e-mail Yaacov their encouragement for producing such a program (Yaacov, please forgive me if you are inundated with HUMANIST mail!). John John J. Hughes XB.J24@Stanford From: MCCARTY@UTOREPAS Robin C. Cover Subject: Correction: PAT and the NOED (46 lines) Date: 9 March 1988, 22:11:28 EST X-Humanist: Vol. 1 Num. 952 (952) --------------------------------------------------------- In a recent posting I referred to a program called "PAT" which I said was being developed at the "University of Toronto." CORRECTION: this program is being developed at the University of Waterloo in connection with work on the NOED (New Oxford English Dictionary), at the Centre for the New Oxford English Dictionary. My information about PAT comes from a document supplied by Darrell R. Raymond, and is an internal memorandum written by Gaston H Gonnet, "Examples of PAT applied to the Oxford English Dictionary" (OED-87-02; July 28, 1987), 34 pages.
According to the developers at the University of Waterloo, "Pat is a program based on the patricia tree data structure. The main virtues of Pat are (1) its speed: Pat can find the matches to any fixed string in the OED in under a second (2) its method of specifying queries: all Pat queries are searches for string prefixes, hence Pat can find repetitions, phrases, and do concordancing merely by specifying the appropriate prefix (3) its complete indifference to the content of the corpus: since Pat knows nothing about words or any other semantic structures in the text, it can be easily applied to virtually any kind of data that is representable as text." Deepest apologies to University of Waterloo for this careless citation in my posting. Text retrieval methods being developed at the Centre for the New Oxford English Dictionary show great promise for general applications in textual studies, so we may look forward to hearing more from the Centre in the coming months. The mailing address is: Centre for the New Oxford English Dictionary University of Waterloo Waterloo, Ontario, CANADA N2L 3G1 (Professor Robin C. Cover ZRCC1001@SMUVM1.bitnet) From: MCCARTY@UTOREPAS Bill Winder Subject: Linguistic processing in Prolog (46 lines) Date: 9 March 1988, 22:16:04 EST X-Humanist: Vol. 1 Num. 953 (953) --------------------------------------------------------- Sebastian Rahtz's remark about dynamic definition of predicates seems to concern (what I call rightly or wrongly) the interpretative level. To get such features in Turbo Prolog, one must parse expressions. The question is dealt with on page 150 of the user's manual, in the context of the call predicate. However, I suppose the point is that in Turbo Prolog, there will always be (unnecessary) complications when using such functions. That complication has perhaps some deleterious influence on the way I approach problems. For the moment, however, the Turbo Prolog solution does not seem unreasonably circuitous.
In fact, the case you mention, Sebastian, is perhaps too simple to be telling, since it boils down to an n-tuple relation between strings. It can be stated as a database predicate concerning lists of strings, such as trans(R,Subject,Object) if dynamic_clause([R,Subject,O1]), dynamic_clause([R2,O1,Object]). The two clauses above give the conclusion trans(loves,Sebastian,music), which could be asserted dynamically also. A more telling example would be one where a non-database predicate ("if" containing construction) is constructed on the fly. I don't think there is any direct way of doing so in any prolog. Such a feature would allow the program to rewrite itself entirely. In some ways, that is possible through the database clauses, but not to the extent of a true interpreted language, where the program code can be manipulated. Bill Winder Utorepas From: MCCARTY@UTOREPAS Subject: Conference posting to the file-server (14 lines) Date: 9 March 1988, 22:24:42 EST X-Humanist: Vol. 1 Num. 954 (954) Announcement of an International Conference on COMPUTER-MEDIATED COMMUNICATION IN DISTANCE EDUCATION Venue : OPEN UNIVERSITY , MILTON KEYNES, UK Dates : October, 8 - 11, 1988. From: MCCARTY@UTOREPAS Subject: Conference posting to the file-server (8) Date: Thu, 10 Mar 88 11:28:54 EST X-Humanist: Vol. 1 Num. 955 (955) CUNY FIRST ANNUAL CONFERENCE ON HUMAN SENTENCE PROCESSING MARCH 24 - 27, 1988 From: MCCARTY@UTOREPAS Keith W. Whitelam (UK.AC.STIR.VAXA) Subject: Computing for humanities students (20) Date: Thu, 10 Mar 88 11:40:20 EST X-Humanist: Vol. 1 Num. 956 (956) --------------------------------------------------------- An appeal for help to fellow HUMANISTS. We are exploring the possibilities of setting up an introductory course for computing for arts and humanities students. Questions have been raised about academic content! I would be most grateful for information on courses that are run elsewhere. In particular: 1. course content 2. assignments 3. methods of assessment 4.
Is it taught by Arts and/or Computing Science staff? 5. any other information that you think appropriate. If replies are sent direct to me, I will summarise the responses for HUMANIST asap. Thanks in anticipation, Keith W. Whitelam From: MCCARTY@UTOREPAS "John B. Haviland" Subject: Anaphora in Homeric Greek by computer (35) Date: Thu, 10 Mar 88 11:47:26 EST X-Humanist: Vol. 1 Num. 957 (957) --------------------------------------------------------- I have an undergraduate student who proposes a joint Classics/Linguistics honors thesis, based on a machine readable corpus of Homer which he plans to examine for patterns of anaphora. He is interested both in syntactic issues, and in stylistic detective work on the problem of Homeric "strata." We can get the Homer, I think from UC Irvine, in a somewhat idiosyncratic Greek format (with limited diacritics) on 9-track tape, to be read by Vax and thence probably to Macintosh. I have two questions: (1) Does anyone have suggestions about software (or alternate sources for the Greek corpus, for that matter) that might be appropriately employed? (Reed has multiple Macs of all sizes and shapes, as well as the Vax running bsd UNIX. I think I am the only person on campus who uses equipment other than this.) The student seems reasonably fluent in C, and is currently fiddling with our Vax's Franz Lisp. (2) Is this old news? Is the student doing something that has already been done? I am no classicist, so I am ignorant of the answer. I appreciate all advice; and I have enjoyed the voluminous introduction to Humanist during my first week receiving the discussion. From: MCCARTY@UTOREPAS Richard Goerwitz Subject: More help for Penn OT users (45 lines) Date: Thu, 10 Mar 88 11:44:51 EST X-Humanist: Vol. 1 Num.
958 (958) --------------------------------------------------------- A few days ago, I posted a note saying I had a program that allowed one to a) slice out smaller corpora from the Penn OT texts, and b) have these marked more explicitly as to chapter and verse, while still remaining within the TLG betacode guidelines. This is still available. I now have another program available. This one prints out what the first program outputs. It can also output raw Penn OT texts, though you can't print out a verse here and there with the raw text. It has to be whole books. Sorry, that's just the way their coding scheme works! In any case, this printing program will work with any Toshiba P321-51 type printers (P321 printers must be able to accept downloadable fonts). Other printers will work, but you'll need to design a whole Hebrew font (yuck!). My program has a font that goes with it. The program isn't perfect, in that it can't reproduce all those minute accents. When it finds accents, it will print a tick over the letter to indicate its presence. It also strips out the Westminster textual notes that come at the end of some words. Otherwise, the output is quite readable - considering you're asking a dot matrix printer to do some pretty rough stuff. As usual, I wrote this in Icon. You'll need to have a system that has Icon installed. Everyone ought to have it, since it's free, anyway. Interested parties should drop me a line. You oughta get both programs, by the way, so you can print out lone verses or groups of verses, rather than whole books.... -Richard L. Goerwitz goer@sophist.uchicago.edu !ihnp4!gargoyle!sophist!goer From: Willard McCarty Dana Paramskas, University of Guelph Subject: French-English bilingual computer lexicon (17) Date: Thu, 10 Mar 88 12:04:58 EST X-Humanist: Vol. 1 Num.
959 (959) --------------------------------------------------------- The most complete French-English/English-French dictionary for computer-related terminology is: Terminologie de l'informatique, published by the Office de la langue francaise, Government of Quebec, ISBN 2-551-05790-6 (1983). To quote the blurb and add a footnote to the multilingual arguments: Le present ouvrage contient pres de 12 000 termes couvrant tous les aspects de l'informatique: materiel et logiciel, traitement des donnees, micro-informatique et teleinformatique, sans oublier les diverses applications de l'ordinateur. A cela s'ajoute une bibliographie selective de 150 titres. From: Willard McCarty dow@husc6.BITNET (Dominik Wujastyk)

> Those of you with access to the Unix News system may wish to know that
> the program "Genealogy on Display" has recently appeared on the net,
> uuencoded in areas comp.binaries.ibm.pc and soc.roots. This is the
> Mormon product; a series of linked Basic progs. It is quite good.
>
> Incidentally, perhaps soc.roots (which I don't follow) has a lot more
> information for those of you looking into the problem of managing
> genealogies.
>
> Dominik
> bitnet: user DOW on the bitnet node HARVUNXW
> arpanet: dow@wjh12.harvard.edu
> csnet: dow@wjh12.harvard.edu
> uucp: ...!ihnp4!wjh12!dow

Subject: Genealogical software (21 lines) Date: Thu, 10 Mar 88 12:24:41 EST X-Humanist: Vol. 1 Num. 960 (960) --------------------------------------------------------- From: Willard McCarty iwml@UKC.AC.UK

> I am very grateful for several responses to my query about three weeks ago
> into structuralist models and the biblical text.
> I have tried to thank some
> individuals and/or make contact, but have not got through the "mailer" yet.
>
> I trust that I won't bore you with taking the discussion further in
> an attempt to float a couple of ideas to see what responses come back.
>
> ARE THE ACTANTIAL RELATIONS DISTINGUISHABLE BY SEEKING VERBS?
>
> Is it reasonable in looking at A J Greimas' actantial model
> with its three relationships of knowledge, desire and power
> to anticipate a pattern of semantics displaying
> words/phrases of knowledge, desire and power at the
> narrative level?
>
> Sender--->Object--->Receiver      [knowledge]
>               ^
>               |                   [desire]
> Helper--->Subject<---Opponent     [power]
>
> If so, does the concentration in the narrative on verbs, or
> verb-derived words, seem reasonable on the basis that each
> relationship requires a verb in order to operate?
>
> My thinking at the moment is that for any computerisation to
> be successful, what is important is the relationship rather than the
> actants in the first instance. If a database of relationships for any
> text can be identified, then the locus for identifying the actants is
> defined. This is a development of a hypothesis that in binary
> oppositions, there are three operative areas of interest, namely two
> actants and the nature of the relationship between them, a
> relationship which at whatever point on its spectrum you pause, is a
> boundary at the narrative level.
>
> HOW IS A BINARY OPPOSITION RELATIONSHIP DIFFERENT FROM A BOUNDARY?
>
> If it is reasonable to see the relationship between two binary
> opposites as central to an actantial model, or for that matter to a
> semiotic square, in what way does the quality of that relationship
> differ from a boundary?
> Is it not the case that no matter where you
> stop in that relationship, that stopping point is at that moment a
> boundary between the opposites?
>
> I can see a conceptual assumption in the preceding paragraph
> which some may wish to question - namely that of a linear movement
> between the opposites.
>
> However, in a presence v absence binary opposition, the position
> of Greimas would be that in the middle is "hiddeness". "In the middle" implies
> a linear movement of thought from one binary opposite to the other.
>
> WHAT LANGUAGE WOULD YOU USE?
>
> Fascinating though much HUMANIST discussion on various languages
> has been, it is only adding to the confusion of one HUMANIST now engaging in
> formal academic computing after ten years of self-teaching! Therefore it
> seems appropriate to reverse the situation, and rather than read HUMANIST
> for answers to the search, pose the question directly and see what
> happens!
>
> Given:
>
> (1) Primarily interested in structuralist interpretations;
> (2) little or no computer analysis in this field has happened since
> a spate of activity in the 1970s;
> (3) the programme would be an aid to analysis by the user at this stage;
> (4) the programme would guide the user from a narrative text DOWNWARDS
> to levels beneath the text itself ultimately to a structuralist
> interpretation;
> (5) the programme would save the answers (i) en route and (ii) at the
> end for analysis.
>
> Given those presumptions, which are free to be questioned of course,
> which language would you consider using?
>
> For fear that I will now be given 200+ different languages to look at,
> I'll sign off now!
>
> Ian Mitchell Lambert
> iwml@ukc.ac.uk

Subject: Structuralism and the Bible (92) Date: Thu, 10 Mar 88 12:42:20 EST X-Humanist: Vol. 1 Num. 961 (961) [My apologies that the following has been delayed. It was sent to HUMANIST@UTOREPAS and in consequence strayed into a dusty corner. -- W.M.]
--------------------------------------------------------- From: Willard McCarty Subject: Editorial: Messages by subject (24) Date: Thu, 10 Mar 88 12:50:31 EST X-Humanist: Vol. 1 Num. 962 (962) Dear Colleagues: Today I started using a new mailer that allows me some relatively painless sorting of HUMANIST messages by subject. In consequence, you will receive today two kinds of mail: (1) single pieces, on topics unique to the day; and (2) three collections (on languages, that questionable postcard request, and HUMANIST's current problem with the volume of mail). If -- I repeat, IF -- we can more or less stick to a single topic, and if if if we can all be persuaded to use the SUBJECT line in the header of notes to HUMANIST (put it in by hand if necessary!), then we may survive our own success. One person has promised to write an exec to produce a daily digest, which might be even better than what I can offer now. Meanwhile, however, let me know what you think about the new development. Yours, Willard McCarty mccarty@utorepas From: Willard McCarty Lou Burnard Subject: Retrieval engine (32) Date: Thu, 10 Mar 88 16:52:56 EST X-Humanist: Vol. 1 Num. 963 (963) --------------------------------------------------------- Robin Cover's search for an engine (7 march) is a much more interesting topic of discussion than all this guff about lingo, and much more what I always thought Humanist was all about. Reactions to it have also been very interesting, if only in what they overlook. Viz. that there has for many many years been a whole area of the software industry devoted to the problem of providing access to large unstructured texts. There is an enormous body of research literature on how to search texts. There is even a directory of free text searching software updated annually. The problem is that most of the systems are concerned with providing rapid access to huge online sets of news stories, catalogue records, espionage reports etc. 
Consequently they cost lots of money and lots of computer. Consequently they're not very interesting or innovative. There's a working party of the IUSC (a UK academic computing committee) investigating this area at the moment; I have sent Robin (and will send anyone else who wants it) a copy of its draft report, which at the moment is more of a wish list and an evaluation procedure. Lou Burnard P.S. My own views are unchanged from those outlined in an article I wrote for Lit & Ling Computing last year sometime - you can do anything anyone can think of using indexes, but if you want to stay sane and still have some disk space, then you should be using CAFS or similar. However, as the powers that be have taken away oxford's CAFS I'm now currently experimenting with BASIS, to which JAZBO refers. It's horrible, but it does everything I can think of, so far as I can tell. From: Willard McCarty Subject: Computing for humanities students (20) Date: Thu, 10 Mar 88 16:55:23 EST X-Humanist: Vol. 1 Num. 964 (964) --------------------------------------------------------- There is a survey of existing courses and a selected bibliography. There is a conference scheduled at Oberlin College this summer that deals with the topic. There should not be too many unanswered questions after researching the bibliography in CHum. joe rudman From: Willard McCarty Chuck Bush Subject: Bitnet node change: BYUHRC to BYUADMIN (16) Date: Thu, 10 Mar 88 17:10:07 EST X-Humanist: Vol. 1 Num. 965 (965) --------------------------------------------------------- Please note that electronic mail formerly sent to Chuck Bush, Randall Jones, Kim Smith, Mel Smith or others at BYUHRC should be addressed to BYUADMIN instead. BYUHRC is still a valid node in most routing tables, so mail thus addressed will continue to get through to us for a while longer, but the link will undoubtedly break sooner or later. 
Chuck Bush Humanities Research Center Brigham Young University From: Willard McCarty Subject: Test of the DISTRIBUTE function (20) Date: Thu, 10 Mar 88 19:51:18 EST X-Humanist: Vol. 1 Num. 966 (966) This is a test of the DISTRIBUTE option of the ListServ software. Would the following HUMANISTs please reply to me directly by returning a copy of this note? Mark Olsen Richard Goerwitz Joel Goldfield anyone in the Xerox group Robert Amsler David Sitman Lou Burnard Thanks very much. If this works our local machine will be much less burdened with our weighty discussions. Willard McCarty mccarty@utorepas From: Willard McCarty David Owen, Philosophy AYI004 at SOTONVM; Hartmut Haberland; CHAA006@VAXB.RHBNC.AC.UK; Walter Piovesan; Sebastian Rahtz; Robin C. Cover; David Nash ATMKO at ASUACAD; KENNEDY@DALAC; krovetz@UMass Subject: Digest: Volume of mail on HUMANIST (206) Date: Thu, 10 Mar 88 20:03:55 EST X-Humanist: Vol. 1 Num. 967 (967) From [name withheld] I'm sorry but I am another one who regretfully must ask that my name be withdrawn from the Humanist address list. I am busily preparing my book on .... I wonder how you can stand it. I think you must be near insanity. --------------------------------------------------------- I agree with Willard that the volume in HUMANIST is getting out of hand; and I only deal with them at the receiving end! I have some sympathy with the user who solved the problem by resigning; I may have to do the same if for no other reason than to prevent my Vax account from continually jamming up against my reasonably generous quota. And that could be a solution to the problem; more HUMANISTS will resign as the flow becomes more unmanageable; it will regulate itself. But it would be a shame to see HUMANIST become limited solely due to its own success. The TeXhax solution is a promising one, but on current form, that would mean receiving bundles of over 100 messages twice a week. Still, if they had tables of contents, etc.
Another solution would be to have several lists on different topics, but that would be unwieldy, a lot of work and ad hoc. I can't help but think the fileserver is the answer. Perhaps all discussion of a continuing topic could be dumped in a file in the server, and updated twice a week. Summaries could be posted in the normal HUMANIST, as would messages on new topics. Would this be a lot more work, Willard? And if we could remotely search the server files... Anybody out there used LDBASE? --------------------------------------------------------- Happy anarchy is no more! Kropotkin slinks back to his cave. The awful spectre 'professional' begins to uncoil; Pavlov shrieks with delight. Do something. Help. I feel equally uneasy about the proposals - if no-one else will write an EXEC, I will, if that's the only objection to digests. I imagine it'd be quite easy? what are your specs? I for one want the anarchy to continue; there is a compromise solution which is to have some items in a weekly digest and others on demand - but it all means more work for you. Thus the 'issue of the day' goes on as before, but there is a weekly digest of news and questions about anything. how does that sound? or what about 2 levels of HUMANIST, a 'news only' subscription, and a 'full' subscription? whatever you do isn't going to satisfy everyone though. ah well! --------------------------------------------------------- This may be my last contribution to Humanist. Although I enjoy it thoroughly, I spent almost an hour this morning reading, discarding, forwarding items I got through the net. This is simply more time than I can spare, and reading the message of the poor person who found 71 items in the mail after only one day's absence from the terminal gave me the shivers. It is great to look into one's pigeonhole in the morning and you see `030 RDR' or `027' RDR etc., all that mail, and is it really meant for you? but then the difficult bit comes: to sort 'em out.
--------------------------------------------------------- Re. Willard's suggestion that we stick to a single topic until it's exhausted: I do not think it a good idea, nor do I think it will work in practice ... how will one tell that a discussion is exhausted? Non-receipt of HUMANIST mail could simply indicate a failure in the mail distribution system, rather than an indication that a discussion has run out of steam. Even if correctly perceived, multiple HUMANISTs, all of whom had been waiting for an opportunity to launch a new hobby-horse, might then leap into the fray, starting multiple concurrent discussions until a consensus emerged on which was the "current" topic. --------------------------------------------------------- I find myself in the same boat as the two members of HUMANIST who have complained about the load of messages coming across their electronic desks. I will have to resign if the volume is not decreased to a manageable level. My reason for wanting to participate in HUMANIST was to monitor activities in the area of creation, management, and use of Machine-Readable Textual Files. For the most part the discussions have been, in my opinion, a bit too chatty. Perhaps messages can be edited and batched out in weekly or monthly "volumes". --------------------------------------------------------- Someone else said to me today that HUMANIST is just too much to cope with. I think that there are only two solutions: a) a digest, as Dominik W. suggested. I agree with him that it is much less forbidding. b) interest groups, with mail tagged for sending to one of, say, 10 groups. This would require users to tag their mail, but so much IS of general interest. I voted for the digest approach before and I vote for it again. If necessary you could appoint subject subeditors, who get the whole thing and extract material to mail to people who only want to receive what is strictly relevant to their discipline.
This is sent to you, but maybe you could include it in an editorial when you get more responses. I find my small archaeology mailing list quite enough trouble; I am impressed by how well you cope! and with jokes too, that's the best bit --------------------------------------------------------- not to clog already overly-talkish HUMANIST... Suggestions which would increase your workload are not welcome, I know. I see the following solutions as superior to the current state of affairs: (a) bundle/digest 10-15 HUMANIST postings, or perhaps each day's worth, into a single file and send it instead of the 8-12 per day. The latter is expensive (I use a VM/CMS "receive" command exec, and it costs a lot to "receive" small files); it would be a lot easier to read a group of HUMANIST postings at one time than to sort the virtual reader repeatedly, as I suspect many have to do. (b) IF you could automate a process to put an "index"/"contents" at the head of each digest, that would be great...but not necessary. If the exec to do this is not too bad, then perhaps the index could indicate the line-number at which each new posting begins. Most of us have line-oriented editors available, I think, and could immediately "goto" selected submissions, as dictated by the subject-line indexed at the top. (c) I agree that 12-28 HUMANIST mailpieces per day is too much...I was already contemplating downloading the log once per week if this traffic keeps up. There IS some chatter, in my judgment...I think everyone would respond to pleas for self-control. --------------------------------------------------------- Does reading the Daily Paper bother people as much as reading the voluminous messages on the Humanist? Why not apply the same principles to reading the messages here as you use when reading the Daily: 1) Read the headlines. 2) Read the leading lines of interesting stories. 3) Save a few memorable items for your scrapbook. 4) Use the rest to line your kittylitter box, or wipe up the spilled coffee.
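The digest-with-contents idea in suggestion (b) above is simple to prototype. Here is a minimal sketch, in Python rather than a CMS EXEC, with an invented message format and function name: it bundles a batch of postings into one file, headed by a table of contents giving the 1-based line number at which each posting begins.

```python
# Sketch of the proposed digest format: postings bundled into one file,
# with a table of contents listing the line where each posting starts.
# The separator format and function name are invented for illustration.

def build_digest(postings):
    """postings: list of (subject, body) pairs -> digest text."""
    starts, chunks = [], []
    cursor = len(postings) + 3        # first separator line comes after
                                      # the "Contents:" header, one TOC
                                      # entry per posting, and a blank
    for subject, body in postings:
        starts.append(cursor)
        block = ["--- %s ---" % subject] + body.splitlines() + [""]
        chunks.append("\n".join(block))
        cursor += len(block)
    toc = ["Contents:"] + [
        "  line %4d  %s" % (n, subj)
        for n, (subj, _) in zip(starts, postings)
    ] + [""]
    return "\n".join(toc + chunks)
```

A line-oriented editor's "goto" command could then jump straight to the line number named in the contents, as the contributor suggests.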
--------------------------------------------------------- I'd be in favour of digesting. (Gnu Emacs has an RMAIL "undigestify" command (not that I'll have the pleasure of Gnu Emacs in a few weeks...), but with digested news I get from other sources I find I don't usually use that command.) I agree: I'd vote against subdividing prematurely. Staying on a single topic -- no. The speech-act of "interruption" is quite different from that in a seminar room -- and I welcome interruptions here. But the point you may like to quote me on is: how much of the recent discussion would I have recommended my fellow countrymen to pay 2c/line for?? Precious little of the stuff about which language to use, for instance. Keep up the good work! -DGN --------------------------------------------------------- Don't change HUMANIST. People can either wade through entries one at a time or use the magical PURGE RDR command to clear out the mailbox. Why is there such a stink about this anyway? - Mark Olsen --------------------------------------------------------- I've only recently become a subscriber to HUMANIST, but I see the problem about the volume of mail. The problem is perhaps really associated with the fact that HUMANIST functions as a mail list (simple!). Possibly it needs to be arranged like the special interest groups on USENET, as a bulletin board that people can tune into, instead of having all postings come directly to each and every mailbox. One way of doing this would be to have institutions be subscribing members, so that the mainframe computer maintained a week-long listing of postings and then cleared them out for the next week. HUMANIST headquarters could continue to maintain archive files and so on. --------------------------------------------------------- I also feel saturated by the volume of mail HUMANIST generates and have been tempted to unsubscribe. It would certainly be helpful to put the messages into a digest format rather than redistributing every message.
The messages could also be organized into groups, so an entire section could be skipped if desired. I don't want to send this message to everyone, but I just wanted to add my vote for a digest format. I also don't think the digest should be sent out more than once per day. -bob From: Willard McCarty, R.J.Hare@EDINBURGH.AC.UK (R.J. Hare), A.C.Boyd @ uk.ac.edinburgh, dow@husc6.BITNET (Dominik Wujastyk), Sebastian Rahtz, Jeffrey William Gillette, fillmore%cogsci.Berkeley.EDU@jade.berkeley.edu (Charles J. Fillmore) Subject: Digest: The Postcard Question (164); Re: Postcards; Appeal? Date: Thu, 10 Mar 88 20:17:07 EST; 10 Mar 88 09:26:12 gmt; 09 Mar 88 15:47:50 gmt; 09 Mar 88 14:45:38 gmt; Wed, 9 Mar 88 10:36:44 EST; Wed, 9 Mar 88 16:55:47 GMT; Wed, 09 Mar 88 20:17:40 EST; Thu, 10 Mar 88 12:04:27 PST X-Humanist: Vol. 1 Num. 968 (968) -------------------------------------------------- --- Forwarded message: Well, that was my sort of thought, but we have some pretty suspicious people up here. I append a message sent earlier which seems pretty conclusive. (Message 50) I'm not so sure now that the appeal in GENERAL BB is a hoax. A similar message was broadcast today on the LE.VAX NEWS (equivalent to ALERT) system. I warned the LE.VAX people that it might be a hoax, but one of them, Richard Mobbs (RJM@LE.VAX), assured me that someone had checked with the Luton number and found it genuine. BUT, there is an address change. Of course, it could all be an elaborate hoax-within-a-hoax, but April 1st is still weeks away! Chris Boyd. This is the revised NEWS item from LE.VAX: >A_Genuine_Appeal > > 9th March 1988 > > > I have received the following genuine message. If you feel you can > help, please do so: > > > David is a 7 year old boy who is dying from Cancer. > > Before he does, he has a dream of one day being in the Guinness Book > of Records for the person who has had the most postcards sent to them.
> > If you would like to help David achieve his dream, all you have to do > is send a postcard to David as soon as possible. > > Send to: > See Below > > > Don't forget to sign your name > > Postscript: > > Someone has spoken to the police station at Luton. The > appeal is genuine but the address given earlier was > wrong. Apparently David has received enough postcards to > get into the record book, but any further donations of > cards will be welcome to Birmingham Childrens Hospital > to make Davids place in the book very secure (David is > still alive at the moment). The address is: > > "David" > Birmingham Childrens Hospital > c/o 6 Hillside Drive > Streetly > Sutton Coldfield > West Midlands > --- End of forwarded message --------------------------------------------------------------- Would one of our English members please phone up St Martin de Porres Infant School in Luton and check out the story for us? Dominik ------------------------------------------------------------------------ By now you have probably been deluged with similar notes, but I'll pass along what I have heard. About a month ago National Public Radio did a rather long feature on the young boy in Scotland who was terminally ill, and whose last wish was to make the book of records for receiving the largest number of postcards. In brief, the chap who first brought the story to public notice claims that he heard it by word of mouth. Being a ham radio operator, he advertised the young boy's wish internationally. The chap claims he never met the boy. I believe he claimed that he tried to track down the boy (whose town was some miles away), but to no avail. The postmaster of the little town claimed to know nothing of the boy's existence, and was pleading with the world to stop sending postcards. A hoax? Draw your own conclusions, I suppose. ------------------------------------------------------------------------ The hoax required the perpetrator's village to expand its postal facilities by some huge multiple. Is this a new one? Fillmore From: Willard McCarty, CHAA006@VAXA.RHBNC.AC.UK Subject: Digest: Volume of mail on HUMANIST &c. (314) Date: Fri, 11 Mar 88 22:16:53 EST; 11-MAR-1988 13:12:47 GMT; Fri, 11 Mar 88 08:11:52 EST From Dr Abigail Ann Young; Fri, 11 Mar 88 12:32:10 GMT From AYI017@IBM.SOUTHAMPTON.AC.UK (Brendan O'Flaherty); Fri, 11 Mar 88 13:15:17 IST From David Sitman; Fri, 11 Mar 88 10:21:52 GMT From AYI004@IBM.SOUTHAMPTON.AC.UK; Thu, 10 Mar 88 20:53 EST From; Fri, 11 Mar 88 17:29 EST From; Fri, 11 Mar 88 08:49:13 PST From "John J Hughes"; 10-MAR-1988 18:53:24 GMT From ARCHIVE@VAX.OXFORD.AC.UK X-Humanist: Vol. 1 Num.
969 (969) ------------------------------------------------------------------------ Although I have no objection to a digest format (which, apart from simply leaving things alone, I prefer to all the other suggestions), I must raise a personal objection to a TeX-hax-style index at the start. The following may be of more interest to a psychologist than to fellow Humanists, but I do feel the point is worth making :- When I log in of a morning, I am invariably told "You have n new mail messages", where n is some positive integer, generally less than 100 and frequently less than 10. Depending on how busy I am, and on how many new messages are waiting, I then either (a) type "Directory" (to Mail), if I am very busy or n is large, or (b) simply read each mail message in turn, if neither of the preceding constraints obtains. However, I derive more pleasure from style (b) of operation than style (a), in that each new mail message is a surprise, a present, of which I have no advance knowledge. When I read TeX-hax (which, like Humanist, I field through the MRL/PFC `Bulletin' utility), I have no choice: whether I am busy or not, the first thing I am presented with is a list of the topics to be discussed in that day's mailing. Now, I can try not to read the list, if I want the surprise of each message in turn, but that turns out to be surprisingly difficult, and much of the pleasure is lost, because each new message is no longer a total surprise.
I can see that, if so many people seem to feel strongly about it, we do need to distribute in a more efficient way, i.e., in batches like TeXHaX, whose editor does not, as far as I can tell, actually 'edit' the submissions, except to post very long ones to a fileserver and include only a short 'pointer' in the digest. It's never taken me that long to comb through my messages in the morning, but I confess that I only read the ones on subjects of interest: others get skimmed at best and then discarded..... Perhaps the current revolt of the masses is as much a response to the quality of our topics over the last few months as it is to the volume of discussion! Abigail Young young at utorepas ------------------------------------------------------------------------ I must have been away at the same time as the other Humanist who found 60+ files waiting in the reader on return, because I found the same. I also got a memo from the Computing Service about the limited capacity of the spool area of the local IBM mainframe, asking me to show restraint in my use of the reader. I vote for the daily (or whatever) digest, preferably with a table of contents, for browsing at leisure. Brendan. ------------------------------------------------------------------------ I would like to cast my vote for continuing with HUMANIST in its present configuration. I find it far easier to sift through a lot of small messages than to go through a huge biweekly "digest". It frustrates me to have to find space to save a 700-line digest because there is one 25-line item that interests me. I think that the key to HUMANIST's continued success is self-restraint, rather than imposed guidelines. I found the "postcard" messages totally inappropriate to HUMANIST, but I found an easy solution: each time I saw a "postcard" subject, I deleted the mail immediately. All told, I probably wasted 45 seconds (about the amount of time you're wasting to read this).
Instead of spinning off an array of related lists, it's far more natural for a group of HUMANISTs to move to private correspondence as soon as an issue becomes too specific or esoteric for "mainstream" HUMANIST. For those of you who are afraid of deleting or missing something of importance, remember that a copy of every HUMANIST message is archived at UTORONTO (Willard mentioned this the other day). If the LISTSERV DATABASE facility can be installed at UTORONTO, then we will have an efficient way of searching those archives (that's a hint to Willard and Steve Younker). For better or for worse, it seems that the topic most discussed on the HUMANIST list is the HUMANIST list itself (but I would never stoop so low). David ------------------------------------------------------------------------ Discard this file without reading unless you are a serious person. Digesting the daily mass of green (on my monitor) pulp? Complaining about the information overload is a daily pleasure. It takes about 2 seconds to delete a file - as soon as a deadly significant title or personality scrolls up. How else can the lonely deskbound academic make a comment about the world? I've got an Brian Molyneaux (AYI004@uk.ac.soton.ibm) ------------------------------------------------------------------------ From "Michael Sperberg-McQueen" The first digests came through this morning, and I found them good. It won't solve the space problem for the Vax users, perhaps, but by reducing the flow it will make it more manageable. I am struck by how similar our experience has been to that reported in 'The Mythical Man-Month' by Frederick Brooks: in developing System/360, they eventually found that the ease with which computers allowed them to change their specs led to chaos. To control the chaos they arbitrarily 'quantized' the changes: specs were changed only once every six months. Shame about the volume problem: I've always rather *liked* the chatty tone of Humanist.
Michael ------------------------------------------------------------------------ Date Fri, 11 Mar 88 06:35:13 CST From Richard Goerwitz I don't mind the volume of mail on Hum.; I just make sure to be quick to delete messages that don't interest me. It takes me no more than fifteen minutes a day to read what I want/need, and log out. If the volume is oppressing you (since you must read everything), or it seems appropriate, please feel free to discard any messages of mine. That's what a monitor is for! -Richard L. Goerwitz goer@sophist.uchicago.edu !ihnp4!gargoyle!sophist!goer ------------------------------------------------------------------------ I am impressed, but also overwhelmed, at the volume of activity on HUMANIST. Can't some means be found to provide only a digest or topic list and then allow users to request sets of messages in which they are interested? 83 messages per day is overwhelming. Doug Davis ------------------------------------------------------------------------ I would like to cast a vote against digesting. For those of us using mail readers, rather than readerlist, the subject line provides adequate warning of what to kill, what to read later, and the like. In addition, for those of us who read or skim everything (just in case), mailers let you decide whether to continue reading or discard the message, with the choice of typing either a d or a carriage return. How simple! Of course, having also used VM, I know how hard it is to quickly zip through your inbox with readerlist. Perhaps there could be two lists: one automatically spraying out messages sent to it, the other digested by your program from the messages of the first. It seems to me that depending on your software, choosing either one of the two alternatives will inconvenience somebody. I don't find the volume of mail to be too large for comfort; in fact I find that Humanist postings are generally a bright point in my day.
However, some discussions (such as the endless programming language one) probably need a little restraint. In general, a suggestion that a discussion might better continue by private mail, and then be summarized or digested to the list later on, can prevent a lot of the duplication that can occur. A little self-restraint can also help. I think most discussions of programming language virtues are pretty useless for a number of reasons, but I refrained from making that assertion in order to avoid feeding the fire of discussion on a topic I was tired of. We can all apply this technique. Anyhow, this started out small and ended up long. Feel free to quote or summarize me if I said anything good, or let it all ride if you find nothing new. David G. Durand Brandeis University ------------------------------------------------------------------------ Two thoughts about making HUMANIST more humane. (1) Offer HUMANISTs two ways of receiving HUMANIST information: A. Everything--all messages. B. Only messages in user-specified interest areas. Currently, every HUMANIST receives everything. But why not allow HUMANISTs to decide what they want to receive? For example, if HUMANISTs were restricted to one topic per message (with no limit on the number of messages that could be sent), and if each message had a sender-designated topical header (e.g., HYPERTEXT, TEXT RETRIEVAL, OS/2), and if HUMANISTs had the _option_ of asking only to receive messages on certain topics, and if your software is or could be made smart enough to recognize topical headers and to send individual HUMANISTs only those messages that fall within their specified topical areas, then those of us who are not interested in learning about `postholes' or why we should learn Danish, for example (no offense meant!), would not have to spend time reading and/or printing such messages.
Furthermore, if HUMANIST had on-line user-profile files that HUMANISTs could edit, then users could freely update or change their specified areas of interest. (2) Here is a more radical idea that does _not preclude_ continuing on with (1) A or adding (1) B. That is, (1) A & B and this idea, (2), could all be used concurrently. They're not mutually exclusive. Instead of the fileserver at UToronto sending HUMANIST messages to HUMANISTs, why not give HUMANISTs the _option_ of accessing the UToronto fileserver for HUMANIST messages? In other words, this would be a "don't-call-us, we'll-call-you" option. Here is a modification of that idea that appeals to me. Why not have a _version_ of HUMANIST that operates like a bulletin board or like a dial-up conferencing system (e.g., CAUCUS)? That would allow HUMANISTs to access a structured system in which messages and information were divided by topic. After logging on to such a system, HUMANISTs could select the topical areas in which they wished to read or post messages. Each topical area could have an on-line local or remote "manager" (pick any title) who was responsible for overseeing the contents of his or her area(s). This would take some of the work off Willard. I don't believe such a system would diminish the usefulness of HUMANIST to those who prefer receiving information in a more structured and selective fashion. And separating discussions into topical areas would, I believe, facilitate _better_ discussions. It would help to keep them more focused and more structured, and it would allow HUMANISTs to see and interact with a whole discussion--from its inception to the present--instead of getting a message here and there, now and then, on a given topic of interest and trying to save and file such messages in a coherent fashion, which is the way HUMANIST now works.
Right now, HUMANIST seems like a "one-room" discussion group that has grown so large that some participants are walking away because of the volume of information they are being asked to process--too many voices talking about too many things all at the same time in one room. By having a version of HUMANIST that functioned as a "multi-room" set of somewhat more structured discussion groups, and by allowing HUMANISTs to move from "room" to "room," you might help end some of the frustration of information overload without diminishing the usefulness of HUMANIST. Large "multi-room" conferencing systems like BIX and the conferences on CompuServe function as useful sources of information without inundating their users with all sorts of stuff they may not be interested in. And HUMANISTs interested in "direct contact" with other HUMANISTs can always e-mail them on BITNET, NetNorth, EARN, etc. I hope this is not just one more piece of "junk mail" that will precipitate some HUMANIST into writing and asking to be removed from the fileserver's list! John John J. Hughes XB.J24@Stanford ------------------------------------------------------------------------ Date Fri, 11 Mar 88 10:07:23 EST From elli@husc6.BITNET (Elli Mylonas) Please don't make humanist into digests! I appreciate the problems of those who have to use rdrl in CMS, and who don't see headers, but that is what headers are for. If you are not interested in the topic, then discard it! Digest form essentially forces one to accept or discard all of the messages in a digest. RISKS and the Mac Digest are good examples of this. Single-message format is a much easier way to sift through postings. I also would cast my vote against Humanist sub-groups, because that too is a de facto exclusion of possibly interested contributors to different topics. Aren't broadness and a wide range of interests that ultimately come together supposed to be characteristic of humanists?
Finally, some guidelines for contributors that may help keep volume down: 1. Some conversations go on a long time, because what is being discussed is a matter of religion, garbed as arguments. An example of this is the programming language discussion. Not much one can do about that, except to exercise self-restraint, or to address oneself to the individuals by mail. 2. If a conversation seems vacuous, better to ignore it than to add one's mite to it, especially if that mite refers to the vacuity of the discussion. That way the conversation will die a natural death. 3. One way to avoid clogging the net is not to talk about what to talk about. 'nuff said. 4. Willard, do you want to step in and call a halt to things when they seem to be going too far? Are we ready to abide by your decisions? ------------------------------------------------------------------------ I vehemently object to the notion of a single subject only! I am sending under separate cover (i.e. to 'umanist 'erself) a note on a topic which appears sporadically in amongst all the tosh about funny Scandinavian accents. How is common sense ever to re-assert itself if we all have to follow whatever lame-brained topic turns up? How indeed are new topics ever to turn up? I think you should leave well alone: Humanist has always been unpredictable. Either that or you will just have to accept the responsibility of saying "Any further messages on topic X will be (a) trashed, (b) trashed unless they're REALLY NEW, or (c) put on hold till the end of the week, when I shall circulate a bumper edition on topic X." From: Willard McCarty Subject: Digest: The TLG & PHI/CCAT Texts (59) Date: Fri, 11 Mar 88 22:30:46 EST; Friday, 11 March 1988 1717-EST From KRAFT@PENNDRLN Subject -- TLG and PHI/CCAT texts and resources X-Humanist: Vol. 1 Num.
970 (970) ------------------------------------------------------------------------ Recent HUMANIST communications from Richard Goerwitz and John Haviland relating to textual data from TLG and/or PHI/CCAT lead me to attempt to clarify some issues: Homer is available from TLG, and is part of the massive collection of data on the TLG CD-ROM "C" (as well as on "A" and "B"). It can also be acquired separately, on tape (from TLG) or diskettes (from CCAT, with TLG permission). CCAT regularly supplies IBM/DOS and MAC diskettes, although not everything can be done promptly! Searching these materials can be done in a variety of ways, including the sophisticated systems on IBYCUS or on IBM/DOS (R. Smith) or on Apples (G. Crane). The TLG coding is complete -- there is nothing "limited" about the diacritics -- even if one wishes to characterize it as "idiosyncratic" (I prefer "transparent", at least for the transliteration scheme). As for the biblical materials widely circulated from CCAT, we regularly supply programs to permit the user to reformat the TLG beta ID coding into explicit coding for each line, and to permit the user to select how much of the complex Hebrew data is wanted (e.g. omit cantillation, omit vowels and cantillation). These programs are on a "utilities" diskette that is provided free with diskette orders (Richard got the material on tape, and may not have known about the utilities). Source code is included with the programs, with all the necessary code to convert the TLG beta ID format into explicit (book, chapter, verse) references. These programs also would be useful for the Homer and other TLG texts. 
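For readers unfamiliar with the mechanism Kraft describes, the conversion from running citation coding to explicit references can be illustrated with a toy sketch. The marker syntax below ("%n" for book, "#n" for chapter, "@n" for verse) is invented for this illustration; the real TLG beta ID coding is a compact scheme and considerably more involved, so this shows only the idea of carrying citation state from line to line.

```python
# Toy version of the conversion Kraft describes: expand compact running
# citation markers into an explicit (book, chapter, verse) reference on
# every line.  Marker syntax is invented: "%n" book, "#n" chapter,
# "@n" verse; an unmarked line is the next verse.

def expand_citations(lines):
    """Carry citation state across lines; each line becomes one verse."""
    book, chapter, verse = 1, 1, 0
    out = []
    for raw in lines:
        text, set_verse = raw, False
        while text and text[0] in "%#@":
            tag, i = text[0], 1
            while i < len(text) and text[i].isdigit():
                i += 1
            n = int(text[1:i])
            text = text[i:].lstrip()
            if tag == "%":                 # new book resets chapter, verse
                book, chapter, verse, set_verse = n, 1, 1, True
            elif tag == "#":               # new chapter resets verse
                chapter, verse, set_verse = n, 1, True
            else:                          # "@" sets the verse explicitly
                verse, set_verse = n, True
        if not set_verse:                  # unmarked line: advance verse
            verse += 1
        out.append("%d.%d.%d %s" % (book, chapter, verse, text))
    return out
```

The same state-carrying logic, applied to the real ID fields, is what lets a utilities diskette turn beta-coded text into explicit (book, chapter, verse) references.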
Bob Kraft (CCAT) ------------------------------------------------------------------------ Date Fri, 11 Mar 88 08:08:35 PST From 6500rms@UCSBUXA.BITNET Subject -- TLG beta-code use As part of our TLG search software, I have source code modules which extract small passages from TLG texts and convert the beta-code markings into normal references, e.g., Plato: 17 b 3. In addition, I have written a conversion program which then prints that raw beta-code text in fully accented Greek on EGA; the output can be used with the BYU Concordance Software, on a Toshiba P351 (you need both downloadable fonts, so a 321 might not work), or imported into a Nota Bene SLS file. These modules are written in C, and they would need to be combined into a stand-alone program. If anyone is interested, let me know. I might be able to ARC the relevant modules together and place them on the fileserver. Randall M. Smith (6500rms@ucsbuxa.bitnet) From: Willard McCarty Subject: Languages on HUMANIST, &c. (332); Digest: Databases, DBMS, Mark-up, &c. (148) Date: Fri, 11 Mar 88 22:34:45 EST; Thu, 10 Mar 88 10:06 CST From; 09 Mar 88 09:20:26 gmt From R.J.Hare@EDINBURGH.AC.UK Subject -- Languages, Volume of HUMANIST mail (about 12 lines); 9-MAR-1988 16:51:31 GMT From CHAA006@VAXB.RHBNC.AC.UK Subject -- Re: ISO 88xx (multilingual, eight-bit ASCII); Wed, 9 Mar 88 10:21:56 PST From Laine Ruus Subject -- Vox populi....? (8 lines); Wed, 09 Mar 88 09:39:35 DNT From Hans Joergen Marker; Thu, 10 Mar 88 09:21:10 PST From cbf%faulhaber.Berkeley.EDU@jade.berkeley.edu (Charles Faulhaber) Subject -- Re: America the Golden (17 lines); Fri, 11 Mar 88 22:43:10 EST X-Humanist: Vol. 1 Num. 971 (971) ------------------------------------------------------------------------ LANGUAGE(s) In fact, the only language we all speak on the network is ASCII.
When the discussion of the natural languages spoken by members of the HUMANIST network bears only on the fact that "you speak the language you feel comfortable with, or the one that your receivers will understand", or that "Humanists, by nature or by culture, understand more than one language", nothing new is added to common knowledge. We simply transfer to computing what has already been agreed on since the Middle Ages.... The new fact to be addressed is natural languages mediated by computers, which brings us back to ASCII. As things now stand, French, for example, is not "speakable" (pardon me for the barbarism) on the network. The use of meta-diacritics or false diacritics like [d/'ej/`a], or [d'ej`a], or any possible combinations (cf. TeX and LaTeX) other than the pure and simple accents is something that a French user cannot put up with. What is the point of transliterating your own language for someone who will un-transliterate it at the end of the communication? The same goes for languages with a similar structure. Then why not drop the accents? In fact, that is the solution I prefer when I write in French on BITNET. The problem is: IT IS NOT A SOLUTION: it is an adjustment, deeply unsatisfactory. Just write the simplest note in French to another francophone user and you will see how many times you will be tempted to use brackets to avoid confusion, double-entendre or obscenity... Then the language you feel "comfortable" with on the network will have to be one that ASCII, in its short version, supports. Other attempts simply lead to some kind of pidgin or creole.... TOO MUCH MAIL? 1) when postal services came into effective action (it has not happened yet in Canada!) I am sure that the Humanists of that time complained about getting too many missives on their desks... 2) why not let it go for a while before trying to impose a more rigid regulation? 3) in the long term, I find that the idea of a compendium of the mailed communications is very interesting.
BUT, one factor that I like about HUMANIST is the flexibility, the promptness of intervention, and all that which is precisely related to the new medium: the computer. Would that be lost in the process of getting all the material together? 4) Hints: extensive use of Delete (everybody knows that) and Print for the more interesting (or extensive [no correlation]) communications. You may then take the printouts with you to the beach and have a drink while parsing through your mail... 5) and thanks to Willard McCarty. What would it be if you had to run all over the world to deliver the mail? Jacques Julien. ------------------------------------------------------------------------ 1) If HUMANIST is to use a common(ish) language other than English (American) it seems to me that the Esperanto suggestion is a good one - no accents or diacritics, and a well defined structure (like at least some programming languages?). Also it would give me an excuse to dust off my books on the subject... 2) Re the volume of mail on HUMANIST. Like some of the recent correspondents, I am beginning to find the volume of mail a little bit OTT, and I too am having to delete messages after the first screenful, if nothing interesting is said. An additional problem here at Edinburgh (which may be common to other sites) is that we have a time/size limit on our HUMANIST mailbox of 6 months/200 messages. Currently, we are cycling through our 200 messages in about three weeks - this makes life extremely difficult for anyone who can't look at the board for that long because of absences. Roger Hare. ------------------------------------------------------------------------ Kevin Donnelly asks "Is anyone out there using the new ISO standard, IS 8859/1" I'm not, and I confess I hadn't heard of it. Does it replace ISO (DIS) 6937, or augment it? (ISO 6937 had space for sixteen non-spacing diacritics, of which thirteen were pre-allocated at the DIS stage). ** Phil.
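Phil's question about the relation between ISO 6937 and IS 8859/1 turns on one design difference: ISO 6937 codes an accented letter as a non-spacing diacritic byte followed by the base letter, whereas IS 8859/1 assigns each accented letter a single precomposed byte. A minimal converter between the two conventions can be sketched in Python; the specific byte values and the fallback behaviour below are my assumptions, drawn from memory of the standards rather than from their text:

```python
# Sketch: fold ISO 6937-style text (non-spacing diacritic byte
# *precedes* the base letter) into ISO 8859/1, where each accented
# letter is one precomposed byte.  Byte values are assumptions and
# should be checked against the standards themselves.

# A few ISO 6937 non-spacing diacritics (shared with T.61 teletex):
GRAVE, ACUTE, CIRCUMFLEX, DIAERESIS = 0xC1, 0xC2, 0xC3, 0xC8

# (diacritic, base letter) -> ISO 8859/1 precomposed byte
PRECOMPOSED = {
    (GRAVE, ord('a')): 0xE0,       # a-grave
    (GRAVE, ord('e')): 0xE8,       # e-grave
    (ACUTE, ord('e')): 0xE9,       # e-acute
    (CIRCUMFLEX, ord('e')): 0xEA,  # e-circumflex
    (DIAERESIS, ord('u')): 0xFC,   # u-umlaut
}

def iso6937_to_8859_1(data: bytes) -> bytes:
    """Fold two-byte diacritic+letter sequences into single bytes."""
    out = bytearray()
    i = 0
    while i < len(data):
        b = data[i]
        if 0xC1 <= b <= 0xCF and i + 1 < len(data):
            combined = PRECOMPOSED.get((b, data[i + 1]))
            if combined is not None:
                out.append(combined)
            else:
                # Unknown combination: drop the diacritic, keep the letter.
                out.append(data[i + 1])
            i += 2
        else:
            out.append(b)
            i += 1
    return bytes(out)

# "deja" with accents, in 6937-style coding: d, acute+e, j, grave+a
raw = bytes([ord('d'), ACUTE, ord('e'), ord('j'), GRAVE, ord('a')])
print(iso6937_to_8859_1(raw).decode('latin-1'))
```

The awkwardness is visible even in this toy: a program reading ISO 6937 text can no longer assume one byte per character, while the 8859/1 output restores that assumption.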
------------------------------------------------------------------------ [Translated from Swedish:] We are all adults here, after all, and therefore not so blue-eyed as to believe that Vox populi is what rules the world. 'Vox', certainly, but the vox of the minority, not of the silent majority. The object of communication is to convey information. If you communicate with a deaf-mute, you do it in a way the deaf-mute understands. If you communicate with a group whose only common language is English, then........ ------------------------------------------------------------------------ Subject -- A provocation [Translated from Danish:] Reply to Birgitte Olander: It is probably doubtful whether HUMANIST is the best suited medium for a Nordic discussion group. One drawback, in any case, is that the Scandinavian letters are rendered so miserably in transmission. (For example, I receive your } (aa) as an o with an accent and your % (oe) as an exclamation mark.) Regards, Hans J%rgen Marker. ------------------------------------------------------------------------ The number of hispanists who are sophisticated enough to know about and use electronic mail can probably be counted on the fingers of one hand; and as far as I know I am the only one who uses Humanist. [Translated from Spanish:] But if you would like me to post my notes in Spanish, I would be delighted to. For now, though, I don't see the value of it. Charles B. Faulhaber Department of Spanish UC Berkeley CA 94720 And there was I thinking that America had a wider racial and linguistic mix than Norfolk! Would Richard Goerwitz care to comment on the complete lack of Spanish contributions to Humanist from his country? On his side of the Atlantic there are far more Spanish speakers than on ours! Sebastian Rahtz My posting about Americans was intended to allay fears that we American Humanists were interested in seeing American English become a kind of standard. If you are curious why Spanish is not used in this country, the reason should be quite obvious. Why did the French end up speaking a dialect of Latin instead of Gallic?
Clearly, Latin was the prestige language in the Western Mediterranean at that time. This is not to say that Latin is better than Gallic. Likewise, I am not making the absurd claim that Spanish culture is "lower" in some sense than American. What I am saying is that we have a lot of poorly-educated Spanish-speaking people coming into this country whose dialects would not generally be useful to emulate. Also, they and their countries of origin are often militarily and economically dependent on the U.S., again reinforcing the perception that they should be learning English rather than the reverse. This is not an excuse for the failures of American economic policy in Central and South America. It is simply an explanation for why Spanish is not well known. I am sure that if Mexico had a rich economy, was producing all kinds of scholarly literature, and generally looked like a desirable culture, Americans would learn its language. Now, a final word about the racial makeup of America. Indeed, it is diverse. But the reason for our lack of diversity in language should be obvious. When my ancestors came over here, they were not rich folk. So, needing work, they could not force their native tongue on Americans. Moreover, once here, they needed a common language with which to communicate with various other ethnic groups, as well as with the natives. English became the natural choice. Again, if my ancestors (Prussians, so watch out :-)) had come into a state like, say, Czechoslovakia, which has a German-speaking "area", and lies near several German-speaking countries, they might have found it more useful to retain their dialect. They did not, and so naturally they learned English. They even gave up speaking German in the home, since they could see no basic need for it; in fact, they felt strongly that the children should be fully prepared to integrate into the mainstream of American culture.
But to return to the point, American humanists, in spite of the monolinguality of their society, DO find it useful to know foreign languages. And, in spite of the extreme difficulty of getting exposure to these languages here, they generally learn them anyway. In sum, your American correspondents on the HUMANIST do not wish to impose American English on you! Quite the contrary! Write in any language you please! And please, those out there who would like to harp on American linguistic parochialism, please don't view this as some great moral failure. Our monolinguality is merely a product of the external conditions in which we find ourselves. If you folks had been placed in a similar situation, you would have acted in exactly the same way. For this I have superlatively convincing proof: Us (we are your relatives, after all - not all of us "black sheep"!). :-) -Richard L. Goerwitz goer@sophist.uchicago.edu !ihnp4!gargoyle!sophist!goer ------------------------------------------------------------------------ Date 11 March 1988, 15:17:51 EST From JLD1@PHX.CAM.AC.UK Subject -- Accent coding (20 lines)

Suggestions for accent encoding. An accent follows the letter to which it belongs.

   acute <          grave >            umlaut |
   circumf. ^       cedilla ~          tilde ~
   hachek {v}       breve {u}          ring {o}
   macron {-}       dot over {:}       dot under {.}
   ogonek {;}       comma {,}          bar under {=}
   hook under {h}   double acute {<<}

Special letters and signs:

   German ss {ss}   Spanish ? {?}      Spanish exclam. {!}
   Paragraph {P}    Section {S}        Copyright {C}
   Trademark {T}    Registered {R}     degree {n}
   dotless i {i}    crossed d d~       crossed D D~
   Polish l l~      Polish L L~        small eth d<
   Scand o o{/}     Scand O O{/}       Dutch ij i+j
   ae diphth a+e    oe diphth o+e      thorn t+h
   AE diphth A+E    OE diphth O+E      cap. thorn T+H
   yogh {3}         cap. yogh {C3}
   wyn {w}          cap. wyn {Cw}

Font changes:

   open or close italic      \ or _
   open or close bold        \\ or __
   open or close bold ital   \\\ or ___
   open/close Greek          [g[ ]g]
   open/close Hebrew         [h[ ]h]
   open/close Cyrillic       [c[ ]c]
   open/close small size     [s[ ]s]
   open/close superscript    {^ ^}
   open/close subscript      {| |}
   open single quote         {' or `
   close sing. quote         '} or '
   open double quote         {" or ``
   close doub. quote         "} or "

Conventions for representing the characters used in accents, etc.:

   less than {<}    greater than {>}   hat {^}
   vert. bar {|}    twiddle {~}        underline sign {_}
   grave sign {`}   plus sign {+}      backslash {\}
   open curly {{}   close curly {}}    plus-or-minus {+-}

We've been using these conventions for some years, and most users seem happy with them. They use only the ASCII character set, so they are possible from most keyboards and on most VDUs, and print on most modern lineprinters. John Dawson JLD1 @ uk.ac.cam.phx ------------------------------------------------------------------------ Date Thu, 10 Mar 88 16:18:50 CST From D106GFS@UTARLVM1 A potpourri of minor comments: 1) w.r.t. Jim Coombs re. retrieving 'slang' words from an online 2) w.r.t. whether an em-dash is punctuational, presentational, or descriptive markup: I would say that any element can be assigned to any category, depending upon interface. If I type two hyphens, I am typing what we traditionally call "punctuation", but the program may know to store "oh yes, that's one of them m-dash thingies", or store some number of hyphens, or store "" or some such. As long as the mapping from input to semantics is one-to-one, the advantages of descriptive markup are kept available. It doesn't matter very much to the computer what I type or what is stored. A more challenging example is typing the letter sigma in Greek. In word-final position it is printed differently than it is elsewhere. So by simply typing 's' for sigma, are we doing descriptive markup?
I would say yes, because we are specifying *the thing that is salient to us as writers*, rather than the thing which is salient to the printer, etc. 3) w.r.t the bulk of HUMANIST: I would prefer a 'digest' form with authors' subject lines extracted to form a table of contents at the top. This could be done by program, I trust. I would also suggest putting a number on each note in the table of contents, and an easily-locatable string at the start of each actual note. For example, one could then get to note 5 by using an editor command to search for "?%" or something. 4) I will send, following this posting, an exec program for CMS which will more conveniently load all of your HUMANIST MAIL, while leaving other things in your in-box (i.e. reader) untouched. It also should be much faster than 'rdrlist' or 'receive'. I place this program in the public domain. Steve DeRose Brown University and SIL ------------------------------------------------------------------------ Date Thu, 10 Mar 88 19:47 EST From Subject -- re: relational DB's and text representation (no SGML) (86 lines) This posting is a response to Jim Coombs's posting on the 8th. My comment is mainly sparked by Jim's assertion that what he wants is "a relational database management system that supports full-text fields." My personal feeling is that relational databases may prove useful in doing the internal bookkeeping for a good text handling system, but that the primitive operations they provide are very clumsy for creating a base for a text handling system. The indexing and retrieval of large numbers (typically > 1000 records) of fixed-format data items is inherently different from keeping track of a relatively small number (typically < 1000 texts) of inconsistently formatted texts. Databases as they currently exist have been very carefully optimized for handling business records, and I think that Jim's follow-up posting demonstrates the difficulties of using such a tool for text management better than I can.
I think that there's a danger in knowing what computers can already do well that can afflict people who are trying either to create new tools or to do things that current tools were not designed for. This danger seems to me to consist in having faith in the extendability of existing methods. Such faith seems well justified, since the methods in use are obviously powerful, and the experts all seem to claim that this is the ``best'' technology available. Unfortunately, the experts are frequently expert in the application of the method they are recommending, and not in the discipline whose problems need to be solved. I think a few experiences of such expert advice are what cause some people to bitterly conclude that ``computer scientists don't know anything.'' This is only half right; usually they just don't know anything about linguistics, or writing scholarly (as opposed to scientific) papers, or whatever else the problem actually is. In fact, I think that scholarly text handling is in many ways a harder problem than database design: keeping track of uniform records is an inherently more structured task than keeping track of the heterogeneous materials and multiple points of view involved in textual analysis. In fact most of the things that people need in text analysis have never been done in anything like an integrated way. I think a lot of experimentation is required before the most useful set of tools for the humanist can be a matter for agreement, and it will probably be years longer before such things are routinely implemented well. Just in case you think I'm a pessimist, I think that much of the work needed for at least some problems is being done now by commercial software houses who are working to solve the problems of organizing notes for use in office work. I think many of these tools will provide parts that can be forced to fit the jobs humanists need to do.
In summary, I think that the focus both in designing new systems and in planning applications of existing technology should be on what we want to do, regardless of what the technology may provide right now. I think I would not have responded to Jim had he said: "What I want to do is this, and I expect that I can do it with a relational database if it will make a concordance of long text fields." When we build things on computers we need to keep a close eye on what the exact purpose is (i.e. not a relational database per se, but something that will do X). I would like to offer a last word on behalf of the computer scientists (as I am one, at least by education): they are often trainable, if you give them lots of information and watch them closely. Also, they frequently know more about the tools than anyone not doing computers professionally has time to learn. Thus they can save the exploration of many blind alleys in technical work, as long as that work is properly directed to the ultimate goal (by the humanist who needs the tool). --- David G. Durand Brandeis University DURAND@BRANDEIS.BITNET ***** Indirectly related strong recommendation for reading material follows ***** For many good ideas, not deriving from the ``conventional wisdom'', Ted Nelson is worth reading. Some of what he proposes seems to me not to be useful, and some not to be feasible, but much of what he says is better thought out than it is presented. Computer Lib/Dream Machines is out in a trade edition by Tempus (Microsoft Press's new imprint) and should be easily orderable from a local bookstore. There is also a new edition of Literary Machines available. Send $25, + $5 for overseas, + $5 for purchase order to: Project Xanadu 8480 Fredricksburg # 138 San Antonio TX 78229 He may be a little crazy, but he's smart and many of his ideas are good. From: Willard McCarty Subject: Query: word-processor conversion software (20) Date: Fri, 11 Mar 88 22:47:58 EST X-Humanist: Vol. 1 Num.
972 (972) ------------------------------------------------------------------------ Date Fri, 11 Mar 88 16:06 EST From Does anyone have experience with commercial software packages intended to convert files from one word-processor format to another? My school is interested in acquiring such a program to convert between the major commercial word-processors in use here, esp. Microsoft Word and WordPerfect. Does anyone know how reliable these programs are? How many are available? Which are best? Any advice would be greatly appreciated. David Carpenter ST_JOSEPH@HVRFORD From: Willard McCarty Subject: Digest: Programming languages &c. (39) Date: Fri, 11 Mar 88 22:51:08 EST X-Humanist: Vol. 1 Num. 973 (973) ------------------------------------------------------------------------ Date Fri, 11 Mar 88 08:53:13 CST From "Eric Johnson Liberal Arts DSC Madison, SD 57042" Subject -- Re: Anaphora in Homeric Greek by computer (35) I can never understand why those interested in complex text analysis do not learn and use SNOBOL4. It is an uncommonly powerful language (it has been called "dangerously powerful") with a wide range of built-in functions for all kinds of string manipulation. There are excellent compilers that run on all common hardware. As another new member of HUMANIST, I also find the voluminous mail very interesting. I think it should be continued in the present form. Eric Johnson ERIC@SDNET ------------------------------------------------------------------------ Date Fri, 11 Mar 88 08:44:21 CST From "Eric Johnson Liberal Arts DSC Madison, SD 57042" Subject -- Re: Computing for humanities students (20) Dakota State College, with its new mission of computer integration in every program, teaches a course in SNOBOL4 and SPITBOL that is required for English majors and commonly taken by all arts and humanities majors. The course is ideal for arts and humanities majors for the reasons given by Susan Hockey (see ICEBOL 86 PROCEEDINGS, pp. 1-25).
For additional information, please contact me: Eric Johnson ERIC@SDNET.BITNET Professor and Head, Liberal Arts Division Dakota State College Madison, South Dakota 57042 (605) 256-5270 From: Willard McCarty Subject: Genealogical software (26) Date: Fri, 11 Mar 88 22:54:26 EST X-Humanist: Vol. 1 Num. 974 (974) ------------------------------------------------------------------------ Date 11-MAR-1988 08:50:14 GMT From S200@CPC865.UEA.AC.UK Genealogical Software There are some booklets in a series called Computers in Genealogy published by, or in association with, The Society of Genealogists, 14 Charterhouse Buildings, LONDON EC1M 7BA, UK. They contain information about computer programs, written commercially and by members of the society, for micros only, and describe users' experiences with both machines and programs. They could be a bit too beginner-ish, as many of the articles are to do with getting started, difficulties with disks, documentation (for the machine), etc., which we (and the person asking the original question) would know how to sort out. However, they do describe the programs available, which could be useful. Pat Newby (P.NEWBY@CPC865.UEA.AC.UK). From: Willard McCarty Subject: Editorial: Digestion (31); Digest: Volume of mail on HUMANIST &c. (121) [with contributions from Roberta Russell (volume of mail on HUMANIST), Robert Amsler, amsler@flash.bellcore.com (Re: Editorial: Digestion), Sterling Bjorndahl, Claremont Grad. School (pro digesting), and Wayne Tosh, English--SCSU, St Cloud, MN 56301 (HUMANIST communications--volume & variety)] Date: Fri, 11 Mar 88 22:56:53 EST X-Humanist: Vol. 1 Num.
975 (975) Dear Colleagues: The latest batch of HUMANIST mail is my first consistent response to the several demands that something be done about the amount of conversation among us. As you'll see from one of the digests, some people rather like the former deluge, and I'm guessing from this that several will not like the digesting of our phantasmagoric plethora into neat and rationalized bundles. There's no pleasing all of the people all of the time, I suppose. Anyhow, all I ask is that you live with this twist of man-made fate for a while, as I will do, and see what it's like. You're all welcome to let me know what you think of it. I seem to be wisely outvoted on the question of simultaneous conversations. Actually, in one respect I have always sided with the majority. As the sorter of marvellous variety, however, I naturally wish for the minimum number of categories per day. No polemics on my part, just the pressing realization of one mortal's limits. I'll stretch 'em, and we'll see what happens. Thank you all for your participation, your interest, and your patience. If I didn't love doing this, I'd have gone nuts long ago, and I guess you're much the same. Willard McCarty mccarty@utorepas [Mail on the subject of mail continues to trickle in, and I continue to think about the problems some people are having with the volume of it. At the moment the two most reasonable possibilities seem to be either to send out a few digests each day sorted roughly by subject OR to send out one digest with everything in it. Michael Sperberg-McQueen has already written software that will clean up the sometimes voluminous headers and produce a table of contents, so in either case IF SENDERS OF MAIL ARE CAREFUL WITH WHAT THEY SAY ON THE SUBJECT LINE then receivers of the digests shouldn't have much difficulty navigating around in them. Please note that it is also important for senders to discuss insofar as possible ONE SUBJECT in each message.
Also note that HUMANIST cannot become a bulletin board or news service in which subscribers declare their interests in particular topics and only receive mail on those topics. Everyone will continue to receive everything. I, for one, think that the character and value of HUMANIST are intimately tied to the commonality of our conversations. One way or another you can expect a drastic reduction in the *number* of messages you receive from HUMANIST each day. The total amount of *storage space* required to hold HUMANIST mail cannot be reduced except through editorial censorship or self-control. The former I refuse to practice unless pushed to it by gross indecencies -- not to be expected here at any rate; the latter is generally a good thing, I suppose, but as our numbers increase such control will become more of a threat to creative and informative expression. The morals of this story are, then: (1) discuss only one subject per message; (2) be as clear and concise as possible in your subject lines, using a question mark if you're asking for information, eliminating it if you're not; and (3) look at your mail every day. Comments on any of this continue to be welcome. --W.M.] ----------------- I agree with Mark Olsen. Why all the whining? Can't these people do a listing of their newmail, attend to those items of interest, and delete/all the rest? Nobody says you have to read everything. Turning HUMANIST into another BBS will destroy it. Roberta Russell % LISTSERV UTORONTO 3/12/88 % Roberta Russell humanist@utoronto 3/11/88 volume of mail on HUMANIST ----------------------------------- Conventional ARPANET digests long ago developed companion programs which `undigested' the data and turned it back into individual mail messages to which people could respond.
The key to both having your cake and eating it too with regard to one subject at a time is to save up the messages re: some topic until you have enough for one digest or the flow stops, and then to distribute the digest. That way digests are about one topic, and there still need not be any control over what people can talk about at any time; it is controlled only in terms of when it is `published'. % BITMAIL SUVM 3/12/88 % Robert Amsler MCCARTY%UTOREPAS.BI 3/12/88*Editorial: Digestion (31) --------------------------------------------- Willard: I cast my vote "pro" the current level of digesting. Thank you for doing it. I frequently read my mail via a 1200 baud modem, so paging through every message can be very time consuming. Keep up the good work! Sterling % BJORNDAS CLARGRAD 3/12/88 % Sterling Bjorndahl mccarty@utorepas 3/12/88 pro digesting --------------------------------------------- I guess my reaction so far is that I still get too many mail items from humanist. Some are also too small, so I'd prefer that some unrelated items be `bundled' together to make fewer, larger packages. For me, the problem is that I subscribe to a dozen or more % MAILER CUNYVM 3/13/88 % Robert Amsler MCCARTY%UTOREPAS.BI 3/13/88*Editorial: Digestion (31) --------------------------------------------- Rather than broadcasting out all communications to all subscribers, would it be possible to set up something like a bulletin board with topics which the subscriber could elect to browse and respond to? % WAYNE MSUS1 3/13/88 % WayneTosh/English-- mccarty@utorepas 3/13/88 HUMANIST communications--volu From: Willard McCarty Subject: Searching Homer (33) Date: Sun, 13 Mar 88 18:19:54 EST X-Humanist: Vol. 1 Num. 976 (976) ------------------------------------------------------------------------ Date Fri, 11 Mar 88 09:35:17 EST From Paul Kahn For John Haviland Software to do what you describe does exist, for the mix of equipment you have available.
Whether what the student proposes has been done already or not I can't say, but the tools to do it are there. The tape from TLG at UC Irvine will contain the complete works of Homer as a flat file. Greg Crane (Dept of Classics, Harvard University, Cambridge MA 02138) developed a set of UNIX programs which generate an inverted index of all words in a TLG author file, and a set of search programs for locating any word or string within the file and displaying it on any device which supports a Classical Greek character set. These programs are available from Crane (known as the HCCP software) and have proven to be portable to all versions of UNIX. They were originally created on a VAX running 4.2 BSD, so you should be set there. George Walsh (Dept of Classics, Univ of Chicago, Chicago IL 60637) developed a set of Greek fonts for the Mac, and a matching version of Mac Terminal with the Greek font hacked into the VT100 emulation, which lets you use a Mac as a terminal on the UNIX machine and display the Greek text. We have done a system, used in the Classics and Religious Studies departments at Brown, which uses an RT PC platform with 178 Greek authors on CD ROM, using Crane's index and search software and a simple front-end program written here. If you will give me a mail address I will send you some papers about it. Or give me a call. Paul Kahn, IRIS, Brown University Box 1946 Providence RI 02912 401 863-2402 From: Willard McCarty Subject: Digest: Miscellaneous notes (233) Date: Mon, 14 Mar 88 20:48:07 EST X-Humanist: Vol. 1 Num. 977 (977) [Contributors to this digest: Hans Joergen Marker (Travelling plans; Visit to New England in May, Mon, 14 Mar 88 11:32:16 DNT), Hartmut Haberland (Postcards, Mon, 14 Mar 88 11:47:37 DNT), Sebastian Rahtz (why not learn SNOBOL, Mon, 14 Mar 88 10:26:43 GMT), Ken Rumery (602-523-3850, CMSKRR01 at NAUVM; Job postings, 14 March 88, 10:10:45 MST), K.P.Donnelly@EDINBURGH.AC.UK (Diacritics, 8-bits and IS 6937, 14 Mar 88 12:53:20 gmt), and J. K. McDonald (El uso de otras lenguas en HUMANIST, Mon, 14 Mar 88 18:40 EST).] [Please let me know what you think of the following -- a gathering of notes on several topics with a handy table of contents. The numbers of the items in the table correspond to the numbers at the beginning of each note, each of which is enclosed in parentheses, e.g., (2). Thus if you're interested in the second item, use your editor to search for (2). -- W.M., with thanks to Michael Sperberg-McQueen.]
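The digest scheme described in the editorial note above -- numbered notes, a table of contents built from the subject lines, and an easily searched marker at the head of each note -- can be sketched in a few lines. This is an illustrative reconstruction in Python, not the actual software mentioned in the note; the message data and exact layout are invented for the example.

```python
# Sketch of the digest format: each note gets a number, the table of
# contents is built from the subject lines, and each note begins with
# a searchable marker such as "(2)".  Layout details are invented.

def make_digest(messages):
    """messages: list of (subject, body) pairs -> one digest string."""
    toc_lines = ["Table of Contents", ""]
    notes = []
    for n, (subject, body) in enumerate(messages, start=1):
        toc_lines.append(f"{n} --- {subject}")
        notes.append(f"({n})" + "-" * 40 + "\n" + body)
    return "\n".join(toc_lines) + "\n\n" + "\n\n".join(notes)

digest = make_digest([
    ("Travelling plans", "In the days 26th to 29th of May ..."),
    ("Postcards", "The Postcard issue reminded me of a story ..."),
])
print(digest)
```

A reader can then jump straight to the second note with an editor search for "(2)", exactly as the note suggests.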
Digest of Miscellaneous Messages HUMANIST Discussion Group 14 March 1988

Table of Contents
1 --------------------------------------------------------- Honorifics conference
2 --------------------------------------------------------- Travelling plans
3 --------------------------------------------------------- Postcards
4 --------------------------------------------------------- why not learn SNOBOL
5 --------------------------------------------------------- Job postings
6 --------------------------------------------------------- Diacritics, 8-bits and IS 6937
7 --------------------------------------------------------- El uso de otras lenguas en HUMANIST

(1)--------------------------------------------------------------------- Reed College, Anthropology and Linguistics Portland State University, Department of Modern Languages announce A Conference on |HONORIFICS|, Portland, Oregon, April 8-10, 1988 [for more information see HONORIFX CONFRNCE on the file-server] (2)--------------------------------------------------------------------- In the days 26th to 29th of May I shall be attending a conference in Washington DC. Having spent so much of the Danish taxpayers' money, it would be reasonable to visit people and institutions with a reasonable connection to my line of work (archiving of machine-readable data, documentation and description of research data materials, computing in history and related fields). Does anyone on the HUMANIST discussion group have suggestions? My travelling plans are still open, but visits to people and/or institutions should take place within a reasonable distance in time and place from the conference. Although I am determined to give the taxpayers their money's worth, a visit to LA, for instance, would have to be extremely well argued for. All suggestions are most welcome. Hans Joergen Marker. P.S. Should I come across any one of you during my stay, I promise not to speak a single word of Danish. Lou Burnard can confirm that, despite his later outbursts about "lingo". % LISTSERV UTORONTO 3/14/88 % Hans Joergen Marker HUMANIST 3/14/88 Travelling plans (3)--------------------------------------------------------------------- The Postcard issue suddenly reminded me of the following story, which a Swedish friend of mine told me years ago.
One Thursday afternoon, ten minutes before closing time of the shops, the Stockholm local radio announced that the Swedish State Monopoly (liquor stores, Systembolaget) would raise its prices from the next week, and that the shops, in order to prevent people from stocking up with booze at the old prices, would be closed for three days, starting the next day, Friday. Within a couple of minutes queues were forming in front of the shops, and people tried to get hold of as much of the cheap booze as they could carry in the ten minutes or so that were left. Now my friend (he was also the one who told me "Of course I am paranoid, but that doesn't prove that they are not out to get me" - in full earnest, by the way) had the following explanation for the information `leak'. The whole thing was actually carefully planned (I don't know whether the price raise was part of the plot, or if whoever was responsible just took advantage of it). This was two weeks after Harrisburg, and the Swedish public was very worried about possible accidents in the Swedish nuclear power plants. So the whole thing was a test of how fast you could reach the Swedish population by sending out a radio message in case of a disaster of some magnitude. So this worked. But what does the postcard avalanche through BITNET prove? % LISTSERV UTORONTO 3/14/88 % Hartmut Haberland HUMANIST Discussio 3/14/88 Postcards (4)--------------------------------------------------------------------- People don't learn SNOBOL routinely because a) it isn't compiled and b) it's not suited to applications over c.500 lines of code. Icon, mind you, is a different story. % LISTSERV UTORONTO 3/14/88 % Sebastian Rahtz humanist 3/14/88 why not learn SNOBOL (5)--------------------------------------------------------------------- The College of Creative Arts and Communication at Northern Arizona University has reorganized following several years of self-study.
The college is now configured as follows: School of Art and Design School of Communication (incl. Sp.Comm, Telecomm, Journalism) School of Performing Arts (incl. Dance, Music, Theatre) Department of Humanities and Religious Studies (incl. Arts Management) Northern Arizona University is seeking a Dean for the College as well as Directors for each of the three schools. Screening opens March 21 and will remain open until the positions are filled (July 1). An earned doctorate in a discipline of the unit(s) to be administered is required. Details may be obtained from Dr. David M. Whorton Associate Vice President for Academic Affairs, box 4085C Northern Arizona University Flagstaff, Arizona 86011 % CMSKRR01 NAUVM 3/14/88 % Ken Rumery MCCARTY@UTOREPAS 3/14/88 No subject (6)--------------------------------------------------------------------- Someone was asking about IS 6937 as compared to IS 8859/1. If my understanding is correct, IS 6937 was based on teletex, which was designed in Germany as the successor to telex. It was made an international standard round about 1984. Part 2 of it defined an 8-bit extension to the ASCII character set, to allow for accented letters in languages with Latin-based alphabets, as well as lots of other useful symbols like "pound" and "half". The unusual feature of it was that one of the columns of sixteen characters in the extended ASCII table contained "non-spacing diacritics". The idea was that "e-acute", for example, was represented by "non-spacing acute" followed by the usual "e". This has certain advantages, such as making efficient use of the eighth bit so that more characters could be accommodated, making it easy to strip a text of diacritics, and perhaps making alphabetic sorting algorithms simpler. But it has great disadvantages. It would need a fundamental rewrite of many editors and other programs if the number of characters were no longer equal to the number of bytes.
By contrast, it would need only a minor change to most programs to allow them to cope with 8-bit text in the more recent IS 8859/1 standard. Many programs might cope without any change at all. So as far as I know IS 6937 never really caught on, whereas the newer IS 8859/1 standard looks like it is taking off. As to the relationship between them, IS 8859/1 certainly doesn't augment IS 6937. In fact IS 6937 has lots of characters which IS 8859 does not have, such as "1/8", "division (mathematical)", "capital OMEGA" and "ij ligature". There is some overlap: seventeen out of the 94 extra character positions are common to the two standards. I don't really know much about all this. Can anyone else say anything more authoritative? Does anyone know of any printers implementing IS 8859/1? Kevin Donnelly % LISTSERV UTORONTO 3/14/88 % K.P.Donnelly@EDINBU HUMANIST@UTORONTO 3/14/88 Diacritics, 8-bits and IS 693 (7)--------------------------------------------------------------------- Our colleague Faulhaber of Berkeley laments the very small number of Hispanists in the US who know how to use electronic mail (and/or who have become members of HUMANIST), saying that they could be counted on the fingers of one hand: I hope he has a monstrous hand! Perhaps he means Spanish speakers rather than Hispanists. In any case, the number will grow. And not only in that part of North America called Canada. As Julien says of French, the lack of accents, of diacritical marks, of precision in the written language is unacceptable to the intellectuals of any age. However, it is not up to us humanists to provide the technological means that would let us define and advance our vocation. What matters to us is to keep setting out our objectives anew and to express them as constantly and lucidly as possible. The technicians will supply the means we require. Goerwitz opens up the cultural dimension of this debate for us: the Franks learned Latin because it was the language of greatest civilizing reach in the world of those centuries of the early Middle Ages; what he left in his inkwell is that courtly Latin was also a civilizing vehicle for the intellectuals of all the western provinces, even in Rome, for all those who strove to become bilingual. That is, the clerics (and later the humanists of the European Renaissance) recognized that bilingualism, or multilingualism, humanized them. (The essential value of bilingualism among the ruling classes of Spanish America today reflects not the strength of the 600 warships, wished for, of the US, but the good fortune of being obliged to conduct an important part of life in conceptualizations achieved only through dedicated mental effort.) Of diacritical marks I shall perhaps speak another day; meanwhile, let us not say that Danish or Swedish has no place on HUMANIST; the distinct society of the Quebecois in Canada likewise owes itself to the many French Canadians who know how to live with two languages. Some day the Hispanic giant in the US will awaken. % LISTSERV UTORONTO 3/14/88 % J. K. McDonald humanist@utoronto 3/14/88 El uso de otras lenguas en HU From: Willard McCarty Subject: Digest: Volume of mail on HUMANIST (177) Date: Mon, 14 Mar 88 21:05:23 EST X-Humanist: Vol. 1 Num. 978 (978) Contributors: D106GFS@UTARLVM1 (Robin C. Cover), Nancy Ide. Contents: (1) RE: Digest: Volume of mail on HUMANIST &c. (121), Sun, 13 Mar 1988 17:33 CST; (2) Humanist bulk, Sun, 13 Mar 88 13:29:57 CST; (3) HUMANIST Digests, Mon, 14 Mar 88 15:16 EST. (1)----------------------------------------------------------- It seems to me the real problem is that the usual paradigm of mailing lists doesn't support the notion of *topics* of discussion. Even the ARPA mail header standard doesn't provide a 'category' field, but only 'subject' (of course, it also doesn't provide for persons' names very neatly, either...). If each user could easily throw away notes *by topic*, then the deluge could be stemmed. This would also work if you had software which could maintain information on people's interests, and mail them only what they wanted. But that is a mess in terms of their keeping your software updated as to their interests. Here's a proposal: Every 'Subject:' field must begin with a category, from a set of active categories known to HUMANIST at the time. There is also a reserved category "NewTopic", which is always active, and used to propose a new topic of discussion. The list of active categories can be distributed as a note from time to time. Given a category label on every note, users with mailers can easily skip past topics they do not care about, and simple software can be written for others (like VM users) to delete notes not of interest (I'll probably write it for myself anyway, and will distribute it if so). Examples: Languages: Why not Swahili? --- This would head a note which is part of the ubiquitous current debate. I personally prefer avoiding spaces in category names, to force them to be short; but it doesn't make any difference to the proposal. Markup: What's an em-dash? --- This would of course be part of a different discussion. NewTopic: Translation Theory --- This note would aim to start off a new discussion. I see these problems: First, category names must be standardized.
One doesn't want people calling the same topic 'Language', 'Langage', 'Ling', etc. This can be handled by a simple exec you run before mailing out things, or by having an automatic posting machine return non-conforming mail to sender (which will get people to spell the categories right in a hurry, though they may not be happy). The XEDIT 'all' command (if you're on VM) will hide all but the 'Subject:' lines, so you could then review a day's notes all on at most a few screens, and normalize any oddball categories, or add categories if they are omitted. Second, categories sometimes evolve, rather than springing forth. This is not a real problem; when any category grows too large or diverse, either the moderator or a participant can propose a NewTopic. Another advantage of this method is that it is easy to track discussions, and see what topics are hot. Also, it is easy to retrieve old mail on particular topics. To really do this right, one should allow dots or some other separators in category names, thus providing for hierarchical categories for the obvious cognitive reasons. For example, one might have "Bible.NT.Lexicology", or whatever. This approach should be familiar to most users, from library card catalogs, hierarchical filing systems, or other similar things. But it might be overkill to begin with, and it is easily addable. I have a CMS bulletin-board system which uses these principles; as soon as I hook it up to handle posting and retrieval requests from remote network nodes, I will be making it available to interested sites. Steve DeRose Brown Univ., SIL % MAILER UTARLVM1 3/13/88 % D106GFS@UTARLVM1 MCCARTY@UTOREPAS 3/13/88 Humanist bulk (2)--------------------------------------------------------------------- I appreciate the digest format of HUMANIST, despite the protests of a few readers who wanted nothing changed.
I think the idea of "digesting" has several advantages: (a) It helps lend continuity to reading of HUMANIST mail if I can read several postings related to the same topic at the same time; (b) I think it will help contributors focus their comments on current topics (perhaps think a little harder about their contributions, or possible contributions) and thus cut down on inane chit-chat; (c) it reduces the HUMANIST mail traffic on the network, in terms of reducing the number of mail pieces; (d) it makes saving HUMANIST contributions much easier, if we want to save discussions on selected topics; (e) it makes ignoring selected topics much easier. I admit that I am on a VM/CMS machine where I have to peek the reader before I can do anything to the mail, and that "receive" (or even "readcard") gets expensive with so many mail pieces. But I suspect that most HUMANISTs are on VM...do you know? Finally, I had a discussion with Nick DeRose about the idea of introducing "TOPIC" as well as (more specific) "SUBJECT", where the taxonomy of "TOPIC" would be enforced, but the "subject" specification could be flexible. Of course, VM mailers would still use the "SUBJECT" line for what we mean by "TOPIC," but that isn't fatal. This too would help focus HUMANIST discussions, I think...which sometimes seem a bit shallow otherwise. Nick contributed the HORDER exec, and I'll bet other HUMANISTs would be willing to contribute similar mail software for other systems. In any case, I like the greater organization afforded by the digests. Thanks!! Robin C. Cover ZRCC1001@SMUVM1.bitnet % MAILER SMUVM1 3/13/88 % Robin C. Cover Willard L. McCarty 3/13/88 HUMANIST Digests (3)--------------------------------------------------------------------- It appears you are asking for a vote on the two methods: several digests, each on a topic, or one big digest with a table of contents.
I emphatically vote for the former, which enables deletion of those digests that are not of interest and then lets me file the rest easily. I have spoken with other VAX users and much of the perspective on mail comes from CMS users, who do not appreciate the problems of VAX mail. From the VAX perspective, one giant file would be much more difficult to handle. Also, VAX users are charged for the storage space for messages, even before they are read (in your jargon, "peeked" at). Several small digests will also help there. Above I meant to say that I have spoken with other VAX users who lament along with me that much of the perspective on mail... etc. Another feature of VAX mail is that one cannot back up and edit when typing mail messages. Now you know why my messages are full of typos. % IDE VASSAR 3/14/88 % Nancy Ide MCCARTY@UTOREPAS 3/14/88 *Digest: Volume of mail on HUM From: Willard McCarty Subject: Queries: Emblems & Arabic w-p (45) Date: Mon, 14 Mar 88 21:10:02 EST X-Humanist: Vol. 1 Num. 979 (979) Contents: (1) Anyone in HUMANIST do emblems?, 13 Mar 88 20:49 -0330; (2) Query: Arabic word-processing (8 lines), from JLD1@PHX.CAM.AC.UK, Mon, 14 Mar 88 12:02:47 GMT. (1)----------------------------------------------------------- If there are any HUMANISTs who are interested in emblem studies, and in particular in constructing an emblematic database using HyperCard, I would be happy to hear from them. In particular, if anyone already has a bunch of digitized emblem images, I'd be especially interested.
% LISTSERV UTORONTO 3/13/88 % dgraham@mun.bitnet humanist@utoronto.b 3/13/88 Anyone in HUMANIST do emblems (2)--------------------------------------------------------------------- Does anyone know of a word-processing system with full footnoting and font-changing facilities (such as would be provided by TROFF or TeX) which will work with mixed Arabic and English text? If so, for what machine(s), in what computer language, and under what operating system(s) does it run? Similar programs which do the job for mixed Hebrew and English would also be of interest. John Dawson ( JLD1@uk.ac.cam.phx ) % LISTSERV UTORONTO 3/14/88 % JLD1@PHX.CAM.AC.UK humanist@UTORONTO 3/14/88 Query: Arabic w-p From: Willard McCarty Subject: Digest: programming languages (66) Date: Tue, 15 Mar 88 19:46:18 EST X-Humanist: Vol. 1 Num. 980 (980) Contents: (1) alleged SNOBOL limitations (18 lines), from Wayne Tosh / English--SCSU / St Cloud, MN 56301, Mon, 14 Mar 88 23:18 CDT; (2) SNOBOL, SPITBOL, and Icon, from Eric Johnson, Liberal Arts, DSC, Madison, SD 57042, Tue, 15 Mar 88 11:54:20 CST. (1)----------------------------------------------------------- Rahtz alleges that SNOBOL4 is unsuitable for processing texts longer than 500 lines. Wherever did you come up with that limitation, Sebastian? I've been "putzing around" (as they say here in Minnesota) with a PC disk file of Marlowe's Dr. Faustus. Without having counted accurately, I would estimate it at about 2400 lines. I've been playing with about two-thirds of that (the two-thirds that would fit conveniently in PC-WRITE's 60K file limitation), using SNOBOL4+ on a 4.whatever MHz machine. For those unfamiliar with SNOBOL4+, Mark Emmer says his implementation is about 10 times slower than mainframe versions of SNOBOL4.
And, for my part, SNOBOL4+ seems to (and I stress the subjective nature of my measure) perform fast enough when I'm cranking out word frequencies, displaying pronouns in context, etc. WAYNE@MSUS1 (2)--------------------------------------------------------------------- My original note about SNOBOL was meant to be read in the context of the current discussion of programs for various kinds of non-numeric computing: concordance and index generation programs, etc. I am surprised that humanists are willing to routinely learn a great number of proprietary commands in order to use such programs, but they rarely learn to write their own programs in a language like SNOBOL or Icon, which would not be much more difficult and would give them far, far more power and flexibility. I prefer SNOBOL to Icon because its syntax is closer to the way my mind works (and I'll bet that would be true for many humanists). Both SNOBOL and Icon code can be written with great From: Willard McCarty Subject: Digest: multilingual wordprocessing (56) Date: Tue, 15 Mar 88 19:52:57 EST X-Humanist: Vol. 1 Num. 981 (981) Contents: (1) Re: Arabic word-processing, from dow@husc6.BITNET (Dominik Wujastyk), Tue, 15 Mar 88 14:30:04 EST; (2) Arabic/Hebrew/Greek/Cyrillic/English Word Processors, Tue, 15 Mar 88 13:42 EDT. (1)----------------------------------------------------------- There are now facilities (fonts, macros, the whole kaboodle) for using TeX to typeset both Arabic and Hebrew. These were recently announced in TeXhax by Goldberg, and are available from a listserver in Israel. Send a mail message GET IVRITEX PACKAGE to LISTSERV@TAUNIVM to get the whole package. I can post more details and information here if anyone is interested.
Dominik One word processor that allows you to mix English, Greek (with full diacritics), Hebrew, Arabic and Cyrillic on the same page, and which includes a font generator (for screen and printers), and which supports 9-pin, 24-pin and LaserJet printers is: Multilingual Scholar, Gamma Productions, 710 Wilshire Blvd, Suite 609, Santa Monica, California 90401, tel 213-394-8622. Hardware requirements are modest: IBM PC/XT/compatible with two floppy drives (hard disk preferred), 640K RAM, Hercules Monographics OR Colour Graphics Adaptor (EGA real soon now!). Price: $350.00 (US). The latest version claims to support bottom-of-the-page footnotes. I haven't tried this version yet. It's very easy to use and, if memory serves me correctly, I think it creates ASCII-readable files. You can import standard ASCII files and then add Arabic, Hebrew, etc. Printing is slow on 9-pin and 24-pin printers because text is printed in graphics mode. Sam Cioran (McMaster University, Hamilton, ONTARIO, CIORAN@MCMASTER) From: Willard McCarty Subject: 8 vs 5.25 Bernoulli box? (24) Date: Tue, 15 Mar 88 20:15:24 EST X-Humanist: Vol. 1 Num. 982 (982) ------------------------- Date: Mon, 14 Mar 88 18:51:43 MST From: Mark Olsen I am involved in a project that will require mailing large amounts of data on Bernoulli Box or other cartridge mass-storage devices. I have an 8 inch Bernoulli Box which works very well, but I am wondering whether switching to the new 5.25 inch (internal) might not be a bad idea, as we have to purchase equipment for the project in the near future. Has anyone heard anything, good or bad, concerning the new Bernoulli cartridge systems? Are there any other cartridge systems which can match/beat Bernoulli for reliability and cost? Mailing 5.25 inch cartridges is, in itself, a large advantage, since the 8 inch cartridges are a *pain* to package. All suggestions/comments are appreciated.
Thanks in advance, Mark -- if I screw this one up I'm dog meat -- Olsen From: Willard McCarty Subject: Test -- please ignore (20) Date: Tue, 15 Mar 88 20:46:06 EST X-Humanist: Vol. 1 Num. 983 (983) This is a test. Please ignore it. From: Willard McCarty Subject: Queries (115) Date: Wed, 16 Mar 88 19:46:41 EST X-Humanist: Vol. 1 Num. 984 (984) Contents: (1) ICON source (4 lines), from tektronix!reed!johnh@uunet.UU.NET (John B. Haviland), Mon, 14 Mar 88 15:46:40 PST (10 lines); (2) Re: Extinct American Indian Languages of the Pacific Northwest, from Pink Freud Psyche A, Tue 15 Mar 88 19:06:26-PST (50 lines); (3) Request for information: document comparison programs, from mbb@jessica.Stanford.EDU, Wed, 16 Mar 88 07:36:57 -0800 (28 lines). (1)---------------------------------------------------------- What is the best way to get ahold of an ICON implementation? (For Vax, Mac, MSDOS.) I have assumed that one should simply write to the Griswolds, at the Icon Project at Univ. of Arizona? Is there a downloadable way? Could the source be posted to HUMANIST? (2)------------------------------------------------------------------ My name is Troy Anderson. I am currently doing work at Stanford University on the extinct language of my ancestors, the Lower Coquille Indians of the Central Oregon Coast. My method for going about reconstructing a dead language is as follows. I originally did a whole lot of bibliographic work, trying to gather as much info on the language as I could. I ended up with about 300 pages of texts and about 10 hours of tapes. The textual material is half published and the other half unpublished.
The published material was gathered by Melville Jacobs in his 1932 University of Washington volume 8, Coos Myth Texts, and in Narrative and Ethnologic Texts. The unpublished material is from J.P. Harrington's collection (I question its validity). The computer has played a big part in my research. I sent the texts in published form to BYU to get optically scanned onto DOS text files, which I have modified in Word Perfect. I am now trying to reformat the texts so that they line up phrase by phrase. Let me back up. The texts I have from Mel are translated clause by clause, and I would like to make a dictionary and a list of morphemes. The way I am going about running through all these texts is by sending the formatted texts through a program called Word Cruncher. This used to be called the BYU concordance program (?). So it will make a concordance, clause by clause, of the texts, picking out the words so that I can analyze the texts morpheme by morpheme. Once again I must back up ... the texts are transcribed morpheme by morpheme but the translation is free. I need to find out how the translation works literally. From there all my problems should be solved, and it will be just a matter of picking up a totally foreign language and then trying to make a grammar book out of it. I am currently in the stage of formatting the texts into Word Cruncher format. My technical expertise prevents me from making a program to merge the English and the Miluk files (two separate files), which are lined up perfectly by number of clauses but not by number of pages (the English ran longer in its translation than the transcription of the Miluk). So I am trying to take them apart and make them the exact same length, but I am not having much luck. If you know of an easy way to go about fixing this problem, please let me know.
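The merge described above, where two files line up clause for clause but paginate differently, can be sketched in a few lines. This is only an illustrative sketch under stated assumptions (one clause per line, equal clause counts); the file contents shown are hypothetical, not taken from the actual project.

```python
# Sketch of merging two clause-aligned files (e.g. a Miluk transcription
# and its English translation) into one interlinear file. Assumes one
# clause per line and matching clause counts.

def interleave_clauses(original_lines, translation_lines):
    """Pair clause i of the original with clause i of the translation,
    separating each pair with a blank line."""
    if len(original_lines) != len(translation_lines):
        raise ValueError("files are not aligned clause-for-clause")
    merged = []
    for orig, trans in zip(original_lines, translation_lines):
        merged.append(orig.rstrip("\n"))
        merged.append(trans.rstrip("\n"))
        merged.append("")                  # blank line between pairs
    return "\n".join(merged)

miluk = ["clause 1 (Miluk)\n", "clause 2 (Miluk)\n"]
english = ["clause 1 (English)\n", "clause 2 (English)\n"]
merged = interleave_clauses(miluk, english)
assert merged.splitlines()[:2] == ["clause 1 (Miluk)", "clause 1 (English)"]
```

Once the files are interleaved this way, differing page lengths no longer matter, since the unit of alignment is the clause rather than the page.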
Troy (3)------------------------------------------------------------------ I'd be quite grateful if anyone has any suggestions regarding good document comparison programs running under MS-DOS or PC-DOS. I'm most interested in the ability to enter corrections from the keyboard. For example, if file 1 and file 2 differ at some point and neither one is correct, can you enter the correction from the keyboard and continue without leaving the program? Please send responses to me, and I'll summarize the responses for HUMANIST. Thanks very much, Malcolm Brown, Stanford University From: Willard McCarty Subject: Notices (53) Date: Wed, 16 Mar 88 19:51:54 EST X-Humanist: Vol. 1 Num. 985 (985) Contents: (1) Call for CALL papers (7 lines); (2) More on Honorifics (5 lines); (3) New VM/CMS exec for HUMANIST mail (9 lines); all dated 16 March 1988. INSTRUCTIONAL SCIENCE, An International Journal. CALL FOR PAPERS. SPECIAL ISSUE: Computer Assisted Language Learning. Guest Editors: Masoud Yazdani and Keith Cameron [for more information see the file CALL PAPERS on the file-server] A tentative schedule for the conference on Honorifics has been posted to the file-server. "HORDER EXEC" has just been posted to the HUMANIST file server. It is a utility for VM/CMS users, which will load all HUMANIST MAIL files from your virtual reader into "HUMANIST NOTEBOOK", with separator lines between them. Steven J. DeRose, Brown University and the Summer Institute of Linguistics. From: Willard McCarty "James H. Coombs" Subject: Bernoulli boxes (38) Date: Wed, 16 Mar 88 19:54:59 EST (original message: Wed, 16 Mar 88 03:46:31 EST) X-Humanist: Vol. 1 Num. 986 (986) ------------------------- I haven't heard of any problems with the new BBoxes. I'm not sure how many of them are out there, but Iomega has been pretty reliable.
You might check the Iomega bulletin board to see if there have been any recent complaints. The number should be in your doc---I don't have it here. I know that there have been requests for assurances that the 8 inch will be supported in the future (and assurances in response). --Jim Dr. James H. Coombs, Software Engineer, Research, Institute for Research in Information and Scholarship (IRIS), Brown University jazbo@brownvm.bitnet From: Willard McCarty Subject: Word processor conversion (25) Date: Wed, 16 Mar 88 21:00:11 EST (original message from dow@husc6.BITNET (Dominik Wujastyk), Sat, 12 Mar 88 16:50:34 EST) X-Humanist: Vol. 1 Num. 987 (987) ------------------------- There is a shareware program called XWORD, written by Ronald Gans (350 West 55th Street #2E, New York, NY 10019; phone (212) 957-8361), which converts between ASCII, WordStar 3.3 or 4.0, XyWrite III or II, Nota Bene, MultiMate (or MM Advantage), WordStar 2000 (release 2), WordPerfect (4.1 or 4.2), and dBase III (comma-delimited). I have used it for XyWrite and ASCII with success. Registration is $20, and gives you a slightly better, newer version, and promise of notification and a cheaper deal on future releases. MS Word and DCA format are in the pipeline, I gather. Dominik Wujastyk From: Willard McCarty Subject: Collation software; Interline processing (44) Date: Wed, 16 Mar 88 21:24:35 EST (original message from Mark Olsen, Wed, 16 Mar 88 18:39:56 MST) X-Humanist: Vol. 1 Num. 988 (988) ------------------------------------------------------------------------ I DISCARDED instead of received, so I can't send the following to the appropriate parties. On text comparison, have you looked at URICA? This is an interactive system by Robert Oakman and ?? which allows the user to create an apparatus for critical editions, etc. He is at the University of South Carolina and I purchased the program for $35.00??. I tested it and found that it is a good-looking system. I have not, however, used it for a critical edition yet.
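The core of the text comparison that this thread discusses, finding the places where two versions of a document differ so a user can then choose or type a correction, can be sketched briefly. This is only an illustration of the general technique; none of the programs named in the discussion necessarily work this way.

```python
# Minimal sketch of document comparison: report each region where two
# versions of a text differ. An interactive program would then prompt
# the user for a correction at each reported difference.
import difflib

def differences(lines_a, lines_b):
    """Yield (operation, text_a, text_b) for each differing region."""
    matcher = difflib.SequenceMatcher(None, lines_a, lines_b)
    for op, a0, a1, b0, b1 in matcher.get_opcodes():
        if op != "equal":
            yield op, lines_a[a0:a1], lines_b[b0:b1]

a = ["To be, or not to be,", "that is the question:"]
b = ["To be, or not to be,", "that is the quest:"]
diffs = list(differences(a, b))
assert diffs == [("replace", ["that is the question:"], ["that is the quest:"])]
```

Presenting both readings at each difference and accepting a third, keyboard-entered reading is exactly the workflow Brown asks about and the apparatus-building step that collation systems like URICA automate.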
[Editor's comment: following is some information on URICA taken from an entry in our forthcoming Humanities Computing Yearbook (Oxford U.P.). ------------------------------------------------------------- URICA has been developed and is distributed by Robert L. Cannon and Robert L. Oakman, Dept. of Computer Science, University of South Carolina, Columbia, SC 29208 U.S.A.; (803) 777-2840. It costs $50, payable to the Carolina Research and Development Foundation. It requires an IBM PC/XT/AT/PS2 with one floppy disk drive, 128K RAM, PC-DOS 2.1 or later; not copy protected. 17 pp. users manual; updates for a nominal fee.] ------------------------------------------------------------ On the Indian-English problem, one might try IT: Interlinear Text Processing, a program published by the Summer Institute for Linguistics. I have the program, fired it up once or twice, but have no comments on it save that it was quite reasonably priced, $50.00?? I can give more details on request. Mark From: Willard McCarty Subject: On digesting the digests (166) Date: Wed, 16 Mar 88 21:38:18 EST Contributors: Sterling Bjorndahl (Claremont Grad. School); A_BODDINGTON@VAX.ACS.OPEN.AC.UK; Joanne; R.J.Hare@EDINBURGH.AC.UK; Wayne Tosh / English--SCSU / St Cloud, MN 56301; AYI004@IBM.SOUTHAMPTON.AC.UK; Dr Abigail Ann Young. Item subjects: RE: Digest: Miscellaneous notes (233); Digests; Digest; Humanistic digestion; Indigestion. X-Humanist: Vol. 1 Num.
989 (989) (1) Date: Mon, 14 Mar 88 21:18 PST (9 lines) (2) Date: 15-MAR-1988 12:58:50 GMT (24 lines) (3) Date: Tue, 15 Mar 88 07:14:50 EST (13 lines) (4) Date: 15 Mar 88 09:17:26 gmt (15 lines) (5) Date: Mon, 14 Mar 88 22:53 CDT (8 lines) (6) Date: Tue, 15 Mar 88 19:27:02 GMT (29 lines) (7) From: Dr Abigail Ann Young (28 lines) (1)---------------------------------------------------------- You wanted feedback on this subject. I vote against it. My editor and mailer software aren't that well integrated. I like your digesting by topic, but I'd prefer the "solo" items to continue coming through solo. Sterling (2)------------------------------------------------------------------ DIGESTS I vote for one single digest with a table of contents. Like Nancy Ide I am using a VAX, but find long messages (IN or OUT) no problem. This may suggest that we are using different mailers (mine's VMS mail). Incoming messages can simply be scanned with the editor using READ/EDIT. Outgoing ones are prepared as files or using SEND/EDIT (as this one was). A day's Humanist mailing can then simply be printed from mail using the PRINT command. Finally it can be MOVEd to a folder or DELETEd with a single command. On a good day I get only 100 messages; it is tiring to manipulate each individually, and I would be happier with half as many. I don't object to the bulk, just the tedium of manipulation. Where I do sympathise with Nancy is that we here are not charged for filespace etc. Andy Boddington, Open University, UK (3)-------------------------------------------------------------- This is a great idea! Just to add to the mayhem, let me say that I rarely get the opportunity to reply to Humanist, although I am an avid reader. As the Director of a Computer Center I get so much mail that I have to have someone intercept Humanist, sort it out for me, and leave me a hard copy.
Having a digest would greatly enhance MY ability to handle Humanist myself, quickly, efficiently and effectively, and perhaps I may even be able to contribute something. Thanks as always for your wonderful assistance on this undertaking. (4)------------------------------------------------------------------ I'm afraid that I have to say that for me the 'digest' is more of a hindrance than a help. I should say, though, that this is an entirely subjective viewpoint which has its roots in the fact that our MAIL system is here to provide the systems programmers with something to 'maintain', rather than to provide the users with a service - i.e., it is too difficult to get an editor working on the current message. I have to store it in a file, then exit from the mail system to use the editor on the file. A real pain. I prefer the messages to be left as individual entities; then I can scan the subject line and read or not read, as I choose. Roger. (5)------------------------------------------------------------------ Terrific idea! This answers very nicely to the concerns I had expressed to you earlier about a bulletin-board arrangement that would allow one to select the topics one was interested in. Huzzah! (6)--------------------------------------------------------------------- Digesting information may not in itself be bad for the network - but it does create an opportunity for those presiding over the communication flow to fashion it in their own interests. I apologise in advance for appearing to make a naive point, but it is important to realise that such seemingly 'neutral' tasks as categorisation are highly subjective, and hence potentially damaging to the hopeful prospect that this network will achieve more than self-gratification by like-minded and highly privileged people.
Some recent comments, referring to 'shallow' conversation and 'inane chit-chat', are quite threatening in this regard, particularly as they are accompanied by a strategy for controlling the structure and content of information. The pedestrian aggressiveness in such comments is distressing - not only does the pronouncement reflect upon all contributors, all of whom, I believe, would not waste their time if they did not believe in the value of this forum, but it also seems to be far from the values inherent in the literature that many of us have devoted our lives to - in which metaphor represents a way of freely expressing one's view of the human condition. We may use tools, such as computers, to tear apart the imaginative worlds created by our language(s), but we are not trying to destroy them. Mutability, chance, the protean nature of sensible things - unconstrained discourse - these are what keep us from just building monuments to ourselves. Brian Molyneaux (7)-------------------------------------------------------------- Today I really began to experience Humanist withdrawal. My constant supply of regular daily doses has been cut off. How can I survive with only one fix a day?... What I dislike about HUMANIST sometimes are the long, esoteric, needlessly (at least in my opinion) technical discussions of front and back ends, mark-up languages, textbases, CD-ROM, etc., which some members indulge in at intervals: don't misunderstand, I'm sure there are interesting and informative ways to discuss these issues, I just didn't notice much of it here, mostly the technological hierarchs talking.... But then to have some people talking in what seems to me to be a rather self-righteous way about the waste of their time, the difficulty of sorting wheat from chaff, and how they can't be bothered with undue chattiness (and worse still offering everyone good advice about how to do better in future!) seems a little much.
I mean most of us put up rather quietly with their hobby-horses in the fall and winter!!!! I don't like HUMANIST as much as I did before. I guess the days of happy anarchy had to pass. Probably it was the postcard request that did it in! Abby From: Willard McCarty Subject: Misc. comments (24) Date: Wed, 16 Mar 88 21:50:05 EST X-Humanist: Vol. 1 Num. 990 (990) My apologies for the delay in sending out the digest on digestion. Some strange character in one of the messages was causing the whole digest to be rejected by ListServ. I downloaded then uploaded the file, and that seems to have cured the ill. I want to thank Michael Sperberg-McQueen in public for writing me a macro to do the digesting: il miglior fabbro! Please bear with me for the next few days while I experiment with various ways of sorting the mail. I, too, regret the passing of HUMANIST's ebullient youth -- or am I just telling myself another myth of a Golden Age? -- but I'm discovering in other spheres that maturity has its deeper joys. Anyhow, several of us were facing imminent collapse, and something had to be done. We make allowances for each other, please! Willard McCarty mccarty@utorepas From: Willard McCarty Subject: Meta-talk (97) Date: Thu, 17 Mar 88 20:23:08 EST X-Humanist: Vol. 1 Num. 991 (991) Contents: (1) chiffchaff, from CMI011@IBM.SOUTHAMPTON.AC.UK, Thu, 17 Mar 88 14:49:04 GMT (15 lines); (2) On the need for aggressive mailer software, from Sterling Bjorndahl (Claremont Grad. School), Thu, 17 Mar 88 11:25 PST (64 lines). (1) -------------------------------------------------------------------- Abigail Young's 'esoteric' complaint. I think she's going a bit over the top. If HUMANISTs can't cope with markup languages and database concepts (if they are interested), it's a pretty poor lookout!
I would contend (and I KNOW that some of you out there agree with me) that the 'philosophical' issue of database design is crucial to many aspects of database design, and if I can't quarrel with Jim Coombs on whether or not a commercial database system can adequately handle text, then what CAN we discuss on HUMANIST? Sebastian Rahtz (2) -------------------------------------------------------------------- I don't know about you, dear reader, but I find that my e-mail correspondents ignore my messages with unusual frequency. I find that e-mail is ignored much more than paper mail. And it is certainly ignored more than telephone calls. Perhaps 'ignored' is not the best word. Several confessions on HUMANIST have made the matter clear: e-mail messages can be quickly and easily filed somewhere to be dealt with "later" - all in good intention. The problem is that "later" never arrives, or it arrives only after the original sender, wondering if perhaps it is the network that is at fault, has sent three or four follow-up messages. Sending these follow-up messages is 'work.' Machines should work; people should think. A new field must be added to the mail header. I propose that this field be named "Reply-by-or-else!". Every day the mailer software scans everyone's received mail. Messages which have been read at least once[1] will be checked for a "Reply-by-or-else!" field. If the date in this field has been exceeded, a message will be sent by the system to the recipient reminding him/her that the message has indeed not been replied to yet. This message will be re-sent every day, with successively harsher and more threatening expressions being used, perhaps including curses in a variety of exotic languages. If a reply has not been made in a week, the mailer will randomly delete one file from the recipient's disk allocation. The threats and random deletions continue for another week. At that point, the system manager is notified that one of his/her users is being impolite.
The system manager can reply to the original sender, apologizing for the uncouth nature of the users at site X, especially including user Z who has not replied to "your message of YYYY-MM-DD." If the system manager chooses not to take on this responsibility, a message is sent to every user on the system: "Attention all users: User Z is impolite and has not responded to a message sent on YYYY-MM-DD by colleague A@B. For this you all will suffer. Beginning tonight at midnight, one file will be randomly deleted from each of your disk allocations. This will continue until user Z is punished." Clearly this will result in a tremendous amount of pressure among the users to reply to, or at least to acknowledge, any incoming mail which includes the "Reply-by-or-else!" field. Can anyone think of a better plan? Sterling Bjorndahl BJORNDAS@CLARGRAD.BITNET ---------------------------------------------------------- [1] This is to allow for the situation when you go to another continent for two months and never read your e-mail. No fair that you should be punished for not replying to a letter that you never read. [Note on my English idiom in this last sentence, for those people who may not be absolutely fluent in this language: 'read' is the simple past tense (I think better (albeit more serious) usage would be the present perfect tense.) 'No fair' is short for "It is not fair." I know how tough it can be to deal with colloquialisms, which seldom appear in scholarly publications.] From: Willard McCarty; RSTHC@CUNYVM Subject: Announcement of a book (30): New Textbook Date: Thu, 17 Mar 88 20:26:18 EST; Thu, 17 Mar 88 14:51 EST X-Humanist: Vol. 1 Num. 992 (992) ------------------------- Bob Tannenbaum (RSTHC@CUNYVM) At the risk of being accused of using BITNET for a commercial purpose, I would like to announce that Volume I of my new text is now available from Computer Science Press. The book is: Computing in the Humanities and Social Sciences, Volume I: Fundamentals.
I am not aware that the book has been reviewed yet or advertised widely, so you may not have heard of it. I am just completing the final revisions to Volume II: Applications, and should be shipping the diskettes to the typesetter before the end of March. I am hoping that Volume II will be available in plenty of time for use in the Fall semester, 1988. If you have any questions, please do not hesitate to send me E-mail or to telephone (212) 772-4289. If you have seen the book and have any comments or suggestions, I would very much appreciate hearing them. Thanks. Bob Tannenbaum (RSTHC@CUNYVM) From: Willard McCarty; R.J.Hare@EDINBURGH.AC.UK; CMI011@IBM.SOUTHAMPTON.AC.UK Subject: Snobol and Icon (69): Icon Source; a) Snobol b) Icon sources Date: Thu, 17 Mar 88 20:29:12 EST X-Humanist: Vol. 1 Num. 993 (993) (1) Date: 17 Mar 88 10:02:34 gmt (29 lines) (2) Date: Thu, 17 Mar 88 14:49:04 GMT (23 lines) (1) -------------------------------------------------------------------- I'm not sure I'm smart enough to work out a return address for John Haviland (to be honest, I'm sure I'm not!), so here's my reply to his query about Icon sources - I'm sure you will get several in a similar vein. Icon is PD - there is no registration fee, etc. and Griswold et al positively encourage copying of the code between users. This is how I interpret all the literature I have anyway. So, if you know someone 'locally' who has the system running, you could beg, borrow or steal a copy from them (this is how I got my first implementation, from Sebastian Rahtz). Here in the UK, I think I am correct in saying that the Microcomputer Software Distribution Service at Lancaster University has the executable files for at least one implementation of the language in their down-loadable library.
This facility is not (from memory) available from the Icon Project, and they give some pretty convincing reasons for this being the case in one of the Icon newsletters they publish regularly. The Icon Project also makes the source code of Icon available for those enterprising enough to want to try and implement it on a new system. I'm sure a message to them would yield a price list, etc. Roger Hare. (2) -------------------------------------------------------------------- Two small points: a) I didn't say SNOBOL couldn't handle 500 lines of text, but that programs over 500 lines very easily get unmaintainable. I once wrote a SPITBOL program called MONSTER, which ended up about 2000 lines, and I can honestly say that there were parts that worked entirely by the High Magic (ie luck). I'm not saying you CANNOT write good long programs but that it would be much easier in a more modular language (like Icon) b) re: the request for Icon source; I think the questioner would blanch at how much material would be needed to download the source of Icon. I'd definitely recommend that people write to the Griswolds and send a small amount of cash. The documentation, newsletter, up-to-date material, program library etc are well worth a few paltry dollars and a few weeks' wait. Sebastian Rahtz From: Willard McCarty; David Sitman Subject: Multilingual wordprocessing (45) Date: Thu, 17 Mar 88 20:31:34 EST; Thu, 17 Mar 88 12:50:22 IST X-Humanist: Vol. 1 Num. 994 (994) ------------------------- Right-to-left word processing (Hebrew, Arabic, Persian, etc.) is, of course, a hot topic here in Israel. I don't write Arabic, so the information I have on Arabic word-processing specifically is rather limited. As Dominik Wujastyk mentioned, there is a package for TeX users available from LISTSERV@TAUNIVM (as Dominik wrote, send the command: GET IVRITEX PACKAGE). While the procedures can be used for Arabic as well as Hebrew, only Hebrew fonts are available at this time.
I was given a copy of version 2 of Multilingual Scholar (mentioned by Sam Cioran) to try out a few months ago. One could easily move from one language to another, even on the same line. As I recall, I found it rather lacking in "advanced" word-processing options: indexing, footnoting, etc., and the fonts weren't very nice. The thing that bothered me most was the protection scheme: you have to plug something into the parallel port in order to use the software. Anyway, we decided not to buy it. There are a number of Hebrew-English word processors which are in widespread use here, but in general, they can't hold a candle to the monolingual mainstays. One which is available in North America is EinsteinWriter, which I am told was among the 50 word processors recently reviewed in PC Magazine. Einstein is my favorite among the simple bilingual word processors. It's relatively fast, easy to learn and bug-free. Microsoft is in the process of developing Arabic (and Hebrew) versions of its main products: DOS, WINDOWS, WORD. I don't know which of these are already on the market, but anyone who is interested can send me a note and I'll dig out the electronic mail address of the fellow at Microsoft who is in charge of this project. And then, there is Nota Bene. Version 3 has just come out. I'll make sure that a report on this new version gets to the list soon. 
David From: Willard McCarty; Dr Abigail Ann Young; lang@PRC.Unisys.COM; "David Owen, Philosophy, University of Arizona" Subject: Wordprocessing, mostly multilingual (102): Multilingual Word Processing; Converting Word-processor files; Multilingual wordprocessing (45); Multi-Lingual Text-Processing Date: Fri, 18 Mar 88 19:26:16 EST X-Humanist: Vol. 1 Num. 995 (995) (1) Date: Fri, 18 Mar 88 08:36 EDT (18 lines) (2) Date: Fri, 18 Mar 88 08:14:58 EST (17 lines) (3) Date: Fri, 18 Mar 88 07:55:12 EST (23 lines) (4) Date: Fri, 18 Mar 88 11:10 MST (17 lines) (1) -------------------------------------------------------------------- David Sitman is correct in saying that Multilingual Scholar (Gamma Productions, California) is not a powerful word processor. His account of version 2.0 is accurate. However, there is a new version (3.x) which is much improved and does support basic footnoting (but not indexing). Few multilingual word processors (IBM PC based, at least) are going to give you the flexibility to mix Hebrew, Arabic, Greek, Cyrillic and Roman on the same page as well as provide a powerful font generator for rolling your own. Yes, there is a nasty "dongle", or piece of hardware that you have to stick in the parallel port to get print out. It's a clumsy attempt at discouraging software piracy and one I wish Gamma would abandon. However, they will allow you to buy a second "dongle" for use on a second computer, presuming that you might have one at home and one at the office.
Sam Cioran (CIORAN@MCMASTER) (2) -------------------------------------------------------------------- It is true, as several people have mentioned, that WordPerfect supplies its own CONVERT programme to 'translate' files from its own format to others (including WordStar) and vice versa. But one should be aware that there is an 'undocumented feature' in the conversion of a WP file containing footnotes to WS: the notes, which WP embeds in the file surrounded by an escape sequence which causes them to be invisible when you scroll through the text but printable, are stripped out by the CONVERT programme and simply thrown out, not written to another file. If, as I had to, you must have a WS version of a scholarly text originally written in WP, this can create some extra work! (3) -------------------------------------------------------------------- A couple of years ago, I worked for Jack Abercrombie at the University of Pennsylvania's Center for Computer Analysis of Texts, and at that time, Jack and his group were developing an IBM-PC-based word processor for all sorts of ``exotic'' languages, including Greek, Hebrew, Arabic, and Devanagari. Since it's been a while since I was associated with that project, I can't say anything about its current status, but perhaps Bob Kraft (if he is one of the hardy souls who is still reading this newsgroup) could provide more details... --Francois Francois-Michel Lang Paoli Research Center, Unisys Corporation lang@prc.unisys.com (215) 648-7469 Dept of Comp & Info Science, U of PA lang@cis.upenn.edu (215) 898-9511 (4) -------------------------------------------------------------------- A couple of HUMANISTS have mentioned a package called TurboFonts to me that runs in conjunction with standard word processors such as Word Perfect. It needs an EGA or Hercules+ card (which leads me to believe that it runs in text rather than graphics mode) and supports a number of printers, including the HP LaserJet+ or Series II.
It is designed to display and print, within eg Word Perfect, non Latin alphabets such as Greek, as well as many scientific symbols. Have any HUMANISTS used this package? With what hardware? If I receive sufficient replies, I will post the results in HUMANIST (or on its server). David Owen OWEN@ARIZRVAX.BITNET OWEN@RVAX.CCIT.ARIZONA.EDU From: Willard McCarty; Mark Olsen; Hans Joergen Marker Subject: At play (67): Sterling -- break his kneecaps -- Bjorndahl; The missing daily joke; Too much discipline Date: Fri, 18 Mar 88 19:34:51 EST X-Humanist: Vol. 1 Num. 996 (996) (1) Date: Thu, 17 Mar 88 20:10:37 MST (23 lines) (2) Date: Fri, 18 Mar 88 09:35:31 DNT (26 lines) (1) -------------------------------------------------------------------- This can be filed under IDLE CHATTER -- and please, no return mail. Now here is a man I never want to borrow 50 cents from. And -- don't doubt it -- people RETURN Sterling's phone calls. Or else. Seriously, I can't think of one thing I would detest more than an intrusion on my computer. I have a modem link to the mainframe from home and office and only one phone at each location. I often, particularly in the morning, log on just to tie up the phone. At last a little peace and quiet. If I am ignoring something or have forgotten it -- like not returning phone calls -- I want someplace of refuge where some damned machine can't track me down. Now Sterling wants to equip someone else with an enforced phone answering machine that I have to respond to. If it's important, they'll phone back. I don't have call forwarding, an answering machine or an auto-dialer, and I certainly don't want that on MY computer. Please don't take my electronic refuge away. Mark (2) -------------------------------------------------------------------- I would like to second the opinion of Dr.
Abigail Ann Young where she stresses the loss of playfulness in the Humanist discussion group. Whatever became of the one-line messages from Sebastian Rahtz that used to brighten up our lives? Is he exercising self-discipline under the impression that it all has to be serious business from now on? I admit that the number of messages was becoming overwhelming at one stage; on the other hand I feel that we are missing something now. If the Humanist discussion group has to be devoted to business-like messages then it will probably soon be restricted to members from the main stream of the discussion. The term 'survival of the fittest' was brought up in the discussion before the changes were made. I think that the term is still relevant, only the environment has changed and so has the type of 'fitness' needed for survival. Hans Joergen Marker. From: Willard McCarty; mbb@jessica.Stanford.EDU; Roberta Russell Subject: Database/textbase software (175): The Great Text Search Engine: why hasn't it been done?; Query: VAX/VMS database software; Specifications for a useful text search engine Date: Fri, 18 Mar 88 19:41:50 EST X-Humanist: Vol. 1 Num. 997 (997) (1) Date: Fri, 18 Mar 88 08:52:39 -0800 (24 lines) (2) Date: Fri, 18 Mar 88 11:03 EST (8 lines) (3) Date: Fri, 18 Mar 88 10:13:29 -0800 (127 lines) (1) -------------------------------------------------------------------- I've been going over the notes that appeared recently on HUMANIST regarding the text search engine. As I read through Jim Coombs' and Robin Cover's notes, I wondered if what stands in the way of the development of such a system is logistical rather than technological in nature.
That is: if a programmer and a humanist -- both competent! -- worked together, is there any technological barrier to their developing a search engine? It seems to me that the algorithms and hardware already exist. If so, then what stands in the way of the development of the search engine includes items such as lack of funding, lack of recognition, etc. If this is indeed the case, then it seems rather silly, doesn't it? Malcolm Brown Stanford (2) -------------------------------------------------------------------- Can someone recommend a text-oriented database program for VAX/VMS machines? One that can generate bibliographic-style reports? We are presently running a dinosaur version of REF-11 (2.3) that eats up an inordinate amount of disk space. (Please respond to prussell@oberlin) (3) -------------------------------------------------------------------- It is with some "fear and trembling" that I offer this for commentary on HUMANIST. I was intrigued by Robin Cover's note that kicked off a brief discussion of a text search engine that would do the kind of things that humanist scholars would need. I have been asked here to submit just such a specification, a kind of manifesto of the needs of the humanist scholar. Accordingly, I offer the following draft and invite suggestions, criticisms, additions --- yea, even flames. Please take note of the following: > if some of this sounds like I'm plagiarizing Robin Cover, it's only because I am. I thought his ideas were good ones. This document, whatever its eventual form, will not be for publication, but rather for circulation as a memo. > the brief section on the "concept of the text" is not by any means meant to be theoretically adequate. In this document, it merely attempts to point out aspects of texts that might not be apparent to programmers. > This is an initial draft, and I do not expect it to be exhaustive by any means.
thanks Malcolm Brown, Stanford - - - - - - - - - - - - These notes attempt to specify the capabilities of a text retrieval or "search engine" that is required for serious research by humanist scholars. The specification of such a search engine comprises half of what would be the ideal academic text research tool. The second half of such a tool would be a robust hypertext system. Before listing the characteristics of the engine itself, it is important to review just what a text is. It is in light of such a concept of the text that the requirements for a search engine make sense. The concept of the text A text has empirical and non-empirical aspects. The former includes all the letters, spaces, punctuation and other marks that comprise the text. But there are also a great many non-empirical aspects of a text, the denotations and connotations. A text is an ordered hierarchy of objects. Texts are invariably subdivided into units such as chapters, pages, verses, paragraphs, sections, etc. Moreover, it is possible to apply more than one hierarchical scheme to the same text. One example would be chronology ("the writings of K from 1797-1799" and "the writings of K from 1800-1802"); another example would be the parsing of the text into grammatical units (subject, verb, object, singular, plural, inflected, etc.) Texts can be classified according to types, such as fiction, correspondence, essays, and so forth. As with its hierarchies, it is quite possible that a text can be labeled in more than one way. Different type classifications generally result in different hierarchical structures. The concepts of hierarchy and type are examples of non-empirical aspects of the text. Additional examples would include semantic contexts and etymological and historical references. In order to represent them (and hence make them "visible" to something as empirically-minded as a computer) a system of annotations can be employed.
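[Editorial illustration: the "ordered hierarchy of objects" with word-level annotations described above can be sketched in a few lines of a modern scripting language. Everything here -- the class names, the `find` helper, the sample text -- is hypothetical, invented for illustration; it is not part of any system discussed on HUMANIST.]

```python
# A minimal sketch of a text as an ordered hierarchy
# (e.g. chapter -> paragraph -> word), with word-level annotations.
# All names are illustrative assumptions, not an actual search engine.

from dataclasses import dataclass, field

@dataclass
class Word:
    form: str                                        # the empirical string
    annotations: dict = field(default_factory=dict)  # e.g. lemma, gloss

@dataclass
class Unit:
    kind: str        # hierarchical level, e.g. "chapter" or "paragraph"
    children: list   # ordered sub-units and/or Words

def words_in(unit):
    """Yield every Word beneath a unit, in document order."""
    for child in unit.children:
        if isinstance(child, Word):
            yield child
        else:
            yield from words_in(child)

def find(unit, form=None, **annot):
    """Retrieve words matching a surface form and/or annotations,
    scoped to the given hierarchical unit."""
    hits = []
    for w in words_in(unit):
        if form is not None and w.form != form:
            continue
        if all(w.annotations.get(k) == v for k, v in annot.items()):
            hits.append(w)
    return hits

# A tiny two-paragraph "text":
text = Unit("chapter", [
    Unit("paragraph", [Word("loves", {"lemma": "love"}),
                       Word("song", {"lemma": "song"})]),
    Unit("paragraph", [Word("loved", {"lemma": "love"})]),
])

print(len(find(text, lemma="love")))               # both inflected forms
print(len(find(text.children[1], lemma="love")))   # scoped to paragraph 2
```

Because a search can be handed any node of the hierarchy, scoping a query to a chapter, verse, or paragraph is the same operation as searching the whole text, and annotation-based retrieval (by lemma rather than surface form) falls out of the same loop.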
The capabilities of the search engine The search engine should support the full range of Boolean operators. The search engine must be capable of searching large bodies of text. The search engine should be able to produce concordances and be able to chart the distribution of words over the text. The search argument should permit nesting of Boolean search expressions. Search requests can be saved to disk and later recalled, either for repetition or for inclusion in more complex search requests. Full pattern matching should be supported (as with the UNIX grep). The engine must be capable of viewing and searching its text in terms of its ordered hierarchy. In addition to the capacity to search for arbitrary strings, the scope of such searches should be delimited by hierarchical units. The delimiters of "words" -- the basic text unit -- should be user-definable. The database structure must be capable of supporting annotations or assigned attributes at the word level and, ideally, at any higher textual level appropriate to the given document. Word-level annotations would account for things such as assignment of lemmas, gloss, syntactic function, etc. Such annotations must be accessible to the search engine. (this from Robin Cover) Examples of search requests that should be supported by the engine: retrieve all occurrences of "x" retrieve all occurrences of "x" and "y" but not "z" retrieve all verses in which "x" but not "y" occur retrieve all verses written in 1797 in which "x" but not "y" occur Malcolm Brown, AIR/SyD Stanford University mbb@jessica.stanford.edu From: Willard McCarty; D106GFS@UTARLVM1 Subject: Biographies (41) Date: Sun, 20 Mar 88 12:24:50 EST; Fri, 18 Mar 88 19:41:39 CST X-Humanist: Vol. 1 Num. 998 (998) ------------------------- As you are all aware, we have an excellent resource in the compiled biographies available from the file server. However, so far we have not established a standard for format or content.
I have written a proposal for a standard, which uses simple descriptive markup tags for the various parts of the biography. Specifically, I have tags for name, address, phone, institution, e-mail address, biography text proper, and new paragraph. This seems innocuous yet adequate to me; possible additions might be hardware used, and a list of special-interest keywords (the latter of course being more work). I have also written (a) a program which extracts several of these fields from the biographies on file and converts the files into the proposed format; and (b) a HyperCard stack which can import the tagged files, building a stack with the usual sorting, browsing, and retrieval functions. I plan to add an "Export" feature, so that the data can be maintained in HyperCard, but written out in SGML at will. I am willing to carry out the entire conversion, though I don't *promise* that addresses embedded in the midst of regular biography text will be noticed and moved to the 'address' field (such information would not be deleted, merely left in the text as is), etc. My questions are (please reply directly to D106GFS at UTARLVM): 1) Who wants a copy of the proposal to read and comment on? 2) Who wants a (beta) copy of the HC stack to try? 3) How many people might find either the stack, or at least the tagged biographies, useful? Thanks for your interest, Steven J. DeRose From: Willard McCarty; tektronix!reed!johnh@uunet.UU.NET (John B. Haviland); MCCARTY@UTOREPAS (Willard McCarty) Subject: Icon (55): ICON source; Information on Icon Date: Sun, 20 Mar 88 12:43:18 EST X-Humanist: Vol. 1 Num.
999 (999) (1) Date: Sat, 19 Mar 88 20:46:12 PST (9 lines) (2) Date: Sun, 20 Mar 1988 12:36:00 EST (30 lines) (1) -------------------------------------------------------------------- Source for various versions of ICON -- in answer to my own earlier query -- can be had via ftp from arizona.edu. The cost for disks and documentation on version 7 for MSDOS I am told is $20. E-mail queries should be directed to icon-project@arizona.edu. (2) -------------------------------------------------------------------- [The following I've extracted from my file on Icon, assembled for an entry in The Humanities Computing Yearbook (forthcoming, O.U.P.).] Developed by Ralph Griswold, Icon Project, Dept. of Computer Science, Gould-Simpson Building, The University of Arizona, Tucson, AZ 85721; 602-621-6613; Network addresses: icon-project@arizona.EDU, {ihnp4, noao, allegra}!arizona!icon-project. Public domain, distributed by vendor; cost: $15-$25, depending on computer system and format; source code available at slightly higher prices; registered owners receive a newsletter; technical support provided via electronic mail and an electronic bulletin board; all written correspondence answered. Implementations for most UNIX-based systems, VAX/VMS, MS-DOS, Amiga, Atari ST, and Macintosh; host system requirements vary with the target computer (e.g., MS-DOS systems require at least 256k bytes of RAM). From: Willard McCarty; tektronix!reed!johnh@uunet.UU.NET (John B. Haviland) Subject: Interlinear translation programs (48) Date: Sun, 20 Mar 88 12:46:02 EST; Sat, 19 Mar 88 20:42:37 PST X-Humanist: Vol. 1 Num. 1000 (1000) ------------------------- As some of you will know, I also have written some programs, in a package called TRANSC(ript), which are designed for formatting and glossing transcript material, especially conversational transcripts where synchrony is shown by overlap.
The programs (originally written in SIMULA) are something of a hodgepodge, "maintained" in a diffident and sloppy way for my own needs, mostly now written in C, currently available from me for CP/M and MSDOS (the only version I actively hack at). Module 1, SCAN, produces the formatted conversational transcript from a relatively simple input format (which was meant to be easy to type); module 2, GLOSS, attaches morpheme-by-morpheme glosses (properly aligned, word-by-word) and free translations to each line, updating (and using where possible) a running dictionary as it goes along; module 3, MERGE, gloms all the resulting files together into various sorts of more-or-less readable text. I have recently emancipated the GLOSS module from its previous dependence on SCAN, so it can be used for ordinary texts as well as for conversational material. I have also made MERGE more forgiving. I don't think these programs are as flexible as the SIL IT programs, but then again I think they are the only programs around that handle the overlap/synchrony problem for conversational transcripts. The running dictionary is a useful side benefit, made even more useful if you are handy with your editor. (If there are any Humanists out there who work on them, I have fairly sizeable working dictionaries, produced by these programs, for the two languages Tzotzil and Guugu Yimidhirr.) I would be glad to offer more details, although I am reluctant to get into the disk-copying business unless people have urgent needs for this kind of software. From: Willard McCarty; "Tom Benson 814-238-5277" Subject: British anti-classical education? (27) Date: Sun, 20 Mar 88 12:50:44 EST; Sat, 19 Mar 88 17:20 EST X-Humanist: Vol. 1 Num. 1001 (1001) ------------------------- In his review of I. F. Stone's THE TRIAL OF SOCRATES (NEW YORK REVIEW OF BOOKS, 31 March 1988, p. 18), M. F.
Burnyeat writes: As I write this review, legislation is passing through the British Parliament that gives to Her Majesty's secretary of state for education power to control the research and teaching of each individual university teacher in the country. An imposed curriculum will determine the content of the history taught in the state schools and will effectively exclude from their timetable all study of the classical languages. Is this true? To what legislation does Professor Burnyeat refer? Tom Benson Penn State University t3b@psuvm (bitnet) From: Willard McCarty; CMI011@IBM.SOUTHAMPTON.AC.UK; Randall Jones; Terrence Erdt ERDT@VUVAXCOM Subject: Survey; announcement; query (119): Survey of mailers (chif chaff); MLA session on hypertext and literature; TESTING OCR DEVICES Date: Mon, 21 Mar 88 17:45:53 EST X-Humanist: Vol. 1 Num. 1002 (1002) (1) Date: Mon, 21 Mar 88 22:02:28 GMT (33 lines) (2) Date: Mon, 21 Mar 1988 15:17 MST (19 lines) (3) Date: Sun, 20-MAR-1988 15:53 EST (45 lines) (1) -------------------------------------------------------------------- This man Marker is a real hero! Someone ASKS for my 'wit'? Actually our mail system (outgoing) was broken... Anyway, since all's quiet on the humanist front (rumblings of text engine guns in the distance), can I ask HUMANISTs to help me with a quick survey? I am interested in mailing software (it's a sort of research interest in this department), and I'd like some hard facts on a sort of 'homing pigeon' experiment. I am interested a) in what mailer (and thence operating system) people use, and b) the approximate timings through nodes. I'd be very grateful if HUMANISTs could all send me a piece of mail which contains simply two lines with the name of their mail system (if known) and the name of their operating system.
I can extract the route from the mail header, and the date sent. No, this is not a silly 'postcard' request, it's a serious interest in communication. My address is NOT as shown above (as I said, the mailer's broken) but is spqr@uk.ac.soton.cm thanks sebastian PS two bulls at the bullfight, looking at bull #3 who's first into the ring. the crowd roar, bull #3 is confused and bewildered. "the cape, Larry, go for the cape!" bellows one of the watching bulls... but that's Gary Larson, not me (2) -------------------------------------------------------------------- ACH (Association for Computers and the Humanities) is co-sponsoring a session at the 1988 Modern Language Association Annual Meeting in New Orleans in December on the topic of hypermedia and literature. If anyone is interested in participating in the session or can make a recommendation on behalf of someone else, please send a BITNET message to Randall Jones, JONES@BYUADMIN. Include in the note a brief description of the subject you (or the person you are recommending) would like to speak about. Participants in the session must be members of the MLA and must register for the conference. Randall Jones Humanities Research Center Brigham Young University (3) -------------------------------------------------------------------- Standards for Testing Optical Character Recognition (OCR) Systems During the next few months, I shall be testing some new OCR equipment -- the Kurzweil Discoverer 7320, the TransImage 1000, and the Saba Handscan (I've already examined the Palantir CDP). The TransImage is said to be capable of reading typeset matter as well as standard typewriter fonts, while the Handscan is confined to typewriter fonts and printer output. The Discoverer, which sells for between ten and twelve thousand dollars, supposedly will scan typeset English, French, Italian, Swedish, and German.
It seems to me that the capacity to convert typeset materials to machine-readable form eventually will be an important element in a scholar's workstation, and it would make for efficiency if we could devise a standard test to evaluate new OCR products, particularly since commercial publications, such as PC Magazine, tend to focus only upon business applications, where the need to scan typeset matter isn't as critical. Some of the participants in the HUMANIST forum are quite experienced users of OCR devices, particularly Kurzweil products, and therefore I would like to pose the question of whether or not a standard test of OCR equipment is feasible, first in terms of the media submitted--such as microform, newsprint, line printer output--and, secondly, in terms of the languages represented by the test. If a corpus of the kinds of test materials were agreed upon, would it be possible to standardize the administration and grading of the test? Would it be feasible to establish, for instance, a collection of test documents consisting of photocopies of the agreed-upon materials for use by anyone wanting to test the capabilities of an OCR device? Terrence Erdt, Ph.D. Technical Review Editor Computers and the Humanities Graduate Department of Library Science Villanova University Villanova PA 19085 ERDT@VUVAXCOM.BITNET (215) 645-4670 From: Willard McCarty "John J Hughes" Subject: Desiderata of retrieval software (285)Text Retrieval Desiderata Date: Mon, 21 Mar 88 17:50:14 ESTSun, 20 Mar 88 17:48:12 PST X-Humanist: Vol. 1 Num. 1003 (1003) ------------------------- I would like to augment Robin Cover's and Malcolm Brown's thoughts on text retrieval desiderata by offering the following "Blue Sky Wish List" that outlines the sort of features I would like to see in an ``ideal'' text retrieval program. Some of these features were outlined on pages 247-49, 266-67, and 345-46 of _Bits, Bytes, & Biblical Studies_.
Many of the features are taken from a discussion in a forthcoming review of some MS-DOS and Macintosh text retrieval programs that will appear in the next issue of the _Bits & Bytes Review_. Some of the features come from Robin's and Malcolm's lists. Although concording and hypertext programs are types of text retrieval programs (TRPs), I will use the latter term to refer to programs whose primary function is locating text, rather than concording it or linking sections together in a hypertext fashion. Generally speaking, I believe that an ideal TRP should be as fast and flexible as possible in its abilities to (1) locate highly defined (``disambiguated'') matches, (2) display them on-screen, (3) move from hits to the texts in which they occur, (4) sort the hits according to default or user-defined templates, (5) copy matches in user-defined contexts to a file or send them to a printer, and (6) format that output in user-defined ways. Specifically, an ideal TRP might support the following features. (Some of these features pertain to TRPs that work with indexed texts. I'm not going to discuss sorting options.) SEARCHING/INDEXING FEATURES. 
An ``ideal'' TRP should support (1) the Boolean operators AND, OR, XOR, and NOT, (2) parentheses, e.g., "(Holmes AND Watson) NOT Moran" (3) nested Boolean operators, e.g., "((Holmes AND (Watson OR Hudson)) OR Mycroft) NOT Moran" (4) user-specifiable default Boolean operators, e.g., OR, AND (5) positional operators (for positionally-dependent, or sequence-specific, searches), e.g., "Holmes W/5 Watson"--Holmes followed within 5 words by Watson-- (6) proximity operators (nearness), e.g., "Holmes N/5 Watson"--Holmes and Watson within 5 words of one another in either order-- (7) metric operators, e.g., "5" in the two previous examples is a metric operator (8) comparative operators, e.g., "SHOW Dates=> 01/08/75" (9) range operators, e.g., "Holmes 1885:1887" (10) mixing Boolean, metric, proximity, positional, range, and comparative operators in a single search construction, (11) user-defined and user-specified text units on which operators operate (i.e., restricting searches by any user-defined or specified text unit, e.g., character, word, line, verse, sentence, paragraph), e.g., "(Holmes AND Watson)/S"--Holmes and Watson in the same sentence (S)--"(Holmes W/3 Watson)/S"--Holmes followed within 3 sentences (S) by Watson-- (12) using positive and negative metrical operators when restricting searches by user-defined or user-specified text units, e.g., "(Holmes AND Watson)/2L"--Holmes and Watson within 2 lines of one another-- (13) long, multiple search-term constructions with expressed operators, e.g., "((Holmes AND (Watson OR Hudson)) OR Mycroft) NOT Moran" (14) initial, medial, and terminal truncation of search terms, e.g., "*olmes", "H*es", "Hol*" (15) single and global wild-card characters, e.g., ? and * (16) mixed wild-card searches, e.g., "?olm* AND ?at*n" (17) regular-expression pattern matching for word and phrase fragments, (18) wild-card pattern matching for word and phrase fragments, e.g., "?olm* AND ?at*n" (19) exact-phrase searches, e.g., "Mr.
Sherlock Holmes" (20) case-sensitive and case-insensitive searches, e.g., "Holmes" and "holmes" (21) punctuation-sensitive and punctuation-insensitive searches, (22) date-delimited searches, (23) user-specifiable, noncontiguous search ranges, e.g., "Matthew-John, Revelation" or "Romans 1:-4:10; Hebrews 2:3-10:2" (24) synonym-group searches, i.e., bilateral equivalence between a thesaurus entry and its synonyms, as Concept Finder allows (25) search macros, as ZyIndex allows (26) searching sets of retrieved records, i.e., searching the results of searches, as DIALOG allows (27) naming, saving, recalling, editing, and reusing search constructions, as Concept Finder allows (28) term weighting, i.e., user-assigned weighting of search terms, as SIRE and Personal Librarian allow (29) relevance ranking of hits, i.e., algorithm-ranking of hits from most probably relevant to least probably relevant, as SIRE and Personal Librarian allow (30) root matching, i.e., allows searches by root to locate all or user-specified inflected forms, as SIRE and Personal Librarian allow (31) automatic term associations, i.e., algorithm-suggested set of search terms based on associations between the original search terms and the data base, e.g., in response to a search for "food," the program suggests "vegetables, meat, dairy products, grains" as additional search terms, as SIRE and Personal Librarian allow (32) short-context disambiguation with output sorted by increasing or decreasing frequency or alphabetically, i.e., displaying hits in a context of 1 or 2 words so that users may specify contexts to be included and excluded in the search construction, as the IRCOL search software allows (33) counting as matches only those constructions that meet a user-specified number of conditions, some of which may be absolute conditions e.g., 3 out of 5, as the IRCOL search software allows. 
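[By way of illustration only: the unit-restricted Boolean searches in items 1-12 above reduce to set algebra over an inverted index. If each word is mapped to the set of text units (here, sentence numbers) in which it occurs, then AND, OR, and NOT become intersection, union, and difference, and a proximity operator such as N/5 becomes a distance test over word positions. The following Python sketch is invented for this purpose and describes no existing product's design or API.]

```python
# Invented, minimal sketch of an inverted index supporting Boolean and
# proximity searches; not a model of any product named in this list.
from collections import defaultdict

def index_by_sentence(sentences):
    """Map each lowercased word to the set of sentence numbers containing it."""
    idx = defaultdict(set)
    for s_no, sentence in enumerate(sentences):
        for word in sentence.lower().split():
            idx[word].add(s_no)
    return idx

sentences = [
    "holmes and watson left baker street",
    "moran followed holmes",
    "watson wrote to mycroft",
]
idx = index_by_sentence(sentences)

# "(Holmes AND Watson)/S" -- both terms in the same sentence: intersection
both = idx["holmes"] & idx["watson"]          # {0}
# "Holmes OR Mycroft" -- union
either = idx["holmes"] | idx["mycroft"]       # {0, 1, 2}
# "Holmes NOT Moran" -- set difference
without = idx["holmes"] - idx["moran"]        # {0}

# "Holmes N/5 Watson" -- proximity over word positions, either order
words = " ".join(sentences).split()
pos = defaultdict(set)
for i, w in enumerate(words):
    pos[w].add(i)
near = {p for p in pos["holmes"] for q in pos["watson"] if abs(p - q) <= 5}
```

The same set operations also give the later "index manipulation" desiderata for free: combining, intersecting, and differencing whole indices are just union, intersection, and difference over the stored sets.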
Additionally, an ``ideal'' TRP that creates indices should support (34) multiple, simultaneous index creation, as DIALOG does (35) large indices, (36) numbered index sets that can be used in search constructions, e.g., "((S1 AND S2) OR S14) NOT Watson"--where S1, S2, and S14 represent sets of hits created by previous search constructions (DIALOG users will understand!) (37) statistical information about word usage as search terms are entered, i.e., display word-frequency information as search terms are entered in the construction before the search is run (38) search term selection from alphabetized word-frequency lists, i.e., as WordCruncher allows (39) search term selection from generic and from user-defined data base thesauri, i.e., as ZyIndex allows (40) automatic saving of search constructions with indices, i.e., as ZyIndex allows (41) user comments--header information--in indices, i.e., as ZyIndex allows (42) search-session histories, i.e., as DIALOG provides (43) appending new text to an existing index without having to reindex the entire file, as Concept Finder allows. For special scholarly needs, an ``ideal'' TRP should support (44) nonroman alphabets, (45) user-defined indexing sequences, as WordCruncher allows (46) texts that are tagged and searchable at the morphological, syntactical, and semantic levels, (47) searchable annotations on any element or tag in the data base, and (48) lemmatized data bases. DISPLAY/MOVE/PRINT FEATURES.
An ``ideal'' TRP should allow users to (1) display the search construction--terms, operators, and range--when searching and when displaying terms, (2) see the following information dynamically updated on-screen: name of term being searched for, logical operation being performed, name of file or record unit being searched, location of hits by file and line, cumulative number of hits per file or record unit, cumulative time required to perform function, and time remaining to complete entire search operation, (3) see matches highlighted in inverse video, (4) display the following information when outputting hits in any format to screen, printer, or disk file: the search strategy, the file name, file creation date, and line number in which the hit is located, the total number of hits found, the total number of hits in the file, record, or text unit being viewed, and the number of the hit being viewed, (5) display hits in the context of any specified number of user-defined or specified text units (e.g., 3 words, 4 lines, 2 sentences, 2 paragraphs, 5 verses) with either a balanced or an unbalanced number of units before and after the match, (6) jump directly from ``hits'' to a default or user-specified context--including the full-text context--and back, (7) jump directly from display format to display format, (8) jump from one hit to any other hit, (9) page bidirectionally through hits a user-specified number at a time, (10) page bidirectionally through the full text of a document, (11) go to any user-specified line in a document, (12) go immediately to the beginning or end of a document, (13) go immediately from any level marker or tag to any other level marker or tag, (14) page bidirectionally from level marker or tag to same-type of level marker or tag, (15) display and print ``hits'' in a user-specifiable context of N-number of characters, words, lines, sentences, paragraphs, or user-defined text unit, (16) display and print hits in default and user-defined formats, 
(17) display and print word-frequency information, (18) display and print distribution-frequency information, and (19) automatically and conditionally write matches in user-specified contexts and formats to disk files or send them to the printer. Additionally, TRPs that create indices should allow users to (20) display hits nonsequentially in a user-specified order. INDEX MANIPULATION. An ``ideal'' TRP that creates indices should allow users to (1) combine two or more indices (the equivalent of a Boolean OR operation), (2) intersect two or more indices (the equivalent of a Boolean AND operation), (3) show the difference between two indices (the equivalent of a Boolean NOT operation), (4) edit indices by adding or eliminating verses. ***** Of course, such a program does not exist! I believe that a program such as I have roughly outlined would best be designed by a group that included one or more of each of the following sorts of people: (1) humanists, (2) linguists (of course, they're humanists too), (3) computer scientists, (4) information retrieval specialists, (5) programmers with academic backgrounds, and (6) persons with experience in designing interfaces for commercial software. From: Willard McCarty lang@PRC.Unisys.COMItamar Even-Zohar lang@PRC.Unisys.COMItamar Even-Zohar Subject: Nota Bene (56)Wordprocessing, mostly multilingual (102)Nota Bene 3.0Wordprocessing, mostly multilingual (102)Nota Bene 3.0 Date: Mon, 21 Mar 88 17:57:06 ESTMon, 21 Mar 88 07:58:30 EST17 March 1988 X-Humanist: Vol. 1 Num. 1004 (1004) (1) Date: Mon, 21 Mar 88 07:58:30 EST (25 lines) (2) Date: 17 March 1988 (13 lines) (1) -------------------------------------------------------------------- What is the current status of Nota Bene? About three years ago, I reviewed the then-current version for Jack Abercrombie (I forget the version number, but it was a very early one, and my recollection was that it didn't then offer Hebrew, but did offer Greek.
I don't recall if either Hebrew or Arabic was even planned.) While I'm on the subject of Nota Bene, I'd be interested in hearing from any HUMANISTS who currently use NB to find out about their opinions of the package. A friend of mine in the Classical Studies department at Penn is considering buying NB, and I told him I'd try to get some feedback from NB users. Since this might not be of sufficiently general interest, I'd suggest that any opinions about NB be mailed directly to ME (lang@prc.unisys.com or lang@linc.cis.upenn.edu) and if I get enough interesting opinions, I'll send the lot of them en masse to HUMANIST. Many thanks for any opinions. (2) -------------------------------------------------------------------- I received Nota Bene 3.0 a few days ago and have dedicated some time now to studying, investigating and customizing it. In this version, Hebrew works all right the way I customized the Beta version before, but we still expect the more advanced Nota Bene version. So I will NOT refer to any specific problems with Hebrew in this document. [An extensive review of NB 3.0 follows in the version of this document currently available on the file-server as NOTABENE REVIEW. -- W.M.] From: Willard McCarty tektronix!reed!johnh@uunet.UU.NET (John B. Haviland) Subject: Biographies (18)biographies Date: Mon, 21 Mar 88 18:01:01 ESTSun, 20 Mar 88 12:32:42 PST X-Humanist: Vol. 1 Num. 1005 (1005) ------------------------- I, for one, applaud Steven DeRose's initiative in making use of the biographies which are what caught my eye in the first place about HUMANIST. Why not circulate the proposed standard, if only because it might inspire some of us to revise what was (for me at least) an uninformed shot-in-the-dark at the terminal keyboard when I first made application to Humanist?
From: Willard McCarty Mark Olsen JACKA@PENNDRLSD106GFS@UTARLVM1Toby Paff cbf%faulhaber.Berkeley.EDU@jade.berkeley.edu (Charles Faulhaber)Mark Olsen JACKA@PENNDRLSD106GFS@UTARLVM1Toby Paff cbf%faulhaber.Berkeley.EDU@jade.berkeley.edu (Charles Faulhaber) Subject: Queries, replies, & an announcement (93)Drivers for Hercules graphics cardLIST OF TEXTS AND SOFTWARE AVAILABLE FROM CCATQuery on CAI progs for Biblical languagesRe: Multilingual wordprocessing (45)Re: Interlinear translation programsDrivers for Hercules graphics cardLIST OF TEXTS AND SOFTWARE AVAILABLE FROM CCATQuery on CAI progs for Biblical languagesRe: Multilingual wordprocessing (45)Re: Interlinear translation programs (48) Date: Tue, 22 Mar 88 23:17:09 ESTTue, 22 Mar 88 20:24:19 MSTTuesday, 22 March 1988 1816-ESTTue, 22 Mar 88 13:58:55 CSTTue, 22 Mar 88 09:38:20 ESTMon, 21 Mar 88 12:02:03 PST X-Humanist: Vol. 1 Num. 1006 (1006) (1) Date: Tue, 22 Mar 88 20:24:19 MST (18 lines) (2) Date: Tuesday, 22 March 1988 1816-EST (6 lines) (3) Date: Tue, 22 Mar 88 13:58:55 CST (15 lines) (4) Date: Tue, 22 Mar 88 09:38:20 EST (13 lines) (5) Date: Mon, 21 Mar 88 12:02:03 PST (8 lines) (1) -------------------------------------------------------------------- I'm reviewing a program called RS/1 and it requires a couple of drivers for the Hercules monochrome graphics card that I've never heard of. The system requires the HGC FULL command (which I have) followed by INT10 (COM) and HARDCOPY (COM) which seem to act as extensions to DOS. Also involved in this is GRAPH_X which is a screen dump utility (I think, the documentation is vague here) which is called by HARDCOPY.COM. I'm stumped and wonder if anyone has run across these routines. Any information would be greatly appreciated. I am, obviously, looking for copies of these utilities which I am assuming were written by HERCULES for their card. All these years with my HERC card and I find out that I haven't even been using it right.
Maybe I'll have to convince my wife that we really NEED EGA and color. Thanks, Mark (2) -------------------------------------------------------------------- LIST OF TEXTS AND SOFTWARE AVAILABLE FROM CCAT [now available on HUMANIST's file-server s.v. CCAT HOLDINGS] (3) -------------------------------------------------------------------- I have been asked to inquire about CAI programs for Biblical languages for a colleague. Could people who are aware of such relieve my ignorance? I am aware of George Kong's "MemCards" system, which he presented at SBL, and the U Minn Basic Word Study programs, mentioned in Wheels for the Mind, but haven't looked into the area any further. Please reply direct to D106GFS @ UTARLVM1; I'll summarize to HUMANIST if there is sufficient interest. Thanks! Steve DeRose, Brown Univ. and SIL (4) -------------------------------------------------------------------- I have seen the Nota Bene supplement for Hebrew... Mark Cohen at Princeton has a copy. It is strictly a beta test version, and we have developed a program to convert our ksemitic files into their version of Hebrew, the code points for which, alas, conflict with all the other standards. Anyone interested can contact me. Toby Paff Princeton University C.I.T. (5) -------------------------------------------------------------------- Ken Whistler (Dept. of Linguistics, UC Berkeley) is selling a concordance/text analysis package which is specifically designed to handle linguistic corpora with interlinear translations. From: Willard McCarty "Michael Sperberg-McQueen" Subject: On desiderata for text retrieval (116) Date: Tue, 22 Mar 88 23:21:21 EST22 March 1988 10:40:55 CST X-Humanist: Vol. 1 Num. 1007 (1007) ------------------------------------------------------------------------ The desiderata listed by Robin Cover, Malcolm Brown and John J.
Hughes for text search and retrieval programs have all given me great pleasure (imagining programs with such features), some pain (reflecting that such programs don't exist), and some wistful daydreams (ah, if one only had the time to drop everything else and develop such a program, or set of programs . . . ). I hope the software developers among us are listening! And on the off chance that they are, here are my own two cents' worth: An ideal text retrieval program should understand and help examine textual variation, whether from successive authorial drafts, successive editions of a printed work, divergent manuscript transmission of a text, or revision of a text (e.g. in legal codes or commentaries). (These are technically and correctly known as 'text states', but I will refer to them as 'manuscripts' because that's probably easier to follow.) Understanding and handling manuscripts correctly means: 1 Each of the kinds of searches noted by Cover, Brown, or Hughes should be available - for specified manuscripts ("In ms C, find all occurrences of 'Holmes' within one sentence of 'Watson' ...") - for all manuscripts ("In *any* ms, find ...") - for some subset of the manuscripts ("find this, but only in mss A, B, or C ...") Note that the distance between two words is no longer absolute, but a function of the manuscript in which the distance is measured. 2 In browsing, the user should be able to select any manuscript as the basis for the display ("show me this passage in ms. A! ... now in ms. B!") 3 In displaying hits, the program must obviously display the version of the text that has the hit. (If ms. A has 'Holmes' and 'Watson' together in this sentence, but ms. B doesn't, the display must be based on A, not B.) Where more than one ms. has the same wording for a passage, that passage should be listed as a hit only once, and displayed according to the 'preferred' manuscript. The user, of course, gets to say the order in which mss are to be preferred. 
And should be able to switch, within the 'hit' window, to any other ms. 4 Both for browsing and for display of hits, the user should be able to request and see an apparatus criticus for the passage shown on the screen. When the base manuscript changes ("show me this passage in manuscript K!") the apparatus, of course, must change too. When the text window scrolls, of course the apparatus scrolls too. 5 The user should be able to specify dynamically which manuscripts should be included in the apparatus. Extra credit: the user can mark specific variants as interesting or not interesting (or with a variety of flags: lexical, orthographic, syntactic, metrical, ...) and have variants included or excluded from the visible apparatus on that basis. 6 In browsing, the user should be able to open a parallel window with a different base manuscript, so that ms. A can be seen in one window and ms. B in the other. Apparatus should be available in each window. When one scrolls, the other should scroll, too. 7 In addition to the searches specified by Cover, Brown, and Hughes, the user should be able to search for manuscript variations of a specific 8 As a sideline, the program ought to be able to generate lists of variants and variations of the sort developed by Henri Quentin and used by W. W. Greg or Vinton Dearing, for analysis using other programs and eventually for the generation of stemmata. It would be nice if the program could also be given a stemma and from it generate the text determined by that stemma. (Indeterminate passages highlighted, with alternatives printed below each other, for the editor to choose between?) I should point out that some things of this kind can be done by the program TUSTEP, developed at Tuebingen by Wilhelm Ott; I don't know much in detail because I have not used the program or had the chance to read its documentation. Unless I am much mistaken, however, Tustep is principally batch-oriented, and I have been envisioning an interactive program. 
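[Again purely as illustration: points 1-3 above can be sketched naively by keeping each text state under its manuscript siglum and running every query per manuscript, optionally over a chosen subset of sigla. Note that the proximity test runs independently in each manuscript, so the distance between two words really is a function of the manuscript in which it is measured. The sigla, data, and function names in this Python sketch are all invented; no existing system is implied.]

```python
# Naive, invented sketch of manuscript-restricted searching: each
# manuscript (text state) is searched independently, so word distances
# differ per manuscript.

manuscripts = {
    "A": "Holmes turned at once to Watson",
    "B": "Holmes turned and spoke quietly at last to Watson",
}

def within(ms_text, w1, w2, k):
    """True if w1 occurs within k words of w2 (either order) in this text state."""
    words = ms_text.lower().split()
    p1 = [i for i, w in enumerate(words) if w == w1]
    p2 = [i for i, w in enumerate(words) if w == w2]
    return any(abs(i - j) <= k for i in p1 for j in p2)

def search(query, mss=None):
    """Run a predicate over all manuscripts, or over a chosen subset of sigla."""
    chosen = manuscripts if mss is None else {s: manuscripts[s] for s in mss}
    return {siglum for siglum, text in chosen.items() if query(text)}

# "In *any* ms, find Holmes within 5 words of Watson":
# the distance is 5 words in ms A but 8 in ms B, so only A qualifies.
q = lambda t: within(t, "holmes", "watson", 5)
all_hits = search(q)             # {"A"}
only_a = search(q, mss=["A"])    # restrict the same query to ms A
```

A real implementation would of course index rather than rescan, and would need the collation machinery (apparatus, preferred-manuscript display, deduplication of shared readings) described above; the point here is only that per-manuscript restriction composes cleanly with any of the search operators already listed.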
Several programs have been mentioned already in this discussion as able to do at least some of the kinds of searches desired. Not mentioned yet, though (I think) is ARRAS, developed by John B. Smith and now sold by Conceptual Tools, of Chapel Hill, NC. While not yet the system of our dreams, ARRAS (which runs on IBM System/370 machines, under TSO or VM/CMS) will handle points 1-3, 5-7, 11-16, 19, 24, 38, 42, of John J. Hughes's list. Not bad at all, on the whole, and cheap by comparison with most other mainframe software. Smith is working on a micro (RT-class machine) version that is even more elaborate and slick. But I notice that all of this discussion of text-retrieval software remains, as Willard observed some months ago, within the world of the text itself: if not 'New Critical', then at least still 'text-immanent'. And I wonder: what desiderata would others add to these lists? Michael Sperberg-McQueen, University of Illinois at Chicago From: Willard McCarty SRRJ1@VAXA.YORK.AC.UK Subject: Computing for humanists at York, U.K. (94) Date: Tue, 22 Mar 88 23:27:02 EST22-MAR-1988 15:16:37 GMT X-Humanist: Vol. 1 Num. 1008 (1008) ------------------------------------------------------------------------ The following message was intended for Keith Whitelaw. I'm sorry to burden the whole of Humanist with my reply to his recent request [due to lack of adequate userid/postal address] , but I hope it may contribute to the fitful debate about the allocation of resources for Humanities computing, which still seems a vital issue to me. In reply to your recent request to Humanist for information about computing for arts students, I thought you might be interested to know about the way we have begun to introduce computing in the History department at York. As yet we have no money for hardware, no money for special staff, no money for software (we're trying), so we're not really comparable to places like Glasgow, but things can still be achieved. 
We do have a responsive, roomy new VAX/VMS cluster on campus with terminal classrooms to which we have free access, and we do have some terminals in our offices now too. We have started this year by introducing a revamped computing ancillary course for history students taught by history lecturers. [ A former ancillary taught years ago by Computing staff and not aimed especially at History students had withered away and finally died during the confusion of changing over to a new mainframe. ] The new course was initiated by history lecturers and is offered as a voluntary extra to students and is not an integral part of their degree. So far wee've been overwhelmed by its popularity. By the end of the first year we will have taught 60 (out of c. 270) of our students. Demand was greater, but this is the maximum number it was possible to accommodate. 1. course content Part 1 (4 hours) Introduction to VAX/VMS Introduction to Edit and Mail, and some use of remote systems. Part 2 (4 hours) Introduction to Datatrieve (data entry and retrieval system) Part 3 (4 hours) Introduction to WPS plus 2. assignments Part 1 To edit a text on the childhood of Henry V and MAIL it to tutor. Part 2 To retrieve information from existing bibliographical database in the tutor's area. To set up a database of the population of 19th century English towns in the student's own area using a single domain. To set up a bibliography database linked to a keyword database (using two linked domains) in student's area. Part 3 To enter and format a text, containing a table, which is at least 3 pages long. 3. methods of assessment A voluntary exam at the beginning of the following term, undertaken independently in students' own time testing skills in all three areas, again using historical materials. The student will be given a certificate ratified by the Board of Studies listing satisfactory/unsatisfactory performance in each of the three areas. 4. It is taught by History staff. 5. 
any other information that you think appropriate. The first time we taught the course we also included introductions to the SORT/MERGE and SEARCH facilities of DCL, and to OCP. But this was definitely trying to squeeze a quart into a pint pot and has now been dropped. Students who have taken the course continue to use the VAX and teach their friends, but we haven't yet measured this continued use. Next year we intend to expand on this by incorporating the use of IT into mainstream history courses, but it has been useful to gain this experience of teaching in a terminal room which was a new experience for all of us. Staff involved Dr Sarah Rees Jones, Dr Ted Royle, Dr Edward James, Dr John Wolffe. Advice from John Illingworth and Dr Rob Fletcher in the Computing Service. Hope this is of some help. Sarah Rees Jones. [srrj1@uk.ac.york.vaxa] From: Willard McCarty CMI011@IBM.SOUTHAMPTON.AC.UKComserve@RpicicgeCMI011@IBM.SOUTHAMPTON.AC.UKComserve@Rpicicge Subject: Mail survey; another e-mail service (78)mail surveyAbout Comservemail survey Date: Wed, 23 Mar 88 19:50:17 ESTWed, 23 Mar 88 10:03:59 GMTWed, 23 Mar 88 12:13 EDT X-Humanist: Vol. 1 Num. 1009 (1009) (1) Date: Wed, 23 Mar 88 10:03:59 GMT (16 lines) (2) Date: Wed, 23 Mar 88 12:13 EDT (45 lines) (1) -------------------------------------------------------------------- a) To all those who asked me a personal question: my outgoing mail is dead, so I'll wait to reply for a week or so until it recovers b) For those who have never mailed to the UK, it should be said that some mailers require you to say spqr@cm.soton.ac.uk not uk.ac.soton.cm It depends how sophisticated your software is! Mine likes the highest level name (such as uk) first, others last. 
Sebastian Rahtz PS could the notes about biographies be put on the file server (2) -------------------------------------------------------------------- - - - C O M S E R V E - - - Comserve is a service for professionals and students interested in human communication studies. Comserve is supported through the cooperation of the Center for Interactive Computer Graphics and the Department of Language, Literature, and Communication at Rensselaer Polytechnic Institute. Comserve's Principal Functions 1. Comserve is a "file server;" i.e., Comserve can send you copies of files -- computer programs and documents including bibliographies, instructional materials, announcements, research instruments, etc. -- from its extensive collection. 2. Comserve is a news service. Announcements of interest to users are distributed periodically in issues of Comserve's electronic news bulletin. 3. Comserve maintains a "white pages" or "user directory" service. 4. Comserve has a "Hotline" system that provides a method for communicating with others on topics of general interest in communication studies. 5. Comserve has a system for automatic distribution of announcements or survey forms in electronic format. If you have questions about Comserve or would like to submit information to be distributed by Comserve, contact Comserve's editorial staff at Bitnet address: SUPPORT@RPICICGE. A free hardcopy booklet named "Comserve User's Guide" can be obtained by sending a request to SUPPORT@RPICICGE. Be sure to include your "normal" (i.e., not your computer mail) address with your request. Comserve is supported by the Eastern Communication Association, the International Communication Association, and Rensselaer Polytechnic Institute From: Willard McCarty Bob Tannenbaum (RSTHC@CUNYVM) Subject: Teaching computing to humanists (50) Date: Wed, 23 Mar 88 21:26:51 ESTWed, 23 Mar 88 10:15 EST X-Humanist: Vol. 1 Num.
1010 (1010) ------------------------- The question by Keith Whitelam regarding information on courses designed to teach computing to humanists and the responses by Joe Rudman and Sarah Rees Jones have opened a subject that I feel is most important. I hope that others will share their experiences and suggestions via HUMANIST. At the Vassar Workshop on this subject in the summer of 1986, over 100 faculty members from many institutions in North America and Europe gathered to discuss their experiences in developing and teaching courses in computing to humanists. All who were actually teaching such a course brought materials such as syllabi and assignments to share. I believe a collection of these materials still exists somewhere in Nancy Ide's closet, because our original intention was to begin a "clearinghouse" for materials related to such courses. Unfortunately, we could not obtain the funding for the clearinghouse, so it remains a dream. We have produced the issue of CHum 21(4) to which Joe Rudman made reference. That issue is a direct result of the Vassar Workshop. It contains Bob Oakman's Keynote Address, Joe Rudman's excellent survey and bibliography, and articles by Nancy Ide and me about "What" we should teach (Nancy) and "How" we should teach it (me). The conference scheduled for 16-18 June 1988 at Oberlin College in Oberlin, Ohio is also concerned exclusively with teaching computers and the humanities courses. We will have a Keynote Address by Joe Rudman, four panels devoted to different aspects of the subject, and over 30 contributed papers by scholars from the United States and Canada who are teaching computing to students in all different branches of the humanities. The papers include teaching students in music, languages and literature, translation, history, and philosophy, among other subjects. It is my hope that the papers will appear in a formal proceedings, together with materials gathered at the Vassar Workshop and at this conference. 
I am currently working on realizing that hope. I invite all of you who are teaching in this field to share your experiences with the rest of us via HUMANIST and to join us at Oberlin in June. To be put on the conference mailing list, send a message to Dr. Roberta Russell at Oberlin (PRUSSELL@OBERLIN). Bob Tannenbaum (RSTHC@CUNYVM) From: Willard McCarty CMI011@IBM.SOUTHAMPTON.AC.UKComserve@RpicicgeCMI011@IBM.SOUTHAMPTON.AC.UKComserve@Rpicicge Subject: E-mail survey & an e-mail service (79)mail surveyAbout Comservemail survey Date: Thu, 24 Mar 88 16:29:16 ESTWed, 23 Mar 88 10:03:59 GMTWed, 23 Mar 88 12:13 EDT X-Humanist: Vol. 1 Num. 1011 (1011) [This mail was detained by ListServ for some completely obscure reason. My apologies on behalf of the software. --W.M.] (1) Date: Wed, 23 Mar 88 10:03:59 GMT (16 lines) (2) Date: Wed, 23 Mar 88 12:13 EDT (45 lines) (1) -------------------------------------------------------------------- a) To all those who asked me a personal question: my outgoing mail is dead, so I'll wait to reply for a week or so until it recovers b) For those who have never mailed to the UK, it should be said that some mailers require you to say spqr@cm.soton.ac.uk not uk.ac.soton.cm It depends how sophisticated your software is! Mine likes the highest level name (such as uk) first, others last. Sebastian Rahtz PS could the notes about biographies be put on the file server (2) -------------------------------------------------------------------- - - - C O M S E R V E - - - Comserve is a service for professionals and students interested in human communication studies. Comserve is supported through the cooperation of the Center for Interactive Computer Graphics and the Department of Language, Literature, and Communication at Rensselaer Polytechnic Institute. Comserve's Principal Functions 1.
Comserve is a "file server"; i.e., Comserve can send you copies of files -- computer programs and documents including bibliographies, instructional materials, announcements, research instruments, etc. -- from its extensive collection. 2. Comserve is a news service. Announcements of interest to users are distributed periodically in issues of Comserve's electronic news bulletin. 3. Comserve maintains a "white pages" or "user directory" service. 4. Comserve has a "Hotline" system that provides a method for communicating with others on topics of general interest in communication studies. 5. Comserve has a system for automatic distribution of announcements or survey forms in electronic format. If you have questions about Comserve or would like to submit information to be distributed by Comserve, contact Comserve's editorial staff at Bitnet address: SUPPORT@RPICICGE. A free hardcopy booklet named "Comserve User's Guide" can be obtained by sending a request to SUPPORT@RPICICGE. Be sure to include your "normal" (i.e., not your computer mail) address with your request. Comserve is supported by the Eastern Communication Association, the International Communication Association, and Rensselaer Polytechnic Institute. From: Willard McCarty Bob Tannenbaum (RSTHC@CUNYVM) Subject: Teaching computing to humanists (52) Date: Thu, 24 Mar 88 16:33:35 EST X-Humanist: Vol. 1 Num. 1012 (1012) [The following was also delayed by ListServ. -- W.M.] ------------------------- The question by Keith Whitelam regarding information on courses designed to teach computing to humanists and the responses by Joe Rudman and Sarah Rees Jones have opened a subject that I feel is most important. I hope that others will share their experiences and suggestions via HUMANIST.
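Sebastian Rahtz's note above about mailing to the UK reflects a real difference of the period: JANET-style mailers wrote domains big-endian (most significant label first, as in uk.ac.soton.cm), while the now-standard DNS order is little-endian (cm.soton.ac.uk). As a toy illustration only (real mail gateways of the era did far more than swap labels), the conversion amounts to reversing the dot-separated labels:

```python
def flip_domain_order(address: str) -> str:
    """Reverse the domain labels of a mail address.

    Turns big-endian JANET style (user@uk.ac.soton.cm) into
    little-endian DNS style (user@cm.soton.ac.uk), and vice versa.
    A toy sketch: real gateways also handled routing and aliases.
    """
    user, _, domain = address.partition("@")
    return user + "@" + ".".join(reversed(domain.split(".")))

# flip_domain_order("spqr@uk.ac.soton.cm") -> "spqr@cm.soton.ac.uk"
```

Since the operation is its own inverse, the same function converts in either direction, which is exactly why mailers of the two persuasions could misroute each other's addresses.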
Bob Tannenbaum (RSTHC@CUNYVM) From: Willard McCarty Subject: Test -- please ignore! Date: Thu, 24 Mar 88 16:49:48 EST X-Humanist: Vol. 1 Num. 1013 (1013) Testing 1,2,3. From: Willard McCarty Subject: Test - please ignore! Date: Thu, 24 Mar 88 16:53:34 EST X-Humanist: Vol. 1 Num. 1014 (1014) Testing 4,5,6. From: Willard McCarty Steve Younker Subject: Test - please ignore!test Date: Fri, 25 Mar 88 10:29:04 ESTFri, 25 Mar 88 10:23:22 EST X-Humanist: Vol. 1 Num. 1015 (1015) Ignore this test From: Willard McCarty Steve Younker Subject: Test - Please ignore!test Date: Fri, 25 Mar 88 10:35:37 ESTFri, 25 Mar 88 10:23:22 EST X-Humanist: Vol. 1 Num. 1016 (1016) Ignore this test From: Willard McCarty Steve Younker Subject: Test - Please ignore!test Date: Fri, 25 Mar 88 10:48:18 ESTFri, 25 Mar 88 10:23:22 EST X-Humanist: Vol. 1 Num. 1017 (1017) Ignore this test From: Willard McCarty Subject: Test - please ignore Date: Sun, 27 Mar 88 18:46:39 EST X-Humanist: Vol. 1 Num. 1018 (1018) Please ignore this entirely. From: Willard McCarty Subject: Test - please ignore! Date: Tue, 29 Mar 88 10:13:40 EST X-Humanist: Vol. 1 Num. 1019 (1019) Ignore this except to celebrate the fact that it has arrived. From: Willard McCarty Subject: Another test! Date: Tue, 29 Mar 88 10:17:24 EST X-Humanist: Vol. 1 Num. 1020 (1020) Ignore this joyfully. From: Willard McCarty Grace Logan John D. Hopkins Willard McCarty Grace Logan John D. Hopkins Willard McCarty Subject: Announcements (58)Conference announcementCompTrans ReportError in file-nameConference announcementCompTrans ReportError in file-name Date: Tue, 29 Mar 88 11:24:22 EST26-MAR-1988 11:33:19 GMTMon, 28 Mar 88 14:40 O29 March 1988 X-Humanist: Vol. 1 Num. 1021 (1021) (1) Date: 26-MAR-1988 11:33:19 GMT (13 lines) (2) Date: Mon, 28 Mar 88 14:40 O (11 lines) (3) Date: 29 March 1988 (12 lines) (1) -------------------------------------------------------------------- Assoc. 
for Logic Programming: Fifth Int'l Logic Programming Conference; Fifth Symposium on Logic Programming. 15-19 August 1988, Seattle, Wash. Contact on the above conference is Kenneth A. Bowen, Syracuse University, Logic Programming Research Group, School of Computer and Information Science, Syracuse, NY 13210. (2) -------------------------------------------------------------------- COMPUTER APPLICATIONS IN TRANSLATOR TRAINING AND TRANSLATION WORK IN FINLAND A report by John D. Hopkins, University of Tampere (Finland) International Conference For Translators and Interpreters Vancouver Community College, Canada 22-24 May, 1987 [Now available on the file-server, s.v. COMPTRAN REPORT] (3) -------------------------------------------------------------------- I mistakenly announced that the listing of CCAT's texts and software had been stored on our file-server as CCAT HOLDINGS. The real name is CCAT COLLECTN. My apologies. Willard McCarty mccarty@utorepas From: Willard McCarty Subject: Greetings (27)HUMANIST Date: Tue, 29 Mar 88 11:29:40 EST25 Mar 88 08:47 -0330 X-Humanist: Vol. 1 Num. 1022 (1022) Dear Colleagues: You should already be receiving HUMANIST mail from our locally resurrected system. My thanks for the several encouraging messages, such as the following: -------------------------------------------------------------------------- AAARRGHHH! It's been 36 hours! HUMANIST! I need HUMANIST! Hope you are not ill. Best Wishes -------------------------------------------------------------------------- A question for you. I know that at least one person is having some trouble extracting digested messages she wants to keep from those she doesn't. Who else? Perhaps if the people with this trouble were to identify themselves and their systems, others might have useful suggestions or solutions. 
Willard McCarty mccarty@utorepas From: Willard McCarty Willard McCarty "John J Hughes" Willard McCarty "John J Hughes" Subject: Files on the server (75)Availability of files on the serverOutline of Bits, Bytes, & Biblical StudiesAvailability of files on the serverOutline of Bits, Bytes, & Biblical Studies Date: Tue, 29 Mar 88 11:49:47 EST24 March 1988 (0 lines)Wed, 23 Mar 88 08:47:57 PST X-Humanist: Vol. 1 Num. 1023 (1023) (1) Date: 24 March 1988 (44 lines) (2) Date: Wed, 23 Mar 88 08:47:57 PST (13 lines) (1) -------------------------------------------------------------------- Please allow at least 24 hours from the time a file is announced as being available on the server until you attempt to fetch it. Soon, I am told, I'll be allowed to put things there myself, but until the file-serving software is granted its maturity by the local authorities, I must ask the assistance of our always helpful postmaster. Thus the delay. It is vastly more convenient for the both of us to work in this way under the current dispensation than for me to wait until a file is actually there before I announce it. Your indulgence, please. I attach below my standard message that I send to HUMANISTs who ask me to get files for them and to those who make the easy mistake of asking HUMANIST, rather than ListServ, for the file(s) they want. W.M. mccarty@utorepas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . Dear Colleague: Thanks for your query about fetching items from our file-server. The instructions for doing so are contained in the latest edition of "Guide to HUMANIST" in a file named HUMANIST GUIDE. If you joined our group recently, you will have received the proper edition of the Guide as part of the initial batch of files. If you've been a member for some time, you will have received this edition as a separate piece of mail. If you do not have the proper edition of the Guide, please let me know. 
Otherwise, please attempt to get the file(s) you want by following the instructions. If all your attempts fail, then let me know, and I'll gladly get the file for you. I'm sorry to have to redirect your request in this way, but the volume of mail I receive daily does not allow me to give as much individual attention as I would like. +++++++++++++++++++++++>>Note well<<+++++++++++++++++++++++++++++++++ Send your requests for files, including HUMANIST FILELIST (the file of files), to LISTSERV@UTORONTO and *not* to HUMANIST@UTORONTO. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Willard McCarty mccarty@utorepas (2) -------------------------------------------------------------------- Analytical Outline of _BITS, BYTES, & BIBLICAL STUDIES: A Resource Guide for the Use of Computers in Biblical and Classical Studies_. John J. Hughes (Grand Rapids: Zondervan Publishing House, 1987. 650 pp. Available from Zondervan or from Bits & Bytes Computer Resources, 623 Iowa Ave., Whitefish, MT 59937; (406) 862-7280; XB.J24@Stanford.BITNET). [This outline is available in full (more than 480 lines) on the file-server s.v. BITBYTES OUTLINE.] From: Willard McCarty Itamar Even-Zohar Subject: Nota Bene for Hebrew (86): Information about Nota Bene Hebrew version(s) Date: Tue, 29 Mar 88 11:52:48 EST X-Humanist: Vol. 1 Num. 1024 (1024) ------------------------- I have seen some queries in HUMANIST about Nota Bene's Hebrew version. As we have had quite a lot of experience with that here at the Porter Institute, Tel Aviv University, here is a brief piece of information: 1. There is a Beta version of Hebrew for Nota Bene's multilingual supplement. This version is made for American scholars who have no Hebrew chip in their printers (nor in their computers, but the latter can be easily solved with a Hercules Graphics Plus card). It can print vocalization plus other Masoretic material.
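The "Note well" above, that file requests go to the ListServ address and never to the list itself, reflects how BITNET file servers worked: the request is simply the body of an ordinary mail message. A hypothetical sketch (the "GET filename filetype" command shown follows common ListServ usage of the period; exact syntax varied by installation):

```python
# Hypothetical sketch of a BITNET ListServ file request of the period.
# The command travels in the *body* of a mail message addressed to the
# server (LISTSERV@UTORONTO), never to the discussion list itself.

def listserv_request(filename: str, filetype: str) -> str:
    """Build the one-line body of a ListServ file request."""
    return f"GET {filename} {filetype}"

# The envelope is ordinary mail; only the recipient matters:
#   To: LISTSERV@UTORONTO     (commands are read by the server)
#   not HUMANIST@UTORONTO     (mail here goes to every member)
body = listserv_request("HUMANIST", "GUIDE")
```

Sending the same line to the list address would, as Willard warns, broadcast the command to every subscriber instead of fetching the file.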
It probably can provide everything a Biblical scholar in America might need for his/her scholarly work (especially since Greek is available too). 2. This version is NOT adequate for Israelis and other people using Hebrew as a *primary* language, because the Hebrew alphabet has been imposed on various ASCII positions that are not compatible with those used in Israel. Since IBM people have imposed Hebrew on ASCII 128-154, once you change that you cannot be compatible with any other local programs, and that is not acceptable to us. For this reason, Dragonfly have provided us with a so-called "chip version", intended for people who have access to hardware adapted to Hebrew computing and printing (the regular hardware sold in this country). The Beta version originally used print modes to switch between Hebrew and Latin (allowing both push mode and regular writing in both directions within one file). I have temporarily changed that to impose language (direction) change on PITCH (e.g., Latin is PT12 while Hebrew is PT112 etc.; LR0 and LR1 respectively control the primary/major direction). I have avoided the SLS mode and returned completely to regular Nota Bene. The same solutions actually work very well with the new 3.0 version. You just need a different printer table (not provided by Dragonfly) and must impose the Hebrew letters on the CapsLock key.
Equal Width columns, however, are a bit shaky if you use more than 2; Newspaper Columns won't work. Textbase, database, and mailmerge all work with certain minor extra steps (you can now get a full vocabulary of a Hebrew text, since the new Textbase fully recognizes high-bit ASCII without transforming it to low-bit ASCII equivalents as it used to do before); accumulation of bibliographies works perfectly all right; indexes and tables of contents work all right, but for the moment you must run a program I have written to transform all the *numbers* to "left-right" direction, since they come out "right-left" (21 instead of 12 etc.). If you are interested in more details, reports on problems and other suggestions, I will shortly put on LISTSERV@TAUNIVM a document based on my previous memos to the Dragonfly people about the Hebrew version. But no doubt this is the most powerful REAL software package for Hebrew and multilingual writing. But if all you need is only some words in Hebrew inserted in some text, or just quotations, I am not sure you should bother, unless you are using Nota Bene anyway. [Editor's note: This file, NOTABENE REVIEW, is on our file-server.] Hebrew fonts designed by me and Nimrod Gil-Ad for the Hercules Graphics Card Plus are downloadable from LISTSERV@TAUNIVM. I have designed Russian and Arabic fonts as well, but I do not recall for the moment if they are available right now on the ListServ. (Writing Arabic with Nota Bene would work precisely like writing Hebrew, but you have no built-in solution for printing; only hardware can solve the issue, unless you are an expert in downloading printer fonts yourself.) From: Willard McCarty mbb@jessica.Stanford.EDU Subject: Text retrieval software (44): a few more thoughts on text retrieval (31 lines) Date: Tue, 29 Mar 88 11:57:04 EST X-Humanist: Vol. 1 Num.
1025 (1025) ------------------------- Well, recent contributions by John Hughes and Michael Sperberg-McQueen have, I think, pretty much done the job of covering all that is wished for in a text search engine. It's been a most helpful exchange, assisting me to completely revamp and extend the initial list I published on HUMANIST. Obviously we won't all agree as to what is "required," which is natural, reflecting different types of and approaches to scholarship. For example, much of what Michael calls for would be nice to have, but to my mind would be "extra credit." But vive la difference! Michael's contribution recalled my submission of some months back that kicked off a conversation about hypertext and high-powered workstations. In my utopian computer environment, BOTH a search engine (as has been outlined) and a robust hypertext system are integrated. For example, Michael's point about the on-line availability of an "apparatus criticus" seems ideally suited to a hypertext system. Only if both components are available and integrated can the computer really keep pace with the work of a humanist, who both examines and analyses texts (search engine) and documents his/her findings (hypertext). Finally, I think we are all waiting anxiously for MicroArras. The most promising approach I've heard is the one John Smith is taking: placing the "analytic engine" in UNIX on a workstation and the user interface in a DOS machine. That would free the engine from the constraints that DOS poses, such as the 640K memory space. Dream on.... Malcolm Brown Stanford (gx.mbb@stanford) From: Willard McCarty dow@husc6.BITNET (Dominik Wujastyk) Subject: URICA? (24) Date: Tue, 29 Mar 88 11:59:14 EST X-Humanist: Vol. 1 Num. 1026 (1026) --------------------- A little while ago there was a message about URICA here on HUMANIST. It was billed as a program that would be useful in constructing a critical edition.
I sent mail to the uucp address that was given, but it has been greeted with a deafening silence. I shall phone and so on, but in the meantime, can anyone say anything more about what URICA can actually do? Have any of us used it? Dominik From: Willard McCarty Itamar Even-Zohar Subject: Nota Bene and WordPerfect (353): Reply to some remarks about Nota Bene and WordPerfect (in a letter to a colleague) Date: Tue, 29 Mar 88 21:15:43 EST X-Humanist: Vol. 1 Num. 1027 (1027) [In the interests of vigorous and potentially enlightening argument, I am passing on a lengthy piece from a lively member in Israel. Nothing quite arouses the passions of a computing humanist like a debate about wordprocessing packages, and I am hoping that the following will be no exception. I am particularly hoping that we can get beyond debate and attempt to answer the question, `What makes good software good?' My thanks to the NOTABENE list, from which the following has been lifted. W.M.] ------------------------------------- 1. From what I have learnt about and experienced with WordPerfect, it seems to be a very good word processor. However, for a large range of features, Nota Bene is quite superior to it. No doubt, had there been no Nota Bene, or only its word processor, known on the market as XyWrite (version 3.1), WordPerfect would definitely have come first or second. 2. The programs cannot adequately be compared in toto, since Nota Bene is not just a word processor like WordPerfect but rather a package of software. Besides its word processor, it also has various application programs (indexing, bibliographies, database, generating forms, etc.), a unique Textbase and an extremely useful programming language. From the point of view of somebody in the social and human sciences, Nota Bene therefore has a priori a tremendous advantage, since even pricewise it is a better bargain than if you had to add all those extras to WordPerfect.
The Textbase, however, even if you wished to buy it for WordPerfect, is simply not available on the market as a separate piece of software. And the absence of a programming language in WordPerfect cannot be remedied by some extraneous programming language. I would like to stress that the Textbase is such a unique feature that I would have bought Nota Bene even if its word processor had been inferior to WordPerfect's (which is not the case, as you will see from the following). WordPerfect's official price is $495 (according to PC-Magazine), while Nota Bene's official price is also $495. Both can be purchased for much more favorable prices, but in view of Nota Bene's package-deal nature it is a far better bargain, and in view of its superiority as a wordprocessor, even if there had been a price disadvantage, the quality and quantity of its features would have justified it fully. (Note: I know there is an indexing function in WordPerfect, as well as a table-of-contents utility, but they work far less elegantly than Nota Bene's, esp. as regards number of levels and formatting options.) 3. WordPerfect had, until very recently, two important features lacking in Nota Bene - a good speller and a thesaurus. This was a clear-cut advantage, but WordPerfect had to develop them because, not being a pure ASCII program, it would have had difficulties in interacting with extraneous spellers and thesauri. Nota Bene, on the other hand, could afford to postpone developing its own speller/thesaurus because all spellers and thesauri can be used with it, including Turbo Lightning. Since you can load any program (software) on top of Nota Bene (that is, without exiting Nota Bene), it is also very easy and practical to use non-memory-resident spellers and then return to Nota Bene in no time. Recently, version 3.0 of Nota Bene got a fabulous speller and thesaurus.
PC-Magazine some time ago enthusiastically described XyWrite's thesaurus and spelling checker, which it considered far superior to anything then available on the market. These have now been made available to Nota Bene's users with some remarkable enhancements (like the auto-replace writing mode). So even this gap between the programs has been bridged, to the clear advantage of Nota Bene. 4. If we compare Nota Bene in toto to WordPerfect, the total WordPerfect would equal one third of Nota Bene. It would therefore be fair and adequate to compare only this portion, which is mainly the wordprocessing features of both. Let me therefore discuss some of the programs' respective wordprocessing features: 1. Speed I find it very strange to read your colleague's words about WordPerfect being a faster program than Nota Bene. Nota Bene's wordprocessor is basically an enlarged XyWrite. And XyWrite, the official wordprocessor of PC-Magazine for the last two years (see PC-Magazine Vol. 6 No 10 [May 26 1987], p. 220), has won the recognition and acclaim of all experts in the field for being the fastest program on the market. (There are many references to the "blazing speed" of XyWrite/Nota Bene. See the quoted article, p. 219.) (To scroll from beginning to end of a long file takes considerably less than half the time in NB than in WP, and other functions are similar. Moreover, cursor movement, deletion etc. in NB can be done by letter, word, phrase, sentence, or paragraph. In WP as of version 4.1 only word-level functions were available, and even those only for cursor movement and forward delete, if I am not mistaken.) 2. WYSIWYG and Desktop publishing. It is not true that WordPerfect is more WYSIWYG than Nota Bene. And Nota Bene is ahead of any other wordprocessor for Desktop publishing. WP is no more WYSIWYG than NB. In NB bold is shown as bold, underline as underline, and line-breaks appear where they will print.
In NB one sees the deltas which indicate precisely where format commands have been placed, so that one can edit the format commands directly, a process which is extraordinarily difficult in WP. There one must enter a reveal-codes mode, in which the cursor moves very sluggishly, and even then only one or two lines at a time can be seen. Moreover, one cannot edit the values of format commands directly, but must erase the existing ones and reenter new ones from the menu. Finally, if one wants in NB to see the document with true spacing and with the deltas suppressed, these modes are only a keystroke away; moreover, they can be made defaults as well. Toggling between these modes and the regular mode is a matter of a keystroke. It is true that WP shows page breaks on screen, which one must enter review mode to see in NB, and this is a distinct advantage of WP; however, it comes at the cost of severely reduced scroll speed in WP. NB offers you a page-line mode, which also tells you very quickly where the page breaks are. Further, it is simply not true that one requires a Hercules card to access extended characters in NB. These are fully accessible, and if you wish, you can also put them not where NB put them but where YOU would like them to be. The Hercules card or EGA is only required for special languages whose characters are unavailable on a chip, such as Hebrew with vocalization, Greek, or Cyrillic. WP cannot access such downloaded character sets at all at present. So NB not only has better access to the extended characters but also has access to downloaded characters, which are not accessible at all in WP. As for desktop publishing, "So many professional writers make use of XyWrite's ability to generate pure ASCII files, which can be handed directly to a typesetter, that staying ahead of the pack here virtually assures continuing preeminence for XyWrite" (PC-Magazine, Vol. 6 No 10:219).
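The sorting facilities discussed in the next section, custom "sorting tables" and sorting by a key hidden in a delta, correspond to two standard programming techniques: a custom collation function, and sorting each visible line by an invisible key attached to it. A hypothetical Python sketch of both ideas (not Nota Bene's actual mechanism; the letter ordering and names are invented for illustration):

```python
# A "sorting table": case-insensitive collation, with a non-ASCII letter
# ordered where the language (here, Swedish-style å after z) puts it,
# rather than where its character code would.
ORDER = "abcdefghijklmnopqrstuvwxyzå"
RANK = {ch: i for i, ch in enumerate(ORDER)}

def collate(word: str):
    """Map a word to a sortable key under the custom letter order."""
    return [RANK.get(ch, len(ORDER)) for ch in word.lower()]

# Hidden sort keys: each visible line carries an invisible key (the
# family name), playing the role of the key in a hidden delta, so the
# visible text itself is never altered by the sort.
lines = [
    ("Mrs. and Mr. Andersson", "andersson"),
    ("Mrs. and Mr. Åberg", "åberg"),
    ("Mrs. and Mr. Berg", "berg"),
]
by_key = sorted(lines, key=lambda pair: collate(pair[1]))
visible = [text for text, _ in by_key]
```

A naive character-code sort would misplace "Åberg"; the collation table puts it after "Berg", and the hidden key lets the "Mrs. and Mr." prefix stay untouched.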
For more about Nota Bene's superiority as a Desktop wordprocessor see page 220 of the article quoted above. In a special review about "The Desktop-Publishing Phenomenon" by John W. Seybold (Byte, Vol. 12 No 5 [May 1987]:149-166), it is stated that: "As a microcomputer-based text generator for composition systems, XyWrite has no competitors. XyWrite's compatibility with almost every other system in the composition-systems market has made it the most popular text processing package for microcomputers in the publishing industry. It has become the program of choice for people in the composition business and is commonly used to emulate the editing functions of large editorial system such as Atex..." (p. 164, 166). On the whole, the layout features of Nota Bene are far superior to WordPerfect's, as you can infer from the description in PC-Magazine. Nota Bene/XyWrite's fantastic invention of the commands hidden in deltas allows an incredible flexibility with layout. Not only can you shape the document as you like, but you can easily see your hidden commands at any time (without getting a preview). 3. Sorting Sorting is an extremely advanced feature in Nota Bene, and has been even further enhanced in version 3.0 (see my document on that version in NOTABENE@TAUNIVM). You can sort a file the way DOS does, but in contradistinction to DOS, you have far better control of the parameters. For instance, you can decide in what order sorting will take place by constructing various "sorting tables". This means that even non-English language material can be sorted correctly and that, if you wish, upper and lower case will be treated equally (unlike in DOS). Moreover, you can define, within a file, any number of lines and sort them in a split second. Among the most ingenious sorting options is the one allowing you to sort material without actually altering anything in the file. For instance, if you have a list of addresses where each address begins with "Mrs.
and Mr....", you can put into a hidden delta the item according to which you want sorting to take place (like a family name). (And I have written a program in Nota Bene which carries out this operation automatically.) 4. Printer and Character set(s) customization There is not anything like Nota Bene's printer and character tables in WordPerfect, allowing you to customize complicated matters quickly and in an incredibly versatile way. Since printer tables are open to modifications, you can have control of a lot of features involved in computer-printer communication. For instance, you can decide how a certain font or print mode will actually print (e.g., underline as italics, bold reverse as enlarged, or whatever). Character tables allow high flexibility with printwheels, ASCII writing and automatic transliterations. 5. Automatic Caps There isn't anything like Nota Bene's automatic capitalization at the beginning of sentences in WordPerfect. This simple yet ingenious feature saves you at least 25% of typing time and mistakes. 6. Automatic numbering I am not aware of the existence of 10 levels of automatic numbering, the shape of which can be controlled by a few deltas, in WordPerfect. (That is, you can change numbers to letters or Roman numerals in no time, even after having written the automatic numbers.) 7. Cross-referencing and multiple cross-referencing I do not think WordPerfect has got any cross-referencing features. In Nota Bene you can refer to the number of a footnote, a page number or a section number. Anybody in our field knows very well how many tedious hours of hard drudgery are thus saved! And "Better still, if you like to create separate documents for different chapters that have subdivisions that are numbered, but you want to chain-print them, the number references can be made conditional, so that they will be ignored when included in a larger print chain but displayed if that document is handled alone" (PC-Magazine 6, 10:220). 8.
Writing in columns I am not aware of WordPerfect's column features, but as far as I have worked with it, I do not recall that it has the capacity of writing equal-width columns (far better than tabs for many documents, such as programs of conferences). As for newspaper columns, I don't think they work as easily in WordPerfect, since with Nota Bene you can either pre-decide to write that way or insert the necessary deltas post factum. (I am not sure of the number of columns either; in Nota Bene you can write 6 such snaking columns.) 9. "Foreign" Characters and "Foreign Language" versions I doubt that WordPerfect can so freely accommodate "non-English" (or, as the Americans call them, "foreign") characters. Nota Bene simply allows you to put those anywhere you like on its keyboards, plus you have full access to ASCII wherever you are. For multilingual writing this is a great relief. Moreover, Nota Bene has developed a very advanced Hebrew version, still experimental but already highly advanced, more than any extant Hebrew wordprocessor (I have been working with it for some time). Russian and Greek follow suit (these are not as complicated as Hebrew). Since you can access any downloaded characters and use character tables freely, other non-European languages can also be written. WordPerfect is not even interested, so it seems, in developing anything for Hebrew or other languages. 10. The open nature of Nota Bene is on the whole far more sophisticated than anything on the market. You may work either with or without menus/help screens, while with WordPerfect you are compelled to go through them for quite rudimentary matters (such as copying or moving). You can customize many defaults (some of which do not even exist in WordPerfect: turning the backup option on/off; having a prompt before erasing a file; changing the cursor from blinking to non-blinking; and a lot more) in a matter of seconds. 11.
Various features that seem to be lacking in WordPerfect The following features seem completely absent from WordPerfect (please ask your friend to correct me where I am wrong): 11.1. APPENDING (rather than merging) a file to another file (without actually entering it), or portions of a file to some other file. 11.2. Calling a file from any subdirectory and then saving/storing it to that directory without going to it. 11.3. Finding (through the "find" command) any file on the hard disk and then calling it to the screen without going to the actual directory. 11.4. Writing hidden prompts anywhere in a file and then accessing them quickly. (I believe WordPerfect 4.2 has now introduced something similar to that.) 11.5. 3 sets of footnotes plus endnotes, with the call number in either the Arabic, Roman or Latin numbering system, plus any sign (asterisk or no sign at all). 11.6. 9 windows, full, split horizontal, split vertical or all combined. Easy and fast movements between windows, copying/moving etc. (See praise of inter-window movements in the said article, p. 219.) 11.7. Automatic hyphenation, fully customizable, plus an open dictionary of exceptions for both English and any other language. 11.8. Better methods for print modes (underline etc.) than any other wordprocessor's, since these can be changed/abolished/searched in one command. 11.9. Much stronger (and of course quicker!) search/search back/change/change invisible/change for only upper or lower case/etc. operations. 11.10. Multiple search on whole diskettes/subdirectories. 11.11. Enlarged directory (with a desired number of lines from the beginning of each file). 11.12. Full control of print types and fonts from the file (with deltas), including mixture of pitches and proportional vertical and horizontal spacings (Nota Bene calculates the screen in tenth-of-an-inch units rather than in number-of-characters). 11.13. Chain printing with sequential/non-sequential numbering. 12. Clumsy functions in WP Many wordprocessing functions are very clumsy in WordPerfect.
I admit I have not worked with 4.2, but as far as I have read, 4.2 has not dramatically changed 4.1. For instance, consider moving a passage from one place to another. In WP you press Alt F4 to begin defining, then move the cursor word by word to the end of the paragraph. Then Ctrl F4, and you get a menu, where you choose the option cut, then move the cursor to the insertion point, press Ctrl F4 again and then choose the insertion function from the menu. In NB, you define the paragraph with one stroke, move the cursor to the insertion point, and press gray minus. To change footnotes to endnotes in WP you must write a macro with about 8 steps and run it for each note. Of course, you can automate this process, but it takes several seconds for each individual note. In NB you add two format commands (deltas). 13. WordPerfect's clumsy macros vs. Nota Bene's customizable keyboard and unique programming language WordPerfect allows writing macros for adding necessary functions. If you read a lengthy article about this in PC-Magazine ( ) while already familiar with Nota Bene's customization and programming possibilities, you are definitely astounded by the clumsiness and rudimentary nature of these "macros". Such results, and far better ones, can be achieved most elegantly and easily in Nota Bene by either customizing the keyboard, which is a smooth and painless operation, or by writing programs, small or large. If you take my file of programs I have written for Nota Bene, you will be able to fully appreciate the difference. Besides, some of the macros suggested in that article are already built-in features in Nota Bene. An interesting exchange in PC-Magazine sheds some real light on the matter of macros in WP. Dave Tocus from Rockville, Maryland writes: One drawback to WordPerfect macros is that the macro files must live in either the current directory or the same directory as WP.EXE. But since I have 130 macros, I would like to keep them in their own subdirectory.
The editor of this section, M. David Stone, reacts: ...My own preference with WordPerfect is to ignore the macro feature and use Superkey, Prokey, or some other keyboard redefinition utility instead. These programs eliminate the clutter of macro files by putting all WordPerfect macros in a single file. They also let you edit the macros, even without the WordPerfect Library. Prokey also permits named macros. (PC-Magazine, Vol. 16 No. 12, June 23, 1987:365. [Power User section]) 14. Finally, it is NOT true that learning Nota Bene is a difficult matter. On the contrary, with its clear and transparent philosophy, language-oriented (rather than arbitrary-key-oriented) commands, (optional) help screens and menus, very good Tutorial and extremely laudable Manual, learning Nota Bene is a real enjoyment. Everybody can use it after a very short time very successfully, and those who wish to really make the most of it never reach a point of disappointment. In making version 2.0, the Dragonfly people have accommodated many incredible whims and dreams of many of their users, and it seems that this promptness has made the program what it has become. *****END***** From: Willard McCarty Rocco Capozzi Mark Olsen Subject: Query & Reply (98) Machine-assisted translation; URICA Date: Tue, 29 Mar 88 21:29:33 EST X-Humanist: Vol. 1 Num. 1028 (1028) (1) Date: Tue, 29 Mar 88 14:08:49 EST (21 lines) (2) Date: Tue, 29 Mar 88 11:20:26 MST (60 lines) (1) -------------------------------------------------------------------- Does anyone know of reasonably inexpensive software for machine-assisted translation in an instructional setting? We in the Department of Italian Studies at the Univ. of Toronto would like to set up a course to train translators, and we are interested in using PC-type machines in a lab.
We'd like software that would provide a good assortment of tools, e.g., online dictionaries and thesauri and user-constructed terminological dictionaries. The software could either provide the usual assortment of word-processing tools or work in conjunction with a word processor. The classroom should be kept in mind, but we do not need a network in which the students' machines are linked to the instructor's. Thanks very much. Rocco Capozzi ersatz@utorepas (2) -------------------------------------------------------------------- To respond to Dominik's query about URICA, I have tested it for use here and think it is a pretty good system. URICA stands for "User Response Interactive Collation Assistant" -- stress the interactive. It compares one text file to either another text file or keyboard entry, stopping at EVERY variant, allowing the user either to correct an error (in keyboard mode) or to write the variant to an apparatus file. It runs well and is fast enough to be useful. Text appears in two windows and you can follow along as it compares the texts (assuming of course that you are working with two files as opposed to keyboard entry). The format of the apparatus is typically as follows:

URICA : User Response Interactive Collation Assistant
TEXT 1 : C:GRIMM.TX2
TEXT 2 : C:GRIMM.TX2
INSERTION
P001L01W08 his << >> wife
P001L01W08 his << lovely >> wife
TYPOGRAPHICAL ERROR REPLACEMENT
P001L02W22 which << over- looked >> a
P001L02W22 which << overlooked >> a
DELETION
P001L03W07 of << lovely >> flowers
P001L03W06 of << >> flowers
REPLACEMENT
P001L04W01 and << nobody >> dared
P001L04W01 and << no one >> dared
INSERTION
P001L04W11 a << >> powerful
P001L04W12 a << very >> powerful
REPLACEMENT
P001L04W17 by << everybody. >> One
P001L04W19 by << everyone. >> One
DELETION
P001L06W13 eat << some of >> it.
P001L06W13 eat << >> it.

I had been playing with OCCULT before seeing URICA, and let me assure you that it is a far sight easier to use and more accurate than that old beast.
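[As an editorial aside: word-level collation of the kind URICA performs -- aligning two word sequences and reporting each insertion, deletion, or replacement -- can be sketched compactly today with the Python standard library's difflib. This is an illustrative sketch only, not URICA's actual code; the collate function and its labels are invented for the example.]

```python
import difflib

def collate(text1, text2):
    """Word-level collation of two versions of a text.

    Aligns the two word sequences and reports each variant as an
    (OPERATION, reading1, reading2) triple, in the spirit of a
    collation apparatus. Illustrative sketch, not URICA's algorithm.
    """
    w1, w2 = text1.split(), text2.split()
    labels = {"insert": "INSERTION", "delete": "DELETION",
              "replace": "REPLACEMENT"}
    apparatus = []
    matcher = difflib.SequenceMatcher(None, w1, w2)
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op != "equal":
            # Record the diverging readings from each text.
            apparatus.append((labels[op],
                              " ".join(w1[i1:i2]),
                              " ".join(w2[j1:j2])))
    return apparatus

# Collating two readings of one of the Grimm sentences in the sample:
variants = collate("and nobody dared", "and no one dared")
# variants == [("REPLACEMENT", "nobody", "no one")]
```

[An interactive tool like URICA would, of course, pause at each variant and let the user decide; the sketch only extracts the variant list.]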
I have not had the opportunity to use it "for real," but the people I have shown it to here think that it would be most useful. Mark From: Willard McCarty Richard Goerwitz CMI011@IBM.SOUTHAMPTON.AC.UK Subject: Announcements (78) Penn OT texts; Job in multi-media databases Date: Tue, 29 Mar 88 21:31:49 EST X-Humanist: Vol. 1 Num. 1029 (1029) (1) Date: Tue, 29 Mar 88 15:22:09 CST (25 lines) (2) Date: Tue, 29 Mar 88 17:40:37 GMT (38 lines) (1) -------------------------------------------------------------------- The two programs that I had which a) sliced up Penn OT texts and b) printed them out are still available. I say this because a couple of people requested them two weeks ago - at a time when I happened to be about to take an Aramaic and Akkadian exam. I had to send out only file a. File b has a font that goes with it, and I said, "Send me a reminder." Please go ahead and send me those reminders, you people who still want them. Apologies to Bob Kraft, by the way. My notes on his texts seemed to imply that his texts were somehow defective - which they are not! The coding scheme is, as it stands, minimal; but he has programs that can expand them as needed (as do I). Note also: In more than a year of intense experiments with these texts I have yet to find an error. This has been very hard for me to grasp. How can this be?! -Richard L. Goerwitz goer@sophist.uchicago.edu !ihnp4!gargoyle!sophist!goer (2) -------------------------------------------------------------------- Here is an advert that went out a couple of weeks ago. Push it out on any bulletin board you can think of. Thanks.
(PS I don't know how to put the pounds sign in for the salary) RESEARCH ASSISTANT IN COMPUTER SCIENCE The Image and Video research group of the Department of Electronics and Computer Science has been awarded a research grant to employ a research assistant/programmer for at least one year to work on multi-media databases. Part of this work, at least one-third of the time, will be undertaken at the University of Essen in West Germany. Appropriate supplementation will be given to reflect extra living expenses whilst in Germany. The successful applicant would be eligible to register as a part-time Ph.D. student at the University of Southampton. A suitable honours degree and an ability to program in Pascal and C are necessary requirements for the job, but some knowledge of German would be an advantage. For further information contact Dr Wendy Hall, Department of Electronics and Computer Science, University of Southampton, Southampton SO9 5NH, UK. Applications, including CV and names and addresses of two referees, should be sent to Mr. H.F. Watson, Staffing Department, The University, Highfield, Southampton, SO9 5NH, as soon as possible, quoting reference 629/HFW/SMT. From: Willard McCarty Gene Boggess Subject: Text comparison software (32) Date: Wed, 30 Mar 88 19:06:39 EST X-Humanist: Vol. 1 Num. 1030 (1030) Regarding Mark Olsen's March 16 () remarks on text comparison programs, one of my colleagues, Dr. Peter Shillingsburg, has something that may be of use. He has long been involved in The Thackeray Project, which compares various editions of Thackeray's works. To facilitate this process, he devised the CASE (Computer Assisted Scholarly Editing) program, which is designed to assist the production of critical editions from text comparison through preparation of textual apparatuses and typesetting.
One of my assistants has recently completed the conversion of this program from PL/1 to Pascal and has compiled the system to run as a series of menu-driven programs for the IBM-PC and compatibles. The text-comparison portion is not interactive, as Olsen requested, but it is integrated so that the output from one process serves as input for the next. It is best for use with large prose texts with multiple relevant manuscripts and editions. For further information, write: Dr. Peter L. Shillingsburg English Department, Mississippi State University Mississippi State, MS 39762. From: Willard McCarty Mark Olsen Subject: Translation software (25) Date: Wed, 30 Mar 88 19:09:54 EST X-Humanist: Vol. 1 Num. 1031 (1031) I have reviewed a couple of translation tools that might be useful. Mercury by Lingua-Tech (recent _Computers and the Humanities_ review) is a pretty decent memory-resident multi-lingual glossary manager. INK TextTools provides more sophisticated glossary management and memory-resident access. (I reviewed it in the most recent number of _LT: Language Technology_.) I can send you e-mail copies of either of these reviews. You may also be interested to know about the translation support that Dr. Ted Cachey and I are giving to the _Repertorium Columbianum_ project under the direction of Fredi Chiappelli (UCLA). This is a twelve-volume body of texts dealing with the discovery of America. We are using WordPerfect, WordCruncher and Mercury to provide an interesting translation environment. A preliminary outline of that methodology by Cachey and me appeared in _Computers and Translation_ last year. The goal of the project is consistent translation across the volumes, and we hope that this approach will encourage that. Let me know if there is anything I can do for you.
Mark Olsen From: Willard McCarty "David Owen, Philosophy, University of Arizona" "Michael Sperberg-McQueen" Subject: Wordprocessing (79) Turbofonts; Nota Bene and Word Perfect (ca. 40 lines) Date: Wed, 30 Mar 88 19:13:15 EST X-Humanist: Vol. 1 Num. 1032 (1032) (1) Date: Tue, 29 Mar 88 23:29 MST (22 lines) (2) Date: 30 March 1988 09:18:21 CST (39 lines) (1) -------------------------------------------------------------------- Only one person responded to my recent request for experiences in the use of the multi-lingual add-on Turbofonts. He reports that a colleague has used it with some success though with some difficulty. I quote: "She said that Turbo Fonts produces good enough output once it is configured, but that configuring it to work with WordPerfect 4.2, with the printer driver in use in the department, and with one or two other things had just about driven her crazy. For example, when she finally thought she had it installed and working, every printout had seemingly random dots and partial underlines scattered through it, which were apparently the devil to track down and eliminate. She does say, though, that TurboFonts has not caused anything else to crash, which is some blessing, I guess." Sounds as if we should wait for WordPerfect ver 5, or switch to NotaBene. David Owen OWEN@ARIZRVAX.BITNET OWEN@RVAX.CCIT.ARIZONA.EDU (2) -------------------------------------------------------------------- While an admirer of both Nota Bene and Word Perfect, I have had more experience supporting the latter. So I can offer these corrections to Itamar Even-Zohar's list of things Word Perfect cannot do: Word Perfect has no trouble at all displaying or accepting from the keyboard any character in your character set; I never had any problems with EGA or other user-loaded fonts.
The key definition facility is, to be sure, less flexible than Nota Bene's (but also easier to use). For complex key redefinitions, one can and should use a memory-resident keyboard macro program. (These have always worked with Word Perfect; last time I tried, they did not work with XyWrite or N.B. -- has that changed?) Also, Word Perfect does have the abilities:
- to add columns after the fact
- to retrieve and save files from/to directories other than the current directory (11.2)
- to search for formatting codes (11.8)
- to search for a word or phrase in a whole set of files (11.10)
- to control font switching, etc. in the printer (11.12)
(I don't know what 'full control' might include, so I won't claim it. Word Perfect's printer drivers are numerous and readily accessible for user customization. Their chief drawbacks vis-a-vis XyWrite / N.B. printer drivers are that they are not ASCII files and they have some mysterious overall length limitation, which I ran into only with laser printers requiring extraordinarily long escape sequences.) I don't dispute the central claim that Nota Bene is a good program and more powerful than Word Perfect. But the record should be correct on the details. Michael Sperberg-McQueen, University of Illinois at Chicago From: Willard McCarty Itamar Even-Zohar Willard McCarty PROF NORM COOMBS Subject: Wordprocessing: NB & WP (251) Re: Wordprocessing (79); What makes good software good?; Word Processing software Date: Thu, 31 Mar 88 20:59:47 EST X-Humanist: Vol. 1 Num.
1033 (1033) (1) Date: Thu, 31 Mar 88 16:54:08 IST (36 lines) (2) Date: 31 March 1988 (72 lines) (3) Date: Thu, 31 Mar 88 12:42 EST (121 lines) (1) -------------------------------------------------------------------- I should like to thank Michael Sperberg-McQueen for his corrections to my comparative description of WP vs. NB. As more corrections arrive, as I hope they will, we will no doubt have a better comparison. I would like however to make two comments: 1. I tried to underline in my description that it is not enough to have the same feature in 2 different programs. A major question is how *accessible* and *implementable* that feature is. I think that in view of that, WP drivers don't come even a bit close to the flexibility of NB's drivers. This is not an easy task in NB either, basically because printers are complicated. But even with laser printers you have all drivers open to you as an open book, and you have full control of all details. 2. Although I was sad to discover that NB 3.0 has cancelled the ESC keyboard, the overall power of the keyboard has not been changed. I now use Right Shift as a toggle key instead, and with my AT I have customized the SysReq key. You can put in any macros as before, and powerfully combine phrase library macros and ampersand macros with the keyboard. I normally run most of my most needed programs (written in NB language and downloadable from LISTSERV@TAUNIVM. If you want to see what's available type: TELL LISTSERV@TAUNIVM INDEX NOTABENE) from the keyboard rather than imposing long macros on it. Thanks again for all corrections. But I am afraid that even with the pluses added, WP is less adequate software for us, researchers in the human and social sciences. Itamar Even-Zohar Porter Institute for Poetics and Semiotics (2) -------------------------------------------------------------------- Itamar Even-Zohar has touched on an interesting point about wordprocessing packages that, I think, applies to software in general.
He remarks that the fact that two packages have roughly (or even exactly) the same features does not tell the whole story; you must look at how these features have been implemented. I would like to carry this further. I would like to argue that features *as such* are epiphenomena, that what matters more to the user of a program is the underlying `structure' manifested in these features and their manner of implementation. As in other matters, we are constrained to know invisible things through the visible, but we must take account of the invisible or we are at the mercy of devils. That is to say (using less religious language), computer programs are human artifacts and so are bound to incorporate a human mentality. It is this mentality, or whatever else you wish to call it, that must finally be considered, and *is* considered subliminally if not analytically by the user. Why else is it that people tend to feel so deeply about their wordprocessing packages and get so defensive if they (which is the antecedent to this pronoun?) are attacked? I was driven to think in this way by spending considerable time reviewing software, mostly wordprocessing software. I had a minor epiphany one day when attempting to figure out a particular package (I mercifully forget which): I suddenly realized that the thing must have been designed by a raving lunatic. It had all the right features, but the way they were put together seemed to make no sense whatsoever. To use an analogy, the experience was not like talking to a man confused by drink, but rather like talking to someone in the preternatural clarity of some schizophrenic state. Less dramatic encounters have reinforced my conclusion that programs reveal mental states or conditions. Thus I've also briefly lived with cloyingly obsequious programs, with neo-Stalinist systems, and with others that have all the arrogant helpfulness of a benign authoritarian towards slaves and children.
I am rather less sure of how to systematize this sort of analysis. Features are easily listed, and one can say whether or not a program does what its vendor claims it does. Perhaps one reason why people rightly continue to feel that programs must be tried out before they buy them, or that the opinion of a trustworthy advisor must be sought, is that the worth of a program cannot be determined from any list of attributes. Would any of us substitute the listing of a table of contents for a good book review by a dependable scholar? I happen to like the features of Nota Bene, and I use many of them. Fundamentally, however, I use the package because I have considerable respect for the mind in the program. It is a very scholarly mind, and it indeed manifests some of the quirks of personality scholars often have, e.g., not suffering fools gladly but offering the initiate great rewards of intellectual joy. (Let it be noted, however, that my 11-year-old son, who is not an especially good student, uses NB for all his writing assignments and school projects. So, is NB difficult? Perhaps the academic with 4 languages and a Ph.D. should feel a bit reluctant about claiming that it is.) I for one would be very interested to know what others think about these matters, i.e., what makes good software good. Since many of us are more or less directly involved with the designing of software if not the actual writing of it, and since most or all of us live with software daily, I'd think this a worthy subject for debate here. Willard McCarty mccarty@utorepas (3) -------------------------------------------------------------------- What follows is a commentary on Word Perfect by one of our software specialists, who also teaches Word Perfect courses for users. The comments and opinions are his. His name is Vince Incardona VXIACC@RITVAX.BITNET I am Norman Coombs NRCGSH@RITVAX.BITNET ......... ........
I don't mean to imply that NB is no good (actually, I wish Dragonfly Software had chosen WordPerfect as a base to work from), or to compare WP and NB as this author has done, but I really feel that the guy who wrote the "comparison" in 46.0 should get his facts straight before putting a critique like this out on a network. For example: > 6. Automatic numbering > > I am not aware of the existence of 10 levels of > automatic numbering, the shape of which can be controlled by few > deltas, in WordPerfect. (That is, you can change numbers to letters > or Roman numbers in no time, even after having written the automatic > numbers.) WordPerfect has had this feature for at least 4 years that I know of. They call it "mark text". > 8. Writing in columns > > I am not aware of WordPerfect's column features, If you don't know anything about the feature, how can you compare it to the same thing in some other package? > it, I do not recall it has the capacity of writing > equal-width columns Well, it does, and has had this ability since at least 1985. > programs of conferences). As for newspaper columns, I don't think they > work as easily in WordPerfect, since with Nota Bene you can either > pre-decide to write that way or insert the necessary deltas post > factum. You can do this with WordPerfect, too, and in much the same way NB does it. The author couldn't even have tried it. > (I am not sure of the number of columns either; in Nota Bene > you can write 6 such snaking columns.) WP allows as many as you can fit on the page. 6 or 8 is about as many as you can reasonably fit. > You may work either with or without menus/help screens, while with > WordPerfect you are compelled to go through them for quite > rudimentary matters (such as copying or moving). No, you're not "compelled" to go through _any_ sequence of keystrokes, menu or otherwise. That's what macros are for - but then this author says he doesn't like to use them because he prefers to write programs instead.
To each his own, but I'd rather not program if I don't have to. > The following features seem completely absent from WordPerfect (please > ask your friend to correct me where I am wrong): Consider yourself corrected: > 11.2. Calling a file from any subdirectory then saving/storing it to > that directory without going to it. > 11.3. Finding (through "find" command) any file on hard disk then > calling it to screen without going to actual directory This ability has been part of WordPerfect ever since Version 3.4. Use the "retrieve" key. > 11.8. Better methods for print modes (underline etc.) than any other > wordprocessor's, since these can be changed/abolished/searched in one > command. You can do this in WordPerfect, too. Always could, as far as I know. > 11.10. Multiple search on whole diskettes/subdirectories. It's option number 9 on WordPerfect's "list files" menu. > Many wordprocessing functions are very clumsy in WordPerfect. > to move a passage from one place to another. In WP you press Alt F4 to > begin defining, then move the cursor by word to the end of the > paragraph. You're right, that's clumsy. That's why the best way to move a paragraph in WP is to press CTRL-F4 and choose "move paragraph" - something this person has obviously not even looked up in the manual. > Everybody can use it (Nota-Bene) after a very > short time very successfully... That statement is more of a glittering generality than an actual fact. I would challenge that notion with respect to ANY software, and the lack of objectivity in a statement like this really makes me question this person's agenda in comparing these two packages. There are other erroneous or misleading statements in here, but the point is that postings like this should be taken with a grain of salt. I've noticed that sometimes people tend to make up their minds that they are going to dislike some things before trying them, and I wonder if a little of that isn't going on in this person's mind.
They apparently used a long-outdated version of WP, and could not have spent more than an hour with it before deciding it wasn't as good as whatever it was they were already using. From: Willard McCarty GW2@VAXA.YORK.AC.UK Subject: Readers of Madame Bovary? (25) A New Translation of Madame Bovary Date: Thu, 31 Mar 88 21:03:59 EST X-Humanist: Vol. 1 Num. 1034 (1034) I am currently working on a new translation of MADAME BOVARY, to be published by Penguin Books in 1990. Anyone interested in reading - critically - a few chapters-in-progress? I don't expect any massive labour of erudition, or even a knowledge of the original. But it would be useful to have a couple of 'test-readers' scanning my version for the incomprehensible or the merely fatuous. Please get in touch if you think you might be interested. Geoffrey Wall From: Willard McCarty Diane Balestri Norman Zacour Sterling Bjorndahl - Claremont Grad. School Subject: Wordprocessors & minds (159) the mind in the program; The folly of comparing; word processing and what makes good software good Date: Fri, 01 Apr 88 17:21:33 EST X-Humanist: Vol. 1 Num. 1035 (1035) (1) Date: Fri, 01 Apr 88 11:47:34 EST (11 lines) (2) Date: 1 April 1988, 10:25:48 EST (45 lines) (3) Date: Fri, 1 Apr 88 10:59 PST (80 lines) (1) -------------------------------------------------------------------- I appreciated Willard's comments about the difference between features and the (in some sense) human level of interaction between a user and the structure of a program such as a wordprocessor. They seemed connected to the thesis of a book that I have just started to read, called Understanding Computers and Cognition (Addison Wesley, I think?)
by Terry Winograd and Fernando Flores. Have any other humanists read it, and would they be willing to offer an opinion about its value? (2) -------------------------------------------------------------------- I doubt that a comparison of the "features" of different word processors like Nota Bene (=NB) and WordPerfect (=WP) is going to get us any further. As time goes on the leading word processors, like the commercial applications of spreadsheets and data bases - indeed, like the leading laundry soaps - will become more and more alike, all claiming to get your clothes whiter than white. Where the differences remain marked, of course, they can remain important; but it is difficult to make a balanced assessment when one is, say, an enthusiastic specialist in one package who bases much of his opinion about the other on the pages of PC Magazine. Others may wish to redress the balance a little; my own hope for NB 3.0 is that it will number printed lines (every line, every fifth line, every tenth, whatever), giving the user the choice of numbering blank lines or not, restarting numbering on each page or not, turning the numbering off and then turning it on, and so on. It's a nice feature of WP, which would be especially useful in any word processor that aims at a scholarly market. But in fact these differences will disappear. What is important when introducing new users to large and elaborate word processors (which at least in part was what that wonderful encomium on NB was about) is something that many of us in our pride of knowledge tend to forget: that most such users have no real interest - and will never have any real interest - in computers; will not develop any enthusiasm for computer software, logical or otherwise; and will never use many of the features offered by the larger packages. The true believer has a natural tendency to convert the infidel; I would rather think that in my father's house there are many mansions.
Our enthusiasm might best be constrained at least by the following: a) the capacity of the user - most of the people I know who use word processors know as much about computer operating systems as they do about the internal combustion engines in their cars (after two years, one professor still cannot copy a file from one disk to another, but he did finish his book), and therefore ease of limited learning and use is essential. Here I would emphasize "limited"; b) the interest of the user - most users have no interest in the special activities of text analysis or even data (3) -------------------------------------------------------------------- Willard has asked us, in the context of the word processing discussion, what makes good software good. One important aspect for me is that the software should offer flexibility of interface for the users. This means that the novice should have comforting prompts and menus, and the expert should be able to get where she or he wants quickly without having to fiddle with the same stuff the novice does. A second important aspect for me is that the different parts of the software package should be well integrated. My second "important aspect" means that I have little patience for word processing packages that have separate editing and formatting/printing programs. One doesn't need absolute WYSIWYG, but some aspects of WYSIWYG can save a person a lot of time and paper. From the computer's point of view, of course, editing and printing are separate functions. But this is 1988 and we should be beyond the stage of having software that is written from the computer's point of view. An integrated environment is a much more pleasant place to work, in my opinion. My first "important aspect" brings me to my favourite word processing package, hitherto not mentioned in the discussion. Lest people think that Word Perfect and Nota Bene are alone in the MS-DOS field, do not forget Microsoft Word version 4.
Besides being essentially WYSIWYG and more or less integrated in all its parts (taking care of my second "important aspect"), it offers the user three ways to do everything. The power user, of course, does not need all that help. For him or her, there are multiple options. Each function key has four meanings (which can be re-assigned under version 4) in combination with shift, control, and alt keys. There are macros, which can be "recorded" simply by typing what you want to do, or which can be programmed if one should have need for conditional branches or storage of values in variables while the macro executes. Finally, there is the mouse, which offers extremely easy ways to select and format text, to split windows (I often have three or more windows open), to move through the document, or even to operate the menu if you so choose. There are of course flaws. The spell checker is not all that well integrated into the package (although it is clearly better than some: at least it does not require you to go searching through the text for '#' symbols marking incorrect words, or something like that!). The printer drivers are hard to customize - they were clearly designed with programmers in mind rather than "lay" users. Fortunately, they are at least well documented. On the positive side once again, the outline processor is fabulously integrated. The thesaurus is much better integrated than the spell checker's "lookup" feature. The style sheets are a concept that I haven't seen on any other package in quite this way, and now I don't know how I could get along without them. There is no messy fiddling around with making formatting codes visible and invisible, as in Word Perfect. MS Word fits me well, with my high ratings for integration and flexibility. From what I have seen, Nota Bene will need one or two more major version releases before I would consider switching (although I do envy NB's database features).
And MS Word is no worse off than Word Perfect when it comes to using non-Roman fonts (I use the Turbofonts package). The reviews I've read of MS Word agree with my experience, that any discussion which includes WP and NB must also include MS Word version 4, since its word processing features and power are comparable. Sterling Bjorndahl BJORNDAS@CLARGRAD (bitnet)

From: Willard McCarty
Subject: Wordprocessors & minds (108) -- The withering away of the differences?
Date: Sat, 02 Apr 88 17:27:00 EST
X-Humanist: Vol. 1 Num. 1036 (1036)
(1) Date: 1 April 1988 (66 lines)
(2) Date: Saturday, 2 Apr 1988 04:05:32 EST (24 lines)

(1) --------------------------------------------------------------------

If I understand him correctly, Norman Zacour has argued that comparing wordprocessors is not likely to get us anywhere, since as these programs develop their differences will tend to disappear. `Everything that progresses must converge.' Hmm. Whether or not this is something devoutly to be wished, my experience with other people's software suggests very strongly that it is simply not true. The whole point of talking about `the mind in the software' is that programs, especially highly complex ones, have a discernible underlying structure, which I am wanting to call a `mentality'. Ornithologists say that crows, for example, can identify particular human beings no matter what kind of disguises they put on; I'd suppose that the crow discerns a characteristic rhythm of movement that ripples through the flowing cape and floppy hat. In any case, I'd think that the good software reviewer similarly can see through the features to the program's basic assumptions; and in my experience it is very seldom true that a program changes fundamentally from version to version.
There's a highly pragmatic reason for constancy of this sort: it's very expensive to rework radically the fundamentals of a complex program, whereas if the program has been well designed new features won't be terribly difficult to add. Slow routines can be rewritten in assembler, new algorithms adopted for doing this or that, but the basic ways of the program will tend to remain constant. Then, too, a successful program will have a committed group of users who for whatever reasons *like* those basic ways and would be upset to see them change. How people complain when even a single keystroke is redefined! For the software designer the principle of constancy would seem to have an important consequence. Forgive me if this is obvious, but isn't it true that initial decisions about a program are crucial to its eventual outcome? Another question springs to mind.

To my mind a software review that does not attempt to verbalize the mentality of a program is not worth reading. Should we not demand from software reviews standards comparable to those we expect of book reviews? I want to know what the author is getting at and how -- not just the topics. I realize that arguments over which wordprocessor is best are apt to add little to our knowledge about anything except each other's passions. As Zacour said, the believer is driven to convert the infidel, and few believers see any reason why they should understand the infidel's scripture. (It's the devil's work anyhow and therefore dangerous to mess with.) I'm suggesting here, however, that so much heat and so little light are produced by such arguments because the combatants don't know their weapons nor how to handle them. When they do, I suspect that they'll become more interested in the differing movements appropriate to their different implements than in fighting each other; those with truly inferior implements will eventually get discouraged and give up. So, the question remains, what makes good software good?
What is a program's `mentality' anyhow? Willard McCarty mccarty@utorepas

(2) --------------------------------------------------------------------

Sterling Bjorndahl's comments on the DOS version of MS-Word echo my own sentiments for that package on Macintosh. I've been using MS-Word 3.01 for the Mac for about a year now, and not in a million years would I return to earlier packages (including Macintosh WP's and WordPerfect, which I learned in the antique days of 1983 on an Osborne). The point that I would like to make is that Microsoft apparently came up with a product which could be implemented on both DOS and Mac machines, making it possible for us to share files regardless of our hardware configurations. I hope that idea is considered sufficiently important in the world of systems designers that we can get more software which is optimally executed on a choice of operating systems. Unfortunately, I don't know anyone who uses DOS MS-Word, so I cannot test how well connected the two versions of Word might be. Has anyone had any experience with moving back and forth on Mac & DOS machines with Word? Also, does anyone know whether Dragonfly has any intentions of producing NotaBene for the Mac? If not, was it not politically reprehensible for the Modern Language Association to endorse a product which could not serve a good-sized chunk of us? But then, MLA manages to be politically reprehensible a lot of the time, if I remember correctly its seesaw policies back during the good ol' Viet Nam days.

From: Willard McCarty (Wade Schuette)
Subject: Software mentality (44) -- good software mentality
Date: Sun, 03 Apr 88 18:35:23 EDT
X-Humanist: Vol. 1 Num.
1037 (1037)

(1) --------------------------------------------------------------------

As one who has done serious programming in something like 20 computer languages and a variety of operating systems and hardware, I have to agree with Willard McCarty that "good" software, whatever else it has, does indeed have a distinctive flavor and philosophy behind it. It really pays, for example, before programming in "C", to read Kernighan and Ritchie's book and see what was in their minds when they wrote it. It is just a lot more fun to use a language with a spirit, than some of the committee efforts that are sold as software today. I'm not sure, however, that "best" is a meaningful term, as it implies a single-valued measure that satisfies us all. We could similarly argue over whether an IBM or a Mac is "best", or which of 5 good friends is one's "best" friend. At the current time different machines clearly appeal to different groups, and most large corporations are finally realizing that they are going to have to live with a mixed-vendor environment, as no one package delivers all things to all people, nor is it likely to. If people can at least agree that "best" is indeterminate, then maybe we can move on to "best for the particular purpose of ...., all other things being equal." As far as languages go, almost anything can be done in almost any one of them, with sufficient fluency and effort. The question is, what can one do easily. The good ones allow you to build up both speed and a library of higher level constructs for personalizing them, so that you can, after some time, really forget about the package and focus on the problem you are attempting to solve. I guess I'll suggest one *component* to look at in evaluating a package is how nicely the package becomes transparent once you have used it a lot. Packages with a unified philosophy and spirit are much easier to internalize and fly with than tacked-together spaghetti.
But, like good friends - why does there have to be a "best"?

From: Willard McCarty
Subject: Good software; wordprocessors (198)
Date: Tue, 05 Apr 88 22:15:15 EDT
X-Humanist: Vol. 1 Num. 1038 (1038)
(1) Steve DeRose, "Wordprocessing virtues", Tue, 05 Apr 88 10:59:11 EDT (94 lines)
(2) sano@VLSI.JPL.NASA.GOV (Haj Sano), "quality software", Mon, 4 Apr 88 15:54:05 PDT (33 lines)
(3) tektronix!reed!johnh@uunet.UU.NET (John B. Haviland), "WORD on MSDOS and Mac", Mon, 4 Apr 88 18:10:26 PDT (48 lines)

(1) --------------------------------------------------------------------

It seems to me that the discussion of word-processor (WP) virtues is mixing a number of questions. Perhaps breaking them apart will help. The first key distinction is *editing* vs. *formatting*. Although I think they should be integrated, the distinction is not artificial, nor is it (as someone seemed to imply) a new and computational distinction. Indeed, for the serious author too poor to buy a phototypesetter, they remain distinct. Editing is what authors (and their consciences, the copy editors) do; formatting is what graphic designers do. With rare and sometimes wonderful exceptions, authors are poor graphic designers, and vice versa. For letters and minor documents, just about any WP will get by, though (obviously) the "nicer" (vagueness intentional) it is, the better. For a book or other major publication, there are two options: (a) typeset it yourself, learning the art of graphic design (among other things) and using lots of time and expense, or (b) have someone else (presumably the publisher) take care of it. We almost always do (b) for serious books.
In which case the sophistication of one's WYSIWYG display is of little importance: why does one need widowing features if none of one's page breaks will be the same in the end? Likewise for most of the high-end features which distinguish particular WPs. It seems to me we (i.e., authors) are being taken to the cleaners. We used to have publishers to do things for us; now we have to do the work. The display should be pretty enough not to impede authoring, but more is a bonus, nice but with little relevance to the actual task at hand. To which I say: Tag a paragraph as a paragraph, and so on for the other textual elements that *you as an author* find important, and leave the rest of the work to your publisher, so you can get on with scholarship. (This is what "generic" or "descriptive" markup in general, and SGML in particular, is for.) If you have a system that lets you do that, great; if not, look for a new system. As for the scholarly task, namely the editing and content-production part as opposed to arranging ink in pleasing patterns, consider: 1) How easy is the interface to learn (i.e. for beginners)? On this, clearly anything with menus beats anything without, due to the established cognitive differences between recognition and recall. Let me amend that slightly to "menu or menu-like" for safety, but I mean to exclude systems which depend on memorizing 50 function keys and variants. 2) How convenient is the interface for experienced users? Here so-called "hot-keys" seem to me the clear winner. Once you're good with it, an editor like Unix's "vi" or any of the million-key PC editors will get a lot done fast. Alternatively, a large-scale system with complex syntactic commands can do quite well (e.g. CMS XEDIT). 3) How easily can I hand off my text to a publisher and be done with it? For this, any dependency on particular formatting and layout is a drawback.
Files should be as simplistic as possible; this dictates avoiding anything which is tied to the features of your WP or printer, and leads directly back to descriptive markup. Ideally, you should not have to know what the house style *is* in order to work with a publisher (please note that I have been talking about unfair requirements being placed on authors; if an author *wants* also to be a typesetter, that's fine; but few want to). Allow me to agree emphatically that software reflects a "mind" -- see Weinberg's **excellent** "Psychology of Computer Programming" on this, and then parts of Brooks's "Mythical Man-Month". At least, some software has this property; but some is designed by committee, with the consequent cybernetic schizophrenia; much more is designed well, but dies of accretion because the original vision wasn't great enough to encompass new thoughts. In that case, the new thoughts eventually force their way in, but they result in death or decrepitude rather than growth. One might consider analogies from the history of religion, philosophy, and science. Steve DeRose Brown Univ. and the Summer Institute of Linguistics

(2) --------------------------------------------------------------------

Whenever I encounter a clumsy tool, I always wonder if it was designed to be used by a reasonable human. This applies to a hammer, heat gun, car, hockey stick, calculator, and even software. When something is well designed and well built, it might not be noticed right away, but if it is poorly designed or built, it's usually apparent immediately. Good products are seldom the result of accidents. Someone (or some committee) had to think the problem through, anticipate user needs, and perform some clever designing. Then, it was tested to search for unanticipated problems and possible non-standard uses. After several iterations, a product is released.
If things were planned far enough in advance, future evolutionary enhancements are possible until the fundamental design has exceeded its useful life cycle. Two examples of long-lived and evolutionary design are most German cars and motorcycles, and VAX/VMS. In both of these cases, the basic design was sound, and future enhancements were taken into account. In this high tech age, not too many things are around for very long. Software is no different from any other tool. When you use a program with a good interface, it feels good right from the start, and continues to feel good as you develop expertise. The problem with much of the software available is that human factors were not integrated into the design, and the hacker mentality prevailed: "code as you go along; who cares how it looks or feels as long as it gets the job done". My philosophy is don't buy it unless it's of good quality. Haj Sano sano@vlsi.jpl.nasa.gov (ARPAnet)

(3) --------------------------------------------------------------------

Word on Mac and MS-DOS

I have been studiously staying out of the "which is best" fray for wordprocessors, although I agree that there is a certain delight in the meeting of the minds that goes with learning somebody else's program--whether a mere editor or a whole programming language. As one who has spent more weeks (if not months) than I care to remember writing my *own* editor (in Z80 assembler back in CP/M days), I also know only too well that insisting on meeting my *own* mind in the software isn't all it might be cracked up to be: one can waste a lot of time trying to tune a program to one's fussy desires. But I wanted to address Patrick Conner's specific questions about transportability. For serious editing, I now use a variety of tools, mostly of an EMACS flavor--generically similar editors abound on all the machines I routinely use: MSDOS, Mac, and Vax.
For wordprocessing, I use MS Word--Version 4.0 under MSDOS and version 3.01 or whatever on the Mac--and here I only wish to add a footnote to Bjorndahl and Conner. I use Word for all the reasons they mention. I particularly like style-sheets, which let me totally alter the formatting parameters of a printed document with three keystrokes. I also make promiscuous use of MSDOS Vers. 4's macros, especially useful for converting documents from one formatting system to another, as well as for other more or less complex editing tasks. I will stick with these programs without regrets until someone tells me of another word processor that provides compatibility between MSDOS and Macintosh. Here my experience may be useful to others: although conversion between the Mac and MSDOS Word formats is not perfect, it is by far the best thing available of its kind. I routinely write formatted documents on my MSDOS machine at home, using font and format information designed for the Apple Laser Printer, incorporated into a variety of style sheets. I transfer the resulting binary files to the Mac in my office, fire up Word there, and it automatically converts the file perfectly, incorporating the details of the style-sheet I wish to attach. From there, I can use the superlative laser-printing capacities of the Macintosh without having to put up with what is, for me, its relatively sluggish performance in other areas. (In reverse the process is not quite so

From: Willard McCarty
Subject: Notices (33)
Date: Wed, 06 Apr 88 20:50:35 EDT
X-Humanist: Vol. 1 Num.
1039 (1039)
(1) Date: Wednesday, 6 April 1988 0033-EST (8 lines)
(2) Date: Wed, 6 Apr 88 15:42:47 PDT (8 lines)

(1) --------------------------------------------------------------------

Issue 18 of the regular column "OFFLINE", with material of interest to computing humanists, is now available on the file-server s.v. OFFLINE 18. This issue has been written by Robert A. Kraft and John J. Hughes.

(2) --------------------------------------------------------------------

"Beyond Word Processing: A Series of Seminars on Humanities Computing" by Dr. Tim Maher, Humanities Computing Specialist, Univ. of California, Berkeley, April-May 1988, in Berkeley, Calif. A description, s.v. HUMCOMP SEMINARS, has been posted to the file-server.

From: Willard McCarty (Robin C. Cover)
Subject: Request for information (56) -- Academic Computing: Computer Labs
Date: Wed, 06 Apr 88 20:53:02 EDT
X-Humanist: Vol. 1 Num. 1040 (1040)

(1) --------------------------------------------------------------------

At least 15% of HUMANISTS have direct responsibility for supervising institutional "academic computing," and nearly as many direct the affairs of student/faculty computer labs. May I ask for your help? I am responsible for helping design and set up a new student microcomputer lab at our graduate school. The lab will have about 20 workstations (IBM and Macintosh microcomputers), several printers and one full-time staff person. The workstations will be capable of serving as terminals on a campus network (library system; online database services), but users will not have direct access to mainframe or minicomputer CPU, at least initially. It would be helpful if I could obtain copies of documents that describe the services of computer labs at other institutions. I suppose most labs have summary sheets for users listing hardware configurations, software support, schedules for tutorials, hours of operation, printing fees, etc.
Information of this sort would be useful to me even if the computer lab contains primarily terminals connected to the campus mainframe or network. If I can count on the good will of fellow HUMANISTS to supply me with a copy of this minimal documentation (which probably exists in every lab) ... I am equally interested in longer documents which would be useful in thinking -- more broadly -- about computer labs as a part of campus computing services. I realize that support for academic computing varies greatly with the size of the institution, curricular offerings, administrative & financial support, etc., and that computer labs are not as essential in highly networked environments. If anyone has internal memoranda, spec-sheets or working papers that were used in determining the PURPOSES/GOALS/FUNCTIONS of the campus computer labs, these documents would be of great assistance to me. Communications by postal or email are equally welcome. Thanks to each of you who might be willing to cooperate in this request; if you cannot answer personally, perhaps you could at least forward the request to support personnel in the computer labs. Professor Robin C. Cover ZRCC1001@SMUVM1 3909 Swiss Avenue Dallas, TX 75204 (214) 296-1783

From: Willard McCarty
Subject: Software and mind (78)
Date: Wed, 06 Apr 88 20:55:30 EDT
X-Humanist: Vol. 1 Num. 1041 (1041)

(1) --------------------------------------------------------------------

1 Does software reflect the mentality of its developer? The word 'mentality' is both too weak and too strong. It is too strong in presupposing that the individual personality of the software developer could be reflected by a piece of software just as it could by a piece of art or literature. It is too weak in not discriminating among the various domains that guide software development.
2 The development of educational software is guided by such domains as software engineering--'structured design'--curriculum theory and design--'individualized instruction'--and cognitive psychology--'mental models', 'expert strategies'. Similarly, in the design of computer languages, theories of what a computer program is and does guide their development. 'A program=data+algorithms' guided the development of PASCAL. 3 The more appropriate question seems to be: what theories and domains are most relevant for specific applications? In word processing, what theories have guided the development of various software systems? I have the suspicion that word processing packages, for the most part, grew out of attempts to improve clumsy main-frame line editor packages. Line editors, originally, served the function of editing program instructions rather than composing literary products. People used these program editors, beyond their original purpose and design, to write notes, and then tried to write essays. Soon designers added various tools to help them achieve these secondary uses of line editors as main goals--such as various supplementary commands for formatting hard copy. Somewhere along the line, developers decided to make the 'user interface' a bit more 'friendly' by adding a 'screen' edit mode on top of the original line editor. Here is where word processors speciated from the original line-editor ancestor: the vision of editing text, not 'line-by-line', but 'screen-by-screen'. Of course, some of the precursor elements remain, such as odd commands for driving hard copy. Unlike most other software, word processing packages developed in an evolutionary manner as a by-product. Theories of software design, of the nature of the writing process, only came late on the scene to guide current modifications of pre-existing structures.
Indeed, the understanding of the writing process will only now be approached, now that we have word processors as an alternative to the technology of pen+paper. Furthermore, it is the use of full-screen functions, and of 'windowing', that will most help us to understand the underlying mental processes used in writing and thinking, and that will guide the development of word/text processing. Since nowadays most packages provide for scholarly desiderata such as creating indexes and footnotes, the leading edge will be the ability to produce publication quality text ('desk-top publishing'), WYSIWYG, multiple windowing, and 'importability/exportability' between text processing and graphic processing, and more generally: the distance between the intermediate process of composing/editing with a computer and the desired final product. For instance, LOTUS MANUSCRIPT has a structured edit mode where text can be created and edited by (numeric) section/sub-section. This is very close to the type of final product I mostly

4 So, what desiderata should we present to word processor developers? In general: how close are the intermediate tasks of editing text on the screen to 1) the various final products we want to produce (structured texts, integrated text and graphics, fancy fonts...) and 2) the processes we use in thinking with writing (zooming to the most abstract structure and flipping among parallel texts in different 'windows')?
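[The evolutionary story above -- line editors that addressed text strictly one line at a time, later wrapped in a full-screen mode -- can be made concrete with a sketch. The following is a hypothetical, minimal line editor in Python; the class and command names are my own illustration, not taken from any package discussed here. It shows how every operation in such an editor must pass through a current-line pointer, the very restriction that full-screen editing removed. -- ed.]

```python
# A hypothetical, minimal line editor in the spirit of ed(1):
# every operation addresses the buffer one line at a time.
# Names and commands are illustrative only.

class LineEditor:
    """Holds a buffer of lines and a current-line pointer."""

    def __init__(self, lines=None):
        self.buffer = list(lines or [])
        self.current = len(self.buffer)  # 1-based; 0 means empty buffer

    def goto(self, n):
        """Typing a bare line number: make line n current."""
        if not 1 <= n <= len(self.buffer):
            raise IndexError("line out of range")
        self.current = n

    def append(self, text):
        """'a': insert text after the current line; it becomes current."""
        self.buffer.insert(self.current, text)
        self.current += 1

    def delete(self):
        """'d': remove the current line, if any."""
        if self.current:
            self.buffer.pop(self.current - 1)
            self.current = min(self.current, len(self.buffer))

    def print_current(self):
        """'p': show the current line -- the only display a line editor has."""
        return self.buffer[self.current - 1]


# Usage: the user never sees the whole text at once; each command
# manipulates one line relative to the pointer.
ed = LineEditor(["first line", "second line"])
ed.goto(1)
ed.append("inserted after line one")
print(ed.print_current())   # the newly inserted line is now current
ed.delete()
print(ed.buffer)
```

A full-screen editor is, on this view, a display layer grafted onto exactly this buffer-and-pointer core, which is why the "odd commands for driving hard copy" and other precursor elements survive underneath it.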
From: Willard McCarty
Subject: Notices (73) -- Study Group on the Structure of Electronic Text; HyperCard stack with report on HyperTEXT workshop available
Date: Fri, 08 Apr 88 00:05:38 EDT
X-Humanist: Vol. 1 Num. 1042 (1042)
(1) Peter.Capell@CAT.CMU.EDU, Date: Thu, 17 Mar 88 12:11:28 EST (20 lines)
(2) Jakob Nielsen, Tech Univ of Denmark, Date: Sun, 06 Mar 88 22:25:26 DNT (36 lines)

(1) --------------------------------------------------------------------

Carnegie Mellon University's Study Group on the Structure of Electronic Text (SGSET) will sponsor a 2-day conference, May 23-24, 1988, with the theme: "The Coming of Age of Electronic Text" The conference will include discussions of the "practical aspects of coping with the difficulties of making large amounts of text available for general distribution," says William Arms, University Vice-president for Academic Services at Carnegie Mellon. "The program will consist of five parts, each of which addresses what we feel are among the most pressing issues in moving electronic publishing forward: real-world experience in electronic publishing, the capture of information, electronic text processing, implications of structuring text for retrieval, and the economics of information. SGSET's aim is to bring together researchers, librarians, publishers, and information vendors and brokers in order to facilitate the distribution and use of electronic text." [The complete posting is now on the file-server, s.v. SGSET SEMINAR.]

(2) --------------------------------------------------------------------

[Extracted from the IRList 4.18 with thanks.]
The stack does not contain the actual papers presented at the workshop because of copyright problems. It only contains stuff written by myself. It also includes 3 earlier reports which I refer/link to from the primary report - the reason for this is to give some feel for the hypertext situation even in a situation where I can only publish my own stuff. . . . [Note: now that I have permission, and have received a copy in the mail, I am happy to recommend this - there may be some later distribution of this and related materials through ACM as part of their new Database Products series (see Feb. CACM article by P. Wegner). Meanwhile, see details below. - Ed.] My report on the recent HyperTEXT workshop is now available in a hypertext version in the form of a 400 K HyperCard stack. To read it, you will need a Macintosh and Apple's HyperCard program. To get a copy of this electronic document please send two double-sided Macintosh diskettes to the following address. One diskette will be returned to you with the hypertext report and the other will be kept to cover postage and handling. Jakob Nielsen Technical University of Denmark Dept. of Computer Science Building 344 DK-2800 Lyngby Copenhagen Denmark

From: Willard McCarty
Subject: What has happened to HUMANIST?
Date: Fri, 08 Apr 88 00:07:54 EDT
X-Humanist: Vol. 1 Num. 1043 (1043)

Dear Colleagues: A fellow HUMANIST just this evening sent me the following comment:

-------------------------------------------------------------------

I think we might have made HUMANIST too formal and stiff. Unless my mailer has been screwing up, I have noticed an extraordinary reduction in messages of all kinds. At the risk of encouraging anarchy, you might consider relaxing the content organisation and suggesting that trivial messages won't be nuked, etc. The bursts of heavy traffic -- certainly a problem -- are more than made up for with a constant and usually interesting chatter.
The experiment to formalize HUMANIST has, in my opinion, not worked out all that well.

-------------------------------------------------------------------

I'm not sure that I wholly agree, but I certainly know what he means. There may be a simpler and less damning explanation for the radical decline in interesting argumentation on HUMANIST, however. At this time of year, who has the time to say much of anything except "later"? Nevertheless, the higher degree of organization, the sorting of messages into categories, brings along a subtext -- which some of you have found congenial, and others have not. I have been hoping that the earlier vigour would not be lost, indeed, that all sorts of discussion would continue despite the fact that the one big room has been subdivided into several smaller ones, or rather is daily subdivided into whatever rooms seem to be required. Please be assured that no messages, trivial or otherwise, have been "nuked" or even censored. In the 11 months since HUMANIST began I don't recall ever having restrained or substantially altered a single message (I have corrected the occasional typo), nor do I recall ever having thought that I should. As far as I am concerned HUMANIST is still exactly what we make it day by day, according to the principle enunciated by Blake in that poem from his Notebook,

He who binds to himself a joy
Doth the winged life destroy;
But he who kisses the joy as it flies
Lives in Eternity's sun rise.
(Nos. 43 & 59, Keynes)

So, let there be much kissing of joys on HUMANIST, as well as among humanists! There is no reason whatever that we cannot have both the vigorous argumentation of old and the exchange of useful information neatly classified by topic.
Willard McCarty mccarty@utorepas

From: Willard McCarty
Subject: Notices (46) -- Jakob Nielsen's HyperCard report; 11th ACM-SIGIR Conference
Date: Fri, 08 Apr 88 23:05:44 EDT
X-Humanist: Vol. 1 Num. 1044 (1044)
(1) Date: 08 Apr 88 14:44 -0330 (13 lines)
(2) Date: 30 Mar 88 07:56:38 GMT (14 lines)

(1) --------------------------------------------------------------------

Humanist members may be interested to know that this report is available on Bitnet from the server MACSERVE@PUCC as HYPERCARD-HYPERTEXT-WORKSHOP-PART*.HQX.1 where "*" is a wild-card character representing one of the numbers 1 through 4. (In other words, it's in four parts...) I have downloaded it and looked through it with some interest. I presume that HUMANISTs with access to the Info-Mac archives at sumex-aim could get it from there, too. David Graham dgraham@mun.bitnet

(2) --------------------------------------------------------------------

PROGRAM OF THE 1988 ACM-SIGIR CONFERENCE: 11th INTERNATIONAL CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, JUNE 13-15, 1988, GRENOBLE, FRANCE [Full announcement now on the file-server, s.v. SIGIR CONFRNCE.]

From: Willard McCarty (Wade Schuette)
Subject: Forum (147) -- A bit of humor; Refrigerator SIGS, anyone?; Whither HUMANIST; Re: What has happened to HUMANIST?
Date: Fri, 08 Apr 88 23:20:05 EDT
X-Humanist: Vol. 1 Num. 1045 (1045)

[One of the new aspects of HUMANIST that seems to have provoked the most criticism and nostalgia ("longing for return") is the absence of relatively unstructured discussion.
Here is an attempt to provide a subdomain for this kind of thing, to be treasured or discarded as you see fit. If you have an opinion and the time in which to voice it, please do so! -- W.M.]

(1) Date: Fri, 8-APR-1988 11:21 EST (63 lines)
(2) Date: Fri, 8-APR-1988 07:15 EST (28 lines)
(3) Date: Fri, 08 Apr 88 09:12:04 EDT (27 lines)

(1) --------------------------------------------------------------------

Here's some thought-provoking humor in regard to computing in general. Why do refrigerators NOT need user groups, pray tell? First of all, refrigerators generally WORK, and work well. Their flyback transformers do not burn out. They do not require upgrades every year, whether the basic hardware is obsolete or not. Refrigerators usually defrost themselves; their cold solder joints do not give out; they are not sensitive to power line spikes, or static electricity. Your average refrigerator has modest design goals. When you open it, it does not say "Welcome to the Refrigerator," or blink its lights. There is probably a simple control system, no megabyte of RAM. You do not need your refrigerator to be making toast at the same time it is keeping your leftovers from spoiling, controlling your VCR, or turning the front porch lights on and off. If this seems somewhat obvious, consider this: By 1995 a new automobile will probably contain the equivalent of a Motorola 68000 processor, and 1 Meg of RAM. It may well speak to you, and contain several bit-mapped displays. Almost every component, from the engine to the brake system, will be microprocessor controlled. And if those components fail, your vehicle may lose maneuverability, or worse. Can we be so sure that there will not be AUTOMOTIVE user groups in 1995? Or that refrigerator user groups will not some day follow suit? Here's to a 1973 Buick, and my Kenmore fridge!
Disclaimer: I have no business relationship with Kenmore, or any other refrigerator manufacturer. I am a knowledgeable refrigerator user, but do not earn my living from giving refrigerator seminars, or from leading the BMUG refrigerator SIG. ------------------------------------------------------- Maybe you can help me. I just downloaded a can of Black Olives (Colossal, I think) from my refrigerator. When I tried to open it, the 'fridge told me "A utensil cannot be found to open this can." What gives? BTW, it says 'NO PITS' on the side. Thanks in advance. ---------------------------------------------------------------------------- You have to run the Olives through CanHex first. ---------------------------------------------------------------------------- Your fridge "bug" report was not specific enough. How do you expect to get answers without telling me what version refrigerator you are using? It sounds like an incompatibility with other units residing in the Fridge, or with Kenmore's new Fridge Input/Output System (FIOS). Yes, I know we did not follow the Kenmore guidelines, but it's their fault anyway. Have you tried reformatting the Fridge? -------------------- (Your turn!) (2) -------------------------------------------------------------------- As a member of HUMANIST all of two weeks, I can hardly speak of trends, but I can compare HUMANIST to other services. Perhaps we need a poll, or simply to pool our knowledge, on what makes a truly interesting electronic bulletin-board (BB) system. This can be as hot a debate as what makes a great Word Processor, and it is a timely topic. (I also care because I'm exploring the idea of running my own BB for all our school's alumni and friends, and am trying to start comparing systems.) Has anyone else used other systems with nicer features that we could possibly copy?
I really enjoy COMPUSERVE (not free, alas), with its wide range of SIGS and the ability to leap in, find "threads" of interest, and follow the exchange of messages related to just those topics in the order they were created. There was also a great deal of small talk, (CB radio type chatter), that DID make the system feel a lot more relaxed and was a lot of fun to watch. Of course, that involved remote log-ins. Maybe no one here has mastered send/remote and we just need a nicer front end for the VAXEN out there as well as the IBM. (I volunteer to write it if that's the problem.) Other ideas, anyone? ... Wade (3) -------------------------------------------------------------------- [The following is extracted from a note sent privately to me. Occasionally I'll do this, omitting the sender's name, when the message seems worth broadcasting and does not either damage that person's reputation or slur anyone else's. I'm quite prepared to be castigated for this practice.] It's a tough business, trying to encourage people to participate and yet preventing electronic mail overdose, which, I fear, some of us suffer from, especially when we are involved in several (somewhat unrelated) networks. The temptation simply to purge my reader when I get back from a one week absence and discover 150 files in it is very great indeed. I would suggest that it is not the case that we do not appreciate the somewhat greater formality in the latest edition of Humanist, but rather that formality has intimidated some of us who would contribute more casual remarks, but find them not substantial enough to elaborate or articulate with such reasonableness as to send them on to HUMANIST. We can't just send short notes saying "I think Word Processing is a highly overrated topic (not that I do) because the human brain is adaptable and can make most well-designed tools do what we want them to." That doesn't sound scholarly enough. 
From: Willard McCarty; Steve DeRose Subject: Aims of software (49): Reply to S. Richmond on software Date: Fri, 08 Apr 88 23:40:16 EDT; Fri, 08 Apr 88 10:15:50 EDT X-Humanist: Vol. 1 Num. 1046 (1046) (1) -------------------------------------------------------------------- > Since nowadays, most packages provide for scholarly desiderata such as > creating indexes and footnotes, the leading edge will be ability to > produce publication quality text ('desk-top publishing'), WYSIWYG ... and a number of similar points. I must disagree; it seems to me this view, while prevalent, fails to acknowledge that one can do new things with literature and computers. As Jim Coombs, Allen Renear, and I pointed out, "the dominant model [tragically] construes the author as typist or, even worse, as typesetter. Instead of enabling scholars to perform tasks that were not possible before, today's systems emulate typewriters." (CACM 11/87, p. 933). Or as Ted Nelson more colorfully put it, when asked if he was pleased with how new WYSIWYG systems fulfill all his dreams about hypertext and the online docuverse: "NO! WYSIWY*G* -- GET, where? On *paper*! The Macintosh is a *paper simulator*. Millions of virtual trees cut down; it's the deforestation of the American mind." (Keynote address at Hypertext '87 conference, UNC Chapel Hill). I believe the cutting edge is not desktop publishing, but desktop access to literature. Writing brought knowledge to those not alive at the same time; alphabetic scripts brought widespread literacy; the printing press brought standardization and affordability of single books; optical storage and hypertext can bring affordability and the ability to navigate effectively to entire *libraries*.
One can cram over 1000 books on a single disk; that means I could have a major research library (say 2 million or so volumes to start) on the shelves of one small room, at a media cost (not counting publishers' profit, I'm afraid) of perhaps $10,000. At that point, I'll care about page breaks and footnote placement about as much as I now care about paper selection for my printer (i.e. a little). Comments? Steve DeRose From: Willard McCarty; Itamar Even-Zohar Subject: Report on NB 3.0 (21) Date: Sun, 10 Apr 88 18:31:12 EDT; 10 April 1988 X-Humanist: Vol. 1 Num. 1047 (1047) (1) -------------------------------------------------------------------- I received Nota Bene 3.0 some time ago and have written a review (distributed via HUMANIST and NOTABENE LIST) of its achievements and problems. This is a corrected updated version of that document, based on a longer experimentation with the program. In this version, Hebrew works all right the way I customized the Beta version before, but we still expect the more advanced Nota Bene version. So I will NOT refer to any specific problems with Hebrew in this document. [Now available on the file-server, s.v. NOTABENE REPORT2. It is more than 600 lines long.] From: Willard McCarty; PROF NORM COOMBS; Malcolm Hayward Subject: Forum: New Humanist format; A Cold Day in the Kitchen Date: Sun, 10 Apr 88 18:34:16 EDT X-Humanist: Vol. 1 Num. 1048 (1048) (1) Date: Sat, 9 Apr 88 16:55 EST (29 lines) (2) Date: 09 Apr 88 12:06:21 EST (13 lines) (1) -------------------------------------------------------------------- Like many humanist members, I have mixed reactions to the recent change in format. In general, I appreciate not getting 20 to 40 messages a day. However, I think it is possible that I am very quick to throw away a series of connected messages on the basis of the title of the first one in the series.
It certainly takes less of my time. On the other hand, I keep feeling like I may well be missing something. I do not know how much others are disturbed by one feature of the mass mailing as it now exists. Each item in the mailing still contains the whole screenful of bitnet garbage. All I need is the name and node etc. of the sender. All those other lines just interfere. Of course, because I "read" with a speech synthesizer, I must "endure" all of the garbage. Perhaps the rest of you can visually skip it and pick out the line of interest without becoming annoyed with the REST! I wonder if the software could not identify the meaningful line and throw the rest away before mailing us the package. In general I vote for the change. I do get another digest without all the electronic trivia. Whether it is done by machine or hand I do not know. I would think programming that would not be too difficult. Norman Coombs Rochester Institute of Technology NRCGSH@RITVAX.bitnet (2) -------------------------------------------------------------------- An addendum to the last message. KNOCK KNOCK. Who's there? KENMORE. Kenmore who? KENMORE BE SAID ON THE SUBJECT OF REFRIGERATORS? Last time I went to my Kenmore for a byte (too obvious), I opened the freezer door too quickly and that stupid cheap plastic bar that holds the frozen orange juice cans and stuff broke right off. It was the last remaining one that had not broken off. Anyone who has a Kenmore will know what I mean. I thought, maybe that BAR was too COLD. I could go no farther. From: Willard McCarty; Sterling Bjorndahl - Claremont Grad. School; Malcolm Hayward Subject: Software, mostly wordprocessing (102): aims of software: pro WYSIWYG, con DeRose; On Authors as Typesetters and so on Date: Sun, 10 Apr 88 18:36:17 EDT X-Humanist: Vol. 1 Num.
1049 (1049) (1) Date: Sat, 9 Apr 88 11:37 PST (52 lines) (2) Date: 09 Apr 88 11:50:01 EST (32 lines) (1) -------------------------------------------------------------------- I must disagree with Steve DeRose and others who seem to denigrate WYSIWYG. I agree that electronic publishing is in the process of making paper publishing obsolete, but will we not care how our electronic documents look? I do hope we won't be cursed with text-only, 80 column screens forever. If so, give me paper!! Preparing a document for publication means taking some care with how it looks. Not that I want to control every last detail - but my typescript provides one rather effective way of communicating my ideas to the publisher's graphic design department, with whom I can subsequently negotiate. Furthermore, most people in the humanities do a lot more than publish books. Most of them also teach courses. Having a WYSIWYG word processor makes the production of nice-looking handouts much more pleasant (especially since my free-hand drawing capabilities are quite poor). And even if paper hand-outs should one day be replaced by one networked work-station per student, I hope that my electronic handouts will not be limited to text-only, 80 column screens. Here too, I'll vote for WYSIWYG. In addition, many folks around here distribute papers in seminars. These papers may never be published, or may go through several versions before being published. WYSIWYG helps me prepare this kind of document so that its visual appearance reflects the high quality of its contents :-). Remember, neatness *always* counts, like it or not. One doesn't need WYSIWYG in order to be neat and tidy, but I find that it helps me. So, to come back to DeRose versus Richmond, one "cutting edge" of software where I live is how to make the things I produce look good more easily. I am interested in the latest advances in word processing software, graphics software, printers.
Since I am not a computer professional, these things are very important to me. DeRose is of course right when he says that the technological "cutting edge" for humanists will be the electronic library. I do in fact use texts on CD-ROMS - but I find that this is still at a very primitive state of development compared to what it will be ten years from now. I have no way to make notes in the margins of the CD-ROM texts I use. There are no accompanying illustrations. There is no way to break the 80-column barrier. And it will be several years before this technology is widespread enough to affect most of the scholarly community. Thus, the small advances in WYSIWYG-on-paper are not to be sneered at, at least for the medium term. This is something that is very relevant to us. Sterling Bjorndahl BJORNDAS@CLARGRAD.BITNET (2) -------------------------------------------------------------------- I think Steve DeRose is making some good points here about the kind of overkill of features available in editing/wordprocessing/desk-top publishing packages. I edit a journal and have recently been getting some submissions--nicely laid out, laser-printed--that honest-to-God look better than they will in print, just about. But those looks don't really matter because the papers have to be read for content anyway, and if they are going to be published, they will have to be reformatted for the journal. Another factor enters in here: in the last year or two the software for typesetting machines has gotten much better at interfacing with ordinary word-processing packages. Whereas two years ago I had to recode material for the typesetter (,, etc.), now the typesetting program will automatically convert my WordPerfect files, picking up paragraph indents, italics, and so on. All I need to do is specify type fonts, point sizes, and indicate where a new font is to be used (a title or works cited). 
I guess the point is that, as long as research continues to be distributed in this more or less traditional way, wordprocessing packages have gone about as far as they need to go in their formatting capabilities and typesetting packages have come about as close as they need to come to wordprocessing software to make it easy to go from an author's ideas keyed in to his or her machine to a copy of a journal in the hands of a reader (or the back stacks of a library). A more complex (for me) issue is whether to plan to continue publishing this way. I seem to remember there was a plan afoot in the Canadian Parliament a few years back to force Canadian scholarly journals to convert from a paper to an electronic medium. Was that right? Is that a live issue? From: Willard McCarty Subject: One person's junk (20) Date: Mon, 11 Apr 88 18:28:44 EDT X-Humanist: Vol. 1 Num. 1050 (1050) The following arrived this afternoon as a private note to me. I pass it on for your consideration. `I vote for a little "censorship" in the sense that there ought to be a "junk" mail folder with subject list so one can pick and choose. The first refrige note was cute, but by the second one I was already wishing it hadn't gotten started.' If you think that this is worth discussing, please do. Willard McCarty mccarty@utorepas From: Willard McCarty; Jeffrey William Gillette Subject: Interfaces: the appeal of the mouse (78): Of Mice and Men Date: Mon, 11 Apr 88 18:37:47 EDT; Mon, 11 Apr 88 15:51:03 EDT X-Humanist: Vol. 1 Num. 1051 (1051) (1) -------------------------------------------------------------------- A colleague and I recently discussed word processors, their interfaces, and the future. In the course of the discussion a rather stereotyped (if vigorous) debate ensued on the merits of a mouse-and-menu interface (what I call the "MacWindows" interface) vs. a more conventional command-oriented interface (I believe the test case was Nota Bene). I make no secret of my infidelity towards my IBM PC clone.
I feel that the advent of the Macintosh was the single most important event of the 1980s, and that the salvation of the PC is in Windows, the Presentation Manager, and similar products. In days past I have condescendingly dismissed my colleague (and other friends with a similar point of view) as one who, perhaps, had little interest in the technology beyond getting a particular job done, or, perhaps, had never used a superior MacWindows program, or, perhaps, for some other reason did not realize that the ascendancy of the Mouse Age is inevitable. After many such conversations, I am almost prepared to own that there are intelligent, informed, computationally savvy users who have tried the MacWindows approach, and found it unsatisfying. The questions I should like to pose to humanists are two: 1) Why is it that some people (myself included) swallow the MacWindows interface hook, line and sinker, while others find such an approach unconducive to their work? What is the difference in temperament that provokes such opposite responses? 2) Is it possible to please both classes with a single product? What type(s) of interface(s) would such a word processor have? ----------------------------------------------------------------- I should like to suggest an observation on the subject (which will, I hope, occasion some debate and refutation). When the Mac was first introduced, it was touted as the computer "for the rest of us." This slogan notwithstanding, the first, largest, and most loyal group of Mac users are the computational sophisticates I frequently refer to as "hackers." At the risk of pressing my point beyond proper bounds, I will add that, although I know people who have purchased Macs as their first computer, I cannot think of a one who was not a "quick study" with computers. I know many technologically "average" people who use PCs, but it seems to me that it is a technologically elite group that gravitates toward the computer "for the rest of us." 
To complete the thought, my impression is that the same situation holds with respect to the Microsoft Windows product. Would anyone care to venture an explanation as to why it is largely engineers and hackers who are pushing forward the MacWindows standards, frequently against the objections of the very users (both "power users" and neophytes) whose cause they purport to champion? From: Willard McCarty; Ron Zweig Subject: w-p, typesetting, & the price of technology (48) Date: Mon, 11 Apr 88 18:41:41 EDT; Mon, 11 Apr 88 22:41:49 IST X-Humanist: Vol. 1 Num. 1052 (1052) (1) -------------------------------------------------------------------- In the ongoing debate on w/p, there have been a number of references to the increasing ease with which new w/p software allows authors and editors to have text typeset directly from disk. A number of discussants have been using these techniques, as I have, for the past few years. I have an observation to offer on this new task, and wonder if others might have reached the same (or other) conclusions. Five years ago, the direct interfacing with typesetters from a w/p disk was cumbersome, and it usually required a lot of mutual patience and practice, not to mention the entering of typesetting codes into the w/p file. But it was worth it because it (i) offered significant savings in cost - 40% (my facts are drawn from my experience in Israel, but I think they are probably valid elsewhere too), (ii) it was, or at least promised to be, much quicker, (iii) it was definitely hi-tech. Since then, the techniques have become widespread, much simpler and far more likely to work successfully. BUT ... the price advantage has disappeared as typesetting costs have crept back up. So much so, in fact, that this novel use of w/p technology has simply redefined the division of labor between typesetter and editor/publisher/author. The latter do more of the typesetter's work and have little gain to show for it.
True, the quicker turnaround and the lessened aggravation are worth something. But has the introduction of computing transformed our relations with typesetters to our benefit or theirs? As desktop publishing intrudes more and more into our work, I begin to feel that the situation will repeat itself. We will not only do all the keyboarding and introduce the major codes, but we will also end up doing the page-makeup, design, etc., with little to show for it in reduced production costs. I enjoy playing with Ventura as much as anyone, but I begin to wonder whether scholarly publishing or the printing trade has most to gain by the new technical possibilities. Ron Zweig Tel Aviv University H27@TAUNIVM From: Willard McCarty; Malcolm Hayward Subject: Typesetting costs (26) Date: Mon, 11 Apr 88 21:25:07 EDT; 11 Apr 88 20:10:59 EST X-Humanist: Vol. 1 Num. 1053 (1053) (1) -------------------------------------------------------------------- Another issue with typesetting costs: obviously as it becomes easier for an editor to move from an electronic text to typeset galleys, so too it is easier for a typesetter to do so, in theory driving down the cost there. Pretty quickly it will be seen that the only advantage to doing typesetting will be the small time saving that might accrue to putting in codes at the point of assemblage rather than first assembling a text in one place (the editor's computer) and coding it in another. I find right now, however, that typesetting only runs about one third of my total production costs; since production costs are about equal (rule of thumb) to other editorial and distribution costs, typesetting is only one sixth of the total cost of producing a journal, book, or what have you. Thus even if I had total control of my typesetting the savings would not be of much moment. From: Willard McCarty; JACKA@PENNDRLS (Jack Abercrombie & Todd Kraft); "Patrick W. Conner"; Diane Balestri; Edward Friedman Subject: Interfaces (168): Comments on MS Windows; wsiwyg (19 lines); Interfaces: the appeal of the mouse (78); of mice and students; Interface for the blind Date: Tue, 12 Apr 88 19:25:15 EDT X-Humanist: Vol. 1 Num. 1054 (1054) (1) Date: Tuesday, 12 April 1988 1019-EST (58 lines) (2) Date: Tue, 12 Apr 88 10:11 EST (23 lines) (3) Date: Tuesday, 12 Apr 1988 07:44:07 EDT (16 lines) (4) Date: Mon, 11 Apr 88 20:27:03 EDT (24 lines) (5) Date: Mon, 11 Apr 88 22:38 EST (29 lines) (1) -------------------------------------------------------------------- Under an IBM technology transfer project, we have been working with MS Windows (version 2.03) for some three months now. We have made it through the mounds of technical information accompanying the package as well as the software itself. We feel that we now can share with you some of our preliminary observations. Most important, MS Windows is a forward albeit diagonal step in the right direction. We can understand why Apple is suing Microsoft. (Question: Why didn't Xerox sue Apple?) MS Windows 2.03 seems to be a polished and relatively stable product with no significant bugs. List of Positive Comments: 1. The Graphics interface is well-designed and powerful. We have been running Windows on a PS/2 (model 60), and speed of display is comparable to early Macintosh. 2. It is good that they have given us a full diskette of sample programs to use and study since programming in the Windows environment is unlike normal DOS programming. (See comment below.) 3. Windows' extensive libraries, GDI, User, and Kernel, certainly cut down on programming development especially in designing user interface. List of Negative Comments: 1.
Although Windows provides for powerful functions for Latin fonts, it lacks sufficient development for non-Latin scripts such as Arabic or Hindi. From our perspective, foreign font development would include additional information (left offset and movement) on each character in a font. This information is lacking in the current version of Windows we are using. A good example of what we are advocating can be found in the font descriptions used by the HP LaserJet. Since Windows lacks these important pieces of information, again in our view, we have been forced to work around the problem by creating an add-on resource. 2. Windows is gigantic by DOS standards, though as we move to OS/2 such will not be the case. Also, the entire package runs adequately on a PS/2 50. (We haven't tried it on anything smaller.) We tend however to feel that the PS/2 80 will become the low-end machine for real functional use of Windows/Presentation Manager. 3. Programming with Windows is not the easiest task. Because of its interactive and multitasking nature, programming problems that were more difficult to solve are now simpler, but some of the simpler things have become difficult. One example of the latter is trying to execute a fscanf (PASCAL: readln(filein,line)) from a file. (2) -------------------------------------------------------------------- What should we "get" with WYSIWYG? "WYSIWYG" encapsulates the desideratum of no distance between the intermediate process of composing/editing with a computer and the desired final product. Steve DeRose points out that when we emphasize getting paper products as the output of wordprocessing the leading edge becomes the cutting edge of deforestation. However, as others point out, the current demand on those of us who use word processing is to produce paper products. For instance, a recent conference announcement on computing and philosophy stipulates that submissions be in paper rather than through electronic mail or floppy disk.
Perhaps, if we shift our desired end product from paper output to monitor output, we can change the criteria of WYSIWYG to monitor-ready quality, such as: split screen, colour, graphics, multiple-layered screen, and multiple-windowed screen as the "get" part of the formula. Realistically, we want wordprocessors to perform different functions--as e-mail text generators, desk-top publication systems.... So, WYSIWYG must, today, include a "G" which equals paper. When "G" shifts to mainly monitor products, WYSIWYG will change its meaning. (3) -------------------------------------------------------------------- I have to run to teach a class, so this will be brief and ill proof-read. The Mac interface appeals to technological sophisticates because they appreciate its potential not only in word processing, but in developing the union of man and machine. That's really what engineering is all about. The folks who only want to use a single application are served by any computer they take the trouble to learn (and to them it is trouble); they aren't interested in the machine's potential, and they don't have the sort of imaginations which conjure up better versions of their applications. I think that the notion of bicameral dominances may also be involved, but that may also be a bunch of once-fashionable hogwash. Does anyone know whether right-brained people prefer one sort of machine and left-brained another? (4) -------------------------------------------------------------------- I read Jeffrey Gillette's comment about Macs and hackers with interest, and a little surprise. At Princeton, where I keep an eye on the way computers are penetrating the population and the curriculum, the Mac is the overwhelming choice of the student body, only a few of whom could by any stretch of the imagination be called hackers. We initiated a student discount purchase plan (with easily obtained loan plan to cover it) this fall; over 75% of machines purchased have been Macs.
At this point most of the public machines available for students in labs or wordprocessing clusters are IBMs, by the way, thanks in most part to the major grant we've enjoyed from Big Blue--so it's not that we are encouraging the Mac with classroom application. On the contrary, student (and increasing faculty) preference for the Mac is beginning to drive our planning for new clusters and facilities. My sense is that the kids are finding the mouse/menu interface very intuitive and visually appealing; they also like the ease of formatting a paper and the quality of the laser output. (Needless to say, most of what they are doing is wordprocessing.) In other words, I don't see the Mac as a hackers' heaven at all, though it may be that too. To me, it's the machine that's converting the doubters and making the amateurs feel that the computer is a pretty friendly and useful tool after all. (5) -------------------------------------------------------------------- Regarding computer interface concerns - I recently visited the research group of the National Foundation for the Blind in NYC. Various voice and dynamic braille interface devices make most computing activity possible for the blind. However, the more recent windowing, use of icons and other intrinsically visual interface devices (like the mouse) have created a crisis for those concerned with use of computers by the blind. It may not be possible for substitute procedures to be developed. I wonder if some of the Humanist members have thoughts on this issue. Are there any blind members in Humanist? Ed Friedman From: Willard McCarty; Mark Olsen Subject: Typesetting (32) Date: Tue, 12 Apr 88 19:27:21 EDT; Mon, 11 Apr 88 20:56:44 MST X-Humanist: Vol. 1 Num. 1055 (1055) (1) -------------------------------------------------------------------- There are a number of hidden costs that occur when a journal or author wants or is required to perform typesetting.
I was on the staff of an academic journal that the SSHRC required -- for funding purposes -- to do all its typesetting in-house. Very quickly, the editor and I, the only two computer "wizards" who had sufficient knowledge and computers, functioned as copy-editors, typists, layout artists and everything else. Needless to say, the journal did not appear at a regular rate. Even worse, the learning curve for editorial assistants, graduate students, was so long that few had learned all the ins and outs before they had graduated or moved to more "rewarding" work. The required level of expertise will necessitate a half-time permanent person who can learn the job and provide continuity. While this was a few years ago, and I doubt that one would now have to write a typeset simulator to check the formatting on dot matrix before setting it to press (every error cost money), I am still VERY suspicious of anyone who wants to get their texts camera ready. The benefits are minimal and the costs in terms of time and effort that could be expended elsewhere are too large. Writers and editors should write and edit! Mark From: Willard McCarty; Wade Schuette Subject: Forum: improving HUMANIST (82): HUMANIST FORMAT; filter items with subject keywords; Good ideas for improving HUMANIST Date: Tue, 12 Apr 88 19:58:01 EDT X-Humanist: Vol. 1 Num. 1056 (1056) (1) Date: Tue, 12-APR-1988 06:30 EST (18 lines) (2) Date: 12 April 1988 (46 lines) (1) -------------------------------------------------------------------- An important property of a bulletin board system seems to be a workable facility for either automatically filtering out items you don't want to see (due to topic or source), or for telling at a very fast glance from one line whether to read or delete something (or maybe explore it further).
I'm not sure what HUMANIST's technical capacities are. I'd rather have 15 unbundled messages, with one *good* key line for each, than 5 bundled message groups. How about the idea of forcing messages to have the subject line in the format: KEYWORD:(# lines):details, where we agree on 10-20 short, fixed KEYWORDS, such as WP (word processing), HUMOR, HELP, CONF (conference notice), LIB (new library item), etc.? On some systems you can do that, and automatically set a filter so you never even see messages that have keyword areas you don't want to see. (2) -------------------------------------------------------------------- As `editor' of HUMANIST I am very grateful indeed for the time, energy, and ingenuity members expend on making suggestions for the improvement of our discussion group. Often it must seem that these simply get ignored. Often, it seems to me, suggestions are good but are either technically impossible with the present software or would mean a significantly greater amount of work for me. In part HUMANIST's success depends on the donation of my time, the postmaster's time, some computer cycles and storage space, the ListServ software, indeed, the networks by means of which all of you receive its messages. Resources in Toronto are sufficient to run HUMANIST as it now is, but we haven't the programming power to improve ListServ even if we were permitted to do so, and since the author gives it away to all, we cannot complain much about what we have from him. A few HUMANISTs have contributed VM/CMS software to make my job easier and to help with the biographies, and I am most grateful; two or three are working on resorting and tagging the biographies; another has taken on the job of writing the periodic summaries of activity; another volunteered to be a software review editor, but that initiative came to nought through no fault of his (the world isn't ready for co-publishing electronically). We have, that is, the beginnings of an editorial staff. 
Its membership is open. All you need to join is to work for the common good. Speaking of which, I detect that the next piece of real work that needs doing for some HUMANISTs is the writing of software to take apart bundled messages. I seem to recall that bundling is a problem for some of you. Is anyone willing and able to write clever code to handle this problem on whatever systems it may exist? Some of you like bundling; the rest of you, I'm sorry to say, will just have to accept it as a fact of life if for no other reason than it radically reduces the amount of time I have to spend processing the mail. Some of you will remember what HUMANIST was occasionally like before editorial intervention was imposed, and if your memory is clear you'll not want to return to those bad old days. `Electronic Chernobyl' was an evocative phrase at the time. So, thanks for the suggestions. I'll be even more happy to see the volunteers. Willard McCarty mccarty@utorepas From: Willard McCarty Philip Taylor Subject: Interfaces (191)Of mice and men Date: Wed, 13 Apr 88 22:08:50 EDT13-APR-1988 18:06:50 GMT X-Humanist: Vol. 1 Num. 1057 (1057) (1) -------------------------------------------------------------------- Jeffrey William Gillette's recent contribution ("Of mice and men") so aroused my innate biases that I feel obliged to put aside my lethargy and respond; unfortunately, in so doing, I realise that I am about to open (re-open ?) the very same can of worms that INFO-VAX seems unable to close. Jeffrey suggests that computer users may be classified into "pro-mouse" and "anti-mouse" groups; INFO-VAX (frequently) suggests that computer users may be classified into "pro-VMS and pro-UNIX" groups. I wonder if the two groupings are orthogonal or related ? I should start by confessing my own prejudices: I am anti-mouse, anti-UNIX, and pro-VMS. 
My reasons are probably not well understood, even by me, but introspection suggests that they include: A liking for (natural) language; a dislike for the current tendency to represent as many concepts as possible as ideographs, particularly where the potential audience does not share a common first language (c.f. the safety instructions now universal among airlines, and which make as little sense to me as would the same instructions expressed in hieratic or demotic scripts); a dislike for slang, and a dislike for (unnecessary) abbreviation ("cuppa" for "cup of tea"; "Toys Us" for "Toys are Us"; "Pick 'n' match" for "Pick and match"); a heretical belief in proscriptive grammar, as opposed to the (supposedly old-fashioned) prescriptive and (currently acceptable) descriptive grammars; the belief that, when typing, the fingers rest naturally on the `home keys', and should move as little as possible from those keys. (There may well be others.) So, given these prejudices, what is my ideal man/machine interface ? (I apologise for the use of sexist terms, and assure any reader(s) that no slur is intended). A command-line interface, in which the vast majority of the characters used are alphabetic, and in which these characters form (natural-language) words. Where natural-language words are used, their meaning should be as close as possible to their meaning in natural language. Where alphabetic characters are used, their case should not matter (I justify this by suggesting that, in the English language at least, it is extremely difficult to construct a sentence in which the meaning would change by changing the case of the letters, and by asking the question that students invariably ask when encountering a case-sensitive computer for the first time: "But WHY doesn't the computer understand that `DELETE' means the same as `delete' ?") Where non-alphabetic characters are used, their meaning should not conflict with their most common non-computer-related meaning.
For example, delete myfile.text /confirm (or Delete MYFILE.TEXT /confirm) meets these requirements; rm myfile.text -c and do not. (The examples are somewhat artificial; I am not sufficiently familiar with UNIX or MAC/Windows to construct real analogues of the VMS command above). My objections to the UNIX-like command are: that "rm" is not a natural-language word, and therefore has no inherent meaning; that "-" typically indicates negation, whereas in this context it indicates assertion; that "c" has no unambiguous meaning; and that "C" should mean the same as "c". And my objections to the MAC/Windows-like command (sequence) are: that it requires removal of the hand from the home keys; that it requires good visual-motor co-ordination to move a pseudo-object using an essentially `detached' right-hand (I have far less objection to a light-pen approach, where the pseudo-object is actually pointed to by an object held in the hand); that it requires an understanding of the various ideographs used to represent the concepts being manipulated; and that it requires a time-dependent response, which is atypical for computer applications (I can spend an hour entering a single command to VMS, breaking off for coffee if I like, even between the characters of a single word, but once I start to select an object using a mouse, I must complete the second button-depression within a (seemingly very) short time). But even the suggested interface is not sufficient; the most proscriptive of proscriptive grammarians would accept that there is sometimes a case for abbreviation, as, for example, when a term is frequently used. It is therefore necessary to allow commands and qualifiers to be abbreviated, but clearly undesirable that they should be abbreviable to the point where they become ambiguous (does "d" mean "delete" or "define" ?).
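The ambiguity problem raised here, deciding when a shortened command still names exactly one verb, can be sketched in a few lines. The command list is an invented example, not the real VMS verb table:

```python
# Sketch: expand an abbreviation only when it matches exactly one command.
# The command list is illustrative; real VMS guarantees uniqueness at four
# characters by constraining how new verbs may be defined.
COMMANDS = ["delete", "define", "directory", "print", "purge"]

def expand(abbrev):
    """Return the unique command beginning with abbrev (case-insensitive),
    or None when the abbreviation is ambiguous or unknown."""
    matches = [c for c in COMMANDS if c.startswith(abbrev.lower())]
    return matches[0] if len(matches) == 1 else None
```

Here expand("d") yields nothing, since "delete", "define", and "directory" all qualify, while expand("dele"), at four characters, is unique; case-insensitive matching gives "DELE" the same result, in the spirit of the author's point about DELETE and delete.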
VMS adopts the convention that all commands and qualifiers are abbreviable to the first four characters, and are guaranteed unique when so abbreviated (as the VMS command set is dynamically extensible by the user, some constraints must be placed on the definition of new commands to ensure that they do not clash with existing commands after abbreviation). Abbreviation to four characters allows the knowledgeable user to speed up his/her entry of commands, but still leads to a fairly verbose command string; VMS therefore allows any command (or any parameter, or any qualifier, or any combination thereof: in fact, any arbitrary string) to be bound to a key. Such keys are not a part of the main keyboard, and therefore the home keys plus their neighbours retain their canonical meanings; only the function keys may be re-bound. It is important to realise that the option to abbreviate a command, or to bind a command-string to a key are just that: options; it is never necessary to abbreviate a command, or to use a function key instead of a command, and thus the naive user need never learn obscure commands or function-key sequences; he or she may continue to use full natural-language words for as long and as often as they like. But even VMS has its limitations: for example, if a user has entered the command Delete MYFILE.TEXT and wishes to know what the keyword is which will allow him or her to change their mind before the deletion actually takes place, there is currently no (simple) way of asking VMS what options may follow a given command, once the command has been part-entered. (One can always ask for help on a command before using it, but, once it has been entered, one cannot branch to the help system without cancelling the command). Stan Rabinowitz's "WHAT" program overcomes this limitation. 
As each command is entered, an incremental syntactic analysis takes place; at any point during command entry, it is possible to request help (by entering a question-mark): the syntax analyser `knows' what may legitimately follow the part-command entered, and (using nested windows, to which I have no objection whatsoever) displays the legitimate continuations of the part-command. Furthermore, for those who are slow typists, but who still prefer to see commands and qualifiers expressed in full, any part-command may be automatically completed by the system; on entering , the syntax analyser will automatically complete the part-command, provided that it is already unambiguous, or will indicate that the part-command is ambiguous and requires further specification before it can be completed. (I would accept that the key is not mnemonic in this context, whereas I believe that the earlier use of the question-mark for incremental help is mnemonic.) So, is the combination of the VMS and "WHAT" interfaces ideal ? I believe that it is, and that it leaves the MAC/Windows interface miles behind (and leaves the UNIX interface light-years behind, but that's not really the point at issue). OK, I've opened the can of worms; over to you ..... (Philip Taylor; RHBNC, University of London; U.K.) From: Willard McCarty Charlan@CONU1Wayne Tosh / English--SCSU "prof.s.r.l.clark" AYI004@IBM.SOUTHAMPTON.AC.UKCharlan@CONU1Wayne Tosh / English--SCSU / St Cloud, MN 56301 "prof.s.r.l.clark" AYI004@IBM.SOUTHAMPTON.AC.UK Subject: Forum: expanding eyes (67)The new formatre: (What's happened to HUMANIST?)Re: One person's junk (20)CensorshipThe new formatre: (What's happened to HUMANIST?)Re: One person's junk (20)Censorship Date: Wed, 13 Apr 88 22:14:38 EDTTue, 12 Apr 88 22:51:50 EDTTue, 12 Apr 88 20:59 CDTWed, 13 Apr 88 13:07:04 BSTWed, 13 Apr 88 18:50:48 BST X-Humanist: Vol. 1 Num. 
1058 (1058) (1) Date: Tue, 12 Apr 88 22:51:50 EDT (13 lines) (2) Date: Tue, 12 Apr 88 20:59 CDT (6 lines) (3) Date: Wed, 13 Apr 88 13:07:04 BST (8 lines) (4) Date: Wed, 13 Apr 88 18:50:48 BST (12 lines) (1) -------------------------------------------------------------------- I must say I approve of the new compressed format for HUMANIST. I access netnorth through the UMass mailer operating on a CYBER system. I am connected through a 1200 baud line. I would not have access to software packages that would do a triage, nor can I scan through quickly. The current configuration permits me to disregard technical discussions that do not interest me (eg. the computer-based analysis of text files) and focus on those subjects that I find of interest. My thanks to Willard for his efforts to make the system manageable. Maurice Charland (2) -------------------------------------------------------------------- Keep up the good work, categorizing as you have been. HUMANIST is working fine. (3) -------------------------------------------------------------------- I endorse the plea for a junk-mail folder: I only joined the hotline a few days ago, and my reader is already clogged up every morning. Open discussion is fine, but it would be nice if it were about something significant. (4) -------------------------------------------------------------------- The call for various forms of censorship, either editor-administered (one person's junk) or through some filtering device, destroys one of the positive aspects of HUMANIST - the unexpected. One could even say that learning requires the unexpected - not to expand one's edifice, but to change it. William Blake would probably say something like "The eye sees more than the heart knows". Brian Molyneaux (AYI004@uk.ac.soton.ibm) From: Willard McCarty Geoffrey Wall Subject: The new OED (89) Date: Wed, 13 Apr 88 22:17:48 EDT13-APR-1988 13:36:55 GMT X-Humanist: Vol. 1 Num.
1059 (1059) (1) -------------------------------------------------------------------- From official ( and semi-unofficial) statements about the progress of the NEW OED, I have put together a brief account of what is on the way in the next five years A new integrated OED (original text of 1928, along with Supplements and some newly compiled material) is to be published in book form in 1989. Meanwhile, the old OED, without material from the supplements, was published on CD-ROM at the beginning of 1988. OUP sees this as a dummy run for an integrated version on CD-ROM, due in the early 1990's. The current CD-ROM version will help get the design right. It's just 'the first step on a sharply rising learning-curve'. Various technical problems have limited the current CD-ROM version. Firstly, the typographical intricacies of the original printed text could not easily be transferred to the electronic version, because of the limitations of most current VDU screens. OUP decided to do away with some of the more obscure language fonts (esp the Greek and the Old English) and with the profusion of diacritical marks. 'A more purist approach would have put the publication beyond the reach of most of its potential users.' Secondly, limits are imposed by the size of the work. One CD would hold the entire OED, with just a little space left over on the disk. But obviously the disk only becomes useful as indexes are added to the raw text. Given the complex structure of the OED text (some forty fields in the database) indexes could easily bulk larger than text. OUP's solution has been to put out several CD versions. First, the 'complete' version. This fills three disks. And, to do any useful work, it needs three linked CD-ROM drives, so as to search all three disks simultaneously. (Most current CD-ROM editions require only one CD drive.) Then there will be two reduced versions: a linguistic disk (OED minus the illustrative quotations) and a literary disk (quotations only). 
Facilities on this version include: search whole corpus for any word or phrase; or generate complex specialised lists. (For example: lists of words supported by quotation from Shakespeare; words from any chosen register (eg slang) used by Walter Scott; words from Hindi that occur in English before 1750.) Thus, a whole realm of information hitherto buried in the printed version of the OED will become accessible. Simple queries, for example:
-What interjections were in common use in the period 1670-1720?
-List all mineral names with the dates when they were named
-What meanings of words does Milton follow Spenser in using?
Or, more complex queries, for instance
-investigate historical shifts in thinking about relations between body and mind by scrutinising a set of key terms (mind, body, emotion etc)
The electronic OED will not, they think, be in competition with the New OED in printed form. It will be used in different ways, as a 'list-maker' rather than a 'page-turner'. (Who would want to read a 60,000 word entry on screen?) OUP say that an on-line database is not (not yet? not ever?) appropriate for a dictionary of this kind. OED on CD-ROM will sell mainly to the major research libraries, rather than to individual users. OUP currently predict that there will be no significant income from the sale of electronic versions in the first five years. Personally, not being able to afford a PC clone, let alone a dedicated three-drive CD player, I'd like to have on-line access to the OED as soon as possible. Along the lines promised by the Institut National de la Langue Francaise, for general access to their lexicographical database at Nancy. Meanwhile, perhaps the NEW OED Centre at Waterloo would consider some kind of access? Geoffrey Wall From: Willard McCarty David Graham PROF NORM COOMBS "Patrick W. Conner" PROF NORM COOMBS "Patrick W.
Conner" Subject: Interfaces (132)interfacesMouse or keyboardsMacSonnetinterfacesMouse or keyboardsMacSonnet Date: Thu, 14 Apr 88 20:07:50 EDT14 Apr 88 10:24 -0330Thu, 14 Apr 88 03:58 ESTWednesday, 13 Apr 1988 23:07:19 EDT X-Humanist: Vol. 1 Num. 1060 (1060) (1) Date: 14 Apr 88 10:24 -0330 (76 lines) (2) Date: Thu, 14 Apr 88 03:58 EST (12 lines) (3) Date: Wednesday, 13 Apr 1988 23:07:19 EDT (22 lines) (1) -------------------------------------------------------------------- First of all, apologies for what will probably be a long posting... As someone whose first experience with computers involved VMS, whose second was UNIX, and whose third was (and is) Macintosh, I really feel that I cannot let Philip Taylor's comments go by without replying. I frankly find the assertion that the VMS command-line interface is a "natural language" interface ludicrous. How well I remember my first attempts to create a new directory, and my frequent trips to the on-line help and the manuals. Not to mention trying to navigate through several levels of directories once I had managed to create them. And as for sorting or merging something... Not to mention the fact that often this "natural language" interface tends to be completely discarded once the command-line has been left behind. Just now I wanted to include a copy of Philip's message in this file as a buffer (Yes, I'm logged onto a VMS system from home at the moment). So inside my mail file I entered something like the following series of commands to view the first lines of the file to make sure I had spelled his name correctly:
CTRL-Z
* inc humanist.bmail;1 =hum
* sh buf
* ty 1 to 10
% Unexpected characters at end of command [or something similar]
* ty 1:10 [I try again, having forgotten how to do this]
* ty 1:20 [first try didn't show enough of the file]
I submit that for me at any rate this is neither natural nor economical.
I'm sure there are far better ways to do this in VMS, but for me VMS will always be something one puts up with rather than something one loves. Contrary to Philip's experience, I find Unix much more congenial than VMS. When learning Unix (which I will admit I do not manipulate much better than VMS), I found the basic command set easy to learn and remember *because* it was abbreviated and because it was (in the main) mnemonic [please don't quote 'cat' at me--I always use 'more' :-)]. My main objection to Unix is that in order to get it to do anything _really_ worthwhile, one has to learn *much* more than the basic command set. I'm sure that with 'vi' and 'troff' one can perform wonders, but the amount of time I have to expend to get anything remotely resembling what I want is such that I simply cannot be bothered--there's more to life than memorizing command languages, and I want to be *doing* something rather than figuring out *how* to make the /*&??$ machine do something it clearly doesn't sympathize with. I am not going to indulge in a rhapsodic description of the wonders of the "Mac/Windows" interface, but I must say that Philip's description of mouse use is misleading in that after the first few minutes, there is no conscious thought involved in clicking on a file and dragging it to the trash. [Mac mice have only one button, not "three apparently identical" ones] And contrary to the talk about ideographs, the great bulk of Macintosh use (in my experience) does *not* involve icons. The interface is graphic-based, true, but the icons are largely restricted to the "desktop" or top-level directory; once an application is launched, very little actual icon use usually occurs.
The advantage of the mouse/window/pull-down (or pop-up) menu interface, in my view, is that it permits the beginner to use the menus, where all the commands are laid out in full view, and progress to using the keyboard equivalents, of which a multiplicity are often provided, as experience is gained. This means a sharply reduced reliance on the manual, which cuts the loss of time entailed by going to the manual, finding the appropriate passage, and trying to understand what the manual-writer has written. I think Philip's comments, however, point up something said earlier by Jeffrey Gillette (I think), which is that there is a class of sophisticated, knowledgeable, intelligent computer users who simply do not find the mouse-based interface attractive or useful. Often these people, when I encounter them, seem to be computer professionals: system managers, Unix gurus, people with a storehouse of proprietary knowledge which they have spent time acquiring and which is valuable to them and to others. Often, I think, the "Mac/Windows" interface is uncongenial to them *because* it divorces them from the command line and from the heart of the machine which they know so intimately. As our local "microcomputer specialist" said, "I do not think faculty will want Macintosh computers, because on the Macintosh you cannot have direct access to the operating system." I must emphasize that I most surely do not want to start some pointless flame war about VMS/Unix/Mac/DOS interfaces: isn't the interesting question here precisely the debate about why some of us prefer one interface to such a high degree, and the subsidiary question of why we feel so strongly about it? Why do I feel impelled to answer Philip's comments? Is it just consumer loyalty? I don't think so...
And yes, I still use VMS and Unix :-) David Graham (2) -------------------------------------------------------------------- Could not resist throwing in another perspective on using a mouse vs using a keyboard. It seems to me that the keyboard is a more linear, rational process while the mouse is more visual and intuitive. I would love to see a psychological study on right brain, left brain and mouse or keyboard preference. Or is this way off the wall? Norman Coombs (3) -------------------------------------------------------------------- The Poet Composes a Sonnet on His Mac
Shall I compare thee to an IBM PC?
Thou art more friendly and more temperate:
Rough DOS doth encrypt the darling disks A & B,
A hind'rance to my work I've learnt to hate:
Sometimes in code the IBM can shine,
But often is his grey complexion dimm'd
By graphic Mac from which he graphically decline,
By fate, or Apple's interface, untrimm'd;
But thy unclon.ed excellence shall not fade,
Nor lose possession of the users thou ow'st;
Nor shall Amiga brag thou wander'st in his shade,
When in increasing bytes to pow'r thou grow'st:
So long as men can breathe, or eyes can see,
So long lives Mac, and this gives life to me.
From: Willard McCarty Lou Burnard Subject: Visit report: Knowledge Warehouse Project (22) Date: Thu, 14 Apr 88 20:16:47 EDT14-APR-1988 09:23:24 GMT X-Humanist: Vol. 1 Num. 1061 (1061) (1) -------------------------------------------------------------------- A company called Mandarin Communications has for the last year been running the Knowledge Warehouse project, a pilot investigation into the feasibility of archiving publishers' typesetting tapes as a quasi-commercial, semi-philanthropic venture.
Despite the billing for this one day conference organised by Mandarin at SOAS ("To review the next steps in establishing the National Electronic Archive"), I did not come away with the impression that the interests of all three parties were being (or were likely to be) equally well-served by the proposed Archive, or Warehouse. [The full report is now available on the file-server, s.v. WAREHOUS REPORT.] From: Willard McCarty "Patrick W. Conner" Subject: The New OED (25) Date: Thu, 14 Apr 88 20:20:03 EDTWednesday, 13 Apr 1988 22:52:40 EDT X-Humanist: Vol. 1 Num. 1062 (1062) (1) -------------------------------------------------------------------- I agree with Geoffrey Wall. I'd like to have the OED available as soon as possible, too, and I won't be able to afford a personal set-up. Could we on Humanist create a petition to BRS or Dialog or another vendor (or vendors, if those are not internationally accessible) to urge them to supply the OED at an hourly rate? If a large number of HUMANIST members indicated that they would like to have access to the OED on-line, and that they might even subscribe to a commercial database vendor to get it, I'll bet it would be made available fairly quickly, assuming that Oxford UP were willing. How should we go about this? I'll be glad to keep a file of short endorsements to be forwarded to a vendor once we've built up enough to be persuasive. Patrick Conner VM47C2@WVNVM West Virginia University From: Willard McCarty Max Wood Subject: Classification of Topics on HUMANIST (22) Date: Thu, 14 Apr 88 20:23:50 EDTThu, 14 Apr 88 10:36:49 GMT X-Humanist: Vol. 1 Num. 1063 (1063) (1) -------------------------------------------------------------------- TOPIC : TOPIC I currently receive HUMANIST into the dark plumbing of Primos Mail via the none too gentle auspices of ISOCEPT. I already run software that sorts and auto-edits my mail so as to remove header junk and topics that don't interest me.
Unfortunately I have to rely upon the Subject field of the header for these choices. Therefore I heartily agree with the TOPIC classification that has been proposed for HUMANIST and should it be introduced I would certainly be willing to pass on my Prime based software for pre-processing mail to any that might also be in a Primos environment. From: Willard McCarty Jeffrey William Gillette cbf%faulhaber.Berkeley.EDU@jade.berkeley.edu (Charles Faulhaber)Willard McCarty Jeffrey William Gillette cbf%faulhaber.Berkeley.EDU@jade.berkeley.edu (Charles Faulhaber)Willard McCarty Subject: Interfaces (145)Of Mouse and Men (pt. 2)Re: InterfacesInterfacesOf Mouse and Men (pt. 2)Re: Interfaces (132)Interfaces Date: Fri, 15 Apr 88 23:54:04 EDTFri, 15 Apr 88 19:38:45 EDTFri, 15 Apr 88 15:24:13 PDTFri, 15 Apr 88 15:25:38 EDT X-Humanist: Vol. 1 Num. 1064 (1064) (1) Date: Fri, 15 Apr 88 19:38:45 EDT (54 lines) (2) Date: Fri, 15 Apr 88 15:24:13 PDT (20 lines) (3) Date: Fri, 15 Apr 88 15:25:38 EDT (48 lines) (1) -------------------------------------------------------------------- Yes, my first note on interfaces was intended to be somewhat incendiary. I hoped to provoke a response like that of Philip Taylor, and I suspect that he is not the only one with similar feelings. No, I am not interested in theoretical disputation over: Is the MacWindows interface better than a command line approach, or vice versa. As a sometime developer of computer applications I am more interested in the questions: What types of people prefer the MacWindows interface, and what types of things do they want their computers to do? Similarly, what types of people prefer a command-line interface, and what types of tasks do they want their computers to assist them with? 
I am rapidly coming to the conclusion that most of the "common wisdom" regarding the merits of one style of interface over the other is, in fact, nothing more than marketing slogans and old wives' tales (apologies to any old wives who happen to be reading this!). The Mac is "simpler and more intuitive." People who prefer a command-oriented approach have "vested interests in preserving the status quo." The real issue is efficiency - keyboard vs. mouse (this last myth will, I suspect, be put to rest by Microsoft, which has mistakenly redesigned Windows 2 to make the keyboard behave like a mouse). On the other hand, personal observation has convinced me that there are significant distinctions in personality and temperament between those who are attracted to the MacWindows style, and those who prefer a command-oriented interface. The earliest and most loyal converts to the Mac were hackers (which does not mean, as Diane Balestri correctly points up, that everyone who buys a Mac today is a hacker). Over the last 3 years, almost all genuine innovation in computer software has originated on the Mac and later been ported to the PC (e.g. desktop publishing, hypertext). While I do not think it true that Mac users are more creative than their PC counterparts (although this myth is popular in some circles), I think I can observe a certain pattern of creativity, or of curiosity in the technology for its own sake, that seems much more common in Mac users than in users of command-oriented systems. In particular, both Philip Taylor and Norman Coombs echo an intuition I have come to, and can almost articulate: computer users who prefer the MacWindows interface tend to think spatially while users who prefer a command-oriented approach tend to think verbally. Obviously I am not foolish enough to think this generalization true of all users of Macs or PCs, but I do suspect it to be true of a large number of people who have their choice of environments. Any comments on this theory?
(2) -------------------------------------------------------------------- I am not a technical computing specialist (i.e., I couldn't program my way out of a paper bag), but I started with UNIX, got a PC when they first came out, have worked with CMS, and now have a Sun (running suntools) which is an icon-based system similar to the Mac. Suntools wins hands down. And my experience is that mouse-based systems are the tool of choice for people who are interested in getting a piece of work done rather than in hacking. Charles B. Faulhaber Department of Spanish UC Berkeley CA 94720 Somebody recently observed that the point of machine interfaces is to humanize the technology. I think that this is not just true of computers but is the general human tendency to transform nature into art, to put a human face on (or find it in) a very inhuman world, i.e., to create. What face can we put on (or find in) something except our own? Perhaps it's true that computers are now complex enough to begin to manifest aspects of human personality. If this is so, then we should look to the cultural and social contexts in which certain computers were developed. The Macintosh makes a good study in this regard because the company that developed it (yes, I know about Apple's intellectual debt to Xerox PARC) has kept the borrowed design successfully to itself. It is interesting to look back on the original article in Byte, for example, in which the Mac was announced, and to notice the people involved. It's easy to make silly generalizations about this fairly coherent group, but they did come out of a particular historical moment that some of us have shared and will recognize instantly, whether or not we feel sympathy with it. The corresponding case of the IBM design is more difficult to bring into focus, at least for me, and I wonder why. I don't think we'll get anywhere, or anywhere very interesting, by putting value judgments on these machines. 
We may get somewhere and contribute substantially to the advance of what is loosely called "interface science," however, by probing more deeply into the reactions people have, not just psychologically but culturally. To return to the Mac, we could ask about the relationship joining the apocalyptic dreams of ca. 1965-1975, their translation into the technical subculture, particularly in California, and the sort of concerns and aims of the designers of this really revolutionary machine. Could we then see in the machine a reflection not only of the designers' personae but also of what has happened to the social revolution in which they participated? I am not name-calling, rather trying to account for the rather amazing degree to which the inert micro-boxes of both major kinds call forth passionate zeal and seem to demand unreasoning belief. Even if no significant contribution can be made to this point directly, at least it seems possible that we might come to understand how building better programs is tied to more accurate professional self-knowledge. Willard McCarty mccarty@utorepas From: Willard McCarty Mark Olsen Darrell Raymond Mark Olsen Darrell Raymond Subject: Electronic OED (63)OED NOWaccess to the electronic OEDOED NOWaccess to the electronic OED Date: Fri, 15 Apr 88 23:56:29 EDTThu, 14 Apr 88 21:10:18 MSTFri, 15 Apr 88 14:26:56 EDT X-Humanist: Vol. 1 Num. 1065 (1065) (1) Date: Thu, 14 Apr 88 21:10:18 MST (16 lines) (2) Date: Fri, 15 Apr 88 14:26:56 EDT (30 lines) (1) -------------------------------------------------------------------- You might be better off trying to get your library to access the OED on-line. The ASU library has already set up the Grolier's Encyclopedia on the main card-cat. computer. A user can access it from any terminal in the library and (real soon now) on dial-up from any remote computer. The library is currently exploring putting the OED on the same machine.
This is far better than having it on DIALOG since a) there is no connect charge and b) you can use it at home, the office or the library, and c) it is free, just another library service. The Grolier's Encyclopedia works well and I am looking forward to having the OED. Any guesses at how long it would take to download the OED at 2400 baud???? Mark (2) -------------------------------------------------------------------- In response to Geoffrey Wall :
>The electronic OED will not, they think, be in competition with the New
>OED in printed form. It will be used in different ways, as a 'list-maker'
>rather than a 'page-turner'. ( Who would want to read a 60,000 word entry
>on screen?)
Given that you have a workstation running X.10, it's much easier to read a large entry on screen than on paper, because you can quickly display lots of text and dynamically alter its format or elide various parts of the entry. Our display software supports all the fonts and special characters necessary to produce an accurate proof of OED text, and does it in real time.
>Personally, not being able to afford a PC clone, let alone a dedicated
>three-drive CD player, I'd like to have on-line access to the OED as soon
>as possible. Along the lines promised by the Institut National de la
>Langue Francaise, for general access to their lexicographical database at
>Nancy. Meanwhile, perhaps the NEW OED Centre at Waterloo would consider
>some kind of access?
Ownership of the content of the OED is retained by Oxford University Press, hence access permission is granted by OUP and not by the University of Waterloo. Scholars are most welcome to visit and access the online OED, but we cannot unilaterally extend this to general online access. From: Willard McCarty Ian Lancashire Subject: ALLC/ICCH Conference (33)Call for papers, ALLC/ICCH Conference Date: Fri, 15 Apr 88 23:59:18 EDT15 April 1988 X-Humanist: Vol. 1 Num.
1066 (1066) (1) -------------------------------------------------------------------- CALL FOR PAPERS Association for Computers and the Humanities Association for Literary and Linguistic Computing 16th International ALLC Conference -- 9th ICCH Conference 6--10 June 1989 University of Toronto, Toronto Ontario, Canada The 16th International ALLC Conference and 9th International Conference on Computing and the Humanities will be held conjointly at the University of Toronto from June 6th to 10th, 1989. Papers on all aspects of computing in linguistics, ancient and modern languages and literatures, history, philosophy, art, archaeology, and music are invited for presentation at the conference. [The full announcement is now available on the file-server, s.v. ALLCICCH CONFRNCE.] From: Willard McCarty Subject: Hiatus until Wednesday &c. Date: Sun, 17 Apr 88 11:35:03 EDT X-Humanist: Vol. 1 Num. 1067 (1067) Dear Colleagues: I will be away until Wednesday of this week, and so HUMANIST will be silent until that evening. ListServ, being mechanical and rooted to the spot, will not sleep nor leave the country with me, so you are welcome to send in contributions to HUMANIST, which I'll bundle up and send out on my return. I will also be away from HUMANIST for a much longer period, approximately two months, beginning the middle of May. Rather than allow HUMANIST to run automatically (and so court the floods of electronic junk mail some of us will remember), or to shut it down for this period, I have arranged for a local HUMANIST and good friend, Abigail Young, to take over as editor temporarily. For various reasons she'll do this as , so you should notice no interruption in service. The temporary change may provide a test of my hypothesis that we impress our personalities on the things that we do with computers. So if HUMANIST's editorial touch becomes more graceful, witty, incisive, and good humoured after 15 May, you'll know why. 
In any case, I am very grateful to Abby for agreeing to take on the job. Willard McCarty mccarty@utorepas From: Willard McCarty, Prof. Choueka Yaacov, munnari!fac.anu.oz.au!nasdling@uunet.UU.NET (DAVID NASH) Subject: Notices (67) -- ALLC/AIBI Conference (updated posting); Update on Australian non-link...; Re: News -- sci.lang Date: Wed, 20 Apr 88 20:56:04 EDT X-Humanist: Vol. 1 Num. 1068 (1068) (1) Date: Tue, 19 Apr 88 15:32:59 +0200 (17 lines) (2) Date: Wed, 20 Apr 88 12:36:44 est (38 lines) (1) -------------------------------------------------------------------- ALLC/AIBI 1988 Joint Conferences: Fifteenth International Conference on Literary and Linguistic Computing and Second International Conference on Computers and Biblical Studies. June 5-13, 1988, Jerusalem * Sponsored by: ALLC - Association for Literary and Linguistic Computing AIBI - Association Internationale Bible et Informatique * With the participation of: ACH - Association for Computing in the Humanities [An updated posting is available on the file-server, s.v. ALLCAIBI CONFRNCE.] (2) -------------------------------------------------------------------- [The following will interest those of you who remember the loss of our members from New Zealand due to the excessive cost to them of receiving international e-mail. For those who don't, the following is more or less self-explanatory. It was sent to me by David Nash, formerly of MIT and now in Australia. While at MIT he was investigating ways of getting HUMANIST to our colleagues Down Under; what he has subsequently discovered confirms what the New Zealanders told us about the cause of their resigning. -- W.M.]
Here is an exchange which you may find informative: 20-APR-1988 09:46:51 > I've been acquainting myself with the news available here, which is a > nice selection, but only part of Usenet, right? What's involved in > adding something like sci.lang ... You are right. The newsgroups you mention come under the category of "privately imported", ie you don't get them unless you or someone else pays for them. The cost is around $250 per megabyte of news. You (or ANU -- we're not picky) pay in advance, preferably for at least 2 Mb of news, and we arrange to bring it in. Everyone in Oz will get it, BTW, which might have a side effect of encouraging others to pay for it (when it is due to vanish!!!!) From: Willard McCarty, Wade Schuette Subject: Query: citing e-text (20) -- Legal/practical question: How cite w/o page #? (10 lines) Date: Wed, 20 Apr 88 20:59:34 EDT X-Humanist: Vol. 1 Num. 1069 (1069) (1) -------------------------------------------------------------------- Today's New York Times (4/20/88 p D1) has an article on the lawsuit between West Publishing Co. and Mead Data Central, Inc. (Westlaw vs Lexis) over Lexis' use of West's page numbers as standard for citing legal decisions. This seems a far more general question, with many issues related to the protection of intellectual property. Question: does anyone know of good ways to cite text once it becomes stored electronically and "page #" becomes a rather meaningless term? Who does what now? From: Willard McCarty, Sebastian Rahtz Subject: Software, mostly w-p (70) Date: Wed, 20 Apr 88 21:02:25 EDT X-Humanist: Vol. 1 Num. 1070 (1070) (1) -------------------------------------------------------------------- I just read 62 pages of HUMANIST and have finally caught up with the WP/software minds debate, concerning which I wish to make four observations; but before I do so, may I in passing refer to some nonsense about refrigerators I skimmed over?
On our departmental machine (named Queen Victoria, don't knock her..), our refrigerator has an account; he is called Vincent, and performs a sterling job in keeping the coffee milk cold, and cooling bottles of wine for special occasions. Vincent (vf@uk.ac.soton.cm) often contributes to the departmental bulletin board, sends occasional mail, and generally keeps a lively lookout on our affairs. Vince is not yet on the payroll, but we are working on getting him elected to the University Senate. (This is all true). But as for WP: a) The frenetic NotaBene adulation neglects the fact that, in this country at least, the *support* for Nota Bene from the distributor is laughable. I chose my washing machine on the reputation of the supplier for backup, not for features - if I had to choose a word-processor (god forbid!), I'd buy Microsoft Word because I trust Microsoft to stand by me. OK, I know on that argument I'd buy IBM mainframes, but I hope the point is taken about support. b) The mind-in-the-software - Willard says that software, once designed, remains faithful to the initial concept. Yes, but what about software designed to meet a prescribed task? Take databases - if (when?) the ISO standard for SQL is adopted, will we be able to tell the difference between Oracle, DB2 and Ingres? Or C compilers - as someone said, the spirit of C is a personality thing, but the difference between Microsoft and Borland isn't. To pursue that, once one accepts the idea of the integrated compiler a la Borland, you can't really do much about it. I suspect also that Willard is not a MacMan; I take his point about the devious mind of the NotaBene creator being visible still, but would it be on a Mac? I don't see much personality in Mac programs (with the honourable exception of Illustrator). Maybe others do.
c) De Rose states the case against WYSIWYG more eloquently than I can; I'd go all the way with him and the others, and further, to say that academic word-processing is a dead-end which we will soon see the death of, thank god. There's text-processing, preparing material for electronic publication (from BBs and mail to generic markup for databases and printing) and there's computer-assisted design (of which 'DTP' design of leaflets for paper printing is a subset). The use of a computer as a typewriter for letters, memos and the like (which is all that WP programs are any good for) should wither away, I hope. Even letters fall better into the generic markup game - a reversion to the 'take a letter, Miss Jones, and don't bother me with the details' system, which I am sure we will all agree is *much* better. d) I am editing a conference proceedings at the moment; all the contributors so far have sent disks or e-mail files. Not *one* of them has marked up his/her text in a *complete* form for sensible editing by me (and yes, I did send out guidelines). Even the one who used troff with a sensible macro package did not understand about heading macros, but put in his own spacing and fonts. As for bibliographies.... The moral is, as I sit over a hot PC-Write reformatting them (forget about all those monster packages, PC-Write is *wonderful* for daily editing), that however many features you give people in their WP package, they probably won't use them! Thank god no-one has sent me newspaper column formatted text. Next year I'll send them all Coombs et al.'s CACM article on generic coding. Sebastian Rahtz, Computer Science, University, Southampton, UK PS back to the 'humour': the TeXhax mailing list produced a very witty April 1st issue; why nothing on HUMANIST? Are we all that boring? From: Willard McCarty, Sarah Rees Jones Subject: Job posting (17) -- Advert for Information Officer, Computing Service Date: Wed, 20 Apr 88 21:12:39 EDT X-Humanist: Vol.
1 Num. 1071 (1071) (1) -------------------------------------------------------------------- U N I V E R S I T Y O F Y O R K COMPUTING SERVICE [Job posting for an "Information Officer" will be found on the file-server, s.v. INFOFFER JOB.] From: Willard McCarty, Grace Logan Subject: New OED, a brief history (83) Date: Wed, 20 Apr 88 21:15:59 EDT X-Humanist: Vol. 1 Num. 1072 (1072) (1) -------------------------------------------------------------------- New OED Project In 1984, for the 100th anniversary of the publication of the letter "A," Oxford University Press announced its intention to computerize the OED, putting the 12-volume text and the 4-volume Supplement into machine-readable form by the end of 1986, and then publishing an integrated OED by 1989. At the time, this seemed like science fiction, but in fact, at the moment, the project is right on schedule. The fourth volume of the Supplement was published in 1986, and almost at the same time the machine-readable text of the 12 volumes of the OED and the 4-volume Supplement had been inputted. This was accomplished by the International Computaprint Corporation (ICC) of Fort Washington, Pa., between January 1985 and June 1986. The text contains almost half a million definitions illustrated by two and a half million quotations. The ICC had inputted some 350 million printed characters at less than the contractual rate of 7 errors or less in 10,000 keystrokes--in the latter batches a rate of 4 errors or less was regularly achieved. Since then, the text has been proofread by OUP, and Murray's notation of pronunciation has been converted into the International Phonetic Alphabet. By last summer, OUP had completed the automatic integration of the 4-volume Supplement into the body of the 12-volume OED and made the first pass of the cross-referencing system.
The editors are now working on the manual integration of the Supplement into the OED, that is, the 20 per cent by volume that could not be done automatically. To assist the editors in the integration of the text, the OUP computer group has developed a computerized editing system, whose name tells it all: Oxford English Dictionary Integration, Proofing, and Updating System, or OEDIPUS. So far there are no signs of a tragic conclusion. Indeed, the New OED computing system has already won the British Computer Society Applications Award for 1987 for its innovative use of the computer. OEDIPUS as part of this system provides a means of examining and correcting the text of the Dictionary on computer screens. It allows the editors not only to insert new words into the dictionary but to enter additions, corrections, insertions, and deletions at any point within an entry. As this is being done, it is sent to the typesetters and proofread again, after which further corrections are made. In December 1987, OUP and Bowker and Tri-Star made available the text of the unintegrated 12-volume OED produced by ICC on two CD-ROM disks, A-N and O-Z, costing about $1250 for the set with documentation. The entire dictionary could have fitted onto one disk, but adding indexes and inverted files required another disk to be used. Bowker and Tri-Star have devised a system that allows the user to make inquiries concerning the data stored on the CD-ROM. The program divides the screen into three windows: the top for query and field, the middle for viewing the results of a search, and the bottom for identifying function keys. The text can be searched very quickly and easily on eight fields derived from the approximately forty fields found in the dictionary entries. These include LEMMA, ETYMOLOGY, SENSE, and LABEL (part of speech, usage, specialized language), and, from the quotation field, DATE, AUTHOR, WORK, and QUOTATION TEXT. The results may be viewed in the saved queries window.
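Incidentally, the character count quoted above suggests an answer to Mark's earlier question about downloading the OED at 2400 baud. A minimal back-of-the-envelope sketch, assuming roughly 10 bits per character on an asynchronous line (start bit, 8 data bits, stop bit) and ignoring file-transfer protocol overhead:

```python
# Rough estimate only, not a measured figure: time to move the OED's
# ~350 million printed characters (the ICC keying figure quoted above)
# over a 2400-baud modem line. Assumes ~10 bits per character on the
# wire and no allowance for protocol overhead or retransmissions.

chars = 350_000_000      # printed characters keyed by ICC
bits_per_char = 10       # start bit + 8 data bits + stop bit
baud = 2400              # bits per second

seconds = chars * bits_per_char / baud
days = seconds / (60 * 60 * 24)
print(f"about {days:.1f} days of continuous transfer")
```

At that rate the full text would tie up a modem continuously for something between two and three weeks, which rather supports Mark's implied point that a local library service beats dial-up retrieval.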
The system is still being developed and is fairly rudimentary at the moment. For instance, since the earliest date of a word's occurrence is not marked in the data, one cannot find out what words in a particular field first entered the language. Similarly, it is impossible to search on the ends of words or within a word. No doubt this will be corrected in time. The integration of the OED and its four Supplements is expected to be completed and published in book form in the Spring of 1989, extending to 20 volumes. The second edition of the OED will then be available on CD-ROM. In the meantime, the University of Waterloo is developing two inquiry tools, Generalized OED Extracting Language (GOEDEL) and PAT, which together form a very powerful means of searching and retrieving information from the database. From: CBS%UK.AC.EARN-RELAY::EARN.UTORONTO::LISTSERV 13-SEP-1989 12:21:48.68Revised List Processor (1.6a) Subject: File: "BIOGRAFY 1" being sent to you Date: Wed, 13 Sep 89 07:00:25 EDT X-Humanist: Vol. 1 Num. 1073 (1073) Autobiographies of HUMANISTs The following is a collection of brief biographies written by members of HUMANIST. Many of these are spontaneous statements originally included in notes requesting membership in HUMANIST, and for this reason they share no common format, style, or scope. I have edited them only slightly. I am circulating the collection so that we can get an idea of who we are professionally and what range of interests and talents we possess. Later a more detailed and systematic questionnaire will be circulated. Additions, corrections, and updates are welcome, to MCCARTY at UTOREPAS.BITNET. W.M. 4 July 1987 ----------------- *Baldini, Pier (ATPMB@ASUACAD) Department of Foreign Languages, Arizona State University, Tempe AZ. Supporting applications in foreign languages. ----------------- *Balestri, Diane I am an Assistant Dean of the (undergraduate) College at Princeton University. 
The office is responsible for the undergraduate curriculum and life. So I do my share of advising students, but half of my time is given to nurturing the integration of computing into the curriculum. I work with faculty in all departments to help acquire resources that will (in my judgment) be put to the best possible educational use. I have written successful grant proposals with faculty to the state of NJ and to Apple. I also work very closely with the Office of Computing and Information Technologies to see that their planning is well coordinated with the educational goals of the university. This, happily, is a very constructive working relationship. We have been developing plans for student access to computing, for instance, as well as for faculty support. How I got to this position is odd, but not unusual. My PhD is in English. I taught fulltime for several years, moved with my husband, and ended up a part timer in composition at Bryn Mawr College; from there to Assistant Dean; to a FIPSE grant for coordinated teaching of writing and computer programming; to the chair of the FIPSE Technology Study Group (70 FIPSE project directors, FIPSE being a funding agency in the Dept. of Ed, who all are working with technology in post secondary settings)--that while at BMC--to Princeton. The Group is finishing a report directed at Academic Deans and those who implement academic decisions about computing, in which we lay out a series of recommendations about powerful applications of computing to learning along with some of the consequences for institutions which get into the computing game. That should be available sometime this fall, and coming from a set of innovative, mostly faculty, types, will be interesting. ----------------- *Bjorndahl, Sterling - Claremont Grad. School Institute for Antiquity and Christianity, Claremont Graduate School, Claremont, CA 91711 (714) 621-8066 (o) (714) 624-7110 (h) I am a graduate student in the New Testament program. 
I am resident computer consultant and programmer at the Institute for Antiquity and Christianity. I work mostly in the MSDOS and Ibycus microcomputer environments. ----------------- *Borchardt, Frank L. Department of German, Duke University, Durham, NC 27707 [919] 684-3836 [919] 489-5949 Principal Investigator, Duke University Computer Assisted Language Learning (DUCALL) Project. Should like access to HUMANIST. Have incipient interest in Neural Network Computing for natural language applications. ----------------- *Burkholder, Leslie University affiliation: Carnegie Mellon University Description: Editor, The Computers and Philosophy Newsletter; development of instructional software in philosophy (in particular, logic); work on a bibliography of available instructional software in philosophy; instructor for a computers and ethics course. ----------------- *Burnard, Lou Educated mostly at Bristol Grammar School and Balliol College (Oxford) during the 1960s, many of the attitudes of which remain curiously entrenched. Graduated in English 1968; then took one year off to research swinging London subculture before returning to take BPhil in 19th c. English studies at Oxford 1969-71 (anywhere else it would be called a masters), specialising in Dickens, 19th c. theatre, gutter press etc. Took up lectureship in English at University of Malawi 1971-3 (essential 3rd World life experience, but academically and politically depressing). Gruelling apprenticeship at OUCS from 1974; involved learning PLAN (ICL 1900 assembler), doing Programme Advisory, writing GEORGE3 macros, installing and maintaining various odd software packages, some of which are still with us (Famulus, Snobol, etc) many of which are not. Software for which I can take some of the credit includes:- Oxeye (EYEBALL stylistic analysis prog rewritten in Snobol; now defunct); design of OCP command language and structure; FAMULUS77 (complete rewrite of Famulus package, now being distributed by U of Edinburgh).
In 1976 formalised our ragbag of electronic texts into the Oxford Text Archive and went around begging for more. Text Archive gradually took over half of my life (see semi-official history of same, available on request). Over last two years have been tinkering with possibility of transforming suitable quantities of texts into online database; to this end developed PQR software to interface to ICL's CAFS (wizard hardware text searching engine; see various papers), tried it out with some success on Oxford Shakespeare texts, various bits of Thesaurus Linguae Graecae, Bodleian pre-1920 catalogue. In 1980 OUCS was one of first UK universities to dip a toe into the world of database systems for other than "defense" projects, cornershop accounting or comp sci fanatics. Design and construction of weird and wonderful database systems is what occupies the other half of my time. The first one was to do with the wild bird population of Wytham Woods and its breeding habits; the most recent with the Great Palace at Byzantium, in fact and fiction. In between I've had to become quasi-expert on Attic Red and Black Figure Vase Paintings, Greek Personal names, 17th c. assize court records, 19th c Philadelphia business, museum conservation and collection management etc etc. I'd really rather be working with literature, but the literary scholars don't beat on the door in the way the others do; I think humanities people in general remain depressingly ignorant about what database systems are and how they can be used. One day I hope to find time to write a book to try to change that. Meanwhile I only get to write the occasional book review, electronic mail message, conference paper, lecture etc., which is not enough. 
----------------- *Bush, Chuck Humanities Research Center Brigham Young University (ECHUCK@BYUHRC) ----------------- DEPARTMENT OF FRENCH AND ITALIAN, UNIVERSITY OF EXETER, EXETER EX4 4QH GB 392-264209 Have worked on and am working on computer-assisted concordances, and on research into the development of an expert system for the teaching/correction of French and French phonetics. ----------------- *Camilleri, Lelio Professor of Computer Music at the Conservatory of Music "L. Cherubini", Florence and member of the research unit of the Musicological Division of CNUCE, Institute of the National Research Council of Italy. Research work concerns the study of the cognitive processes underlying musical knowledge by means of automated tools, and the application of computational methodologies to music theory, analysis and composition. Recent publications are: Music, Mind and Programs (DIOGENES, 133, 1986), The Current State of Computer Assisted Research In Musicology in Italy (ACTA MUSICOLOGICA, LVIII/2, 1986), A Software Tool for Music Analysis (INTERFACE, XVI/1, 1987), Towards a Computational Theory of Music (SEMIOTIC WEB, 1987). He participates in the project Basic Concepts in the Study of Musical Signification. DIVISIONE MUSICOLOGICA DEL CNUCE/C.N.R. Conservatorio di Musica L. Cherubini Piazza delle Belle Arti 2 50122 Firenze ITALIA ++39-55-282105. ----------------- *Candlin, Francis E. I am the programmer at the DISH History and Computing Laboratory at Glasgow University, Scotland. DISH consists of a room of micros, where various lecturers in the History Departments have implemented teaching using machine-readable sources such as the Census, Valuation Roll, Shipping records, Company accounts. We have a programme of software development, so far concentrating on flexible data entry and tabulation of data. Our address is: DISH History Computing Laboratory, University of Glasgow, 2 University Gardens, Glasgow G12 8QQ. ----------------- *Cartwright, Dana E.
3rd Director of Academic Computing Services, Syracuse University, Syracuse, NY, 13244-1260, 315-423-4504, DECARTWR@SUVM. Employed at Syracuse University since 1971. While my undergraduate degree is in Physics and my Master's is in Computer Science, I have a strong interest in fine arts, music, and the humanities, so when worrying about computing I pay special attention to those areas. I view my job as being a problem solver (in contrast with being a supplier or controller of computing). I happen to use computing to solve problems, but that is almost incidental. I am particularly fascinated with the way artists, musicians, and humanists interact with computing technology. A major thrust at Syracuse University is bringing such people into close contact with supercomputers and advanced graphics devices (while not neglecting the Macintosh!). For example, I have just hired (into the Academic Computing Center!) a recent SU graduate with an MFA in Computer Art, who will help scientists and engineers better visualize the results of their research (and anyone else needing such help--it's just that I think those two groups need SERIOUS assistance). I am a dedicated Macintosh user (SE at the office, Plus at home). I primarily use WriteNow, PageMaker, SuperPaint, and FullPaint (this should give you some insight into my philosophy about computing). Professionally I am a programmer (every nickel earned since 1969 has come from computing, first as a systems programmer on large IBM mainframes, then as a network software creator from 1977 through 1983), but inside of me lurks a writer, singer, and illustrator who just happens to like computers. ----------------- *Evra, James W. van (PHILOSOPHY DEPT.) My primary interest is in the teaching of formal logic by computer. The information you requested is as follows: Name: James Van Evra Address: VANEVRA@WATDCS.BITNET Telephone: (519) 8851211x2449 Affiliation: Dept. of Philosophy, Univ.
of Waterloo ----------------- *Gillette, Jeffrey William (DYBBUK at TUCCVM.BITNET or ducall!jeff at duke.edu) Humanities Computing Facility, 104 Languages Building, Duke University Durham, NC 27706; phone: 919/684-3637 As Associate in Research for the Humanities Computing Facility, I am responsible for all software development in the Duke project. The largest share of this consists of Calis, the "Computer Assisted Language Instruction System" which many universities and publishers use to provide supplemental computer exercises to foreign language students, and the Duke Language Toolkit - a package which allows IBM PCs to work with text in Russian, Greek, Hebrew, Coptic, and various other non-Roman languages. My academic work lies principally in the study of Christian Origins. I am interested in the use of computer assisted text research tools, particularly those which give access to antique and medieval documents in Greek, Latin, Coptic, and other relevant languages. The most interesting technologies for this include CD-ROMs, graphics packages, "electronic concordance" tools, and statistical applications. My professional work focuses on the task of using computers in the acquisition of a second language. Here my interests center on the use of technology for second language learning, the implementation of language acquisition strategies in a computer medium, and the validation of computer assisted language learning applications. ----------------- *Griffin, Catherine I work at the Oxford University Computing Service in the Computing in the Arts section. While I help all arts users, my specialty is typesetting, and most of my time is spent helping users typeset a wide variety of languages and alphabets (from English and the European languages to Syriac and Hieroglyphs).
My main duty at Oxford is to run the Oxford end of our national academic typesetting service (although for the past 6 months, and for the foreseeable future, due to lack of staff I have been running both the local and the national ends). I have taken part in broader-based computing in the arts seminars and conferences here and abroad, and have helped Susan for many years with her SNOBOL course and with her other courses. ----------------- *Hare, Roger I am involved in training here at the University of Edinburgh, and have a particular interest in Humanities Computing. (Training Group, Computing Service, University of Edinburgh, 59 George Square, Edinburgh, Scotland) ----------------- *Henry, Chuck I am the person in charge of the humanities and history division of the Columbia Libraries. ----------------- *Hockey, Susan (SUSAN%VAX2.OXFORD.AC.UK@UK.AC) After taking a degree in Oriental Studies (Egyptian with Akkadian) at Oxford University I worked as a programmer/advisor at the Atlas Computer Laboratory which at that time was providing large scale computing facilities for British Universities. There in the early 1970's I wrote programs to generate non-standard characters on a graph-plotter and was involved with the development of version 2 of the COCOA concordance program. In 1975 I moved to Oxford which now supports various services for computing in the humanities which are used by other universities. I am in charge of these facilities and also teach courses on literary and linguistic computing and on SNOBOL. Both of these courses have been turned into books. I have been a Fellow of St Cross College, Oxford since 1979 and I now look after the computing interests in college. I have lectured on various aspects of humanities computing in various corners of the globe, more recently on current issues and future developments for humanities computing, Micro-OCP and its uses and on computers in language and literature for a more general audience.
My recent activities have been concerned with (1) Version 2 of the Oxford Concordance Program and Micro-OCP. Both are being tested now and are in the final stages of documentation. (2) The Association for Literary and Linguistic Computing, of which I am currently Chairman and am on the editorial committee of Literary and Linguistic Computing. My next project will be concerned with the introduction of computers in undergraduate courses at Oxford. These courses consist almost entirely of the detailed study of set texts, and this project, which is funded under the UK government Computers and Teaching Initiative, will set up a University-wide system for analysis of these texts via IBM-PC workstations linked to a large VAX cluster at the central service. Susan Hockey, Oxford University Computing Service 13 Banbury Road Oxford OX2 6NN England ----------------- President of the Association for Computers and the Humanities. She holds a Ph.D. in English Literature with a Ph.D. minor (M.S. equivalent) in Computer Science from The Pennsylvania State University. Currently, she is an Assistant Professor of Computer Science and a member of the Cognitive Science faculty at Vassar College, where she teaches courses in programming language theory, compiler design, natural language processing, data structures, and computer architecture in addition to introductory programming. She also teaches courses in computing for students of literature and language and has written a textbook for use in such courses, entitled Pascal for the Humanities. Her research includes computer-assisted analyses of structure and theme in William Blake's The Four Zoas; her primary area of research interest is developing formal models and methods for analysis of meaning in literary texts. She has given numerous papers and lectures on her research as well as other aspects of humanities computing, and she organized and led a workshop on Teaching Computers and the Humanities Courses in summer, 1986.
She is also active in the area of computer science education and is developing guidelines for the design of joint majors involving computer science and a second discipline. ----------------- *Jones, Sarah Rees (Bitnet: SRRJ1 at VAXB.YORK.AC.UK) History Department, Vanbrugh College, University of York, Heslington, York, YO1 5DD, U.K. Tel. York (0904) 430000 ext. 5893 I am in the History Department of the University of York, U.K. For a while now I have been engaged in creating a data-base of medieval title deeds for the city of York for research purposes, and with 3 other colleagues am now planning ways of introducing data-processing techniques to undergraduates in our teaching next academic year. Humanist seems to be producing a lot of discussion which would be of great use to us as we "learn how to use the wheel". ----------------- *Katzen, May I am Project Manager of the Office for Humanities Communication. The Office is funded by the British Library, specifically to encourage the use of computers in the humanities. To this end we run HUMBUL, an online bulletin board for computing in the humanities, which is available on JANET, as well as a printed Newsletter, the Humanities Communication Newsletter which appears quarterly. We also arrange a series of conferences, workshops, and specialist meetings. Among our recent efforts have been an Anglo-French colloquium on the use of expert systems in research in the humanities, CATH87, a conference on computers and teaching in the humanities, a series of specialist meetings on computer applications in fields such as medieval studies, music, and so on, as well as demonstrations to alert potential users to the wide range of applications of computerised systems and services. In addition, the Office conducts research in these fields.
Office for Humanities Communication, University of Leicester, Leicester LE1 7RH; phone: (Leicester) 522598 ----------------- *Kaufman, Steve, Hebrew Union College account I am editor of the new project, the Comprehensive Aramaic Lexicon, centered at Johns Hopkins (CAL@JHUNIX). Steve Kaufman (BITNET address VSSVHUC@UCCCVM1) Hebrew Union College 3101 Clifton Ave. Cincinnati, OH 45220 USA (phone #513-221-1875). ----------------- *Kennedy, Mark T. 815 Watson Labs, 612 W. 115th Street, Columbia University, New York, N.Y. 10025; phone: (212) 280-3259. What I do to support computing in the humanities: I'm the manager of what was the computer center's user services group (12 consultants, 1 technical writer, 4 administrative staff, 40 part-time student consultants). However, in an attempt to meet the growing needs of the entire university research community, the computing center and the university libraries have merged into one organization at Columbia known as the 'S.I.C.' (the Scholarly Information Center). My group works closely with reference librarians, faculty, and staff from the humanities in order to help them use computers and communications facilities to further their work. We also administer an IBM grant which has placed a large number of microcomputers in the hands of humanities faculty on campus. I will send you the latest copy of our SIC journal, which contains a number of articles about computing in the Humanities. ----------------- *Kruse, Susan I am a Computer Advisor within the Humanities Division of the Computer Centre at King's College London. Although many Universities in Britain increasingly have a person within the Computer Centre who deals with humanities enquiries, King's College is unique in having a Humanities Division. There are eight of us within the division, some with specific areas of expertise (e.g. databases, declarative languages) and others (like myself) who deal with general issues.
Some of us are from computer backgrounds; others, like myself, are from a humanities background (in my case, archaeology). We cater to all users within the College but specialise in providing a service for staff and students in the arts and humanities. This involves advising, teaching, and writing documentation. The aims and activities of HUMANIST are therefore of some interest to me, and I would very much like to be placed on your list: Computer Centre King's College London Strand London WC2R 2LS England ----------------- *Lancashire, Anne (ANNE at UTOREPAS) As first Acting Chairman of the English dept. and now Vice-Provost (Arts & Science, Univ. of Toronto), I have been attempting to encourage use, by humanists, of computing technology in research and in teaching, though my personal use of computing has so far been pretty mundane--word-processing, indexing, electronic mail. I am now beginning to store on the computer the transcriptions from my current medieval and Renaissance theatre records project, with a view eventually to manipulating the data in various ways (e.g., sorting records by date, by type, etc.). ----------------- *Lowry, Anita I am a Reference Librarian in the Humanities and History Division of the Columbia University Libraries, with special responsibilities for providing information, training, resource development, etc. relating to computer applications in the humanities; areas of particular interest and expertise include information retrieval, bibliographic data base creation and management, and text analysis. I consult with faculty and students on projects, teach classes, write articles and training materials, and build collections of print and machine-readable resources to support computing in the humanities. I look forward to exchanging information and views with other HUMANISTs.
Anita Lowry Reference Department Butler Library 325 Columbia University New York, NY 10027 (212)280-2242 (cul.lowry@cu20b.columbia.edu) ----------------- *Makkuni, Ranjit System Concepts Laboratory, Xerox Palo Alto Research Center, 3333 Coyote Hill Road, Palo Alto CA 94304. (415) 494-4387. My research explores the potential of electronic technologies -- computers and video -- for preserving and disseminating traditional crafts. As a first experiment, I am setting up a project that will apply electronic technologies to the practice of Tibetan Thangka painting, which is endangered by external forces. My research explorations seek to demonstrate the relationship of electronic technologies to the preservation of traditional values while, at the same time, illustrating the changes brought by electronic media to Thangka painting. ----------------- *McCarthy, William J. Dept. of Greek and Latin, Catholic University of America, Wash., D.C. 20064 (202) 635-5216/7 In our department I am the party responsible for acquiring and maintaining hard- and software. Although I have no training or particular interest in programming, I am interested in HUMANIST. ----------------- *McCarty, Willard Centre for Computing in the Humanities, University of Toronto 14th floor, Robarts Library, 130 St. George Street, Toronto, Canada M5S 1A5 (416) 978-3974. I am an editor, researcher, academic consultant, and administrator in the Centre for Computing in the Humanities (CCH), and a Miltonist with strong interests in classical and biblical literature. In the former role I edit _Ontario Humanities Computing_, the newsletter of the provincial Ontario Consortium for Computing in the Humanities; with Ian Lancashire, _The Humanities Computing Yearbook_ (Oxford Univ. Press); HUMANIST; and occasional guidebooks. My research interests in computing centre on personal information systems, i.e., database software useful in an academic setting chiefly for management of source material and notes.
On the basis of the experience with a prototype I wrote some time ago, I've designed a more adequate system and given papers on it. This design is currently being implemented commercially. Within the CCH I also supervise a couple of people, help organize conferences, and assist in other local work. Most recently I have completed a lecture tour of several European countries both preceding and following the ALLC/AIBI conference in Jerusalem (June 1988). In my latter role, as Miltonist, I have written an essay on the classical motif of descent into the underworld in relation to Satan's journey in Paradise Lost (UTQ 56.2) and, in support of research on Satan's narcissism, another on metaphorical mirroring in classical literature (Arethusa, forthcoming). At the moment I am finishing a chapter on Ovid's Narcissus for a book of collected essays. This recent writing is part of a larger investigation of the pragmatic and theoretical issues raised by Milton's use of biblical and classical sources. (Tracing and managing the evidence of such sources and discovering the patterns in them is work for which a computer is an admirable assistant.) Once the current project is out of the way, I plan to do work on the theory of allusion itself and to return to the focus of my Ph.D. thesis, the relation between Paradise Lost and the Bible. I am a founding member of the ACH/ALLC Special Interest Group on Humanities Computing Resources, of which HUMANIST is an expression, and an enthusiastic though critical proponent of e-mail. ----------------- *McCutchan, Walter Works for DCS, but he has a long history of interest in computing in the humanities and he does much of the humanities consulting over in DCS. Also has worked on projects at the Centre for the NOED. Particular responsibility at DCS is as the SPIRES expert.
User Services Department of Computer Services, Math and Computer Building, University of Waterloo (519)-885-1211 x.6447 ----------------- *Mitchell, David (D.MITCHELL at QMC.AC.UK) Department of Geography and Earth Science, Queen Mary College University of London, Mile End Road, London E1 4NS England Tel. 01 980 4811 ext. 3631 I am a post-graduate researcher in human geography in the Department of Geography, Queen Mary College, University of London. I am part of the team who teach computing to the first-year human geography undergraduates. I will also be involved in the installation of software on the new departmental micro-VAX, and in the general upkeep of the same system. Thus issues relating to teaching computing skills to humanities students are of great interest to me, as is any discussion of the specific problems faced in the implementation phase. My views on the general teaching of computing skills to non-computer-science-based and non-science-based students will shortly be appearing in the Journal of Geography and Higher Education. ----------------- *Mok, Shu-Yan I am a Ph.D. student in the philosophy department of York University. Among my interests is formal philosophy. Recently I learnt PROLOG. Department of Philosophy, York University, North York, Ontario M3J 1P3 Telephone numbers: 736-5113(office); 736-1162(home). ----------------- *Nardocchio, Elaine Computer address: ELAINE AT MCMASTER or CJONES at UTOREPAS ----------------- I am working at The Norwegian Computing Centre for the Humanities (there is a note about us on HUMBUL). Since we are a national service centre for computers in the humanities in general, our work is fairly varied. My field of interest is processing of non-English texts, with a bias towards papyrology, but I'm also working with different technologies for optical storage.
----------------- *Owen, David -UA CCIT Academic Computing Academic Computing, Center for Computing and Information Technology, The University of Arizona, Tucson, AZ 85721. [602] 621-2835 or 2915 I am currently in Academic Computing at the University of Arizona, and have close links with the Language Research Center here. It is the closest thing we have to a Humanities Research Center. Next year I shall be joining the Philosophy Department and hope to introduce them to the use of Computer Conferencing as a supplementary teaching tool. I will retain my links with the Language Research Center and hope to start some projects there. ----------------- *Page, Stephen I am completing a D.Phil. at Oxford on uses of computers in music research and music analysis, my main area of interest being database query systems for music. I also act as moderator for the music-research mailing list, which has a couple of hundred recipients interested in uses of computers in music research, music education, and other areas (although our scope normally excludes sound generation and music composition). Programming Research Group, University of Oxford, 11 Keble Road, Oxford OX1 3QD, U.K.; and Advanced Information Technology Group, Arthur Andersen & Co., Management Consultants, 1 Surrey Street, London WC2R 2PS. ----------------- *Rahtz, Sebastian (Bitnet: CMI011 at IBM.SOTON.AC.UK) Computer Science, University, SOUTHAMPTON S09 5NH, UK 44 703 559122 ext 2435 (international) Sebastian Rahtz is a Lecturer in the Department of Electronics and Computer Science at the University of Southampton, UK, with responsibility for teaching courses for the Faculty of Arts. These courses are all examined, and count towards student degrees; they include an introductory course (3 terms) and several 1-term courses for the Archaeology Department.
Research includes two joint projects with Archaeology (a graphical database, and excavation simulation for teaching), the Protestant Cemetery in Rome, and various issues in electronic publishing. Prior to this appointment in 1985, he worked for Project Pallas at the University of Exeter; previous incarnations were as a field archaeologist (from 1977 to 1984), as editorial assistant on the _Lexicon_of_Greek_Personal_Names_ (typesetting, mainly), and as a Classics and Modern Greek undergraduate. I teach courses for the Faculty of Arts here: - Introduction to Computing - Literary Computing - Archaeological Computing - Advanced Archaeological Computing - (from 1987) new MSc in Archaeological Computing (I am co-organiser, and teach part of one unit) All these courses are examined and form part of an undergraduate degree. ----------------- *Richmond, S. (416)224-3180 (416)889-3558 17 Jonathan Gate Thornhill, ON L4J 3T9 Canada Presently I am attempting to develop Socratic teaching computer systems. I am in the MA program at OISE in the department of Computer Applications to Education. I earn my living as a technical writer and technical support person in a Federal Government Department. Previously I taught philosophy and have a Ph.D. from Boston University. What I am seeking is to find how far one can go in simulating Socratic teaching on the computer. I have written a Socratic guided text storage system in dBASE. Now I am looking into the suitability of PROLOG systems for writing more advanced systems--such as a system that simulates "baby" learning as a subset of Socratic learning. I would be interested in contacting humanists interested in the theory of learning, computer simulations of learning, computer simulations in general, PROLOG, or other AI-type languages available on PCs. ----------------- *Roberts, D. D.
(PHILOSOPHY) Department of Philosophy, University of Waterloo, N2L 3G (519) 885 1211 x2638; res: (519) 885 0315 I do all my writing (early drafts, the whole bit) on the computer, using the UW 4341's, usually from a terminal at home (with a Sytek connection on a dedicated phone line). All course descriptions, handouts, exams, etc. are also done on computer. My start, in 1970, was in connection with the chronological edition of the writings of Charles Peirce, which is centered at Indianapolis (Indiana University-Purdue University at Indianapolis, IUPUI). Starting then on the projected 4th volume of the series (it is now at the printers, and is much changed from what we "finished" with 4 years ago), we began typing in previously unpublished manuscripts, and we now have, in addition to the volume 4 material, two future volumes' worth of material in the computer--much of it proofread here (our "preliminary" proofing) and some of it proofread at Harvard, where the original mss are. Waterloo is a modest-sized contributing part of the project, of which I am an associate editor. I have also been our department's "computer advisory (?) representative" and in that position have done a little individualized tutoring in the very basic use of computers, editors, and formatters (though now many of our students know far more than I do, just as it is in philosophy proper). Just this past weekend I took part in a kind of brainstorming session at IUPUI whose purpose was to begin to define a project for computers in the humanities which would make use of such recent computerized editions of writings as the Peirce and Santayana editions, as well as others in the fields of literature and history. I was one of the novices in the group, which included Joe Raben, Mike Preston (U. Colorado), Helen Schwartz (Carnegie-Mellon and elsewhere, soon to be at IUPUI), and others.
----------------- *Sano, Haj I have a BS in Computer Science and Engineering from MIT (1982), am working on an MSEE in Digital Signal/Image Processing at USC, and am thinking of studying with Bill Buxton at U of T in a few years. My humanities interests are in music (I have been a pianist for 24 years, guitarist for 4 years, and have played brass and percussion instruments along the way), and psychology (my humanities concentration at MIT). I have always tried to balance my technical education with the humanities, and am interested in hearing about application of computers in the humanities. I may be reached at: 741 Mar Vista Avenue Pasadena, CA 91104 home: 818/797-5995 work: 818/354-0370 (JPL/Caltech) Email: sano@jpl-vlsi.arpa or sano@vlsi.jpl.nasa.gov ----------------- *Sousa, Ronald de Address/Affiliation: Philosophy Dept. U of T 215 Huron St. #907 How do I support the Aims and Aspirations of the group? Well, I don't have that much to contribute, but I'm full of enthusiasm and MORAL support and have a morbid interest in desktop publishing, playing with fonts, etc. I've been waging for years a now-no-longer-completely solitary campaign to get my colleagues to get onto & use E-mail. I know quite a bit about Nota Bene. ----------------- *Sperberg-McQueen, Michael Visiting Research Programmer at the University of Illinois at Chicago. I was trained in German and comparative literature at Stanford University, and went to work at the Princeton University Computer Center as the resident humanist about the time I finished my dissertation and saw just how many job opportunities there really are for Ph.D.'s in Middle High German and Old Norse philology. Since February 1987 I have been at the University of Illinois at Chicago, where the user services and system programming groups are currently trying to divide me between them. For user services, I support database work in Spires; for systems I work on CICS and support the library automation system (Notis). 
Personal interests include textual criticism, stylistics, metrics, and semantic studies, and I want someday to develop tools for such work. Day to day, however, my tasks have been largely dictated by my users, so my time has gone to less high-flown projects: font design for printers, PCs, and terminals; a Hebrew / Arabic / English version of Kedit for the Near Eastern Studies department; an online catalog database for the PUCC data library; a simple text collation program for correcting texts read twice with a Kurzweil machine; and so on. When I can, I work on database designs for intensive textual study, and am collaborating with Karen Kossuth of Pomona to produce a Prolog parser for Middle High German syntactic studies. I serve on the ACH executive council, the steering committee of this incipient Special Interest Group for Humanities Computing Resources, and on the ACH Working Committee on Text Encoding Practices, which is endeavoring to formulate guidelines for the encoding of texts for research and teaching, in collaboration with other interested groups. In all of this I am an amateur. ----------------- *Swenson, Eva V. As a coordinator for computer matters for central administration at the University of Toronto, it is my job to keep in touch with computer-related activities on campus. Personally, I am a consumer of humanist's products. I have a continuing connection with the Dept of Computer Science and teach the occasional course on Data Processing or Economics of Computers. ----------------- *Thornton, Dave or Computer Science Department, Michigan State University, East Lansing MI 48824 [517] 353-0831 Paul Barrett and I edit concordances to Charles Darwin's work. So far, Cornell University Press has published three: -- Origin of Species, First Edition -- Expression of Emotions -- Descent of Man (in press; will be published by early Fall) Other concordances are in process or expected for the near future: -- notebooks: See Barrett et al.
edition of the Notebooks, which Cornell lists in the Fall catalog. -- Beagle -- Origin of Species, Sixth Edition We very much would like to be added to your list. We undertake to provide computer-readable text either of the source texts themselves or of the concordances to any legitimate researcher. ----------------- *Wiebe, M.G. My main computer interest is through the Disraeli Project, of which I am the head: we use the computer to process our materials all the way up to doing our own page-layouts and typesetting, mostly on a system originally developed in connection with the Project starting in the mid-70s. We also use the mainframe for running the indexing program CINDEX, for some communication, and for archiving materials. I am also the English Dept computing chairman, and have been involved with George Logan in devising departmental policy and installations for student and faculty computing. I am one of the Associates of the proposal submitted by Logan and David Barnard for a Centre of Excellence grant (Ian Lancashire has a copy of the application). My primary work however is in editing the Letters of Benjamin Disraeli: see Volume III, just published by U of Toronto Press. ----------------- *Winder, Bill (WINDER at UTOREPAS) As a doctoral candidate at the University of Toronto's French Department, my activities -- computing and otherwise -- are circumscribed by my thesis topic: "Maupassant: predictability in narrative". One axis of research concerns automatic abstracting: precisely why do automatic abstracting techniques fail with literary texts? Maupassant's 310 short stories will be the literary corpus. The other main axis of research concerns critical model building and the semantic role of predictability in texts and in the critical model. More generally, my interest in computing lies in translating semiotic theories (particularly those of L. Hjelmslev and C. S. Peirce) into a semblance of computational form.
This endeavour has led me to (Turbo) Prolog and Deredec. The use of the latter is presented in CHum's recent issue on France, where J. M. Marandin discusses "Segthem", a Deredec automatic abstracting procedure. My interest in Prolog developed out of my studies in logic (particularly combinatory logic and Peirce's existential graphs). As a teaching assistant for one of the French Department's computer application courses, my principal activities have been teaching WordPerfect and demonstrating packages such as Deredec, BYU concordance, TAT (my own French concordance package), COGS, and MTAS. ----------------- *Young, Abigail Ann Presently, I am part of the editorial team at the Records of Early English Drama (REED), checking mediaeval and Renaissance MS sources, translating Latin texts, and compiling Latin glossaries. But for a variety of reasons (primarily because the glossarial work rests on computer-generated concordances of the primary texts in our volumes) I have ended up co-ordinating computer work at REED. Most importantly, I take charge of the actual production of concordances, which includes transferring the text from a micro to a mainframe, and writing JCL to run the concordance package. I work with the typesetters to implement, maintain, and (when necessary) upgrade our computer-typesetting system, hardware and software; advise on the purchase of new hardware and software; write some programs to pre-process text for various applications; and generally troubleshoot computer problems at REED. My formal background is in Classical Languages, especially Latin, and Mediaeval Studies, especially the history of NT criticism in the 12th and early 13th centuries. I've learned what I know about computers from reading manuals, asking bushels of silly questions, and making lots of mistakes, especially when it came to learning C.
----------------- *Zacour, Norman Professor of Medieval History at the University of Toronto (just retired); interested especially in the history of the papacy of Avignon; just finished writing about the treatment of Jews and Muslims in 14th century legal works; now working on the history of the college of cardinals in the Middle Ages; has written a short manual on WordPerfect to get students of the Centre for Medieval Studies up and running on the IBM; and some quick programming for a blind friend who is a writer and a professor of English, to simplify his Life with DOS and Company, word processing in general, and keeping data about his students in particular. Interested in hearing about any software that will handle multiple variants of medieval mss., to produce notes giving the lemma, the line number, the variant(s) and their witnesses, etc. From: CBS%UK.AC.EARN-RELAY::EARN.UTORONTO::LISTSERV 13-SEP-1989 12:11:49.85 Revised List Processor (1.6a) Subject: File: "BIOGRAFY 2" being sent to you Date: Wed, 13 Sep 89 07:00:36 EDT X-Humanist: Vol. 1 Num. 1074 (1074) Autobiographies of HUMANISTs First Supplement Following are 20 additional entries to the collection of autobiographical statements by members of the HUMANIST discussion group and 1 update to an existing entry. Further additions, corrections, and updates are welcome, to MCCARTY at UTOREPAS.BITNET. W.M. 10 August 1987 ----------------- *Beckwith, Sterling 248 Winters College, York University, 4700 Keele St., North York, Ontario (416) 736-5142 or 5186. I teach at York University, have created and taught the only Humanities course dealing with computers, in the context of Technology, Culture and the Arts, and serve as director of computer music in the Faculty of Fine Arts, at York.
----------------- *Boddington, Andy Academic Computing Service, Open University, Milton Keynes, MK7 6AA I am a Research Adviser at the OU, responsible for advising a broad range of disciplines but specialising in the arts and social sciences. My particular professional interests at the OU are in encouraging conferencing and developing data handling and data analysis packages for the non-scientist and the 'computer timid'. I also specialise in statistical analysis. I am an archaeologist by training and inclination, and I am particularly active in propagating computing as an analytical tool within archaeology, as well as the benefits of desktop publishing to a discipline which produces large volumes of printed ephemera. ----------------- *Brown, Malcolm gx.mbb@stanford.bitnet ACIS/IRIS Sweet Hall, Stanford University, Stanford, CA 94305-3091 Humanities background. Undergraduate: UC Santa Cruz, BAs in Philosophy, German Literature Graduate: Universitaet Freiburg (two years); Stanford University (German Studies). Dissertation: "Nietzsche und sein Verleger Ernst Schmeitzner: eine Darstellung ihrer Beziehungen" Primary interests: European intellectual history from the Enlightenment to the present Computer background. Systems experience: IBM MVS, IBM VM/CMS; DEC TOPS-20; Berkeley 4.3 UNIX; PC-DOS and MS-DOS; Apple Macintosh. Current responsibilities. I support the Stanford Humanities faculty in all aspects of computer usage. We are currently looking at ways in which more powerful microcomputers (PS/2, Mac II) might assist humanist scholars in their research. Additional interests. All aspects of text processing, from data entry (such as scanning) to printing, which might loosely be called digital typography. Especially: page description (e.g. PostScript), typesetting (e.g. TeX, Interleaf, PageMaker etc), typeface design. ----------------- *Brunner, Theodore F. Theodore F. Brunner, Director, Thesaurus Linguae Graecae, University of California Irvine, Irvine CA 92717.
My telephone number is (714) 856-6404. Short description of the TLG: A computer-based data bank of ancient Greek literature extant from the period between Homer and A.D. 600 (we are now beginning to expand the data bank through 1453). ----------------- *Choueka, Yaacov Department of Mathematics and Computer Science, Bar-Ilan University, Ramat-Gan, Israel, 52100. ----------------- I am the Secretary of the Association for Literary and Linguistic Computing and a member of the editorial committee of Literary and Linguistic Computing, and co-author (with B. H. Rudall) of Computers and Literature: a Practical Guide, recently published by Abacus Press, along with a number of articles and papers on humanities computing. I look forward to hearing from you. ----------------- *Cover, Robin C. Assistant Professor of Semitics and Old Testament 3909 Swiss Avenue; Dallas, TX 75204 USA; I am the faculty coordinator of the (current) "Committee for the Academic Computerization of Campus"; we are just beginning to face up to the need for a distinct entity which will be responsible for academic applications of computers: software development for textual analysis; multi-lingual word processing; supervision of the student computer lab (with CAI for Koine Greek and Biblical Hebrew); purchase of workstation equipment dedicated to textual analysis (micro-IBYCUS, etc); faculty education in humanistic computing; etc. My specific role now is to represent to the administration the need for this new entity, the precedent for it (at other universities); definition of the role of the entity within institutional purpose; proposal for staffing, funding and organizational structure; etc. My special interests are in MRT archives and text retrieval programs to study encoded texts. 
----------------- *Curtis, Jared Department of English, Simon Fraser University, Burnaby, BC V5A 1S6 (604) 291-3130 I conduct research in textual criticism, including the use of computers, teach "Humanities research and computers" to graduate students, and give advice to colleagues and students. ----------------- *Erdt, Terry Graduate Dept. of Library Science, Villanova University, Villanova PA 19085 (215) 645-4688 My interests at present are optical character recognition, the scholar's workstation, and the computer as a medium, from the perspective of the field of popular culture. ----------------- *Goldfield, Joel Assistant Professor of French, Dept. of Foreign Languages, Plymouth State College, Plymouth, NH 03264; Tel. 603-536-5000, ext. 2277 My work focuses on stylostatistical and content analysis, especially in the field of 19th-century French literature. I am currently developing a sub-field called "computational thematics", wherein a selective database based on conceptually organized words and including frequency norms for appropriately lemmatized entries can be applied to thematic and content analysis. My current application is to the 19th-century diplomat and author Arthur de Gobineau, his use of "tic words", and other stylistic traits disputed by Michael Riffaterre and Leo Spitzer. I attempt to resolve this controversy through this conceptual, thematic, and stylostatistical approach. See the project description listed by Klaus Schmidt in the latest newsletter/booklet from the Society for Conceptual and Content Analysis (SCCAC). I would welcome comments on database structures, stylostatistical applications, and programming from other UNIX users, who may want to compare their experiences with those I described in my article for the ACTES of the ALLC meeting in Nice (1985), a 1986 publication by Slatkine, vol. 1.
I am hoping to prepare a manuscript on humanities computing on the UNIX system for publication within the next 3 years and would welcome all suggestions for contributions. The scope may be restricted later to literary and linguistic applications, depending on contributions and an eventual publisher's preferences, but, for the moment, everything is wide open. The only real computer connection with what I teach here in the University System of New Hampshire (Plymouth State College) is computer-assisted instruction/interactive videotape & videodisk. My 4-course/sem. teaching load typically includes 2 beginning French course sections, 1 intermediate course, and an advanced one (translation, culture & conversation, 19th-cen. Fr. lit., or history & civ.). I also conduct innovative FL teaching methodology workshops and consult with various public school and college foreign language departments on evaluating, using and authoring CALI/interactive video. ----------------- *Hare, Roger Training Group, Computing Service, University of Edinburgh, 59 George Square, Edinburgh, Scotland. Graduated in Applied Physics from Lanchester Polytechnic (Coventry) in 1972. First exposure to computing was in a second-year course (Algol on an Elliott 803) and a third-year training period (Fortran on IBM and Honeywell machines at UKAEA Harwell). Thereafter spent several years working in the hospital service in Manchester and Edinburgh, mostly in the area of respiratory physiology and nuclear medicine. Computing interests were re-awakened on moving to Edinburgh in 1974. After a couple of years away from computing, followed by a couple of years working as an 'adviser/programmer/trouble-shooter' for a bureau, re-joined Edinburgh University in 1980 as an 'adviser/programmer/trouble-shooter' on the SERC DECSystem-10. After three years or so in this job, joined the Training Unit of the Computer Centre (now the Computing Service), where I have remained.
We teach various aspects of computing, but my own interests are in the Humanities area (amongst others), literary analysis, languages suitable for teaching computing to non-numerate non-scientists, computerised document preparation (I don't like the terms word-processing and text-processing) and puncturing the arrogant idea held by many scientists that computers are solely for use by scientists, etc. I am currently looking (or trying to find the time to look) at Icon, Prolog, Lisp, Simula, Pop (?), etc. (I gave up on C!), with a view to using one of these as a language to teach programming to humanists. The first thing I have noted is that my head is starting to hurt! The second is that Icon seems to be a good idea for this sort of thing, though I am not deep enough into the language yet to be sure. If anyone out there has any ideas/experience on this one, I'll be happy to pick their brains... ----------------- *Holmes, Glyn <42104_263@uwovax.UWO.CDN> Department of French, The University of Western Ontario, London, Ontario, Canada N6A 3K7. Phone: (519) 679-2111 ext. 5713/5700. Main area of research is computer-assisted language learning, with emphasis on input analysis and instructional design. Most of my publications have been in these areas. I have also taught a course on French and the Computer, which covered CALL, literary and linguistic computing, use of databases, etc. I am the editor of Computers and the Humanities. ----------------- *Hulver, Barron Houck Computing Center, Oberlin College, Oberlin, OH 44074 My position is technical support analyst. Basically I assist students and faculty in trying to use our computers and networks. ----------------- *Kashiyama, Paul I am a philosophy Ph.D. candidate at York University concentrating in the area of ethics and jurisprudence.
I am particularly interested in the potential roles computers/AI would play in formulations of ethical/legal judgments, and in the philosophical question of whether such judgments are adequate replacements for human decisions or at least adequate models of ethical and legal decision-making procedures. My background in computing includes programming in BASIC, Pascal, Prolog, and some C; applications programming in FRED and dBASE III+; and training and teaching experience in database management, spreadsheet organization, word processing, and introduction to programming for children and business persons using personal/micro computers. ----------------- *Matheson, Philippa MW Athenians Project, Dept. of Classics, Victoria College, Univ. of Toronto, Toronto, Canada M5S 1A1 (416) 585-4469 My university affiliation is the ATHENIANS project, Victoria College, University of Toronto, and my humanist computing activities are varied: programs for the Canadian classics journal, Phoenix; all forms of computer and scholarly aid for the ATHENIANS (Prosopography of ancient Athens) project; an attempt to establish a bibliography of articles in Russian (translated) on the subject of amphoras (ancient wine jars) on the EPAS machine; as well as trying to exchange amphora data for a database project on the stamps on ancient wine jars (called, imaginatively, AMPHORAS). I call myself a computer consultant, and am mostly consulted about how to make PCs deal with Greek... ----------------- *McCarthy, William J. Dept. of Greek and Latin, Catholic University of America, Wash., D.C. 20064 (202) 635-5216/7 Although untrained in computer science (and doubtless possessing little aptitude for it), I have plunged considerable time into an effort to harness for myself and my colleagues the powerful tools of study and "productivity" which the computer offers to accommodating scholars.
My hope is that groups such as HUMANIST will be able, in some way, to guide the development of a fruitful conjunction of technology and humanism. ----------------- *McGregor, John University of Durham, Abbey House, Palace Green, Durham DH1 3RS, UK Areas of interest: Septuagint/ Greek/ CALL/ Bible Present status: Developing CALL software for NT/Biblical Greek ----------------- *Roosen-Runge, Peter H. Dept. of Computer Science, York University, 4700 Keele St., North York (416) 736-5053 I have been involved with supporting and extending computing in the humanities for many years (I think I taught the first course at the UofT on computing for humanists in 1968!). Current projects include melody generation based on a model of a "listener" expressed in Prolog, and a music database system under Unix. I am also very interested in the impact of large comprehensive text databases on teaching, and the role of universities in creating and publishing such databases; but I am only in the early stages of formulating a research project in this area. ----------------- *Seid, Timothy W. 74 Clyde St., W. Warwick, RI 02983; (401) 828-5485 Religious Studies Dept., Box 1927, Brown University, Providence, RI 02912; (401) 863-3401 My involvement in computers started only when I came to Brown and began the doctoral program in History of Religions: Early Christianity two years ago. During last year, when TA positions were scarce, I was able to get a Computer Proctorship. Again, for this year, I hold such a position. The main project, for which we have an Educational Computing Grant from the university, will be to develop a CAI which will teach students about textual criticism. I've chosen to use HyperCard and am scanning in images with ThunderScan. This way I can show examples of manuscripts and have buttons which will enable the user to find out more about particular items. A personal project is a program written in Lightspeed Pascal that generates word-division options for ancient Greek.
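A word-divider of this kind is, at heart, an enumeration of all the ways a continuous string can be split into known lexical forms. The sketch below is purely illustrative and not Seid's program (it is in Python rather than his Lightspeed Pascal), using a tiny hypothetical romanized lexicon in place of real Greek data:

```python
def divisions(text, lexicon):
    """Enumerate every way to divide a continuous string into lexicon words."""
    if not text:
        return [[]]  # one way to divide nothing: the empty division
    results = []
    for i in range(1, len(text) + 1):
        head = text[:i]
        if head in lexicon:
            # Keep this candidate word and recursively divide the remainder.
            for rest in divisions(text[i:], lexicon):
                results.append([head] + rest)
    return results

# Hypothetical romanized lexicon; a real tool would need Greek inflected forms.
lexicon = {"en", "archei", "ho", "logos", "arche", "i"}
for option in divisions("enarcheihologos", lexicon):
    print(" ".join(option))
```

Even this toy lexicon yields more than one candidate division of the sample string, which is exactly the situation (scriptio continua admitting several defensible splits) that makes presenting "options" rather than a single answer the right design; a production version would also want memoization so long stretches of text stay tractable.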
I'm also a member of Brown University's Computing in the Humanities User's Group (CHUG) and co-leader of the Manuscript Criticism Working Group of CHUG. As a service to the department and the University at large, I maintain RELISTU, a Religious Studies Common Segment on the mainframe on which I archive the ONLINE NOTES, the BIBLICAL SCHOLARS ON BITNET ADDRESS BOOK, and HUMANIST. I have an article appearing soon in a book ("Isocrates: A Guide to Antiquity" in The Impact of CD-ROM: Case Studies from a User's Perspective). ----------------- *Sitman, David Computation Centre, Tel Aviv University I teach courses in the use of computers in language study and I am an advisor on computer use in the humanities. ----------------- *Zayac, Sue I work for the Columbia University "Scholarly Information Center". This is an experimental union of the Libraries and the Computer Center designed to "stimulate and support the productive and creative use of information technology by our faculty and students" (Pat Battin, Vice President and University Librarian). "Information technology" includes everything from parchment to CD-ROM, and from thumbing through a 3x5 card catalog to searching a database on a new supercomputer from the Vax workstation on your desk. My title is Senior User Services Consultant, Academic Information Services Group. My areas of responsibility are statistical programs, particularly SPSSX and SAS, word-processing, particularly the mainframe text-formatting product, SCRIBE, and a smattering of anything and everything that anybody might ask me about. I have a BA in Geology from Barnard College and a Masters from the Columbia University School of Public Health (major area was Population and Family Health). I'm one of the few people at the Computer Center who didn't major in Computer Science or Electrical Engineering. One of my great uses here is to play the part of "everyuser".
Interests are classical archaeology (I almost majored in Greek and Latin, but realized in time I had no talent for languages), history of science, history in general, ballet, armchair astronomy (I don't like the cold), gardening, and nature watching. I once did rock climbing but, like many of us in the computer field, I've gotten out of shape sitting in front of a monitor all day long. Mail is welcome, on any topic. From: CBS%UK.AC.EARN-RELAY::EARN.UTORONTO::LISTSERV 28-MAR-1990 16:18:51.40 Revised List Processor (1.6d) Subject: File: "BIOGRAFY 3" being sent to you Date: Wed, 28 Mar 90 09:07:31 EST X-Humanist: Vol. 1 Num. 1075 (1075) Autobiographies of HUMANISTs Second Supplement Following are 21 additional entries and updates to the collection of autobiographical statements by members of the HUMANIST discussion group. Further additions, corrections, and updates are welcome, to MCCARTY at UTOREPAS.BITNET. W.M. 28 August 1987 ----------------- *Barnard, David T. Head, Department of Computing and Information Science, Queen's University Kingston, Ontario, Canada K7L 3N6; 613-545-6056 My research interests are in communication systems, information systems, and literary applications. In the latter area, I collaborate with George Logan (English) and Bob Crawford (Computing Science). Our joint work has involved development of coding standards for documents being used in textual analysis, investigation of text structures for electronic books, and some preliminary work toward building an archive based on our encoding standard. I have just completed a five-year term as Director of Computing and Communications Services. ----------------- *Baumgarten, Joseph M. I teach Rabbinic Literature, Dead Sea Scrolls, and related subjects at the Baltimore Hebrew College, 5800 Park Heights Ave, Baltimore, Md. 21215.
Aside from using a Compaq computer for word processing in English and Hebrew, I am especially interested in CD-ROMs for accessing biblical and rabbinic sources in the manner of TLG. I am awaiting the results of the CCAT program to enable access to CD-ROMs with IBM-type computers. ----------------- *Beckwith, Sterling 248 Winters College, York University, 4700 Keele St., North York, Ontario (416) 736-5142 or 5186. I teach Music and Humanities at York University, have instigated and taught the only Humanities course dealing with computers that is currently offered there, under the rubric of Technology, Culture and the Arts, and serve as coordinator of computer music and general nuisance on academic computing matters in both the Faculty of Arts and of Fine Arts at York. I was the first researcher in an Ontario university to work intensively on the design of educational microworlds (for exploring and creating musical structures) using the then-obscure and still-poorly-exploited computing language known as LOGO. This led to my present interest in discovering what today's AI languages and methods can offer as vehicles and stimulating playgrounds for music-making and other kinds of artistic and intellectual creation. ----------------- *Bing, George 154 Thalia St., Laguna Beach, CA 92651; Phone: (213) 820-9410 I am a student at UCLA, and I work for the Humanities Computing program here to support the computer needs of the Humanities departments. ----------------- *Brainerd, Barron Department of Mathematics, University of Toronto, Toronto, Ont., Canada M5S 1A5 I am professor of mathematics and linguistics. My particular professional interests are in quantitative stylistics (using for the most part statistical methods) and early modern English. I have an Apple at home and an XT at the university and program naively in Basic and Snobol. I access SPSSX, which among other things I use in my course 'Statistics for Linguists,' via CMS.
----------------- *Burnard, Lou [note change of address, effective from 24th August ] I work at Oxford University Computing Service, where I am responsible for the Text Archive and for database support and design. I have designed and even written many bits of text processing software, notably OCP, FAMULUS and recently a general purpose text-searching interface to ICL's CAFS hardware search engine. But I don't think academics should write software at that level any more; just good interfaces to standard packages such as INGRES (or other SQL-compatible DBMS), BASIS... My main enthusiasm remains database design, which I see as an important and neglected area of humanities computing. ----------------- *Church, Dan M. Associate Professor of French, Vanderbilt University Box 72, Station B, Nashville, TN 37235, (615) 322-6904 (office), (615) 292-7916 (home) I have produced computer-assisted learning exercises for elementary French courses and a database containing information on all plays produced in state-subsidized decentralized theaters in France since World War II. And I have plans for many more projects using computers in the Humanities. ----------------- *Erdt, Terrence Graduate Dept. of Library Science, Villanova University, Villanova PA 19085, ph. (215) 645-4688. My interests, at this point in time, can be said to be optical character recognition, scholar's workstation, and the computer as medium from the perspective of the field of popular culture. ----------------- *Gold, Gerald L. Department of Anthropology, York University, North York, Ont. M3J1P3; (416) 225 8760 (home); (416) 736 5261 (office) I am a cultural anthropologist and a Metis (half-humanities/half-social sciences). I have developed an interest in the relationship of qualitative and quantitative data. More specifically, how can a computer assist with the storage and retrieval of field notes, archival materials, interviews, life histories and other textual materials?
Of specific interest is the preservation of the intrinsic character of narrative while using the computer as an analytical tool that can assist in statistical overviews and tabulation. In this sense, I am thinking beyond 'content analysis' which limits the qualitative side of data recovery. Some of my solutions are relatively simple, but I would like to discuss them and get feedback from others. More important, I am open to the suggestions and proposals that may reach my terminal. ----------------- *Goldfield, Joel D. Assistant Professor of French, Plymouth State College, Plymouth, NH 03264 USA My exposure to computers began in Saturday morning courses offered to ambitious high school students. I took FORTRAN 4 and "Transistor Electronics" in the early 1970's. The FORTRAN 4 manual was poorly written and the language itself seemed almost totally worthless for my musical and communications-oriented interests, so I summarily forgot it and paid more attention to French, literature, science and math, all of which seemed more useful. Also, some of my home electronic projects worked, some not, just like computer programs, as I later discovered. Although I majored in Comp. Lit. (French, German, Music) in College, I took a few math courses and had to complete computer assignments in BASIC, invented by a couple of genial professors in the same department. The son of the major architect was to be one of my "students" that summer when I served as an undergraduate teaching assistant on a language study abroad program in Bourges, France. How I ever successfully completed those BASIC programs on figuring probabilities for coinciding birth dates, etc., I'll never know. Most of what I wrote was based on "Euclid's Advanced Theorem," as we called it on our high school math team: "trial and error."
For my doctoral degree at Université de Montpellier III, I found that I needed to catalogue, sort and evaluate the distribution of vocabulary in a particular work of fiction in order to better understand the author's strange symbolic system and diachronic mixing of associated terms. I also discovered a French frequency dictionary that would supply an apparently valid and reliable norm for external comparison with the work's internal norms. Although my return to the States made on-line querying impossible, I was able to obtain a printout of all words, since, happily, the work had been included in the frequency dictionary's compilation. I learned as much of "C" and "awk" (a pattern-scanning language with C-like syntax under the UNIX system) as I needed to write programs to complement UNIX utilities. A colleague in Academic Computing graciously "tutored" me on many esoteric aspects of UNIX that were, and probably still are, obscure in its documentation. I worked on a methodology to organize my word, stylistic, and thematic data for computer-assisted research. Without this need and organizational "forethought" that also evolved as I learned more and more about the utilities and languages, all programming fireworks would have been useless sparkles. My major academic interests are computer-assisted literary research applied to literary criticism, computer-assisted language instruction/interactive video, foreign language teaching methodologies and excellent foreign language/culture teaching. ----------------- *Hockey, Susan Oxford University Computing Service, 13 Banbury Road, Oxford OX2 6NN England; telephone: +44 865 273226 After taking a degree in Oriental Studies (Egyptian with Akkadian) at Oxford University I started my career in computing in the humanities as a programmer/advisor at the Atlas Computer Laboratory which at that time was providing large scale computing facilities for British Universities.
There in the early 1970's I wrote programs to generate non-standard characters on a graph-plotter and was involved with the development of version 2 of the COCOA concordance program. In 1975 I moved to Oxford and began to develop various services for computing in the humanities which are used by other universities, including Kurzweil optical scanning, typesetting with a Monotype Lasercomp and the Oxford Concordance Program (OCP). I am in charge of these facilities and also teach courses on literary and linguistic computing and on SNOBOL. My publications include two books, based on my courses, and articles on various aspects of humanities computing including concordance software, Kurzweil scanning, typesetting, past history and future developments. I am also series editor for an Oxford University Press series of monographs, Oxford Studies in Computing in the Humanities. I have lectured on various aspects of humanities computing in various corners of the globe, more recently on current issues and future developments for humanities computing, Micro-OCP and its uses and on computers in language and literature for a more general audience. I have been a Fellow of St Cross College, Oxford since 1979 and I now look after the computing interests in the college. My recent activities have been concerned with (1) Version 2 of the Oxford Concordance Program and Micro-OCP. (2) The Association for Literary and Linguistic Computing, of which I am currently Chairman; I am also on the editorial committee of the ALLC's journal, Literary and Linguistic Computing. My next project will be concerned with the introduction of computers in undergraduate courses at Oxford. These courses consist almost entirely of the detailed study of set texts, and this project, which is funded under the UK government Computers in Teaching Initiative, will set up a University-wide system for analysis of these texts via IBM-PC workstations linked to a large VAX cluster at the central service.
----------------- *Hunter, C. Stuart: and to a related study of the impact of the translations of the Psalms on the development of the religious poetry of the renaissance in England. On the teaching side, I am actively involved not only in teaching basic courses in word processing and database applications in the Humanities but also in developing computer conferencing as a specific teaching tool. ----------------- *Koch, Christian < FKOCH%OCVAXA@CMCCVB > or < chk@oberlin.edu.csnet > Oberlin College, Computer Science Program, 223D King Building, Oberlin, OH 44074; Telephone: (216)775-8831 or (216)775-8380 I think it might be fair to say that I'm the token humanist on the computer science faculty here at Oberlin -- and I love the work. I come to computing from a long and eclectic background in the humanities. Am one of those people who always harbored the hope that a strong interdisciplinary background would ultimately serve a person in good stead. I think that now, working in the general area of cognitive science and computing, I'm probably as close to realizing that hope as I have ever been. My undergraduate work was in the Greek and Roman classics to which I added a masters degree in music history with pipe organ performance and another in broadcasting and film art. Ph.D. (1970) was essentially in literary criticism with psychoanalytic emphasis. Computing skills were picked up on the side during the 80's. Have also recently taken time out from the academic scene to work as a therapist with the Psychiatry Department of the Cleveland Clinic. Although I've been at Oberlin for some years, I joined the computer science faculty only in 1986 and am still sorting out directions and options. My computing interests are currently in the general area of natural language understanding, more specifically systems of knowledge representation and processing. As a kind of pet project I am working on developing an expert system for specialized psychiatric diagnoses. 
At the more practical level, in addition to teaching some traditional CS courses, I am charged with developing programming courses aimed at the student who wishes to combine computer programming skills with a major in a non-computer-science area. In the immediate future is the offering of a course dealing with the computer analysis of literary texts. Am also introducing a more theoretical course in the general area of mind and machine (cognitive science overview). Would much appreciate hearing from persons who would like to share experiences or make suggestions in these areas as well as in areas where computing may be involved in the analysis of 'texts' in music (computer-assisted Heinrich Schenker?) and the other arts. All ideas having to do with interesting ways of combining computer programming and other traditionally non-quantitative areas of study would be most welcome. ----------------- *Kraft, Robert A. Professor of Religious Studies, University of Pennsylvania 215-898-5827 Coordinator of External Services for CCAT (Center for Computer Analysis of Texts), co-director of the CATSS project (Computer Assisted Tools for Septuagint Studies), director of the Computerized Coptic Bible project, chairman of the CARG (Computer Assisted Research Group) of the Society of Biblical Literature, editor of the OFFLINE column in the RELIGIOUS STUDIES NEWS (dealing with computers and religious studies). BA and MA Wheaton (Illinois) College 1955 and 1957 (Biblical Lit.); PhD Harvard 1961 (Christian Origins). Assistant Lecturer in New Testament at University of Manchester (England) 1961-63; thereafter at University of Pennsylvania. Main interests are in ancient texts, especially Jewish and Christian, paleography, papyrology, codicology, and in the historical syntheses drawn from the study of such primary materials. The computer provides a fantastic shortcut to traditional types of research, and invites new kinds of investigation and presentation of the evidence.
I am especially anxious to integrate graphic and textual aspects (e.g. in paleographical and manuscript studies), including scanning and hardcopy replication. ----------------- *Kruse, Susan I am a Computer Advisor within the Humanities Division of the Computing Centre at King's College London. Although many Universities in Britain increasingly have a person within the Computer Centre who deals with humanities' enquiries, King's College is unique in having a Humanities Division. There are eight of us within the division, some with specific areas of expertise (e.g. databases, declarative languages) and others (like myself) who deal with general issues. Some of us are from computer backgrounds; others, like myself, are from a humanities background (in my case archaeology). We cater to all users within the College, but specialise in providing a service for staff and students in the arts and humanities. This primarily involves advising, teaching, and writing documentation. ----------------- *Logan, Grace R. Arts Computing Office, PAS Building, University of Waterloo, Waterloo, Ontario. I received my B.A. at Pennsylvania State University in 1956 and my M.A. at the University of Pennsylvania in English in 1960. My training in computing has been largely an apprenticeship supplemented by courses at Waterloo in math and computing. I am now a consultant and programmer for the Arts Computing Office at the University of Waterloo where I have been since 1970. I have been associated with computing in the humanities since 1958 and I helped to organize the Arts Computing office at Waterloo in the early seventies. I was a member of the organizing committee for ICCH/3. I am active in the ACH and OCCH where I am serving on the executive committees. I have also been active in the MLA where I have served as the convenor of the computer section. I have developed program packages for use by Arts users and I have taught courses in computer literacy for the Arts Faculty at Waterloo. 
I regularly attend computing conferences where I have presented several papers. I have also been invited to give several seminars and workshops on computing in the Arts by various groups and organizations. ----------------- *Sinkewicz, Robert E. Senior Fellow, Pontifical Institute of Mediaeval Studies, member of the Centre for Computing in the Humanities at the University of Toronto. Principal Interests: the use of relational databases in humanities research, and the development of text databases in Byzantine religious literature. Major Research in Progress: The Greek Index Project, an information access system for all extant Greek manuscripts. By Sept. 1988 we propose to have online a relatively complete listing of all Greek manuscripts as well as manuscript listings for authors of the Late Byzantine Period. IBM SQL/DS is our principal software tool. ----------------- *Sitman, David Computation Centre, Tel Aviv University, Israel I teach courses in the use of computers in language study and I am an advisor on computer use in the humanities. ----------------- *Tompa, Frank Wm. Associate Professor of Philosophy, University of Waterloo, Waterloo, Ontario, Canada N2L 3G1 I am an interested outsider. My fields of research include the development of mathematical logic in the 19th century (which in a way made modern computation possible), and problems confronting cognitive science (i.e. questions concerning the limits of the applicability of our current conception of computation). On the applied side, the University of Waterloo has long been a leader in software development, and in the area of computer application. As a result, we have had ready access to powerful computing resources for many years. I, for instance, have been processing my words since the early '70s (when IBM's ATS was in vogue, and VDTs were a novelty). ----------------- *Winder, Bill [Accents are indicated as follows: \C = caret; \G = grave; \A = acute.] 
As a doctoral candidate at the University of Toronto's French Department, I find my computing activities largely conditioned by my thesis topic: "Maupassant: predictability in narrative". The fundamental axis of this research concerns automatic abstracting: in precisely what way can automatic abstracting techniques be said to fail with literary texts? Maupassant's 310 short stories were chosen as the literary corpus primarily because the format of the genre is computationally manageable on a microcomputer, the plot and style of Maupassant's stories are straightforward, and the number of stories allows for statistically relevant comparisons between pieces. My research on abstracting should offer the basis for a coherent approach to critical model building, particularly with respect to the semantic value of predictability in text and in the critical model itself. This endeavour has led me to Deredec, (Turbo) Prolog, and, more recently, Mprolog. The use of the first of these is presented in CHum's issue on France, where J.-M. Marandin discusses "Segthem", a Deredec automatic abstracting procedure. My interest in Prolog, as an alternative to Deredec, developed out of studies in combinatory logic, natural deduction, and Peirce's existential graphs. In connection with my research in literary computing, I am a teaching assistant for the French Department's graduate computer applications course, and in that capacity have taught word processing and demonstrated packages such as Deredec, BYU concordance, TAT (my own French concordance package), COGS, and MTAS. This recent interest in computing (1985) grew out of a seasoned interest in semiotics (1979). In France, I completed a Ma\Citrise de Lettres Modernes (1982) with the Groupe de S\Aemiotique in Perpignan, and a Diplome d'Etudes Approfondies (1984) with A. J. Greimas's Groupe de Recherche en S\Aemio-linguistique at l'Ecole des Hautes Etudes in Paris.
I am presently a member of the Toronto Semiotic Circle, and served in June 1987 as secretary to the International Summer Institute for Semiotic and Structural Studies, site of a promising encounter between researchers in artificial intelligence, semiotics, and humanities computing. This encounter is in fact indicative of my overall ambition in computing, which is to assess the computational component of semiotic theories, particularly those of L. Hjelmslev and C. S. Peirce. From: CBS%UK.AC.EARN-RELAY::EARN.UTORONTO::LISTSERV 14-SEP-1989 15:21:56.90 Revised List Processor (1.6a) Subject: File: "BIOGRAFY 4" being sent to you Date: Thu, 14 Sep 89 10:19:10 EDT X-Humanist: Vol. 1 Num. 1076 (1076) Autobiographies of HUMANISTs Third Supplement Following are 19 more entries to the collection of autobiographical statements by members of the HUMANIST discussion group. Further additions, corrections, and updates are welcome, to MCCARTY at UTOREPAS.BITNET. W.M. 11 October 1987 ----------------- *Bratley, Paul Departement d'informatique et de r.o., Universite de Montreal, C.P. 6128, Succursale A, MONTREAL, Canada H3C 3J7, (514) 343-7478 I have been involved in computing in the humanities since the early 1960s, when I worked at Edinburgh University on automated mapping of Middle English dialects. Since then I have been involved in projects for syntax recognition by computer and a number of lexicographical applications. With Serge Lusignan I ran for seven years at the University of Montreal a laboratory which helped users with all aspects of computing in the humanities. As a professor of computer science, it is perhaps not surprising that my interests lie at the technical end of the spectrum. I designed, with a variety of graduate students, such programs as Jeudemo (for producing concordances), Compo (for computer typesetting), and Fatras (for fast on-line retrieval of words and phrases), all of which were or are still used internationally in a variety of universities.
My main current research interest involves the design of a program for on-line searching of manuscript catalogues. The idea is to be able to retrieve incipits despite unstable spelling and other such variants in medieval texts. The project, involving partners in Belgium, Morocco and Tunisia, is intended to work at least for Latin, Greek and Arabic manuscripts, and possibly for others as well. ----------------- *Carpenter, David I am an assistant professor of theology at St. Joseph's University in Philadelphia (excuse the typos!) with training primarily in the history of religions. I work on Indian traditions (Hinduism and Buddhism) as well as on Western Medieval material. I have recently been engaged in putting a Sanskrit text into machine-readable form and would like to see what else has been done. ----------------- *Dixon, Gordon Bitnet Editor-in-Chief, Literary and Linguistic Computing, Institute of Advanced Studies, Manchester Polytechnic, Oxford Road, Manchester, M15 6BH U.K. In particular, my interest lies in the publication of good-quality papers in the areas of: Computers applied to literature and language. Computing techniques. Reports on research projects. Hardware and software. CAL and CALL. Word Processing for Humanities. Teaching of computer techniques to language and literature students. Survey papers and reviews. ----------------- *Gilliland, Marshall Department of English, University of Saskatchewan, Saskatoon, Saskatchewan, Canada S7N 0W0 (306) 966-5501 campus, (306) 652-5970 home I'm a professor of English whose literary specialty is American literature, and I also teach expository prose, first-year classes, and utopian literature in English. Thus far, I'm the lone member of my department to use a mainframe computer and to teach writing using a computer.
Most immediately, I'm the faculty member responsible for getting a large computer lab for humanities and social science students in the college, and one of the few faculty promoting the use of computers. I maintain the list ENGLISH on CANADA01. ----------------- *Hamesse, Jacqueline Universite Catholique de Louvain, Chemin d'Aristote, 1, B-1348 LOUVAIN-LA-NEUVE (Belgium) I am a member of the ALLC Committee and Co-ordinator of the organisation of that association's annual Conferences. I am also a Professor at the Universite Catholique de Louvain and President of the Institut d'Etudes Medievales. For twenty years I have worked on the computer-assisted processing of medieval philosophical texts. At the moment I am chiefly studying the possibilities the computer offers for the collation and classification of medieval manuscripts. With Paul Bratley of the Universite de Montreal, I have just launched an international project to build a database of the incipits of medieval manuscripts (Latin, Greek, Hebrew and Arabic). ----------------- *Hubbard, Jamie I teach in the area of Asian Religions at Smith College, focusing on East Asian Buddhism. I am also active in attempting (??!!) to archive Chinese materials on CD-ROM and other sundry projects (IndraNet, a bulletin board/conferencing system for Buddhist Studies, has been around for app. 2 yrs). ----------------- *Hughes, John J. (for other electronic addresses, see bottom of front page of last issue of the "Bits & Bytes Review") 623 Iowa Ave., Whitefish, MT 59937, (406) 862-7280 ----------------- Dept of History, University of York, Heslington, YORK YO1 5DD, U.K. My interests are in the field of early medieval history, specifically Frankish history, with a special interest in Merovingian cemeteries. ----------------- *Jones, Randall L. Humanities Research Center, 3060 JKHB, Brigham Young University Provo, Utah 84602, (Tel.)
(801) 378-3513 I am a Professor of German and the Director of the Humanities Research Center at Brigham Young University. I have been involved with using the computer in language research and instruction since my graduate student days at Princeton, 1964-68. My activities have included the development of language CAI, diagnostic testing with the computer, interactive video (I worked on the German VELVET program), computer-assisted analysis of modern German and English, and the development and use of electronic language corpora. I have worked closely with the developers of WordCruncher (aka BYU Concordance) to make certain that the needs of humanists are properly met (e.g. foreign character sets, substring searches, etc.). In 1985 I organized (with the good assistance of my colleagues in the HRC) the 7th International Conference on Computers and the Humanities, which was held at BYU. I am a member of the Executive Council of the Association for Computers and the Humanities, the Chairman of the Educational Software Evaluation Committee of the Modern Language Association, a member of the Committee on Information and Communication Technology of the Linguistic Society of America, and a member of the Editorial Board of "SYSTEM". I have written articles and given lectures on many aspects of the computer in language research and instruction. ----------------- *Lane, Simon Computing Service, University of Southampton, Highfield, Southampton, England. I am currently employed as a Programmer in the Computing Service at Southampton University, England, and have special responsibility for liaison with the Humanities departments within the University and support of their computing needs. ----------------- *Lessard, Greg I am a linguist (Ph.D. 1983, Laval, in differential linguistics, for a study of formal mechanisms of antonymy in English and French).
I have been teaching in the Department of French Studies at Queen's since 1978 and have been involved in humanities computing for several years now, in a variety of areas: 1) computer-aided analysis of literary texts. In 1986 Agnes Whitfield and I gave a paper at the annual meeting of the "Association canadienne-francaise pour l'avancement des sciences" where we used a computer analysis to compare two novels by Michel Tremblay and Victor-Levy Beaulieu, respectively. Agnes is also in French Studies. 2) production of computer-readable texts. For the past year or so, I have participated in a group project in the Department of French Studies at Queen's which involves the entry into the mainframe of computer-readable texts by means of a Kurzweil data entry machine. 3) concordance production. J.-J. Hamm (of Queen's) and I are working on a concordance of the novel "Armance" by Stendhal. 4) linguistic analysis. I make heavy use of the computer in my work analysing errors in student texts produced in French. 5) annotation. Diego Bastianutti (of Queen's) and I are working in the area of annotation as a teaching tool in the humanities. We gave a paper at this year's Learned Societies where we outlined our research and presented a prototype of an annotation facility based on the word processing program "PC-Write". 6) computer-aided instruction. With a group of colleagues in the languages and in computer science at Queen's, I am working on an intelligent computer-aided instruction system for French, other Romance languages, and eventually a variety of other languages as well. We are in the second year of this multi-year project, funded in part by the Ministry of Colleges and Universities of Ontario. ----------------- *Logan, George M. Professor and Head, Department of English, Queen's University, Kingston, Ontario, Canada K7L 3N6; 613-545-2154 My area of literary specialization is the English Renaissance.
For my research interests in computer applications to literary studies, see the biography of my colleague David Barnard. For 1986-87, I have been chairman of the Steering Group for Humanities Computing of five Ontario universities: McMaster, Queen's, Toronto, Waterloo, and Western Ontario. I am also a member of the steering group of the Ontario Consortium for Computing and the Humanities. ----------------- *Ravin, Yael YAEL(YKTVMH2) I have an M.A. in Teaching English as a Second Language from Columbia University and a Ph.D. in Linguistics from the City University of New York. My Ph.D. thesis is about the semantics of event verbs. I am a member of the Natural Language Processing Group at the Watson Research Center of IBM. My work consists of writing rules in a computer language called PLNLP for the detection of stylistic weaknesses in written documents. I am now beginning research in semantics. This research consists of developing PLNLP rules to investigate the semantic content of word definitions in an online dictionary, in order to resolve syntactic ambiguity. ----------------- *Reimer, Stephen I am an assistant professor of English, using computers extensively both in research and in teaching. My introduction to computer use in the humanities came in the late 70s when I was beginning my dissertation and was faced with an authorship question in a set of medieval texts--I thought that the problem might be resolvable through quantitative stylistics with the help of the computer. Through John Hurd at the Univ. of Toronto, I learned the rudiments of programming in SNOBOL and much about concordancing algorithms; on this basis, I wrote a rather large and sloppy program to "read" any natural language text and generate a substantial number of statistics. Producing the dissertation itself involved me with micro-computers and laser printers.
And when I began teaching after graduation, I was involved in an experiment using Writers' Workbench as an aid in teaching composition. I have, this fall, moved from the U of T to the University of Alberta. Here I have been asked to act as something of a consultant for other English professors who are starting to make use of computers, and I have been assigned to a team with a mandate to establish a small computing centre to be shared by four humanities departments (English, Religious Studies, Philosophy and Classics). Finally, I am embarking on a long-term project which is again concerned with authorship disputes: over the coming years I expect to consume huge numbers of cycles in an effort to sort out the tangled mess of the canon of John Lydgate. ----------------- *Salotti, Paul Oxford University Computing Service, 13, Banbury Road, OXFORD OX2 6NN U.K. Tel. 0865-273249 I work in the Oxford University Computing Service and provide support and consultancy for the application and use of databases (Ingres, IDMS, dBase, etc.) in academic research. ----------------- *Smith, Tony I have recently started work as research assistant to Gordon Neal in the Department of Greek at Manchester University. Our project has a number of aims. Ultimately we hope to program a computer to perform as far as possible the automatic syntactic parsing of Classical Greek. Texts with syntactic tagging (which in the early stages can be performed manually) can then be used for pedagogic purposes, by allowing a student at a computer to ask for help with the morphology and syntax of selected words and sentences. The tagged texts would also be very useful for research purposes, allowing various kinds of statistical analysis to be carried out. The texts will be drawn from the Thesaurus Linguae Graecae database on CD-ROM, which will be accessed by a network of IBM-compatibles.
The system will also offer facilities for searching through the Greek texts similar to those found on the Ibycus Scholarly Computer. ----------------- *Tov, Emmanuel Prof. in the Dept of Bible, Hebrew University, Jerusalem, Israel, Tel. (02)883514 (o), 815714 (h). Together with R.A. Kraft of the U. of Penn. I am the director of the CATSS Project - computer assisted tools for Septuagint studies (for a description of the work, see CATSS volumes 1 and 2). ----------------- *Wolffe, John Temporary Lecturer in History, University of York, England. At the moment my use of computers in my own research is confined largely to humble word-processing, but I have plans during the next academic year to develop some computer-based analysis of the 1851 England and Wales Census of Religious Worship. I am also very interested in wider questions about the use of computers in the humanities, especially as these relate to the development of a coherent defense of the humanities in general, and of history in particular, in the face of the current political and social climate in the UK. ----------------- *Wyman, John C. Library Systems Office, Bird Library, Room B106F, Syracuse Univ. Syracuse, New York 13244-1260 USA, (315) 423-4300/2573 I am the Systems Officer for the Syracuse University Library, called Bird Library, and am in charge of all of our computer and system support for the library. This includes our on-line catalog (SULIRS); access to OCLC for shared bibliographic cataloging information; and our increasing use of microcomputers for staff support. I'm also involved in our on-line access to remote databases, such as Dialog or BRS, for our users and staff. Finally, we have a growing effort of acquiring and providing access to collections of research data for people in the social sciences, called the Research Data System of the Libraries. My interests revolve around providing access to, and usage of, computers for non-computer-type people.
Even, and especially, at the expense of extra programming and systems effort. Too many computer systems today are hard for the casual user to use. My background is Electrical Engineering, Numerical Analysis, Computer User Service, and Library User Service, with many systems designed and programmed by me or my staff. The human interface is the most important aspect of this work. From: CBS%UK.AC.EARN-RELAY::EARN.UTORONTO::LISTSERV 14-SEP-1989 20:16:43.47 Revised List Processor (1.6a) Subject: File: "BIOGRAFY 5" being sent to you Date: Thu, 14 Sep 89 10:20:46 EDT X-Humanist: Vol. 1 Num. 1077 (1077) Autobiographies of HUMANISTs Fourth Supplement Following are 26 additional entries in the collection of autobiographies by members of the HUMANIST discussion group. Additions, corrections, and updates are welcome, to mccarty@utorepas.bitnet. W.M. 15 November 1987 ----------------- *Amsler, Robert Bell Communications Research, Morristown, N.J. Despite the fact that I feel I have almost exclusively a background in the sciences, I find that I am continually working with people from the humanities, and have been doing so for the last 12 or so years. I graduated from college with a B.S. in math and went on to graduate school at NYU's Courant Inst. of Math. Sciences in Greenwich Village. There I changed from a mathematician to a computer scientist--and, even more significantly, to a computational linguist. I just decided one day that it was a lot more fun to see computers printing words than numbers. From NYU I went to the University of Texas at Austin (UT), where I worked with Robert F. Simmons for a number of years. Texas became home for 10 years, and I eventually worked on a variety of humanities computing projects there as the programming manager of the linguistics research center in the HRC (which many of us preferred to think of as the Humanities Research Center even after the University changed the name to the Harry Ransom Center).
At UT I worked on machine-readable dictionaries and eventually did a dissertation entitled ``The Structure of the Merriam-Webster Pocket Dictionary'' in which I proved you can construct taxonomies out of definitions. I also worked on a few other interesting humanities computing projects including providing the programming support (sorting, typesetting and syntax-checking) for Fran Karttunen's Analytical Dictionary of Nahuatl, building a concordance for Sanskrit texts, working on pattern recognition for Incunabula, data organization for a bibliography of literature of the 18th (or was it the 17th, sigh) century, Mayan calendar generation, and in general helping to spearhead an effort in the late 1970s to get the computing center to recognize text as a legitimate use of computing resources on campus. I have an interdisciplinary Ph.D. from UT in Computer Sciences (Computational Linguistics/Artificial Intelligence), Information Science, and Anthropological Linguistics (Ethnosemantics). After school, I went to SRI International in Menlo Park, CA and worked in the AI Center and the Advanced Computer Systems Dept. there for 3 years on a variety of projects and grants involving text, information science, and AI. From SRI I came to my present job at Bell Communications Research in Morristown, NJ in the Artificial Intelligence and Information Science Research Group, where I continue to specialize in working on machine-readable dictionary research (computational lexicology) and in general on finding alternate uses for machine-readable text. I'm a member of AAAI, ACL, AAAS, ACM, DSNA, and IEEE. My long-term interest is in trying to understand what it will mean to us in the future to have all the world's text information accessible to computers, and what the computers will be able to figure out from that information. 
Most recently, my attention has turned to the need to create some standards for the encoding of machine-readable dictionaries and to the data entry of the Century Dictionary. ----------------- *Benson, Jim English Department, Glendon College, York University, Toronto I use the CLOC package developed at the University of Birmingham for research purposes, which include statistical interpretations of collocational output for natural language texts. CLOC also produces concordances, indexes, etc., similar to the OCP. At York, CLOC is also currently being used to produce an old-spelling concordance of Shakespeare. ----------------- *Bevan, Edis 014 Gardiner Building, Open University, Milton Keynes, MK7 6AA, Great Britain. (The Open University is the biggest university in Britain in terms of student numbers. Instruction is at a distance by means of broadcast materials, written texts and some local tuition. The University has on its undergraduate programme more students with disabilities than all the other higher education institutions in Britain combined.) I intend to set up a discussion group which I hope will be as international as HUMANIST. This will probably be called ABLENET, and I am discussing with Andy Boddington how we could operate as a kind of pseudo-LISTSERV. Participating in HUMANIST would give me some insights into how such a system could operate effectively. I believe good information exchange is as much a matter of developing communicative competence amongst the users as it is of manipulating the technologies. I am told that HUMANIST is an example of good practice in this matter. I also believe that HUMANIST debates could be most relevant to my general research into information and empowerment. It is not just a matter of applying modern technology to the specific needs of individual disabled people, great though the benefits of this can be.
The information technology revolution is creating a whole new world, and it is largely being created for the able-bodied, with some afterthoughts about possible benefits for people with disabilities. Also, there is no reason why disabled people of high academic capability should not be interested in the humanities and in computing in the humanities. I intend to prepare a directory of resources for disabled people who want to initiate or carry through research projects for themselves. If they become interested in the humanities then HUMANIST could be a relevant resource for them. Furthermore, since I want to make this a truly international resource, I need to look at the problems of information exchange in languages other than English. This may be relevant to your concerns with linguistic computing. ----------------- *Butler, Terry I am active in supporting humanities computing at the University of Alberta. I am in the University Computing Systems department. We have the OCP program on our mainframe, TextPack (from Germany) recently installed, and a number of other utilities and programs being used by scholars. We have considerable experience in publishing and database publishing (I am in the Information Systems unit). I have a master's degree in English Literature from this university. ----------------- *Cerny, Jim University Computing, University of New Hampshire Kingsbury Hall, Durham, NH 03824. (603) 862-3058 I am the site INFOREP for BITNET purposes and part of the academic support staff in the computer center. We have only been part of BITNET since mid-April 1987, so I am working hard to find out what is "out there" and to let our user community know about it. I am especially working hard to show these possibilities to faculty from non-traditional computing backgrounds, such as in the humanities. I am publisher of our campus computer newsletter, ON-LINE, which we produce with Macintosh desktop publishing tools.
We are always interested in exchanging newsletter subscriptions with other newsletter publishers/editors. As for myself, I am a wayward geographer, Ph.D. from Clark Univ., cartography as a specialization, and I teach one credit course (adjunct) per year. ----------------- *Chapelle, Carol 203 Ross Hall, Iowa State University, Ames, IA 50011. (515) 294-7274 I am an assistant professor of ESL/Applied Linguistics at Iowa State University in Ames, Iowa. I am interested in the application of computers for teaching English and research on second language acquisition. My papers on these topics have appeared in TESOL Quarterly, Language Learning, CALICO, and SYSTEM. My current work includes writing courseware for ESL instruction and research, and developing a "computers in linguistics/humanities" course for graduate students at ISU. ----------------- *Cooper, John I am working on a UK government sponsored project under the Computer Teaching Initiative umbrella. The project is headed by Susan Hockey, and the third member is Jo Freedman. We are developing ways in which texts in several languages and scripts can be accessed by university members (undergraduates initially, but we hope that graduates and researchers will be able to make use of the facilities) directly on micro screens connected up with the university mainframe computers. They will be able to see their texts in the original scripts, and then be able to use concordance programs such as OCP and other text-oriented software to perform searches, etc., of their material. At present we are working with Middle English, Italian, Latin, Greek, and Arabic, but we are interested in incorporating any scripts and languages for which there is a demand in the university. I am working particularly on the textual side of the project, and we are using texts from the Oxford Text Archive to begin with.
My particular interest is in Arabic and other languages written in the Arabic script, and I am at present working on a thesis in the field of Islamic jurisprudence. ----------------- *Feld, Michael I currently teach Philosophy at University College, University of Manitoba, Winnipeg, MB R3T 2M8 (204) 474-9136. My use of computers is a newborn thing: primarily, as yet, to access databases and to communicate with other scholars in my field via e-mail. My research interests center on moral epistemology and applied ethics. ----------------- *Friedman, Edward A. Stevens Institute of Technology, Hoboken, New Jersey 07030 USA. 201-420-5188 I am currently a Professor of Management at Stevens Institute of Technology in Hoboken, NJ 07030, USA. Previously, as Dean of the College, I had administrative responsibility for the development of the computer-intensive environment at Stevens. Every student had to purchase a computer (beginning in 1983). The first computer was a DEC Professional 350 and now it is an AT&T 6310. A great deal of curriculum development has taken place at Stevens around this program. We are currently engaged in a massive networking effort which will place more than 2,000 computers on a 10 Megabit/sec Ethernet with interprocess communications functionality. My interest is in the uses of information technology in society and in the impact of information technology on liberal arts students. I recently had a grant from the Alfred P. Sloan Foundation to complete a text on information technology for liberal arts students that will be published by MIT Press. I currently have a grant from the Department of Higher Education of the State of New Jersey to implement an undergraduate course using full-text search techniques. We are placing approximately ten volumes related to Galileo into machine-readable form. They include writings of Galileo, biographical material and commentaries.
This database will be used with Micro-ARRAS software in a history of science course on Galileo. I am working with Professor James E. McClellan of the Stevens Humanities Department and with Professor Arthur Shapiro of the Stevens Management Department on this project. I would be interested in hearing from individuals who have suggestions for experiments or observations that we might consider in this pilot project when it is implemented in the Spring Semester (Feb - May 1988). I am also a founder and Co-Editor of a journal entitled Machine-Mediated Learning, which is published by Taylor & Francis of London. The Journal is interested in in-depth articles that would be helpful to a wide audience of scholars and decision makers. Anyone wishing to see a sample copy should contact me. ----------------- *Gauthier, Robert Sciences du langage, UNIVERSITE TOULOUSE-LE MIRAIL (61 81 35 49), France I am at present head of the "Sciences du Langage" Department at the "Universite Toulouse le-Mirail". I spent twenty years outside France, mainly in Africa, where I taught linguistics and semiotics. I started as a phonetician with a these de 3eme cycle on teaching intonation to students learning French (FLE, equivalent to TEFL). I worked for various international organisations (UNESCO, USAID, AUPELF) and the French Cooperation. I was then mainly interested in audio-visual methods of teaching sundry subjects. I got involved in research on local folktales and wrote a few articles on the subject. I have been using computers for 10 years as a means of research, filing, word-processing, and intellectual enjoyment. I learnt and used a few languages (Fortran, Basic, Logo, Prolog...) and worked on different computers. After a these d'etat on the didactic use of pictures in growing up Africa, I came home to the Linguistics department of Toulouse university.
I teach Computers or Semiotics at "Maitrise" level and I have a "Seminaire de DEA" on communication and computed meaning (an unsatisfactory translation of the ambiguous French expression: Calcul du sens). The whole university shows a keen interest in computers and we have to fill in lots of forms to give shape to projects which aim to develop the teaching and use of computers in the Humanities. Unfortunately, local problems prevent the university from having an efficient program to give students some kind of competence in dealing with computers. In fact, nobody seems aware of the specific problem posed by our literary students and their confrontation with courses given by specialists. As for what should be taught and how, this is either taboo or an irrelevant impropriety. In July 87, at the Colloque d'Albi, I presented a paper which tried to promote a way to teach Basic to students with a literary background, and I will try to perfect the method this year with the students attending my course on Basic and the computer. I have just completed a stand-alone application that helps make, merge, sort and edit bibliographies. It works on the Macintosh and can be ported to the IBM PC (it was compiled with ZBasic). I am interested in hearing from persons using ExperCommon Lisp on the Macintosh for an exchange of views. ----------------- *Graham, David Department of French and Spanish, Memorial University of Newfoundland St. John's, NF CANADA A1B 3X9 (709) 737-7636 I was trained in 17th century French literature but have in the last few years become more interested in the history of emblematics in France. To this end, I am now investigating the feasibility of a computerized visual database of French emblems, and am currently exploring the use of HyperCard on a Macintosh Plus to work on this.
In addition, for the last few months I have been attempting to encourage the formation of a distribution list for French language and literature specialists in Canada along the lines of ENGLISH@CANADA01 (though I understand it has not been a complete success...). Consequently, I am very interested in the use of e-mail by scholars and teachers in the humanities generally. We are at present looking into the use of computers for teaching FSL here at Memorial, and so I would be interested in exchanges of views and material on that subject as well. I am not, however, personally interested in parsers, etc., though I have colleagues here who are. ----------------- *Hawthorne, Doug Director, Project Eli, Yale Computer Center, 175 Whitney Ave. New Haven, CT 06520, (203) 432-6680 My office is responsible in broad terms for providing the resources to support instructional computing at Yale. In addition to managing the public clusters of microcomputers available to students, I and my staff assist faculty who are searching for software to use for instruction or who are actively developing such software. In order to fulfill this role we attempt to stay abreast of recent developments and to funnel appropriate information to interested faculty at Yale. While not focussed exclusively on the humanities, we do give considerable attention to the humanists because they do not seem to be as "connected" to matters concerning computing as the scientists. As but one example, I have been the principal organizer of a one-day conference titled "Beyond Word Processing: A Symposium on the Use of Computers in the Humanities" which will be held tomorrow (Nov. 7). I look forward to participating in the network. ----------------- *Hofland, Knut The Norwegian Computing Centre for the Humanities P.O.
Box 53, University, N-5027 Bergen, Norway Tel: +47 5 212954/5/6 I am a senior consultant at the Norwegian Computing Centre for the Humanities in Bergen (financed by the Norwegian Research Council for Science and Arts), where I have been working since 1975. The Centre is located at the University of Bergen. I have worked with concordancing, lemmatizing and tagging of million-word texts like the Brown Corpus, the LOB Corpus, and Ibsen's poems and plays. I have also worked with publication of material via microfiche, typesetters and laserwriters. We are a clearing house for ICAME (International Computer Archive of Modern English), a collection of different text corpora, and have recently set up a file server on Bitnet for distribution of information and programs (FAFSRV at NOBERGEN, which can take orders via msgs or mail). At the moment we are investigating the use of CD-ROM and WORM disks for distribution of material. We have worked for several years with computer applications in museums, producing printed catalogues and databases both on mainframes and PCs. ----------------- *Hogenraad, Robert Faculte de Psychologie et des Sciences de l'Education, Universite Catholique de Louvain 20, Voie du Roman Pays B-1348 Louvain-la-Neuve (Belgium) For some time, I have been active here in the field of computer-assisted content analysis (limited to mainframe computers, alas, for financial reasons). For example, we recently issued a User's Manual--in French--for our recent PROTAN system (PROTAN for PROTocol ANalyzer). We intend some more work on our system in two directions, i.e., developing a sequential/narrative approach to content analysis, and developing new dictionaries, in French, in addition to the ones we already work with. ----------------- *Hughes, John J. Bits & Bytes Computer Resources, 623 Iowa Ave., Whitefish, MT 59937; telephones: (406) 862-7280; (406) 862-3927. Editor/Publisher of the "Bits & Bytes Review."
After attending Vanderbilt University (1965-1969, philosophy), Westminster Theological Seminary (1970-1973, philosophical theology), and Cambridge University (1973-1977, biblical studies), I taught in the Religious Studies Department at Westmont College in Santa Barbara, California (1977-1982). During 1980-1981, while teaching third-semester Greek at Westmont College, I attempted to use Westmont's Prime 1 to run GRAMCORD, a program that concords grammatical constructions in a morphologically and syntactically tagged version of the Greek New Testament. I had no idea how to use the Prime 1, and no one at the college had ever used GRAMCORD. Several frustrating visits to the computer lab neither quenched my desire to use the program nor dispelled my elitist belief that if students (some of whom, after reading their term papers, I deemed barely literate) could use the Prime 1 productively, then so could I. (The students, of course, immediately saw that I was as illiterate a would-be computer user as ever fumbled at a keyboard or read uncomprehendingly through jargon-filled manuals.) My unspoken snobbery was not soon rewarded. After several spectacular and dismal failures (including catching a high-speed line-printer in an endless loop), I welcomed--indeed, solicited--the assistance of one and all, "literate" or not. After a good deal of help, my class and I were able to use GRAMCORD. Because of the system software or the way the program was installed or both, however, users had to wait 24 hours before the results of GRAMCORD operations were available. That delay did little to encourage regular use of the program, though it did illustrate the difference between batch and interactive processing.
More recently, after three and a half years of research and writing, I have just completed "Bits, Bytes, and Biblical Studies." In October 1986, while researching and writing that book, I started the "Bits & Bytes Review," a review-oriented newsletter for academic and humanistic computing. This publication reviews microcomputer products in considerable detail, from the perspective of humanists, and in terms of how the products can enhance research and increase productivity. The newsletter appears nine times a year and is available to members of the Association for Computing in the Humanities at reduced rates. (Free sample copies are available from the publisher.) I am a member of the Association for Computing in the Humanities and a contributing editor to "The Electronic Scholar's Resource Guide" (edited by Joseph Raben, Oryx Press, forthcoming). During the summer of 1988, I will teach an introductory-level course on academic word processing, desktop publishing, and text-retrieval programs at the University of Leuven through the Penn-Leuven Summer Institute. I am interested in using available electronic resources and tools to study the Hebrew Scriptures, the Septuagint, and the Greek New Testament. ----------------- *Julien, Jacques I am an assistant professor in the Department of French & Spanish at the University of Saskatchewan, in Saskatoon. I teach language classes and French-Canadian literature and civilization. My field of research is French-Canadian popular song. I will have my Ph.D. thesis published in Montreal next November. My subject was the popular singer Robert Charlebois, and I received my degree from the University of Sherbrooke in 1983. I am working on an IBM PC/XT compatible that can access the mainframe (VMS) through Kermit. Nota Bene is the word processor I use most often. I am planning to use AskSam, by Seaside Software, a Text Base Management System, and SATO, from UQAM.
I may say that my research is based on computer assistance, as is my instruction. For example, I am very much interested in the software Greg Lessard is working on for interactive writing in French. Keywords that can define my work and my interests would be: French-Canadian literature and civilization, semiotics, sociology, CAI of French, stylistic analysis, and Text Base Management. ----------------- *Kenner, Hugh I am Andrew W. Mellon Professor of the Humanities (English) at Johns Hopkins. I co-authored the "Travesty" program in the November '86 BYTE. With my students, I do word-analysis of Joyce's Ulysses, using copies of the master tapes for the Gabler edition. ----------------- *Lancashire, Ian Centre for Computing in the Humanities, Robarts Library, 14th Floor, University of Toronto, Toronto, Ontario M5S 1A5; (416) 978-8656. I am a Professor of English who became interested in applying database and text-editing programs to bibliographical indexes for pre-1642 British records of drama and minstrelsy. Somewhat earlier I had done concording for an edition of two early Tudor plays. These in turn led me in 1983 to offer a graduate course introducing doctoral students in English to research computing; and to help, my department offered to publish a textbook summarizing documentation and collecting scattered information. With the support of like-minded colleagues, especially John Hurd and Russ Wooldridge, I urged the university to set up a natural-language-processing facility. The Vice-President of Research obliged by doing so and giving us a full-time programmer at Computing Services. I worked with him on a collection of text utilities called MTAS, which we developed on an IBM PC-AT given by IBM Canada Ltd. Then we organized a conference on humanities computing at Toronto in April 1986, and a month later IBM Canada and the university signed a joint partnership to set up a Centre for Computing in the Humanities here.
Four laboratories and a staff of five later, I am still a director who enjoys every hour of the extraordinary experience of leading people where they want to go. One of them, the creator of HUMANIST, is a gentleman scholar who has worked with me since the mid-seventies and whose talents are fully revealed in the Toronto centre. My own research? I co-edit The Humanities Computing Yearbook, am interested in distributional statistical analysis of text (content analysis with pictures), and am working with Alistair Fox and Greg Waite of the University of Otago (New Zealand) and George Rigg of Medieval Studies at Toronto on an English Renaissance textbase, with emphasis on the dictionaries published at that time. I have given a fair number of well-meaning talks about the importance of humanities computing, a few of which have been published. I am optimistic that eventually some serious scholarship will come of all this chatter. My wife is a professor of English too, and we have three children, one cat, and five microcomputers between us. ----------------- *Martindale, Colin Dept. of Psychology, Univ. of Maine, Orono, ME 04469 I guess that the main way that I support computing in the humanities is by doing it. I have been working in the area of computerized content analysis for about 20 years. I have constructed several programs and dictionaries that I have used mainly to test my theory of literary evolution, originally described in my book, Romantic Progression (1975). More recent publications are in CHum (1984) and Poetics (1978, 1986). I have tried to convince--with some success--colleagues in the humanities to use quantitative techniques and computers. With more success, I have interested grad students in psychology in using computerized content analysis to study literature and music. ----------------- *Miller, Stephen External Adviser, Computing in the Arts, Oxford University Computing Service, 13, Banbury Road, Oxford. OX2 6NN.
0865-273266 I would like to join HUMANIST. My role in the computing service here is mainly to handle enquiries about computer applications in the humanities from users outside Oxford, but also to provide an internal service where I can be of assistance. ----------------- *Nash, David MIT Center for Cognitive Science, 20B-225 MIT, Cambridge MA 02139 tel. (617) 253-7355 (until Jan. 1988) I am involved in two projects which link computers, linguists, and word lists (and also text archives), namely the Warlpiri Dictionary Project (at the Center for Cognitive Science, Massachusetts Institute of Technology, and Warlpiri schools in central Australia), and the National Lexicography Project at AIAS (Australian Institute of Aboriginal Studies). The latter is a clearinghouse for Australian language dictionaries and word lists on computer media, recently begun, and funded until March 1989. Contact AIAS, GPO Box 553, Canberra ACT 2601, Australia; or the address below until next January. At the Center for Cognitive Science we use a DEC microVax, and Gnu Emacs and (La)TeX. We also use CP/M machines, and a Macintosh SE at AIAS, and have access to larger machines such as a Vax for data transfer. My training and interests are in linguistics and Australian languages. ----------------- *O'Cathasaigh, Sean French Department, The University, Southampton SO9 5NH England I work in the French Department at Southampton, where I use microcomputers for teaching grammar and the mainframe for generating concordances of French classical texts. I'd be very interested in hearing from anyone who has used Deredec or its associated packages. I've thought of buying them for my Department, but have found it very difficult to get information from the authors. So a user report would be very welcome. Please contact: ----------------- *O'Flaherty, Brendan My interest in humanities computing is primarily in the archaeological field.
I did my undergraduate and postgraduate degrees at University College, Cork and am currently Research Fellow in the Department of Archaeology in Southampton (address: The University, Southampton SO9 5NH). My interests in computing include computer-aided learning, typesetting, and databases. ----------------- *Paff, Toby C.I.T., 87 Prospect St., Princeton University, Princeton, New Jersey 08544 609-452-6068 I support, along with Rich Giordano, almost every aspect of computing in the humanities, where humanities includes the broadest number of fields possible. This means, in particular, text processing, database work as it relates to the humanities, text analysis, and linguistic analysis. I work a good deal with Hebrew and Arabic fonts, and with faculty and students who work in that area. Occasional work crops up in Chinese, but that comes and goes in waves. I am a SPIRES programmer and support things like the university serials list. My background is, in fact, in library work, though I support almost nothing bibliographical at this point. Given the generally cooperative atmosphere at Princeton, I work with micros, minis and mainframes... CMS and UNIX both. ----------------- *Ruus, Lane Head, UBC Data Library, Data Library, University of British Columbia 6356 Agricultural Road, Vancouver, B.C. V6T 1W5 (604) 228-5587 Academic background: anthropology, librarianship What I do: see the following. UBC DATA LIBRARY AS A TEXT ARCHIVE The UBC Data Library is jointly operated by the UBC Computing Centre and Library. Its basic functions are to acquire and maintain computer-readable non-bibliographic files, in all necessary disciplines, to support the research and teaching activities of the University, to provide the necessary user services, and to act as an archive for original research data that may be used for secondary analysis by others.
The Data Library is committed to three basic principles: (a) expensive data files should not be duplicated among a variety of departments on campus, but should be acquired centrally and made available to all; (b) original data resulting from research that might be subject to secondary analysis in the future should be preserved for posterity, as publications in other physical media are. They should therefore be deposited in data archives, which have the professional expertise to preserve this fragile medium for future analysis; and (c) one of the basic tenets of academic research is the citation of all sources used, so as to facilitate the peer review process. Data files should therefore be cited in publications, as all other media of publication are as a matter of course. Through such acknowledgement, creators of data will be encouraged to make their data available for secondary analysis. The Data Library's collection contains over 4600 files. Because of the size of the collection, all data are stored on magnetic tapes. Files vary in size from ten card images to a hundred million bytes or more. Subject matter varies from the Old Testament in Hebrew to images from the polar-orbiting NOAA satellites. Data files are ordered from other data archives/libraries on request (and as our budget allows), or are deposited by individual researchers. At present, the Data Library has textual data files in the following broad subject areas: American fiction, American poetry, Anglo-Saxon poetry, Bible (New Testament, Old Testament), Canadian poetry, English drama, English fiction, English poetry, French diaries, French language (word frequency, literature, poetry), German poetry, Greek (drama, language, literature, poetry), Hebrew literature, Indians of North America - British Columbia - legends, Irish fiction (English), Latin literature. All files are accessible at all times that the UBC G-system mainframe is operating in attended mode.
Text files are generally maintained in the format in which they are received from the distributor. Generally this allows the researcher maximum flexibility to choose his/her favourite analysis package (e.g. OCP), download to a microcomputer, etc. Occasionally, the Data Library will compile an index to the contents of a large, complex file, or otherwise compile a computer-readable codebook. The Data Library maintains a catalogue of its collection under the SPIRES database management system, on the UBC G-system mainframe. Each record in the database contains information as to the substantive content, size, format, and availability of data files. It also includes information as to where documentation describing the files is to be found (whether in on-line disc files or printed), and the information needed to mount the tape containing the file. The Data Library also maintains, on the G-system, an interactive documentation system. The system includes documents introducing the Data Library and how to mount Data Library-owned tapes, as well as documents describing how to compile a bibliographic citation for a data file, how to deposit data files in the Data Library, etc. ----------------- *Tompkins, Kenneth ARHU, Stockton State College, Pomona, NJ 08240 (609) 652-4497 (work) or (609) 646-5452 (home) Fundamentally, I support computing in the Humanities by witnessing. In 1981, I set up the college Microlab so that (1) there would be a place for the whole college to learn about micros and what can be done with them; and (2) non-information-science students could have a place to work. Since then, I have held yearly faculty workshops, set up over 200 computers across the campus, designed an Electronic Publishing track in the Literature Program (English Dept.), set up a college BBS, and done anything I could to make sure my colleagues have a chance to use computers in their teaching and research.
I did teach a course called Computers and the Humanities which was not an unqualified success. Oh yes, I built my first micro in 1975. My role, then, is to witness, persuade, pound on tables, cajole, and to make myself heard by busy, somewhat uncaring administrators and by overworked and fearful colleagues. I am a Medievalist and I have been teaching at undergraduate colleges since 1965. I came to Stockton as one of a five-person team to start the college; after 15 months of work designing the curriculum and hiring 55 faculty, the college opened in 1971. I was Dean of General Studies until 1973, when I returned to full-time teaching. Since 1978 I have been spending my summers at Wharram Percy in the Yorkshire Wolds. Wharram Percy is a Deserted Medieval Village archaeological dig. I am now the Chief Guide; last summer I led tours for over 1100 visitors. I have co-authored a small booklet on Deserted Villages. I am very interested in how computers can be applied to archaeology. My other projects involve graphic input to computers. At present, I have built digitizing boards and hope to begin digitizing Celtic art so that these complex pictures can be broken into constituent parts. I am also interested in graphic reconstruction of medieval buildings. From: CBS%UK.AC.EARN-RELAY::EARN.UTORONTO::LISTSERV 14-SEP-1989 15:24:15.35 Revised List Processor (1.6a) Subject: File: "BIOGRAFY 6" being sent to you Date: Thu, 14 Sep 89 10:21:00 EDT X-Humanist: Vol. 1 Num. 1078 (1078) Autobiographies of HUMANISTs Fifth Supplement Following are 23 additional entries to the collection of autobiographical statements by members of the HUMANIST discussion group. Further additions, corrections, and updates are welcome, to mccarty@utorepas.bitnet. W.M.
16 December 1987 ----------------- *Atwell, Eric Steven Centre for Computer Analysis of Language and Speech, AI Division, School of Computer Studies, Leeds University, Leeds LS2 9JT; +44 532 431751 ext 6 I am in a Computer Studies School, but specialise in linguistic and literary computing, and applications in Religious Education in schools. I would particularly like to liaise with other researchers working in similar areas. ----------------- *Benson, Tom {akgua,allegra,ihnp4,cbosgd}!psuvax1!psuvm.bitnet!t3b (UUCP) t3b%psuvm.bitnet@wiscvm.arpa (ARPA) Department of Speech Communication, The Pennsylvania State University 227 Sparks Building, University Park, PA 16802; 814-238-5277 I am a Professor of Speech Communication at Penn State University, currently serving as editor of THE QUARTERLY JOURNAL OF SPEECH. In addition, I edit the electronic journal CRTNET (Communication Research and Theory Network). ----------------- *CETEDOC (CENTRE DE TRAITEMENT ELECTRONIQUE DES DOCUMENTS) CETEDOC, LLN, BELGIUM THE CETEDOC (CENTRE DE TRAITEMENT ELECTRONIQUE DES DOCUMENTS) IS AN INSTITUTION OF THE CATHOLIC UNIVERSITY OF LOUVAIN AT LOUVAIN-LA-NEUVE, BELGIUM. ITS DIRECTOR IS PROF. PAUL TOMBEUR. ----------------- *Chadwick, Tony Department of French & Spanish, Memorial University of Newfoundland St. John's, A1B 3X9; (709) 737-8572 At the moment I have two interests in computing: one is the use of computers in composition classes for second-language learners, the second in computerized bibliographies. I have an M.A. in French from McMaster and have been teaching at Memorial University since 1967. Outside computers, my research interests lie in Twentieth Century French Literature. ----------------- *Coombs, James H. Institute for Research in Information and Scholarship, Brown University Box 1946, Providence, RI 02912 I have a Ph.D. in English (Wordsworth and Milton: Prophet-Poets) and an M.A. in Linguistics, both from Brown University.
I have been Mellon Postdoctoral Fellow in English and am about to become Software Engineer, Research, Institute for Research in Information and Scholarship (IRIS). I have co-edited an edition of letters (A Pre-Raphaelite Friendship, UMI Research Press) and have written on allusion and implicature (Poetics, 1985; Brown Working Papers in Linguistics). Any day now, the November Communications of the ACM will appear with an article on "Markup Systems and the Future of Scholarly Text Processing," written with Allen H. Renear and Steven J. DeRose. I developed the English Disk on the Brown University mainframe, which provides various utilities for humanists, primarily for word processing and for staying sane in CMS. I wrote a Bibliography Management System for Scholars (BMSS; 1985) and then an Information Management System for Scholars (IMSS; 1986). Both are in PL/I and may best be considered "aberrant prototypes," used a little more than necessary for research but never commercialized. I am currently working on a system with similar functionality for the IBM PC. Last year, I developed a "comparative concordance" for the multiple editions of Wordsworth's Prelude. I am delayed in that by the lack of the final volume of Cornell's fine editions. A preliminary paper will appear in the working papers of Brown's Computing in the Humanities User's Group (CHUG); a full article will be submitted in January, probably to CHUM. I learned computational linguistics from Prof. Henry Kucera, Nick DeRose, and Andy Mackie. Richard Ristow taught me software engineering management or, more accurately, teaches me more every time I talk to him. I worked on the spelling corrector, tuning algorithms. I worked on the design of the grammar corrector, designed the rule structures, and developed the rules with Dr. Carol Singley. Then I started with Dr. Phil Shinn's Binary Parser and developed a language independent N-ary Parser (NAP). 
NAP reads phrase structure rules as well as streams of tagged words (see DeRose's article in Computational Linguistics for information on the disambiguation) and generates a parse tree, suitable for generalized pattern matching. Finally, at IRIS, I will be developing online dictionary access from our hypermedia system: Intermedia (affix stripping, unflection, definition, parsing, etc.). In addition, we are working on a unified system for accessing multiple databases, including CD-ROM as well as remote computers. ----------------- *Dawson, John L. University of Cambridge, Literary and Linguistic Computing Centre Sidgwick Avenue, Cambridge CB3 9DA England; (0223) 335029 I have been in charge of the Literary and Linguistic Computing Centre of Cambridge University since 1974, and now hold the post of Assistant Director of Research there. The LLCC acts as a service bureau for all types of humanities computing, including data preparation, and extends to the areas of non-scientific computing done by members of science and social science faculties. Much of our work remains in the provision of concordances to various texts in a huge range of languages, either prepared by our staff, by the user, or by some external body (e.g. TLG, Toronto Corpus of Old English, etc.) Some statistical analysis is undertaken, as required by the users. Recently, we have begun preparing master pages for publication using a LaserWriter, and several books have been printed by this means. My background is that of a mathematics graduate with a Diploma in Computer Science (both from Cambridge). I am an Honorary Member of ALLC, having been its Secretary for six years, and a member of the Association for History and Computing. My present research (though I don't have much time to do it) lies in the comparison of novels with their translations in other languages. 
At the moment I am working on Stendhal's "Le Rouge et le Noir" in French and English, and on Jane Austen's "Northanger Abbey" in English and French. I have contributed several papers at ALLC and ACH conferences, and published in the ALLC Journal (now Literary & Linguistic Computing) and in CHum. ----------------- *Giordano, Richard I am a new humanities specialist at Princeton University Computer Center (Computing and Information Technology). I come to Princeton from Columbia University, where I was a Systems Analyst in the Libraries for about six years. I am just finishing my PhD dissertation in American history at Columbia as well. ----------------- *Johnson, Christopher Language Research Center, Room 345 Modern Languages, University of Arizona Tucson, AZ 85702; (602) 621-1615 I am currently the Director of the Language Research Center at the University of Arizona. Masters in Educational Media, University of Arizona; Ph.D. in Secondary Education (Minor in Instructional Technology), UA. I have worked in the area of computer-based instruction since 1976. I gained most of my experience on the PLATO system here at the University and as a consultant to Control Data Corp. Two years ago I moved to the Faculty of Humanities to create the Language Research Center, a support facility for our graduate students, staff, and faculty. My personal research interests are in the areas of individual learning styles, critical thinking skills, middle-level education, and testing as they apply to computer-based education. The research interests of my faculty range from text analysis to word processing to research into the use of the computer as an instructional tool. ----------------- *Johansson, Stig Dept of English, Univ of Oslo, P.O. Box 1003, Blindern, N-0315 Oslo 3, Norway. Tel: 456932 (Oslo). Professor of English Language, Univ of Oslo.
Relevant research ----------------- Academic Computing Services, 215 Machinery Hall, Syracuse University Syracuse, New York 13244; 315/423-3998 I am Associate Director for Research Computing at Syracuse University and am interested in sponsoring a seminar series next spring focusing on computing issues in the humanities. I hope that this will lead to hiring a full-time staff person to provide user support services for humanities computing. ----------------- *Langendoen, D. Terence Linguistics Program, CUNY Graduate Center, 33 West 42nd Street, New York, NY 10036-8099 USA; 212-790-4574 (soon to change) I am a theoretical linguist, interested in parsing and in computational linguistics generally. I have also worked on the problem of making sophisticated text-editing tools available for the teaching of writing. I am currently Secretary-Treasurer of the Linguistic Society of America, and will continue to serve until the end of calendar year 1988. I have also agreed to serve on two working committees on the ACH/ALLC/ACL project on standards for text encoding, as a result of the conference held at Vassar in mid-November 1987. ----------------- *Molyneaux, Brian Department of Archaeology, University of Southampton, England. I am at present conducting postgraduate research in art and ideology and its relation to material culture. I am also a Field Associate at the Royal Ontario Museum, Department of New World Archaeology, specialising in rock art research. I obtained a BA (Hons) in English Literature, a BA (Hon) in Anthropology, and an MA in Art and Archaeology at Trent University, Peterborough, Ontario. My research interest in computing in the Humanities includes the analysis of texts and art works within the context of social relations. ----------------- *Olofsson, Ake I am at the Department of Psychology, University of Umea, in the north of Sweden. 
Part of my work at the department is helping people to learn how to use our computer (VAX and the Swedish university Decnet) and International mail (Bitnet). We are four system-managers at the department and have about 40 ordinary users, running word-processing, statistics and Mail programs. ----------------- *ORVIK, TONE POST OFFICE BOX 1822, KINGSTON, ON K7L 5J6; 613 - 389 - 6092 WORKING ON BIBLE RESEARCH WITH AFFILIATION TO QUEEN'S UNIVERSITY'S DEPT. OF RELIGIOUS STUDIES; CREATING CONCORDANCE OF SYMBOLOGY. HAVE WORKED AS A RESEARCHER, TEACHER, AND WRITER, IN EUROPE AND CANADA; ESPECIALLY ON VARIOUS ASPECTS OF BIBLE AND COMPARATIVE RELIGION. INTERESTED IN CONTACT WITH NETWORK USERS WITH SAME/SIMILAR INTEREST OF RESEARCH. ----------------- *Potter, Rosanne G. Department of English, Iowa State University, Ross Hall 203, (515) 294-2180 (Main Office); (515) 294-4617 (My office) I am a literary critic; I use the mainframe computer for the analysis of literary texts. I have also designed a major formatting bibliographic package, BIBOUT, in wide use at Iowa State University, also installed at Princeton and Harvard. I do not program, rather I work with very high level programming specialists, statisticians, and systems analysts here to design the applications that I want for my literary critical purposes. I am editing a book on Literary Computing and Literary Criticism containing essays by Richard Bailey, Don Ross, Jr., John Smith, Paul Fortier, C. Nancy Ide, Ruth Sabol, myself and others. I've been on the board of ACH, have been invited to serve on the CHum editorial board. ----------------- *Renear, Allen H. My original academic discipline is philosophy (logic, epistemology, history), and though I try to keep that up (and expect my Ph.D. this coming June) I've spent much of the last 7 years in academic computing, particularly humanities support. 
I am currently on the Computer Center staff here at Brown as a specialist in text processing, typesetting, and humanities computing. I've had quite a bit of practical experience designing, managing, and consulting on large scholarly publication projects, and my major research interests are similarly in the general theory of text representation and strategies for text-based computing. I am a strong advocate of the importance of SGML for all computing that involves text; my views on this are presented in the Coombs, Renear, DeRose article on Markup Systems in the November 1987 *Communications of the ACM*. Other topics of interest to me are structure-oriented editing, hypertext, manuscript criticism, and specialized tools for analytic philosophers. My research in philosophy is mostly in epistemic logic (similar to what AI folks call "knowledge representation"); it has some surprising connections with emerging theories of text structure. I am a contact person for Brown's very active Computing in the Humanities User's Group (CHUG). ----------------- *Richardson, John Associate Professor, University of California (Los Angeles), GSLIS; (213) 825-4352 One of my interests is analytical bibliography, the description of printed books. At present I am intrigued with the idea that we can describe various component parts of books, notably title pages, paper, and typefaces, but the major psycho-physical element, ink, is not described. Obviously this problem involves humanistic work but also a fair degree of sophistication with ink technology. I would be interested in talking with or corresponding with anyone on this topic...
----------------- *Taylor, Philip Royal Holloway & Bedford New College; University of London; U.K.; (+44) 0784 34455 Ext: 3172 Although not primarily concerned with the humanities (I am principal systems programmer at RHBNC), I am frequently involved in humanities projects, particularly in the areas of type-setting (TeX), multi-lingual text processing, and natural language analysis, among others. ----------------- *Whitelam, Keith W. Dept. of Religious Studies, University of Stirling, Stirling FK9 4LA Scotland; Tel. 0786 3171 ext. 2491 I have been lecturer in Religious Studies at Stirling since 1978 with prime responsibility for Hebrew Bible/Old Testament. My research interests are mainly aimed at exploring new approaches to the study of early Israelite/Palestinian history in an interdisciplinary context, i.e. drawing upon social history, anthropology, archaeology, historical demography, etc. I have been constructing a database of Palestinian archaeological sites, using software written by the Computing Science department, in order to analyse settlement patterns, site hierarchies, demography, etc. The department of Environmental Science has recently purchased Laser Scan and offered me access to the facilities. This will enable me to display settlement patterns, sites, etc. in map form for analysis and comparison. I am particularly interested in corresponding/discussing with others working on similar problems, particularly in Near Eastern archaeology. I have also been involved in exploring the possibilities of setting up campus-wide text processing and laser printing facilities. It looks as though we shall be able to offer a LaTeX service in the New Year. We are also planning to offer a WYSIWYG service, such as Ventura on IBM or a combination with Macs, for the production of academic papers. Again I have a particular interest in the use of foreign fonts, e.g. Hebrew, Akkadian, Ugaritic, Greek, etc.
My teaching and research on the Hebrew Bible leads to a concern with developing computer-aided text analysis, although I have had little time to explore this area. We have OCP available on our mainframe VAX but my use of this has been very limited. I see this as an important area of future development in teaching and research along with Hebrew teaching. ----------------- *Wilson, Noel Head of Academic Services, University of Ulster, Shore Road Newtownabbey, Co. Antrim, N. Ireland BT37 0QB; (0232)365131 Ext. 2449 My post has overall responsibility for the central academic computing service, offered by the Computer Centre, to the University academic community. Within this brief, my Section is responsible for the acquisition/development and documentation of CAL and proprietary software. We currently provide a program library in support of courses and research which contains approx. 400 programs; of these approx. 80 are in-house developments, 50 proprietary systems and the remainder obtained from a variety of sources incl. program libraries (eg CONDUIT - Univ. of Iowa). We have only very recently addressed computing within the Faculty of Humanities; academic staff in the Faculty have used computers in a research capacity and are now turning towards the various u'grad. courses. Presently we hold a grant of 79,000 pounds from the United Kingdom Computer Board for Universities and Research Councils, for the development of CAL software in support of Linguistics and Lexicostatistics. Within this project we are attempting to develop courseware to support grammar teaching in French, German, Spanish and Irish (details of existing materials appropriate to u'grad. teaching would be most welcome!). We also are investigating the creation of software to support an analysis of text (comparative studies) - in this area we are looking at frequency counts assoc. with words/expressions/words within registers etc. - again help would be appreciated. 
I am happy to provide further details on any of the above points and wish to keep informed of useful Humanities-related CAL work elsewhere. We currently use the Acorn BBC micro but are also moving in the direction of PC clones. ----------------- *Wood, Max Computing Officer, 403 Maxwell Building, The University of Salford, The Crescent, Salford, G.M.C. ENGLAND; 061-736-5843 Extension 7399 We are involved in a project to introduce the use of computing in teaching here in the Business and Management Department of Salford University, and I am keen to extend links to other Business schools both here in the U.K. and indeed in the U.S.A. Obviously therefore I would like to join your forum so as to exchange ideas, news, etc. My background is essentially in computing; I mainly supervise the computing resources available to our Department, and have formulated much of the teaching systems we currently use. ----------------- *Wujastyk, Dominik I am a Sanskritist with some knowledge of computing. Once upon a time (1977-78) I learned Snobol4 from Susan Hockey at Oxford, where I did undergraduate and later doctoral Sanskrit. More recently, I have been using TeX on my PC AT (actually a Compaq III), and in the middle of this summer I published a book, _Studies on Indian Medical History_, which was done in TeX, printed out on an HP LJ II, and sent to the publisher as camera-ready copy. It all went very well. I have received the MS DOS Icon implementation from Griswold at Arizona, but have not spent time on it. I am trying to teach myself Icon at the moment, just to learn enough to knock out occasional routines to convert files from word-processor formats to TeX, and that sort of thing. (Probably reinventing the wheel.) At the present time I am editing a Sanskrit text on medieval alchemy, and doing all the formatting of the edition in LaTeX.
Before I ever started Sanskrit, I did a degree in Physics at Imperial College in London, but that is so long ago that I don't like to think about it! ----------------- *Young, Charles M. Dept. of Philosophy, The Claremont Graduate School I am a member of the American Philosophical Association's committee on Computer Use in Philosophy. One of my pet projects is to find some way of making the Thesaurus Linguae Graecae database (all of classical Greek through the 7th century C.E.) more readily available to working scholars. From: CBS%UK.AC.EARN-RELAY::EARN.UTORONTO::LISTSERV 14-SEP-1989 15:31:15.02Revised List Processor (1.6a) Subject: File: "BIOGRAFY 7" being sent to you Date: Thu, 14 Sep 89 10:21:54 EDT X-Humanist: Vol. 1 Num. 1079 (1079) Autobiographies of HUMANISTs Sixth Supplement Following are 20 additional entries to the collection of autobiographical statements by members of the HUMANIST discussion group. Further additions, corrections, and updates are welcome. Willard McCarty mccarty@utorepas.bitnet 24 January 1988 ----------------- *Beeman, William O. Associate Director for Program Analysis, Institute for Research in Information and Scholarship (IRIS), Brown University Providence, Rhode Island 02912 U.S.A. I am a linguistic anthropologist at Brown University, and I also direct the Office of Program Analysis of Brown's Institute for Research in Information and Scholarship. Our office conducts social science research on the effects of intensive computing on academic life. We are interested in learning not only how computers have changed various aspects of academic work for students, teachers and administrators, but also how academic work is conducted per se. We have recently completed a study in conjunction with the Getty Art History Information Program ----------------- Professor; Communication, Psychosociology, Universite du Quebec a Montreal, (514) 282-4897, 282-3620 University education: engineering and administration. 
----------------- Summer Institute of Linguistics, 7500 W. Camp Wisdom Rd., Dallas, TX 75236. I am a computational linguist, interested mainly in the representation and analysis of text and discourse by computers. I hold an Sc.B. in Computer Science, A.B.s in Linguistics and New Testament / Linguistics, and an A.M. in Computational Linguistics. I am currently writing a dissertation in CL with Henry Kucera at Brown University, concerning algorithmic resolution of grammatical category ambiguity in English and Greek prose. A portion of it is forthcoming in Computational Linguistics. With James Coombs and Allen Renear, and our colleagues Elli Mylonas (Harvard Perseus Project) and David Durand (Brandeis), I have been active in the Computing and the Humanities Users' Group at Brown, and in advocating the use of descriptive markup systems (see Coombs, Renear, and DeRose, "Markup Systems and the Future of Scholarly Text Processing", in CACM 11/87). I have served as a consultant to Language Systems, Inc., developing linguistically-based spelling and grammar correction systems, large dictionary databases, and other products. I have been involved in hypertext systems since 1979, when I became a director of the FRESS hypertext system project at Brown. Currently, I am working with the Summer Institute of Linguistics, which carries out linguistic research, literacy work, and translation with minority language groups. Our research group at SIL is developing hypertext systems for the management of linguistic and literary research data, combining methods from hypertext, knowledge representation, and information retrieval. I am also a consultant to several other system development projects, including the Brown-sponsored Document Interchange Project. ----------------- *Di Lella, Alexander A., O.F.M. Catholic University of America, Washington, DC 20064 U.S.A.; 202-635-5657. I am professor of Biblical Studies at Catholic University.
I use an IBM PC AT with two hard disks, 30 Mb and 60 Mb. I do research in the various texts of the Bible: the Massoretic Text, the Septuagint, Vulgate, Peshitta, and the Greek New Testament. I have Compucord and CompuBible on my 60 Mb hard disk; the former is a highly sophisticated search and concordance program on the Massoretic Text of the O.T. I also have the R.S.V. Bible on the 30 Mb hard disk and the Greek N.T. I use my AT in doing concordance work and word studies that would otherwise take me loads of time and effort. I am awaiting the arrival of the Septuagint and, I hope, the Peshitta. ----------------- *Faulhaber, Charles B. Department of Spanish & Portuguese, University of California, Berkeley, CA 94720 (415) 642-2107 My primary interests in humanities computing are in bibliographical data bases for cataloguing rare books and MSS, which I've been doing for about ten years. I am currently designing a data base using Advanced Revelation to support my ongoing Bibliography of Old Spanish Texts, a union catalogue of medieval MSS and incunabula -- the primary sources for the study of medieval Spanish literature. I have been heavily involved with computing at Berkeley (chair of the Academic Senate Computing Committee and of the UC system committee, chair of the faculty steering committee for our Advanced Education Projects grant from IBM). As a result of these activities I have been recently exposed to the capabilities of advanced function workstations (Microvax, Sun, RT/PC, etc.) and am convinced that within three to five years they are going to be as ubiquitous as PCs and Macs are today. I am extremely interested in information about humanities programs for these machines, especially those which have been designed with the ICEC academic workstation environment in mind.
----------------- *Flaherty, Tom Academic Affairs and Research, Connecticut State University, New Britain, Connecticut 06050 USA; (203) 827-7700 I am now serving as Acting Assistant Vice President for Academic Affairs and Research for the Connecticut State University system -- enrolling about 35,000 students. When I am not so acting, I am a Professor of Psychology at Central Connecticut State U. In both of these roles I have found myself increasingly enmeshed in discussions/arguments regarding the role of the humanities. But let me back up a bit. I have been a computer lover since just about the time computers actually became available. My first exposure was to the IBM 1620 in the early 60's. Since that time, I have used computers wherever possible: data collection, data analysis, simulation, databases, word processing, etc. I, along with many others, would find it hard to function if computerless. My training was in Experimental Psychology. No humanist stuff there; just rat psychology and later psychophysics. In the 60's, it was not fashionable (at least in my kind of Psychology) to talk about non-Behavioral things (note the "B", not "b"). More recently, I have been interested in cognitive processes and AI; decidedly non-Behavioral, but naturals for folks with my predilections. In my administrative role, I have seen students use computers in ways that convince me that their role in non-technical education can and will be much greater. For instance, in watching a student use a CD-ROM-equipped PC to do a search using the Psychological Abstracts, it became apparent to me that this was more than a souped-up way to accomplish the same old literature search. The student was actually forming and refining concepts and research questions based upon the feedback from queries. A far different phenomenon than could ever take place through page flipping.
The short version is that I am interested in the applications of computing to the study of any discipline, and I believe that important and exciting advances in such applications in the humanities (research and teaching thereof) are upon us. ----------------- *Flikeid, Karin Dept. of Modern Languages and Classics, Saint Mary's University, Halifax, N.S. B3H 3C3 (902) 420-5813 To sum up my computer-related interests: I use a number of computer applications primarily for my research on Acadian spoken language texts. These form a database of 800,000 words, presently on our VAX mainframe, where I do searches using the Oxford Concordance Program. I am however now in the process of downloading these texts to an IBM-AT and have started using WordCruncher. Most of my research is in sociolinguistics, where I use several statistics and graphics applications. These I do on the Macintosh Plus, as well as most of my word-processing. I use S.P.S.S.-X and Varbrul-2S on the VAX as well. Current preoccupations include using phonetic transcriptions in the IBM environment and streamlining the IBM/Macintosh/VAX linkups. I am also exploring ways to link the computerized database to the original voice recordings. Like most of us, my "support" of Humanities computing takes place within the informal network of exchange among colleagues. I have introduced a number of colleagues to humanities computing, in particular helping with the choice of computer environments. I also tend to be consulted on down- and up-loading and transfers between IBM and Mac. Within the framework of my French classes, I do 3-4 week introductory sessions in our IBM and Mac labs. I am a member of the Association for Literary and Linguistic Computing and the Association for Computers in the Humanities. Last spring I attended the Computers and the Humanities Conference in Toronto, which I found extremely stimulating. I am Associate Professor of French (Linguistics) at Saint Mary's University in Halifax. 
----------------- *Goerwitz, Richard III Graduate Student, University of Chicago, Near Eastern Languages & Civilizations, 5410 S. Ridgewood Ct., 2E Chicago, IL 60615; 312-643-4377 Undergraduate degree - Linguistics; Master's - Bible; Ph.D. (in process) - Comparative Semitics; Computer languages: C, Icon, BASIC, some Pascal (mostly Icon right now) Natural languages: Latin, Greek, Hebrew, Aramaic (old to middle), German, French, Akkadian, a little Dutch; more to come. I'm interested in the HUMANIST group because I'm a little isolated right now. Computer-oriented scholars in the humanities are spread quite thinly over the face of academia. I guess I'm hoping to find some kindred spirits. ----------------- *Lambert, Ian Mitchell Tangnefedd, Windmill Road, Weald, Sevenoaks, Kent TN14 6PJ U.K.; +44 (0)732 463460 I am a fulltime mature MA research student in Theology at the University of Kent at Canterbury. My thesis is entitled "Structuralist modeling and computer modeling of the biblical text". It thus covers (a) structuralism, (b) computerising structuralism, and (c) structuralism as an exegetical method. My interest derives from noting the closeness of structuralism's binary opposition concepts with computer modeling, and I therefore seek to see whether there is common ground in reality. If so, what are the implications for biblical exegesis? I am the only research student in both the Theology and Computing Departments at the moment, and would welcome dialogue with others in similar fields and/or research. My supervisor is Dr John Court, Keynes College, University of Kent, Canterbury, Kent, CT2 7NL, Tel: +44 (0)227 764000 ----------------- *Lang, Francois-Michel 230 South 21st Street Philadelphia PA 19103 (215) 665-1849 (home); Paoli Research Center P.O. Box 517 Paoli, PA 19301 (215) 648-7469 (work). BA with High Honors in Classics, 1981, Princeton University.
Senior thesis topic: Homeric Speech Formulae. MS in Computer and Information Science, 1986, University of Pennsylvania. Currently enrolled in the PhD program in Computer and Information Science, University of Pennsylvania, and a full-time employee as Research Scientist, Paoli Research Center, Unisys, Paoli PA. My work with Unisys deals with maintainability and portability of large natural-language understanding systems. While working on my master's degree at Penn, I was employed for two years by Jack Abercrombie of the University of Pennsylvania, and helped develop CAI software and word processors for non-Roman alphabets. ----------------- *Marker, Hans Joergen Danish Data Archives, Odense University, Campusvej 55, 5230 Odense M, Denmark I am a historian by background and employed as associate professor at the Danish Data Archives, Odense University. My job consists, among other things, of advising users in the field of history on the choice of methods and software in research projects that involve the usage of EDP. My own research centres on prices and wages in Denmark in the first half of the 17th Century, and on historical informatics, the methodology of computer programming and usage for the historical sciences. In the latter research area I am working in close cooperation with people from other European countries, for instance Manfred Thaller from Goettingen, FRG, and Kevin Schurer from the Cambridge Group, UK. In the field of computer programming I am currently working on some pieces for the Kleio project, which is centered at the MPIG in Goettingen and involves historians from a great number of European countries. ----------------- *McDonald, J. K. An Darach RR#1 Hartington, Ontario K0M 1W0; (613) 372-2071. B.A. (UBC), A.M. (Oregon), Ph.D. (Berkeley) Canadian. 1951- Current Status: Early Retirement; ongoing activity in CALL and text management. Co-founder of Italian Q'VINCI system (1981- ).
Visiting scholar, Summer 1987, at Linguistic Institute, Stanford University, CA. Participant: 1986-, in multi-lingual funded VINCI research in CALL at Queen's University (with LESSARDG@QUCDN, BASTIANU@QUCDN, LEVISON@QUCIS, et al). Interested in mutual benefits of literary stylistics (Mexican, Italian) and linguistics. Languages, in descending order: English, Spanish, Italian, French, Latin, German, Greek, Esperanto, Celtic (smatterings). Member of the Association for Computers in the Humanities. ----------------- *Mylonas, Elli Department of the Classics 319 Boylston Hall, Harvard University Cambridge, MA 02138 I am currently a research associate in the Classics Department at Harvard, working on the Perseus Project. I am at the same time a graduate student at Brown University, finishing a dissertation in Classics on the structure of the prologues in Ovid's Fasti. My interest in computing began at Brown, about 5 years ago. It arose out of personal fascination, and continued because of the serendipitous coming together of a group of humanists and computer people. I passed through the usual phases of preparing electronic manuscripts for publication, and of being the resident "computer person" for the Brown Classics Department. (On that see my article in Scope, March 1987.) While I was at Brown, I also worked for the Computer Center, developing and teaching minicourses on using the mainframe and as a user services specialist my last year at Brown. Both these jobs made clear to me the problems facing the humanist who wants to use a computer in her daily work. While at Brown I also worked briefly for IRIS, on the Isocrates Project. My role was to write a user interface for the Harvard Search Programs that search the TLG database, and to help the Classics department to use the programs. I am also one of the early (and continuing) members of CHUG, the Computing in the Humanities Users Group. I am currently working on the textual side of the Perseus Project. 
Funded by Annenberg/CPB, this is a project based at Harvard and Boston University. We are building a large database of text and images from Classical Greece that will be linked together and will be provided with various scholarly tools, such as a morphological parser, an apparatus criticus, and spatial browsers for images. The first version will primarily be useful in teaching, but we think that later versions will have enough information to support scholarly research. At the moment we are prototyping in Hypercard on Macintoshes. My interests in humanities computing lie in both teaching and research. Key interests are Hypertext--Perseus is essentially a Hypertext/media project--SGML--all our texts will exist in tagged SGML compatible form--and the role of the humanist in originating and creating the software they need. ----------------- *Neu, Joyce Department of Speech Communication 305 Sparks Penn State University, University Park, PA 16802 (814) 863-3361 I am an Assistant Professor of Speech Communication at Penn State University where I specialize in cross-cultural communication and second language research and teaching. I received my doctorate in linguistics from the University of Southern California. Since 1983, I have used personal computers for instructional purposes, innovating the use of the PC in the ESL (English as a second language) classroom for teaching writing. Currently, I am attempting to establish an international intercultural newsletter. This newsletter will be sent to people around the world who have volunteered to participate in this exchange, and the newsletter will be written by students enrolled in my cross- cultural communication course around issues that we discuss in class. ----------------- Assistant Professor, Dept. 
of English, Box 3353 University of Wyoming, Laramie, WY USA 82071; (307) 766-3244 My experience with mainframe computers began with the text editors at the University of Chicago (Amdahl) where I keyed my own doctoral thesis on Coleridge a few years ago, and at Cambridge University where I keyed my wife's doctoral thesis in geology. Admittedly this didn't take me very far into computer science, but the merits of the keyboard were confirmed, and I have since devoted some time and money to the use and study of micros, PCs. This interest has earned me a spot as unofficial computer consultant in the humanities building at the University of Wyoming, recommending hardware and software, installing the same, and working in conjunction with our computer services who are networking (Ethernet) the campus. I have an active interest in telecommunications and digital switching, though only a smattering of the theory. After my recent visiting fellowship at Edinburgh, I have returned to Wyoming where work continues on my several jobs of editing, foremost among which is the Collected Letters of John Sterling (1806-1844) which I hope to publish from electronic manuscript in two volumes within a few years. Could any of your readers share with me stories about major university presses and their success or failure setting e-mss? ----------------- *Parker, Randolph College of Arts and Sciences, Kirkwood Hall 104, Indiana University, Bloomington, IN 47405; (812) 335-1646 Dipl. in English Linguistics, Edinburgh Univ (Scotland), MA, PhD (English Lang and Lit), Cornell Univ; faculty member or administrator, Indiana Univ, since 1972; present position: Asst Dean, College of Arts and Sciences, Indiana University. Research interests: Stylistic analysis of texts (especially dramatic texts), Linguistic and discourse theory; Administrative responsibility relative to computing: Coordinating the development of academic computing in the humanities area of the College of Arts and Sciences. 
Specific areas of concern: Computer techniques for textual analysis, Development of CAI for foreign languages and ESL, Innovative uses of database systems for research/teaching in humanities fields. Training and experience in computing: Coursework in data structures, theory of programming languages, and in AI (Indiana Univ). Practical experience designing and writing programs for linguistic analysis, bibliographical searches, and administrative computing (using mainframes, minis, and micros), Use of relational data-base, text processing, and concordance packages. ----------------- *Peterson, Mary Computer Specialist/Communications, Stoke Hall, University of New Hampshire Durham, NH 03824 I am a computer specialist for communications at the University of New Hampshire, which means that I am responsible for documentation, promotional and informative materials, brochures, announcements, annual report, and proposals for University Computing. In my previous position I was a senior writer/editor with University Communications. I also taught basic composition and fiction writing with UNH's English Department for several years. This semester I will be helping the English Department set up a Macintosh writing laboratory. Next fall I may help an instructor in basic composition team-teach a course in writing on the Macs. I am very interested in the use of computers in teaching composition. In my own work, I have found the computer to be an invaluable tool -- the only method of writing yet that is fast enough for me. I believe computers promote writing as a process, which is exactly the attitude students need to have to write many (improved) drafts of a paper on their way to polished, final work. I am also working on a basic composition textbook, now in production with a publisher in Boston, together with two colleagues. Our book is called WRITING MATTERS: FROM COMMITMENT TO COMPOSITION.
I am interested in developing software that could accompany this textbook (perhaps in a year or two). I would appreciate and welcome correspondence regarding the use of computers in teaching writing -- composition, journalism, fiction, or poetry -- as well as correspondence regarding such writers' aids as PROSE. It would be nice to receive copies of documents other teachers and researchers have used in writing laboratories. I would share the same, as we produce materials. ----------------- *Roper, John Paul Goy University of East Anglia, Computing Centre, NORWICH, NR4 7TJ, UK Telephone 0603 592379 Telex 975197 FAX 0603 58553 Deputy Director of University Computing Centre. Ex-Treasurer and current committee member of ALLC; SIG chairman for microcomputers. General interest in the use of computers for Literary and Linguistic research; currently a special interest in the use of optical disks for storage and retrieval of images such as museum objects or documents. ----------------- *Tosh, Wayne Director CAI Lab--English Dept. St. Cloud State University St. Cloud, MN 56301; 612-255-3061. Ph.D. in Germanic philology, University of Texas/Austin, 1962; Special Research Associate, Linguistics Research Center, Austin, TX, 1960-68. Currently professor of linguistics, Dept. of English, St. Cloud State University, since 1969. Chair of department's computer committee, member and past chair of university computer committee. Use word-processing in freshman composition, various language analysis programs in linguistics courses, computer-based drills in teaching freshman German. Self-taught programmer in SNOBOL4, having learned via electronic conferencing with colleagues at the University of Minnesota and elsewhere in the state. Teach an introductory course in SNOBOL4 to English and computer science majors.
Direct operations and staff of a word-processing and computer-based instructional lab, perform light maintenance and repairs, prepare annual budget for department's computer equipment, conduct workshops for faculty in the use of word-processing, spreadsheets, and a variety of utility software. Interested in computer applications to the analysis and manipulation of language. Member of ACH. ----------------- *Young, Charles M. Department of Philosophy, The Claremont Graduate School, Claremont, CA 91711, (714) 621-8082. I am an associate professor of philosophy with a special interest in ancient Greek philosophy. In computing, I am mainly concerned with finding ways to make machine-readable versions of Greek texts more readily available to working scholars. (My department owns a baby IBYCUS system.) Until July 1988, I am a member of the American Philosophical Association's Committee on Computer Use in Philosophy. From: CBS%UK.AC.EARN-RELAY::EARN.UTORONTO::LISTSERV 14-SEP-1989 15:25:23.07 Revised List Processor (1.6a) Subject: File: "BIOGRAFY 8" being sent to you Date: Thu, 14 Sep 89 10:22:06 EDT X-Humanist: Vol. 1 Num. 1080 (1080) Autobiographies of HUMANISTs Seventh Supplement Following are 21 additional entries to the collection of autobiographical statements by members of the HUMANIST discussion group. HUMANISTs on IBM VM/CMS systems will want a copy of Jim Coombs' exec for searching and retrieving biographical entries; for a copy send a note to the undersigned. Further additions, corrections, and updates are welcome. Willard McCarty Centre for Computing in the Humanities, Univ. of Toronto mccarty@utorepas.bitnet 18 February 1988 ----------------- *Bornstein, Jeremy I am an active member of CHUG, which is an acronym for Computing in the Humanities Users' Group. (This is an organization at Brown U.) I am also active in a CHUG subset called the Hypertext Working Group. In the H.W.G.
we are attempting to design interfaces & data structures for our ideal hypertext system, and we've gotten pretty far and may be looking for funding soon. Another of my interests involves what my friends jokingly call 'hypotext', which refers to my practice of using computer programs to cut up and re-assemble compositions (which are later edited). Somewhat related to this, I am the moderator of a list, 'weird-l', which distributes pieces of strange creative writing, poetry, etc. Of course, I am interested in the ways in which computers may aid people in non-scientific fields. ----------------- *Cardullo, Pamela I am a technical writer for the Academic Computer Center at Georgetown University. I write documentation for our computer resources. I hold a BA in History from St. Mary's College of Maryland and an MA in Divinity from the University of Chicago. ----------------- *Charlan, Maurice Communication, Concordia U. I am an assistant professor of Communication Studies at Concordia University. I have a PhD in Communication from the University of Iowa, where I studied rhetoric and social theory with Michael McGee and communication, technology and culture with James Carey. Recent Publications: Technological Nationalism, Canad. J. Pol. Soc. Theory, vol 10; L'informatique et la culture de la raison, Communication Information, vol 8; Constitutive Rhetoric: The Case of the Peuple Quebecois, Quart. J. Speech 73; On Rhetoric and Cultural Theory, Communication, in press. I am concerned with the relationship of discourse to social theory. This includes an interest in rhetorical theory, hermeneutics, ideological critique, and theories of rationality and judgement. I am also interested in the relationship between technical or instrumental rationality and social or value rationality. Next year, while on sabbatical, I intend to develop the concept of "critical phronesis" (where phronesis is practical wisdom) to develop a theory of judgement appropriate for social uses of technology.
Finally, at Concordia, I am involved with our MA and PhD in Communication, where I teach seminars on the above, as well as on introductory theory. Oh, finally, re. computing. While I have written critically of the computer as a manifestation of the "dark side of the enlightenment" (cf. Adorno and Horkheimer), I also find it fascinating. I have a BA in mathematics and a long background of programming experience. Consequently, I recognize the dangers of computing--of being "seduced" by the apparatus. In other words I am critical of computers because (1) they are poor substitutes for humans, and (2) their significance to the human sciences can easily be exaggerated. There are those in communication who think that a computer-based content analysis of a text constitutes an effective "reading" of its values or bias. (3) I am aware that I am somewhat prone to hacking, which is ultimately an obsessional waste of time. ----------------- *Cloutier, Andre Telephone (807) 343-8620 Professor in Quebec literature and French as a second language. Work in computers has been limited to the class environment: - Use of Word Perfect to prepare documents and text indexing; - Also am using SSI data, awaiting Data Perfect for sorting text material. - Involving classes in using computers for class work and assignments. - Research into Concordance programs, hoping to find a way to sort large amounts of text data and put them into practical use in the near future. - Hopefully in the near future I will be able to design computer assisted classes in languages and literature. - Also a member of the University Senate Computing Committee and since last September a representative of Lakehead University on the OCCH [Ontario Consortium for Computing in the Humanities]. ----------------- *Cranton, Brian 29 Hadwen Road, Worcester, MA 01602; (617) 753-0235 I am currently a sophomore at Worcester Polytechnic Institute in Worcester, MA, and I am majoring in Mechanical Engineering.
My background with computers in the humanities is fairly broad. I was a computer consultant for the Rockingham County Child and Welfare Office for approximately one year, helping to integrate their filing system into a computer database. Also, I worked for several months at the Harvard Graduate School of Education (Regional MATH Network) under a government grant. The project was a supplementary math workbook for elementary school children, written by math teachers in the Boston area. My job was to convert their notes to a desktop publishing program. In addition to those two formal jobs, I have acted as a consultant for several people attempting to integrate computers into their small businesses and have helped teach young children the basics of operating home computers. I do not belong to any formal organizations related to education or computing at the moment, but I do have a strong interest in the field. I look forward to being able to contribute to your group. ----------------- *Durand, David My name is David Durand, and my current job is system programmer/system manager here at Brandeis. I have done a fair amount of work that could be considered ``applied humanities computing'': linguistically based work in English spelling correction and Spanish spelling correction (including a morphological parser for Spanish that works from a 50,000 word dictionary). I was also involved in the design and implementation of a text-layout package based on SGML in 1983, for a company whose management was worse than its technical staff. I have been heavily involved with CHUG (the Computing and Humanities User Group) at Brown during the last year. My current interest is in hypertext systems to support scholarly research in the humanities. In particular I am trying to integrate descriptive markup, multiple-version handling, cross-linking and parallel texts into a single framework.
Steve DeRose and I have solved many of the technical problems with implementing such facilities and are currently implementing the data-handling functions and starting to design the proper sorts of interface. My non-computer but relevant interests are analytic philosophy and early Maya civilization. I'm looking forward to joining the discussion, as the samples that I've seen look very interesting. ----------------- *Fortesque, Susan Joan I have a BA Honours degree in Italian and French from the University of Reading and an MA in Linguistics and English Language Teaching from the University of Leeds. I worked in Teaching English as a Foreign Language for 16 years, in Italy, Nepal and Britain, ending up at Eurocentre, Bournemouth, UK, where I was the teacher trainer. While there I set up a computer room and software library. My experiences are described in the book I co-authored with Christopher Jones - Using Computers in the Language Classroom (Longman 1987). While at Eurocentre, I also helped to design an interactive videodisc for learners of English as a foreign language. In January 1986 I accepted a post with Barclays Bank and worked for them as a course designer (interactive videodisc) until October 1987. The team of which I was a member was responsible for the design of interactive videodisc training materials for bank staff. In October 1987 I came to Heriot-Watt University, where I am studying for an MSc by research in the Department of Computer Science. My research area is the application of AI techniques, particularly expert systems, intelligent tutoring systems and natural language interfaces, to the design of interactive videodisc training programmes. At the moment I am carrying out a review of the literature. ----------------- *Germain, Ellen I'm a VM systems programmer in the Academic Systems group at Columbia University; I am also a graduate student in the English department, specializing in medieval literature.
----------------- *Giampapa, Joe I am an undergraduate at Brandeis University ('89), studying Computer Science, with interests in: AI, NLP, Linguistics, educational and responsible computing, and Italian cultural studies. Presently, I am working on a Lisp structure ==> LaTeX formatter of Warlpiri-English dictionary entries, as part of the Warlpiri Project at MIT's Center for Cognitive Science. Future plans involve writing a Lisp structure <==> SGML format generator/parser for the dictionary. I have been forwarded some mail messages from the Humanist network about SGML, and feel I could benefit and possibly contribute by direct contact with the network. ----------------- *Guedon, Jean-Claude Institut d'histoire et de sociopolitique des sciences, Detache en Litterature comparee, Universite de Montreal, CP 6128, Succursale "A", Montreal, QC H3C 3J7; (514)-343-6208 (office), 343-6609 (secretary). By training, I am a historian and sociologist of science, but I also dabble in questions like science and literature, or utopias. My interest in computing, however, stems more from my sociological leanings. For example, I am presently undertaking a large socio-historical research project funded by SSHRC to study, on a comparative basis, the career trajectories of French- and English-speaking engineers in Quebec from about 1800 until about 1965-70. A population of several thousand engineers such as the one that I am facing requires the use of a well thought-out database, and I am presently investigating the possibilities of using 4th Dimension with a Macintosh II. Otherwise, I am interested in new communications technologies such as the Minitel in France and the social effects it will have in the near future. I am a full professor at the University of Montreal and I share my work between history and sociology of technology on the one hand, and the programme in comparative literature on the other.
----------------- *Krovetz, Robert I'm a doctoral candidate at UMASS at Amherst in Computer Science. I'm working on how natural language techniques can be used to improve the indexing of large text databases. I'm still in very early stages (literature review), but I hope to get a proposal done sometime this summer. I received my Master's degree at the University of Maryland in 1979 and then worked for several years at the National Library of Medicine. ----------------- *Marcos-Marin, Francisco Ap. 46348, E-28080 Madrid, Spain; Phone: (34)(1)3974529 Professor of General Linguistics, Doctor en Filosofia y Letras (Filologia Romanica), Universidad Autonoma de Madrid; Director of the Madrid team of EUROTRA (Machine Translation, European Communities); Developer of UNITE, a package for computer-aided philological editing. ----------------- *Neuman, Michael Academic Computer Center, Reiss Science Building, Room 238, Georgetown University, Washington, D.C. 20057; (202) 687-6096 After earning a Ph.D. in English and working my way through the labyrinth of tenure and promotion, I became the Director of Academic Computing at my institution (Capital University in Columbus, OH). My "conversion experience" was occasioned by a review of Writer's Workbench with a friend who worked at Bell Labs; I discovered then that computers could provide my students with services beyond what I was capable of as an individual professor. Currently, as an Assistant Director of the Academic Computer Center at Georgetown University, I am exploring the ways that computers can help colleagues teach writing and do research in literary analysis. Kurzweil, WordCruncher, programs for stylistic analysis -- these are becoming a part of my professional life, though some day I am sure to return to my own classroom with a storehouse of tales of my adventures with computers. ----------------- *Nye, E.W. Assistant Professor Dept.
of English, Box 3353 University of Wyoming, Laramie, WY 82071, U.S.A.; 307-766-3244 I have been a visiting fellow at the University of Edinburgh. I'm now in the English department at the University of Wyoming. My experience with mainframe computers began with the text editors at the University of Chicago (Amdahl) where I keyed my own doctoral thesis on Coleridge a few years ago, and at Cambridge University where I keyed my wife's doctoral thesis in geology. Admittedly this didn't take me very far into computer science, but the merits of the keyboard were confirmed, and I have since devoted some time and money to the use and study of micros, PCs. This interest has earned me a spot as unofficial computer consultant in the humanities building at the University of Wyoming, recommending hardware and software, installing the same, and working in conjunction with our computer services staff, who are networking (Ethernet) the campus. I have an active interest in telecommunications and digital switching, though only a smattering of the theory. Since I reached Edinburgh on a visiting fellowship, e-mail and bulletin boards have become for me a vital connection with people and projects left behind. I'm now at work on my several jobs of editing, foremost among which is the Collected Letters of John Sterling (1806-1844) which I hope to publish from electronic manuscript in two volumes within a few years. Could any of your readers share with me stories about major university presses and their success or failure setting e-mss? ----------------- *Paramskas, Dana Professor of French and Director of French Studies, University of Guelph, Guelph, Ont. N1G 2W1 (519) 824-4120 x 3164 BSL and MSL (Georgetown) in Applied Linguistics/French and Ph.D. (Laval) in 20th century French Drama.
Interested in hearing from anyone involved with courseware for French, especially anyone working in the area of pedagogical parsers (parsers limited to the analysis of surface grammar, of the stylistic level typical of a second language learner at beginning-intermediate stages, with the ability to spot and identify most morphological and basic syntactical deviations). ----------------- *PHI (The Packard Humanities Institute) 300 Second Street, Suite 201, Los Altos, California 94022 USA; (415) 948-0150 All of our activity currently involves collecting and analyzing bodies of text for eventual inclusion in a CD-ROM: 1. We are collecting all Latin writings through some undecided cutoff date. We issued in December 1987 PHI Experimental CD-ROM #1, which contained: - 4 million Latin words processed by PHI. These include most of the authors of the Republic. For example, Cicero is complete. Several of these texts have not been available before in machine-readable form, e.g. Quintilian, Celsus, Seneca the Elder. - IG 1 and 2, produced at Cornell University under a grant from The David and Lucile Packard Foundation. - A number of miscellaneous texts produced by the Center for the Computer Analysis of Texts at the University of Pennsylvania. Many of these were previously included in the Pilot CD-ROM of the Thesaurus Linguae Graecae. Biblical texts include the Septuagint, New Testament, Hebrew Old Testament, Authorized and Revised Standard Versions. Other texts include Arabic, Syriac, Coptic, Aramaic, French, Danish and English. - Experimental CD-ROM #1 will be ready for distribution by the end of February 1988. The cost will be very low. 2. PHI is working with outside scholars to produce complete morphological analyses of the Hebrew Old Testament and the Greek New Testament. Various other projects are being considered and even dabbled in, but the Latin CD-ROM should occupy us for quite a while.
Main PHI personnel: ----------------- Computation Center, Hebrew University of Jerusalem, Givat Ram, Jerusalem 91904, Israel; telephone: +972-2-584536 I am a senior programmer at the Computation Center of the Hebrew University of Jerusalem. My major tasks are providing consultation and preparing instructions on the use of applications software on our mainframe (CDC Cyber 180-855 operating under NOS) and on IBM and similar personal computers. [Since the campus where I am located is devoted to the natural sciences, most of my consultation and writing is for the benefit of natural science researchers, but humanities and social science researchers are also served.] My main interest outside of work is in various aspects of modern philosophy (mainly analytic philosophy and the philosophy of science) and at present I am preparing a paper in this field. [Here there is a more direct connection with the humanities, but computers enter the picture only in that I use editors and word processors to prepare the paper.] ----------------- *Rudman, Joseph Department of Physics (or) Department of English, Carnegie Mellon University, Pittsburgh, PA 15213; (412) 243-7063 [home], (412) 268-2775 [work] Doctor of Arts in English from Carnegie Mellon University. Scientific Project Administrator in High Energy Physics and Sometime Instructor in English at Carnegie Mellon University. Treasurer of the ACH. Researching and teaching computers and the humanities courses since 1974. Currently setting up a clearinghouse of information on computers and the humanities courses. Currently working on stylistic and authorship attribution problems in the canon of Daniel Defoe. ----------------- *Smith, Randall M. <6500rms@????> Department of Classics, University of California, Santa Barbara, CA 93106 (805) 961-3556; 707 Bolton Walk, Apt. 202, Goleta, CA 93117, (805) 685-8078 I am a graduate student in Classics at the University of California at Santa Barbara.
I am currently working on my Ph.D., and I intend to specialize in ancient philosophy, science and mathematics. For the past two years I have been working as a research assistant to bring computer-aided research to our department. The main project has been setting up an MS-DOS system which uses the TLG CD-ROM #B. I have been adapting Greg Crane's UNIX software to run under MS-DOS, and I have also written an interactive user interface to drive the modified versions of his programs. Once this software is running smoothly I will probably start working with the new PHI CD-ROM and the new TLG CD-ROM C. I also do general computer consulting for the Classics department whenever other questions arise. ----------------- *Spolsky, Bernard Department of English, Bar-Ilan University, Ramat-Gan, Israel My introduction to computing in the humanities was a post-doctoral seminar that Paul Garvin ran at the 1964 Linguistic Institute; Paul and I co-edited the papers in what is one of the earliest collections in the field. While I was teaching at the University of New Mexico, we did a good deal of work with Navajo on the computer, the results of which have been published. I now teach linguistics in an English Department; I am now a user (of word-processing, statistical packages and electronic mail) rather than a researcher in the field, but like to keep in touch. My main research interests now are sociolinguistics, applied and educational linguistics, and language testing. ----------------- *Young, Nora Fleming (Stormwalker) P.O. Box 80866, Fairbanks, Alaska 99708; (907) 479-8160. Born in Aberdeen, Scotland; presently residing in interior Alaska. B.A. in Sociology; B.S., area of specialty Social Work; B.S. in Human Resource Management, area of specialty Counseling. M.A. in Public Administration, area of specialty the administration of state-funded counselling programs. PhD in Psychology, area of specialty Applied Psychology.
I am interested in the interface between the scientific application of computers and human beings, i.e. can computers enhance or inhibit human creativity and learning? Can we use the computer to enlarge the scope of human experience as opposed to organizing and standardizing as much as some of my more technical friends would like? I became hooked on computers when I had to write a doctoral thesis and did not know what a floppy disk was! I learned, I really learned! I am not so much interested in technology as I am in the effect of computers on creativity. I appreciate my friends who write brilliant programs (actually I am thoroughly cowed by them); however, are the programs easy to use? simple to learn? will they enrich or simply occupy time and energy? how and why does it apply and should it? When I write poetry on my computer it gives me great freedom and I enjoy that. The tactile pleasure of a big yellow pad and pencil still seems to be important. Can I learn to use my computer in new ways? I am interested in what has been done to date with computers. I am fascinated at what might be done, could be done, undreamed of things that one day will become reality. From 124 miles south of the arctic circle, Nora F. Young P.S. My job: I am in private practice. I do a great deal of marriage and family therapy, work with drug and alcohol abuse, a number of public offender type clients, human beings in the process of change, and five computers. I also test psychological software for others in the field of psychology. That is to say, if it has bugs I find them, or my clients do, and we all learn. From: CBS%UK.AC.EARN-RELAY::EARN.UTORONTO::LISTSERV 14-SEP-1989 15:26:10.55 Revised List Processor (1.6a) Subject: File: "BIOGRAFY 9" being sent to you Date: Thu, 14 Sep 89 10:22:15 EDT X-Humanist: Vol. 1 Num.
1081 (1081) Autobiographies of HUMANISTs Eighth Supplement Following are 21 additional entries to the collection of autobiographical statements by members of the HUMANIST discussion group. HUMANISTs on IBM VM/CMS systems will want a copy of Jim Coombs' exec for searching and retrieving biographical entries; a copy will be found on HUMANIST's file-server. Further additions, corrections, and updates are welcome. Willard McCarty Centre for Computing in the Humanities, Univ. of Toronto mccarty@utorepas.bitnet 13 March 1988 ----------------- *Boggess, Julian Eugene (Gene) III Mississippi State University Department of Computer Science P. O. Drawer CS Mississippi State, MS 39762; (601) 325-2756 (main office); (601) 325-2079 (local). I am a humanities retread, with a B.A. in Philosophy and English, an A.M. in Linguistics, and a Ph.D. in Communications (Psycholinguistics) from the Univ. of Illinois. I spent several frustrating years trying to teach Speech and English, but job opportunities looked much better in Computer Science, so I picked up a Master's in CS recently and am currently back to teaching (CS) full time after serving as the campus microcomputer consultant for the Computing Center for several years. Although my time has been dominated by my teaching and consulting duties, I have been involved, to a small extent, with two humanities- related projects: obtaining a computerized writing lab for the English department, with the intent of eventually using the lab for all writing courses here, and working on a communication-facilitation computer system for communication- impaired individuals. In addition, next Fall I will be teaching an honors seminar in Cognitive Science, in which I hope to tie computer concepts back to some fundamental issues in Philosophy, Psychology, and Linguistics. Any suggestions with regard to what textbook(s) I should use for the course, or what articles or books I should be reading to prepare myself for the course, would be welcome. 
I think that HUMANIST, or something like it, has been needed for quite a while now. ----------------- *Brook, Andrew I do very little to support computing in the humanities, though I use my PC constantly. Basically, my use of the computer is restricted to word-processing and Email. I use WordPerfect and that is the only word-processing package I know. I have discovered a couple of things about it that are not in the manual, but basically I am just a user. And my use of Email is similarly unadventuresome -- I just use it. But I am interested in computing in the humanities and I find the mail that rolls by on HUMANIST very interesting. ----------------- *Cioran, Samuel D. McMaster University, Department of Modern Languages; (416) 525-9140 (x7012) ----------------- University of Rhode Island, Kingston, RI 02881 (401) 792-2501 I am Assistant Director of Academic Computing at the University of Rhode Island. My academic background is not particularly applicable to my present activities since I am a "re-tooled" biologist, having done graduate research in plant physiology before having grown tired of standing around up to my elbows in chemical soup. At present Academic Computing reports to the Vice President for Academic Affairs, although a reorganization is planned in the next (???) year(s) and the reporting structure will change. Academic Computing is responsible for support of all educational and research computing at the University, providing a variety of services including centralized computing, planning, installation, and facilities management for local (department/college) computing sites, consultation and educational support, and software review/installation/maintenance. Specifically, as Assistant Director, I have responsibility for long-range planning and "outreach"--our efforts at contacting and assisting new and different user communities within the University. At present use of computing in the humanities is very basic at URI.
The primary application is word processing, though recently interest has been increasing. A Macintosh teaching/research/study lab is currently being developed, designed specifically for use by faculty and students in the humanities. Faculty from Art, Languages, and English have all been active participants in design and hardware/software selection for this facility. A microcomputer-based writing laboratory was opened in September, 1987, and has been very actively used this year. A few faculty members, primarily in English, are using computing applications for analytical work and there is increasing interest in such projects. I expect my participation in HUMANIST will provide me with ideas and material which will prove beneficial in assisting humanities faculty in applying computing to their class and research activities. ----------------- *Coombs, Norman R. Professor of History, Rochester Institute of Technology, College of Liberal Arts, 1 Lomb Memorial Dr., Rochester, NY 17623 I have been teaching at RIT since 1961. My original doctoral research was in the history of Christian Socialism in the Church of England. Gradually, I moved into American history. With the aid of an NEH grant, I researched and wrote _Black Experience in America_, published in 1972. During recent years I have become a computer enthusiast. I am totally blind, and a PC with speech synthesizer has taken over much of my reading needs. Students now submit all papers and essay take-home exams on electronic mail. Also, during recent years I have been using computer conferencing to reach distance learners. I have written 3 articles describing this work, one published and 2 forthcoming. I was a co-presenter of papers at the Second Guelph symposium on Electronic Conferences and at the Third Conference on Computers and the Handicapped at Cal. State Northridge. I am planning to assist in a communications course on computer and audio conferencing next year which will mainly be taught through conference systems.
I also have 6 articles on Black history being published in an encyclopedia on American Immigration. Presently, the courses I teach are freshman-level American history (one version through the College of Continuing Education combining telecourse materials and a computer conference). Also, I teach upper-level courses in Black history and the history of Christianity. ----------------- *Corbett, John My scholarly background is in Classics (Roman History); and my undergraduate teaching at Scarborough College in the University of Toronto is also for the most part concerned with Classics (history, literature, civilization). I have graduate cross appointments to the Centre for Religious Studies and the Centre for Medieval Studies, where I teach various courses on religion and religious literature in Late Antiquity and the early Middle Ages. Within this broad area my own research is unified by its focus on religion and social change; in the course of this study I have had occasion to add the Semitic languages to the Latin and Greek of my original Classics training: here my work is for the most part concerned with biblical and rabbinic Hebrew (Aramaic) and Syriac (on the early Christian side). A current major research interest is the comparative study of biblically based liturgical poetry (hymns) in Jewish and early Christian traditions (with emphasis on Hebrew, Syriac and Greek). As for humanistic computing, I am interested in word processing and text analysis in non-Roman character fonts; as yet I have encountered much frustration and made little progress (though I am hopeful for Nota Bene). My work at the Centre for Computing in the Humanities has involved chairing the Electronic Text Archive Committee and bringing to Toronto on-line access to the Global Jewish Database from Bar-Ilan in Israel. The Univ. of Toronto is the first university outside Israel to have access to this most important research tool.
I would be pleased to cooperate with anyone working with biblical, Jewish or early Christian texts, especially those who know something about computer applications. ----------------- *Davis, Douglas A. Department of Psychology, Haverford College, Haverford, PA 19041 (215)649-7717 I am a psychologist working on Freud biography. I teach a humanities-oriented personality course and am eager to stimulate more Haverford interest in humanities computing and networking. ----------------- *Delaney, Paul I'm a Professor of English at SFU, particularly interested in Macintosh applications. Am working with George Landow at Brown/IRIS on some Hypercard units to integrate with his Context32 English course--at present, a unit based on the full text of *Joseph Andrews*, later, perhaps, one on Joyce's *Ulysses*. ----------------- *Even-Zohar, Itamar (B10@TAUNIVM) Porter Institute School of Cultural Studies, Tel Aviv University, Tel Aviv, 69978 Israel Tel. +972-(0)3-427233 or 5459-420. Professor of Poetics and Comparative Literature and Artzt Chair Professor of History of Literature, Tel Aviv University. I am Director of The Porter Institute for Poetics and Semiotics and previously (1973-1982) also Bernstein Chair Professor of Translation Theory. I have studied at the Universities of Tel Aviv, Jerusalem, Oslo and Copenhagen and have been a research fellow in many European and American universities and institutes. I am also editor-in-chief of *Poetics Today*, an international journal for the science of literature and adjacent fields. Main fields: theory of literature, semiotics of culture, historical poetics, transfer studies (interference and translation). Main work since 1970 has been developing polysystem theory, designed to deal with dynamics and heterogeneity in culture.
My research has been based on extensive field work on cases such as Hebrew (especially in its relations with Old Mesopotamian and Middle Eastern cultures, Arabic, Russian and Yiddish), French, Russian, Old Icelandic and Italian. I have published mainly in Hebrew, English and French. My enlarged collection of papers, entitled *Polysystem Studies* (after my *Papers in Historical Poetics*, 1978) is now in preparation for the press. I have been relatively active in introducing the use of computers and more sophisticated software to my department. We currently work chiefly with Nota Bene, which also has developed the best software for Hebrew. I have actually introduced Nota Bene to Tel Aviv University, where it is gradually becoming popular even among non-academic staff. I have developed enhancements for Nota Bene by writing some 80 programs with its unique programming language. Many of these programs are not just utilities but research-oriented routines. The capacity of Nota Bene to store information and the variety of ways material can be organized for retrieval also has encouraged some of my younger colleagues and students to find ways of exploiting to the utmost the incredible potentialities of NB. I have just established, together with Davis Sitman from Tel Aviv Computers Centre, a Nota Bene list (NOTABENE@TAUNIVM), as well as put all my programs (both actual programs and documentation) at LISTSERV@TAUNIVM. Among my other computer-oriented activities, I have encouraged our faculty to work increasingly with BITNET and its electronic bulletins, have opened a direct channel at our institute for DIALOG and am now a member of Tel Aviv University Computer Users' Committee.
Since 1974, associate professor of German linguistics at Roskilde University Center, Denmark, with brief stints at Duesseldorf University (Germany, 1980/1) and Copenhagen University (1983-6 part time). My active interest in computers is (apart from word processing, of course) at the moment limited to using electronic mail, mostly in connection with my editorial work with the Journal of Pragmatics. But I am also interested in cognitive science/AI, although more on the basis of a general interest in the field, not as an active researcher. ----------------- *Haviland, John B. Reed College, Portland, Oregon 97202 U.S.A. I am Associate Professor of Linguistics and Anthropology at Reed College. Before that I was at the CASBS in Stanford, the UNAM in Mexico City, and the ANU in Canberra. I work on Tzotzil (Mayan), Guugu Yimidhirr (Paman), and Mixtec (Oto-manguean), doing linguistic, sociolinguistic, and ethnographic work in both Mexico and Australia. My recent work has been on verbal fights, evidentials, Aboriginal social history, and flower-selling among Mayan peasants. ----------------- *Johnson, Eric Professor and Head, Division of Liberal Arts, 114 Beadle Hall, Dakota State College, Madison, South Dakota 57042 USA; (605) 256-5270 I am a Ph.D. (Notre Dame, 1977) in English (specializations in nineteenth-century literature and in literary criticism). My dissertation is on the novels of Dickens and the theory of the novel. I became interested in computing shortly after I received my Ph.D. I studied a series of languages and operating systems. I am most interested in programming in SNOBOL4 and SPITBOL: dangerously powerful languages for string manipulation and non-numeric processing. I am the Director of a conference called ICEBOL: the International Conference on Symbolic and Logical Computing.
ICEBOL3 will feature a series of presentations about non-numeric computing in SNOBOL4, SPITBOL, Icon, Prolog, and LISP on April 21-22, 1988 (contact me for additional information). WACKFORD, STRONG, BLIMBER, and MICAWBER are production programs I have written to help identify writing blunders and make suggestions for revision (I name my programs after literary characters, usually bunglers from Dickens novels). I have also written programs for literary analysis. They are written for MS-DOS microcomputers and IBM mainframes using VM/CMS. I have published articles and read conference papers about programming in SNOBOL and about computing in the humanities. ----------------- *Kennedy, Alan KENNEDY@DALAC During the past year the computing centre at Dal has installed a small lab of Atari ST1040's (six of them) in the English Dept, with dedicated connections to the vax8800. We use WordPerfect primarily, but don't bother giving a lot of instruction in it. We assume our students will pick it up as they need it, and that has been the case. The lab is too small now for anything much more than graduate student use, but it will expand to 12 stations next year, and add a laser printer. That should allow us to run some tutorials in composition for our undergrads. One of my undergraduate classes, English 204, on The European Novel, is making use of the CoSy facility this year. CoSy stands for Conferencing System, and it is a kind of electronic message center. Students can enter their reactions to the books we are discussing in class, they can argue with each other, they can ask questions of me, and I can reply, intervene, keep silent, send them mail messages, or post information that they can get back to on numerous occasions. I find it an exciting activity, and so do some of the students. ----------------- *Labbett, Beverley.
(M110%UK.AC.UEA.CPC865) Lecturer in Education, School of Education, University of East Anglia, Norwich NR4 7TJ. Tel. 0603-56161, Ext. 2640. Interests: the use of data bases and interactive video in the teaching and learning of "The Humanities." ----------------- *Massirer, Mary English Department, Baylor University, Waco, Texas 76798 (817) 755-1768 I am teaching both technical writing and freshman comp. at Baylor and have been for 20 years. Our university joined BITNET about 18 months ago, so we are comparatively new at it. As far as I know, I am the only person in the English Dept. who is using BITNET so far. We are using Macintosh computers in our composition courses and are especially interested in new developments and methods in composition teaching. I'll be eager to hear from all of you. ----------------- *Piovesan, Walter I am head of the Data Library at Simon Fraser University. We manage and provide access to textual and numeric data in machine-readable form. ----------------- *Richmond, Ian M. <42100_1156@uwovax.UWO.CDN> Department of French, University of Western Ontario, London, Ontario, Canada N6A 3K7. 519-661-2163 Ext 5703 also IMR@UWOVAX.BITNET I have a Ph.D. in French literature with a specialization in the seventeenth-century area. In this field, I have published a number of articles, a book (*Heroisme et galanterie*, Naaman: Sherbrooke, 1977) and two collections of colloquium proceedings. I have also published articles and given papers on the question of French immersion programs in Canada, and have given papers on Esperanto literature. In the area of computing, I have prepared an electronic, bilingual lexicon of microcomputing terms, published in 1986 by Linguatech as part of the Termex/Mercury series.
I have also written and programmed *Teaching Assistant*, a grade-book program currently under consideration by Gessler Publishing; revised (language content and program code) a series of CAI lessons in Esperanto for the *Esperanto Press* (Bailieboro, Ontario); and am currently working on CAMA (Correspondance des Affaires en Modules Automatises), a CAI project in French business correspondence funded by the Ministry of Colleges and Universities with funds provided by the Secretary of State. My present position is that of Chairman of the Department of French at the University of Western Ontario. In this capacity, I use a microcomputer constantly for correspondence, administration, preparation of teaching materials, grades management (using *Teaching Assistant*), and research (storage and retrieval of research notes, as well as word processing). My primary computing interests lie in 1) the use and development of applications to improve the efficiency of researchers in the humanities and to ease the difficulties they encounter (particularly in the foreign languages) in producing a hard copy of their finished product; 2) CAI; and 3) programming. ----------------- *Rumery, Kenneth Music Department, Box 6040, Northern Arizona University, Flagstaff, Arizona 86011 (USA); (602) 523-3850 I teach advanced theory, analysis, and 20th-century music. I supervise activities in music CAI and synthesis. I have conducted surveys and written articles on computer use in music. I have written several music tutorials for the Macintosh using HyperCard and CourseBuilder. I am working on the identification and description of musical thought processes as these relate to the music maker's use of pattern perception and memory. Evidence of musical thought is present in compositions, improvisation, and all facets of performance. I am interested in making connections between patterns of musical thought and patterns of thinking in other arts. ----------------- *St.
George, Art University of New Mexico, CIRT, 2701 Campus Blvd., NE, Albuquerque, NM 87131 (505) 27708046 I'm presently the Director of External Networking and Supercomputing at the Computing Center and an Associate Professor of Sociology. I received my Ph.D. in Sociology from the University of California. For a number of years I was the Director of User Services at the Computing Center and in that capacity provided all consultative services to users, including those in the humanities. I have retained a strong interest in providing computing services to those users not in traditional areas of computing: English, philosophy, history, and so on. My research interests are eclectic and range from historical research on trails to the social impact of computing. ----------------- *Sinclair, Gerri I run a Centre at Simon Fraser University called EXCITE (EXemplary Centre for Interactive Technologies in Education) in the Faculty of Education. ----------------- *Stampe, David FROM INTERNET: stampe@uhccux.uhcc.hawaii.edu FROM UUCP: {ihnp4,uunet,dcdwest,ucbvax}!ucsd!nosc!uhccux!stampe LAST RESORT: stampe%uhccux.uhcc.hawaii.edu@rutgers.edu Dept. of Linguistics, Univ. of Hawaii/Manoa, Honolulu HI 96822; 808-948-8602, 808-396-9354 (home). I'm a linguistics professor at the University of Hawaii. I was at Ohio State University 1966-85. I am best known as the founder of the theory of natural phonology. At UH I also teach computational linguistics, the whole gamut, and support a number of projects in computational support for linguistic research. The most important of our projects, perhaps, is a network archive file server we are constructing as a public repository for Pacific and Asian language data of all sorts, and for supporting software for text, lexicographic, and grammatical analysis. I'm also quite interested in verse and music analysis (not limited to Western styles) and software in support of that.
From: CBS%UK.AC.EARN-RELAY::EARN.UTORONTO::LISTSERV 14-SEP-1989 18:29:18.69 Revised List Processor (1.6a) Subject: File: "BIOGRAFY 10" being sent to you Date: Thu, 14 Sep 89 10:22:42 EDT X-Humanist: Vol. 1 Num. 1082 (1082) Autobiographies of HUMANISTs Ninth Supplement Following are 27 additional entries to the collection of autobiographical statements by members of the HUMANIST discussion group. HUMANISTs on IBM VM/CMS systems will want a copy of Jim Coombs' exec for searching and retrieving biographical entries. A copy can be found on the file-server; see the Guide to HUMANIST for more information. Further additions, corrections, and updates are welcome. Willard McCarty Centre for Computing in the Humanities, Univ. of Toronto mccarty@utorepas.bitnet 31 March 1988 ----------------- *Adman, Peter I am the Assistant Director at Hull University Computer Centre, responsible mainly for User Services. The University has strong departments in the humanities, two thirds of the students being in non-science subjects. Although I am a graduate in Mathematics, my main interests have been in the humanities, having got involved in using computers on historical data consisting of poll-books (18th and 19th century) and census schedules (1851, 1861). To manipulate such data without having to use standard, general-purpose database systems, I implemented MIST (Manipulative Interactive Software Tools), which runs on mainframes and minis, and subsequently on micros. Recently most of my work has concentrated on intelligent user interfaces (PROFILE - A Humanities Computing Workbench) and some linguistic applications. I would be happy to receive mail from people with similar interests. ----------------- *Anderson, Troy I am currently doing work at Stanford University on the extinct language of my ancestors, the Lower Coquille Indians of the Central Oregon Coast. My method for reconstructing a dead language is as follows.
I originally did a great deal of bibliographic work, trying to gather as much information on the language as I could. I ended up with about 300 pages of texts and about 10 hours of tapes. Half of the textual material is published and the other half unpublished. The published material was gathered by Melville Jacobs in his 1932 University of Washington vol. 8, Coos Myth Texts and Narrative and Ethnologic Texts. The unpublished material is from J.P. Harrington's collection (I question its validity). The computer has played a big part in my research. I sent the texts in published form to BYU to be optically scanned onto DOS text files, which I have since converted to WordPerfect. I am now trying to reformat the texts so that they line up phrase by phrase. Let me back up. The texts I have from Mel are translated clause by clause, and I would like to make a dictionary and a list of morphemes. The way I am running through all these texts is by sending the formatted texts through a program called WordCruncher, which used to be called, I believe, the BYU concordance program. It will make a concordance, clause by clause, of the texts, picking out the words so that I can analyze the texts morpheme by morpheme. Once again I must back up: the texts are transcribed morpheme by morpheme, but the translation is free. I need to find out how the translation works literally. From there all my problems should be solved, and it will be just a matter of picking up a totally foreign language and then trying to make a grammar book out of it. I am currently at the stage of formatting the texts into WordCruncher format. My technical expertise prevents me from writing a program to merge the English and the Miluk files (two separate files), which are lined up perfectly by number of clauses but not by number of pages.
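[Editor's note: the merge Anderson describes is a small programming task once the clause alignment is trusted. A minimal sketch in modern Python, with hypothetical file names (miluk.txt, english.txt) and the assumption of one clause per line in each file, as the entry describes:]

```python
# Interleave two clause-aligned text files into one interlinear file.
# Assumes one clause per line and the same number of lines in both
# files; file names here are hypothetical illustrations.

def merge_aligned(miluk_path, english_path, out_path):
    with open(miluk_path, encoding="utf-8") as m:
        miluk = [line.rstrip("\n") for line in m]
    with open(english_path, encoding="utf-8") as e:
        english = [line.rstrip("\n") for line in e]
    if len(miluk) != len(english):
        raise ValueError("files are not clause-aligned")
    with open(out_path, "w", encoding="utf-8") as out:
        for source, gloss in zip(miluk, english):
            out.write(source + "\n")           # Miluk clause
            out.write("    " + gloss + "\n")   # its free translation, indented

# merge_aligned("miluk.txt", "english.txt", "interlinear.txt")
```

Because the files align by clause rather than by page, pairing line by line sidesteps the pagination mismatch entirely.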
----------------- *Bernhardson, Steven Department of Physical Education, University of Saskatchewan (306) 374-6472 Officially, my background is more on the computer side of things than on the humanities side. I have a B.Sc. High Honors in Computational Science and am working now as a computer resource person; however, my electives were all in English, and I almost have enough for a degree there too. Maybe next year! My interest right now is in using computers a) to teach writing skills to weak/struggling/beginning writers and b) to automate the editing process. I have been tutoring English for about a year now. One of the things that I do is read over student essays and give evaluations. I have developed a set of tools that allow me to quickly generate comment sheets that target a specific student's problems. For instance, typing in "mixed metaphor," "cliches," and "split infinitives" generates the standard spiel about these common problems. Then I can drop in examples from the student's paper to make my comments more relevant. Ideally, this would all be built into the word processing system that the student is writing his or her essay on. Then when they type "Everyone take their seats," a pop-up window would appear, letting the student know that *everyone* is singular. Of course, I am also interested in how this tutoring can be done most effectively. On the editing side of things, I have some crude tools that allow me to check up on stylistic concerns. (I have edited a book as well as some articles.) As I type in the text, I might want to pop up a window showing what the *New York Times* policy is on the word *soviet*. Or I might want to block off a paragraph in a sociology article and make sure all the numbers are treated the same--e.g. 30%, 50 per cent, and eighty-two percent are all different and should be brought to some common standard. I am interested in putting several sourcebooks on line--Chicago Manual of Style, New York Times Style Book, etc.
etc.--and improving my access to relevant sections. ----------------- *Conner, Patrick W. Associate Professor of English, West Virginia University, Morgantown, WV 26506, (304) 293-3100. I am the founder of ANSAXNET, a group of scholars interested in Anglo-Saxon studies and Old English language and literature, with members in the US, Canada (incl. the Old English Dictionary Project at Toronto), the UK, the Netherlands, and Australia. ----------------- *Eisinger, Marc IBM France, 68-76 Quai de la Rapee, 75592 Paris Cedex 12, France I'm a systems engineer in the branch office dealing with universities and public research centers in and around the Paris area. Although most of my customers are "hard science" specialists, more and more are in the humanities: linguistics, sociology, geography, history, and so on. They badly need information on what is being done in foreign countries, and I do think that the kind of tool you propose with HUMANIST meets a need for them. As most of them are on EARN/BITNET, I will suggest that they join HUMANIST. On the other hand, I am myself working on pre-Columbian Mexico, especially on the Aztec language, Nahuatl. For the time being my main project is to implement a lexicographic database to deal with this agglutinative language. I am also working in a team from the CNRS (Centre National de la Recherche Scientifique) on the description of pre-Columbian codices. ----------------- *Falsetti, Julie Hunter College/IELI, 695 Park Ave., New York, NY 10021, (212) 772-4297; P.O. Box 801, New York, NY 10021, (212) 628-3641 I am the Testing Coordinator at the International English Language Institute of Hunter College in New York City. The IELI is a non-credit program that teaches English as a second language. We have about 800 students every eight weeks. At present, we are just beginning to use computer-assisted learning in the classroom. I teach word processing as part of a writing course. We have the use of the college computer lab on a limited basis.
I am interested in what software is available for ESL and how it is used in other programs. For testing, I use SPSS PC to evaluate students' progress on placement and exit tests. ----------------- *Gasque, Tom I am Prof. of English at the Univ. of South Dakota in Vermillion, SD 57069, (605) 677-5229. I am also the new editor of NAMES, a journal devoted to the study of onomastics--personal, place, and literary names. This is only my third week with BITNET, but so far I have received two journal submissions this way. I have written on placenames and personal names and on medieval literature. Next year, beginning in July, will be spent in Germany on exchange. I will continue to edit NAMES and am looking for a way to keep in touch via BITNET. I know that the place I will be is on the system, Univ. of Oldenburg (DOLUNI1), but I don't know anyone's ID there. I look forward to staying in touch with you and others who may share my interest in names and the use of computers in the humanities. ----------------- *Gillison, David Associate Prof., Department of Art, Lehman College, CUNY, Bronx, New York 10468; phone (212) 960-8356. I am an instructor in photography and art and am presently exploring ways of using micro and mini computers for computer-assisted instruction, and also for image enhancement in graphic art and photography. ----------------- *Gorodetsky, Gabriel Russian and East European Research Center, Tel Aviv University, Ramat Aviv, Tel Aviv, Israel, Tel: 03 5459608 (office); 13 Mordechai St., Jerusalem, Israel, Tel: 02 719831 (home). I am a Professor of History at the Department of History at Tel Aviv University and a Senior Research Fellow of the Russian and East European Center of that University. My major interests (in broad outline) are the history of Soviet foreign policy, modern British history, and the history of the Second World War.
Two of my books have been published by Cambridge University Press: The Precarious Truce: Anglo-Soviet Relations, 1924-27 (1976), and Sir Stafford Cripps' Mission to Moscow, 1940-1942 (1984); I have also published various articles in scholarly journals. I am involved with the people developing the Hebrew version of NotaBene, and am a keen user of ASKSAM (Seaside Software), a text-based management system which I use to compile a bibliography of the Second World War and various guides to archival sources relating to the Grand Alliance in the Second World War. At present I am writing a book on Politics and Strategy: The Formation of the Grand Alliance in the Second World War. ----------------- *Harder, Raymond G. Dept. of Computer Science, Azusa Pacific University, POB APU, Azusa, CA 91702, USA; 818-969-3434, ext. 3556; ICAI, MRT, hypermedia, AI, microcomputer text tools, computational linguistics. B.A. Bible; M.A. ANE Languages and Cultures (UCLA); Ph.D. in progress, UCLA ANELC. Diss.: Translation Technique in the Syriac of John (NT). I have written programs for the contextual formatting of RTL languages (e.g. Arabic and Syriac), and for CAI and text study of minor languages (Mac only). I prefer the Macintosh but own an IBM XT clone too. We have an HP-3000 and a MicroVAX II at school. Pascal and HyperTalk programming; extensive Mac font development, bitmap and PostScript. I will be reading a paper at the AIBI/ALLC meetings in Jerusalem in June. I hope to meet you there. I have just finished a microcomputer version of the Syriac NT. I will be transferring the Coptic Nag Hammadi texts to Mac format for D. Parrot at UC Riverside tomorrow. My main interests are in making this material widely available for individual study on microcomputers. My background is mainly humanities - the computers are just to pay the bills; it's nice to combine the two!
----------------- *Harrison, Terri Department of Language, Literature, & Communication, Rensselaer Polytechnic Institute, Troy, NY 12180 USA (518) 276-8261 I co-edit "Comserve", an electronic information and discussion service, available through Bitnet, for individuals in the humanities and social sciences who are interested in human communication. My responsibilities involve maintaining and developing a database of information relevant to the activities of students, faculty, and professionals in communication. The database consists of bibliographies; other research and instructional materials; announcements of conferences, grant opportunities, calls for papers, etc.; and descriptions of graduate programs in communication. Comserve users also have the opportunity to participate in discussion groups relating to various themes within the field. We currently offer 15 discussion groups on topics such as rhetoric, philosophy of communication, ethnomethodology, communication education, mass communication, and intercultural communication. I teach undergrad and grad courses in Communication at RPI, specializing in organizational communication and communication theory. My research has little to do with computing; I am interested in organization as a form of social structure and its implications for communication processes. My particular focus is on the possibilities for, and fate of, strategies for extending democratic practices to the workplace. ----------------- *Hayward, Malcolm I teach in the English Department at Indiana University of Pennsylvania. I use computers quite a bit in my composition courses. I also edit a journal, Studies in the Humanities; most phases of our operations are computerized. I am particularly interested in contacting other journal editors to exchange information. ----------------- *Hibler, David I've been teaching English for over twenty years, specializing for the past twelve in composition and computers.
I am particularly interested in linking with anyone sharing an interest in the areas of composition, grammar, and/or the general impact of machines upon the development of thought and our perceptions of ourselves. ----------------- *Hopkins, John D. American Studies, University of Tampere, Box 607, SF-33101 Tampere, FINLAND; telephone: 011-358-31-156116 office direct [+2 hours GMT]. I am a U.S. citizen resident in Finland for 18 years, and a tenured Senior Lecturer in English Translation at the University of Tampere, Finland. I serve as Coordinator of the American Studies Program and Director of the Office For U.S. Exchange Programs at the University of Tampere, and have been past Chair of the English Division of the Department of Translation Studies. I have also served on the Board of Directors of the Fulbright Commission in Finland for nine years. My computer background includes the directorship of our 1984-87 "CompTrans" project, which established a coordinated microcomputer and mainframe curriculum within the Department of Translation Studies that now serves nearly 100% of our students and staff. In 1984, our 12-unit Kaypro CP/M lab -- in addition to our DEC 2060 mainframe resources -- was the first department-wide Humanities microcomputer facility in Finland, and received considerable attention on the national level. From 1985-87 we added MS-DOS micros, and hosted a Senior Fulbright Professor in Computational Linguistics and Computer Assistance in Translation Work, with whom we expanded our curriculum considerably. A summary of the CompTrans project is available via Bitnet. I was also co-organizer of the First and Second Tampere CAI/CALL Conferences in 1983 and 1985, among the speakers at which were David Wyatt and Graham Davies, known to Scope readers. I have spoken on microcomputer usage in Translation and general Humanities work in Finland, the U.S., and Canada, and have served as a consultant.
I have also organized three international American Studies Conferences in Tampere, in 1983, 1985, and 1987; the most recent of these was the largest American Studies conference in Europe in 1987, with over 300 participants from 17 countries. I have written or edited a number of textbooks and conference proceedings within the field of American Studies, upper-secondary EFL texts, and guide/orientation booklets in the field of educational exchange. The organizational work for these conferences, and for our American Studies Resource Center, Center For American Studies, and Office For U.S. Exchange Programs, has involved considerable computer work, notably in mass mailing, databasing, and printing/publishing. Primary software is WordPerfect 4.2, Reflex, Ventura Publisher, Softcraft Fancy/Laser Fonts, and a host of utilities. I currently use Kaypro and Unisys AT-compatibles connected via modem using Kermit onto the DEC2060 or VAX711, with the Ministry of Education VAX as the Bitnet server. ----------------- *Johnson, Joanna Computing Services Co-ordinator (Humanities), McMaster University; Humanities Computing Centre, CNH-428, (416) 525-9140, ext. 4155, 1280 Main Street West, Hamilton, Ontario, Canada L8S 4L9 I am the contact person for Humanities faculty and staff members regarding any computing questions or problems. If I am unable to answer the question or deal with the problem myself, I try to help the user find a solution within the available resources (including personnel, time, and facilities). I am involved in software development (currently in Computer Aided Instruction in second languages) and in the assessment of new hardware and software packages of potential benefit within the Faculty. ----------------- Trinity College, Department of Religion, 70 Vernon Street, Hartford, CT 06106, (203) 527-3151 ext. 519 I am an Assistant Professor in the Department of Religion, concentrating in Judaic and Islamic Studies.
My particular area of specialization is the history of Jewish Mysticism. My computer work involves Hebrew text editing, and I am currently producing on my microcomputer (an AT clone) a critical edition of a medieval philosophical text that bears on the development of early Kabbalah. Thus, I am particularly interested in Hebrew-based computer applications. I also use e-mail extensively to stay in touch with colleagues and Hebrew-based computer consultants. I am interested in working towards a North American-based database of post-Biblical Jewish texts. ----------------- *Kirkham, Victoria I am in the Dept. of Romance Langs. at Penn and principal investigator for Penn Boccaccio, a project to convert the works of Boccaccio and their Renaissance illustrations to machine-readable form. I am new to computing and particularly interested in learning about digitizing images and interactive text-image programs by getting in touch with others working along similar lines. ----------------- *MacNeil, Heather c/o University of Toronto Archives, (416) 978-5344. My educational background is an M.A. in English and an M.A.S. (Master of Archival Studies). My interest in computing in the humanities is both personal and professional. I want to know the kinds of conversations the humanities community is engaged in, and I want to know, as someone whose occupational focus is ways and means of supporting research in a wide variety of areas, what models are emerging and what questions are being asked. ----------------- *Maynor, Natalie Mississippi State University, MS 39762, USA, (601) 323-8384 (home) or (601) 325-3644 (office). Although I am still a computer novice, I am making rapid progress (i.e., I'm becoming a computer addict). My educational background is in English literature (Ph.D., University of Tennessee, 1978). My research and publications during the past eight or nine years have been in linguistics.
I am Associate Professor of English at Mississippi State University. I have had a computer for about three months and a modem for about three weeks. Because of my fascination with the world of computing, I am now a member of the English Department Computer Use Committee, a committee that was formed last fall to make plans for the computer-assisted writing lab that we hope to have in place within the next six months. As a linguist, I am looking forward to learning enough to be able to use my computer for compiling and analyzing linguistic data. In addition, I hope to be able to exchange data with colleagues via Bitnet. ----------------- *Morgan, Martha Microelectronics and Computer Technology Corporation, 3500 W. Balcones Center Dr., Austin, Texas 78705 USA; telephone (512) 338-3431 During this decade, I have worked in computational lexicography: originally with Siemens' German/English machine translation project, METAL, at the University of Texas, and presently with the natural language project at MCC. My research interests revolve around robust, re-usable, and extensible lexicography for artificial intelligence applications, including knowledge representation. ----------------- *Newman, Judith I teach in education at MSVU. I work with teachers, helping them learn about reading and writing, and we use computers for doing that. Mostly I don't teach about the computer but USE it. The students (all active teachers) write to one another, to me, and to people in places like Calgary, Bloomington (Indiana), and McGill using e-mail. The focus of what we do is reading and writing, but in the process we use the technology to let us do both of those activities in new ways. I have even taught a distance education course with nine teachers in two locations, Charlottetown and Amherst, using Netnorth. I corresponded with the students, both individually and as a group. The most astonishing part of the experience was the quantity of writing both they and I found ourselves doing.
So yes, I'm a humanities teacher: teaching, reviewing software, answering questions, giving advice, and supporting research and teaching using computers, and I would like to have contact with others doing some of the same. ----------------- *Schostak, John Lecturer in Education, School of Education, University of East Anglia, Norwich NR4 7TJ. Tel. 0603-56161, Ext. 2643. Interests. Generally, the impact of computers on schooling; the potential for changing relationships in education, particularly the potential offered by electronic mail. I have also done research on violence and disaffected youth. Recently I've edited a book on information technology and its impact on schooling, its potential for liberating the individual as well as its more sinister potential for controlling the individual. ----------------- *Schuette, Wade 1581 Slaterville Rd., Ithaca, NY 14850, (607) 277-0132 (home); Computing Services, Johnson Graduate School of Management, 102 Malott Hall, Cornell University, Ithaca, NY 14853-4201, (607) 255-9426 (office). AB (Physics), 1968, Cornell U. College of Arts & Sciences; MBA (Quantitative Analysis), 1976, Cornell U.; 1976-present, various analysis/computing positions at Cornell, including Co-Director, Dept. of Institutional Planning and Analysis (1980). Presently Database Administrator/Research Support Specialist at the Johnson Graduate School of Management (Cornell). My wife, Judy S. Ogden (AB (Cornell) English/Psychology '70, MPS/Health & Hospital Service Administration '76, JD '77), teaches "Legal Aspects of Health Care Delivery" in Cornell's Dept. of Human Service Studies and is a Partner with the local law firm True, Walsh, & Miller. Neither of us is officially in a "research" role; however, we are interested in any work related to making large, complex organizations better capable of making rational decisions with a long-term focus.
This includes uses of (and limits of) computing related to making "hard choices" in government, health care, defense, etc.; design of truly useful Information/Command and Control Systems that do a good job of perception-fusion instead of insulating and blinding the people at the top; organizational change and behavioral decision theory; leadership; the role of ethics/theology in enabling an organization with power to build and grow without becoming corrupt; self-organizing systems; cross-fertilization between Computer Science, AI, and Management Science in knowledge representation and the design of robust, reliable control systems that evolve well; qualitative physics and graphical (analog) statistical techniques, including multi-dimensional scaling in non-Euclidean perception space; and definitions of "health" and "quality" that can assist in making social resource allocation decisions and tradeoffs between cost-containment and "quality of health care." ----------------- *Slattery, Susan Yale University, Project Eli, 175 Whitney Ave., Yale Station 2112, New Haven, CT 06520, (203) 432-6680 I am a user support specialist at Project Eli, which is in charge of instructional computing on campus. I am also a poet (MFA from UMass). I help faculty who are working on instructional software with hardware and software problems. ----------------- *Willee, Gerd Institute for Communications Research and Phonetics, University of Bonn, Poppelsdorfer Allee 47, D-5300 Bonn 1, W. Germany; Tel. 0228-73 56 20 I work as a scientific associate in the field of computational linguistics, with stress on morphology (especially of German). I teach students both from computational linguistics and from other domains in the humanities, such as the philologies and linguistics; I give computer courses, mainly in PL/I; and I work as a consultant on the use of computers in the philologies and linguistics. ----------------- *Wolper, Harlan 170C Old North Road, Kingston, RI 02881, (401) 782-8693.
I am currently a 2nd-semester (graduating) senior at the University of Rhode Island. As a Speech Communication major and a Sociology minor, I have a background in the humanities. My special interests have included the use of electronic mail and teleconferencing. Our institution has its own teleconferencing system called Participate. Currently, the system is expanding, and I was part of a grant and a project to experiment with the use of such a system in a variety of disciplines, including the natural sciences and business fields, as well as others. I have studied the effects of electronic "communication" vs. "face to face" communication and have found some interesting results. I think that this discussion group would be of interest and value to me. ----------------- *Zweig, Ron I lecture in contemporary Jewish history at Tel Aviv University. My particular fields are the history of the British Mandate and the reconstruction of Jewish life in the immediate post-WW II years. I edit a scholarly journal on the history of the Zionist movement and the Arab-Israeli conflict. The journal has published an annotated bibliography of articles and books in these fields annually for the last eight years. I have just completed a collated disk version of the bibliography, which includes a software search engine (provided gratis by Seaside Software of Florida, publishers of ASKSAM). The bibliography contains over 3150 items. I would be pleased to make it available (no charge) to anyone with an interest in computerised bibliographies, or to anyone interested in the contemporary Middle East and/or Jewish history. I am keenly interested in the use of databases in managing historical research projects. I use a freetext database called ASKSAM, and would be happy to swap notes on that or any other freetext database.
From: CBS%UK.AC.EARN-RELAY::EARN.UTORONTO::LISTSERV 14-SEP-1989 15:26:57.41 Revised List Processor (1.6a) Subject: File: "BIOGRAFY 11" being sent to you Date: Thu, 14 Sep 89 10:22:53 EDT X-Humanist: Vol. 1 Num. 1083 (1083) Autobiographies of HUMANISTs Tenth Supplement Following are 25 additional entries to the collection of autobiographical statements by members of the HUMANIST discussion group. HUMANISTs on IBM VM/CMS systems will want a copy of Jim Coombs' exec for searching and retrieving biographical entries. It is kept on HUMANIST's file-server; for more information, see the Guide to HUMANIST. Further additions, corrections, and updates are welcome. Willard McCarty Centre for Computing in the Humanities, Univ. of Toronto mccarty@utorepas.bitnet 1 May 1988 ----------------- *Beeler, Stan Department of Comparative Literature, University of Alberta I am a Ph.D. student in Comparative Literature (A.B.D.) at the University of Alberta. My main area of literary research is 17th-century German mystic writings (J.V. Andreae), but much of my time is devoted to computer-related work for the Research Institute for Comparative Literature, my own Department, and University Computing Systems. I teach introductory courses in microcomputers for our Computing Services and work as a consultant for University departments having difficulty using computers for non-English text. This includes desk-top publishing, word-processing, and some database problems. I program in Turbo Pascal and have started to use Turbo C. ----------------- *Bladon, Peter R. I work in Newcastle upon Tyne Polytechnic Computer Unit as a systems programmer. A small part of my job is to help Humanities staff in the Polytechnic to use computers, but I find this difficult 1) because I know so little about both the computing and the humanities sides of the subject, and 2) because there is very little feedback from the relevant users when I send them information.
One reason I do this job is that my degree was in French with German, but I am too out of touch with the subject now. ----------------- *Brasington, Ron Department of Linguistic Science, Faculty of Letters and Social Sciences, University of Reading, Whiteknights, Reading, RG6 2AA, UK. Telephone (0734) 875123 Ext 226 Unit (Macintosh). Research interests mainly in the areas of phonology and morphology; computational activity principally directed towards computer modelling of field linguistic methods and computer implementation of linguistic descriptions (mostly Prolog); most commonly worked on languages from the Romance group. ----------------- *Brewer, Jeutonne P. Dept. of English, Univ. of North Carolina at Greensboro, Greensboro, NC 27412; telephone: (home) 919-334-5263; (home) 919-454-1580 I am a linguist with a particular interest in sociolinguistics and dialectology. I began using computers in my research while in graduate school at Chapel Hill, NC in the 1970s. Microcomputers (or PCs) have been a particular interest since I bought the first pieces in 1979. I have tried to "infect" my colleagues with enthusiasm about how computers can be useful, not a popular thing to do in the early 1980s, but it was fun and challenging. At UNCG I designed and taught the first workshops in word processing for our faculty. ----------------- *Capobianco, Joseph P. My chief interest is in communicating with people - whether in North America or Europe - who have information to share about Italian education. People in the United States, for example, have very little information about the dottorato di ricerca, a new Italian university degree. Similarly, little is known in the States about Italy's newest universities (the one at Potenza, e.g.), and the "master's" degrees being awarded at certain Italian universities. When people holding such degrees arrive here, we're not quite certain where to place them in our degree hierarchy.
It's my hope to meet others through HUMANIST and similar networks who can answer specific questions I have about new degree programs, new universities, and recent developments in Italian secondary education. As for members of HUMANIST not being educators, that poses no problem. Italian industry and government (particularly at the regional level) have been very active in recent years in fostering new educational programs. Again, very little is known about these in the States. Part of the reason that I am eager to communicate with Italian educators is that I strongly suspect that they would be very surprised about some of the assumptions that we are making about their new programs, schools, and degrees. A few words about my background: I have a master's degree in English and work in the Office of the Registrar. I have written a book about Italian education and am often consulted by admissions officers around the United States when they have questions about Italian educational documents. I have also served as a volunteer evaluator of Italian educational credentials for the National Association of Foreign Student Advisors and am occasionally consulted by professional evaluation services. I very much want to update my own knowledge of recent developments in Italian education and hope to produce articles and a revision of my book. ----------------- *Clark, S.R.L. Department of Philosophy, University of Liverpool, P.O.Box 147, Liverpool L69 3BX, United Kingdom. I'm a philosopher, publishing mostly in moral & political philosophy and philosophy of religion. I have no programming experience or interests, and see computers simply as a tool for exchanging thoughts. I am interested in the development of interactive philosophy texts which go some way to meeting Plato's requirements (in the Phaedrus) for a book that would answer back, and allow more routes than one through the text. 
I've only recently learnt (and got the hardware) to log into discussion groups, so I'm seeking a profitable hotline. ----------------- *Craig, Ken I am a Ph.D. student from the United States working in the area of biblical studies... I am attending language classes and seminars and also writing my dissertation, which is on the Book of Jonah, at the Porter Institute at Tel Aviv University. I plan to graduate in May of 1989. Is it possible that HUMANIST could help me as I look for a teaching position at a liberal arts university or seminary in the United States? My areas of study include: Old Testament Literature; theology; language (biblical and modern Hebrew, Greek). ----------------- *Davis, Boyd Telephone numbers: (office) 704-547-4209/2296; (home) 704-536-7629 I am a Professor of English at UNC-Charlotte, where I teach courses in linguistics (Language and Culture, Language Acquisition, History of the Eng. Language), and a variety of writing courses, including technical and professional writing. And I'm writing my sysop tonight to ask him why he hasn't sent the news about Humanist out to those of us in the Humanities who use computers in our teaching and research. I've been showing other faculty how to use BitNet all year. Our computer center is only now getting the budget to offer the kind of outreach that is needed, and is finding a receptive community. I need to get in touch with some of the lexicog groups, esp. in Canada and in Quebec. As a linguist I study language change and the historiography of models for language and linguistics study. Currently I am writing about Saussure, including his insights as they contribute to our understanding of changes in terminology and technical language, and I write on terminology. I am also investigating ways to graphically demonstrate the "mental mappings" of speaking places identified by adolescents.
While I don't program, I work with my own micro (a PC clone, the DataVue) at home for word processing and database work with current software such as PCFile or Reflex, usually, and on campus I use our new Mac lab for technical writing and freshman composition teaching. This makes me especially interested in the interface issues. My avocation is working with the nonverbal handicapped, using computers/synthesizers to expand their communicative and cognitive abilities. Here I pull together my study of Saussure's notions of referentiality and my work in language change. I need this network for my work and hope I have something to contribute to it. ----------------- *Ephraim, Nissan Department of Mathematics & Computer Science, Ben Gurion University of the Negev, P.O. Box 653, Beer-Sheva 84105, Israel; I work on expert systems and nested relations, as applied to morphology and word-formation, the lexicon and machine-dictionaries. In addition, I am developing a project investigating argumentation. I am the editor of the annual "Advances in Computing and the Humanities", and the associate editor for Europe of the "International Journal of Expert Systems: Research & Applications". ----------------- *Gillespie, John My computing interests are: the use of computers for language teaching, including distance learning and the use of databases, and also for text analysis. I am also interested in computer applications in general to my main research field, French literature in the twentieth century, particularly in its relations with theology and philosophy (Existentialism, Sartre, Camus, Michel Tournier, Andre Gide). ----------------- *Little, Greta Anthropology Department, University of South Carolina, Columbia, SC 29208; telephone: (803) 777-7261 [office]; (803) 782-8933 [home] I am a linguist (PhD UNC-Chapel Hill), also with an interest in children's literature. My research interests are diverse; in addition to children's literature, I work with the teaching of English as a foreign language.
However, my primary research interest is in written language, especially punctuation. I use computers in a variety of ways with my work--much of what I do involves fairly large hunks of data which must be sorted and counted. Text analysis is a tool I use for my own research and in my classes. In the fall I will be teaching a course in contrastive texts, which will examine and compare rhetorical/stylistic patterns and reader expectations for different literate cultures with special reference to English. Furthermore, computers and programming languages have employed punctuation in interesting ways and are having an impact on punctuation usage. Consequently, I often find myself studying these conventions. I am about as interdisciplinary as most universities will allow -- I teach three courses each semester with three different ----------------- University of Massachusetts; telephone: 412-256-0675 or 545-2314 / 3453. I am Director of the Five College Foreign Language Resource Center and Professor of French at UMASS. I am currently teaching faculty seminars on the uses of technology in foreign language and literature teaching at Amherst, Smith, Mount Holyoke, and Hampshire Colleges, as well as the University of Massachusetts. ----------------- *Peters, Frank c/o Computer Center, Mississippi State, MS. 39762; telephone: (601) 325-2942 I serve as one of the Bitnet node administrators for our local bitnet node (I was part of the programming team which developed the local code needed to connect our Unisys 1100 to bitnet). I serve as system administrator for the UNIX system available on our mainframe. I answer any and all questions about the above systems (and any other general questions that come my way). ----------------- *Pierce, Richard Holton University of Bergen, Department of Classics (Egyptology), Sydnesplass 9, N-5007 Bergen, NORWAY; telephone: 21 22 86. ----------------- Assist. Prof. of French, Middlebury College. B.A., M.A. SUNY at Binghamton Ph.D. French, Univ. 
of Illinois at Champaign-Urbana. (1987) At Urbana I coordinated Computer Assisted Instruction for the French department on the PLATO system for 2 years. During this time I was writing a dissertation on irony in French literature, using a microcomputer as a tool in classifying data on ironic enunciations. While at St. Lawrence University I developed and taught a course on Computers and the Humanities, participated in the development of a new language center with a view to integrating new technologies in language teaching, and worked closely with our Academic Computing office to ensure the adequacy of the school's word processing equipment for the foreign languages we taught. At Middlebury I have taught a course on Computers and Language, examining topics in computational linguistics and computer assisted instruction using the IBM-PC and the Macintosh. I have experience with a number of ----------------- California State University, Sacramento, CA 95819; phone: 916-278-6333 Two ways of using computers in academia (other than practical word-processing and data base functions) interest me. About one I know little, but I am very curious; about the other I have very definite ideas, but I am not really eager to try them out. To mention first what I do not know: I am a classicist, specializing in ancient science, with several articles on Ptolemy, Plutarch, and the ancient astrologers (most recent: TAPA 1988; Trans. Am. Philos. Assoc. 1988). In Classics, serious work is being done on the computer analysis of texts by the Thesaurus Linguae Graecae and by others at UCLA, Brown, and Harvard. I have read of this work but have not myself taken any part. This non-participation is to some extent because of the lack of specialized equipment at my institution, but more because I do not see what I could do with computer analysis or how I could put it to use.
I have John Abercrombie's text on computer analysis with sample programs, and I believe I see how such analysis can be done, but so far for me, it is a tool looking for a use. I would like to see some sample applications of the methods. How, for example, can computer analysis be used for the compilation of glossaries? There is a real need for specialized glossaries for Greek scientific texts. Even as important a text as Ptolemy's Almagest does not have a glossary, or even an index verborum. Can anyone steer me to a study using these methods? My second (somewhat reluctant) interest: at times, when the flow of Greek and Latin students has become a trickle, I am forced to teach English composition. I must admit that, while there are compensations to teaching English comp. (there can be real intellectual content to the class), I have found it difficult to teach the process of writing. Many students, when faced with an essay assignment, stare at the paper and break out in a sweat: "What do I do next!" My vision of a comp. class is one in which the instructor has a keyboard at his desk, with the CRT projected on the wall. He asks the class for a topic, then he goes through the whole process of writing an essay, from brainstorming the idea to the final revision, all during the class period. The students can see each stage of the process right there in front of them. (A rapid printer would also be good; then they would have hard copies.) I would hope (hope springs eternal; it has to if you teach composition!) that watching, then actually doing this process might make their writing flow. Has anyone done this? Are there labs set up with such equipment? (I know projection screens are available.) I have to confess ignorance here: I do not keep up with the professional literature on teaching composition, so I may be making a suggestion here which is common practice. It is not, however, common in Sacramento. I'd be glad to have a periodical reference.
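[Editor's note: the index verborum asked about above is, at bottom, a table mapping each word form to the passages where it occurs, which is well within reach of a short program. The following sketch is a modern Python illustration, not anything available to the original correspondents; the function name and the Latin sample lines are the editor's assumptions, and a real Greek corpus would also need Unicode normalization and lemmatization.]

```python
# Minimal sketch of an index verborum: map each word form to the
# sorted list of line numbers on which it occurs. A glossary would
# start from the same table, grouping forms under lemmata.
import re
from collections import defaultdict

def index_verborum(lines):
    """Return {word: [line numbers]} for a sequence of text lines."""
    index = defaultdict(set)
    for lineno, line in enumerate(lines, start=1):
        # \w minus digits/underscore: runs of letters, Unicode-aware
        for word in re.findall(r"[^\W\d_]+", line.lower()):
            index[word].add(lineno)
    return {w: sorted(ns) for w, ns in sorted(index.items())}

sample = [
    "arma virumque cano Troiae qui primus ab oris",
    "Italiam fato profugus Laviniaque venit",
    "litora multum ille et terris iactatus et alto",
]
idx = index_verborum(sample)
for word, lines_seen in idx.items():
    print(word, lines_seen)
```

Sorting and counting such a table (e.g. keeping only rare words, or words from a hand-made list of technical terms) is the mechanical half of glossary compilation; the scholarly half, assigning senses, remains manual.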
----------------- *Rockwell, Geoff I am a US citizen residing in Canada. Though born in Philadelphia, I grew up in Rome, Italy. I can speak Italian fluently and French reasonably well. I received a B.A. in philosophy from Haverford College, Haverford, Pennsylvania. (Haverford is a small Quaker college outside Philadelphia that is associated with Bryn Mawr College.) In 1983 I received the Royal Society of Arts Preparatory Certificate in Teaching English as a Foreign Language. Armed with this I taught English and Art at an American high school in Kuwait for two years. I came to Toronto, where I am now enrolled in the doctoral program in philosophy at the University of Toronto. I am interested in the philosophy of education and especially the teaching of philosophy. I am also interested in Greek philosophy (Plato and Aristotle). I am interested in the use of computers in philosophy. I am especially interested in the use of Macintoshes in philosophy and the humanities. I am presently an Apple Research Program Partner. This means that I am supposed to encourage research at U of T using Macs. ----------------- *Scullion, Jim <23SCULLION@CUA> I am a Ph.D. candidate in biblical studies (New Testament) at Catholic University. I am currently writing a dissertation on the Day of Atonement. I would be very interested in joining your electronic discussion group. My interests are in Semitic languages (Hebrew, Aramaic, Syriac) as well as Coptic, Greek, and Latin. I also know some computer languages (Turbo Pascal, BASIC, and Assembly). ----------------- *Spangehl, Stephen D. Director, Interdisciplinary Program in Linguistics, University of Louisville, 332 Strickler Hall, Louisville, Kentucky 40292; My interests include: text analysis of modern and Middle English, particularly stylistic analysis which identifies superficial features indicating underlying stylistic parameters; writing generative grammars of English (esp. ones based on C.
Fillmore's "case grammar" approach) in Prolog, LISP, etc.; teaching technical types (e.g., our computer center people) how to write manuals and documentation in readable, interesting English; ----------------- *Sudduth, Thomas D. 2215 Land Street #149, Laramie, WY 82070, USA; telephone: 307/766-2124 I am an Assistant Coordinator, Conferences and Institutes, School of Extended Studies and Public Service at the University of Wyoming. I am also a Ph.D. candidate in the College of Education with Adult Education as my emphasis of study. I currently serve on the editorial board of an electronic journal, NEW HORIZONS, a Kellogg-funded project at Syracuse University. One of my particular interests is the treatment of adult education in works of fiction such as short stories, novels, films, etc. ----------------- *Van Sickle, John From 1981 to 1985 I worked with MVS WYLBUR and then WATERLOO SCRIPT on the CUNY mainframe systems, mostly word processing, but some programming in SCRIPT for a bibliography of my rare books. From mid 1985, I began learning NOTA BENE and working on personal computers, in order to survive during a sabbatical year in Italy, which I spent happily enough hacking in various computer centers, wherever I could find a spare machine. I have managed to generate a certain amount of interest in NB in Italy.... In the meantime, my own employment of NB continues in bibliographical and word processing pursuits. Recently I have begun to collect information about LATIN TEACHING applications in MSDOS systems and would like to pursue this with the network. ----------------- *Webb, Don Dept. of Foreign Languages, California State University, Sacramento, CA 95819; telephone: (916) 278-5791 or 6652 Computers in the Humanities: Desktop publishing, esp. a textbook, "French-English Translating." Two other textbooks planned: "English-French Translating," "Advanced French Grammar."
I am interested in on-line communications as a means of carrying on long-distance conversations with colleagues about matters of mutual professional interest. ----------------- *Woolley, James Associate Professor of English, Lafayette College, Easton, PA 18042, USA I teach computer applications in writing courses, offer a good deal of informal computing advice to humanists at Lafayette, and am a member of Lafayette's Academic Computing Committee. My primary research interest is scholarly editing; I am a co-editor of the Delaware Edition of Swift's Poems, which is as fully computerized a project as we can make it. ----------------- *Yevics, Philip E. Adjunct Lecturer, Theology, the University of Scranton (PA) and Allentown College of St. Francis deSales. This summer, I will be taking part in an NEH Seminar on Medieval Hagiography, and my own research project will concern Byzantine Hagiography and the liturgical hymns concerning the saints of the Byzantine tradition, which are contained in a series of volumes known as the Menaia or Menologia. I hope to have access to an English translation of these volumes on PC disks (which if necessary I can upload to our VAX). I am interested in suggestions on how computers might assist in the analysis of this literary corpus. If this is something which your members would be interested in tackling, I would look forward to future correspondence. From: HUMANIST Discussion 10-SEP-1988 04:25:24.71 Willard McCarty Subject: 12th biographical supplement (672) Date: Fri, 9 Sep 88 22:35:30 EDT X-Humanist: Vol. 1 Num. 1084 (1084) Humanist Mailing List, Vol. 2, No. 89. Friday, 9 Sep 1988. Autobiographies of HUMANISTs Twelfth Supplement Following are 23 additional entries to the collection of autobiographical statements by members of the HUMANIST discussion group. HUMANISTs on IBM VM/CMS systems will want a copy of Jim Coombs' exec for searching and retrieving biographical entries.
It is kept on HUMANIST's file-server; for more information, see the Guide to HUMANIST. Further additions, corrections, and updates are welcome. Willard McCarty Centre for Computing in the Humanities, Univ. of Toronto mccarty@vm.epas.utoronto.ca OR mccarty@utorepas 9 September 1988 ----------------- *Aasgaard, Joshua "PeaceMan" 625 North Scott, New Orleans, Louisiana 70119; (504) 484-7934; (504) 488-6908 (data 1200 bd) Greetings, fellow humans! I am a Political Science / Philosophy double major, minoring in Computer Science at the University of New Orleans. I began as a CS / Philosophy double major but was incompetent at Calculus II. However, my love of rhetoric and politics made my switch quite easy. My knowledge of computers (software and hardware, mainframes and micros) has increased the amount I can contribute to my fellow students, and partially helped me in landing a new job as Managing Editor of our school newspaper, the Driftwood, which starts in mid-August. Previously, I had been employed as a pollster for the UNO Poll and wrote a liberal/humanist column for the Driftwood. In the polling, tabulation of data was done primarily on our VAX mainframes using SAS. I didn't do any of that work, but it is one way our political science department uses computing in the Humanities. On the paper we use Macintoshes, desktop publishing software, and a laser printer to help us create and lay out the paper. It is then sent to our local daily newspaper to be printed. As the semester develops I will, if you all are interested, keep you informed of any innovations or short-cuts there may be using the Mac and whatever software package(s) we use. My research interests include Artificial Intelligence, Philosophy of Communication, Social and Political Philosophy.
My general interests include developing benevolent alternatives to absolutist theologies that subvert confidence in the power of human reason and compassion, good will toward those with different beliefs or cultures, and the ability to build a more peaceful and prosperous world (spiritually and materially). Basically, my interests are like those of the practical idealists in whose footsteps I follow, to bring about a more just, happier society, by living by that rule so many people who call themselves Christians forget: by loving my fellow human sisters and brothers as I love myself, by caring, giving, and volunteering. In short, my interests are to promote Christian love with a humanist face. Finally, when I first subscribed I thought I was joining a humanism conference rather than one on the humanities. I am pleasantly surprised; I look forward to seeing what, specifically, takes place between us. ----------------- *Barnett, Gerald <8122313@UWAVM> Humanities and Arts Computing Center DW-10 University of Washington Seattle, WA 98195 Birth 23 October 1958 at Walla Walla, Washington Professional Experience: 1988-89 Acting Instructor in the Department of English at the University of Washington. 1984-88 Project Consultant at the Humanities and Arts Computing Center at the University of Washington. I specialized in custom character font development for IBM PCs and dot matrix printers (medieval English, Cyrillic), OCR text scanning, the electronic representation of manuscript texts, and language-learning software. ----------------- *Batke, Peter Humanities Discipline Specialist, Homewood Computing Facilities, The Johns Hopkins University, Charles and 34th Street, Baltimore, MD 21218; phone: (301) 338 8096 BITNET: Academic Background: BA 1968 German - Univ. 
of North Carolina at Chapel Hill; MA 1971 German UNC-CH Thesis: The Parodies of Nestroy; PhD 1979 German Diss: Heimito von Doderer (Modern Austrian Novel); Research Interest: Modern novel, electronic indexing of literature, computer aided instruction, AI, Neural Networks, WordCruncher, OCP. Humanities Computing Background: 1971 Text Processing on IBM Mag Card; 1975-78 PL/1, SPSS at the UNC-CH Computer Center; 1978-79 Text processing with Waterloo Script of dissertation; 1980 Text Processing and office applications on CPT; 1983-1988 Senior Research Associate at Humanities Computing Facility; Duke University; 1988- Humanities Discipline Specialist, The Johns Hopkins University. My present duties include supporting Hopkins humanities faculty and departments in computer applications. I have read papers at IBM/ACIS conferences, at ICCH and at CALICO primarily about CAI, Text Analysis, Non-roman characters, and general issues concerning computers and humanities. ----------------- *Benoit, William L. Assistant Professor Department of Communication University of Missouri Columbia, MO 65211 (314) 882-0545 My research and teaching interests concern symbolic (and, in particular, verbal) social influence. Thus, I am interested in rhetorical theory and criticism. I have written about rhetorical theory, including articles on Aristotle, Isocrates, and Kenneth Burke. I have also written some rhetorical criticism, including studies of President Nixon on Watergate, Kennedy on the Chappaquiddick tragedy, and Tylenol on the poisonings. I also have done some experimental work on persuasion (attitude change) and resistance to persuasion, and on argumentation. I use computers primarily for word processing, but I have done some statistical analysis on our mainframe. I have written some menu programs (with the assistance of, e.g., PC Magazine) but I don't do any programming. I have found BITNET to be useful and fun (and informative). 
I like to know enough about computers to tweak their performance a bit, and to make sure they work easily. So, while I enjoy working with computers quite a bit, that's not my job (they are tools that help me do my job better, faster, easier). I have an IBM-PC at home and an AT-compatible at my office. Both have modems. (I covet the NEC multispeed: a fast laptop with the function keys in the right place!) The software I use most is Wordstar (TM) for word processing and Procomm (TM--developed right here in Columbia, MO!). I occasionally use Grammatik II (TM). My favorite games are INFOCOM games, including the Zork trilogy and several of their murder mystery games (Suspect, Witness). I look forward to seeing what the discussions on the HUMANIST list concern. ----------------- *Crosby, Connie or MA student, Department of English Language and Literature, University of Guelph, N1G 2W1 or 631 Glen Moor Cres., Burlington, Ontario, L7N 2Z8, (416) 632-6337. I am a graduate student with the English Department here at U. of Guelph currently working on my thesis. My thesis comes under the wonderfully general title of "computer applications to the humanities"--I am attempting to write a guide for undergraduate humanities students on computer use. I am also a frequent user of computer conferencing here on campus. I enjoy both CoSy and TCoSy use, and like to support the use of these whenever possible. I first came to computer use on campus through using TCoSy, and have since been able to help other students, both formally and informally, learn to use the system. I moderate several conferences on TCoSy, and am frequently creating new ones. Other than computers, I have a strong interest in literature and in communication in general. I am hoping to learn much from the people on HUMANIST, and contribute whenever I am able. ----------------- *Ellis, Richard William Principal Analyst; Network Services, Bloomington Academic Computing Services, Indiana University, 750 N. State Rd. 
46, Bloomington, IN 47405; (812) 335-4240 My educational background lies in psychobiology--animal learning theory and ethology. However, my recent professional experience and interest has been in planning for academic information systems, library automation, central and distributed computing, and telecommunications. Illustrations follow. I have developed and coordinated execution of plans for academic information systems for several years, including integrated faculty and student online information environments and videotex development projects. I am currently a member of the university's NOTIS library automation project leaders team and a member of Indiana University's Online Database Forum. I recently served as a panelist for the Indiana Cooperative Library Services Authority briefing on new information systems and have been asked to convene a joint libraries and computer centers planning retreat. Last year I participated in academic computing task forces addressing instructional computing and central systems architecture. I also prepared a network plan for academic computing. I currently chair a committee that is developing a five-year plan for academic computing, and I am also working on a plan for network-workstation integration. I took the lead in establishing a Bitnet connection for Indiana University/Bloomington in 1985, and in provision of DECnet routers, gateways, and dialout service. In 1987, I served on the Telecommunications System Technical Evaluation Team that evaluated and recommended a new campus telecommunications system. ----------------- *Eveland, John F. Graduate Student, English -- Teaching Assistant, English Home: 47-A Schilletter Village, Ames, Iowa 50010 (515) 296-8595 I am 47 years old and working on a Masters in English, specifically in the areas of Business and Technical Writing, and Composition and Rhetoric. At the present time I teach three classes of Freshman Composition a year. 
As to a 'professional' career in Humanities, I am afraid I'm lacking in that category. I received a B.A. in English in May 1988, and I have been busy ever since with my graduate studies. Before going back to school for my English degree, my wife and I were in the ministry. We traveled extensively in the United States for about six years and enjoyed both the people and the sights. We did not 'preach', but we provided churches and congregations with information about literature, both clerical and secular, that they could use in their ministries. My wife and I felt that we could be of more help to people if we got a better education, and so in 1985 we enrolled in college. My current interests include computer-aided education, especially as it relates to teaching composition and other forms of writing. I am also interested in the effects that computers have on people, i.e., how they influence their job choices, their interests in the arts, and their acceptance of computers in their lives. I feel that computers may be the best tool available at this time to help educate the people of the world, and although education may not solve the world's problems, it certainly is not going to make them worse. Computers are the simplest way to educate people without interference from prejudice or bigotry; after all, a monitor does not know if the operator is white, black, male, female, Christian, Moslem or Jew. Enough soapbox for now. I hope to continue my education through the Doctorate level and then teach writing. I am not sure if I want to teach in a university or in a high school. ----------------- *Fowler, Don I am the Fellow and Tutor in Classics at Jesus College, Oxford, and University Lecturer in Greek and Latin Literature. My main research interests are in Latin poetry, especially Lucretius, Vergil, and Ovid, and Hellenistic Philosophy, especially Epicurus.
I first used computers as a graduate working on a commentary on Lucretius Book Two, using the old Oxford ICL 1906A to do simple concording, metrical studies, etc. After a while out of the field, I have recently got involved again with the Oxford Text Searching Project, which aims to make a friendly front-end to OCP available for undergraduate users on the Oxford network. I also use Ibycus quite often and get IBYNEWS. I'd be particularly interested to hear about lexical research, especially on hapax legomena (!). ----------------- *Fox, John J. Since 1964, professor of history, Salem State College, Salem, MA 01970; phone# (508) 741-6000 x 2369. Other interest is U.S. constitutional history. Research in period 1763 - 1824. Just published a chapter on MA and the creation of the federal union in "The Constitution and the States" (published by Madison House). Teach both under and grad in this area. Also teach honors world civ for freshmen. This is a thematic course. Theme first semester is rise of religious thought and institutions, second semester, rise of political institutions. I was a late bloomer, not going on to college until after service in the Korean War. Stationed in Berlin, Germany, 1953 - 1954. Went to North Adams State (Mass.). Graduate work at Lehigh Univ. (ABD), but gained full professor in 1980. Also involved in community history. Work with museums in region. Teach a very successful summer institute on local history. Theme is use of local resources in teaching national history. Just received a $74,000+ grant from the Federal Bicentennial Commission to run 1989 Summer Institute on Massachusetts and the Constitution. ----------------- *Fritz, Paul Associate Prof; Dept of Communication; Univ. of Toledo 2801 W. Bancroft St. Toledo, Ohio 43606 (419) 537-2006 Professional Activities: Director of Intro Course ----------------- I am an undergraduate in Philosophy at the University of New Orleans.
I have a fair amount of programming experience and am currently employed as a computer programmer, creating hardware drivers and graphics programs for use in psychological testing at UNO. ----------------- *Harbin, Duane G. Systems & Planning Manager, Yale Divinity School Library, 409 Prospect Street New Haven, CT 06510; 203 432-5296 Primary responsibility for technical support of services and research at the Divinity School Library. Particular interest in the field of Worship & Liturgy in Christianity, especially the Anglican Communion. Professional interest in Biblical Studies, Christian Theology, and Theological Education, especially bibliographic issues and the use of new technologies for scholarly research and education. Undergraduate work in Linguistics and Computer Studies (B.A. Northwestern University, 1977). Concentrations in Spanish, Portuguese and French, and in Computer Assisted Instruction (CAI). Mainframe Experience: IBM VM/CMS and MVS. Microcomputer Experience: PC/MS DOS environment. Pascal and C programming (usually utilizing Turbo compilers). IBM Token Ring LAN under IBM LAN Support Software and the IBM LAN Program. Implementing IBYCUS microcomputer. M.Div. Yale Divinity School, 1981. MLS, Southern Connecticut State University, 1988 (pending). American Theological Library Association: Member since 1982. First convener of Microcomputer Users' Group. American Library Association: Member since 1986. ----------------- *Hendrick, Philip Dr., Department of European Studies and Modern Languages, Magee College, University of Ulster Londonderry, Northern Ireland. I have been a lecturer in French at the University of Ulster since 1972. I obtained my doctorate at the University of Pennsylvania, graduating in 1974. Since the early '80s we have radically redesigned our foreign language courses, and I was chairman of the course planning committee for a course called International Business Communication which is run on the Magee campus of UU.
I am now course director for IBC, a course that combines the study of two foreign languages, computing and secretarial skills. Designing and running this course has brought me to take an active interest in computing in the Humanities. Since February of this year I have been academic co-ordinator for the Computers in Teaching Initiative in the Humanities faculty of UU. Thanks to a substantial grant from the Computer Board we have been able to establish a small micro lab in the Faculty, and further labs are being installed. We are moving from BBCs to Apple Macs. Part of the CTI project involves developing CALL software, and we have three main strands: a multiple choice programme designed originally for helping students learn the intricacies of Irish grammar; a text analysis programme, designed for making the best use of textual analysis in the context of foreign language learning; and, closely linked to the second, a third, as yet unfinished programme for glossary work. In addition to the above activities I am also campus director for the Humanities faculty on the Magee campus, coordinator of the ERASMUS exchange scheme, and involved in arranging suitable work placements in the North-West of Ireland. ----------------- *Ingerman, Bret Microcomputer Consultant, Syracuse University, Academic Computing Services, 215 Machinery Hall, Syracuse, NY 13244-1260; Phone: (315) 443-1865 I have a B.S. in Psychology (Syracuse University, 1985) as well as an M.S. in Behavioral Neuroscience/Experimental Psychology (Syracuse University, 1987). Most of my research concerned non-invasive ways of reducing stress in borderline and hypertensive strains of rats. I was always intrigued by computers, and did a great deal of computer programming and interfacing computers to laboratory equipment in the lab. However, I decided that my first love was teaching, and thus I sought a career that would allow me to devote most of my time to that endeavor.
That is how I arrived at my current position: Microcomputer Consultant. I work in a division of academic computing. Aside from my consulting duties, I have a few areas of special interest. ----------------- *Jennings, Michael W. Associate Professor, Department of Germanic Languages and Literatures, 230 East Pyne Building, Princeton University, Princeton, NJ 08540 (609) 452-4141 Michael W. Jennings specializes in the European, and especially German, literature of the twentieth century and in literary theory. His book, Dialectical Images: Walter Benjamin's Theory of Literary Criticism, was published by Cornell University Press in 1987. He has written on Robert Musil, Friedrich Hoelderlin, the drama of the Sturm und Drang, and the literature of the Weimar Republic. Currently engaged in a study of the relationship between politics and the novel in the Weimar period, he is conducting research on such novelists as Doeblin, Heinrich and Thomas Mann, Musil, Broch, Roth, Seghers, and Graf. He is the General Editor of Walter Benjamin's Collected Writings, forthcoming from Harvard University Press, and, with Dorothea Dietrich, of a collection of essays on culture and politics in the Weimar era. Jennings has focused his work with computers on their applicability to the teaching of language. He has overseen a project which integrated use of a CAI program (CALIS) into the first four semesters of the German language sequence at Princeton. At present he is part of a team working with IBM's InfoWindow system which seeks to develop interactive videodisc applications for intermediate German and French. In the coming year he will begin work on a Macintosh-based project which will explore new ways to use the computer in the teaching of writing. ----------------- *Malakoff, Marguerite I am a doctoral student in psychology at Yale University. I am currently working on my dissertation in the area of bilingualism (under Dr. Kenji Hakuta).
Specifically, I am studying the nature of Natural Translation in bilingual children. I will be conducting the study in an international school in Geneva, Switzerland. I would appreciate it if you would send me the necessary information to join the HUMANIST network. ----------------- *Novak, Rich <2631002@RUTVM1> Graduate School of Education ETPA Department Rutgers University 10 Seminary Place New Brunswick, NJ 08903; 201-932-7855; 201-249-4448 In addition to my work in adult religious education, I am a doctoral student in adult education at Rutgers University. I am also a Teaching Assistant there and serve on the editorial board of New Horizons in Adult Education, an electronic journal originating from Syracuse University and the Kellogg Foundation Project. My interests in terms of computing include how to bridge the gap between the techies and the pure academicians, culling the best of what the technology has to offer as a tool. ----------------- *Rickard, Wendy Publications Coordinator EDUCOM P.O. Box 364 777 Alexander Road Princeton, NJ 08540 609 520-3367 A graduate of SUNY Binghamton in upstate New York with degrees in English Literature and Economics, I currently serve as Publications Coordinator for EDUCOM, a consortium of over 550 colleges and universities dedicated to the advancement of technology in higher education. In addition to that position, I also serve as Assistant Editor of the EDUCOM Bulletin, a quarterly magazine, as well as Editor of CCNEWS, an electronic forum for campus computing newsletter editors on BITNET consisting of a weekly newsletter and an articles database. Previously I have held positions in marketing, advertising and promotion, often with technology companies, and have produced numerous publications, brochures and flyers via desktop publishing. Throughout my career I have also handled several freelance writing assignments in a variety of fields.
Having graduated from college fairly computer-illiterate, I am delighted to be among a growing number of "humanists" who have also developed a passion for the potentials of technology in their respective fields. My specific interests within computing include electronic publishing, the development of policy (particularly with regard to copyrights), desktop publishing, and instructional software. ----------------- *Roch, Michael Teaching Officer, Oxford University Computing Teaching Centre, 59 George St., Oxford, OX1 2BH. I am joining Humanist on behalf of my department. The Computing Teaching Centre offers courses on a wide range of computing topics to individuals and departments of the University. The centre is independent of the main computer centre and is located in the city centre, away from the science area of the campus but within easy reach of most colleges. The aim has been to make the CTC more accessible to humanities students. The main teaching equipment comprises 58 Research Machines AX micros (IBM AT compatibles), connected by a ZNET network to two RM VX (80386) file and print servers. A variety of MS-DOS applications and languages are offered. In addition we have five Torch Triple X micros running Unix and arranged as a "thin wire Ethernet". The CTC has its own TV studio where videos are produced to support teaching or on behalf of colleges, departments or external organisations. The CTC also hosts an "Open Learning Centre" equipped with 20 IBM ATs connected by PC-NET, where individuals may work through tutorial packages, experiment with software, etc.
----------------- *Ryle, Martin Professor of History, University of Richmond; Office: University of Richmond, Virginia 23173 USA Home: 216 College Road Richmond, Virginia 23229 USA; telephone: Office: (804) 289-8340 Home: (804) 282-4761 Educational background: BA, Furman University, 1960; PhD, Emory University, 1967. Courses currently being taught: History of the Soviet Union, History of Soviet Foreign Policy, History of Socialism and Communism, Western Civilization from the Greeks to the Present, Colloquium on Nuclear Weapons, Honors Seminar on Historiography. Research interests: historically-based game simulations; Comintern front organizations, particularly International Red Aid; the Russian Revolution. Work in progress: an educational game simulation based on Stalin's rise to power. Other activities: lectures on Soviet policies and life to community groups; creative cuisine; gaming, particularly Scrabble, bridge, and chess; tailgate trombone in a dixieland band, The Academy of St. Boatwright on the Lake (ASBOL). ----------------- I am a librarian with a strong interest in computer applications for the humanities. I have a background in anthropology with an emphasis in comparative religion, philosophy, and law. ----------------- *Ward, James H. Coordinator, Users Services Group, Office of Information Systems, University of Puerto Rico, Box 4984-Gl, San Juan, PR 00936; (809) 758-7740 extension 5442 I received a doctorate in contemporary Spanish American literature from Tulane University in 1967 but, after moving to the Humanities Department on the Mayaguez campus of the University of Puerto Rico in 1972, my teaching duties have centered on a basic humanities course (all of Western Civilization in two semesters) and a two-semester course on Latin American cultures and civilizations. My serious interest in and dedication to computers began in 1979 when I convinced my wife she needed a computer at home to facilitate writing her Ph.D. dissertation.
Consequently, my dexterous sixteen-year-old son and I put together a Heath H89. Since the Mayaguez campus is traditionally associated with science and engineering, I had the opportunity to expand horizons beyond a personal computer. This month (August 1988) I have transferred from the Mayaguez Campus to the Office of Systems Information of the Central Administration. The President of the University and the Director of the Office were particularly anxious to employ a non-specialist (and preferably a humanist) generally knowledgeable about computer applications to organize and coordinate a Users Service Group. Consequently my immediate interest (and the area in which any advice or information will be most welcome) is User Services with emphasis on non-traditional academic users (mostly humanists). I am also deeply involved in a second project. Before my transfer to the Central Administration, I was designated Conference Manager of the First Inter-American Congress on the Philosophy of Technology, to be held on the Mayaguez campus from October 5 to 8, 1988. Calls for papers and other information have been placed in pertinent journals of philosophy; however, if anyone wishes more information, I will be happy to furnish it by MAIL. Other areas of involvement have been editing (mostly technical reports), translating (Spanish-English), and the creation of a bilingual audio-visual program, "Getting to Know Puerto Rico/Conociendo a Puerto Rico." From: HUMANIST Discussion 12-OCT-1988 12:44:30.03 Willard McCarty Subject: Biographical supplement (542) Date: Tue, 11 Oct 88 23:23:42 EDT X-Humanist: Vol. 1 Num. 1085 (1085) Humanist Mailing List, Vol. 2, No. 224. Tuesday, 11 Oct 1988. Autobiographies of HUMANISTs Thirteenth Supplement Following are 23 additional entries to the collection of autobiographical statements by members of the HUMANIST discussion group. HUMANISTs on IBM VM/CMS systems will want a copy of Jim Coombs' exec for searching and retrieving biographical entries.
It is kept on HUMANIST's file-server; for more information, see the Guide to HUMANIST. Further additions, corrections, and updates are welcome. Willard McCarty Centre for Computing in the Humanities, Univ. of Toronto mccarty@vm.epas.utoronto.ca OR mccarty@utorepas 11 October 1988 ----------------- *Ahrens, John Assistant Professor, Department of Philosophy, University of Hartford, West Hartford, CT 06117; (H) 203-247-9560 (O) 203-243-4745 I have just returned to teaching after spending the past four years as the Assistant Director of the Social Philosophy and Policy Center at Bowling Green State University. During this time I was the Managing Editor of the Center's publications, including a journal entitled Social Philosophy & Policy. My stint as an editor instilled in me a fervent loathing of bad writing and a desire to eradicate it wherever it may be. My research interests are primarily in the areas of ethics and political philosophy. I have a special interest in environmental issues and in the foundations of classical liberalism. When the attraction of traditional philosophical issues pales but I still have the desire to work, I turn my attention to popular culture, especially the literature and films of science fiction. I teach classes in these areas (including even science fiction), and in elementary logic. Beyond such academic pursuits, I am an avid student of computers and of the martial arts. ----------------- *Brink, Daniel T. Assoc. Prof., English, Arizona State University, Tempe, AZ 85287-0302 (602) 965-6795. My academic training was in Germanic philology, with an emphasis on the West Germanic tongues, Old English, Old Saxon, and the like. I minored in linguistics, and wrote a generative phonology of Dutch for my dissertation (U of Wisconsin).
My computer interests awoke with the advent of the microcomputer, largely as a result--probably common to many humanities computing types--of my frustration in the early eighties with trying to get foreign characters into WordStar. I finally succeeded, but a sabbatical was lost in the process and my interests took a decidedly new direction. I am now Manager of a new facility on my campus, Technology Assessment and Development, which has the charter of identifying and evaluating new products which have the potential of impacting the university in a major way. Talk about fun... I can still recite the entire corpus of Old Low Franconian literature by heart, however, so I haven't gone completely bad: Hebban olla vogala nestas bigunnan hinasi thu ende ik. Wat onbidan we nu? ----------------- *Hanly, Ken Brandon University, Brandon Mb. Canada, R7A 6A9 I am an associate professor of philosophy at Brandon University. I have been involved for some time in socialist politics as well as community involvement; e.g., I was president of a co-operative housing project, of a direct charge buyer's co-op, etc. I am interested in humanist religion (I was a member of the local Unitarian fellowship for some time) as well as discussion of contemporary social issues. I am also interested in Marxist analyses of social problems. As a hobby I read and write poetry and have served as editor of a poetry magazine and have edited a series of chapbooks. ----------------- *Horton, Thomas B. Dept. of Computer Science, Florida Atlantic Univ., Boca Raton, FL 33431; (407) 393-2674 I completed my PhD in Computer Science at the Univ. of Edinburgh (Scotland) in June 1987, and started as an assistant professor here at FAU the following autumn. My PhD research examined statistical authorship studies and focused on the question of collaboration between Shakespeare and Fletcher in "Henry VIII" and "The Two Noble Kinsmen."
In this study I used distribution-free discriminant analysis techniques to analyze the rate of occurrence of function words, with successful results: 94.8% of 459 scenes of known authorship (containing at least 500 words) were assigned to the correct author. The procedure indicates that the two disputed plays are collaborations and generally supports the usual division (but not always). Current research interests include the analysis of function word rates and characterization in the plays of Shakespeare and Fletcher. I am also very interested in computer processing of old-spelling texts, especially Jacobean drama and Middle English manuscripts. Here at FAU I teach core Computer Science courses such as Data Structures and Analysis of Algorithms. I generally program in Pascal and C on UNIX systems and am busy converting my colleagues to the TeX document processing system. At Edinburgh I helped set up a local "Computers in the Humanities" special interest group. FAU does not have a strong community of computer users in the humanities, but I have recently made contact with the College of Humanities, which is interested in joint supervision of graduate students, seminars and possibly a course in literary computing. I'd be very interested in hearing from anyone interested in research in authorship studies, literary statistics, characterization studies or Middle English texts. ----------------- *Janus, Louis E. Norwegian Department, St. Olaf College, Northfield, MN 55057; (507) 663-3486 (work); (612) 822-1015 (home). I teach Norwegian and Linguistics at St. Olaf College. In the past, I taught a beginning-level course in computer use for Humanists (an interim which focused on several UNIX tools). Now I am involved in several Mac projects, most specifically with sound digitizing in foreign language instruction. I have a number of Old Norse sagas in machine-readable format, which I soon will be announcing to the general public.
I was at the meeting in Columbia, SC, where we discussed establishing HUMANIST, but somehow did not end up on the e-mail list. ----------------- *Keith, Philip M. English Department, St. Cloud State University, St. Cloud, MN 56301; 612-255-3189 I have been working on and off with computers in literature and the teaching of writing for the better part of a decade. I have been working with word-processing and interactive programs for teaching and improving writing. I have also been working on some text-analyzer projects on metrics and prose structure. ----------------- *Lipson, Elizabeth Academic Computing Publications Editor, Emory University Computing Center, Uppergate House, Atlanta, Georgia 30322 404/727-7651 I write computing publications and a newsletter for our user community at Emory. My past activities have been in the area of computer training: instructor-led, classroom/individual, computer-based training, and self-paced materials. I have worked in computing for about five years. Before that, I worked in research/writing in the field of historic preservation. I have a B.A. in history from Georgia State University. ----------------- *MacWhinney, Brian Professor of Psychology, Department of Psychology, Carnegie Mellon University, Pittsburgh PA 15213; 412 268-3793 I am a psychologist and not a bona fide humanist. Much of my work is clearly psychological in nature. This is the work I do in simulating the acquisition of language with computers and in conducting experiments to provide data for these simulations. However, a second line of my research comes closer to touching on issues in the Humanities. This is my work as director of the Child Language Data Exchange System. With the support of various public and private national foundations, Catherine Snow of Harvard University and I have worked with many researchers in the field of child language acquisition to establish a large (over 100 megabytes) database of adult-child conversational interaction data.
Most of these data are from English. However, a growing proportion of the database is from other languages as well. The CHILDES system has not only developed a large database but has also specified a standard transcription system for conversational analysis. In addition, we have developed a set of C programs for the analysis of conversational data. These programs are unlike concordance programs in many ways and focus on Boolean searches, cross-tier searches, and numerical outputs. Membership in the CHILDES system and access to the data and programs is open to the research community. ----------------- *Makivirta, Joni Mikael Minna Canthin katu 22 A 5, 40100 Jyvaskyla, Finland 1. Studies I am a student of history. During this year I should graduate. My main subject is general history. I have also studied the history of Finland, economic history, social sciences (mostly politics), and educational sciences. I probably will still study speech communication as an extra subject. This year I should do my pro gradu work. I will most probably do a study on T. H. Green, an English politician of the 19th century (the Victorian period). Green was an interesting man because he respected Christian values in life. He thought that liberalism should also lead to a qualitative improvement in human beings. Improvement meant that people learned to act and work morally, in the Christian sense of the word. By learning these values people could learn to think also of the common good, not only of themselves. This kind of thinking gave some ideas to socialist thinkers as well, although they, of course, left out the word Christian. Whether Mr. Green was a true Christian, I do not know. But at least he sympathised with Christian values. 2. Life Well, that was about my studies. Now, my story: I have lived all of my life in Finland (which is in Europe, in Scandinavia, beside the USSR), and I am still living in my home city, Jyvaskyla. Jyvaskyla is in central Finland. There are lots of lakes and woods here, and nature is very close.
There are some 70,000 citizens in Jyvaskyla. I graduated from high school in 1986. After that I decided to study history at the local university. So here I am. I am a Christian, but I can sympathise with many kinds of people and understand many kinds of thoughts, although I would not apply them in my own life. Maybe this is the reason for my interest in T. H. Green? 3. Expectations I hope I could find, via this list, someone who is interested in the Victorian period. I also hope I could learn to use some databases, and tell about them in the Department of History. I have founded the list HISTORY at FINHUTC; if anyone is interested in that, you are welcome to join in. I wonder if HUMANIST and HISTORY could do some co-operation? ----------------- *McDaid, John G. Computer Coordinator, NYU Expository Writing Program, 269 Mercer St. Rm. 219 New York, NY 10003 (212) 998-8862 For the last five years, I've been teaching expository writing at NYU while working on my doctorate in media ecology. My dissertation is on the process of hypermedia composition, and I've been working at integrating hypertext (StorySpace, HyperCard) into our freshman comp sequence. We have two dedicated classrooms with Macintoshes running 16 sections each semester. I see computers from a McLuhanist perspective, and as a tool for decentering authority in the classroom. I'm interested in networks, computer conferencing, collaborative work-group support, and Xanadu. ----------------- *Meadows, Dennis Lynn Director, Institute for Policy and Social Science Research; Professor of Business Systems and Policy, University of New Hampshire, Durham, NH 03824, USA; 603/862-2186 Telex: 493-0372 RPC UI I have been on the faculties of MIT and Dartmouth; I just joined the UNH faculty this fall to define, create, and run a new institute that will support research interests of faculty across the campus.
My research interests lie in the creation of sophisticated simulation models and educational games that show the interaction of psychological, economic, social, political, and technical factors. My games and books on this subject have been translated into over 30 languages. I also have an extremely strong and active interest in Eastern Europe. I have lived in Budapest and visited the USSR over 45 times. I will take a Fulbright in Moscow next spring. I serve as US Director of the Soviet- American Environmental Education Project. ----------------- *Meath, Terrence W. User Services Specialist, Health Sciences Computing Services, University of Minnesota, BOX 720 UMHC, 5-235 Malcom Moos Tower, 515 Delaware St. S.E., Minneapolis, MN 55455; (612) 625-7175 In requesting to join your list, I am representing several hundred faculty, staff and students in the Health Sciences at the UofM, and not just myself. We have taken a different approach to network based information sharing. Rather than require each of our users to subscribe to the lists that they think might be useful to them (and then unsubscribe if necessary), we have created a small group of public mailboxes (mailboxes on our system that all of our users can access), which we subscribe to the various list servers. Our public mailboxes are organized around general topics such as statistics, computing, health, etc. and each contain the traffic of a dozen or so lists. The mailboxes have the ability to show all of the traffic together, or to sort on any one list. The biggest advantage is that we involve many more people in network communications than would be the case if they had to learn about, evaluate and manage the subscriptions by themselves. Further, this type of involvement eliminates a great deal of redundant network traffic. Our users are still free to subscribe to a list on their own, but our public mailbox technique has worked out so well that few people have chosen to do that. 
----------------- *Moulthrop, Stuart Assistant Professor of English; P.O. Box 7355 Yale Station, New Haven, CT 06520; 203-436-3023 My primary interest is in theory and history of narrative, especially late-modern and "postmodern" developments. My dissertation (Yale, 1986) was an investigation of problematic closure in Sterne, Dickens, and Charlotte Bronte. I'm currently working on a book concerning the status of fictional language in recent American fiction (Thomas Pynchon, Toni Morrison, Maxine Hong Kingston). Parallel (somewhat) to all of that, I also take an active interest in hypertext, particularly its literary applications. I've written a hypertextual treatment of Borges's "Garden of Forking Paths" in Storyspace and am presently involved in a couple of other hypertext projects, including speculations about an electronic literary review. ----------------- *Muller, R. Andrew Associate Professor, Department of Economics, McMaster University, Hamilton, Ontario; (416) 525-9140 x3831 I am an academic economist with professional interests in public policy, especially environmental policy, housing and industrial organization. I have published econometric studies of the pulp and paper industry's adjustment to pollution control and energy price changes, analyses of the Toronto housing market and rent control, and analyses of Canadian water export policy (with particular reference to the Grand Canal Scheme to export water from James Bay to the United States). I have also worked on simulating the effects of the proposed Canada-US free trade agreement. I have a long standing interest in politics and political philosophy. I have often been disturbed by the extent to which my discipline, Economics, tends to coarsen the public sensibilities of those who study it. I have presently (Fall, 1988) a specific interest in the nature of liberal education in to-day's universities. 
I chair a committee in the Faculty of Social Sciences that has been struck to investigate possible changes in our undergraduate degree programme. Discussion has focussed on the nature of liberal education in the 1990s. I am trying to reconcile two views on this: an "applied social science" view which would "fix up" current degree programs by adding courses in computer techniques, report writing, etc., and a more traditional liberal education view which would impose distributional requirements and create special courses to ensure all students received some exposure to science, literature, philosophy, and the history of thought. ----------------- *Neu, Joyce Dept. of Speech Communication, 207 Sparks, Penn State University, University Park, PA 16802, (814) 865-7365 I am an Assistant Professor of Speech Communication at Penn State University with a degree in Linguistics. I have used microcomputers in teaching writing to both natives and non-natives since 1983 and have been instrumental in developing computer labs at Santa Monica College and at the U. of California, Irvine for use by humanities faculty and students. ----------------- *Oleske, William F. Technical Assistant, Central Language Laboratory, The University, Leeds LS2 9JT, West Yorkshire, England, U.K.; Tel: Leeds (0532) 332647 I am charged with the introduction, maintenance and development of technical support for language acquisition and foreign language text processing, and with support in the audio, video, and satellite broadcast reception areas. I am also concerned with the development of the use of Computer Assisted Language Learning software and with non-Roman alphabet wordprocessing as a communication and teaching tool. In my post, I am the technical support for the numerous autonomous foreign language teaching departments within the University. I also maintain contacts with my equivalent colleagues throughout the U.K. and Eire. ----------------- *Owen, David W. D.
Dept of Philosophy, 213 Social Sciences Bldg, University of Arizona, Tucson, AZ 85718 Main computing interests: display and printing of alternate character sets, especially on laser printers; text searching; computer conferencing. ----------------- *Rhine, Michael L. Computer Operations Supervisor, Georgia Tech Research Institute, (404) 421-7694; (404) 894-7171 Background in music and performing arts (22 yrs exp) in r+r, r+b, classical and jazz (classic guitar/recorder in chamber music ensembles, bebop and standards, etc...). Avid reader: eclectic interests. 4.5 years USMC, 3rdBAT 5thMAR, 1st MARDIV as enlisted and officer (0301/2). Currently working in the computer field. No previous background. All learning OJT. Studying Mathematics w/concentration in CompSci. ----------------- *Richman, David Theater and Dance, University of New Hampshire I am Assistant Professor of Theater at the University of New Hampshire. My principal interest is in theatrical production. For eleven years I was Director of Theater at the University of Rochester and Artistic Director of the University of Rochester Summer Theater. I have directed twenty-two productions, including "King Lear," "A Midsummer Night's Dream," and "The Misanthrope." I have published several articles on connections between theatrical practice and critical inquiry. Though I do not use computers in my work, I am serious about the computer as a tool and a source of information. Being blind, I find that the computer has become one of my most important sources of information, and moreover a vital connection to the academic community. Arguably, the coming into being of computers with speech is the greatest single advance for the blind since the invention of Braille. ----------------- *Rutherford, John RUTHERFORD@CTSTATEU Academic Computing Services Coordinator / Central Conn.
State University Elihu Burritt Library / New Britain, CT 06050 / 203 827-7800 I am a librarian by profession and have become interested in academic computing subjects through work with online bibliographic services. Prior to my position at Central Connecticut I was an assistant manager of information services at the University of Petroleum & minerals in Dhahran, Saudi Arabia, and was involved in the implementation of the DOBIS/LIBIS automated library information system. I am interested in text recognition systems and have been a third party follower of several HUMANIST discussions in this area and look forward to becoming more involved in HUMANIST discussions. ----------------- *Schriner, Ken Research Assistant, System Analyst, Arkansas Research and Training Center in Vocational Rehabilatation, University of Arkansas, 346 N. West Ave., Fayetteville, AR 72701; (501) 575- 3656 I am currently employed as a computer jock by ARTCVR in support of faculty research. This recent development ended a 5 year stint with the Computing Services department at this campus. While there, some of my support of faculty was in the area of humanities. Including several statistical studies of slave holdings in the state prior to the Civil War. I find the application of computers to fields like humanties quite interesting, since it is usely so different from the application in fields like engineering. I have also worked 2 years for Exxon. My role was computer support of engineers. We did large amounts of oil reservoir simulation computing. Computers used for simulation are another area of interest. I was bit by that bug during my three years of employment by the US Geological Survey as a Hydrologist. Simulation there consisted of stochastic simulation of rainfall, runoff, and sunspots. Also, some simulation of water reservoir operating schedules was performed. My goals are to obtain a wide range experience of the world, much as Thomas Jefferson did. To that end, I also brew beer. 
----------------- *Stuart, Christopher Technical Consultant, 220 CCC, Academic Computing, Cornell University 14850 (607) 255-8304 I work closely with the college of liberal arts at Cornell University and am most involved with the application of computing to course design and research. I have a B.A. in Medieval History from the University of the South (Sewanee, TN) and became interested in computing while working in the field of historic preservation in St. Augustine, Florida. My tastes, as probably is the case with most people in this group, are varied, though I would be hard pressed to decide between giving up my P.G. Wodehouse books or my bluegrass record collection. My duties as technical consultant at Cornell include managing the software lending library, consulting projects in the humanities, and teaching microcomputer applications. From: CBS%UK.AC.EARN-RELAY::EARN.UTORONTO::LISTSERV 13-SEP-1989 12:13:38.07Revised List Processor (1.6a) Subject: File: "BIOGRAFY 14" being sent to you Date: Wed, 13 Sep 89 07:00:46 EDT X-Humanist: Vol. 1 Num. 1086 (1086) * * File "BIOGRAFY 14" contains records larger than 80 characters. * It is consequently being sent to you in "Listserv-Punch" format. * * You can get information about that format by sending the following command * to LISTSERV@UTORONTO.BITNET: "Info LPunch" * ID/BIOGRAFY 14 V 00111 ----------------- Autobiographies of HUMANISTs Thirteenth Supplement Following are 23 additional entries to the collection of autobiographical statements by members of the HUMANIST discussion group. HUMANISTs on IBM VM/CMS systems will want a copy of Jim Coombs' exec for searching and retrieving biographical entries. It is kept on HUMANIST's file-server; for more information, see the Guide to HUMANIST. Further additions, corrections, and updates are welcome. Willard McCarty Centre for Computing in the Humanities, Univ. 
of Toronto mccarty@vm.epas.utoronto.ca OR mccarty@utorepas 11 October 1988 ----------------- *Ahrens, John Assistant Professor, Department of Philosophy, University of Hartford, West Hartford, CT 06117; (H) 203-247-9560 (O) 203-243-4745 I have just returned to teaching after spending the past four years as the Assistant Director of the Social Philosophy and Policy Center at Bowling Green State University. During this time I was the Managing Editor of the Center's publications, including a journal entitled Social Philosophy & Policy. My stint as an editor instilled in me a fervent loathing of bad writing and a desire to eradicate it wherever it may be. My research interests are primarily in the areas of ethics and political philosophy. I have a special interest in environmental issues and in the foundations of classical liberalism. When the attraction of traditional philosophical issues pales but I still have the desire to work, I turn my attention to popular culture, especially the literature and films of science fiction. I teach classes in these areas (including even science fiction), and in elementary logic. Beyond such academic pursuits, I am an avid student of computers and of the martial arts. ----------------- *Brink, Daniel T. Assoc. Prof., English, Arizona State University, Tempe, AZ 85287-0302 (602) 965-6795. My academic training was in Germanic philology, with an emphasis on the West Germanic tongues, Old English, Old Saxon, and the like. I minored in linguistics, and wrote a generative phonology of Dutch for my dissertation (U of Wisconsin). My computer interests awoke with the advent of the microcomputer, largely as a result--probably common to many humanities computing types--of my frustration in the early eighties with trying to get foreign characters into WordStar. I finally succeeded, but a sabbatical was lost in the process and my interests took a decidedly new direction.
I am now Manager of a new facility on my campus, Technology Assessment and Development, which has the charter of identifying and evaluating new products which have the potential of impacting the university in a major way. Talk about fun... I can still recite the entire corpus of Old Low Franconian literature by heart, however, so I haven't gone completely bad: Hebban olla vogala nestas bigunnan hinasi thu ende ik. Wat onbidan we nu? ----------------- *Hanly, Ken Brandon University, Brandon Mb. Canada, R7A 6A9 I am an associate professor of philosophy at Brandon University. I have been involved for some time in socialist politics as well as community involvement, e.g. was president of a co-operative housing project, of a direct charge buyer's co-op etc. I am interested in humanist religion - was a member of the local unitarian fellowship for some time - as well as discussion of contemporary social issues. I am also interested in Marxist analyses of social problems. As a hobby I read and write poetry and have served as editor of a poetry magazine and have edited a series of chapbooks. ----------------- *Horton, Thomas B. Dept. of Computer Science, Florida Atlantic Univ., Boca Raton, FL 33431; (407) 393-2674 I completed my PhD in Computer Science at the Univ. of Edinburgh (Scotland) in June 1987, and started as an assistant professor here at FAU the following autumn. My PhD research examined statistical authorship studies and focused on the question of collaboration between Shakespeare and Fletcher in "Henry VIII" and "The Two Noble Kinsmen." In this study I used distribution-free discriminant analysis techniques to analyze the rate of occurrence of function words, with successful results: 94.8% of 459 scenes of known authorship (containing at least 500 words) were assigned to the correct author. The procedure indicates that the two disputed plays are collaborations and generally supports the usual division (but not always).
Current research interests include the analysis of function word rates and characterization in the plays of Shakespeare and Fletcher. I am also very interested in computer processing of old-spelling texts, especially Jacobean drama and Middle English manuscripts. Here at FAU I teach core Computer Science courses such as Data Structures and Analysis of Algorithms. I generally program in Pascal and C on UNIX systems and am busy converting my colleagues to the TeX document processing system. At Edinburgh I helped set up a local "Computers in the Humanities" special interest group. FAU does not have a strong community of computer users in the humanities, but I have recently made contact with the College of Humanities, who are interested in joint supervision of graduate students, seminars and possibly a course in literary computing. I'd be very interested in hearing from anyone interested in research in authorship studies, literary statistics, characterization studies or Middle English texts. ----------------- *Janus, Louis E. Norwegian Department, St. Olaf College, Northfield, MN 55057; (507) 663-3486 (work); (612) 822-1015 (home). I teach Norwegian and Linguistics at St. Olaf College. In the past, I taught a beginning-level course in computer use for Humanists (an interim which focused on several UNIX tools). Now I am involved in several Mac projects, most specifically with sound digitizing in foreign language instruction. I have a number of Old Norse sagas in machine-readable format, which I soon will be announcing to the general public. I was at the meeting in Columbia, SC where we discussed establishing the HUMANIST, but somehow did not end up on the e-mail list. ----------------- *Keith, Philip M. English Department, St. Cloud State University, St. Cloud, MN 56301; 612-255-3189 I have been working on and off with computers in literature and the teaching of writing for the better part of a decade.
I have been working with word-processing and interactive programs for teaching and improving writing. I have also been working at some text-analyzer projects on metrics and prose structure. ----------------- *Lipson, Elizabeth Academic Computing Publications Editor, Emory University Computing Center, Uppergate House, Atlanta, Georgia 30322 404/727-7651 I write computing publications and a newsletter for our user community at Emory. My past activities have been in the area of computer training -- instructor-led, classroom/individual, computer-based training, self-paced materials. I have worked in computing for about five years. Before that, I worked in research/writing in the field of historic preservation. I have a B.A. in history from Georgia State University. ----------------- *MacWhinney, Brian Professor of Psychology, Department of Psychology, Carnegie Mellon University, Pittsburgh PA 15213; 412 268-3793 I am a psychologist and not a bona fide humanist. Much of my work is clearly psychological in nature. This is the work I do in simulating the acquisition of language with computers and in conducting experiments to provide data for these simulations. However, a second line of my research comes closer to touching on issues in the Humanities. This is my work as director of the Child Language Data Exchange System. With the support of various public and private national foundations, Catherine Snow of Harvard University and I have worked with many researchers in the field of child language acquisition to establish a large (over 100 megabytes) database of adult-child conversational interaction data. Most of these data are from English. However, a growing proportion of the database is from other languages as well. The CHILDES system has not only developed a large database but has also specified a standard transcription system for conversational analysis. In addition, we have developed a set of C programs for the analysis of conversational data.
These programs are unlike concordance programs in many ways and focus on Boolean searches, cross-tier searches, and numerical outputs. Membership in the CHILDES system and access to the data and programs are open to the research community. ----------------- *Makivirta, Joni Mikael Minna Canthin katu 22 A 5, 40100 Jyvaskyla, Finland 1. Studies I am a student of history. I should graduate during this year. My main subject is general history. I have also studied the history of Finland, economic history, social sciences (mostly politics), and educational sciences. I will probably still study speech communication as an extra subject. This year I should do my pro gradu work. I will most probably do a study of T.H. Green, an English politician of the 19th century (the Victorian period). Green was an interesting man because he respected Christian values in life. He thought that liberalism should also lead to a qualitative improvement in the human being. An improvement meant that people learned to act and work moralistically, in the Christian sense of the word. By learning these values people could also learn to think of the common good, not only of themselves. This kind of thinking gave some ideas to socialist thinkers as well, although they, of course, left out the word Christian. Whether Mr. Green was a true Christian, I do not know. But at least he sympathised with Christian values. 2. Life Well, that is about my studies. Now, my story: I have lived all of my life in Finland (which is in Europe, in Scandinavia, beside the USSR). And I am still living in my home city, Jyvaskyla, in central Finland. There are lots of lakes and woods here, and nature is very close. There are some 70,000 citizens in Jyvaskyla. I graduated from high school in 1986. After that I decided to study history at the local university. So here I am. I am a Christian, but I can sympathise with many kinds of people and understand many kinds of thoughts, although I would not apply them in my own life. Maybe this is the reason for my interest in T.H. Green? 3.
Expectations I hope I could find, via this list, someone who is interested in the Victorian period. I also hope I could learn to use some databases, and tell about them in the Department of History. I have founded the list HISTORY at FINHUTC, so if anyone is interested in that, you are welcome to join in. I wonder if HUMANIST and HISTORY could do some co-operation? ----------------- *McDaid, John G. Computer Coordinator, NYU Expository Writing Program, 269 Mercer St. Rm. 219 New York, NY 10003 (212) 998-8862 For the last five years, I've been teaching expository writing at NYU while working on my doctorate in media ecology. My dissertation is on the process of hypermedia composition, and I've been working at integrating hypertext (StorySpace, HyperCard) into our freshman comp sequence. We have two dedicated classrooms with Macintoshes running 16 sections each semester. I see computers from a McLuhanist perspective, and as a tool for decentering authority in the classroom. I'm interested in networks, computer conferencing, collaborative work-group support, and Xanadu. ----------------- *Meadows, Dennis Lynn Director, Institute for Policy and Social Science Research; Professor of Business Systems and Policy, University of New Hampshire, Durham, NH 03824, USA; 603/862-2186 Telex: 493-0372 RPC UI I have been on the faculties of MIT and Dartmouth; I just joined the UNH faculty this fall to define, create, and run a new institute that will support research interests of faculty across the campus. My research interests lie in the creation of sophisticated simulation models and educational games that show the interaction of psychological, economic, social, political, and technical factors. My games and books on this subject have been translated into over 30 languages. I also have an extremely strong and active interest in Eastern Europe. I have lived in Budapest and visited the USSR over 45 times. I will take a Fulbright in Moscow next spring.
I serve as US Director of the Soviet-American Environmental Education Project. ----------------- *Meath, Terrence W. User Services Specialist, Health Sciences Computing Services, University of Minnesota, BOX 720 UMHC, 5-235 Malcolm Moos Tower, 515 Delaware St. S.E., Minneapolis, MN 55455; (612) 625-7175 In requesting to join your list, I am representing several hundred faculty, staff and students in the Health Sciences at the UofM, and not just myself. We have taken a different approach to network-based information sharing. Rather than require each of our users to subscribe to the lists that they think might be useful to them (and then unsubscribe if necessary), we have created a small group of public mailboxes (mailboxes on our system that all of our users can access) and subscribe them to the various list servers. Our public mailboxes are organized around general topics such as statistics, computing, health, etc., and each contains the traffic of a dozen or so lists. The mailboxes have the ability to show all of the traffic together, or to sort on any one list. The biggest advantage is that we involve many more people in network communications than would be the case if they had to learn about, evaluate and manage the subscriptions by themselves. Further, this type of involvement eliminates a great deal of redundant network traffic. Our users are still free to subscribe to a list on their own, but our public mailbox technique has worked out so well that few people have chosen to do that. ----------------- *Moulthrop, Stuart Assistant Professor of English; P.O. Box 7355 Yale Station, New Haven, CT 06520; 203-436-3023 My primary interest is in theory and history of narrative, especially late-modern and "postmodern" developments. My dissertation (Yale, 1986) was an investigation of problematic closure in Sterne, Dickens, and Charlotte Bronte.
I'm currently working on a book concerning the status of fictional language in recent American fiction (Thomas Pynchon, Toni Morrison, Maxine Hong Kingston). Parallel (somewhat) to all of that, I also take an active interest in hypertext, particularly its literary applications. I've written a hypertextual treatment of Borges's "Garden of Forking Paths" in Storyspace and am presently involved in a couple of other hypertext projects, including speculations about an electronic literary review. ----------------- *Muller, R. Andrew Associate Professor, Department of Economics, McMaster University, Hamilton, Ontario; (416) 525-9140 x3831 I am an academic economist with professional interests in public policy, especially environmental policy, housing and industrial organization. I have published econometric studies of the pulp and paper industry's adjustment to pollution control and energy price changes, analyses of the Toronto housing market and rent control, and analyses of Canadian water export policy (with particular reference to the Grand Canal Scheme to export water from James Bay to the United States). I have also worked on simulating the effects of the proposed Canada-US free trade agreement. I have a long-standing interest in politics and political philosophy. I have often been disturbed by the extent to which my discipline, Economics, tends to coarsen the public sensibilities of those who study it. I have presently (Fall, 1988) a specific interest in the nature of liberal education in to-day's universities. I chair a committee in the Faculty of Social Sciences that has been struck to investigate possible changes in our undergraduate degree programme. Discussion has focussed on the nature of liberal education in the 1990s. I am trying to reconcile two views on this: an "applied social science" view which would "fix up" current degree programs by adding courses in computer techniques, report writing, etc.
and a more traditional liberal education view which would impose distributional requirements and create special courses to ensure all students received some exposure to science, literature, philosophy, and the history of thought. ----------------- *Neu, Joyce Dept. of Speech Communication, 207 Sparks, Penn State University, University Park, PA 16802, (814) 865-7365 I am an Assistant Professor of Speech Communication at Penn State University with a degree in Linguistics. I have used microcomputers in teaching writing to both natives and non-natives since 1983 and have been instrumental in developing computer labs at Santa Monica College and at the U. of California, Irvine for use by humanities faculty and students. ----------------- *Oleske, William F. Technical Assistant, Central Language Laboratory, The University, Leeds LS2 9JT, West Yorkshire, England, U.K.; Tel: Leeds (0532) 332647 I am charged with the introduction, maintenance and development of technical support for language acquisition, foreign language text processing, and support in the audio, video, and satellite broadcast reception area. I am also concerned with the development of the use of Computer Assisted Language Learning software and with non-Roman alphabet word processing as a communication and teaching tool. In my post, I am the technical support for the numerous autonomous foreign language teaching departments within the University. I also maintain contacts with my equivalent colleagues throughout the U.K. and Eire. ----------------- *Owen, David W. D. Dept of Philosophy, 213 Social Sciences Bldg, University of Arizona, Tucson, AZ 85718 Main computing interests: display and printing of alternate character sets, especially on laser printers; text searching; computer conferencing. ----------------- *Rhine, Michael L.
Computer Operations Supervisor, Georgia Tech Research Institute, (404) 421-7694; (404) 894-7171 Background in music and performing arts (22 yrs exp) in r+r, r+b, classical and jazz (classic guitar/recorder in chamber music ensembles, bebop and standards, etc...). Avid reader: eclectic interests. 4.5 years USMC, 3rdBAT 5thMAR, 1st MARDIV as enlisted and officer (0301/2). Currently working in the computer field. No previous background. All learning OJT. Studying Mathematics w/concentration in CompSci. ----------------- *Richman, David Theater and Dance, University of New Hampshire I am Assistant Professor of Theater at the University of New Hampshire. My principal interest is in theatrical production. For eleven years I was Director of Theater at the University of Rochester and Artistic Director of the University of Rochester Summer Theater. I have directed twenty-two productions, including "King Lear," "A Midsummer Night's Dream," and "The Misanthrope." I have published several articles on connections between theatrical practice and critical inquiry. Though I do not use computers in my work, I am serious about the computer as a tool and a source of information. Being blind, I find that the computer has become one of my most important sources of information, and moreover a vital connection to the academic community. Arguably, the coming into being of computers with speech is the greatest single advance for the blind since the invention of Braille. ----------------- *Rutherford, John RUTHERFORD@CTSTATEU Academic Computing Services Coordinator / Central Conn. State University Elihu Burritt Library / New Britain, CT 06050 / 203 827-7800 I am a librarian by profession and have become interested in academic computing subjects through work with online bibliographic services.
Prior to my position at Central Connecticut I was an assistant manager of information services at the University of Petroleum & Minerals in Dhahran, Saudi Arabia, and was involved in the implementation of the DOBIS/LIBIS automated library information system. I am interested in text recognition systems and have been a third-party follower of several HUMANIST discussions in this area and look forward to becoming more involved in HUMANIST discussions. ----------------- *Schriner, Ken Research Assistant, System Analyst, Arkansas Research and Training Center in Vocational Rehabilitation, University of Arkansas, 346 N. West Ave., Fayetteville, AR 72701; (501) 575-3656 I am currently employed as a computer jock by ARTCVR in support of faculty research. This recent development ended a 5-year stint with the Computing Services department at this campus. While there, some of my support of faculty was in the area of humanities, including several statistical studies of slave holdings in the state prior to the Civil War. I find the application of computers to fields like humanities quite interesting, since it is usually so different from the application in fields like engineering. I have also worked 2 years for Exxon. My role was computer support of engineers. We did large amounts of oil reservoir simulation computing. Computers used for simulation are another area of interest. I was bitten by that bug during my three years of employment by the US Geological Survey as a Hydrologist. Simulation there consisted of stochastic simulation of rainfall, runoff, and sunspots. Also, some simulation of water reservoir operating schedules was performed. My goals are to obtain a wide range of experience of the world, much as Thomas Jefferson did. To that end, I also brew beer.
----------------- *Stuart, Christopher Technical Consultant, 220 CCC, Academic Computing, Cornell University 14850 (607) 255-8304 I work closely with the college of liberal arts at Cornell University and am most involved with the application of computing to course design and research. I have a B.A. in Medieval History from the University of the South (Sewanee, TN) and became interested in computing while working in the field of historic preservation in St. Augustine, Florida. My tastes, as probably is the case with most people in this group, are varied, though I would be hard pressed to decide between giving up my P.G. Wodehouse books or my bluegrass record collection. My duties as technical consultant at Cornell include managing the software lending library, consulting projects in the humanities, and teaching microcomputer applications. From: CBS%UK.AC.EARN-RELAY::EARN.UTORONTO::LISTSERV 13-SEP-1989 12:14:51.49 Revised List Processor (1.6a) Subject: File: "BIOGRAFY 15" being sent to you Date: Wed, 13 Sep 89 07:00:55 EDT X-Humanist: Vol. 1 Num. 1087 (1087) * * File "BIOGRAFY 15" contains records larger than 80 characters. * It is consequently being sent to you in "Listserv-Punch" format. * * You can get information about that format by sending the following command * to LISTSERV@UTORONTO.BITNET: "Info LPunch" * ID/BIOGRAFY 15 V 00142 ----------------- Autobiographies of Humanists Fourteenth Supplement Following are 24 additional entries and 1 revised entry to the collection of autobiographical statements by members of the Humanist discussion group. Humanists on IBM VM/CMS systems will want a copy of Jim Coombs' exec for searching and retrieving biographical entries. It is kept on Humanist's file-server; for more information, see the Guide to Humanist. Further additions, corrections, and updates are welcome. Willard McCarty Centre for Computing in the Humanities, Univ.
of Toronto mccarty@utorepas.bitnet 19 November 1988 ----------------- *Chisholm, David Professor of German, University of Arizona, Tucson, AZ 85721, Tel. (602) 621-5924 or 621-7385. Member of ACH and ALLC. Computer-aided research on German and English language and literature, concordances to German literature; metrics and versification. ----------------- *Clausing, Stephen Assistant Professor, German Department, Yale University. I am also a computer programmer and the author of an authoring system for the Macintosh called the Private Tutor System (not to be confused with the IBM system of similar name). My research interests include Germanic Philology and Linguistics and the application of computers to pedagogical and linguistic problems. I have published in the areas of pedagogy, linguistics, and computer science. ----------------- *Culy, Christopher Ph.D. Student in Linguistics, Stanford University, 438 W. Sixth St., Claremont, CA 91711 USA; (714) 626-3392 Even though my background is in mathematics (B.S.) and linguistics (M.A. and Ph.D. in progress), I have only recently become interested in computing for the humanities. I am especially interested in the inputting and use of dictionaries and texts in various languages. I recently wrote an interface to a program to search an on-line English dictionary, and I am exploring the possibility of putting a Dogon-French dictionary on-line. I am also interested in the possible uses of computers in connection with literacy programs in developing countries. Finally, I am interested in clear, relatively simple interfaces so that computers are as accessible as possible. ----------------- *Davies, Anna Morpurgo Somerville College, Oxford OX2 6HD, England. I am a professor of comparative philology at the University of Oxford (England) and am interested in computers and the application of computers to written texts.
I have been working on ancient Indo-European languages, especially Ancient Greek and Latin, and the ancient Anatolian languages (Hittite, Hieroglyphic Luwian, etc.). ----------------- *Dumont, Stephen D. Senior Fellow, Pontifical Institute of Mediaeval Studies, 59 Queen's Park Crescent East, Toronto, Ontario, M5S 2C4, Canada, (416) 926-1300 x3232. In 1974 I graduated Phi Beta Kappa from Wabash College, Crawfordsville, Indiana, with a double major in philosophy and English literature. All my graduate studies were pursued at the Centre for Medieval Studies, University of Toronto (M.A. 1976, Ph.D. 1983) and the Pontifical Institute for Mediaeval Studies (M.S.L. 1979). In 1982 I was appointed instructor at the School of Philosophy at the Catholic University of America, Washington, D.C., and in 1983 assistant professor. In 1985 I accepted an appointment as Junior Fellow at the Pontifical Institute and was promoted to Senior Fellow 1 July 1988. My area of interest and research was and continues to be medieval philosophy and theology, especially after Aquinas. I have focused most on the philosophy of Duns Scotus and his school, and my doctoral thesis was on the Subtle Doctor's proofs for the existence of God. I am currently revising the thesis into a more general treatment of natural theology in the late middle ages. Concurrently, I have published a number of articles on the central philosophical contributions by Scotus and traced their fate at the hands of his fourteenth century critics and disciples. As most of my research is based on unedited sources, I have recently investigated computer-assisted critical editing. ----------------- *Heino, Aarre University of Tampere, P.O. Box 607 SF-33101, TAMPERE Finland; +358 31 156274 I have worked almost twenty years at the University of Tampere, the last ten years as second professor of comparative literature. My main fields are German literature and the theory of the novel.
I teach and write about the phenomena of literature and the function of the arts. Every term I try to bring a group of students together with computers, which is not so simple here for students of the humanities faculty. At the moment I am planning a seminar in co-operation with my colleague in Indiana. Her students and mine write about the same topics, then we exchange papers via BITNET and also give each other feedback. In these years I have had much to do with the Institute of Extended Studies of my university, where I am now developing distance education and multimedia teaching. I hope I can also use computers for these purposes. In short, I try to develop my teaching as a whole so that I can get the greatest benefit from computers. The hypertext and hypermedia programs seem to have qualities that I perhaps could use in the future. Until now my main software is MS-WORD, GrandView and Tornado, which I use in my daily work. ----------------- *Hill, Lamar M. Professor of History, Department of History, University of California, Irvine, CA 92717 USA; (714) 856-6524, 6521 NOTA BENE: I am not responding "for" the School of Humanities at UCI but I shall try to distribute HUMANIST items when I can. Access to this network by humanists is limited by the generally apathetic (to hostile) relationship of humanists to computers for more than word processing and by a woeful lack of funds to support computer communication. I hope that the latter will be a non-issue in the short-term future. The former is a far more intractable problem but we are working on it ("we" being a small number of us in the School who use computers comfortably). I do early modern English history with a special interest in legal history. I also teach the Reformation.
I received my AB from Kenyon College in history (and nearly two other majors in English and political science), my MA from (then) Western Reserve University after having done a year of law at Cornell, and my PhD from the University of London (University College). I am a Fellow of the Royal Historical Society and am active in several professional organizations, particularly the North American Conference on British Studies. My publications have centered on the legal writings and public career of Sir Julius Caesar, an Elizabethan/Jacobean lawyer, judge, and bureaucrat. I am now working in an entirely new area, civil litigants and civil litigation in the period 1558-1625. The two discrete studies I am presently working on are: (1) "Mistress Bourne's Complaint", an anatomy of the breakdown and dissolution of a late-sixteenth-century marriage, and (2) "The Jacobean Court of Requests and its Litigants: 1603-1625", a study of an equity court, its process, personnel, litigants and their litigation. In both instances I am using conventional historical and legal-historical methodologies as well as trying to develop a critical apparatus for reading verbatim pleadings and the depositions of witnesses. ----------------- *Hough, Michael I work at Seneca College in the Placement Office. I have a degree in English from the University of Western Ontario in London, Ontario, and I am currently taking courses in History at the University of Toronto during the evenings. I have a strong interest in the arts: I play the piano, I enjoy painting in water colours, and I like to write stories. I also read as often as I can. I use my computer at home to write and to explore the world via modem. I like to keep in touch with the social issues that are going on in the world, and for this purpose I try to read all that I can from reliable sources rather than tabloids. I am interested in joining this service because I feel that I can effectively add my knowledge and experiences to the debate.
I have travelled over much of Europe and I intend to travel extensively around the rest of the world in the future. I am 25 years old and was born in Bolton, England. I am more Canadian than English, however, as I moved here when I was five years old. ----------------- *Kippen, Jim Research Fellow, Dept of Social Anthropology & Ethnomusicology, The Queen's University of Belfast, Belfast BT7 1NN, Northern Ireland, UK; Telephone (UK) 0232 - 245133 Ext. 3706, (International) +44 - 232 - 245133 Ext. 3706 I began, as many ethnomusicologists do, as a trained Western classical musician with an undergraduate degree in music (1978). I specialised in piano, conducting, and composition. My fascination with world music was kindled at that time, and I explored performance techniques of several North Indian instruments, finally settling on specialisation in the tabla (a two-piece, tuned drum set). As a supplement to practical work, I began reading the anthropological literature to get some idea of the cultural structure and social organisation of the people who played tabla for a living. I then enrolled for a Ph.D. in social anthropology/ethnomusicology (1979) and undertook fieldwork to study the music and the lives of traditional musicians in Lucknow. Subsequently, I published a book on tabla and tabla musicians (1988) based largely on my thesis (1985). My work on the verbal representations of tabla music (quasi-onomatopoeic mnemonic syllables used for transmission and sometimes performance) suggested that techniques derived from formal linguistics (generative grammars) could be developed for the description of the percussion "language". Such descriptions were not feasible by hand, however, and so the grammars were implemented in an expert system designed for a microcomputer portable enough for experimental work on location.
The aim was to involve informants as analysts in the experimental process: the machine generated music which was assessed for quality and accuracy by the musicians, and the musicians in turn supplied improvisations that were analysed by the machine to test the validity of the grammars. By means of this interaction, models were constantly modified and more accurate hypotheses of musical structure (with greater cognitive validity?) were arrived at. (This work has been the subject of numerous articles.) However, shortcomings in the research theory and method forced a re-evaluation of the usefulness of the expert system/informant interaction as articulated by an intermediary analyst (researcher). As a result, a learning module has now been added to the expert system, and research is currently being undertaken to assess whether or not machine learning is the solution to problems encountered in the transfer of knowledge from informants to machines. ----------------- *Landow, George P. Professor of English, and Art Faculty Fellow, Institute for Research in Information and Scholarship (IRIS), Box 1946, Brown University, Providence, Rhode Island 02912 USA; 401 751-7493. George P. Landow, who holds the AB, MA, and PhD from Princeton U. and an MA from Brandeis, has also done graduate work at the U. of London. Landow, who has written on 19th-century literature, art, and religion as well as on educational computing, has taught at Columbia, the U. of Chicago, Brasenose College, Oxford, and Brown. He has been a Fulbright Scholar (1963-4), twice a Guggenheim Fellow (1973, 1978), and a Fellow of the Society for the Humanities at Cornell U. (1968-9), and he has received numerous grants and awards from the National Endowment for the Humanities and the National Endowment for the Arts.
He has organized with others several international loan exhibitions, including *Fantastic Illustration and Design in Britain, 1850-1930* (1979), and his books include *The Aesthetic and Critical Theories of John Ruskin,* *Victorian Types, Victorian Shadows: Biblical Typology and Victorian Literature, Art, and Thought* (1980), *Approaches to Victorian Autobiography* (1979), *Images of Crisis: Literary Iconology, 1750 to the Present* (1982), *Ruskin* (1985), and *Elegant Jeremiahs: The Sage from Carlyle to Mailer* (1986). Landow's projects in humanities computing began with several involving graduate students in English literature and art history which employed advanced mainframe word processing, electronic conferencing, and typesetting on the Brown mainframe to create group projects resulting in published books. One result was *A Pre-Raphaelite Friendship* (1985), an edition of unpublished 19th-century letters with full scholarly apparatus produced by J. H. Combs and others. Another was *Ladies of Shalott: A Victorian Masterpiece and Its Contexts* (1984), a heavily illustrated exhibition catalogue fully designed online using IBM Script, customized macros, and typesetting and page formatting programs developed at Brown by Allan H. Renear and others. Since 1984, he has worked as a member of the team at the Institute for Research in Information and Scholarship (IRIS) at Brown that developed Intermedia, a full hypermedia system. Landow supervised, edited, and partially wrote *CONTEXT32*, a body of several thousand hypermedia documents on this system used to support English courses ranging from introductory surveys to graduate seminars. He is currently editor of THE CONTINENTS OF KNOWLEDGE, an expansion of the Brown hypermedia materials by contributors from several dozen institutions to include materials from all disciplines.
His most recent publications include essays on the rhetoric of hypertext, the use of hypertext in education, and its effects on collaborative work, conceptions of the literary work, and literary criticism. He is currently editing a gathering of essays on hypertext and literature with Paul Delany. ----------------- *Masterson, Karie UCLA Humanities Computing Facility, 405 Hilgard Ave., 248 Kinsey Hall, Los Angeles CA 90024 I am a programmer/analyst for the Humanities Computing Facility at UCLA. I am also a Ph.D. student in Near Eastern Languages and Cultures at UCLA. I plan on completing the requirements for the Candidate in Philosophy degree this fall quarter. My specialization is Semitic languages. ----------------- *Matsuba, Stephen Naoyuki Research Associate, Centre for Textual Studies, Department of English, University of British Columbia, #397-1873 East Mall, Vancouver, BC CANADA V6T 1W5 My interests are bibliographical studies, computer-assisted research in the humanities, late-nineteenth-century drama, and early-twentieth-century drama. My special interests in these areas are Shaw, Shakespearian production and scholarship during this period, textual editing and stemmatics, and artificial intelligence. At present, my research focusses on computer applications in humanities research and teaching. I am completing an article on the editing of Shaw's Man of Destiny using the mainframe computer. My present research, with Professor Ira B. Nadel, is a study of Shakespeare's sonnets using DISCAN, a content and discourse analysis program developed by Professor Pierre Maranda at Laval University. The purpose of this project is to explore and evaluate the program's potential as a tool for literary research. I am also involved with the design and implementation of the English 100 Database for the Centre for Textual Studies at UBC.
It is a new way of administering the English 100 course, the largest first-year English course of its kind in North America, with an enrollment of over 3500 students and a staff of 108 instructors. The Database allows the instructors to access information from any computer terminal on campus as well as to raise problems, offer suggestions, and keep informed about available resources. The Centre is developing the Database as a teaching tool for multi-sectioned university courses regardless of the discipline. I have acted as the Assistant Editor for The Newsletter of the Victorian Studies Association of Western Canada, and am teaching at UBC as a Sessional Lecturer in the Department of English. I am in the process of applying for admission into a Ph.D. programme for the 1989-90 Winter Session. ----------------- *Ooi, Vincent Beng Yeow Senior Tutor, Department of English Language and Literature, National University of Singapore, Kent Ridge, SINGAPORE 0511; 02-7726032 Research interests/areas: Computational linguistics; Computational lexicography/lexicology; Artificial Intelligence; CALL. My MA dissertation is titled "Computational Lexicography: Constructing a Lexical Database in PROLOG". I will, hopefully, be going overseas in autumn next year to do a PhD in some aspect of computational linguistics. I'm also a member of the local inter-tertiary CALL group, which meets informally once a month to discuss CALL problems and projects. ----------------- *Raben, Joseph Professor Emeritus of English, Queens College/CUNY. Founded the journal Computers and the Humanities and edited it for twenty years; founding president of the Association for Computers and the Humanities. At present, adjunct professor at CUNY. ----------------- *Roovers, Ton Head of Computer Department, Faculty of Arts, University of Groningen, Grote Rozenstraat 31, NL 9712 TG Groningen, The Netherlands; tel. 3150636063.
While studying Dutch linguistics in the seventies I learned how to program in SNOBOL4 and SPITBOL. Since then I have become a user of many other languages and tools on all kinds of computers, but SNOBOL is still *our own* language to me. Since the beginning of the eighties I have been in charge of the computing facilities of the Faculty of Arts of our university, and I have helped and watched them grow from a few terminals on the university mainframe to a network with about 150 personal computers, with file servers, UNIX hosts and workstations, and lots of printers of all kinds. SPITBOL plays a modest but growing role next to Pascal, C, Lisp, Prolog and so on, plus packages like SPSS, dBASE, OCP and, last but not least, the UNIX tools. My task as a manager of these facilities (with the help of 5 colleagues) is growing fast, but it will continue to be a very challenging one, due to the fast development of the available computer hardware and software. Furthermore, there will be the need to explain repeatedly to hardware and software vendors that our needs, with a few exceptions, cannot be compared to those of the other faculties in our university. And last but not least, our own university and faculty boards must be convinced that our faculty needs more than a good library and some classrooms. ----------------- *Santiago, Delma (D_Santiago@UPRENET) Teodoro Aguilar st. 789, Los Maestros, Rio Piedras, P.R. 00923 I am a history school teacher in Rio Piedras, Puerto Rico. I am presently pursuing my Master's degree in History Education at the University of Puerto Rico, Rio Piedras Campus. My research work is based on the use of the computer in the Social Studies class. Prior to this investigation I was reading about the influence of technology on family education. This is my first contact with computers, and I'm very interested in obtaining all the advantages and knowledge that this group will provide.
----------------- *Smith, David School of Sociology, Kwansei Gakuin University, Nishinomiya 662 Japan. (0798) 53-6111 x 5390; Fax: (0798) 51-0955. (Nishinomiya is half-way between Kobe and Osaka.) I am spending the year in Japan at Kwansei Gakuin University as the Visiting Professor of Canadian Studies. I am teaching Gender Relations and am trying to set up a dialogue between Japanese and Canadian (and US) students on BITNET. The conversation is to be part of an experiment in teaching, as well as other things. Additional interests include a strong orientation towards bioethical issues (I consulted with the Law Reform Commission of Canada for 2 years on matters concerning abortion, fetal experimentation, genetic engineering, frozen embryos, and so on). I am also starting collaboration with the Japanese scholars who are most interested in this area (very limited interest in Japan at the moment). My formal training is in sociology and my "specialty" areas include quantitative methods and medical sociology. I have a very keen interest in matters concerning the philosophy of science (particularly social science) and try (with considerable ineptitude) to read Plato in the original Greek. Because of my previous dealings with both the medical and legal professions I am also interested in concepts of evidence and proof. ----------------- *Stuart, Thomas W School of Information & Library Studies, SUNY at Buffalo, Buffalo, NY I am currently an MLS graduate student and computer lab assistant here at SILS, SUNY at Buffalo. Earlier education background in social sciences, English, and regional planning. Work experience in environmental health, behavioral health (mental health and crisis services), and human services (youth services, information and referral, and criminal justice planning). Now, a career change (not necessarily a change in interests) to library and information services. Why?
Reasons range from liking how books feel and smell, through enjoyment of what I discover when browsing, to combined attraction to and misgivings about what new technologies promise/portend for mindfulness, creativity, sense and sensibility. Current research and inquiry interests also range widely, but ----------------- Assistant Coordinator, Medical Humanities Program, Michigan State University, East Lansing, MI 48824; 517-355-7550. I teach and speak on issues in health care ethics to medical and nursing students, as well as practicing physicians and nurses. My interests in computers and medical ethics have taken two forms. One is a computer bulletin board on the IBM mainframe at MSU which is the heart of the Medical Ethics Resource Network of Michigan. This bulletin board features postings of meetings, recent article abstracts, and summaries of court cases in Michigan and elsewhere; and a discussion forum which provides for commentaries and exchanges on difficult cases, draft hospital policies, and the like. The other is the development of CAI in medical ethics. I have written a pilot program on the ethics of medical confidentiality using an authoring language developed by a group of programmers at MSU. I would be willing to share this pilot with anyone interested, and would like to hear of other teaching uses of computers in medical ethics or the humanities. ----------------- *Tweyman, Stanley Director of Graduate Programme in Philosophy, Department of Philosophy, York University, North York, Ontario, Canada, M3J 1L1; (416) 736-5113 ----------------- Manager, Humanities Computing Facility, c/o English Department, Arizona State University, Tempe, AZ 85287-0302; (602) 965-2679 I'm trained as a linguist but working as a computer person. As a linguist I've been working in historical-comparative Micronesian linguistics -- no, not at Arizona, but at the University of Hawaii. It won't be easy to continue this work -- or any linguistic research -- at ASU. 
At UH my computer work was mostly in lexicography, using Bob Hsu's LEXWARE programs. At ASU I'm mostly just babysitting a bunch of PCs and PC users who haven't learned how to read the WordPerfect manual. I was told that I would be working with scholars doing research in the humanities. I haven't seen much of that here. I'd like to know what's going on at other institutions. ----------------- *Waters, Stacy <93651@UWAVM.ACS.WASHINGTON.EDU> Humanities and Arts Computing Center, DW-10, University of Washington, Seattle, WA 98195; (206) 543-5370 I currently use computers in conjunction with a Middle English problem of my own. In addition, I advise and assist members of the local community on a wide range of textual matters, from computer-assisted analysis to typesetting. ----------------- *Werner, Stefan Department of General Linguistics, University of Joensuu, P.O. Box 111, SF-80101 Joensuu, Finland; +358/73/151-4334 (work), +358/73/27094 (home) After university studies in West Germany, where I gained my first experiences in computing in the humanities (text analysis with programs like COCOA and TEXTPACK) and in general (programming in Fortran and Lisp), I took up work as a linguistics lecturer concentrating on essentially introductory computer courses for language and literature students (use of OCP, WordCruncher, BETA, etc.). I also work as an adviser and occasional programmer for the staff of the language departments. My main research interests are in literary computing, semiotics, and the study of mass media. ----------------- *Willett, Perry Reference Librarian, Bartle Library, SUNY-Binghamton, Binghamton, N.Y. 13901; (607) 777-4386, 777-2345 I am currently a librarian. I have undergraduate degrees in German and English from Washington U. (in St. Louis), and an M.A. in Comparative Literature and an M.L.S. from Rutgers.
Besides a general interest in the humanities, I am interested in the use and development of microcomputer applications in the study and teaching of languages and literatures.