11.0492 calls and conferences

Humanist Discussion Group (humanist@kcl.ac.uk)
Mon, 5 Jan 1998 22:28:15 +0000 (GMT)

Humanist Discussion Group, Vol. 11, No. 492.
Centre for Computing in the Humanities, King's College London

[1] From: "David L. Gants" <dgants@parallel.park.uga.edu> (94)
Subject: CFP: Adapting Lexical and Corpus Resources to
Sublanguages and Applications

[2] From: "David L. Gants" <dgants@parallel.park.uga.edu> (109)
Subject: THE EVALUATION OF PARSING SYSTEMS - Workshop Call for Papers

[3] From: "David L. Gants" <dgants@parallel.park.uga.edu> (162)
Subject: Interdisciplinary Workshop on Deception and Trust

[4] From: Mike Fraser <mike.fraser@computing-services.oxford.ac.uk> (31)
Subject: Computers & Texts 16: Call for Articles and Reviews

[5] From: Ineke Schuurman <ineke@ccl.kuleuven.ac.be> (499)
Subject: Corpora: EUROCALL 98 Conference (update!)

[6] From: "David L. Gants" <dgants@english.uga.edu> (138)
Subject: ECML'98 TANLPS Workshop: First Call for Papers

Date: Fri, 2 Jan 1998 11:43:48 -0500 (EST)
From: "David L. Gants" <dgants@parallel.park.uga.edu>
Subject: CFP: Adapting Lexical and Corpus Resources to Sublanguages and Applications

>> From: Simone Saint Laurent <lrec@ilc.pi.cnr.it>


Adapting Lexical and Corpus Resources to Sublanguages and Applications
Granada, May 26, 1998

This workshop will be held in conjunction with the First International
Conference on Language Resources and Evaluation (LREC), to be held
in Granada, Spain, May 28-30, 1998.
The workshop will provide a forum for those researchers involved in
the development of methods to integrate corpora and MRDs, with the aim of
adding adaptive capabilities to existing linguistic resources.

Workshop Scope and Aims

Lexicons, i.e., those components of an NLP system that contain "computable"
information about words, cannot be considered static objects. Words may
behave very differently in different domains, and there are language
phenomena that do not generalize across sublanguages.
Lexicons are a snapshot of a given stage of development of a language,
normally provided without support for adapting to change, whether caused
by language creativity and development or by a shift to a previously
unencountered domain.

The divergence of corpus usage from lexical norms has been studied
computationally at least since the late Sixties, but only recently
has the availability of large on-line corpora made it possible to establish
methods to cope systematically with this problem.
An emerging branch of research is now involved in studies and experiments
on corpus-driven linguistics, with the aim of complementing and
extending earlier work on lexicon acquisition based on Machine Readable
Dictionaries (MRD): data are extracted from texts, as embodiments of
language in use, so as to capture lexical regularities and to code them
into operational forms. The purpose of this workshop will be to provide an
updated snapshot of current work in the area, and promote discussion of
how to make progress.

Central topics will be (though this list is in no way exclusive):

* corpus-driven tuning of MRDs to optimize domain-specific inferences
* terminology and jargon acquisition
* sense extensions
* acquisition of preference or subcategorization information from corpora
* taxonomy adaptation
* statistical weighting of senses etc. to domains
* use of MRDs to provide explanations of linguistic phenomena in corpora
* the scope of "lexical tuning"
* the evaluation of lexical tuning as a separate task, or as part
of a more generic task

Organizers: Roberto Basili (University of Roma "Tor Vergata"), Roberta
Catizone (University of Sheffield), Maria Teresa Pazienza (University of
Roma "Tor Vergata"), Paola Velardi (University of Roma "La Sapienza"),
Yorick Wilks (University of Sheffield)

Preliminary Program Committee

Yorick Wilks University of Sheffield
Roberta Catizone University of Sheffield
Paola Velardi University of Roma "La Sapienza"
Maria Teresa Pazienza University of Roma "Tor Vergata"
Roberto Basili University of Roma "Tor Vergata"
Bran Boguraev Brandeis University
Sergei Nirenburg New Mexico State University
James Pustejovsky Brandeis University
Ralph Grishman New York University
Christiane Fellbaum Princeton University

Paper Submission

Papers should not exceed 4000 words or 10 pages.


Three hard copies should be sent to:

Paola Velardi
Dipartimento di Scienza dell'Informazione
via Salaria 113
00198 Roma

Electronic submission will be allowed in PostScript, Word for Mac, or RTF format.
An ftp site will be available on demand.
Authors should send an informational email to Paola Velardi
(velardi@dsi.uniroma1.it) even if they submit in paper form. An electronic
submission should be accompanied by a plain ASCII text message containing
the following:

# NAME : Name of first author
# TITLE: Title of the paper
# PAGES: Number of pages
# FILES: Name of file (if also submitted electronically)
# NOTE : Anything you'd like to add
# KEYS : Keywords
# EMAIL: Email of the first author
# ABSTR: Abstract of the paper


Paper Submission Deadline (Hard Copy/Electronic) February 20
Paper Notification March 20
Camera-Ready Papers Due April 15
L&CT workshop May 26


Prof. Paola Velardi
Dipartimento di Scienza dell'Informazione
via Salaria 113
Universita' "La Sapienza"
00198 Roma
ph. +39-(0)6-49918356
fax +39-(0)6-8541842 8841964

Date: Fri, 2 Jan 1998 11:44:45 -0500 (EST)
From: "David L. Gants" <dgants@parallel.park.uga.edu>
Subject: THE EVALUATION OF PARSING SYSTEMS - Workshop Call for Papers

>> From: John Carroll <johnca@cogs.susx.ac.uk>


a workshop jointly organised by the CEC Language
Engineering 1 projects SPARKLE and ECRAN

to be held at the First International Conference on Language Resources
and Evaluation (LREC), Granada, Spain, May 26, 1998

This workshop will provide a forum for researchers interested in the
development and evaluation of natural language grammars and parsing
systems, and in the creation of syntactically annotated reference
resources.
Organisers: John Carroll, Roberto Basili, Nicoletta Calzolari,
Robert Gaizauskas, Gregory Grefenstette


The aim of this workshop is to provide a forum for discussion of
evaluation methods for parsing systems, and proposals for the
development of syntactically annotated language resources.

With increased attention to evaluation of component technology in
language engineering, evaluation of parsing systems is rapidly becoming
a key issue. Numerous methods have been proposed and while one, the
Parseval/Penn Treebank scheme, has gained wide usage, this has to some
extent been due to the absence of workable alternatives rather than to
whole-hearted support. Parseval/PTB evaluation has several limitations
and drawbacks, including a commitment to a particular style of
grammatical analysis, and oversensitivity to certain innocuous types of
misanalysis while failing to penalise other common types of more serious
mistake. Also, the original published description of the scheme -- and
the evaluation software widely distributed as a follow-up to it -- is
specific to the English language. It may be that there are currently no
alternative more workable schemes or proposals, but this needs to be
more fully discussed: this workshop will provide an opportunity for such
a debate.
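For concreteness, the Parseval scheme mentioned above scores a candidate parse against a gold-standard treebank parse by comparing their labelled constituent brackets. A minimal sketch follows, representing each parse as a collection of (label, start, end) spans; the representation and function name are illustrative assumptions, not the interface of any particular evaluation package:

```python
# Sketch of Parseval-style bracket scoring, assuming parses are given as
# collections of (label, start, end) constituent spans over word positions.
# Illustrative only; real tools add normalisations (e.g. ignoring punctuation).
from collections import Counter

def parseval_scores(gold, candidate):
    """Return (precision, recall, f1) over labelled bracket multisets."""
    gold_c = Counter(gold)
    cand_c = Counter(candidate)
    # Matched brackets: multiset intersection of gold and candidate spans.
    matched = sum((gold_c & cand_c).values())
    precision = matched / sum(cand_c.values()) if candidate else 0.0
    recall = matched / sum(gold_c.values()) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = [("S", 0, 5), ("NP", 0, 2), ("VP", 2, 5)]
cand = [("S", 0, 5), ("NP", 0, 1), ("VP", 2, 5)]
print(parseval_scores(gold, cand))
```

Note how the metric illustrates the criticism above: the single misplaced NP boundary costs one bracket in both precision and recall, regardless of whether the error is innocuous or serious for a downstream application.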

This workshop is particularly timely given the large number of CEC
Language Engineering projects that involve parsing in one form or
another and which need to evaluate and share the results of their
efforts. Parsing is an essential part of many larger applications, such
as Information Extraction, which have gained in importance over the last
few years. Often in such systems, the strength of the parser and
grammar has a direct effect on the desired results, and thus achieving
good results rests on being able to determine and improve weaknesses in
the parser/grammar. Without a reliable parser evaluation method this
cannot be done effectively.

A parsing evaluation workshop is also appropriate at this time given the
imminent creation of large-scale syntactically annotated resources for
European languages. Contributions from those involved in such activities
are welcomed, so as to improve communication between the resource
construction and the resource utilisation communities. This should
ensure that the resources constructed are maximally useful to the
general language engineering community.

The organisation of this workshop brings together two European language
engineering projects which are closely related and whose partners share
similar research interests: SPARKLE and ECRAN.

The organisers solicit contributions from the general community on the
following topics:

-- descriptions of generic syntactic annotation schemes
-- methodologies and metrics for parsing system evaluation
-- reports and analyses of the results of utilising particular parser
evaluation schemes
-- description/analysis/experience of language-dependent (especially
for languages other than English) and task-dependent syntactic
annotation schemes


Programme Committee:

Roberto Basili
Ted Briscoe
Nicoletta Calzolari
John Carroll
Roberta Catizone
Robert Gaizauskas
Gregory Grefenstette
Mark Hepple
Tony McEnery
Maria Teresa Pazienza
Paola Velardi
Yorick Wilks


Papers should not exceed 4000 words or 10 pages. Submission may be in
either hard copy or electronic form. The submission deadline is February
15th, 1998.

Hard Copy Submission:

Three copies of the paper should be sent to:

Dr John Carroll
Cognitive and Computing Sciences
University of Sussex
Brighton BN1 9QH

Electronic Submission:

Electronic submission may be in either self-contained Latex, Postscript,
or RTF format, to john.carroll@cogs.susx.ac.uk. For each submission --
whether hard copy or electronic -- a separate plain ASCII text email
message should be sent to John Carroll, containing the following:
# NAME : Name of first author
# TITLE: Title of the paper
# PAGES: Number of pages
# NOTE : Any relevant instructions
# KEYS : Keywords
# EMAIL: Email of the first author
# ABSTR: Abstract of the paper


Paper submission deadline (hard copy/electronic) February 15th
Notification of acceptance March 10th
Camera-ready papers due April 10th
Workshop May 26th


General information about the conference is at:

Specific queries about the conference should be directed to:

LREC Secretariat
Facultad de Traduccion e Interpretacion
Dpto. de Traduccion e Interpretacion
C/ Puentezuelas, 55
18002 Granada, SPAIN
Tel: +34 58 24 41 00 - Fax: +34 58 24 41 04

Date: Fri, 2 Jan 1998 11:45:41 -0500 (EST)
From: "David L. Gants" <dgants@parallel.park.uga.edu>
Subject: Interdisciplinary Workshop on Deception and Trust

>> From: falcone@pscs2.irmkant.rm.cnr.it (Rino Falcone)


Workshop at the
Second International Conference on Autonomous Agents (AA'98)


Description of the workshop:

The aim of the workshop is to bring together researchers who can
contribute to a better understanding of trust and deception in agent
societies.
Most agent models assume secure and reliable communication to exist between
agents. However, this ideal situation is seldom met in real life.
Therefore, many techniques (e.g. contracts, signatures, long-term personal
relationships) have evolved over time to detect and prevent deception
and fraud in human communication, exchanges and relations, and hence to
assure trust between agents.

In recent research on electronic commerce trust has been recognized as
one of the key factors for successful electronic commerce adoption. In
electronic commerce problems of trust are magnified, because agents
reach out far beyond their familiar trade environments. Also it is far
from obvious whether existing paper-based techniques for fraud
detection and prevention are adequate to establish trust in an
electronic network environment where you usually never meet your trade
partner physically, and where messages can be read or copied a million
times without leaving any trace. Trust building is more than secure
communication via electronic networks, as can be obtained with, for
example, public key cryptography techniques. For example, the
reliability of information about the status of your trade partner has
very little to do with secure communication. With the growing impact of
electronic commerce distance trust building becomes more and more
important, and better models of trust and deception are needed. One
trend is that in electronic communication channels extra agents, the
so-called Trusted Third Parties, are introduced into an agent community
to take care of trust building among the other agents in the
network. For example, in some cases the successful application of public
key cryptography critically depends on trusted third parties that issue
the keys. Although this workshop does not focus on techniques for
secure communication (e.g. public key cryptography), we would welcome
analyses of the advantages and limitations of these techniques for
trust building.

The notion of trust is definitely important in other domains of agent
theory, beyond that of electronic commerce. It seems even foundational for
the notion of "agency" and for its defining relation of acting "on behalf
of". So, trust is also relevant in human-computer interaction; consider the relation
between the user and her/his personal assistant (and, in general, her/his
computer). But it is also critical for modeling groups and teams,
organisations, coordination, negotiation, with the related trade-off
between local/individual utility and global/collective interest; or in
modelling distributed knowledge and its circulation. In sum, the notion of
trust is crucial for all the major topics of Multi-Agent systems.
Thus what is needed is a general and principled theory of trust, of its
cognitive and affective components, and of its social functions.

Analogously, the study of deception is not only very relevant for avoiding
practical troubles; it seems truly foundational for the theory of
communication. First, because it challenges Grice's principles of
linguistic communication; second, because the notion of "sign" itself has
been defined in semiotics in relation to deception: "In principle,
Semiotics is the discipline studying whatever can be used for lying" (U.
Eco). Thus not only practical defences against deception (like reputations,
guarantees, etc.), but also a general and principled theory of deception
and of its forms (including fraud) are needed.

We would encourage an interdisciplinary focus for the workshop, as well as
the presentation of a wide range of models of deception, fraud and
trust(building). Examples include AI models, BDI models, cognitive models,
game theory, and also management science theories about trust.

Suggested topics include, but are not restricted to:

* models of deception and of its functions
* models of trust and of its functions
* models of fraud
* role of trust and trusted third parties (TTP) in electronic commerce
* defensive strategies and mechanisms
* ways to detect and prevent deception and fraud


The full-day workshop will be aimed at creating an informal atmosphere for
stimulating discussions, interdisciplinary exchange and deep understanding
of each other's perspective.
We plan to have both:

Paper presentations:
Long presentations (25-30 min) of the accepted papers, plus 10-15 minutes
for discussion (possibly with discussants). Plenary discussion at the end.

Panel sessions:
A couple of topics will be selected for a focused discussion. Some of the
attendees will be requested to participate as panelists. The panel chairs
will circulate a list of questions for the panelists prior to the workshop.

The accepted papers will be published in the workshop proceedings. The
publication of a revised version of the accepted papers is being negotiated
with a high quality publisher.


The workshop welcomes submissions of original, high quality papers
addressing issues that are clearly relevant to trust, deception and fraud
in agent-based systems, either from a theoretical or an applied
perspective. Papers will be peer reviewed by at least two referees from a
group of reviewers selected by the workshop organizers.
Submitted papers should be new work that has not been, and is not about to
be, published elsewhere.

Paper submissions should include a full paper and a separate title page
with the title, authors (with full addresses), a 300-400 word abstract, and
a list of keywords. The length of submitted papers must not exceed 12 pages
including all figures, tables, and bibliography. All papers must be
written in English.

* The authors must send the title page of their paper by email by
January 15th.
* Submissions must be sent electronically, as a PostScript or MS Word
format file, by January 20th.
* The authors must also airmail one hard copy of their paper to two
of the organizers as soon as possible after the electronic submission.
* No submissions by fax or arriving after the deadline will be accepted.


for the electronic submission
Rino Falcone
tel. +39 - 6 - 860 90 211

for the airmail hard copy

Babak Sadighi Firozabadi
Department of Computing - Imperial College
180 Queen's Gate - London SW7 2BZ - U.K.

and (notice "and")

Cristiano Castelfranchi
National Research Council - Institute of Psychology
Viale Marx, 15 - 00137 Roma - ITALY
tel +39 6 860 90 518


Deadline for the electronic title page January 15, 1998
Deadline for Paper Submission January 20, 1998
Notification of Acceptance/Rejection March 1, 1998
Deadline for camera-ready version April 1, 1998
Workshop May 9, 1998


Phil Cohen
Dept. of Computer Science and Engineering, Oregon Inst. of Science
and Tech., USA

Robert Demolombe

Andrew J I Jones
Dept. of Philosophy - Univ. of Oslo, Norway

Anand Rao
Australian AI Institute, Melbourne, Australia

Munindar Singh
Dept. of Computer Science - North Carolina State University, USA

Chris Snijders
Dept. of Sociology, Utrecht, The Netherlands

Gilad Zlotkin
VP Engineering, Israel

Gerd Wagner
Inst.f.Informatik - Univ. Leipzig, Germany

Cristiano Castelfranchi (co-chair)
National Research Council - Institute of Psychology- Rome, Italy

Yao-Hua Tan (co-chair)
EURIDIS - Erasmus University - Rotterdam - The Netherlands

Rino Falcone (co-organizer)
National Research Council - Institute of Psychology- Rome, Italy

Babak Sadighi Firozabadi (co-organizer)
Department of Computing - Imperial College - London - UK

Rino Falcone
IP - CNR National Research Council
Division of "Artificial Intelligence, Cognitive Modeling and Interaction"
Viale Marx, 15 00137 ROMA

email: falcone@pscs2.irmkant.rm.cnr.it or falcone@vaxiac.iac.rm.cnr.it
tel: +39 6 86090.211 fax: +39 6 86090.214

Date: Mon, 5 Jan 1998 12:46:31 +0000 (GMT)
From: Mike Fraser <mike.fraser@computing-services.oxford.ac.uk>
Subject: Computers & Texts 16: Call for Articles and Reviews

COMPUTERS & TEXTS 16: Call for Articles and Reviews

Articles and reviews are invited for the next issue of Computers & Texts,
the newsletter of CTI Textual Studies. Articles may concern any aspect of
the use of computers in the HE teaching of the disciplines we support
(literature in all languages, linguistics, theology, classics, philosophy,
film studies, theatre arts and drama). We especially welcome reviews and
case studies of IT currently being used in undergraduate/postgraduate
courses (especially within UK higher education). Reviews of relevant books
and conference reports are also welcome. We would also consider short
IT-related profiles of UK departments (further details available on

All contributions for Computers & Texts 16 should reach the Centre by
20 February 1998. Submissions may be made by electronic mail to
ctitext@oucs.ox.ac.uk or mike.fraser@oucs.ox.ac.uk. Submissions on paper
should be sent to the Centre together with an electronic version of the
document (and any screenshots) on a 3.5" disk.

Articles should not normally exceed 2,500 words and reviews should be
between 800 and 1,500 words. Please feel free to discuss any article/review
prior to submission.

Contributions will appear in both the print and electronic editions of
Computers & Texts.

Dr Michael Fraser Email: mike.fraser@oucs.ox.ac.uk
Manager, CTI Textual Studies Fax: +44 1865 273 275
Humanities Computing Unit, OUCS Tel: +44 1865 283 282
University of Oxford
13 Banbury Road http://info.ox.ac.uk/ctitext/
Oxford OX2 6NN

Date: Mon, 05 Jan 1998 18:13:48 +0100 (MET)
From: Ineke Schuurman <ineke@ccl.kuleuven.ac.be>
Subject: Corpora: EUROCALL 98 Conference (update!)



invite you to the

EUROCALL 98 Conference


venue: Faculty of Arts, K.U.Leuven, Blijde-Inkomststraat 21, Leuven, Belgium,
9 - 12 September 1998


Pre-conference workshops, keynote presentations, poster sessions,
demonstrations, parallel sessions and workshops; PC-labs, the one-computer
classroom, multimedia, courseware, software, authorware; learning theory,
educational principles, psychology of instruction, educational policy;
e-mail, WWW, video-conferencing, school projects, text processing, idea

ICT (Information and Communications Technology) expands the boundaries of
learning: from the small classroom to a global scale, passing limits of
space and time. Learning becomes asynchronous: e-mail, the WWW,
video-conferencing,... allow international contacts. Learning becomes more
autonomous and student-oriented, with the teacher as a facilitator. From
being teacher-centred the process becomes learner-centred. Language
learning plays the central role in this change.

All correspondence to the Conference Secretariat:

Claudine Van Volsem, EUROCALL 98, LINOV/UPV, Celestijnenlaan 200 A, B-3001
Heverlee, BELGIUM, tel. +32 16 32 77 31, fax +32 16 32 79 75
e-mail: eurocall98@linov.kuleuven.ac.be
web-site: http://www.arts.kuleuven.ac.be/eurocall98
President of the local organising committee: Prof. dr. Michael Goethals,
Faculty of Arts,
e-mail: michael.goethals@arts.kuleuven.ac.be

EUROCALL is the European Association for Computer Assisted Language
Learning, an association of language teaching professionals in Europe and



* integration of ICT (Information and Communications Technology)
in the language curriculum with emphasis on their potential
for stimulating international contacts and enhancing the
quality, diffusion and cost-effectiveness of language content
either in a classroom, self-access or distance-learning environment

* assessment and evaluation of software tools and resources (with
special attention to telematics and multimedia)

* new language learning strategies (autonomous learning, data-driven
learning, learner-centred learning) and their influence on
courseware design

* ICT developments (WWW, e-mail, computer conferencing) and their
implications for new language learning strategies

[material deleted -- see the Web page]

Eurocall98 Conference:

Date: Mon, 5 Jan 1998 17:21:32 -0500 (EST)
From: "David L. Gants" <dgants@english.uga.edu>
Subject: ECML'98 TANLPS Workshop: First Call for Paper

ECML-98 Workshop: First Call for Papers

ECML-98 Workshop:
Towards adaptive NLP-driven systems:
linguistic information, learning methods and applications

Organized by :
R. Basili, M.T. Pazienza (University of Roma, Tor Vergata), ITALY

Since most applications, from syntactic to semantic, are lexicon
driven, the systematic and reliable acquisition of linguistic information
on a large scale is the real challenge for Natural Language Processing
(NLP). The empiricist view of Natural Language Processing and Learning has
recently become more attractive to a wider research community:
computational linguistics, artificial intelligence and psychology seem to
converge on a data-oriented perspective aiming to overcome the traditional
knowledge acquisition bottleneck.
It has often been noted that the limited attention paid by the machine
learning community to text and speech data seems unjustified. It is thus
more and more evident that empirical learning can alleviate the main NLP
problem by means of a variety of methods for the automatic induction of
lexical knowledge.
Lexical knowledge is often hard to compile by hand, and even harder to port
and reuse. NLP application systems still have a low impact on real-world
problems, mainly due to the costs of reusing and customizing the required
lexicons. In particular, changes in the domain cause changes in the lexical
information required in the underlying natural language. Empirical,
symbolic machine learning methods can be perfectly suited to tasks like the
automatic acquisition and adaptation of this knowledge. Rule induction,
symbolic approaches to clustering, lazy learning, and inductive logic
programming have already been proposed by a growing community that is
taking up the challenge for theoretical (i.e. methodological) and
application purposes. A variety of techniques must be combined in order to
successfully design realistic inductive systems for text processing: the
targets of this research are methodological and design principles for
systems combining linguistic and lexical learning capabilities for large
scale language processing tasks. This is what we mean by adaptive
NLP-driven systems.
Within this research enterprise, some issues can favour a synergistic
process between the NLP and ML areas: access to large data sets, which are
still growing thanks to the telematics facilities available nowadays;
extending the set of typical classes of ML problems to other hard
cases (particularly dense in NLP processes); and adding inductive
capabilities to NLP systems for tasks related to specific applications
(e.g. Information Extraction).

The proposed workshop thus aims to stimulate research and discussion
on the following aspects:
- establishing results and evidence on the suitability of different ML
paradigms for specific levels of representation of lexical knowledge
(morphology, syntax and linguistic inference, among others)
- comparing quantitative approaches to lexical acquisition with
empirical symbolic methods
- stimulating discussion on the cognitive perspective of some models within
a plausible architecture for Language Processing and Learning
- establishing results on the applicability of the extracted/induced
knowledge within NLP systems, with respect to assessed evaluation criteria
typical of the ML and Language Engineering (LE) areas
- case studies on adaptive NLP systems, i.e. effective NLP systems
integrating linguistic inferences with inductive capabilities (WWW KB at
- critical review of existing experiences with adaptive NLP systems
- establishing guidelines for an evaluation framework for adaptive NLP
systems: accuracy of the linguistic process, robustness of the induction
process, ...
- promoting cooperation among research groups in Europe and the USA to
exchange ideas, data and tools for designing and experimenting with
architectures for adaptive NLP systems

Workshop format:

The workshop is expected to cover a whole day.
In the first session, apart from an invited talk, we expect to cover
methodological issues. Papers related to advanced research on the
suitability of learning paradigms for the different target lexical
information will be favoured. Prototypical examples in this area are
studies on empirical learning of tasks like POS tagging, induction of
grammatical information, and symbolic learning of word sense
disambiguation criteria and lexical semantic information. A panel
discussion is expected to close the morning session and focus on
principles of suitability for learning paradigms vs. lexical levels.
In the second half of the day we expect to stimulate participants to cover
application areas, like IR and IE, with a couple of invited talks on
existing adaptive systems as a basis for presenting novel aspects of
integrating NLP capabilities with learning from experience (examples,
errors, performance). At least three or four further papers are expected to
concentrate on original research work that we know is currently under
development in several research centres in Europe (Sheffield University,
Tilburg, Rome Tor Vergata and Torino University). A panel discussion on the
implications of the adaptive paradigm for existing and potential NLP
systems will close the workshop.
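To make the flavour of "empirical learning of tasks like POS tagging" concrete, here is a minimal sketch of the simplest such learner: a unigram "most frequent tag" baseline induced from a tagged corpus. The toy corpus and the function names are illustrative assumptions, not material from the workshop itself:

```python
# Sketch of empirical lexical learning: a unigram most-frequent-tag POS
# baseline induced from tagged example sentences. Retraining on a new
# domain's corpus adapts the lexicon automatically, in the spirit of the
# adaptive NLP systems discussed above.
from collections import Counter, defaultdict

def train_unigram_tagger(tagged_sentences):
    """Count (word, tag) pairs and keep each word's most frequent tag."""
    counts = defaultdict(Counter)
    for sentence in tagged_sentences:
        for word, tag in sentence:
            counts[word.lower()][tag] += 1
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

def tag(model, words, default="NN"):
    """Tag words with the learned model, falling back to a default tag."""
    return [(w, model.get(w.lower(), default)) for w in words]

corpus = [[("the", "DT"), ("cat", "NN"), ("sleeps", "VBZ")],
          [("the", "DT"), ("dog", "NN"), ("barks", "VBZ")]]
model = train_unigram_tagger(corpus)
print(tag(model, ["The", "cat", "barks"]))
```

Richer learners (rule induction, lazy learning, ILP) replace the frequency table with induced context-sensitive rules, but the workflow of inducing lexical knowledge from annotated text is the same.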

Program Committee

R. Basili (University of Roma, Tor Vergata, ITALY)
M. Craven (Carnegie Mellon University, USA)
W. Daelemans (University of Tilburg, THE NETHERLANDS)
M.T. Pazienza (University of Roma, Tor Vergata, ITALY)
L. Saitta (University of Torino, ITALY)
C. Samuelsson (Bell Labs, AT&T, USA)
Y. Wilks (University of Sheffield, UK)

Paper Submission:

Papers should not exceed 3000 words or 6 pages.

Hard Copy Submission:

Three copies of the paper should be sent to:

Roberto Basili
Department of Computer Science, Systems and Production
University of Roma, Tor Vergata
Via di Tor Vergata
00133 Roma (ITALY)
e-mail: basili@info.utovrm.it

Electronic Submission:

Electronic submission may be in either self-contained PostScript
or RTF format, to basili@info.utovrm.it.
For each submission -- whether hard copy or electronic -- a separate plain
ASCII text email message should be sent to Roberto Basili, containing the
following:

# NAME : Name of first author
# TITLE: Title of the paper
# PAGES: Number of pages
# FILES: Name of file (if attachments are submitted electronically)
# NOTE : Any relevant instructions
# KEYS : Keywords
# EMAIL: Email of the first author
# ABSTR: Abstract of the paper


Workshop Announcement and Call for Papers: 5 January 1998
Papers due : 15 February 1998
Notification of Acceptance : 5 March 1998
Final version due : 25 March 1998

==== cut here ====

Roberto Basili
Department of Computer Science, Systems and Production
University of Roma, Tor Vergata
Via di Tor Vergata
00133 Roma (ITALY)
e-mail: basili@info.utovrm.it
tel: +39 - 6 - 7259 7391
fax: +39 - 6 - 7259 7460

Humanist Discussion Group
Information at <http://www.kcl.ac.uk/humanities/cch/humanist/>