16.105 encoding questions

From: Humanist Discussion Group (by way of Willard McCarty <w.mccarty@btinternet.com>)
Date: Tue Jul 02 2002 - 01:50:05 EDT


                   Humanist Discussion Group, Vol. 16, No. 105.
           Centre for Computing in the Humanities, King's College London

             Date: Tue, 02 Jul 2002 06:42:52 +0100
             From: lachance@chass.utoronto.ca (Francois Lachance)
             Subject: Sometime subsequent to January and before the next January


    Towards the end of January, in describing Susan Hockey's presentation of
    possible applications of different tag sets, I promised a subsequent post
    regarding headers. See:

    In her contribution to _The Literary Text in the Digital Age_ Susan Hockey
    astutely invites us to entertain the construction of a header, the place
    for holding metadata, only after having moved through a series of what may
    be called abstractions from the artefact to be encoded (she moves from
    transcription to analytic & interpretative information via linking,
    segmentation & alignment). She, I believe, points the way to more markup
    and is considerate of the burdens this might impose upon an encoder or a
    team of encoders. She writes:

    For many features in the header, one chooses between describing the
    information in prose text and using a subset of elements that gives more
    granularity to the information. The latter format is more suitable for
    computer processing, but users so far have tended to prefer the prose
    format, as it is simpler for them. Software to encourage the use of the
    non-prose format would be helpful.
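    To make the contrast concrete, here is a minimal sketch of the same
    source description encoded both ways in a TEI header. The element names
    follow the TEI Guidelines, but the bibliographic details are invented
    for illustration:

    ```xml
    <!-- Prose form: simple for the encoder, opaque to software -->
    <sourceDesc>
      <p>Transcribed from the first edition, published in London in 1850.</p>
    </sourceDesc>

    <!-- Granular form: the same information as discrete elements -->
    <sourceDesc>
      <biblStruct>
        <monogr>
          <title>[title of the source]</title>
          <imprint>
            <pubPlace>London</pubPlace>
            <date>1850</date>
          </imprint>
        </monogr>
      </biblStruct>
    </sourceDesc>
    ```

    The granular form lets software retrieve the date or place of
    publication directly; the prose form can only be read, or parsed
    heuristically.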

    In the years since the publication of _The Literary Text in the Digital
    Age_, I do recall a WWW interface becoming available: a form which helps
    encoders generate TEI headers. I wonder, however, whether the state of
    affairs described by Susan Hockey persists. I wonder whether what is
    needed is not just software, which in essence would be a prompter, but
    an actual human reviewer akin to a librarian.

    I'm not suggesting a tool versus human contact dichotomy. Without the
    technology there can be no discourse about its appropriate appropriation.

    The machine as textbook is connected, in the networked universe, to the
    machine as communications device, putting users, as co-explorers, in
    touch with each other and with more experienced explorers.

    The fundamental question may be one of the social determination of
    acceptable levels of granularity. One can think of search engine displays
    of keywords in context as mitigating the need for granularity. But
    searching is not the only type of processing. How many encoders have asked
    themselves what they want done with the texts they encode? How many have
    answered that all they want is an ability to represent the texts where
    "representation" is taken in its restricted sense of "render"?

    Granularity, it seems, stems from a desire to map. Mapping deals in
    parts. To anatomize may be an old word for encode. And both, I wonder,
    may be forms of abstraction. See again:

    Francois Lachance, Scholar-at-large
    per Interactivity ad Virtuality via Textuality

    This archive was generated by hypermail 2b30 : Tue Jul 02 2002 - 02:07:56 EDT