16.509 data modelling for a history of the book?

From: Humanist Discussion Group (by way of Willard McCarty willard.mccarty@kcl.ac.uk)
Date: Wed Feb 26 2003 - 02:39:35 EST

                   Humanist Discussion Group, Vol. 16, No. 509.
           Centre for Computing in the Humanities, King's College London
                       www.kcl.ac.uk/humanities/cch/humanist/
                         Submit to: humanist@princeton.edu

             Date: Wed, 26 Feb 2003 07:27:56 +0000
             From: Willard McCarty <willard.mccarty@kcl.ac.uk>
             Subject: data modelling for a history of the book?

    Suppose that you were designing a computational basis for studying the
    history of the book in 19C England. What sort of data model(s) would you
    use? A relational database would seem right for several aspects of such a
    study: authors, paper-makers, printers, book-sellers, binders and, of
    course, suitably tabular facts about the books themselves, such as titles,
    dates of publication, number of pages etc. Would the text of the books be
    of interest? (Let us for the moment ignore how much work would be involved,
    e.g. in obtaining those contents.) If yes, then how would these contents be
    handled?
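
    The relational side of such a study can at least be sketched concretely. The
    following is a minimal, illustrative sketch only -- the table and column
    names, and the sample rows, are my own assumptions, not a recommended
    schema:

    ```python
    import sqlite3

    # Illustrative relational sketch for 19C book-history data.
    # Table/column names and sample data are hypothetical.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE author  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE printer (id INTEGER PRIMARY KEY, name TEXT, city TEXT);
    CREATE TABLE book (
        id INTEGER PRIMARY KEY,
        title TEXT,
        publication_year INTEGER,
        page_count INTEGER,          -- sample value below is illustrative
        author_id INTEGER REFERENCES author(id),
        printer_id INTEGER REFERENCES printer(id)
    );
    """)
    conn.execute("INSERT INTO author  VALUES (1, 'Charles Dickens')")
    conn.execute("INSERT INTO printer VALUES (1, 'Bradbury and Evans', 'London')")
    conn.execute("INSERT INTO book    VALUES (1, 'Hard Times', 1854, 352, 1, 1)")

    # Once the facts are tabular, they become queryable: books per printer,
    # page counts by decade, author-printer networks, and so on.
    row = conn.execute("""
        SELECT b.title, a.name, p.name
        FROM book b
        JOIN author  a ON b.author_id  = a.id
        JOIN printer p ON b.printer_id = p.id
    """).fetchone()
    print(row)  # ('Hard Times', 'Charles Dickens', 'Bradbury and Evans')
    ```

    Binders, paper-makers and book-sellers would join the same way; the open
    question above -- the full text of the books -- is precisely what such a
    tabular model does not capture.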

    Furthermore, what would one do about the non-verbal aspects -- layout (of
    the book *opening*, not just individual pages), design, typography, colour,
    binding and so forth? How about the heft of the thing? Imaging can record
    the visual aspects, allowing us to infer some others, but images (as
    snapped by the digital camera) are not subject to automatic analytical
    probing.

    Many of us, I suspect, will be aware of the "complete encoding" fallacy --
    the idea that it is possible completely to encode a verbal artifact. (One
    imagines the equivalent of a typing pool, a vast factory filled with
    text-encoders processing all works of literature in all languages....) This
    is closely related to the mimetic fallacy -- the idea that a digitized
    version will be able to replace its non-digital original. So we avoid the
    extension of these fallacies to the current question, allowing that images
    will show that which, at the design stage, we do not know might be designed
    for. The student of 19C English book history will still need to look
    through the books for whatever catches his or her eye. Certain developments
    with image-processing suggest that a machine may at some point be able to
    throw up examples of an automatically discerned pattern, which would be
    helpful.

    But at this stage what is the next step beyond the juxtaposed image and
    descriptive text? Do we, for example, image-map the visual object to attach
    hypertextual commentary? Do we record the location of objects (such as
    marginalia, doodles, typographic devices etc.) within the book-opening so
    that we may compute with them? Or is all this vanity and a vexation of the
    spirit?
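
    Recording locations within the book-opening might, at its simplest, look
    like the sketch below -- a purely hypothetical data structure, with made-up
    coordinates and names, assuming positions measured in millimetres from the
    top-left of the opening:

    ```python
    from dataclasses import dataclass

    # Hypothetical sketch: record the position of objects (marginalia, doodles,
    # typographic devices) within a book-opening so we may compute with them.
    # All names, units and sample values here are illustrative assumptions.

    @dataclass
    class OpeningObject:
        kind: str      # e.g. 'marginalium', 'doodle', 'ornament'
        page: str      # 'verso' or 'recto' within the opening
        x: float       # mm from top-left of the page
        y: float
        width: float
        height: float

    objects = [
        OpeningObject('marginalium', 'recto', 150.0, 40.0, 20.0, 60.0),
        OpeningObject('ornament',    'verso',  60.0, 10.0, 40.0, 15.0),
        OpeningObject('doodle',      'recto', 155.0, 120.0, 25.0, 25.0),
    ]

    def in_outer_margin(obj, page_width=170.0, margin=30.0):
        """Crude test: does the object sit in the outer margin of its page?"""
        if obj.page == 'recto':
            return obj.x >= page_width - margin
        return obj.x + obj.width <= margin

    # A simple spatial query: which objects occupy the outer margins?
    outer = [o.kind for o in objects if in_outer_margin(o)]
    print(outer)  # ['marginalium', 'doodle']
    ```

    Even so crude a model lets one ask positional questions across a corpus --
    which is one answer, at least, to whether such recording is mere vanity.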

    Yours,
    WM

    Dr Willard McCarty | Senior Lecturer | Centre for Computing in the
    Humanities | King's College London | Strand | London WC2R 2LS || +44 (0)20
    7848-2784 fax: -2980 || willard.mccarty@kcl.ac.uk
    www.kcl.ac.uk/humanities/cch/wlm/



    This archive was generated by hypermail 2b30 : Wed Feb 26 2003 - 02:44:45 EST