16.513 data modelling for a history of the book

From: Humanist Discussion Group (by way of Willard McCarty willard.mccarty@kcl.ac.uk)
Date: Fri Feb 28 2003 - 08:33:08 EST

                   Humanist Discussion Group, Vol. 16, No. 513.
           Centre for Computing in the Humanities, King's College London
                         Submit to: humanist@princeton.edu

       [1] From: John Unsworth <jmu2m@virginia.edu> (9)
             Subject: GIS in print culture studies

       [2] From: Elisabeth Burr <Elisabeth.Burr@uni-duisburg.de> (111)
             Subject: Re: 16.509 data modelling for a history of the book?

             Date: Fri, 28 Feb 2003 07:56:22 +0000
             From: John Unsworth <jmu2m@virginia.edu>
             Subject: GIS in print culture studies


    Possibly relevant to your recent query on Humanist about tools for studying
    the history of the book:

    Using GIS for Spatial and Temporal Analyses in Print Culture Studies:
                                   Some Opportunities and Challenges

                                    Bertrum H. MacDonald and Fiona A. Black

    Social Science History 24.3 (2000) 505-536



    434-825-2969 | jmu2m@virginia.edu | http://www.iath.virginia.edu/~jmu2m/

             Date: Fri, 28 Feb 2003 07:58:28 +0000
             From: Elisabeth Burr <Elisabeth.Burr@uni-duisburg.de>
             Subject: Re: 16.509 data modelling for a history of the book?

    I have just taught a course on the influence of media revolutions on the
    conception of language. A lot of the time was devoted to the beginnings
    (the invention and development of writing systems, manuscript culture and
    the advent of the printing press); current developments were sketched out
    at the end of the course. I mention this because these experiences colour
    what I am going to say.

    In my view, the ideal would be to retain as much as possible of the
    information the individual book provides, or seems to provide, to us as
    observers / users, while allowing, at the same time, for the possibility
    that this information might change in the future, as the book comes to
    stand in a different position with respect to new cultural objects.

    As this is not feasible outright, the best approach would seem to be to
    start such a project with a particular teaching aim in mind and to make
    the information to be retained dependent on your needs:

    The object of the course named above being the conception of language /
    theory of language, the text is certainly important when it comes to books
    like Tory's Champ fleury, which propose not just a certain theory of
    letters but a theory of language, too.

    With other books, above all manuscripts or early prints of vernacular
    (literary) texts, say, for example, Dante's Divine Comedy, the text itself
    would be less important in my context. More important would be the printed
    objects themselves, their contribution to the development of the structure
    of the printed book (front pages, indices, page numbering etc.) and the
    role they played in the Italian linguistic, cultural and political
    situation and in the development of a linguistic norm for the vernacular
    languages. This means that, in this case, it is more important that I can
    show pictures of the material object to my students.

    With 19C printed books the teaching aim could very well be to get students
    to look at books as objects providing manifold information and standing in
    a complex interplay of different types of context (see the relational
    database; I would add information about the topic of the books straight
    away). When they look at the book itself (I think they will eventually
    feel the need to do so) or come across books of the 19C, they will be able
    to look at them in a contextualised way from the start and then explore
    them further in a scholarly way.

    To give an example: in my course on early grammatical descriptions of
    Romance languages I also introduce students to the forerunners / models of
    vernacular grammars, such as Donatus and Priscianus. One of the students
    went to an exhibition on Venice (if I recall rightly) and then wrote me an
    e-mail saying that she had seen the Institutiones of Priscianus there. Had
    it not been for this course, she would probably not have noticed it at
    all, or she would have seen it merely as one object among others, i.e.
    without being able to put it into a particular context. Naturally, this
    would not be enough for scholars of Latin grammars; they would want to be
    able to analyse the text.

    The result of such a project could then be made available to others in such
    a way that they can add whatever other information they need for their
    teaching aims. This collaborative effort would make reaching the ideal more
    feasible in the long run.

    Elisabeth Burr

    At 07:39 26.02.03 +0000, you wrote:
    > Humanist Discussion Group, Vol. 16, No. 509.
    > Centre for Computing in the Humanities, King's College London
    > www.kcl.ac.uk/humanities/cch/humanist/
    > Submit to: humanist@princeton.edu
    > Date: Wed, 26 Feb 2003 07:27:56 +0000
    > From: Willard McCarty <willard.mccarty@kcl.ac.uk>
    > >
    >Suppose that you were designing a computational basis for studying the
    >history of the book in 19C England. What sort of data model(s) would you
    >use? A relational database would seem right for several aspects of such a
    >study: authors, paper-makers, printers, book-sellers, binders and, of
    >course, suitably tabular facts about the books themselves, such as titles,
    >dates of publication, number of pages etc. Would the text of the books be
    >of interest? (Let us for the moment ignore how much work would be involved,
    >e.g. in obtaining those contents.) If yes, then how would these contents be
    >modelled?
    >Furthermore, what would one do about the non-verbal aspects -- layout (of
    >the book *opening*, not just individual pages), design, typography, colour,
    >binding and so forth? How about the heft of the thing? Imaging can record
    >the visual aspects, allowing us to infer some others, but images (as
    >snapped by the digital camera) are not subject to automatic analytical
    >processing.
    >Many of us, I suspect, will be aware of the "complete encoding" fallacy --
    >the idea that it is possible completely to encode a verbal artifact. (One
    >imagines the equivalent of a typing pool, a vast factory filled with
    >text-encoders processing all works of literature in all languages....) This
    >is closely related to the mimetic fallacy -- the idea that a digitized
    >version will be able to replace its non-digital original. So we avoid the
    >extension of these fallacies to the current question, allowing that images
    >will show that which at the design stage we do not know might be designed
    >for. The student of 19C English book history will still need to be looking
    >through the books for whatever catches his or her eye. Certain developments
    >with image-processing suggest that a machine may at some point be able to
    >throw up examples of an automatically discerned pattern, which would be
    >a great help.
    >But at this stage what is the next step beyond the juxtaposed image and
    >descriptive text? Do we, for example, image-map the visual object to attach
    >hypertextual commentary? Do we record the location of objects (such as
    >marginalia, doodles, typographic devices etc.) within the book-opening so
    >that we may compute with them? Or is all this vanity and a vexation of the
    >spirit?
    >Dr Willard McCarty | Senior Lecturer | Centre for Computing in the
    >Humanities | King's College London | Strand | London WC2R 2LS || +44 (0)20
    >7848-2784 fax: -2980 || willard.mccarty@kcl.ac.uk
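
    One concrete reading of the quoted query's last suggestion (recording the
    location of marginalia, doodles, typographic devices etc. within the
    book-opening so that one may compute with them) would be to store each
    object with opening-relative coordinates. The class and field names below
    are illustrative assumptions, not a scheme proposed on the list:

```python
from dataclasses import dataclass

@dataclass
class OpeningObject:
    """An object (marginalium, doodle, device...) located in a book-opening.

    Coordinates are fractions of the opening's width and height, so that
    positions remain comparable across differently sized page images.
    """
    book_id: int
    opening: int   # opening number, counting verso/recto pairs
    kind: str      # e.g. 'marginalia', 'doodle', 'typographic device'
    x: float       # 0.0 = left edge of verso, 1.0 = right edge of recto
    y: float       # 0.0 = top of the opening, 1.0 = bottom

def in_gutter(obj: OpeningObject, margin: float = 0.05) -> bool:
    """Does the object sit near the central fold of the opening?"""
    return abs(obj.x - 0.5) <= margin

objs = [
    OpeningObject(1, 12, 'marginalia', 0.93, 0.40),  # outer recto margin
    OpeningObject(1, 12, 'doodle', 0.51, 0.80),      # just right of the fold
]
print([in_gutter(o) for o in objs])  # [False, True]
```

    Once locations are stored this way, queries like "all marginalia in the
    lower outer margins of this edition" become simple computations rather
    than page-by-page inspection.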

    HD Dr. Elisabeth Burr
    Fakultät 2 / Romanistik
    Geibelstr. 41
    D-47058 Duisburg


    This archive was generated by hypermail 2b30 : Fri Feb 28 2003 - 10:39:56 EST