Humanist Discussion Group

Humanist Archives: June 21, 2020, 7:11 a.m. Humanist 34.127 - annotating notation

                  Humanist Discussion Group, Vol. 34, No. 127.
            Department of Digital Humanities, King's College London
                   Hosted by King's Digital Lab
                       www.dhhumanist.org
                Submit to: humanist@dhhumanist.org


    [1]    From: philomousos@gmail.com
           Subject: Re: [Humanist] 34.125: annotating notation (23)

    [2]    From: Robert Delius Royar 
           Subject: 3D annotating notation (38)

    [3]    From: Desmond Schmidt
           Subject: Re: [Humanist] 34.125: annotating notation (39)


--[1]------------------------------------------------------------------------
        Date: 2020-06-21 00:56:16+00:00
        From: philomousos@gmail.com
        Subject: Re: [Humanist] 34.125: annotating notation

I'm afraid I can't parse Desmond Schmidt's response, which seems
intended not to answer my questions but to divert our attention to
modes of attack on a technology I last remember using 15 years ago
(and didn't think much of then). Silence is probably best.

Peter Robinson's further explanation of the Textual Communities system reminds
me rather of the Homer Multitext project
(http://www.homermultitext.org/#_navigation) with which I think he shares some
aims, namely digital editions that are an exploration of the shape of a textual
tradition rather than an editor's distillation of that tradition. Casey
Dué's book Achilles Unbound
(https://chs.harvard.edu/CHS/article/display/6910) is a nicely accessible recent
discussion of aspects of the Homeric tradition, and talks about some of the
nontechnical parts of the project. More technical detail is available in their
blog posts. As I recall, the project treats documents as "ordered hierarchies
of citable objects", with identifiers allocated down to the token level, and
uses RDF to describe relations between the citations, so that the use of (e.g.)
formulas can be tracked, and the ways the texts are reshaped in different
versions can be described. HMT, incidentally, does an admirable job of engaging
undergraduate students in research, one of many reasons I am a fan of it.
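
To give a flavour of that citation scheme, here is a minimal sketch
(Python with rdflib; the URNs follow the CTS convention but, like the
predicate name, are my own illustration rather than HMT's actual
data):

    from rdflib import Graph, Namespace, URIRef

    # CTS-style URNs citing the same verse in two witnesses; the form
    # is urn:cts:<namespace>:<group>.<work>.<version>:<passage>, but
    # these particular values are illustrative.
    line_in_msA = URIRef("urn:cts:greekLit:tlg0012.tlg001.msA:1.1")
    line_in_msB = URIRef("urn:cts:greekLit:tlg0012.tlg001.msB:1.1")

    # An invented predicate relating two citations of the same verse.
    REL = Namespace("http://example.org/relations/")

    g = Graph()
    g.add((line_in_msA, REL.sameVerseAs, line_in_msB))

    # Ask: which citations in other witnesses match this verse in msA?
    for _, _, other in g.triples((line_in_msA, REL.sameVerseAs, None)):
        print(other)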

All the best,
Hugh

--[2]------------------------------------------------------------------------
        Date: 2020-06-20 19:21:56+00:00
        From: Robert Delius Royar 
        Subject: 3D annotating notation

As has been noted, the discussion of how best to annotate and
represent digital artifacts for scholarly purposes has been entangled
in the language and viewpoint of books. Further, it has tended to
emphasize traditional 2D elements.

So, I would like to know about efforts in digital humanities to
provide 3D-printed facsimiles of artifacts such as the Hengwrt Chaucer
(since it has been raised in a description of existing, practical
applications).

Reading the discussion in this thread made me wonder whether we might
already have these types of facsimile works (as I believe there are
for archaeology), and if we do, how the annotation/markup system would
handle erasure (e.g. on vellum) and other emendations/degradations.

One source I found that discusses these issues is Dahlström, M.,
"Copies and Facsimiles," International Journal of Digital Humanities 1
(2019): 195-208:

"Several recent editing projects even go to considerable lengths to
accommodate the need for and interest in graphical information about the
source documents, and they display the entire source document, as it were,
i.e. not just the sections of the document bearing text, but also covers,
margins, blank pages, etc. In fact, this is an area in which we are only
beginning to take the first steps to go beyond the textual transcription
and the 2D flat graphical reproduction to represent the source document and
to provide a large array of access and views: 3D simulations of the
material object or minute photographs down to a microscopic, molecular
level to serve analyses of cellulose, skin nerves, and fibers (Björk 2015,
197). And in the other direction, vast amounts of abstracted information in
the form of linked data to serve various kinds of work at the macro level."
https://link.springer.com/article/10.1007/s42803-019-00017-5

--
               Robert Delius Royar
 Caught in the net since 1985


--[3]------------------------------------------------------------------------
        Date: 2020-06-20 10:08:49+00:00
        From: Desmond Schmidt
        Subject: Re: [Humanist] 34.125: annotating notation

Peter,

This is more informative, but it returns to things I never understood
about the Kahn-Wilensky object identifiers mentioned in your papers on
the subject. I have a few questions about them which I hope you will
now answer.

1. Your identifiers are key-value pairs separated by colons. Where do
these key names such as "entity", "part", "line", "linespace" come
from? You say you infer their values by reading the n-attribute of
certain elements, but it is unclear to me where all the key names come
from. For example, how exactly do you know you are in a "part" called
"GP" or in a "document" called "Canterbury Tales"? Are these key-names
determined arbitrarily by your XML ingestion program or is there some
way to specify them more generally?
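
To make the question concrete, here is my guess at what such an
identifier might look like (a Python sketch; only the key names come
from your papers, while the "=" separator and the values are my own
assumptions):

    # A guessed Textual Communities-style identifier; the key names
    # ("document", "part", "line") are from the papers, everything
    # else here is my own assumption.
    ident = "document=Canterbury Tales:part=GP:line=72"

    fields = dict(pair.split("=", 1) for pair in ident.split(":"))
    print(fields["part"])   # -> "GP"

    # Parsing is the easy part; the open question is how the ingestion
    # program decides that a given XML element contributes a "part"
    # key rather than, say, a "linespace" key.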

2. Is it true that, for your model to work, the values of various
properties that become parts of the identifier have to be added to the
XML file manually beforehand? If so, that would seem to be a large
up-front cost before new texts can be properly ingested into the
system. For example, you are saying that individual lines are labelled
in both the document and entity trees, and the line numbers of those
two trees often will not correspond, so you will have to label all of
them manually, no?
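
For example (a sketch of my own, not your actual markup, with an
invented ent_n attribute), every line would seem to need two
hand-supplied numbers wherever the trees diverge:

    import xml.etree.ElementTree as ET

    # My own sketch: @n carries the document-tree line number, and an
    # invented @ent_n attribute carries the entity-tree number, added
    # by hand wherever the two numberings fail to correspond.
    sample = """<page n="1r">
      <line n="1" ent_n="1">Whan that Aprill with his shoures soote</line>
      <line n="2" ent_n="3">The droghte of March hath perced to the roote</line>
    </page>"""

    for line in ET.fromstring(sample).iter("line"):
        print(line.get("n"), line.get("ent_n"))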

3. Are the identifiers for entities ever used? Isn't it the case that,
for a query about the location of some text in a particular
manuscript, or about the number of manuscripts that contain a
particular line, all you need to do is query the text-fragments
collection, which, being at the bottom of the tree, is the most fully
specified?
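
In other words (under my own assumptions about the data model, with
invented field names), why would one ever need more than something
like this?

    # Invented fragment records: the bottom-of-the-tree objects carry
    # the fullest identifiers, so on my reading every positional query
    # can be answered from them alone.
    fragments = [
        {"document": "Hengwrt",   "part": "GP", "line": 1,
         "text": "Whan that Aprill ..."},
        {"document": "Ellesmere", "part": "GP", "line": 1,
         "text": "Whan that Aprill ..."},
    ]

    # Which manuscripts have GP line 1, and where?
    hits = [f for f in fragments if f["part"] == "GP" and f["line"] == 1]
    print(len(hits), [f["document"] for f in hits])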

4. Aren't the key components of your reference system themselves
subject to variation? What happens if a line is split into two in one
manuscript, or if two sections are merged into one? You seem to assume
that there is a global identification system with a fixed granularity
across the work that can be used in every document that represents it.
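
Concretely (again a sketch of my own): a fixed global numbering forces
a one-to-many mapping as soon as one witness splits a line:

    # Illustrative only: a global line identifier presumes one-to-one
    # alignment across witnesses, but a witness that splits a line in
    # two leaves the global ID pointing at a one-to-many mapping.
    alignment = {
        "work:GP:line3": {"MS-A": ["line3"],
                          "MS-B": ["line3a", "line3b"]},  # split line
    }
    print(alignment["work:GP:line3"]["MS-B"])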

Desmond



_______________________________________________
Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php


Editor: Willard McCarty (King's College London, U.K.; Western Sydney University, Australia)
Software designer: Malgosia Askanas (Mind-Crafts)

This site is maintained under a service level agreement by King's Digital Lab.