14.0449 XML and the Web

From: by way of Willard McCarty (willard@lists.village.Virginia.EDU)
Date: 10/30/00


                   Humanist Discussion Group, Vol. 14, No. 449.
           Centre for Computing in the Humanities, King's College London
                   <http://www.princeton.edu/~mccarty/humanist/>
                  <http://www.kcl.ac.uk/humanities/cch/humanist/>
    
    
    
             Date: Mon, 30 Oct 2000 07:01:49 +0000
             From: "Fotis Jannidis" <fotis.jannidis@lrz.uni-muenchen.de>
             Subject: Re: 14.0440 hypertext and the Web and XML
    
      > WM:
      > A question to those who understand XML: to what extent will it allow us
      > users of the Web to get the benefit of this CS research, which is now
      > effectively out of reach? In "Open Hypermedia as User Controlled Meta Data
  > for the Web", Kaj Grønbæk, Lennert Sloth and Niels Olof Bouvin (Aarhus)
      > describe "a mechanism [built on XML]... for users or groups of users to
      > control and generate their own meta data and structures", e.g. "user
      > controlled annotations and structuring which can be kept separate to the
      > documents containing the information content". If I understand the import
      > of what these fellows are saying, this would mean that people like us could
      > build far more adequate scholarly forms (editions, commentaries et al.)
      > online. Or am I misreading?
    
    It seems to me that for now, and for some time to come, XML won't
    change the visible side of the net, because most XML users use XML
    on the server but serve HTML files to the clients. They may switch to
    serving XHTML, the XML-conformant version of HTML, but this won't
    change the rather sad state of affairs concerning the interoperability
    of openly accessible scholarly editions on the net. As long as one
    cannot access the XML structure of an edition from the outside, but
    gets only the data chunks that fit into a browser window, the whole
    power of XPointer, XLink and XPath can't be used.
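    To make the point concrete, here is a minimal sketch of the kind of
    structural query XPath allows when the XML itself is exposed. The
    TEI-like document and its attribute values are invented for
    illustration, not taken from any real edition; the example uses
    Python's standard ElementTree library and its limited XPath subset.

```python
import xml.etree.ElementTree as ET

# A hypothetical fragment of an edition, structured rather than
# flattened into presentation HTML.
edition = """
<text>
  <body>
    <div type="poem">
      <l n="1">First line of the poem</l>
      <l n="2">Second line of the poem</l>
    </div>
  </body>
</text>
"""

root = ET.fromstring(edition)

# An XPath-style query against the structure: fetch line 2 of the poem.
# A query like this is only possible if the client can reach the XML,
# not merely the HTML chunks that fit into a browser window.
line = root.find('.//div[@type="poem"]/l[@n="2"]')
print(line.text)
```

    If the server delivered only rendered HTML, no such query could be
    addressed to the edition from outside.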
    
    At the moment it seems to me that we need some kind of
    information framework which allows this kind of access to structural
    data, and/or an XPath-sensitive retrieval function - which is by itself
    only usable if you know the structure of the data - and some
    common layer of metadata. The last isn't strictly necessary, but would
    make things much easier. Even if you know that the text you want to
    link your comments to is tagged according to TEI, you probably need
    a description of the exact use of the tags to use XPath effectively.
    This is not a technical problem: if a text is encoded in such a way
    that every sentence, verse or even word has its own id, you can set
    a link to this id and the markup in your comment is correctly
    encoded. BUT: a reader of your comment has no way to follow this
    link to the text if the website of the edition doesn't allow you to
    retrieve text via the encoded id. And because of the separation
    between XML on the server and HTML in the client I mentioned above,
    even if the text is encoded with ids for every line, this encoding is
    not visible on the net.
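    The missing piece, in other words, is a server-side lookup from an
    encoded id to the text it identifies. A minimal sketch of such a
    lookup, with an invented two-line document and a hypothetical
    helper name:

```python
import xml.etree.ElementTree as ET

# A hypothetical edition in which every line carries its own id,
# exactly the encoding the letter above describes.
edition = """
<text>
  <l id="s1">Every sentence has its own id</l>
  <l id="s2">so a comment can link straight to it</l>
</text>
"""

root = ET.fromstring(edition)

def retrieve_by_id(root, target_id):
    # The resolution step a commentator's link depends on:
    # map an encoded id to the chunk of text it identifies.
    return root.find('.//*[@id="%s"]' % target_id)

elem = retrieve_by_id(root, "s2")
print(elem.text)
```

    Without some such service on the edition's website, a link to "s2"
    in a published comment is a dead end, however carefully the text
    itself was encoded.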
    
    I think this is the next task for the scholarly community, maybe even
    for the TEI consortium: to design a framework which solves these
    problems. It would be an addition to the existing TEI, or rather a
    framework around it. Maybe some people are already working on
    this; I would be very interested in hearing about it.
    
    Fotis Jannidis
    



    This archive was generated by hypermail 2b30 : 10/30/00 EST