14.0282 primitives

From: by way of Willard McCarty (willard@lists.village.Virginia.EDU)
Date: 09/29/00


                   Humanist Discussion Group, Vol. 14, No. 282.
           Centre for Computing in the Humanities, King's College London
                   <http://www.princeton.edu/~mccarty/humanist/>
                  <http://www.kcl.ac.uk/humanities/cch/humanist/>
    
    
    
             Date: Fri, 29 Sep 2000 09:17:45 +0100
             From: Willard McCarty <willard.mccarty@kcl.ac.uk>
             Subject: primitives
    
    My question about primitives was based on two convictions: (1) that,
    methodologically, computing humanists across the disciplines occupy common
    ground, and (2) that the development of computing tends increasingly toward
    putting the ability to make things into the hands of ordinary users. It
    seems to me that both convictions are at work in the many tool-building
    enterprises to which references have so far been made.
    
    The common ground seems not to be visible to everyone; I'd suppose that an
    interdisciplinary vantage point is required in order to see it. In my
    experience those who look on applied computing solely from within a single
    discipline have great difficulty seeing it; to them methodology is so
    subservient to the artefacts over which their disciplines have (or claim)
    dominion that it ceases to have meaning otherwise. Admittedly, the
    geography of our common ground is still not well defined, and much remains
    to be done in transferring constructive ability to the "end user". But are
    these not genuine problems for our research agenda?
    
    Wilhelm Ott (by example, in Humanist 14.262) and John Bradley (14.272) are
    surely right that, in Bradley's words, "Any tool meant to support
    activities as diverse as those that turn up in humanities text-based
    computing cannot possibly be trivial to learn or use", but I don't think
    that trivialising is necessarily involved in identifying and simplifying
    basic operations, in making them (as we say) more accessible to scholars
    than they currently are. I'd think that the objective of our
    system-building isn't to remove the need for intellectual labour but
    rather to remove that component of scholarly work which is mechanically
    definable, and therefore trivial (in the mathematician's sense).
    
    Ian Lancashire suggests that "one of the unforeseen effects of relying on
    professional programmers to create big pieces of software like TACT and
    Wordcruncher [might be] to encourage scholars in the humanities to believe
    that they can get along without being able to write small programs or adapt
    ones created by other people" (14.277). Yes, indeed. Our colleagues
    sometimes do approach computing with the expectation that everything may be
    accomplished at the touch of a button. It seems to me, however, that we're
    not in disagreement fundamentally, rather arguing about the level of
    granularity at which we humanists work with computers. Does this
    necessarily and forever need to be at the level of, say, SNOBOL or Pascal
    or Perl or Java? Once, I recall, it was at the level of assembler language;
    I clearly remember arguments to the effect that unless you understood what
    commands like "shift left accumulator" did you could not grasp the essence
    of computing....
    
    So also is Wendell Piez surely right, that "Every lens comes with its
    blindness, and as we design these capabilities into systems, by deciding
    what we want to look at, we will also be deciding what we don't care to
    see.... [G]reat works of literature will continue to evade whatever
    structures we impose on them, just as they always have...." (14.277). I'd
    argue that the point of what we do is to raise better questions than the
    ones we have, and only incidentally to come up with (tentative, temporary)
    answers, and that those of us who use computers in scholarship raise such
    questions by showing what can be known computationally -- and therefore
    what we cannot now know, or cannot yet say how we know.
    
    In an online document he recommended to our attention, Wilhelm Ott
    describes the conclusion he reached about 30 years ago after work on some
    major textual projects: that "the next step in supporting projects was to
    no longer rely on programming in FORTRAN or other 'high level' languages,
    but to provide a toolbox consisting of programs, each covering one 'basic
    operation' required for processing textual data. The function of each
    program is controlled by user-supplied parameters; the programs themselves
    may be combined in a variety of ways, thereby allowing the user to
    accomplish tasks of the most diversified kind. It was in 1978 when these
    programs got the name TUSTEP"
    (http://www.uni-tuebingen.de/zdv/tustep/tdv_eng.html#b). The TUSTEP toolbox
    defines these basic operations: editing, comparing, processing text,
    preparing indexes, presorting, sorting, generating indexes and
    concordances, generating listings, typesetting (with some file-handling and
    job-control functions as well). Much more recently, John Unsworth in a talk
    given at King's College London (forthcoming in print from the Office of
    Humanities Communication; online at
    <http://jefferson.village.virginia.edu/~jmu2m/Kings.5-00/primitives.html>)
    identified seven scholarly primitives, likewise based on close interdisciplinary
    observation and project work -- discovering, annotating, comparing,
    referring, sampling, illustrating and representing -- as the basis for a
    tool-building enterprise. He illustrated these primitives with work that
    has been going on for some time at IATH.
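    
    To make the flavour of such a toolbox concrete, a minimal sketch in
    Python -- my illustration only, not TUSTEP itself, with every name and
    the file "sample.txt" invented for the purpose -- might run as follows.
    Each "basic operation" is a small function whose behaviour is controlled
    by user-supplied parameters, and the operations combine into pipelines:
    
        import re
        from collections import Counter
    
        # Each "basic operation" is a small, parameter-driven function that
        # consumes and returns plain data, so operations can be chained in
        # whatever order a task requires.
    
        def tokenize(text, pattern=r"[A-Za-z]+"):
            # basic operation: split a text into word tokens
            return re.findall(pattern, text)
    
        def presort(tokens, fold_case=True):
            # basic operation: normalise tokens before sorting or indexing
            return [t.lower() for t in tokens] if fold_case else list(tokens)
    
        def generate_index(tokens):
            # basic operation: build a word-frequency index
            return Counter(tokens)
    
        def generate_listing(index, limit=10):
            # basic operation: render the index as a formatted listing
            for word, n in index.most_common(limit):
                print(f"{word:<20}{n:>6}")
    
        # The user accomplishes a task by combining the operations:
        text = open("sample.txt", encoding="utf-8").read()
        generate_listing(generate_index(presort(tokenize(text))))
    
    Nothing here is meant definitively; the point of the sketch is only the
    architecture -- small parameterised operations, indifferent to one
    another, composed by the user.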
    
    Contemplating Ott's TUSTEP and Unsworth's thoughts about primitives, I
    wonder if what we're seeing here is two parts of a much larger design,
    proceeding from universality to specificity. In broad terms, could it be
    said that a "methodological macro" like Unsworth's "discovering" or
    "annotating" comprises "methodological primitives" like "consult lexicon",
    "search WWW" and "recognise image patterns", each of which in turn
    comprises "mechanical primitives", and so forth?
    
    I see us facing a real-world problem that in turn gives us a very
    challenging research agenda. This problem continues to be the one Wilhelm
    Ott, Susan Hockey and others faced many years ago: how best, collegially,
    to support research in the humanities with computers -- in other words, how
    to get more of our non-technically inclined colleagues intellectually
    involved in applied computing -- if for no other reason than the creative
    imaginations they will bring into the increasingly complex equation. Do we
    tell them to learn C++ or whatever, or do we work with them to define a
    better research pidgin? As a search of the Web shows, talk about
    "primitives" is common parlance among cognitive scientists and philosophers
    who (like Jerry Fodor) think about how we think and do intellectual work,
    and among the system builders who design and make the software prototypes.
    Modelling, which is what one does with primitives, is a very active topic
    -- see the Model-Based Reasoning Conference advertised in Humanist 14.269.
    There is much going on for us to tap into, and much that has already gone
    on which we need to take account of.
    
    Yours,
    WM
    -----
    Dr Willard McCarty / Centre for Computing in the Humanities /
    King's College London / Strand / London WC2R 2LS / U.K. /
    voice: +44 (0)20 7848-2784 /  fax: +44 (0)20 7848-2980 /
    ilex.cc.kcl.ac.uk/wlm/
    maui gratias agere
    


