14.0039 report on Colloquium at King's College London

From: Humanist Discussion Group (willard@lists.village.virginia.edu)
Date: Sat May 27 2000 - 08:34:19 CUT

                    Humanist Discussion Group, Vol. 14, No. 39.
           Centre for Computing in the Humanities, King's College London
                   <http://www.princeton.edu/~mccarty/humanist/>
                  <http://www.kcl.ac.uk/humanities/cch/humanist/>

             Date: Sat, 27 May 2000 09:06:20 +0100
             From: John Lavagnino <John.Lavagnino@kcl.ac.uk>
             Subject: Humanities computing colloquium at King's College London: a report

    On May 13, the Centre for Computing in the Humanities at King's
    College London held a colloquium entitled "Humanities Computing:
    Formal Methods, Experimental Practice": an occasion to think about the
    subject's fundamentals once again, and consider how we might best
    conceptualize the application of computing to the humanities. The
    colloquium's speakers included a philosopher of science, a sociologist
    of science, a literary critic, a theoretician and philologist, and a
    director of a humanities computing research institute; the plan was to
    consider analogies with scientific practice, and perspectives from the
    history, sociology, and philosophy of science. These are areas of
    study that have been very active intellectually in recent years, and
    they did turn out to offer many valuable ideas.

    Current work in a number of scientific fields, notably artificial
    intelligence and evolutionary psychology, purports to overlap with
    some aspects of the humanities. But scholars in the humanities
    generally remain unconvinced that these fields offer much of interest
    to them. It is still the case that the project of artificial
    intelligence, that of getting computers to exhibit or understand human
    behaviour, is an extremely difficult one that has not progressed very
    far. That was one point that Harry Collins, of the University of
    Cardiff, made in his paper. He borrowed an analogy from Hubert
    Dreyfus: to say that AI has progressed a long way is like standing on
    a chair and saying you've made real progress in getting to the
    moon. Machines have problems above all in dealing with ambiguity, in
    understanding what words or images mean in the light of their context;
    ambiguities that we negotiate without effort in daily life remain
    difficult or impossible for computers to manage, and those everyday
    ambiguities are nothing compared to the problems scholars face. Our
    task, then, is to develop
    ways to get computers to do useful work for us in the humanities, ways
    that don't involve teaching them to speak our language.

    A common and often successful approach has been to subject our primary
    sources to formalization. A historian understands that elementary
    facts such as names and dates may be uncertain or speculative to
    different degrees, depending on the evidence behind them; one major
    task for a project like the Prosopography of the Byzantine Empire at
    King's is to create definite and not too misleading versions of such
    facts that can then be organized and sifted with the computer's
    aid. Tito Orlandi, of the University of Rome, presented the argument
    for taking such formalization seriously: for confronting the
    deep-seated difficulties in moving from the continuous realm of human
    experience to the digitized world, but also for seeing the process of
    formalization not just as a practical chore but instead as an
    opportunity to think through the foundations of our subjects. The
    proposal met with, and clearly will generally meet with, a very mixed
    reaction: although every humanities field involves formalization to
    some extent, and did so even before the days of computing, that
    formalization is often remote from scholars' most urgent concerns;
    and what is taken as foundational needs to command consensus, yet in
    many circles disagreement matters more than consensus.
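
    To make this concrete, here is a purely illustrative sketch in
    Python; nothing in it comes from the colloquium or from the
    Prosopography project itself, and the record fields, names, and
    citations are invented. It shows one way an uncertain elementary
    fact might be formalized: the assertion is stored together with an
    explicit certainty grade and a pointer to its evidence, so that the
    computer can still sort and filter the records mechanically.

        from dataclasses import dataclass
        from typing import Optional

        # Hypothetical record for a single prosopographical assertion:
        # the "fact" itself, a coarse certainty grade, and the evidence
        # behind it.
        @dataclass
        class DatedFact:
            person: str
            event: str              # e.g. "appointed strategos"
            year: Optional[int]     # None if no date can be fixed at all
            certainty: str          # "secure", "probable", or "conjectural"
            source: str             # citation for the supporting evidence

        facts = [
            DatedFact("Theodoros 13", "appointed strategos", 976,
                      "probable", "Skylitzes 314"),
            DatedFact("Theodoros 13", "died", None,
                      "conjectural", "seal evidence"),
        ]

        # The formalization is deliberately coarse, but it lets the
        # computer do useful work: list only the datable, reasonably
        # secure events in chronological order.
        datable = [f for f in facts if f.year and f.certainty != "conjectural"]
        for f in sorted(datable, key=lambda f: f.year):
            print(f.year, f.person, f.event, f"[{f.certainty}]")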

    Even given the assumption that, in a humanities-computing project, the
    goal of formalization is to create something that makes certain
    practical operations possible, and not to capture the fullness of
    anyone's understanding of the objects under study, there is often
    discomfort with the process and difficulty in finding any level of
    agreement. Jerome McGann, of the University of Virginia, proposed a
    concentration instead on the failures of computing, on the distortions
    it inevitably introduces into texts and images, as a defamiliarizing
    technique to heighten our attention, rather on the model of exercises
    that are common in writing or acting workshops. Such approaches may
    prove to be the principal application of computing for interpretative
    work focused on small numbers of images or texts: where the power of
    computing to marshal large amounts of information is not really
    required, methods which change the individual interpreter's
    perceptions become most significant, and the restrictions that
    formalization entails seem less acceptable.
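
    As a hypothetical illustration only (the report does not describe
    any particular procedure of McGann's), such a deliberate distortion
    can be as mechanical as re-ordering the lines of a poem, a minimal
    transformation that forces the reader to see a familiar text
    freshly; a sketch in Python:

        # A minimal sketch of a deliberate, mechanical "distortion" of a
        # text, assuming the aim is defamiliarization rather than
        # analysis: print a short stanza with its lines in reverse order.
        stanza = [
            "Tyger Tyger, burning bright,",
            "In the forests of the night;",
            "What immortal hand or eye,",
            "Could frame thy fearful symmetry?",
        ]

        for line in reversed(stanza):
            print(line)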

    Hasok Chang of University College London spoke on approaches to
    understanding scientific experiments, and this was highly relevant to
    reflection on the range of methods being set out, from formalization
    to distortion. An experiment can be seen as a test or as a tentative
    effort---or as many other things; and experimentation can become an
    activity of its own, pursued by a community separate from that of
    theoretical scientists, though of course related. Experiments may seek
    to collect data on phenomena in nature, but they also may work to
    varying extents to make phenomena happen in order to be studied. And
    alongside the theoretical and experimental communities there is also
    the community of instrument-makers, who themselves have practices and
    interests not fully apprehended by others in the same scientific
    field; this practice of instrument-making may offer the best analogy
    for what computing humanists do, and certainly a novel one in fields
    that often barely conceive of themselves as using instruments or
    equipment of any kind.

    John Unsworth, of the University of Virginia, took a broader view of
    the whole activity of the humanities, in an attempt to isolate the
    basic operations that most of us perform. More than any of the other
    papers, his
    put the focus on collaborative work and not just modelling or
    analysis. The World Wide Web, for all its many shortcomings, has
    proven a powerful tool for scholars, and what we ought to do is try to
    develop further its potential for communication and collaboration.

    The colloquium's web page, with links to further resources on the
    subject, will remain available, at
    <http://www.kcl.ac.uk/humanities/cch/seminar/99-00/seminar_hc.html>.


