17.829 what's needed

From: Humanist Discussion Group (by way of Willard McCarty <willard.mccarty@kcl.ac.uk>)
Date: Fri May 07 2004 - 16:51:00 EDT

                   Humanist Discussion Group, Vol. 17, No. 829.
           Centre for Computing in the Humanities, King's College London
                       www.kcl.ac.uk/humanities/cch/humanist/
                            www.princeton.edu/humanist/
                         Submit to: humanist@princeton.edu

       [1] From: lachance@origin.chass.utoronto.ca (Francois (249)
                     Lachance)
             Subject: Re: 17.813 what's needed; pedagogical use of text-
                     analysis

       [2] From: Matt Kirschenbaum <mgk@umd.edu> (66)
             Subject: Re: 17.823 what's needed

       [3] From: Robert Kraft <kraft@ccat.sas.upenn.edu> (20)
             Subject: Re: 17.825 what's needed

    --[1]------------------------------------------------------------------
             Date: Tue, 27 Apr 2004 07:07:59 +0100
             From: lachance@origin.chass.utoronto.ca (Francois Lachance)
             Subject: Re: 17.813 what's needed; pedagogical use of text-analysis

    > Willard,
    >
    > Paul Edward Oppenheimer has asked me to expand upon a (mine?) cryptic
    > analogy. Below is an elaboration of the pedagogical considerations raised
    > in the posting that prompted the call to expand a cryptic analogy. If, in
    > this posting, I fail to elucidate the cryptic analogy, I hope to provide
    > some concrete (software-specific) examples that may be abstracted to
    > contexts of work and play (pedagogical and research contexts).
    >
    >
    > Let me put aside any extended discussion of any perceived intellectual
    > affinity between Kirschenbaum, Hofstadter and McCarty.
    >
    >
    > What I see at play in all three is a commitment to design games for the
    > classroom or tutorial space.
    >
    >
    > The games played in the domain of text analysis are counting
    > games. What is counted? Occurrences and locations. [Note that the language
    > here is not yet one of frequencies or co-locations.] Given this nature,
    > the pedagogical situation can begin by introducing the "how many" question
    > or by introducing the "where is" question. The location and occurrence
    > questions are combinable. For example, how many instances of a given
    > string "murple" appear in a given extent of a text file? The pedagogical
    > situation can depart from such a combined question.
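    The combined "how many"/"where is" question can be sketched in a few
    lines of Python. This is a hypothetical illustration, not tied to any of
    the software discussed in the message; the string "murple" is the
    placeholder from the example above, and the function name is invented.

```python
def count_and_locate(text, needle):
    """Return (count, locations); locations are 1-based (line, column) pairs."""
    locations = []
    for line_no, line in enumerate(text.splitlines(), start=1):
        col = line.find(needle)
        while col != -1:
            locations.append((line_no, col + 1))
            # Keep scanning the same line for further occurrences.
            col = line.find(needle, col + 1)
    return len(locations), locations

sample = "a murple here\nand another murple there\n"
count, where = count_and_locate(sample, "murple")
print(count, where)  # 2 [(1, 3), (2, 13)]
```

    Answering both questions at once makes the exporting question concrete:
    the count alone discards the locations, so learners see immediately what
    information a given result format preserves.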
    >
    > The question of what am I asking is translated as what am I asking the
    > machine to do which in turn gets translated as what result am I seeking.
    >
    > I have suggested that group activities [Note such activities can be
    > adapted to the solo practitioner.] can be structured around the occurrence
    > question. Group A is given text Y. Group B is given text Z. Both Group A
    > and Group B have access to the same software. Neither group is formally
    > coached on the software. They are asked to compare results. Some of the
    > learning will involve developing a common vocabulary on how the software
    > works. Some of the learning will be in using that common vocabulary to
    > compare texts.
    >
    > Such an activity exemplifies the constructivist principles of
    > collaboration and communication. It also exemplifies that of creation: the
    > common vocabulary is indeed a creation.
    >
    > Another activity consists in using the same text for analysis and in a
    > subsequent stage of the exercise the groups create a modified version of
    > the text. Modified versions are then exchanged with other groups and
    > groups attempt to identify the modifications.
    >
    > Even the apparently simple pedagogical situation (all the learners and
    > re-learners using the same software with the same version of the text to
    > analyze) provides plenty of open-ended play. Out of the observation of
    > similarities and differences arises the next question and the next
    > question.
    >
    > ********
    >
    > On the ground, some bemoan button-pushing behaviours. I suggest that such
    > behaviours are coupled with paying attention to the interface. Button
    > pushing is exploratory play. It can be harnessed to games of
    > defamiliarization. When I push the button, this happens...
    >
    > In this spirit, text editing software can be used to introduce text
    > analysis. Yes, text editing software.
    >
    > From the world of the Macintosh platform, a quick tour of three pieces of
    > software:
    >
    >
    > Francis Stanbech and Brian Stearns. TeachText (1986-1991)
    >
    > There is no find function.
    >
    > The learner can mark to count (e.g. placing a numeral after every instance
    > that is being searched).
    >
    > The learner can copy and count. (e.g. use a reserved section at the end
    > of the file or copy to another file)
    >
    > Such little exercises with text editing software reveal the importance of
    > piping to text analysis software -- can the results of an analysis be
    > exported?
    >
    >
    >
    > Tom Dowdy. SimpleText (1995-97)
    >
    > There is text-to-speech functionality. (marvellous for those text
    > modification games exploring punctuation, capitalization, phoneme
    > substitution, _Mots d'heures gousses rames_ etc.)
    >
    > There is a find function. It is interesting to note how the find function
    > is inflected by end of file treatment. (e.g. the search is conducted from
    > the cursor position on. Compare with TeachText where the cursor position
    > is reset on closing the file. [Your eyes may tell you that a certain
    > string is there in the file but the eeps or quacks remind you of
    > location, location, location]). One group* cleverly produced a version
    > which modified only the cursor position. Their crafty intent was foiled
    > when the other group chose to do the search with wrap-around search. Again
    > everyone learnt about the preservation of information and the exporting of
    > results. {*For "group" read hypothetical individual testing scenarios.}
    >
    >
    > Tom Bender. Text-Edit Plus 2.5 (1998)
    >
    > This software will quickly convert carriage returns and line feeds
    > from Mac to DOS (and vice versa) or Mac to Unix (and vice versa).
    > Instant appreciation that word-wrapping is about line endings.
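    The conversion hinges on the conventions the three platforms use:
    classic Mac OS ends lines with CR, DOS/Windows with CRLF, and Unix with
    LF. A minimal Python sketch of such a converter (the function name is
    illustrative; normalizing to LF first avoids mangling CRLF pairs):

```python
def convert_line_endings(text, target):
    """Convert line endings to 'mac' (CR), 'dos' (CRLF), or 'unix' (LF)."""
    endings = {"mac": "\r", "dos": "\r\n", "unix": "\n"}
    # Normalize everything to LF first so CRLF is not converted twice.
    normalized = text.replace("\r\n", "\n").replace("\r", "\n")
    return normalized.replace("\n", endings[target])

print(repr(convert_line_endings("one\rtwo\r", "dos")))  # 'one\r\ntwo\r\n'
```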
    >
    > The word count of a selected portion of the text also returns word count
    > information for the document as a whole. (In conjunction with information
    > returned from searches, learners can use such information to determine
    > local lexical density [number of instances per lines, per words, per
    > characters]).
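    The local-density calculation described above can be sketched in Python.
    All names and the sample passage here are illustrative, not drawn from
    any of the software under discussion.

```python
def lexical_density(passage, needle):
    """Occurrences of `needle` per line, per word, and per character."""
    hits = passage.count(needle)
    lines = max(len(passage.splitlines()), 1)
    words = max(len(passage.split()), 1)
    chars = max(len(passage), 1)
    return {
        "per_line": hits / lines,
        "per_word": hits / words,
        "per_char": hits / chars,
    }

d = lexical_density("murple murple\nno match here", "murple")
print(d["per_line"], d["per_word"])  # 1.0 0.4
```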
    >
    > Again the exporting question arises: there is the challenge of
    > transcribing information from pop up windows which are not accessible to
    > selection and copy commands. Snapshots (Shift+Cmd+3) [Print Screen in
    > Windows] provide a means for capturing the information, and the
    > information command (Cmd+I) allows one to supply additional metadata and
    > describe what the picture represents. The trick for novices is discovering
    > where the snapshots are stored :) Cumbersome manipulations, but very
    > educational. It is a smart way of conveying the difference between
    > information and format. A reminder that the computer is not just a
    > glorified typewriter. And that the screen is not just a window-bound page
    > or a desktop with launchable icons. The screen is indeed a space of
    > options that is refreshed.
    >
    > I am beginning to digress...
    >
    > Much of what I have explored here with Macintosh software can be explored
    > with Notepad, WordPad and historic versions of word-processing packages
    > such as MS Word, WordPerfect or WordStar that will work under DOS or
    > Windows systems.
    >
    > And then there is the fabulous world of Unix...
    >
    > In short, articulate attention to interface can be coupled to an
    > apprenticeship in text analysis via an understanding of the fundamental
    > fungibility of the textual object and a disciplined approach to log
    > keeping. In other words, the devilish questions get interesting in the
    > divine details.
    >
    > ****
    >
    > I thank Paul Edward Oppenheimer for calling me out and inviting me to
    > reorder and record some thinking that arose quite a while ago. I have not
    > taught formally in a while and yet remain invested, if in a disinterested
    > fashion, in pedagogy.
    >
    > A search for "Reading and Searching: Tools and Skills" might net a
    > discussion of similar themes arising from introducing students to
    > TACTWeb, by me, and a whole bunch more resources by others.
    >
    > Thanks
    >
    > > Date: Wed, 21 Apr 2004 06:43:12 +0100
    > > From: "Paul Edward Oppenheimer" <paul.oppenheimer@cox.net>
    > > > >
    > > Francois,
    > >
    > > Please expand your cryptic analogy. Thank you!
    > >
    > > Paul
    > > ----- Original Message -----
    > > From: "Humanist Discussion Group (by way of Willard McCarty
    > > <willard.mccarty@kcl.ac.uk>)" <willard@lists.village.virginia.edu>
    > > To: <humanist@PRINCETON.EDU>
    > > Sent: Sunday, April 18, 2004 11:39 PM
    > >
    > >
    > > > Humanist Discussion Group, Vol. 17, No. 804.
    > > > Centre for Computing in the Humanities, King's College London
    > > > www.kcl.ac.uk/humanities/cch/humanist/
    > > > www.princeton.edu/humanist/
    > > > Submit to: humanist@princeton.edu
    > > >
    > > >
    > > >
    > > > Date: Mon, 19 Apr 2004 07:19:15 +0100
    > > > From: lachance@chass.utoronto.ca (Francois Lachance)
    > > > Subject: Re: 17.744 MonoConc (Pro), with thoughts on teaching
    > > >
    > > > Willard,
    > > >
    > > > I want to pick up the thread of learning objectives and the
    > > > pedagogical use of tools for textual analysis. I think your comments
    > > > on MonoConc relate to a blog entry by Matt Kirschenbaum about the
    > > > exercise set by Douglas Hofstadter in the early pages of Gödel,
    > > > Escher, Bach.
    > > > See
    > > > http://www.otal.umd.edu/~mgk/blog/archives/000339.html
    > > >
    > > > Matt invites readers to consider why Hofstadter introduces his
    > > > discussion of the Mechanical, Intelligent, and Unmode with what can
    > > > turn out to be a frustrating exercise. That invitation raises similar
    > > > questions about the value of learning by doing that your MonoConc
    > > > example embodies.
    > > >
    > > > For some reason, the example put forward by Matt and your example
    > > > have me wondering if certain teachers complement the exploration of
    > > > the application with the exploration of the objects of study. Does
    > > > anyone teaching humanities computing set up exercises along the
    > > > following lines?
    > > >
    > > > Present a class with a given distribution and then invite students to
    > > > discover if the given distribution is replicated in an analysis of
    > > > different versions of a text. Repeat the exercise with one set of
    > > > students introducing a typo in one version (or altering it in some
    > > > form); another group of students is assigned the task of determining
    > > > if the comparative analysis picks up the change. Repeat with the
    > > > student groups switching tasks.
    > > >
    > > > > Date: Sun, 28 Mar 2004 08:59:01 +0100
    > > > > From: Willard McCarty <willard.mccarty@kcl.ac.uk>
    > > > >
    > > > <snip/>
    > > >
    > > > > MonoConc is very easy to learn -- as I said, 5 to 10 minutes is
    > > > > all that's required. The students I've had tend to come to
    > > > > humanities computing believing that it's about pushing buttons. So
    > > > > I've tried to rush them past the push-button interface to problems
    > > > > of interpretation. The more sophisticated-in-features this
    > > > > interface is, the harder that is to do, the more they take what
    > > > > they see as a harder problem of the kind they've mostly already
    > > > > mastered rather than a new sort of problem entirely.
    > > > >
    > > > <snip/>
    > > >
    > > >

    --
    Francois Lachance, Scholar-at-large
    http://www.chass.utoronto.ca/~lachance
    

    A calendar is like a map. And just as maps have insets, calendars in the 21st century might have 'moments' expressed in flat local time fanning out into "great circles" expressed in earth revolution time.

    --[2]------------------------------------------------------------------
             Date: Tue, 27 Apr 2004 07:08:39 +0100
             From: Matt Kirschenbaum <mgk@umd.edu>
             Subject: Re: 17.823 what's needed

    I appreciate the several, patient responses to my admittedly tendentious question. FWIW, I've posted a version of it on my blog, and I'd be happy to see discussion and comment there as well:

    http://www.otal.umd.edu/~mgk/blog/archives/000472.html

    Let me see if I can address some of the points people have made and clarify my own thinking.

    I read Literary and Linguistic Computing too, but I'm not satisfied that all we need to do is look there, or to the closing paragraphs of our latest article, to gauge the impact of our field. Note that I wrote "impact," not value or worth. Since the early 1990s (and earlier) many of us have committed time, money, and less tangible forms of professional capital to the construction of (to adopt a phrase) digital resources in the humanities. I think anyone who does this work quickly moves beyond any residual positivism and arrives at the realization that the process is what's most compelling about the activity--this is what I take it to mean to imagine what we don't know, or, to quote from some pretty awful music I used to listen to by a certain Canadian rock trio, "the point of the journey is not to arrive." Fine.

    But I don't for a moment believe that anyone who has ever spent time on a schema, ontology, or taxonomy, to say nothing of the long hours of encoding, imaging, or worrying over the placement of a button on an interface, hasn't conceived of an ideal user who comes to their resource, accesses the materials, and then uses them as the basis for a piece of scholarship that could not otherwise have been written. Nor do I buy the notion that what we do in humanities computing is so rarefied that it can't be held accountable to what are, after all, some fairly conventional criteria for scholarship: "a problem solved, a question answered, received wisdom overturned, or new things learned." (Nor do I accept, incidentally, that the kind of knowledge gleaned from a scanning electron microscope is purely of the preceding nature. To create a binary between the two seems to me to essentialize both humanities and sciences in mutually unproductive ways.)

    Steve says we would do better to ask, "what have we done that's interesting?" I submit that being interesting is the easy part; we're all pretty interesting after our own fashion. I would rather ask: at what point does "interest" pass into something that circulates macroscopically in a scholarly economy of citation and influence? In such a way that others have to stand up (or sit down) and take notice of an insight?

    I note that I'm not alone in this assessment: in an abstract of a talk delivered last week here at Maryland, John Bradley writes: "Digital Libraries seemed like a great thing for Humanities scholarship when they were first proposed. Why, then, do they seem to have had relatively little impact on how humanists do their scholarship?"

    The rhetorical presumption behind my post is not that such cases don't exist (gotcha, the emperor has no clothes) but rather that we would do well, if for no other reason than eminently practical ones of funding, outreach, recruitment, tenure, and promotion, to be able to point to some influential discoveries. I can think of a few examples off the top of my head: Kevin Kiernan's imaging work on the Beowulf manuscript; Will Thomas and Ed Ayers's work on American slavery, based on data compiled for their Valley of the Shadow project. I would like to hear of others, and perhaps think in terms of collectively assembling a casebook of digital humanities scholarship.

    One last point: in so far as there is a latent tension between positivism and empiricism in what I've been writing what I'm ultimately aiming for is the latter. I _do_ believe empiricism has its place in the humanities--and humanities computing. Indeed, I believe empiricism is under-theorized and under-taught. There's a fascinating exchange on Blake's color printing processes in recent issues of Blake: An Illustrated Quarterly that would make rich reading material for a graduate research methods course. Call it a strategic essentialism or local knowledge if you prefer: but what can digital resources contribute to a debate over the state of empirical knowledge in the humanities? Matt

    Matthew G. Kirschenbaum
    http://www.otal.umd.edu/~mgk/

    --[3]------------------------------------------------------------------
             Date: Tue, 27 Apr 2004 07:08:58 +0100
             From: Robert Kraft <kraft@ccat.sas.upenn.edu>
             Subject: Re: 17.825 what's needed

    > Norman Hinton <hinton@springnet1.com> suggested:
    >
    > For the kind of search that Bob Kraft is talking about, perhaps grep would
    > be useful.
    >
    > Look at http://www.gnu.org/software/grep/

    To the best of my knowledge (and the information provided on the grep page), grep is not a web searcher like google.com and several others, but searches known files. If I'm wrong, and grep can be used to search the entire web for words or phrases that occur near each other (e.g. within two lines of each other), please elaborate! I don't see it in the introductory information.
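    [Bob is right that grep searches known files rather than the web. For the
    local-file half of his question -- two terms occurring within two lines of
    each other -- a short script can do what a single plain grep invocation
    cannot. A hedged Python sketch; all names here are illustrative:]

```python
def near(lines, word_a, word_b, window=2):
    """Yield 1-based line-number pairs where both words fall within `window` lines."""
    hits_a = [i for i, ln in enumerate(lines) if word_a in ln]
    hits_b = [i for i, ln in enumerate(lines) if word_b in ln]
    for a in hits_a:
        for b in hits_b:
            if abs(a - b) <= window:
                yield (a + 1, b + 1)

text = ["alpha here", "nothing", "beta there", "far away", "", "beta again"]
print(list(near(text, "alpha", "beta")))  # [(1, 3)]
```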

    Bob

    --
    Robert A. Kraft, Religious Studies, University of Pennsylvania
    227 Logan Hall (Philadelphia PA 19104-6304); tel. 215 898-5827
    kraft@ccat.sas.upenn.edu
    http://ccat.sas.upenn.edu/rs/rak/kraft.html



    This archive was generated by hypermail 2b30 : Fri May 07 2004 - 16:51:01 EDT