21.049 coding and composing

From: Humanist Discussion Group (by way of Willard McCarty <willard.mccarty_at_kcl.ac.uk>)
Date: Fri, 25 May 2007 07:01:48 +0100

                Humanist Discussion Group, Vol. 21, No. 49.
       Centre for Computing in the Humanities, King's College London
  www.kcl.ac.uk/schools/humanities/cch/research/publications/humanist.html
                        www.princeton.edu/humanist/
                     Submit to: humanist_at_princeton.edu

   [1] From: Desmond Schmidt <schmidt_at_itee.uq.edu.au> (150)
         Subject: Re: 21.046 coding and composing (and modelling)

   [2] From: Mary Dee Harris <mdharris_at_acm.org> (150)
         Subject: Re: 21.046 coding and composing

   [3] From: "Neal Audenaert" <neal.audenaert_at_gmail.com> (51)
         Subject: Re: 21.046 coding and composing

--[1]------------------------------------------------------------------
         Date: Fri, 25 May 2007 06:56:34 +0100
         From: Desmond Schmidt <schmidt_at_itee.uq.edu.au>
         Subject: Re: 21.046 coding and composing (and modelling)

Dear Willard,

I don't disagree with anything Wendell says. However, his response
raises a question about modelling which has been troubling me for some
time. In your book you say "computational models, however finely
perfected, are better understood as temporary structures in a process
of coming to know rather than fixed structures of knowledge." (27) and
later you say: "Eventually we realize that the perpetual impetus to
construct a permanently elusive whole is the point." (188). You thus
appear to see the computational model as a temporary interpretation
that opens out the text, lets us see one aspect, one view through the
dirty glass window of software tools. I think that Wendell would
probably disagree with this picture of computational models - as I
gather from his comments below, where he talks about "right" and
"wrong". I also think that models are pictures of reality, as
Wittgenstein said: "there must be something identical in the picture
and what it depicts, to enable the one to be a picture of the other at
all." (Tractatus 2.161) I thus see models as approximations of reality
that are gradually refined. This is a scientific point of view.
Humanists like to think that everything is an interpretation, for
example that the tags that we choose to describe a text are an
interpretation. The selection is, but the tags themselves usually
describe facts about a text, e.g. that a word is underlined - what is
interpretative about that? I know there are people who will argue even
with this, but to say, as is so often done, that it is ALL
interpretation misrepresents, I think, a more subtle situation. One of the reasons I
suspect you say this about computational models is that the tools for
constructing them are so poor. A compressed view of humanities
computing over the past 40 years would seem to support your
interpretation - after all, is that not what we have all been doing -
creating models and then throwing them away? I am not so pessimistic,
however. I like to think that we can get somewhere, and if the
computational models are bad perhaps we just need to make better ones.
I see models as data structures that are tied to the algorithms that
process the text, like a hand that fits into a glove.
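
To make the point concrete, here is a minimal sketch (in Python, with
invented names; it is an illustration only, not a description of any
actual tool): the model records a plain fact about the text -- that a
span of characters is underlined -- and the rendering algorithm is
shaped to fit exactly that structure.

from dataclasses import dataclass

@dataclass
class Span:
    """A recorded fact about the source text: characters start..end
    are underlined."""
    start: int
    end: int
    underlined: bool

def render(text, spans):
    """The algorithm the model is fitted to: emit the text with
    underlined spans marked."""
    out, pos = [], 0
    for s in sorted(spans, key=lambda s: s.start):
        out.append(text[pos:s.start])
        piece = text[s.start:s.end]
        out.append("_" + piece + "_" if s.underlined else piece)
        pos = s.end
    out.append(text[pos:])
    return "".join(out)

print(render("a word is underlined", [Span(2, 6, True)]))
# prints: a _word_ is underlined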

-------------------------
Desmond Schmidt

On 24/05/2007, at 5:26 PM, Willard McCarty wrote:

> Humanist Discussion Group, Vol. 21, No. 46.
> Centre for Computing in the Humanities, King's College London
>
>www.kcl.ac.uk/schools/humanities/cch/research/publications/humanist.html
> www.princeton.edu/humanist/
> Submit to: humanist_at_princeton.edu
>
>
>
> Date: Thu, 24 May 2007 07:58:13 +0100
> From: Wendell Piez <wapiez_at_mulberrytech.com>
>
>Dear Willard,
>
>Desmond Schmidt's distinction between the correctness of the program
>and the correctness of the model does get close to the heart of the
>issues you raise, inasmuch as a program can be wildly wrong, even
>when "correct", when the model on which it is based is inadequate.
>Writing programs in the classical form you were taught in -- which
>we've learned to call the "waterfall model", suggesting everything
>just moves along sedately until the moment of implementation, when it
>all crashes in at once (but see the link cited below) -- did seem like
>the way to do it in an age where intellectual labor was cheap
>relative to computers and the time and expertise required to code
>them. Such operations did not (or at least so they thought) have the
>luxury of getting it wrong a few times before they got it right.
>Perhaps they assumed the difference between "wrong" and "right" to be
>clearer than it often is; perhaps they simply assumed that the
>modeling problem was relatively straightforward and easy -- as any
>committee of intelligent and informed people could do it -- whereas
>the implementation in code was hard, required special expertise, and
>could only be done by a select few with expensive educations. And
>perhaps the sorts of problems they were trying to solve -- mapping
>trajectories or calculating payments due -- were easier to model than
>the problems we think about ("finding information"). Whatever the
>reasons, we now discover the reverse to be more often the case: it's
>the specification of what should happen that's the difficult part,
>not the making it happen once it's been specified.
>
>Yet while the problems have become harder, the means available for
>solving them have become more powerful. So we discover that with
>cheap machines, open-source platforms, web-based and self-directed
>learning, computer programming does, at length, become more like
>composition, in the sense that both become means whereby we can think
>with our hands. Just as the writer learns what she thinks by writing
>it (and striking it out and rewriting it), the programmer can come to
>better understandings of a problem by using the computer as a more
>plastic instrument, trying things out -- both in models and in
>implementation -- and then scrapping, recasting, refactoring.
>Accordingly, both design and implementation can become more
>iterative, evolutionary, "spiral", as it's called. This does not
>actually bring an end to the old way of doing it, any more than a
>writer necessarily spends less time in sentence composition than she
>did as a three-year-old child, when sentences were new. It's just
>that the waterfall has now become a cascading rapid, development
>pouring down through eddies and rivulets, sometimes gathering in
>calmer pools.
>
>Dare I cite Wikipedia, whose development has also followed a spiral,
>and entails both software development and natural language composition?
>
>http://en.wikipedia.org/wiki/Waterfall_model
>
>http://en.wikipedia.org/wiki/Spiral_model
>
>and add to these --
>
>http://en.wikipedia.org/wiki/Hermeneutic_circle
>
>http://en.wikipedia.org/wiki/Learning_cycle
>
>http://en.wikipedia.org/wiki/Feedback
>
>To Nat Bobbit's citation of the "extreme programming" keyword we
>should also add "agile programming".
>
>Best regards,
>Wendell
>
>At 01:42 AM 5/18/2007, you wrote:
> >An interesting article, "Code and Composition", zipped by on Humanist
> >a few days ago in an announcement of the latest Ubiquity
> >(http://www.acm.org/ubiquity/) -- an online magazine that sometimes
> >publishes little gems. In this article Luke Fernandez compares the
> >two modes of expression and finds them convergent: writing, not
> >entirely spontaneous, involves planning and research; coding, not
> >entirely planned, involves discovery during the act of composition.
> >In this attempt at a parallel, the first seems obvious, if a bit
> >overstated, but the second seems to contradict official stories of
> >how writing code should proceed. What is the experience of people
> >here? I understand that nowadays few if any preach the doctrine
> >that a complete specification must be devised beforehand (as I was
> >told when I learned many years ago). I think the need to come up with
> >a complete flowchart is what put me off programming eventually, along
> >with the dreaded "turn-around time" of 2 hrs minimum. In any case, it
> >would be interesting to know how exploratory programming is known to
> >be these days.
>
>
>======================================================================
>Wendell Piez mailto:wapiez_at_mulberrytech.com
>Mulberry Technologies, Inc. http://www.mulberrytech.com
>17 West Jefferson Street Direct Phone: 301/315-9635
>Suite 207 Phone: 301/315-9631
>Rockville, MD 20850 Fax: 301/315-8285
>----------------------------------------------------------------------
> Mulberry Technologies: A Consultancy Specializing in SGML and XML
>======================================================================
>
>
>Dr Willard McCarty | Reader in Humanities Computing | Centre for
>Computing in the Humanities | King's College London |
>http://staff.cch.kcl.ac.uk/~wmccarty/.

--[2]------------------------------------------------------------------
         Date: Fri, 25 May 2007 06:57:16 +0100
         From: Mary Dee Harris <mdharris_at_acm.org>
         Subject: Re: 21.046 coding and composing

I've worked in a commercial environment for the last five years,
developing a natural language generation system that creates a
narrative of the findings discovered during a doctor's encounter with
a patient.

We have used the Spiral Approach to software development for a number
of reasons. Probably the most important one was that there was a
lot of skepticism about my ability to accomplish this task. So we
started with a "proof of concept" system that generated simple
sentences to prove that we could do it at all. Then we moved to
the "prototype" stage which gave us more credibility with more
features and better narrative. Gradually we have continued to expand
and elaborate both the model and the code, using continuous feedback
from our users and the doctors within the company. In fact, just two
weeks ago we started a weekly meeting for the doctors to critique our
output. The goal is to make the narrative sound as if a doctor wrote
it -- a noble endeavor, but we're getting closer!
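
Purely as an illustration of how small such a first pass can be --
the field names and templates below are invented, not our actual
system -- a proof of concept that turns structured findings into
simple sentences might look like this:

def simple_sentence(finding):
    """One structured finding -> one subject-verb-object sentence."""
    return f"{finding['subject']} {finding['verb']} {finding['object']}."

# Hypothetical findings from an encounter; the fields are invented.
findings = [
    {"subject": "The patient", "verb": "reports",
     "object": "intermittent chest pain"},
    {"subject": "Blood pressure", "verb": "measures",
     "object": "140/90 mmHg"},
]

print(" ".join(simple_sentence(f) for f in findings))
# The patient reports intermittent chest pain. Blood pressure
# measures 140/90 mmHg.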

I don't believe the old Waterfall Method would be at all appropriate
in today's world. In fact, we hear about spectacular failures in
the government, where they've contracted with some company that's
gone off with a set of initial specifications, written a huge system,
and returned with it to discover that it didn't fit the needs of the
organization -- or more likely the needs changed during that
time. The FBI is a case in point, I believe. And since such
systems take years to develop, by the time the software has failed,
the organization is even farther behind than it was before.
Our system has proven quite robust! I designed it to be modular, to
change as we added features, so it has been able to adapt to the new
capabilities and larger and larger amounts of data. There are still
new additions we intend to make in the future, including
translating the narratives into languages besides English, and so on.
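
What "modular" means in practice might be sketched like this (again
with invented names, not our actual design): each stage of the
generation pipeline is a small, replaceable function, so a new
capability -- a translation stage, say -- can be appended without
rewriting the rest.

from typing import Callable

Stage = Callable[[str], str]   # each module maps text to text

def expand_abbreviations(text: str) -> str:
    # Invented example of one stage: spell out a clinical abbreviation.
    return text.replace("BP", "blood pressure")

def capitalize_sentences(text: str) -> str:
    # Another stage: tidy sentence capitalization.
    parts = [s.strip() for s in text.split(".") if s.strip()]
    return ". ".join(p[0].upper() + p[1:] for p in parts) + "."

def run_pipeline(text: str, stages: list) -> str:
    for stage in stages:       # stages are interchangeable and reorderable
        text = stage(text)
    return text

stages = [expand_abbreviations, capitalize_sentences]
# A future translation stage could simply be appended to the list.
print(run_pipeline("BP measures 140/90. the patient reports chest pain",
                   stages))
# Blood pressure measures 140/90. The patient reports chest pain.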

With language generation, there is no end to improvements, just as
one can keep polishing any piece of text to make it better.

Mary Dee Harris, PhD
Chief Language Officer
Catalis, Inc.
www.thecatalis.com
512-294-8110
mdharris_at_TheCatalis.com

--[3]------------------------------------------------------------------
         Date: Fri, 25 May 2007 06:57:41 +0100
         From: "Neal Audenaert" <neal.audenaert_at_gmail.com>
         Subject: Re: 21.046 coding and composing

Willard,

>In any case, it would be interesting to know how exploratory
>programming is known to be these days.

For that matter, how exploratory is composition (e.g., an office memo,
a UN resolution, a love letter, the OED, "the" Boeing 777 maintenance
manual, a response to a mailing list) known to be these days?

Designing software systems is very much like writing in one important
respect - it is a skill (art) that requires a knowledge of the
different techniques involved, and the selection and application of an
appropriate technique based on experience and personal (institutional)
aptitude. Some do it better than others; novices rarely do it best
(though sometimes adequately for their purposes). The ONE RIGHT WAY
approach to software engineering that is taught in school is no more
fixed and final than the ONE RIGHT WAY of composition taught in school
- and the best schools don't teach it that way.

Some (possibly obvious, possibly uninteresting) thoughts on factors
that influence "exploratory programming" follow. . .

A critical factor in picking a software model is the size and the
scope of the project. For a small project (about 2 programmers working
for about 2 years or less) the best approach (in my opinion) probably
does look a lot like composition. Do a bit of pre-writing, sketch out
the overall structure of the system, start coding (and testing) and
see what works. Critique the code (and the system design) and rework
things as you go along.

As the project gets bigger (one might think of systems that involve thousands
of programmers, plus support staff working over decades) you need a
more formal process and much more control. This, of course, is much
like composition as well - I suspect composing a work like the OED or
the Encyclopaedia Britannica involves a somewhat different process than
what I learned in high school.

A related question is: what degree of correctness must be guaranteed?
If I build a user interface and it is badly wrong, no one will be able
to use the tool. In this case though, I build a new interface and
everyone is happy again (or happier). On the other hand, if I
accidentally delete (or worse, corrupt) the database containing a
decade or so of painstakingly gathered information the problem is much
worse. The same holds true if I design a data model that represents
the information in ways that meet my needs now but poorly models the
data and hence has only limited usefulness for future work.

Accordingly, it makes sense to invest much more up-front effort in
making sure the data model part of a system a) doesn't destroy things,
and b) is "correct". The interface components of a system however, are
likely to be best served by a highly exploratory process with frequent
revisions. The waterfall approach would be (has been) a disaster for
interface design. Of course, one lesson that has been learned quite
well over the past 60 years of computer science is that you are NEVER
right the first time (which brings to mind the fact that texts are
really an ordered hierarchy of content objects . . . or something like
that . . . sometimes for some purposes). The best designs will need
revisions and we will always learn things in the process.
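
A sketch of what that division of effort can look like (the schema
and checks here are invented, purely illustrative): the data model
enforces its invariants at construction time, so bad data never
reaches the store, while the interface stays a thin function that is
cheap to throw away and redo.

from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    """Data model: gets the up-front effort; invariants are checked
    on creation so nothing malformed is ever stored."""
    page: int
    text: str

    def __post_init__(self):
        if self.page < 1:
            raise ValueError("page numbers start at 1")
        if not self.text.strip():
            raise ValueError("empty records are not stored")

def show(record):
    """Interface layer: exploratory, cheap to rewrite if users hate it."""
    return "p.%d: %s" % (record.page, record.text)

print(show(Record(page=12, text="painstakingly gathered information")))
# p.12: painstakingly gathered information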

Neal Audenaert