17.073 an engineer's understanding

From: Humanist Discussion Group (by way of Willard McCarty willard.mccarty@kcl.ac.uk)
Date: Tue Jun 10 2003 - 01:37:05 EDT

                    Humanist Discussion Group, Vol. 17, No. 73.
           Centre for Computing in the Humanities, King's College London
                       www.kcl.ac.uk/humanities/cch/humanist/
                         Submit to: humanist@princeton.edu

             Date: Tue, 10 Jun 2003 06:32:12 +0100
             From: "Amsler, Robert" <Robert.Amsler@hq.doe.gov>
             Subject: RE: 17.063 an engineer's understanding

    [See below for the questions to which the following are replies.]

    (1) Yes and No. The description is complete only to the extent that the
    behavior of the system is known. That is, any such virtual machine that
    correctly and fully replicates the behavior of the system is limited by our
    knowledge of what behavior the system is capable of. If, for example, one
    were describing a very complex system of whose behavior we have only a
    limited understanding, the virtual machine might well omit vast parts of
    the system's capabilities simply because nobody knows of them. To model the
    question-answering capabilities of a rock, one need only have a simple
    model that says nothing. To model the weather in anything but vague terms,
    and for anything but short periods of time, is unattainable.
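
    To make the asymmetry concrete, here is a toy sketch in Python (the class
    names and behaviors are illustrative assumptions only, not drawn from de
    Callatay or anyone else):

        # A trivially complete model: the rock's known question-answering
        # behavior is empty, so a model that says nothing replicates all of it.
        class RockModel:
            def answer(self, question: str) -> None:
                return None  # the rock says nothing, ever

        # A necessarily incomplete model: it covers only the vaguely known,
        # short-horizon slice of the real system's behavior.
        class CrudeWeatherModel:
            def forecast(self, days_ahead: int) -> str:
                return "roughly seasonal average" if days_ahead <= 3 else "unknown"

        rock = RockModel()
        assert rock.answer("What is justice?") is None   # matches all known behavior

        weather = CrudeWeatherModel()
        print(weather.forecast(1))    # "roughly seasonal average"
        print(weather.forecast(30))   # "unknown" -- the omitted capabilities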

    (2) Yes. Modeling something is never without its merits.

    (3) Not as you use the term "machine". A machine is another artifact; it
    has practical problems. The only effective response to modeling a machine
    is a virtual machine, i.e., a theoretical response. We largely learn about
    the world by building mental/theoretical models of the parts of it we
    identify, and then discovering where these models differ from the observed
    behavior of the real world. The problem is: how do you describe a
    mental/theoretical model?

    Words cannot do it, because there is no way to ensure a unique
    interpretation of those words. Computer programs are pretty good models
    because their interpretation is left to a machine, which operates
    independently of the program's creator and behaves consistently for all
    who run the program, regardless of how they believe the program should
    work. Computer programs are perhaps the most reliable modeling tool yet
    developed by human beings. They are not the most powerful tool, but they
    are the most reliable one, more reliable than any of the more powerful
    symbol systems, because of their independent evaluation mechanism. Logic
    and mathematics, formidable representation tools, lack this property,
    since they have to be checked by other human beings; mathematicians can
    "think" they have solved a problem only to discover a flaw in their proof.
    Language is another, even more powerful, modeling tool, but one that is
    even less reliable than mathematics: language cannot be "checked", since
    it is open to many interpretations.
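
    A small hypothetical Python sketch of that reliability (the example is mine
    and purely illustrative): the machine's reading of the program is fixed,
    and where it contradicts the mental/theoretical model behind the program,
    it contradicts it identically for everyone who runs it.

        # The mental/theoretical model says: adding 0.1 to zero ten times
        # yields exactly 1.0. The machine interprets the program the same way
        # for everyone who runs it, whatever the author believes.
        def accumulate(step: float, times: int) -> float:
            total = 0.0
            for _ in range(times):
                total += step
            return total

        expected = 1.0                     # the mental/theoretical model
        observed = accumulate(0.1, 10)     # the machine's consistent verdict

        print(observed == expected)        # False under IEEE-754 arithmetic
        print(observed)                    # 0.9999999999999999, for every runner

    The discrepancy is not settled by opinion or by checking among human
    beings; it comes out the same for every runner, which is precisely the
    independent evaluation mechanism described above.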

    > -----Original Message-----
    > From: Humanist Discussion Group (by way of Willard McCarty
    > <willard.mccarty@kcl.ac.uk>)
    >
    > [mailto:willard@lists.village.virginia.edu]

    > Sent: Saturday, June 07, 2003 2:43 AM
    > To: humanist@Princeton.EDU
    >
    > Humanist Discussion Group, Vol. 17, No. 63.
    > Centre for Computing in the Humanities, King's College London
    > www.kcl.ac.uk/humanities/cch/humanist/
    > Submit to: humanist@princeton.edu
    >
    >
    >
    > Date: Sat, 07 Jun 2003 07:13:00 +0100
    > From: Willard McCarty <willard.mccarty@kcl.ac.uk>
    > Subject: an engineer's understanding?
    >
    > Armand de Callatay, in "Computer Simulation Methods to Model
    > Macroeconomics", states that, "An engineer understands... a real system
    > when he can design a (virtual) machine that is functionally equivalent
    > to this system" (The Explanatory Power of Models, ed. Robert Franck,
    > Kluwer 2002, p. 105).
    >
    > Three questions: (1) Is this a correct and complete description of what
    > it means to understand something from an engineering perspective? If so,
    > then (2) are we to articulate our complete understanding of a real
    > system, such as a tool, at least in part by simulating it? (3) If the
    > artifacts of engineering comprise an intellectual tradition, as I think
    > Eugene Ferguson has argued in Engineering and the Mind's Eye (MIT,
    > 2001), then would it not follow that within the tradition only a machine
    > is a proper response to a machine -- and not words, however many,
    > however apt? And does this not have strong implications for how we write
    > a history of our technology?
    >
    > Yours,
    > WM
    >
    >
    > Dr Willard McCarty | Senior Lecturer | Centre for Computing in the
    > Humanities | King's College London | Strand | London WC2R 2LS ||
    > +44 (0)20 7848-2784 fax: -2980 || willard.mccarty@kcl.ac.uk
    > www.kcl.ac.uk/humanities/cch/wlm/
    >
