Humanist Discussion Group

Humanist Archives: May 22, 2019, 11:51 a.m. Humanist 33.33 - events: how language became data

                  Humanist Discussion Group, Vol. 33, No. 33.
            Department of Digital Humanities, King's College London
                   Hosted by King's Digital Lab
                Submit to: humanist@dhhumanist.org

        Date: 2019-05-22 07:43:35+00:00
        From: Blanke, Tobias 
        Subject: Xiaochang Li on how language became data

How Language Became Data: Speech Recognition and Computational Knowledge

Xiaochang Li (Max Planck Institute for the History of Science, Berlin)

Wed, 22 May 2019
16:30 – 18:00 BST
Anatomy Lecture Theatre (King's Building K6.29)
King's College London, Strand Campus

In the 1970s, a team of researchers at IBM began to reorient
the field of automatic speech recognition from the scientific study of
human perception and language towards a startling new mandate: to find
“the natural way for the machine to do it.” In what is recognizable
today as a data-driven, “black box” approach to language processing,
IBM’s Continuous Speech Recognition group set out to meticulously
uncouple computational modelling from the demands of explanation and
interpretability. Automatic speech recognition was refashioned as a
problem of large-scale data acquisition, storage, and classification,
one that was distinct from—if not antithetical to—human perception,
expertise, and understanding. These efforts were pivotal in bringing
language under the purview of data processing, and in doing so helped
draw a narrow form of data-driven computational modelling across diverse
domains and into the sphere of everyday life, spurring the development
of algorithmic techniques that now appear in applications for everything
from machine translation to protein sequencing. The history of automatic
speech recognition invites a glimpse into how making language into data
made data into an imperative, and thus shaped the conceptual and
technical groundwork for what is now one of our most wide-reaching modes
of computational knowledge.

Bio: Xiaochang Li is currently a Postdoctoral Fellow in the Epistemes of
Modern Acoustics research group at the Max Planck Institute for the
History of Science in Berlin. This coming fall, she will be joining the
faculty at Stanford University as Assistant Professor in the department
of Communication. Her current book project examines the history of
predictive text and how the problem of making language computationally
tractable was laid into the foundations of data-driven computational
culture. It traces developments in automatic speech recognition and
natural language processing through the twentieth century, highlighting
their influence on the cultural, technical, and institutional practices
that gave rise to so-called “big data” and machine learning as
privileged and pervasive forms of knowledge work.


This event is part of an ongoing seminar series on "critical inquiry
with and about the digital" hosted by the Department of Digital
Humanities, King's College London.

Unsubscribe at: http://dhhumanist.org/Restricted
List posts to: humanist@dhhumanist.org
List info and archives at: http://dhhumanist.org
Listmember interface at: http://dhhumanist.org/Restricted/
Subscribe at: http://dhhumanist.org/membership_form.php

Editor: Willard McCarty (King's College London, U.K.; Western Sydney University, Australia)
Software designer: Malgosia Askanas (Mind-Crafts)

This site is maintained under a service level agreement by King's Digital Lab.