On Ramsay’s “‘Patacomputing” and “Postconditions”

With the chapter “‘Patacomputing,” Stephen Ramsay returns us to the concept of pataphysics (the science of imaginary solutions), which he introduced us to in the second chapter of Reading Machines, “Potential Literature.” If you recall that reading and its corresponding lecture, you’ll remember that Ramsay drew specifically upon C. P. Snow’s 1959 lecture “The Two Cultures and the Scientific Revolution” and Snow’s call for a third culture, one that combined scientific thought and analysis with the “imaginative experience” of the humanities.1 It is here, in this third culture, Ramsay told us in chapter two, that we would find algorithmic criticism.

As we saw in the first chapter of Reading Machines, computers are really good at certain kinds of textual analysis, such as creating lists of words used by different characters.2 We can ask computers to engage in fairly sophisticated textual analysis, such as the example Ramsay draws from Virginia Woolf’s The Waves, in which he asks a computer program to:

  1. Identify the words spoken by one of the six main characters,
  2. Create a list of the most commonly used words for each of those characters, and
  3. Drop from each list words shared by other characters.

The product was, for each of the six characters, a list of words unique to that character. Using the character of Louis as an example, Ramsay notes that the words unique to Louis reflect concerns that critics have often identified as among Louis’s preoccupations (12). While we can write sophisticated textual analysis programs, their result, Ramsay notes in the “‘Patacomputing” chapter we’re reading this week, is almost invariably a list of some sort. To sum up his discussion of various text analysis programs – WordHord, David Hoover’s study of “vocabulary richness,” TAPoR, HyperPo, and MONK – what Ramsay has to say about MONK could be said of any of these examples: “The ‘result’ of a system like MONK is the same as that for virtually any text-analytical procedure: a textual artifact that, even if recapitulated in the form of an elaborate interactive visualization, remains essentially a list” (80).
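For readers curious about what such a procedure looks like in practice, the three steps can be sketched in a few lines of Python. This is a hypothetical illustration, not Ramsay’s actual code; it assumes step one has already been done, so that `speeches` maps each character’s name to the list of words that character speaks:

```python
from collections import Counter

def distinctive_words(speeches, top_n=50):
    """For each character, return the words on his or her most-common
    list that appear on no other character's most-common list."""
    # Step 2: the most commonly used words for each character
    top = {name: {w for w, _ in Counter(words).most_common(top_n)}
           for name, words in speeches.items()}
    # Step 3: drop from each list the words shared with any other character
    return {name: sorted(words - set().union(*(top[other]
                                               for other in top
                                               if other != name)))
            for name, words in top.items()}
```

Run over the six speakers of The Waves, a function like this would yield exactly the kind of per-character word lists Ramsay describes.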

At this point, it might be all too easy to say “A list? That’s all we’re going to get? Okay, then, I’m done. Back to reading texts the old-fashioned way.” And one could do that, but one would be making the same mistake as the hypothetical user of WordHord who, upon learning that the three most common words in Homer are “man, ship, and god,” thinks “Homer is about ‘man, ship, and god’ in that order. Stop reading right there” (Martin Mueller, “Digital Shakespeare” 123; qtd. in Ramsay 70). The problem with both responses is that while each is entirely accurate as a description of its object (computer-based textual analysis in the one case, the corpus of Homer in the other), neither is a complete description of that object, any more than “an animal that lives in water” is a complete description of a seal.

The purpose of textual analysis, as you might recall, is to describe quantifiable features of a text such as word frequency, rhyme scheme, syntactical structure, words per sentence, adjectival and adverbial density, and so on: Homer is mostly about “man, ship, and god” (69-70); Rudyard Kipling’s children’s book Kim has more “vocabulary richness” than either William Faulkner’s Light in August or Henry James’s The Ambassadors (71-73); Christina Rossetti’s Goblin Market has 3107 words, of which 1130 are unique, and averages 40.4 words per sentence (73-74); and in Goblin Market some of the collocations (that is, words which appear together at rates far above mere chance) are “Laura” and “should” and “Lizzie” and “not” (75-76). While we might look at these lists and think “so what,” we might also look at them and think “huh, I would have said Homer was about…,” or we might ask “why?” or “what does it mean?” It’s these latter responses that lead us somewhere. As Ramsay suggests, “Algorithmic criticism is born at that moment” (71). To quote from the “An Algorithmic Criticism” lecture:

In short, Ramsay is making the argument that while literary computing has focused on using computers to do what computers do best, that is engage in textual analysis (issues related to understanding the features of a text), we can ask computers to analyze texts in ways that produce data that we can then use to identify fruitful lines of interpretive inquiry in order to engage in literary criticism. In other words, while we can’t yet ask computers to interpret literary texts for us, we can use them for more than straightforward analysis to describe the features of a text. While that description (analysis) can be meaningful and important for some kinds of inquiry, we can use that analytical data as a starting point for our own acts of interpretation. This computer-assisted interpretation is what Ramsay names an “algorithmic criticism.”
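A few of the quantifiable features mentioned above (total words, unique words, average words per sentence) are simple enough to compute by hand. The sketch below is a rough illustration of the idea, not any particular tool Ramsay discusses; real text-analysis software tokenizes far more carefully:

```python
import re

def text_profile(text):
    """Count tokens (total words), types (unique words), and average
    words per sentence -- the kinds of figures reported for Goblin
    Market above. A deliberately naive tokenizer for illustration."""
    # Split on sentence-ending punctuation, dropping empty fragments
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    # Treat runs of letters (and apostrophes) as words, ignoring case
    words = re.findall(r"[a-z']+", text.lower())
    return {
        "tokens": len(words),
        "types": len(set(words)),
        "words_per_sentence": round(len(words) / max(len(sentences), 1), 1),
    }
```

Applied to the full text of Goblin Market, a tokenizer tuned like the one behind Ramsay’s figures would report the 3107 tokens, 1130 types, and 40.4 words per sentence cited above; this naive version would land in the same neighborhood but not match exactly.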

And this brings us back both to pataphysics and to Ramsay’s invocation of C. P. Snow. Textual analysis gives us data about texts, and sometimes that data might surprise us (that Homer’s works are primarily about “man, ship, and god” or that Shakespeare’s works are primarily about “lord, man, and sir”; that Kipling’s Kim is richer in vocabulary than William Faulkner’s Light in August or Henry James’s The Ambassadors; that “Lizzie” collocates with “not”; and so on); however, textual analysis doesn’t tell us what these things mean. That is the purview of literary criticism, the narrative explanation we create to account for the data the computer has given us. In other words, the ‘Patacomputing of the chapter’s title is the imaginative explanation (literary criticism) given to the results of one or more sets of scientifically derived observations (computer-assisted textual analysis). In short, ‘patacomputing is algorithmic criticism.

If we think of computers as antithetical to the Humanities, it might be well worth reflecting on the fact that, as Ramsay himself notes, we live “[i]n an age when the computer itself has gone from being a cold arbiter of numerical facts to being a platform for social networking and self-expression” (81). Computers are already deeply a part of the human experience.4 And that has been one of the main points of Ramsay’s book: that computers, like all tools, are human tools to be put to work for human ends. While textual analysis existed long before computers, computers make the work of textual analysis much, much easier. Keeping that in mind, it might be worthwhile to let Ramsay have the last word here. He concludes the “‘Patacomputing” chapter with a note about his hopes for algorithmic criticism:

algorithmic criticism looks forward not to the widespread acknowledgement of its utility but to the day when ‘algorithmic criticism’ seems as odd a term as ‘library-based criticism.’ For by then we will have understood computer-based criticism to be what it has always been: human-based criticism with computers (81).

  1. Snow, you might recall, was both a scientist and a novelist.
  2. Remember, here, Ramsay’s distinction between textual analysis and literary criticism. This may be a good time to reread the lecture on “An Algorithmic Criticism.”
  3. Mueller, Martin. “Digital Shakespeare or Toward a Literary Informatics.” Shakespeare 4.3 (2008): 284-301.
  4. Or, as Walter Ong suggests in “Digitalization Ancient and Modern: Beginnings of Writing and Today’s Computers”: “there is nothing more natural for a human being than the artificial.”
