
Nobody talks like that: a stylometric exercise

Back in March 2022, I was asked if I’d like to be interviewed as part of a research project concerning editing in the 21st century. What the hell, I said: I have close to no real experience of digital editing (unless you count my lovely digital edition of “Through Beatnik Eyeballs”), though I have made a reasonably satisfactory career out of telling other people how they should do it. One thing they really do teach well at Oxford is the ability to sound as if you know what you’re talking about… Anyway, I signed up, and after some vicissitudes was duly interviewed via Skype, sitting in my birdsong-filled garden, some time in June. Some considerable time later (at the end of December, to be precise) I was invited to revise the transcript someone had made of my interview, and did so. I removed some of the more egregious hesitations and a few garden path sentences, but left the bullshit intact, collection of that being after all the object of the exercise. And later still, I learned, my transcribed interview joined a group of fifty or so others on a website, in a proper digital archive no less. This was all very satisfactory, though I was a little disappointed to find that the edited transcriptions were being made available in PDF or RTF only. Are those really now considered to be appropriate long-term preservation formats? I suppose in a world where Boris Johnson can become prime minister, nothing should be surprising. And what happened to the audio? The design goals of this project were firmly based in a part of the forest of the Digital Humanities somewhat removed from linguistic analysis, discourse semantics, problems of speech transcription, or the textual analysis of academic talk. The project was to deliver something readable by human beings, like a book. Quite enough for one grant.

But hoorah for the open-minded spirit of open access, which makes it possible for me (and anyone else so inclined) to play with the resulting resources and do at least some things not originally envisaged in the project design! Entirely unsurprisingly, I spent a happy few days last week downloading the RTF files and converting them to TEI (the scripts and the results are now available in a github repo ). TEI because that’s what I do, but also because I wanted to be able to do textual analysis properly.

My resulting TEI corpus contains 46 small documents, one for each interview, each consisting of a sequence of TEI <sp> elements, with a who attribute supplying a unique code for the speaker. Each element contains one or more paragraphs of text, preceded by the speaker code as given in the transcription (there are a few differences). Like this:

<sp who="#LB">
<speaker>LB</speaker>
<p>I would have liked to have been better paid.</p></sp>
<sp who="#MK"><speaker>MK</speaker><p>Sure.</p></sp>

Each transcription was prefixed by a paragraph of background information about the interviewee, which I banished to the <front> of the document, as a source of metadata. I also created a rudimentary TEI Header for each document.

Importing my 46 documents into TXM, I found the corpus had a total of 237,271 words. I made a partition on the basis of the @who attribute, so that all the words for each distinct speaker were grouped together. Here’s the bar graph from TXM showing how many words were associated with each of the 51 distinct speakers. It shows that one speaker (JOS) talks a lot more than anyone else: but this is unsurprising, since he is one of the two interviewers, and I have simply aggregated his side of each discussion irrespective of participant. I did the same for the other interviewer, MK, who has fewer interviews (14 as opposed to 32 for JOS); at 6000 words, he actually talks less than the three most garrulous interviewees (JC, AG, and RR), all of whom manage more than 6500 words. At the other end of the scale, there are five interviewees who hover around 2000 words apiece. The bulk of respondents fall comfortably between these extremes.

What can I do with this data? Well, treating it just as data, it might be interesting to see whether the frequency with which words, or lemmas, or POS codes appear in each speaker’s chunk is much the same, or whether some stylometric statistic can be used to group like-speaking speakers together. Does everyone talk in more or less the same way or (ex hypothesi) do professors and old lags like me talk differently from early career researchers? It’s not quite the typical stylometric use-case (which tries to establish probable authorship on the basis of similarities) but close. Fortunately for the mathematically challenged, there exists a fairly well established range of tools designed to explore such matters. Unfortunately for the mathemagically challenged (amongst whom I unreservedly place myself), you do need to know what you’re doing with these really quite sharp-edged tools. So please forgive any idiocies in what follows…

I played with TXM and with Stylo, both of which claim to be usable by the non-specialist, and both of which have (interestingly different, but that’s another story) user interfaces. TXM has the great advantage of accepting TEI XML as input and treating it sensibly. Stylo requires me to pre-process my XML text into dumbed-down plain ASCII, using arbitrary naming conventions to provide metadata. Both produce fancy graphics.
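That pre-processing is mechanical enough to sketch. Something along the following lines (a rough reconstruction, not the script I actually used) turns the TEI <sp> elements into one plain-text file per speaker, with the metadata smuggled into the filename; the “all_” prefix and the directory names are guesses at a plausible convention, echoing the “allJOS” labels visible in the plots below.

```python
from collections import defaultdict
from pathlib import Path
from lxml import etree

TEI = {"tei": "http://www.tei-c.org/ns/1.0"}
words_by_speaker = defaultdict(list)

# gather every <sp> in every interview, keyed by its @who code
for doc in Path("tei").glob("*.xml"):
    tree = etree.parse(str(doc))
    for sp in tree.findall(".//tei:sp", namespaces=TEI):
        who = (sp.get("who") or "").lstrip("#")
        text = " ".join(p.xpath("string()") for p in sp.findall("tei:p", namespaces=TEI))
        words_by_speaker[who].append(text)

# one dumbed-down plain text file per speaker, metadata in the filename only
outdir = Path("stylo_corpus")
outdir.mkdir(exist_ok=True)
for who, chunks in words_by_speaker.items():
    (outdir / f"all_{who}.txt").write_text("\n".join(chunks), encoding="utf-8")
```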

Here, for example, is a dendrogram from Stylo, showing how my 50 locuteurs cluster together, if we look just at the highest frequency lemmas. If I interpret this correctly, it shows the interviewers (“allJOS” and “allDK”) grouped together and distinct from all other respondents, which seems reassuring.

This is even more evident in the Principal Component Analysis also produced with Stylo, which shows JOS and MK as complete outliers.

And TXM provides further confirmation of this from a lexical perspective:

I think what this is telling me is not only that JOS and MK are outliers, miles from all the other documents shown here only as a red splodge in the blue cloud, but also that the words they favour are characteristically to do with their role as interviewers (What, future,  question, projects, maybe etc.)  Or so I believe. But clearly I am going to have to do a lot more background reading before I can say anything really interesting about this little dataset…

An experiment in CLS

Some time ago, I agreed to participate along with several others much smarter than me in COST Action Work Group 3. The goals of this work group were, amongst other things, to run a small experiment in counting verb frequencies on ELTeC texts enhanced with POS and lemma information. It took a surprisingly long time to find out exactly what contribution was required of me, and I make no claim to have got it right even now. But here’s what I thought I was doing.

First, I wrote an insultingly simple XSL stylesheet to produce a list, in descending frequency order, of verbal lemmas in each of the (now) 10 ELTeC level 2 corpora. For example, here’s the start of the file rom/verbFreq.xml:

<frequencies>
 <lemma form="face" freq="30919"/>
 <lemma form="avea" freq="29391"/>
 <lemma form="zice" freq="22673"/>
 <!-- … and so on for several hundred more lines -->
</frequencies>

… which tells us that in our data Romanian’s favourite verb has the lemma face, and the next favourite is avea. The code for doing this is (like all the rest of the code described here) in the github repo COST-ELteC/ELTeC-data/Scripts, if you care: it’s called, imaginatively, verbFreqs.xsl.
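For the record, here’s roughly what that reduction looks like if you’d rather do it in Python than XSLT — not the project’s verbFreqs.xsl, just a sketch assuming the ELTeC level 2 convention of <w> elements carrying @lemma and @pos (with “VERB” marking verbs); the directory layout is illustrative.

```python
from collections import Counter
from pathlib import Path
from lxml import etree

TEI = {"tei": "http://www.tei-c.org/ns/1.0"}
freq = Counter()

# accumulate verb-lemma counts over every level 2 file in one corpus
for novel in Path("rom/level2").glob("*.xml"):
    tree = etree.parse(str(novel))
    freq.update(tree.xpath("//tei:w[@pos='VERB']/@lemma", namespaces=TEI))

# print the top of the list in the same shape as verbFreq.xml
for lemma, n in freq.most_common(10):
    print(f'<lemma form="{lemma}" freq="{n}"/>')
```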

Next, I wrote another simple-minded script to extract from each novel a bag of words, with no markup or punctuation: just all the verbs, for example, or all the nouns, in their order of appearance in the text. So that celebrated work Hard Times, which begins in the original like this

<div type="group">
 <head>BOOK THE FIRST <hi>SOWING</hi></head>
 <div type="chapter">
  <head>CHAPTER I THE ONE THING NEEDFUL</head>
  <p>‘<hi>Now</hi>, what I want is, Facts. Teach these boys and girls nothing but Facts. Facts alone are wanted in life. Plant nothing else, and root out everything else. You can only form the minds of reasoning animals upon Facts: nothing else will ever be of any service to them.</p>
  <!-- … -->
 </div>
 <!-- … -->
</div>

generates a bag of words starting like this

want be teach be want|wanted plant root form be…    

if I ask for VERB lemmas, or like this

book sowing|sow chapter thing fact boy girl fact fact life mind reasoning|reason animal fact    service 

if I ask for NOUN lemmas. You may wish to complain about the behaviour of the lemmatizer here, but I am taking the path of least resistance and using whatever treetagger (in this case) produces without cavil. This deplorable laziness returns to bite me further below…

I wrote some Python to run the XSLT script filter.xsl which does this task: the script is called filter.py, and it uses a Python interface to the Saxon C processor, which I was very pleased with myself about when I got it working; less so later, see below. There’s more mundane detail of how to run it in the README in the Scripts folder.
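For what it’s worth, the shape of that driver is easy to sketch with the current saxonche package (the successor to the Saxon C Python bindings I used); the stylesheet parameter name “pos” and the output layout are my inventions here, not necessarily what filter.xsl expects.

```python
from pathlib import Path
from saxonche import PySaxonProcessor

with PySaxonProcessor(license=False) as proc:
    xslt = proc.new_xslt30_processor()
    executable = xslt.compile_stylesheet(stylesheet_file="filter.xsl")
    # hypothetical parameter telling the stylesheet which POS to keep
    executable.set_parameter("pos", proc.make_string_value("VERB"))
    for novel in Path("rom/level2").glob("*.xml"):
        executable.transform_to_file(source_file=str(novel),
                                     output_file=f"bags/{novel.stem}_VERB.txt")
```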

If still awake, you are probably wondering what the point of all this was. And here comes the scientific bit. The little workgroup I had signed up for wished to test a Hypothesis, which (if I understand it correctly) might be crudely summarized thusly:

  • The European novel undergoes some sort of seismic shift around the turn of the twentieth century, which is popularly known as The Rise of Modernism
  • Modernism has many stylistic correlatives, but they include notably a focus on the interior life of characters, on sensation and feeling, rather than on objective omniscient narrative
  • If this is true, we should expect to see a change in the frequency with which verbs associated with that ‘inner life’ appear over time.

I hope you can see where we are going with this, now. All we need is a reasonably plausible list of verbs which express aspects of ‘inner life’. And so, for the next few months, with zoom and email and similar modern contrivances, the group theorized how to actually produce such a list. I may have fallen asleep during the process and missed something critical, but eventually (I think) it was decided that we would explore two approaches to identifying our list. Firstly, we’d ask language experts to vote for their top ten “inner” verbs. Secondly, we’d use a statistical procedure (word vector embedding) to identify a list of candidate verbs automagically. Then we’d compare the results, declare victory, and move on.

What could possibly go wrong? Well, at least two things.

Firstly, the ask-an-expert approach turned out to be less successful than it might have been, largely for purely logistical reasons. If we had asked the experts simply to review the existing verb frequency lists for their language and identify in them those verbs which were indubitably and always betokeners of interiority, plus any others which were a bit thus inclined sometimes, then we might have got our results a bit faster. But we didn’t, and the experts, understandably a bit mystified by the whole process, gave us lists which varied widely in their format and scope. So I found myself having to tweak and readjust their contributions, to remove duplicates and ambiguity. As for the automagical procedure, it proved a little challenging for most participants to run, if only because it required access to a machine capable of running Google’s word2vec program which is not meant for your average laptop. In any case, you can see the resulting word lists in the file innerVerbs.xml which I hope is fairly self-explanatory.
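The automagical route is less forbidding if you use gensim’s implementation of word2vec rather than Google’s original C program; the sketch below feeds it the bag-of-words files and asks for neighbours of a few hand-picked seed verbs. The seeds, parameters, and filenames are illustrative, not the workgroup’s actual settings.

```python
from pathlib import Path
from gensim.models import Word2Vec

# one "sentence" per novel: its sequence of verb lemmas, as produced above
sentences = [bag.read_text(encoding="utf-8").split()
             for bag in Path("bags").glob("*_VERB.txt")]

model = Word2Vec(sentences, vector_size=100, window=5, min_count=10, sg=1, workers=4)

seed = ["feel", "think", "believe", "remember"]   # hand-picked "inner" verbs
for lemma, score in model.wv.most_similar(positive=seed, topn=30):
    print(f"{lemma}\t{score:.3f}")
```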

Secondly, my simplistic notion of ‘lemma’ turned out to be problematic. As you noticed above, when unable to choose between two alternatives, treetagger obligingly gives you both of them, separated by a vertical bar. That’s no problem for me: I just discard the alternative. But other lemmatizers behave differently. For example, in our Portuguese data, the lemmas for reflexive verbs are suffixed by a # and an indication of person. In our Hungarian data, spelling variations of the same basic lemma are sometimes presented as different lemmas. In the first case, should I simply ignore the part of the lemma after the #? In the second, should I aggregate all the differently spelled variants and consider matches for any of them as equivalent? As usual in computational linguistics, it all depends what you think you’re counting…

Despite these metalinguistic anxieties, I wrote a (needlessly complicated) python script called verbCount.py to count the frequencies of the inner verbs through time, comparing the things-called-lemmas in our various lists of inner verbs with the things-identified-as-lemmas in the level2-encoded files. Invoking various XSLT scripts and Saxon C as before, this script grudgingly churned out a file for each corpus under examination, with a row for each title and a column for each inner verb, like this:

extId    year verbs innerVerbs aimer connaître croire entendre regarder savoir sembler trouver voir vouloir
FRA00101 1860 3889  310        17    9         28     22       18       52     5       47      83   29
FRA00102 1883 5499  465        112   21        38     16       17       55     32      30      77   67
FRA00201 1910 7577  682        26    20        41     75       96       63     49      93      128  91

I say ‘grudgingly’ because the script was obliged to process the whole of every file in order to extract a year of publication from its TEI header, and consequently ran with noticeable slowness. If I’d thought to include the year of publication along with other metadata in the filename of the “bag of words” I could have used that instead, which would have been much quicker. Maybe if I get a better set of inner life verbs I’ll revise the scripts to do so.
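Stripped of the XSLT machinery, the counting step itself is simple enough; here’s a cut-down sketch of what verbCount.py does (the real script is in the repo), reading the bag-of-words files, discarding treetagger’s “word|word” alternatives as described below, and taking the shortcut of looking the year up from a little metadata table rather than from the TEI header.

```python
import csv
from collections import Counter
from pathlib import Path

inner = {"aimer", "connaître", "croire", "entendre", "regarder",
         "savoir", "sembler", "trouver", "voir", "vouloir"}
years = {"FRA00101": 1860, "FRA00102": 1883}   # illustrative; really read from the TEI headers

with open("innerVerbCounts.csv", "w", newline="", encoding="utf-8") as out:
    writer = csv.writer(out)
    writer.writerow(["extId", "year", "verbs", "innerVerbs"] + sorted(inner))
    for bag in sorted(Path("bags").glob("FRA*_VERB.txt")):
        # keep only the first alternative when the lemmatizer hedges with "a|b"
        lemmas = [tok.split("|")[0] for tok in bag.read_text(encoding="utf-8").split()]
        counts = Counter(l for l in lemmas if l in inner)
        ext_id = bag.stem.split("_")[0]
        writer.writerow([ext_id, years.get(ext_id, ""), len(lemmas), sum(counts.values())]
                        + [counts[v] for v in sorted(inner)])
```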

Anyway, we now have a bunch of CSV files. And why? Because my colleague Diana has produced some R scripts which will plot this data set so everyone can understand it. Or at least look at it. Here’s what we get for some of the Portuguese data:

[Figure: innerVerbs.png — frequencies of the selected inner-life verbs over time in the Portuguese data]

I leave it to the statistically-informed to interpret this and other similar results. The closing conference of the COST Action, taking place next week, includes a paper (on which I am somewhat embarrassingly cited as co-author) presenting the results in more detail.

Seven steps to Ossian

A TEI transcription of the 1773 edition of James Macpherson’s “translations” of the works of “Ossian”

Why would anyone want such a thing? I can’t imagine, but here’s how I made this one. It turned out to be a seven step process — so far. You can check out each stage from this github repo, if you’re really curious…

1. Decide which PDF to work from

You might think that one library’s digitized copy of “the 1773 edition of Ossian” would be much the same as another’s. But no. There are variations in the physical state of the originals, and the PDF format in which the digitization is made available may also vary. I downloaded three different digitized versions from the Internet Archive, but mainly I used the PDF version of the copy preserved at the National Library of Scotland (https://ia802302.us.archive.org/33/items/poemsofossiantra11macp/poemsofossiantra11macp.pdf). I say “mainly” because that particular PDF file had a curious glitch in it which made some of the half-titles disappear when extracted as separate image files. I supplied the missing text from the PDF version of the New York Public Library’s copy.

2. Extract images from PDF

$ pdfimages [filename.pdf] [outputPrefix]

I am too lazy to install anything clever, so I use tried and tested ancient command line Unix tools, like pdfimages. Applying this to my chosen PDF file, I find that each page in the NLS PDF produces three files: two in PPM format which appear to be masks, and one in grayscale representing the page, in negative form. I extract the page image and save it in my img folder, ready for the next stage.

3. Do OCR using medieval rules

$ tesseract [inputfile] [outputfile] -l enm

As noted above, I have a preference for old-fashioned command-line Unix tools, and tesseract, once instructed to use an appropriate language model (enm, rather than eng), actually does a pretty good job of recognizing 18th century typography. It consistently fails on ligatured “ct” and a few other oddities, but is much better than I expected at distinguishing long-s from f. Most of its errors seem to be due to poor image quality. At the end of the process, I have about eight hundred text files, each corresponding with one page of the source, and most of them containing plausible text, which I save in the txt folder.
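Driving that over eight hundred page images is a one-liner in a shell loop, or, for the allergic, a few lines of Python along these lines (a sketch: the img/ and txt/ layout and the .pgm glob are assumptions about what the previous step left behind; tesseract itself appends the .txt extension).

```python
import subprocess
from pathlib import Path

Path("txt").mkdir(exist_ok=True)
for img in sorted(Path("img").glob("*.pgm")):   # adjust the glob to whatever pdfimages produced
    out_base = Path("txt") / img.stem           # tesseract writes out_base + ".txt"
    subprocess.run(["tesseract", str(img), str(out_base), "-l", "enm"], check=True)
```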

4. Hand-check, page by page, introducing minimal non-xml markup

I then (and this is where the time goes) proofread each one, introducing some absolutely minimal markup, of my own invention. The cheatsheet reads as follows:

  • introduce a — line at start and end of text on page
  • introduce a == line at start and end of note-text on page
  • introduce a blank line between paras but otherwise retain linebreaks
  • introduce an extra hyphen following end-of-line hyphens which are to be retained
  • replace * or +1 sigla for notes and note references with @ and a sequence number
  • use entity references for long dash, accented letters etc
  • use “ for open double quotes
  • retain forme work on a single line
  • delimit smallcaps with { … }
  • delimit italicized phrases with {{ … }}
  • use % to mark the start of a dramatic style speech
  • add \ at end of verse lines
  • add $ at end of speech
  • add &end; at end of an argument or other chunk

I made the corrections using, need you ask, emacs aided and abetted with some perl one-liners for bulk corrections. Reading Ossian is an odd way to pass time during lockdown, but no worse than some of the other sanity-preserving expedients one reads about on Twitter. A good soundtrack seems to be almost any Sibelius symphony.

5. Transform and (slightly) reorganize the textfiles into proper XML, one per text

perl streamer.prl v1files.txt

This is not the most elegant or indeed sanitary code I have ever written; it also took quite a few iterations to get it working acceptably, which I defined as generating well-formed XML. It reads in a list of filenames, interspersed with flags to tell it when to start a new output file, and what its initial page number should be. Then it processes in succession each page of transcribed text, building up one string containing all the text chunks for a work, and another containing all the footnotes. Footnotes often span pages, of course. The resulting strings are then output as two separate XML <div> elements. Their contents also acquire some minimal XML tagging (<pb/>, <hi>, <p>, <sp> etc.) before they get flushed out. I gave up trying to overcome some inelegant results of a not particularly elegant process. The code is in the Scripts folder of this repo for the morbidly curious; the results are in the xml folder: at least they are well-formed XML.
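To give a flavour of what the script has to do (this is an illustration, not an excerpt from streamer.prl), here are a couple of the simpler substitutions, turning the homegrown {{…}} and {…} conventions into TEI <hi> elements; the rend values are my guesses at sensible ones.

```python
import re

def tag_inline(s: str) -> str:
    # {{...}} marks italicized phrases; do this first so single braces are left for smallcaps
    s = re.sub(r"\{\{(.+?)\}\}", r'<hi rend="italic">\1</hi>', s, flags=re.S)
    # {...} marks smallcaps
    s = re.sub(r"\{(.+?)\}", r'<hi rend="smallcaps">\1</hi>', s, flags=re.S)
    return s

print(tag_inline("{Fingal} advanced in the {{gloom of his grief}}."))
# <hi rend="smallcaps">Fingal</hi> advanced in the <hi rend="italic">gloom of his grief</hi>.
```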

6. Run XSLT scripts to convert this stuff to kosher TEI documents and validate same.

Since this version is going to be my contribution to the “Ossian Online” project, it should probably follow that project’s usage and TEI practices. Alas, they do not have an ODD to tell me what that should be, and their files are apparently validated against TEI-All. But they do have a reasonable amount of documentation, and enough files already available online for me to be able to construct an ODD automagically (take a bow, oddbyexample.xsl, a well-kept secret inside the TEI P5 Utilities repository) and thus a schema I can use to validate my TEI files when I have finished licking them into shape.

As ever, the fun part of the project is seeing how much of the remaining data-mungeing can be scripted in XSLT. Quite a lot, it transpires, though it remains necessary to hand-craft the details of titlepages, tables of contents etc. Another complete sweep through the text checking for miscellaneous things like the following is also needed:

  • words broken by a pagebreak but not properly reassembled (happens occasionally)
  • quotations not marked as such
  • verse lines not marked as such
  • code-switching
  • residual OCR errors (there are always residual OCR errors)

Before launching into that campaign, I checked the <pb/> elements introduced at stage 5 against the page numbering of the original as preserved in the paratextual comments of my transcription. Somewhat to my surprise, the page-numbering corresponded exactly with the number of such elements, enabling me to construct both a reliable reference system and reliable page image links as values for the facs attribute on each <pb/>.

7. Decide on the macrostructure

The Ossian Online project uses <div> elements for every subdivision of the 1773 edition, at whatever level, all the way down from its two volumes to the arguments of individual poems. It takes the perfectly reasonable view that every text can be organized as an ordered hierarchy of uniformly nested objects. As a consequence, the @type attribute for <div> has to do quite a lot of heavy lifting. Odd_by_example enumerates its values as follows:

  • advertisement
  • argument
  • book
  • contents
  • dedication
  • dissertation
  • duan
  • fragment
  • maintext
  • poem
  • preface

This list combines types that have a structural function (fragment, duan, book) with others that are purely descriptive (advertisement, argument, poem). Nothing wrong with that, but I still find this “divs all the way down” approach somewhat problematic, and that for two reasons. Firstly, a <div> is supposedly something incomplete, which is true of (for example) the argument prefixed to each poem or book, but not of the book or poem itself. Secondly, the relation between the argument and the poem requires that the two be siblings within some larger entity, but the poem is not really an incomplete part of that entity in quite the same way as the argument. Furthermore, in the 1773 edition, we have some texts which are undivided (Carricthura for example) along with others which are divided variously into “duan”s (Cathlona) or “book”s (Fingal). Should each book of Fingal be treated as a single text? Should the whole of Fingal be treated as a single text? Can’t we have both?

Values such as maintext and poem in the list above ring alarm bells indicating that these ontological issues are being evaded. Since the TEI in its wisdom already provides a mechanism for coping with exactly this (not at all unusual) kind of macrostructure, why not use it? I refer of course to the element <group>.

My version of the 1773 edition prefers to treat each distinct work as a <text>, rather than a <div type='maintext'>. Within it, there is a <front> containing a titlepage or half title and the argument, followed by a <body>, if the work is not further subdivided, or by a <group> if it is. A <group> combines a number of lower-level <text> elements, each again with a <front> and a <body>. I also treat each of the two volumes of the 1773 edition as a <group>. The file driver.tei embeds each file in the structure using XInclude; it is commented to explain what’s going on (a bit).

It remains to be seen what my colleagues in Galway make of this radical re-organisation, to say nothing of my perverse desire to retain the long-s form. But at least changing to a format which matches more exactly that of the (excellent) work already done on the Ossian Online site will be a Simple Matter Of Programming.

An experiment in counting the books

A couple of years ago I spent some time trying to determine which of the titles in the wonderful “At the Circulating Library” (ATCL) database were freely available online in digital form. This was for largely pragmatic reasons to do with building the ELTeC English language collection: other blog entries describe the method I used and some preliminary results. It’s not as easy as you might suppose to download reliable catalogue information from most digital libraries, nor is it always readily tractable when you do. After some experimentation, I hit on the idea of creating a magic key, a kind of fingerprint, derived from the title and author name as specified, which could then be matched against keys in the same format derived from ATCL entries.

More recently, it occurred to me that this data might also provide some interesting numbers to contribute to current debates about digitization priorities. Exactly why some titles make it to Project Gutenberg, or the HathiTrust, or the Internet Archive and others don’t is not a question to which simple un-nuanced answers are likely or even (maybe) possible, but it is still worth asking. Those responsible for the digitization efforts of major libraries are a little coy about the principles on which books are chosen for digitization, or even whether they actually have explicit selection policies, for some reason. I assume that there is a difficult tightrope walk between, on the one hand, practical but purely adventitious matters (the relative locations of volume and scanner, the size and state of the volume, the time of day, the temperament of the scanner operator, etc.) and, on the other, principled criteria aiming to ensure a balance of, say, titles by female and male authors, or high and low brow, date of production, longevity of readership, and so on. It would be surprising if the choices were completely unrelated to characteristics of the population being sampled, or totally failed to reflect the cultural priorities of the scanning operation; the same uncertainties apply, of course, to the collection being sampled for digitization itself.

Anyway, I recently read an interesting article by Allen Riddell and Troy Bassett (“What library digitization leaves out”; preprint available from https://arxiv.org/abs/2009.00513)  which reports that in the data they looked at – the comparatively small sample of surviving English novels published in 1836 and 1838 – shorter books, and books with male authors are disproportionately more likely to be digitized. I naturally wondered whether this applies equally well across the whole of the 19th century.  Which is what led me to revisit my efforts of two years ago. But first, here are the results.

There are 19,912 titles in the current ATCL database. Of these, 9152 (46%) have authors identified in the database as male, 9809 (49%) are identified as female, and 951 (4%) are identified as unknown. These relative proportions are rather different if we look at titles with at least one digital surrogate, of which there are in total 9099 (45%). Of these 9099 digitized texts, we find 5221 (57%) are of male authorship, 3718 (41%) of female authorship, and 160 (2%) are unsexed.

Look at that again. Although there are actually more titles available for digitization from female authors than from male, the number that actually gets digitized is significantly smaller (if, like me, you think a gap of 16 percentage points is pretty significant). Hmmm. These counts of course derive from the whole period covered by ATCL, from 1800 to 1900, so I also calculated them for each decade, only to find that the proportions and their imbalance remain fairly consistent across the century. And this despite huge changes in the numbers: for the last decade of the century ATCL lists nearly 6000 titles, a six-fold increase on (for example) the 1840s. What percentage of those titles were digitized? In both decades, over 51%. And what proportion of those digitized titles were male-authored? In both decades, 62%. There is some variability across the decades, but the basic picture remains the same.

One possible explanation might be that titles with unknown or unsexable authorship (e.g. the ubiquitous “Anonymous”) are more likely to have been female, and that hence we are not seeing all the truly female authors. But even were this the case (after all, why should we not equally well hypothesize that male authors might be bashful or crave secrecy?), the proportions for books ostensibly male-authored with respect to books ostensibly not male-authored (i.e. those classed as either F or U by ATCL) remain stubbornly higher than the proportions for books definitely not male-authored. And indeed, the same mutatis mutandis is true for the ostensibly-female to ostensibly-not-female ratio.

Here’s a table showing the raw counts:

Decade   All     “Male”  “Female”  “U”   A-dig   M-dig   F-dig   U-dig
Total    19912    9152     9809    951    9099    5221    3718     160
1830s      482     256      174     52     250     164      85       1
1840s     1037     543      422     72     538     334     202       2
1850s     1483     595      778    110     718     347     358      13
1860s     2341    1019     1093    229    1015     540     456      19
1870s     2866    1189     1514    163    1300     642     633      25
1880s     4126    1693     2287    146    1765     945     782      38
1890s     5979    2995     2863    121    3092    1929    1103      60

And here’s another showing the percentages (Ad% is the proportion of all titles that have been digitized; M% the proportion of all titles that are male-authored; Md% the proportion of digitized titles that are male-authored; and similarly for F/Fd and U/Ud):

Decade    Ad%      M%       Md%      F%       Fd%      U%       Ud%
Total    45.70%   45.96%   57.38%   49.26%   40.86%    4.78%   1.76%
1830s    51.87%   53.11%   65.60%   36.10%   34.00%   10.79%   0.40%
1840s    51.88%   52.36%   62.08%   40.69%   37.55%    6.94%   0.37%
1850s    48.42%   40.12%   48.33%   52.46%   49.86%    7.42%   1.81%
1860s    43.36%   43.53%   53.20%   46.69%   44.93%    9.78%   1.87%
1870s    45.36%   41.49%   49.38%   52.83%   48.69%    5.69%   1.92%
1880s    42.78%   41.03%   53.54%   55.43%   44.31%    3.54%   2.15%
1890s    51.71%   50.09%   62.39%   47.88%   35.67%    2.02%   1.94%

In an ideal world, you’d expect the percentages for titles with male authors (M%)  and for digitized titles with male authors (Md%)  to be roughly the same, right?  Think on… And feel free to download the csv file behind these tables for your own experimentation.
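For anyone who wants to check the arithmetic, the percentage table can be recomputed directly from the raw counts; a minimal sketch using just two of the rows:

```python
# cross-check: Ad% = digitized/all, M% = male/all, Md% = male-digitized/digitized, and so on
rows = {
    "Total": (19912, 9152, 9809, 951, 9099, 5221, 3718, 160),
    "1890s": (5979, 2995, 2863, 121, 3092, 1929, 1103, 60),
}
for decade, (all_t, m, f, u, a_dig, m_dig, f_dig, u_dig) in rows.items():
    print(decade,
          f"Ad%={a_dig/all_t:.2%}", f"M%={m/all_t:.2%}", f"Md%={m_dig/a_dig:.2%}",
          f"F%={f/all_t:.2%}", f"Fd%={f_dig/a_dig:.2%}",
          f"U%={u/all_t:.2%}", f"Ud%={u_dig/a_dig:.2%}")
# Total Ad%=45.70% M%=45.96% Md%=57.38% F%=49.26% Fd%=40.86% U%=4.78% Ud%=1.76%
# 1890s Ad%=51.71% M%=50.09% Md%=62.39% F%=47.88% Fd%=35.67% U%=2.02% Ud%=1.94%
```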

One should always suspect the data, so I make no excuse for the following detailed blow by blow account of how I got these numbers. Full gruesome details, including the scripts mentioned below, are available from https://github.com/lb42/bookLists

The basic method was to download a complete catalogue of relevant titles available from each target digital library, and then try to match them with records in the ATCL. For Google Books, which seemingly does not provide a complete catalogue online, I tried a different method, discussed further below.

I started by downloading the latest (June 2020) dump of the ATCL database, and converting it to a basic TEI XML format. I then did much the same for the holdings of five digital libraries with good holdings of 19th century novels: the Hathi Trust, the British Library, the Internet Archive, Project Gutenberg, and Google Books. As a control, and for testing purposes, I also looked at a few smaller collections, notably the Victorian Women Writers Project at Indiana University and the (now defunct) University of Adelaide “ebooks” repository. I wanted to provide something similar to John Mark Ockerbloom’s lovely Online Books Pages at https://onlinebooks.library.upenn.edu/ but more precisely tied in to ATCL.

Hathi Trust makes available a monthly dump of their entire collection as a huge tab-delimited file. Working with the most recent dump, dated September 1 2020, I used a simple-minded perl script `hathiProcess.prl` to parse this file and select from it only freely-available English language books published in Great Britain between 1800 and 1920; an XSLT stylesheet `htConv.xsl` then converted the results to the common project format (CPF).

The British Library website makes available an Excel spreadsheet providing metadata for the titles from their collection which were digitized some time ago by the Microsoft Books project. I downloaded this, converted it to TEI with `csvtotei`, and converted the result to CPF (selecting just the 19th century titles) with `blConv.xsl`.

Project Gutenberg makes available several versions of its catalogue data. I worked with the most recently updated one, which is a vast archive of unbelievably verbose RDF files. Despite its complexity, this data doesn’t include any publication data for the source texts concerned (unsurprising really), though it does provide birth and death dates for the authors. To cut down the numbers a little, I rejected titles whose authors were not born during the 19th century, and also those which specified a MARC relator field “edt” (to cut out non-original editions). Once I had remembered how on earth to handle a gazillion tiny files of RDF (I did this back in 2018), I used the `gutConvRDF.xsl` script to process them all to CPF, and concatenated the results into a single file.

The Internet Archive, so far as I can see, doesn’t have any generally available or downloadable catalogue, though it does have a really good query interface. The method I used for attacking Google Books would presumably work equally well (or equally badly; see below) in this case, but I haven’t tried it. Instead I just used a predefined collection called `19thcennov` which someone at the University of Illinois at Urbana-Champaign thoughtfully created back in December 2008. This gave me 7828 XML records which were easily converted to CPF using `iaConv.xsl`.

The common project format files all consist of TEI <bibl> elements with either an @xml:id attribute or an <idno> specifying the identifying code for this item in the relevant repository, e.g. `ia:foreignersnovel03pric` identifies the Internet Archive’s digitization of volume three of Eleanor Price’s novel “The Foreigners”. Each <bibl> also has an @n attribute supplying the magic key for the title, which is confected as follows:

  • remove the full stop following Mr or Mrs in any title containing one
  • take the substring of the title up to the first occurrence of one of the punctuation marks . , : ; or /
  • concatenate this with the author’s last name
  • convert to lower case and remove all punctuation characters and spaces

So, for example ATCL lists a work with the title “The Foreigners: A Novel” attributed to author “Eleanor C. Price”. The same work appears in the Internet Archive list, but with the author “Price, Eleanor C. (Eleanor Catherine)” and the title “The foreigners : a novel”. Despite the differing strings, both will get the same magic key “theforeigners|price”. This method is far from bullet proof, but it’s serviceable.
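In code, the recipe comes down to something like this (a sketch of the idea rather than the exact implementation in the bookLists repo; keeping the “|” separator out of the punctuation-stripping is my reading of the recipe):

```python
import re

def magic_key(title: str, author_last: str) -> str:
    title = re.sub(r"\b(Mr|Mrs)\.", r"\1", title)        # drop the stop after Mr/Mrs
    title = re.split(r"[.,:;/]", title, maxsplit=1)[0]   # keep text up to first . , : ; or /
    key = f"{title}|{author_last}".lower()
    return re.sub(r"[^\w|]", "", key)                    # drop punctuation and spaces

print(magic_key("The Foreigners: A Novel", "Price"))
print(magic_key("The foreigners : a novel", "Price"))
# both print: theforeigners|price
```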

For Google Books, as noted above, there is no readily downloadable catalogue. But there is an API, which in a moment of madness I thought it might be cool to learn how to use. A day of poking around led me to a neat python script some helpful person had written to look up ISBN numbers (hat tip to AO8’s treasury), which I mercilessly hacked to my own purposes. My version reads a file of URL-encoded search requests like this “inTitle:the+inTitle:foreigners+inAuthor:Price”, fires them at the Google API, and processes the returns into a rudimentary bibl or a comment lamenting the absence or unavailability of the item in question. The file of search requests is rather long (one for each title in ATCL for which I have not yet found any digital version – a total of 11,203) so I make the program sleep for a while after firing off about 40 consecutive requests, to help the Google server catch up. Despite this considerate behaviour on my part, it did not take Mr Google long to decide that my program (or my IP address) was a threat, and then to start returning unco-operative HTTP messages like 503 (“Service Unavailable”) and 429 (“Too many requests”). The API Help pages confirm that Google considers “using an app, program or script to perform a large number of searches in a short time” prima facie justification for temporarily blocking the IP address in question; though it’s not clear what exactly is meant by “large” (more than 100?) and “short” (less than a minute?) in that phrase. Furthermore, when I search using my specially-minted API key, there seems to be a hard limit of 1000 queries per day in any case: so this job is not going to be finished very quickly. Still, I do now have an extra 1517 records to show for two days’ work.
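The gist of that script, minus the TEI output, looks something like the following sketch against the public Google Books volumes endpoint; the documented query keywords are lowercase intitle:/inauthor:, the key is a placeholder, and the pause lengths are guesswork rather than anything Google recommends.

```python
import time
import requests

API = "https://www.googleapis.com/books/v1/volumes"
queries = ["intitle:the intitle:foreigners inauthor:Price"]   # really read from a file

for i, q in enumerate(queries, start=1):
    r = requests.get(API, params={"q": q, "key": "MY_API_KEY"}, timeout=30)
    if r.status_code in (429, 503):          # back off when Google gets suspicious
        time.sleep(300)
        continue
    r.raise_for_status()
    print(q, "->", r.json().get("totalItems", 0), "hit(s)")
    if i % 40 == 0:                          # be polite: rest after every 40 requests
        time.sleep(60)
```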

Once I’ve created all these lists, I run the merger.xsl script to add <ref> elements to the ATCL-TEI file I created in the first step. This makes for some redundancy, for two reasons: firstly, for most of the archives a three volume novel is likely to get a separate entry for each volume; secondly, for many titles, there exist multiple digitizations – which may (or may not) derive from the same source. The following table shows for each archive the number of records selected for processing, the number of references to ATCL titles found, and the number of titles affected. Note that I haven’t yet done any de-duplication to remove overlaps.

Archive             Records processed   ATCL references   ATCL titles
British Library                 62015              9920          5104
Hathi Trust                    460070             18891          5655
Internet Archive                 7829              4691          1655
Project Gutenberg               38338              2880          2275
Google Books                        ?              1517          1517

I haven’t made available the CPF files for each archive, nor the final merged TEI version of the ATCL dump, since this is not really my data to share. But I have made available a file called atcl-links.csv, which is a spreadsheet with a row for each ATCL title digitized in one or more publicly available digital collection, mapping its ATCL identifier to its identifier in each repo. I’ll  update these as and when the data improves.

(Another reason) Why I love the Internet

So, at the recent TEI Conference in Vienna, Elisa and I were indulging in a little mutual admiration of our knowledge of an obscure work entitled Thalaba the Destroyer by the early English Romantic poet Robert Southey (rhymes, as any fule kno, with “mouthy”). So when I got back home, I went to look for the volume containing said work which I dimly remembered having on my shelves, in the decrepit-but-too-nice-to-throw-away section. And sure enough, there it was. The front board has come loose, but the first three openings look like this:

[Images: front board, half-title, and title page]

Having scanned those first few pages, I naturally asked Mr Google what he knew about the matter. And was thus able rapidly to confirm:

  • My copy of Thalaba is the cheap reprint (two volumes in one) published by Vizetelly and Beeton in 1853. There is a Google-scanned version of the same edition, available from the Internet Archive. They have included with it a couple of pages of advertisements for other works published by Clarke Beeton (p 7 and 8) which are missing in mine however.
  • What seems like another copy of the same edition is currently on sale at Abe Books for the startling sum of $199.76. Mine is in poor condition, which is why it only cost me half a crown back in 1967, when I used to frequent Oxford’s second hand bookshops (there aren’t any to frequent these days).

As you may have noticed above, my copy also contains several signs of its previous owners. As well as the book plate, and the inscription above, there’s a nice message from Aunty Sarah, the donor, opposite the preface:

[Image: front-1 — Aunt Sarah’s inscription opposite the preface]

and there’s also an intriguing note from “JB” dated some twenty years later, opposite the start of the poem proper.

[Image: body-01 — JB’s note opposite the start of the poem]

So… what have we learned? Rosamund was given this book by her aunt, Sarah Brent, in 1860. And in 1882, her husband felt compelled to record his own experience of the Eastern exotic in the same book “We met at Persepolis an Arab maiden of most lovely form and features — she was a dream of beauty never to be forgotten”. What she made of it, one can only conjecture.

But why I love the Internet, is that (pondering these matters after breakfast this morning), it has helped me place these people a little more precisely in time and place. A search for “Rosamund Borrowman” told me that  the 1861 Census shows a person of that name, born 1825 in Kent, married to John Borrowman, born 1830 in Midlothian, residing in Middlesex in 1861. The ancestry.co.uk site where I found this record is pay-walled so no further details available, but that seems reasonably plausible.

And searching for “Rosamund Borrowman John” I was able to find a record of her death. Some industrious volunteers have been surveying the gravestones of a place called Hambledon in Surrey, and there she is:   “Rosamund Vertue the beloved wife of John Borrowman. She died 25th August 1895. Also the above named John Borrowman son of Robert Borrowman born in Edinburgh 3rd April 1830 died at Hambledon 4th July 1906. Also Elizabeth daughter of the above died 22nd October 1932 aged 72 years” It’s all in the spreadsheet.

My next step, obviously, will be to find out where Hambledon is, and whether you can get there by train. Maybe.

Quel avenir pour l’édition génétique sans "digital forensics"?

Ce texte représente une intervention au séminaire général de l’ITEM qui a eu lieu à Paris le 31 janvier 2011. Remerciements à ma collègue Nadine Dardenne qui l’a relu pour en corriger les fautes d’orthographe et de syntaxe répandues dans la version originelle; je revendique cependant toute faute intellectuelle résiduelle.

Je souhaiterais vous proposer une brève présentation d’un champ d’études émergent qui se nomme “digital forensics”. Ce terme reprend un ensemble de techniques et théories propres aux procédures juridiques, mais probablement également d’une importance incontournable pour l’archivage et l’étude des objets nativement numériques, considérés du point de vue patrimonial. Le besoin de mettre en évidence, d’une manière crédible et certaine, les traces de mots enregistrés sur disque dur ou floppy, même supprimées, et d’associer ces traces avec un écrivain, est un enjeu qui afflige l’éditeur critique autant que l’agent de police, ou les services secrets. À chaque fois on a besoin d’une connaissance des affordances des systèmes de stockage numérique, de ce qu’ils rendent possible, et de ce qu’ils cachent. À chaque fois, il est question de balancer des probabilités, de proposer une vérité vraisemblable basée sur des évidences. On pourrait rester aveugle devant ces possibilités, bien sûr. On pourrait dire que l’histoire d’un texte est réduite à l’histoire de ses incarnations multiples, sur ces feuilles de papier que nous aimons si bien. On pourrait renoncer à l’investigation de la manière par laquelle ces incarnations ont été réalisées. Mais dans ce cas il faudrait également renoncer à la majorité du discours artistique actuel, qui est né numérique, vit et évolue dans le numérique, et meurt dans les archives numérisées de M. Google. Car les objets d’étude des humanités et sciences sociales sont de plus en plus conçus et stockés sous forme numérique; il est donc indispensable de revoir et de transformer l’outillage avec lequel on espère les archiver et les analyser. L’ordinateur de l’auteur, ses disques, son téléphone portable, ses espaces virtuels sur le réseau internet, remplacent ses cahiers, ses brouillons, et ses manuscrits. Il faut ré-équiper le chercheur avec une compréhension des principes d’enregistrement numérique, pour compléter sa compréhension des principes de l’écriture analogique. Le choix est simple: ou bien il faut redéfinir la diplomatique pour le numérique, ou bien il faut renoncer à l’étude de la genèse textuelle des oeuvres modernes.

Comment constituer cette redéfinition? Je propose un réajustement à deux niveaux: intellectuel, et substantif.

Au niveau intellectuel d’abord, il faut affecter une bonne compréhension de l’informatique aux disciplines des SHS. En dépit de deux décennies (au moins) de “humanities computing”, à présent rebaptisé “digital humanities”, il reste une étonnante ignorance autour de l’ordinateur et de ses capacités à faire (ou à ne pas faire). En partie, c’est une des conséquences de l’émergence de l’informatique grand public, comme phénomène de marché de masse. Des impératifs commerciaux restreignent l’usage de l’ordinateur à des plateformes spécifiques, et transforment ce moteur universel en un jouet uni-fonctionnel. Ce n’est guère surprenant alors d’entendre les gens affirmer que cette technologie réductive pervertit l’intelligence humaine en la transformant en une disposition de bits. Ou, à l’extrême opposé, d’y voir l’éternel attrait du divin se manifestant cette fois dans la tendance à vouloir attribuer une intelligence consciente aux effets d’échelle (par exemple, le crowd sourcing, les réseaux neuronaux, le data mining…). Peut-être y en a-t-il parmi nous qui ont besoin de recalibrer le cadre de leur esprit pour supporter l’ère de l’information, tout comme nos ancêtres ont dû s’ajuster à l’ère de la vapeur… mais un tel ajustement consisterait en une extension de nos perceptions, en aucun cas en une transformation. Dans la langue française, un ordinateur a pour objectif de mettre de l’ordre dans les choses; le mot “ordinateur” porte même des nuances religieuses en rappelant par exemple l’ordination des prêtres. Dans les langues anglo-saxonnes par contre, un “computer” n’est qu’une machine pour calculer. Mais les objets auxquels l’ordinateur apporte un ordre ne sont pas que les chiffres: il est la machine par excellence pour organiser n’importe quelle espèce de signe, pour le ré-encodage des systèmes sémiotiques de toute sorte. Voilà pourquoi j’ai toujours insisté pour que l’informatique soit considérée comme une branche des sciences humaines, plutôt que de l’ingénierie ou des mathématiques.

Au niveau matériel, je propose un élargissement des connaissances attendues de ceux qui veulent faire des études philologiques. On attend de tels gens une compréhension assez intime des technologies typographiques ou paléographiques. Il est maintenant urgent d’élargir ces compétences au numérique.

Je termine avec quelques mots sur quelques éléments de ce qu’il faut faire apprendre aux futurs généticiens. Quand j’écris un document sur mon ordinateur, le texte apparaît et disparaît sur l’écran, sous le contrôle d’un logiciel avec lequel j’interagis à travers mon clavier. Les traces propres à mon texte sont de deux sortes: lettres, et ce que l’on pourrait nommer “meta-lettres”: c’est-à-dire des codes qui déterminent la façon d’afficher ou de traiter les lettres. (Un autre terme possible serait “markup” ou “balisage”). Ma conscience de ces meta-lettres est variable: quelques-unes (la ponctuation par exemple) me semblent être un composant de ce système sémiotique que l’on appelle la langue naturelle; d’autres (les retours de chariot, les indications de rature, etc.) me semblent moins visibles, et j’attends que la machine s’en occupe seule. De la même façon, les codes insérés par le logiciel de traitement de texte pour générer des effets spéciaux tels que les changements de police ou de couleur appartiennent, de mon point de vue, à un niveau sémiotique tout à fait différent. Cependant, mon texte est composé de signes appartenant à ces trois niveaux. Le texte numérisé que j’ai ainsi composé commence son existence physique comme des changements d’état dans la partie dynamique de la mémoire de mon ordinateur; très rapidement ces changements sont transférés et enregistrés dans un format plus permanent quelque part sur mon disque dur, ou dans une autre mémoire. D’habitude ceci s’effectue automatiquement par l’infrastructure informatique, l’OS: à noter que c’est fait sans aucune intervention de ma part. Même au moment où je décide consciemment d’enregistrer l’état courant de mon texte, bien que je pense savoir où je le mets (dans un fichier nommé, sur un médium spécifique), la manière dont sont organisés à cet emplacement les composants de mon texte — par exemple, les adresses des secteurs concernés, leurs tailles, la disposition des caractères et autres signes dans ces secteurs — est entièrement hors de mon contrôle et de ma connaissance.

Quand j’écris un document sur papier, le texte apparaît, mais ne disparaît que rarement. Je dois utiliser un ensemble assez complexe de “meta-markup” pour indiquer que tel ou tel signe n’existe plus dans mon texte, qu’il a été remplacé par un autre etc. Le système sémiotique auquel appartient ce markup sera entièrement le mien (exception faite des signes de correction imposés par une maison d’édition). Plus significativement, chacun de mes bouts d’écriture a sa propre existence physique, qu’il m’est impossible d’ignorer, surtout si j’ai un petit bureau ou un bureau déjà bien rempli… Par conséquent, il me faut trouver rapidement des stratégies de stockage (ou de recyclage), qui vont déterminer les possibilités de récupérer à l’avenir mes procédures d’écriture. Ces stratégies seront déterminées, bien naturellement, par ce qui me paraît utile, ou ce qui semble approprié dans le contexte institutionnel dans lequel mon écriture prend place. Elles représentent des jugements de valeur considérés justes dans ces contextes, et c’est pour cela qu’on dit que l’histoire est toujours écrite par les gagnants, et que les archives de n’importe quelle société ont tendance à ne contenir que ce qui est valorisé par cette société. Avec l’arrivée des médias numériques, pourtant, les affordances de nos systèmes de stockage se sont transformées d’une manière fondamentale. En dépit des efforts des artistes modernistes, on ne peut lire un bout de papier que d’une seule manière. Mais l’organisation des fragments d’écriture sur un médium numérique de stockage est indépendante de leur écriture; elle peut être lue de plusieurs façons. Les séquences de bits constitutives de ce document peuvent être lues (comme je le suppose assez naïvement) à travers le système de gestion des fichiers sur mon laptop. Mais ce dernier n’est qu’une espèce d’index, comprenant un ensemble de pointeurs sur des segments de stockage éparpillés sur mon disque dur. Ou bien, dans le cas où on récupère mon texte à travers un logiciel plus complexe comme un blog sur le réseau, les traces de mon texte sont hébergées dans une base de données en Californie sur une machine que j’ignore totalement. Mais il demeure possible de récupérer ces mêmes séquences de bits en adressant n’importe quel système de stockage d’une autre manière, tout à fait différente du système d’accès prévu, que ce soit le système de fichiers sur mon laptop ou le blog, qui (je croyais) représenterait la seule structuration correcte de mon texte. Au contraire. Pour le texte numérique, la structuration est contingente, protéenne.

Ces morceaux écrits, comme je l’ai déjà souligné, pouvaient ne contenir que des matériaux raturés, ou des signes qui ne servent qu’à indiquer la manière dont d’autres signes devraient ou pourraient être affichés ou intégrés dans un texte visible. D’où des problèmes pour l’archiviste, et un défi supplémentaire pour la critique textuelle. En acceptant une boîte de papiers comme dépôt, l’archiviste peut raisonnablement supposer que les parties savent exactement ce qu’elles sont en train d’offrir. Mais, quand l’archiviste accepte en dépôt un disque dur, peut-on envisager que les déposants sachent quelles traces d’activités sur l’internet ou quels fichiers supprimés restent encore à découvrir à l’intérieur, au-delà des matériaux proposés et visibles? Un récent rapport américain du Council on Library and Information Resources s’est interrogé sur ce problème, justement perçu comme un vrai défi pour l’éthique professionnelle, qui nécessite une mise à jour des standards de contrats de dépôt. Mais je demande aux critiques textuels ici présents — si vous pouviez accéder à l’histoire de browsing sur internet de, disons, Joyce ou Flaubert, hésiteriez-vous à y aller, par crainte de la violation de la loi sur la vie privée? Peut-être moins chimériquement, si vous pouviez récupérer chaque étape de l’écriture d’une oeuvre de l’importance des Satanic Verses de Rushdie (ce qui sera en effet le cas) — chaque rature, chaque ajout, chaque déplacement de mot — de quels outils auriez-vous besoin pour gérer une telle richesse? Les outils et les méthodes élaborés jusqu’à présent sont tous à la mesure de ce que nous pouvons comprendre: c’est l’abondance de ces informations dans le monde numérique qui nécessite de repenser ces outils et ces méthodes.

Je termine en soulignant encore que le texte numérique serait une construction, pas seulement au sens qu’il est composé de plusieurs séquences fragmentaires de bits, mais aussi au sens que ces séquences reprennent de l’information à plusieurs niveaux. Les mots seuls ne suffisent pas: les documents numériques contiennent inévitablement un balisage, dont une grande partie est (selon le terme du philosophe anglais J. L. Austin, repris notamment par Allen Renear) performative — il détermine la nature du texte. D’où l’importance pour le critique textuel numérique de comprendre le balisage et les technologies qui y sont associées. Mais vous vous attendiez probablement à ce que je vous dise cela…

Does genetic criticism have a future without digital forensics?

This is the text of a presentation I gave at the ITEM’s general symposium on the future of genetic editing, held in Paris on 31 January 2011. I started writing it in French, switched to English for speed, translated it all into French (with the invaluable assistance of my colleague Nadine Dardenne), and then re-Englished it for this version.

I’d like to introduce you to an emerging field called “digital forensics”. This term covers a set of techniques and theories originating in the domain of criminal justice, but also of major importance for the archiving and study of born-digital objects considered from a cultural heritage perspective. The need to plausibly identify traces of words recorded on hard or floppy disk, and to reliably associate them with a specific writer, even after their deletion, is a goal which torments the textual critic as much as the police officer or secret service agent. In both cases, a knowledge of the affordances of digital storage systems is needed, to know what they make possible and what they conceal. In both cases, there is a need to balance probabilities when seeking to establish plausible evidence-based conclusions. Ignoring these possibilities is also an option, of course. We could consider the history of a text to be no more than the history of its various embodiments on those sheets of paper we like so well. We could abandon any attempt to investigate the means by which those embodiments have been achieved. But in that case, we have to give up on the majority of current artistic discourse, which is born digital, lives and evolves digital, and dies in the digital archives of Mr Google. The objects studied in the human and social sciences are increasingly conceived and stored only in digital form; that is why it is essential to rethink and transform the toolkit we use to archive and analyse them. The author’s computer and its disks, their portable telephone, and the virtual spaces they use on the Internet, are taking over from their notebooks, their drafts and their manuscripts. We must re-equip the researcher with an understanding of the principles of digital storage to complement an understanding of analog writing. The choice is simple: either redefine diplomatic studies to include the digital world, or abandon any attempt to study the textual genesis of modern works.

What are the components of this redefinition? I propose a readjustment at two levels: the intellectual, and the substantive.

At the intellectual level first, we need to re-appropriate a proper understanding of information studies within the humanities disciplines. Despite more than two decades of “humanities computing”, now rebranded as “digital humanities”, there is still an astonishing amount of ignorance about what the computer can and cannot do. Partly this is one of the results of the emergence of computing as a mass market phenomenon. Commercial imperatives restrict usage of the infinitely plastic computer to certain platforms, transforming a universal engine into a mono-functional toy. Unsurprisingly, therefore, we still hear people assert that this reductive technology perverts human intelligence, reducing it to transient patterns of bits. Or, at the other extreme, we still see evidence of the eternal desire for the divine, now appearing as a tendency to attribute conscious intelligence to effects of scale (for example crowd sourcing, neural nets, data mining…). Maybe some of us need to adjust our mental framework to deal with the information age, just as our ancestors adjusted theirs to deal with the steam age, but such an adjustment is a matter of expanding our perceptions, not transforming them. In the French language, a computer is something which puts things in order: the word ordinateur even has religious overtones, suggesting “ordination” and consecration. In the English and German languages, it is just a machine that “computes”.
But the things that a computer puts in order are not just numbers: it is a machine above all for organizing any kind of sign, for re-encoding semiotic systems of all kinds. This is why I have always maintained that computer science is more a branch of the humanities than it is of engineering or mathematics.

At the material level, I propose an extension of the knowledge expected from those undertaking philological study. Such people are expected to acquire a detailed understanding of typographic or paleographic technologies. There is an urgent need to expand those skills to embrace the digital medium.

I conclude with a brief discussion of a few components of the understanding that future genetic editors need to acquire. When I write a text on my laptop, the text appears and disappears on the screen under control of some piece of software with which I am interacting via a keyboard. The traces which constitute my text are of two kinds — letters, and what we may call meta-letters: codes which determine how the text should be displayed or processed in some way. (Another word we might use is markup.) I may or may not be aware of all of these — some, the punctuation for example, are almost a part of the semiotic system I call “natural language”, so I am very aware of them; others — the carriage returns, deletion characters, etc. — seem less salient, and I expect the machine to deal with them. In the same way, the codes my word processor inserts to produce special effects such as changes of font or colour seem to belong to some other semiotic level entirely. But signs at all three of these levels are what constitute my text. The digital text I create starts its physical existence as detectable changes of state in the dynamic part of my computer’s memory, but very rapidly is transferred to a more permanent form, somewhere on my hard disk, or on some other store. Usually this will be done automatically by the software environment: critically, this will happen without any knowledge or intervention on my part. Even when I do deliberately request that the state of my text should be stored away in its current form, although I may think I know where I am putting it (in a file with such a name, on a specified physical medium), the way in which the components of my text are organized at that location — the order and number of blocks of characters and other signs represented — is entirely beyond my control or knowledge.

When I write a text on a piece of paper, signs appear, but rarely disappear. I have to deploy quite a complex range of meta-markup to indicate that some sign is no longer significant or has been superseded by another, but the semiotic system to which that meta-markup belongs is entirely my own (unless forced on me by a publisher in the shape of proof reading marks, of course). More significantly, each of my scraps of writing has a physical existence which forces itself on my attention, especially if my desk is small, or my office already crowded. Consequently, I will rapidly adopt recycling or storage strategies, which effectively determine the future re-traceability of my writing processes. Those strategies are naturally determined by what is useful or perceived as appropriate by myself or the institutional context in which my writing takes place. They represent value judgments deemed appropriate within that context, and that is why (as they say) history is written by the victors, and why the archives of every society represent and maintain what that society values.
With the advent of the digital medium, however, the affordances of our storage systems change fundamentally. Despite the best efforts of modernist artists, you can only read a written scrap of paper in one way. But the organization of written fragments on a digital storage medium is independent of its reading, and thus can be read in many ways. The blocks of storage constituting this text may be read, as I naively think they should be, via the file system on my laptop, which contains a number of pointers indicating more or less contiguous segments of storage scattered across my hard disk. They might be recovered via a more complex piece of software such as a networked blog, which stores my text as records on some database system in California. But it is also possible to recover the same written fragments by addressing those storage systems in an entirely different way, by-passing the intermediate access systems (the file system, the blog) which represent the “organization” of my text. In the digital text, organization is contingent and protean.

Those written fragments, as noted above, may actually contain nothing but material that has been deleted, or signs that serve only to indicate how other signs should be, or might be, displayed or integrated into a visible text. The first case poses problems for the archivist, as well as a challenge for the textual critic. When accepting a box of papers for deposit, the archivist can reasonably assume that both parties know exactly what is being handed over. But when the archivist accepts for deposit a hard disk, is it equally likely that the depositor will know what traces of internet activity or deleted files may remain to be recovered from it, in addition to the intended and apparent materials? A recent American report from the Council on Library and Information Resources agonizes considerably over this problem, which it perceives rightly as a challenge to the maintenance of professional ethics, necessitating a reappraisal of such deposit agreements. But I ask the textual critics here present — if you could have access to (say) Joyce’s or Flaubert’s web browsing history, would you hesitate to examine it on the grounds of a breach of confidence? Less fancifully, if you could (as you will soon be able to) recover every stage of the writing of a great work such as Rushdie’s Satanic Verses — every deletion, insertion, and movement of every word — what tools would you need to make sense of that richness? The tools and methods elaborated so far have been shaped by what we already know how to handle; it is the very abundance of information now available to the textual critic that necessitates a rethinking of those tools and methods.

I close by underlining again the fact that the digitized text is a construction, not only in the sense that it is composed of fragmentary byte sequences, but also in the sense that those byte sequences contain information at many levels. The words alone are not enough: digital documents inevitably contain markup, much of which is (in a term Allen Renear borrows from the English philosopher J. L. Austin) performative — it determines what the text is. Hence the importance of a proper understanding of markup, and markup technologies, to the digital textual critic. But you probably expected me to say that.