An experiment in counting the books

A couple of years ago I spent some time trying to determine which of the titles in the wonderful “At the Circulating Library” (ATCL) database were freely available online in digital form. This was for largely pragmatic reasons to do with building the ELTeC English language collection: other blog entries describe the method I used and some preliminary results. It’s not as easy as you might suppose to download reliable catalogue information from most digital libraries, nor is it always readily tractable when you do. After some experimentation, I hit on the idea of creating a magic key, a kind of fingerprint, derived from the title and author name as specified, which could then be matched against keys in the same format derived from ATCL entries.

More recently, it occurred to me that this data might also provide some interesting numbers to contribute to current debates about digitization priorities. Exactly why some titles make it to Project Gutenberg, or the HathiTrust, or the Internet Archive, and others don’t is not a question to which simple, un-nuanced answers are likely or even (maybe) possible, but we should still ask it. Those responsible for the digitization efforts of major libraries are, for some reason, a little coy about the principles on which books are chosen for digitization, or even about whether they actually have explicit selection policies. I assume that there is a difficult tightrope walk between, on the one hand, practical but purely adventitious matters (such as the relative locations of volume and scanner, the size and state of the volume, the time of day, the temperament of the scanner operator, etc.) and, on the other, principled criteria aiming to ensure a balance of, say, titles by female and male authors, high and low brow, date of production, longevity of readership, and so on. It would be surprising if the choices were completely unrelated to characteristics of the population being sampled, or totally failed to reflect the cultural priorities of the scanning operation; the same uncertainties apply, of course, to the collection being sampled for digitization itself.

Anyway, I recently read an interesting article by Allen Riddell and Troy Bassett (“What library digitization leaves out”; preprint available from https://arxiv.org/abs/2009.00513), which reports that in the data they looked at – the comparatively small sample of surviving English novels published in 1836 and 1838 – shorter books and books by male authors are disproportionately more likely to be digitized. I naturally wondered whether this applies equally well across the whole of the 19th century, which is what led me to revisit my efforts of two years ago. But first, here are the results.

There are 19,912 titles in the current ATCL database. Of these, 9152 (46%) have authors identified in the database as male, 9809 (49%) are identified as female, and 951 (4%) are identified as unknown. These relative proportions are rather different if we look at titles with at least one digital surrogate, of which there are in total 9099 (45%). Of these 9099 digitized texts, we find 5221 (57%) are of male authorship, 3718 (41%) of female authorship, and 160 (2%) are unsexed.

Look at that again. Although there are actually more titles available for digitization from female authors than from male, the number that actually gets digitized is significantly smaller (if, like me, you think a gap of 16 percentage points is pretty significant). Hmmm. These counts of course derive from the whole period covered by ATCL, from 1800 to 1900, so I also calculated them for each decade, only to find that the proportions and their imbalance remain fairly consistent across the century. And this despite huge changes in the numbers: for the last decade of the century ATCL lists nearly 6000 titles, a near six-fold increase on (for example) the 1840s. What percentage of those titles were digitized? In both decades, over 51%. And what proportion of those digitized titles were male-authored? In both decades, 62%. There is some variability across the decades, but the basic picture remains the same.

One possible explanation might be that titles with unknown or unsexable authorship (e.g. the ubiquitous “Anonymous”) are more likely to have been female, and that hence we are not seeing all the truly female authors. But even were this the case (after all, why should we not equally well hypothesize that male authors might be bashful or crave secrecy?), the digitization rate for books ostensibly male-authored remains stubbornly higher than the rate for books ostensibly not male-authored (i.e. those classed as either F or U by ATCL). And indeed the same, mutatis mutandis, is true for the ostensibly-female to ostensibly-not-female comparison.

Here’s a table showing the raw counts:

Decade All Male Female Unknown All-dig Male-dig Female-dig Unknown-dig
All 19912 9152 9809 951 9099 5221 3718 160
1830s 482 256 174 52 250 164 85 1
1840s 1037 543 422 72 538 334 202 2
1850s 1483 595 778 110 718 347 358 13
1860s 2341 1019 1093 229 1015 540 456 19
1870s 2866 1189 1514 163 1300 642 633 25
1880s 4126 1693 2287 146 1765 945 782 38
1890s 5979 2995 2863 121 3092 1929 1103 60


And here’s another showing the percentages:

(Key: Ad% = digitized share of all titles; M% = male-authored share of all titles; Md% = male-authored share of digitized titles; F%, Fd%, U%, and Ud% are the corresponding shares for female and unknown authorship.)

Decade Ad% M% Md% F% Fd% U% Ud%
All 45.70% 45.96% 57.38% 49.26% 40.86% 4.78% 1.76%
1830s 51.87% 53.11% 65.60% 36.10% 34.00% 10.79% 0.40%
1840s 51.88% 52.36% 62.08% 40.69% 37.55% 6.94% 0.37%
1850s 48.42% 40.12% 48.33% 52.46% 49.86% 7.42% 1.81%
1860s 43.36% 43.53% 53.20% 46.69% 44.93% 9.78% 1.87%
1870s 45.36% 41.49% 49.38% 52.83% 48.69% 5.69% 1.92%
1880s 42.78% 41.03% 53.54% 55.43% 44.31% 3.54% 2.15%
1890s 51.71% 50.09% 62.39% 47.88% 35.67% 2.02% 1.94%


In an ideal world, you’d expect the percentages for titles with male authors (M%)  and for digitized titles with male authors (Md%)  to be roughly the same, right?  Think on… And feel free to download the csv file behind these tables for your own experimentation.

One should always suspect the data, so I make no excuse for the following detailed blow by blow account of how I got these numbers. Full gruesome details, including the scripts mentioned below, are available from https://github.com/lb42/bookLists

The basic method was to download a complete catalogue of relevant titles available from each target digital library, and then try to match them with records in the ATCL. For Google Books, which seemingly does not provide a complete catalogue online, I tried a different method, discussed further below.

I started by downloading the latest (June 2020) dump of the ATCL database and converting it to a basic TEI XML format. I then did much the same for five digital libraries with good holdings of 19th-century novels: the Hathi Trust, the British Library, the Internet Archive, Project Gutenberg, and Google Books. As a control, and for testing purposes, I also looked at a few smaller collections, notably the Victorian Women Writers Project at Indiana University and the (now defunct) University of Adelaide “ebooks” repository. I wanted to provide something similar to John Mark Ockerbloom’s lovely Online Books Page at https://onlinebooks.library.upenn.edu/, but more precisely tied in to ATCL.

Hathi Trust makes available a monthly dump of their entire collection as a huge tab-delimited file. Working with the most recent dump, dated 1 September 2020, I used a simple-minded Perl script `hathiProcess.prl` to parse this file and select from it only freely-available English-language books published in Great Britain between 1800 and 1920; an XSLT stylesheet `htConv.xsl` then converted the results to the common project format (CPF).
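For the curious, here is a minimal Python sketch of the same filtering step. The column positions and country codes are my assumptions about the hathifiles layout (check the hathifiles documentation before trusting them), and the file names are purely illustrative; the real work was done by `hathiProcess.prl`.

```python
# A minimal sketch, not the project's actual script: filter the tab-delimited
# HathiTrust dump down to freely available English-language books published
# in Great Britain between 1800 and 1920.
import csv

# Assumed column positions in the hathifile (verify against its documentation).
FIELDS = {"access": 1, "rights_date": 16, "pub_place": 17, "lang": 18}
GB_CODES = {"enk", "stk", "wlk"}   # assumed MARC codes for England, Scotland, Wales

def keep(row):
    """True if the record looks like a freely available English-language
    book published in Great Britain between 1800 and 1920."""
    if len(row) <= max(FIELDS.values()):
        return False
    try:
        year = int(row[FIELDS["rights_date"]])
    except ValueError:
        return False
    return (row[FIELDS["access"]] == "allow"                 # freely viewable
            and row[FIELDS["lang"]] == "eng"                 # catalogued as English
            and row[FIELDS["pub_place"]].strip() in GB_CODES # published in GB
            and 1800 <= year <= 1920)

with open("hathi_full_20200901.txt", newline="", encoding="utf-8") as src, \
     open("hathi_selected.tsv", "w", newline="", encoding="utf-8") as dst:
    writer = csv.writer(dst, delimiter="\t", quoting=csv.QUOTE_NONE, escapechar="\\")
    for row in csv.reader(src, delimiter="\t", quoting=csv.QUOTE_NONE):
        if keep(row):
            writer.writerow(row)
```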

The British Library website makes available an Excel spreadsheet providing metadata for the titles from their collection which were digitized some time ago by the Microsoft Books project. I downloaded this, converted it to TEI with `csvtotei`, and converted the result to CPF (selecting just the 19th-century titles) with `blConv.xsl`.

Project Gutenberg makes available several versions of its catalogue data. I worked with the most recently updated one, which is a vast archive of unbelievably verbose RDF files. Despite its complexity, this data doesn’t include any publication data for the source texts concerned (unsurprising really), though it does provide birth and death dates for the authors. To cut down the numbers a little, I rejected titles whose authors were not born during the 19th century, and also those which specified a MARC relator field “edt” (to cut out non-original editions). Once I had remembered how on earth to handle a gazillion tiny files of RDF (I did this back in 2018), I used the `gutConvRDF.xsl` script to process them all to CPF, and concatenated the results into a single file.
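The real conversion was done with `gutConvRDF.xsl`, but the selection logic amounts to something like the following rough Python sketch. The element names (`birthdate`, `edt`) and the directory layout are assumptions about the RDF catalogue files and should be checked against a sample record.

```python
# A rough sketch of the Gutenberg selection rules: drop records carrying an
# 'edt' relator, keep records whose author was born in the 19th century.
# Element names and the glob path are assumptions, not verified against
# the catalogue's schema.
import glob
import xml.etree.ElementTree as ET

def local(tag):
    """Strip the namespace prefix from an element tag."""
    return tag.rsplit('}', 1)[-1]

def wanted(path):
    root = ET.parse(path).getroot()
    if any(local(el.tag) == 'edt' for el in root.iter()):   # assumed editor relator
        return False
    for el in root.iter():
        if local(el.tag) == 'birthdate':                    # assumed element name
            try:
                birth = int(el.text.strip())
            except (TypeError, ValueError):
                continue
            if 1800 <= birth <= 1899:
                return True
    return False

# Path as the catalogue tarball is assumed to unpack; adjust as needed.
keepers = [p for p in glob.glob('cache/epub/*/pg*.rdf') if wanted(p)]
print(len(keepers), 'records kept')
```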

The Internet Archive, so far as I can see, doesn’t have any generally available or downloadable catalogue, though it does have a really good query interface. The method I used for attacking Google Books would presumably work equally well (or equally badly; see below) in this case, but I haven’t tried it. Instead I just used a predefined collection called `19thcennov`, which someone at the University of Illinois at Urbana-Champaign thoughtfully created back in December 2008. This gave me 7828 XML records, which were easily converted to CPF using `iaConv.xsl`.
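For anyone who does want to try the query route, here is an untested sketch of how the same `19thcennov` listing might be pulled from the Archive’s advancedsearch endpoint; the parameter and field names reflect my understanding of that public API rather than anything actually used in this project.

```python
# Untested sketch: page through an Internet Archive collection via the
# public advancedsearch endpoint and print basic metadata for each item.
import requests

def ia_collection(collection, rows=500):
    url = "https://archive.org/advancedsearch.php"
    page = 1
    while True:
        params = {
            "q": f"collection:{collection}",
            "fl[]": ["identifier", "title", "creator", "year"],
            "rows": rows,
            "page": page,
            "output": "json",
        }
        docs = requests.get(url, params=params, timeout=60).json()["response"]["docs"]
        if not docs:
            break
        yield from docs
        page += 1

for doc in ia_collection("19thcennov"):
    print(doc.get("identifier"), doc.get("title"))
```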

The common project format files all consist of TEI <bibl> elements with either an @xml:id attribute or an <idno> specifying the identifying code for this item in the relevant repository, e.g. `ia:foreignersnovel03pric` identifies the Internet Archive’s digitization of volume three of Eleanor Price’s novel “The Foreigners”. Each <bibl> also has an @n attribute supplying the magic key for the title, which is confected as follows:

  • remove the full stop following Mr or Mrs in any title containing one
  • take the substring of the title up to the first occurrence of one of the punctuation marks . , : ; or /
  • concatenate this with the author’s last name
  • convert to lower case and remove all punctuation characters and spaces

So, for example, ATCL lists a work with the title “The Foreigners: A Novel” attributed to the author “Eleanor C. Price”. The same work appears in the Internet Archive list, but with the author “Price, Eleanor C. (Eleanor Catherine)” and the title “The foreigners : a novel”. Despite the differing strings, both will get the same magic key “theforeigners|price”. This method is far from bulletproof, but it’s serviceable.
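In Python, the recipe looks something like this; it is a minimal sketch of the rules above, and the actual scripts in the bookLists repository may differ in detail (for instance in how the author’s last name is extracted).

```python
# Minimal sketch of the 'magic key' recipe described above.
import re

PUNCT_STOP = re.compile(r'[.,:;/]')          # the title is cut at the first of these
STRIP = re.compile(r'[\W_]+', re.UNICODE)    # drops punctuation and spaces

def magic_key(title, last_name):
    # Rule 1: remove the full stop following Mr or Mrs
    title = re.sub(r'\b(Mrs?)\.', r'\1', title)
    # Rule 2: keep only the substring up to the first . , : ; or /
    title = PUNCT_STOP.split(title, 1)[0]
    # Rules 3 and 4: append the author's last name, lower-case, strip punctuation and spaces
    return f"{STRIP.sub('', title.lower())}|{STRIP.sub('', last_name.lower())}"

# Both forms of the example record reduce to the same key:
assert magic_key("The Foreigners: A Novel", "Price") == "theforeigners|price"
assert magic_key("The foreigners : a novel", "Price") == "theforeigners|price"
```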

For Google Books, as noted above, there is no readily downloadable catalogue. But there is an API, which in a moment of madness I thought it might be cool to learn how to use. A day of poking around led me to a neat python script some helpful person had written to look up ISBN numbers (hat tip to AO8’s treasury), which I mercilessly hacked for my own purposes. My version reads a file of URL-encoded search requests like this “inTitle:the+inTitle:foreigners+inAuthor:Price”, fires them at the Google API, and processes the returns into a rudimentary bibl or a comment lamenting the absence or unavailability of the item in question. The file of search requests is rather long (one for each title in ATCL for which I have not yet found any digital version – a total of 11,203), so I make the program sleep for a while after firing off about 40 consecutive requests, to help the Google server catch up. Despite this considerate behaviour on my part, it did not take Mr Google long to decide that my program (or my IP address) was a threat, and to start returning unco-operative HTTP messages like 503 (“Service Unavailable”) and 429 (“Too many requests”). The API Help pages confirm that Google considers “using an app, program or script to perform a large number of searches in a short time” prima facie justification for temporarily blocking the IP address in question, though it’s not clear what exactly is meant by “large” (more than 100?) and “short” (less than a minute?) in that phrase. Furthermore, when I search using my specially-minted API key, there seems to be a hard limit of 1000 queries per day in any case: so this job is not going to be finished very quickly. Still, I do now have an extra 1517 records to show for two days’ work.
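For illustration, here is a stripped-down Python sketch of that lookup loop. The endpoint is the public Books API volumes search; the API key placeholder, input file name, and sleep intervals are illustrative assumptions, not the values used in my actual script.

```python
# Stripped-down sketch of the Google Books lookup loop: fire queries,
# pause periodically, and stop politely when rate-limited.
import time
import requests

API = "https://www.googleapis.com/books/v1/volumes"
API_KEY = "YOUR-API-KEY"        # hypothetical placeholder
BATCH, PAUSE = 40, 60           # sleep for a minute after every 40 queries

def lookup(query):
    """Run one search, e.g. 'intitle:the intitle:foreigners inauthor:Price'."""
    r = requests.get(API, params={"q": query, "key": API_KEY}, timeout=30)
    if r.status_code in (429, 503):        # rate-limited or unavailable
        return None
    r.raise_for_status()
    return r.json().get("items", [])

with open("atcl-missing-queries.txt") as f:        # one query per line (hypothetical file)
    for n, query in enumerate(line.strip() for line in f):
        items = lookup(query)
        if items is None:
            print("blocked; giving up for now")
            break
        if items:
            print(query, "->", items[0]["volumeInfo"].get("title"))
        if n and n % BATCH == 0:
            time.sleep(PAUSE)
```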

Once I’ve created all these lists, I run the merger.xsl script to add <ref> elements to the ATCL-TEI file I created in the first step. This makes for some redundancy, for two reasons: firstly, for most of the archives a three volume novel is likely to get a separate entry for each volume; secondly, for many titles, there exist multiple digitizations – which may (or may not) derive from the same source. The following table shows for each archive the number of records selected for processing, the number of references to ATCL titles found, and the number of titles affected. Note that I haven’t yet done any de-duplication to remove overlaps.
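The real merging is done in XSLT by `merger.xsl`, but the underlying matching logic amounts to something like this Python sketch, assuming each CPF list has been reduced to (magic key, repository identifier) pairs and the ATCL dump to (magic key, ATCL identifier) pairs; the ATCL identifier shown is hypothetical.

```python
# Sketch of the matching logic behind the merge step: group repository
# identifiers by magic key, then attach them to matching ATCL records.
from collections import defaultdict

def merge(atcl_records, cpf_records):
    """Map each ATCL id to the repository identifiers sharing its magic key;
    one title may attract several digitizations."""
    by_key = defaultdict(list)
    for key, repo_id in cpf_records:
        by_key[key].append(repo_id)
    return {atcl_id: by_key[key] for key, atcl_id in atcl_records if key in by_key}

# Toy example (the ATCL identifier is invented for illustration)
atcl = [("theforeigners|price", "atcl:12345")]
cpf = [("theforeigners|price", "ia:foreignersnovel03pric"),
       ("theforeigners|price", "ia:foreignersnovel01pric")]
print(merge(atcl, cpf))
```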

Archive             Records processed   ATCL references   ATCL titles matched
British Library     62015               9920              5104
Hathi Trust         460070              18891             5655
Internet Archive    7829                4691              1655
Project Gutenberg   38338               2880              2275
Google Books        ?                   1517              1517

I haven’t made available the CPF files for each archive, nor the final merged TEI version of the ATCL dump, since this is not really my data to share. But I have made available a file called atcl-links.csv, which is a spreadsheet with a row for each ATCL title digitized in one or more publicly available digital collections, mapping its ATCL identifier to its identifier in each repo. I’ll update these as and when the data improves.

(Another reason) Why I love the Internet

So, at the recent TEI Conference in Vienna, Elisa and I were indulging in a little mutual admiration of our knowledge of an obscure work entitled Thalaba the Destroyer by the early English Romantic poet Robert Southey (rhymes, as any fule kno, with « mouthy »). So when I got back home, I went to look for the volume containing said work which I dimly remembered having on my shelves, in the decrepit-but-too-nice-to-throw-away section. And sure enough, there it was. The front board has come loose, but the first three openings look like this:

[Images: front board, half-title, and title page]

Having scanned those first few pages, I naturally asked Mr Google what he knew about the matter. And was thus able rapidly to confirm:

  • My copy of Thalaba is the cheap reprint (two volumes in one) published
    by Vizetelly and Beeton in 1853. There is a Google-scanned version of the same edition, available from the Internet Archive. They have included with it a couple of pages of advertisements for other works published by Clarke Beeton (pp. 7 and 8), which are missing from mine, however.
  • What seems like another copy of the same edition is currently on sale at Abe Books for the startling sum of $199.76. Mine is in poor condition, which is why it only cost me half a crown back in 1967, when I used to frequent Oxford’s second-hand bookshops (there aren’t any to frequent these days).

As you may have noticed above, my copy also contains several signs of its previous owners. As well as the bookplate and the inscription above, there’s a nice message from Aunty Sarah, the donor, opposite the preface:

[Image: Aunt Sarah’s inscription opposite the preface]

And there’s also an intriguing note from « JB », dated some twenty years later, opposite the start of the poem proper:

[Image: note from « JB » opposite the opening of the poem]

So… what have we learned? Rosamund was given this book by her aunt, Sarah Brent, in 1860. And in 1882, her husband felt compelled to record his own experience of the Eastern exotic in the same book « We met at Persepolis an Arab maiden of most lovely form and features — she was a dream of beauty never to be forgotten ». What she made of it, one can only conjecture.

But why I love the Internet is that (pondering these matters after breakfast this morning) it has helped me place these people a little more precisely in time and place. A search for « Rosamund Borrowman » told me that the 1861 Census shows a person of that name, born 1825 in Kent, married to John Borrowman, born 1830 in Midlothian, and residing in Middlesex in 1861. The ancestry.co.uk site where I found this record is pay-walled, so no further details are available, but that seems reasonably plausible.

And searching for « Rosamund Borrowman John » I was able to find a record of her death. Some industrious volunteers have been surveying the gravestones of a place called Hambledon in Surrey, and there she is:   « Rosamund Vertue the beloved wife of John Borrowman. She died 25th August 1895. Also the above named John Borrowman son of Robert Borrowman born in Edinburgh 3rd April 1830 died at Hambledon 4th July 1906. Also Elizabeth daughter of the above died 22nd October 1932 aged 72 years » It’s all in the spreadsheet.

My next step, obviously, will be to find out where Hambledon is, and whether you can get there by train. Maybe.

What future for the genetic edition without "digital forensics"?

This is the text of a talk given at the ITEM general seminar held in Paris on 31 January 2011. Thanks to my colleague Nadine Dardenne, who read it over and corrected the spelling and syntax errors scattered through the original version; any residual intellectual errors remain my own responsibility.

I would like to offer you a brief introduction to an emerging field of study known as « digital forensics ». The term covers a set of techniques and theories developed for legal proceedings, but probably also of unavoidable importance for the archiving and study of born-digital objects considered from a heritage point of view. The need to bring to light, in a credible and certain way, the traces of words recorded on a hard disk or floppy, even when deleted, and to associate those traces with a writer, is a problem which afflicts the critical editor as much as the police officer or the secret services. In each case, what is needed is a knowledge of the affordances of digital storage systems: of what they make possible, and of what they hide. In each case, it is a matter of weighing probabilities, of proposing a plausible truth based on evidence. We could, of course, remain blind to these possibilities. We could say that the history of a text reduces to the history of its multiple incarnations on those sheets of paper we love so well. We could give up investigating the way in which those incarnations were produced. But in that case we would also have to give up most of current artistic discourse, which is born digital, lives and evolves in the digital, and dies in the digitized archives of Mr Google. For the objects studied by the humanities and social sciences are increasingly conceived and stored in digital form; it is therefore essential to revise and transform the toolkit with which we hope to archive and analyse them. The author's computer, their disks, their mobile phone, and their virtual spaces on the internet are replacing their notebooks, their drafts, and their manuscripts. The researcher must be re-equipped with an understanding of the principles of digital recording, to complement their understanding of the principles of analogue writing. The choice is simple: either we redefine diplomatics for the digital, or we give up studying the textual genesis of modern works.

How should this redefinition be put together? I propose a readjustment at two levels: the intellectual and the substantive.

At the intellectual level, first, a sound understanding of computing needs to be brought into the humanities and social science disciplines. Despite two decades (at least) of « humanities computing », now relabelled « digital humanities », an astonishing ignorance persists about what the computer can and cannot do. In part, this is a consequence of the emergence of computing for the general public as a mass-market phenomenon. Commercial imperatives restrict the use of the computer to specific platforms, and transform this universal engine into a single-function toy. It is hardly surprising, then, to hear people assert that this reductive technology perverts human intelligence by turning it into an arrangement of bits. Or, at the opposite extreme, to see in it the eternal attraction of the divine, manifesting itself this time in the tendency to attribute conscious intelligence to effects of scale (for example crowd sourcing, neural networks, data mining…). Perhaps some among us need to recalibrate their frame of mind to cope with the information age, just as our ancestors had to adjust to the age of steam… but such an adjustment would consist in an extension of our perceptions, in no way a transformation of them. In French, the purpose of an ordinateur is to put things in order; the word « ordinateur » even carries religious overtones, recalling for example the ordination of priests. In the Anglo-Saxon languages, by contrast, a « computer » is merely a machine for calculating. But the objects to which the computer brings order are not only numbers: it is the machine par excellence for organizing any kind of sign, for re-encoding semiotic systems of every sort. That is why I have always insisted that computing should be considered a branch of the humanities rather than of engineering or mathematics.

At the material level, I propose a broadening of the knowledge expected of those who wish to pursue philological studies. We expect such people to have a fairly intimate understanding of typographic or palaeographic technologies. Those competences must now, urgently, be extended to the digital.

I shall end with a few words on some of the things future genetic critics will need to learn. When I write a document on my computer, the text appears and disappears on the screen, under the control of a piece of software with which I interact through my keyboard. The traces proper to my text are of two kinds: letters, and what we might call « meta-letters », that is, codes which determine how the letters are to be displayed or processed. (Another possible term would be « markup ».) My awareness of these meta-letters varies: some (punctuation, for example) seem to me to belong to the semiotic system we call natural language; others (carriage returns, deletion marks, and so on) seem less visible, and I expect the machine to take care of them on its own. In the same way, the codes inserted by my word processor to produce special effects such as changes of font or colour belong, from my point of view, to an entirely different semiotic level. And yet my text is made up of signs belonging to all three of these levels. The digitized text I have thus composed begins its physical existence as changes of state in the dynamic part of my computer's memory; very quickly those changes are transferred and recorded in a more permanent form somewhere on my hard disk, or in some other store. Usually this is done automatically by the computing infrastructure, the OS: note that it happens without any intervention on my part. Even when I consciously decide to save the current state of my text, and although I think I know where I am putting it (in a named file, on a specific medium), the way in which the components of my text are organized at that location — for example, the addresses of the sectors concerned, their sizes, the arrangement of the characters and other signs within those sectors — is entirely beyond my control and my knowledge.

When I write a document on paper, the text appears, but only rarely disappears. I have to use a fairly complex set of « meta-markup » to indicate that a given sign no longer exists in my text, that it has been replaced by another, and so on. The semiotic system to which this markup belongs is entirely my own (apart from the correction marks imposed by a publishing house). More significantly, each of my scraps of writing has its own physical existence, which it is impossible for me to ignore, especially if my office is small or already well filled… Consequently, I must quickly find storage (or recycling) strategies, and these will determine how far my writing processes can be recovered in the future. Those strategies are determined, naturally enough, by what seems useful to me, or appropriate in the institutional context within which my writing takes place. They embody the value judgements considered right in those contexts, which is why we say that history is always written by the winners, and why the archives of any society tend to contain only what that society values. With the arrival of digital media, however, the affordances of our storage systems have been transformed in a fundamental way. Despite the efforts of modernist artists, a scrap of paper can be read in only one way. But the organization of fragments of writing on a digital storage medium is independent of their writing; it can be read in many ways. The sequences of bits which constitute this document can be read (as I rather naïvely suppose) through the file management system on my laptop. But that system is only a kind of index, a set of pointers to segments of storage scattered across my hard disk. Or, where my text is retrieved through a more complex piece of software such as a blog on the network, its traces are housed in a database in California on a machine of which I know nothing. Yet it remains possible to recover those same sequences of bits by addressing any of these storage systems in another way, quite differently from the intended means of access, whether that be the file system on my laptop or the blog, which (I had believed) represented the only correct structuring of my text. On the contrary: for the digital text, structure is contingent, protean.

Those written fragments, as I have already pointed out, may contain nothing but deleted material, or signs which serve only to indicate how other signs should or might be displayed or integrated into a visible text. Hence problems for the archivist, and an additional challenge for textual criticism. In accepting a box of papers as a deposit, the archivist can reasonably assume that both parties know exactly what is being handed over. But when the archivist accepts a hard disk as a deposit, can we assume that the depositors know what traces of internet activity or what deleted files remain to be discovered inside it, beyond the materials offered and visible? A recent American report from the Council on Library and Information Resources wrestles with this problem, rightly perceived as a real challenge to professional ethics, which requires deposit agreements to be brought up to date. But I ask the textual critics present here: if you could access the internet browsing history of, say, Joyce or Flaubert, would you hesitate to look, for fear of breaching privacy law? Perhaps less fancifully, if you could recover every stage in the writing of a work as important as Rushdie's Satanic Verses (which will indeed soon be possible), every deletion, every addition, every moved word, what tools would you need to manage such riches? The tools and methods developed so far all operate within the measure of what we can comprehend: it is the very abundance of this information in the digital world which forces us to rethink those tools and methods.

I end by stressing once more that the digital text is a construction, not only in the sense that it is composed of several fragmentary sequences of bits, but also in the sense that those sequences carry information at several levels. The words alone are not enough: digital documents inevitably contain markup, much of which is (in the term of the English philosopher J. L. Austin, taken up notably by Allen Renear) performative — it determines the nature of the text. Hence the importance, for the digital textual critic, of understanding markup and the technologies associated with it. But you probably expected me to tell you that…

Does genetic criticism have a future without digital forensics?

This is the text of a presentation I gave at the ITEM’s general symposium on the future of genetic editing, held in Paris on 31 January 2011. I started writing it in French, switched to English for speed, translated it all into French (with the invaluable assistance of my colleague Nadine Dardenne), and then re-Englished it for this version.

I’d like to introduce you to an emerging field called « digital forensics ». This term covers a set of techniques and theories originating in the domain of criminal justice, but also of major importance for the archiving and study of born-digital objects considered from a cultural heritage perspective. The need to plausibly identify traces of words recorded on hard or floppy disk, and to reliably associate them with a specific writer, even after their deletion, is a goal which torments the textual critic as much as the police officer or secret service agent. In both cases, a knowledge of the affordances of digital storage systems is needed: of what they make possible and what they conceal. In both cases, there is a need to balance probabilities when seeking to establish plausible, evidence-based conclusions. Ignoring these possibilities is also an option, of course. We could consider the history of a text to be no more than the history of its various embodiments on those sheets of paper we like so well. We could abandon any attempt to investigate the means by which those embodiments have been achieved. But in that case, we have to give up on the majority of current artistic discourse, which is born digital, lives and evolves in digital form, and dies in the digital archives of Mr Google. The objects studied in the human and social sciences are increasingly conceived and stored only in digital form; that is why it is essential to rethink and transform the toolkit we use to archive and analyse them. The author’s computer and its disks, their portable telephone, and the virtual spaces they use on the Internet are taking over from their notebooks, their drafts and their manuscripts. We must re-equip the researcher with an understanding of the principles of digital storage, to complement their understanding of analog writing. The choice is simple: either redefine diplomatic studies to include the digital world, or abandon any attempt to study the textual genesis of modern works.

What are the components of this redefinition? I propose a readjustment at two levels: the intellectual, and the substantive.

At the intellectual level first, we need to re-appropriate a proper understanding of information studies within the humanities disciplines. Despite more than two decades of « humanities computing », now rebranded as « digital humanities », there is still an astonishing amount of ignorance about what the computer can and cannot do. Partly this is one of the results of the emergence of computing as a mass-market phenomenon. Commercial imperatives restrict usage of the infinitely plastic computer to certain platforms, transforming a universal engine into a mono-functional toy. Unsurprisingly, therefore, we still hear people assert that this reductive technology perverts human intelligence by reducing it to transient patterns of bits. Or, at the other extreme, we still see evidence of the eternal desire for the divine, now appearing as a tendency to attribute conscious intelligence to effects of scale (for example crowd sourcing, neural nets, data mining…). Maybe some of us need to adjust our mental framework to deal with the information age, just as our ancestors adjusted theirs to deal with the steam age; but such an adjustment is a matter of expanding our perceptions, not transforming them. In the French language, a computer is something which puts things in order: the word ordinateur even has religious overtones, suggesting « ordination » and consecration. In the English and German languages, it is just a machine that « computes ». But the things that a computer puts in order are not just numbers: it is a machine above all for organizing any kind of sign, for re-encoding semiotic systems of all kinds. This is why I have always maintained that computer science is more a branch of the humanities than it is of engineering or mathematics. At the material level, I propose an extension of the knowledge expected of those undertaking philological study. Such people are expected to acquire a detailed understanding of typographic or paleographic technologies. There is an urgent need to expand those skills to embrace the digital medium.

I conclude with a brief discussion of a few components of the understanding that future genetic editors will need to acquire. When I write a text on my laptop, the text appears and disappears on the screen under the control of some piece of software with which I am interacting via a keyboard. The traces which constitute my text are of two kinds — letters, and what we may call meta-letters: codes which determine how the text should be displayed or processed in some way. (Another word we might use is markup.) I may or may not be aware of all of these — some, the punctuation for example, are almost part of the semiotic system I call « natural language », so I am very aware of them; others — the carriage returns, deletion characters, etc. — seem less salient, and I expect the machine to deal with them. In the same way, the codes my word processor inserts to produce special effects such as changes of font or colour seem to belong to some other semiotic level entirely. But signs at all three of these levels are what constitute my text. The digital text I create starts its physical existence as detectable changes of state in the dynamic part of my computer’s memory, but very rapidly it is transferred to a more permanent form, somewhere on my hard disk, or on some other store. Usually this will be done automatically by the software environment: critically, it will happen without any knowledge or intervention on my part. Even when I do deliberately request that the current state of my text be stored away, although I may think I know where I am putting it (in a file with such-and-such a name, on a specified physical medium), the way in which the components of my text are organized at that location — the order and number of blocks of characters and other signs represented — is entirely beyond my control or knowledge.

When I write a text on a piece of paper, signs appear, but rarely disappear. I have to deploy quite a complex range of meta-markup to indicate that some sign is no longer significant or has been superseded by another, but the semiotic system to which that meta-markup belongs is entirely my own (unless forced on me by a publisher in the shape of proof-reading marks, of course). More significantly, each of my scraps of writing has a physical existence which forces itself on my attention, especially if my desk is small, or my office already crowded. Consequently, I will rapidly adopt recycling or storage strategies, which effectively determine the future re-traceability of my writing processes. Those strategies are naturally determined by what is useful or perceived as appropriate by myself or the institutional context in which my writing takes place. They represent value judgments deemed appropriate within that context, and that is why (as they say) history is written by the victors, and why the archives of every society represent and maintain what that society values. With the advent of the digital medium, however, the affordances of our storage systems change fundamentally. Despite the best efforts of modernist artists, you can only read a written scrap of paper in one way. But the organization of written fragments on a digital storage medium is independent of its reading, and thus can be read in many ways. The blocks of storage constituting this text may be read, as I naively think they should be, via the file system on my laptop, which contains a number of pointers indicating more or less contiguous segments of storage scattered across my hard disk. They might be recovered via a more complex piece of software such as a networked blog, which stores my text as records on some database system in California. But it is also possible to recover the same written fragments by addressing those storage systems in an entirely different way, by-passing the intermediate access systems (the file system, the blog) which represent the « organization » of my text. In the digital text, organization is contingent and protean.

Those written fragments, as noted above, may actually contain nothing but material that has been deleted, or signs that serve only to indicate how other signs should be, or might be, displayed or integrated into a visible text. The first case poses problems for the archivist, as well as a challenge for the textual critic. When accepting a box of papers for deposit, the archivist can reasonably assume that both parties know exactly what is being handed over. But when the archivist accepts a hard disk for deposit, is it equally likely that the depositor will know what traces of internet activity or deleted files may remain to be recovered from it, in addition to the intended and apparent materials? A recent American report from the Council on Library and Information Resources agonizes considerably over this problem, which it rightly perceives as a challenge to the maintenance of professional ethics, necessitating a reappraisal of such deposit agreements. But I ask the textual critics here present: if you could have access to (say) Joyce’s or Flaubert’s web browsing history, would you hesitate to examine it on the grounds of a breach of confidence? Less fancifully, if you could (as you will soon be able to) recover every stage of the writing of a great work such as Rushdie’s Satanic Verses — every deletion, insertion, and movement of every word — what tools would you need to make sense of that richness? The tools and methods so far elaborated have been developed within the measure of what we know how to handle; it is the very abundance of information now available to the textual critic that necessitates a rethinking of those tools and methods.

I close by underlining again the fact that the digitized text is a construction, not only in the sense that it is composed of fragmentary byte sequences, but also in the sense that those byte sequences contain information at many levels. The words alone are not enough: digital documents inevitably contain markup, much of which is (in a term Allen Renear borrows from the English philosopher J. L. Austin) performative — it determines what the text is. Hence the importance of a proper understanding of markup, and of markup technologies, to the digital textual critic. But you probably expected me to say that.