
Seven steps to Ossian

A TEI transcription of the 1773 edition of James Macpherson’s “translations” of the works of “Ossian”

Why would anyone want such a thing? I can’t imagine, but here’s how I made this one. It turned out to be a seven step process — so far. You can check out each stage from this github repo, if you’re really curious…

1. Decide which PDF to work from

You might think that one library’s digitized copy of “the 1773 edition of Ossian” would be much the same as another’s. But no. There are variations in the physical state of the originals, and the PDF format in which the digitization is made available may also vary. I downloaded three different digitized versions from the Internet Archive, but mainly I used the PDF version of the copy preserved at the National Library of Scotland. https://ia802302.us.archive.org/33/items/poemsofossiantra11macp/poemsofossiantra11macp.pdf I say “mainly” because that particular PDF file had a curious glitch in it which made some of the half-titles disappear when extracted as separate image files. I supplied the missing text from the PDF version of the New York Public Library’s copy.

2. Extract images from PDF

$ pdfimages [filename.pdf] [outputPrefix]

I am too lazy to install anything clever, so I use tried and tested ancient command line Unix tools, like pdfimages. Applying this to my chosen PDF file, I find that each page in the NLS PDF produces three files: two in PPM format which appear to be masks, and one in grayscale representing the page, in negative form. I extract the page image and save it in my img folder, ready for the next stage.
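
For the record, the whole of this stage boils down to something like the following. This is a sketch rather than the exact commands: which of the three per-page images is the real scan, and the extensions pdfimages gives them, will vary from one PDF to another, and the inversion step assumes ImageMagick is installed.

mkdir -p img
# one set of images per page: pdfimages names them img/page-000.xxx etc.
pdfimages poemsofossiantra11macp.pdf img/page
# discard what appear to be the masks, keep the grayscale page scans
rm -f img/page-*.ppm
# the NLS scans come out as negatives, so invert them
for f in img/page-*.pgm; do
  convert "$f" -negate "${f%.pgm}.png" && rm "$f"
done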

3. Do OCR using medieval rules

$ tesseract [inputfile] [outputfile] -l enm

As noted above, I have a preference for old-fashioned command-line Unix tools, and tesseract, once instructed to use an appropriate language model (enm, rather than eng), actually does a pretty good job of recognizing 18th century typography. It consistently fails on ligatured “ct” and a few other oddities, but is much better than I expected at distinguishing long-s from f. Most of its errors seem to be due to poor image quality. At the end of the process, I have about eight hundred text files, each corresponding with one page of the source, and most of them containing plausible text, which I save in the txt folder.
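
Scripted, that is just a loop over the page images; again a sketch rather than what I actually typed, assuming the images from the previous stage ended up as PNGs in img/:

mkdir -p txt
for f in img/page-*.png; do
  base=$(basename "$f" .png)
  # tesseract appends .txt to the output name itself
  tesseract "$f" "txt/$base" -l enm
done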

4. Hand-check, page by page, introducing minimal non-xml markup

I then (and this is where the time goes) proofread each one, introducing some absolutely minimal markup, of my own invention. The cheatsheet reads as follows:

  • introduce a — line at start and end of text on page
  • introduce a == line at start and end of note-text on page
  • introduce a blank line between paras but otherwise retain linebreaks
  • introduce an extra hyphen following end-of-line hyphens which are to be retained
  • replace * or +1 sigla for notes and note references with @ and a sequence number
  • use entity references for long dash, accented letters etc
  • use “ for open double quotes
  • retain forme work on a single line
  • delimit smallcaps with { … }
  • delimit italicized phrases with {{ … }}
  • use % to mark the start of a dramatic style speech
  • add \ at end of verse lines
  • add $ at end of speech
  • add &end; at end of an argument or other chunk

I made the corrections using, need you ask, emacs aided and abetted with some perl one-liners for bulk corrections. Reading Ossian is an odd way to pass time during lockdown, but no worse than some of the other sanity-preserving expedients one reads about on Twitter. A good soundtrack seems to be almost any Sibelius symphony.

5. Transform and (slightly) reorganize the textfiles into proper XML, one per text

$ perl streamer.prl v1files.txt

This is not the most elegant or indeed sanitary code I have ever written; it also took quite a few iterations to get it working acceptably, which I defined as generating well-formed XML. It reads in a list of filenames, interspersed with flags to tell it when to start a new output file, and what its initial page number should be. Then it processes in succession each page of transcribed text, building up one string containing all the text chunks for a work, and another containing all the footnotes. Footnotes often span pages, of course. The resulting strings are then output as two separate XML <div> elements. Their contents also acquire some minimal XML tagging (<pb/>, <hi>, <p>, <sp> etc.) before they get flushed out. I gave up trying to overcome some inelegant results of a not particularly elegant process. The code is in the Scripts folder of this repo for the morbidly curious; the results are in the xml folder: at least they are well-formed XML.

6. Run XSLT scripts to convert this stuff to kosher TEI documents and validate same.

Since this version is going to be my contribution to the “Ossian Online” project, it should probably follow that project’s usage and TEI practices. Alas, they do not have an ODD to tell me what that should be, and their files are apparently validated against TEI-All. But they do have a reasonable amount of documentation, and enough files already available online for me to be able to construct an ODD automagically (take a bow, oddbyexample.xsl, a well-kept secret inside the TEI P5 Utilities repository) and thus a schema I can use to validate my TEI files when I have finished licking them into shape.
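
For anyone wanting to replicate the trick, the pipeline is roughly as follows. This is a sketch only: the output file names are invented, and the corpus parameter of oddbyexample.xsl and the argument order of the teitorelaxng wrapper are quoted from memory, so check the stylesheet header and the TEI Stylesheets documentation before relying on them.

# generate an ODD from a directory of existing Ossian Online TEI files
saxon -s:driver.tei -xsl:P5/Utilities/oddbyexample.xsl \
      corpus=$(pwd)/ossian-online > fromExamples.odd
# turn the ODD into a RELAX NG schema (wrapper script from the TEI Stylesheets)
teitorelaxng fromExamples.odd fromExamples.rng
# and validate my own files against it
jing fromExamples.rng xml/*.xml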

As ever, the fun part of the project is seeing how much of the remaining data-mungeing can be scripted in XSLT. Quite a lot, it transpires, though it remains necessary to hand-craft the details of titlepages, tables of contents etc. Another complete sweep through the text checking for miscellaneous things like the following is also needed:

  • words broken by a pagebreak but not properly reassembled (happens occasionally)
  • quotations not marked as such
  • verse lines not marked as such
  • code-switching
  • residual OCR errors (there are always residual OCR errors)

Before launching into that campaign, I checked the <pb/> elements introduced at stage 5 against the page numbering of the original as preserved in the paratextual comments of my transcription. Somewhat to my surprise, the page-numbering corresponded exactly with the number of such elements, enabling me to construct both a reliable reference system and reliable page image links as values for the facs attribute on each <pb/>.
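
The check itself needs nothing cleverer than counting; something along these lines, give or take the exact pattern:

# crude sanity check: how many <pb/> elements did stage 5 produce per file?
for f in xml/*.xml; do
  printf '%s\t%s\n' "$f" "$(grep -c '<pb' "$f")"
done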

7. Decide on the macrostructure

The Ossian Online project uses <div> elements for every subdivision of the 1773 edition, at whatever level, all the way down from its two volumes to the arguments of individual poems. It takes the perfectly reasonable view that every text can be organized as an ordered hierarchy of uniformly nested objects. As a consequence, the @type attribute for <div> has to do quite a lot of heavy lifting. oddbyexample.xsl enumerates its values as follows:

  • advertisement
  • argument
  • book
  • contents
  • dedication
  • dissertation
  • duan
  • fragment
  • maintext
  • poem
  • preface

This list combines types that have a structural function (fragment, duan, book) with others that are purely descriptive (advertisement, argument, poem). Nothing wrong with that, but I still find this “divs all the way down” approach somewhat problematic, and that for two reasons. Firstly, a <div> is supposedly something incomplete, which is true of (for example) the argument prefixed to each poem or book, but not of the book or poem itself. Secondly, the relation between the argument and the poem requires that the two be siblings within some larger entity, but the poem is not really an incomplete part of that entity in quite the same way as the argument. Furthermore, in the 1773 edition, we have some texts which are undivided (Carricthura for example) along with others which are divided variously into “duan”s (Cathlona) or “book”s (Fingal). Should each book of Fingal be treated as a single text? Should the whole of Fingal be treated as a single text? Can’t we have both?

Values such as maintext and poem in the list above ring alarm bells indicating that these ontological issues are being evaded. Since the TEI in its wisdom already provides a mechanism for coping with exactly this (not at all unusual) kind of macrostructure, why not use it? I refer of course to the element <group>.

My version of the 1773 edition prefers to treat each distinct work as a <text>, rather than a <div type='maintext'>. Within it, there is a <front> containing a titlepage or half title and the argument, followed by a <body>, if the work is not further subdivided, or by a <group> if it is. A <group> combines a number of lower-level <text> elements, each again with a <front> and a <body>. I also treat each of the two volumes of the 1773 edition as a <group>. The file driver.tei embeds each file in the structure using XInclude; it is commented to explain what’s going on (a bit).

It remains to be seen what my colleagues in Galway make of this radical re-organisation, to say nothing of my perverse desire to retain the long-s form. But at least changing to a format which matches more exactly that of the (excellent) work already done on the Ossian Online site will be a Simple Matter Of Programming.

How to make a sow’s ear into a silk purse/Comment faire d’une buse un épervier

As the proverb says, you can’t turn a buzzard into a sparrow-hawk. But here’s how I went about producing a TEI-conformant minimally encoded edition of the 1611 King James bible, starting from an all-singing, all-dancing, vastly over-complicated web site to the existence of which Martin Mueller had alerted me last Friday in a plaintive posting to TEI-L.

The problem with web sites like this is that they expose only various HTML views of their underlying data, usually heavily infested with javascript, often deploying all sorts of nonstandard gizmos to make the pages look just so. I’ve no quarrel with their doing that; I just wish they’d make it a bit easier to get at the real data, i.e. the transcribed text and the pointers to the images that go with them. This site is a case in point: after spending some time looking at some of the gazillion HTML files a simple-minded wget -r starts hoovering up, I hypothesize that the data is actually stashed away somewhere inaccessible and all that you see on the site is a bunch of variously configured static web pages which have been created from it. The good thing about that is that the static web pages were created by an automaton, and the process can therefore be reversed by another automaton. The bad thing is that (in this case at least) clearly different automata have been used to generate vaguely different versions of the same page. For example, the following three URLs all show subtly different versions of the same first page of the 1611 bible: https://www.kingjamesbibleonline.org/1611_Genesis-Chapter-1/  https://www.kingjamesbibleonline.org/Genesis-Chapter-1_Original-1611-KJV/ and https://www.kingjamesbibleonline.org/Genesis_1_1611/

These are not simple redirects: each URL delivers a file in a different format. Well, I could have asked whoever runs this site what was going on, but that would have spoiled the fun, so I chose the format I liked best (the last of the three above) and wrote a script to generate requests for just those pages. I had to assume that the URLs would follow a consistent naming format, an assumption encouraged by a table of the names of the books of the bible which I found in one of the chunks of embedded javascript and moved into my little perl script. Running this script produced a bash script which grabbed each page using curl and saved it as an HTML file on my machine.
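
The grabbing stage amounted to not much more than this. A reconstruction, not the script itself: books.txt here stands for the list of book names and chapter counts lifted from the site’s javascript, and the URL pattern is the third of the three quoted above.

mkdir -p webScraped
# books.txt pairs each book name with its chapter count, e.g. "Genesis 50"
while read book chapters; do
  for c in $(seq 1 "$chapters"); do
    curl -s "https://www.kingjamesbibleonline.org/${book}_${c}_1611/" \
         -o "webScraped/${book}-${c}.html"
    sleep 1   # no need to hammer their server
  done
done < books.txt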
What went wrong with this process? Surprisingly little: I didn’t find out till Sunday lunchtime that I had completely overlooked the Apocryphal books, and that the file naming conventions used for these were not quite the same as for the canonical chapters (“Judith”, for example, is actually spelled “Iudeth”); but for the bulk of the 1300 or so chapters my guesses about the URL to use were spot on. [This was Hubris. See my comment below.]
The real challenge was disentangling the useful data from the resulting screen-scraped mess of pottage. My usual approach here is to run the HTML I get through Dave Raggett’s utterly wonderful tidy utility to turn it into kosher XML, and then write an XSLT script to pick out the bits I want. In this case, however, the pottage I got was so messy even tidy threw up its metaphorical hands and refused to proceed with it, so I had to concoct a little perl script to pre-process each file and extract just the useful part, before running it through tidy, and processing the resulting XML into conformant TEI by means of saxon and a little XSLT stylesheet. Like this:

# for each scraped page: extract the useful part, tidy it into XML,
# then transform the XML into TEI with saxon
for f in webScraped/*.html; do \
FNAME=`basename $f .html`;\
echo ${FNAME};\
perl extract.prl $f | \
tidy -q --error-file tidyerrs --doctype omit --numeric-entities yes -asxml | \
saxon -o:chaps/${FNAME}.xml -s:- -xsl:postGrab.xsl chapId=${FNAME};\
done;

And what went wrong with this process? Well, quite a lot, as you might expect, but nothing that couldn’t be fixed by tweaking and rerunning the process (this is why I always put the effort into writing bash scripts for jobs like this: they can be rerun till I get it right). For example, I was deriving an XML id for each chapter from the name of the input file, but some of the files had names beginning with digits, which is not allowed at the start of an XML id. My plan was to put each chapter into a separate XML file and then use XInclude to munge them together into a TEI document (to maximize the flexibility of said mungeing) — but for this to work I had to get namespace declarations in the right place, and as anyone who has used them knows, anything involving namespace declarations never works the first time. To say nothing of unexpected inconsistencies in the data: which in fact were really very few. In fact, the only thing that has so far caught me out was a handful of chunks which looked like biblical data in the HTML markup but were not. Fortunately these were detectable because they wound up with an invalid @n attribute in the XML output and could therefore be weeded out after the event.
While all this was going on, I wrote my driver file and did some further sniffing around on the interwebs for sources from which to complete the work. The 1611 Bible has a title page and loads of prefatory matter not all of which is available on www.kingjamesbibleonline.org (for the record, I found the title page on Wikimedia, and the rest of the front matter at several places, but in page image form only).
How should the Bible be modelled as a TEI document? In my first view, each verse is an <ab>, each chapter is a <div>, each book is a <text>, each testament is a <group>. This made sense to me, but then I realised that processing would be simpler if instead each book were regarded as a <div> of a different type; hence, that is what the current versions of both driver file and ODD say. Considerations of where to put the genealogies, tables for Easter, almanachs etc. which arguably do not belong in the front matter may lead me to review that decision, so I have left <group> lurking in my ODD. It should be easy to produce a separate driver file representing that alternative view.

A more pressing need, however, is to sort out the placement of the page image links: in the HTML source, and therefore in my XML, links to the page images for the whole of a chapter are given as a sequence at the start of the transcribed text for that chapter, rather than being placed at the point in the transcript where that page begins. In some cases that point is identifiable because the transcribers have chosen to include part of the running page header in the transcription there (it winds up as a <fw> element); but in many it isn’t. Most chapters only occupy a few pages, so sorting this out would not be a major effort, just a rather tedious and not easily automatable one.
So what have we learned? Actually it’s not that hard to make a usable TEI document even from something as idiosyncratic as this web site. Which is encouraging. Let me also hasten to add that I mean no disrespect for the hard work (still less for the generosity) which must have gone into the production of that website: everyone is entitled to their own design decisions. I am pleased to express my thanks to the anonymous people who have put in that effort, and decided to share its fruits even with the godless multitude. As the 1611 translators say in their preface ‘Zeale to promote the common good, whether it be by devising any thing our selves, or revising that which hath bene laboured by others, deserveth certainly much respect and esteeme’.

Interoperability of TEI projects : apotheosis or chimera?

This was the title (sounds better in French) of the closing talk I gave at an interesting workshop last week. A previous COST-funded meeting in Krakow had brought together Czech, French, Catalan, German, and Polish teams working on several different dictionaries of medieval Latin to elaborate the idea that maybe they could make their various lexica interoperable if only they could agree on a common format, for which the TEI seemed the most plausible candidate. Susanna Allés, the energetic organizer of this workshop, got funding for it from several sources, notably the ALLC (or European Association for Digital Humanities, as it now prefers to be known). She also seems to have hit on the wheeze of inviting a number of luminaries to make the case for the TEI dictionary tagset (notably L. Romary, F. Glorieux, P. Banski); alas, in the event only I turned out to be available. Which was useful for me, since preparing for the workshop meant rediscovering all sorts of dusty and neglected (by me, though not by others) parts of the Guidelines.
The workshop was held in the CSIC (Spanish for CNRS) Institució Milà i Fontanals, which occupies a rather grand building conveniently located in the Raval, a picturesque if slightly seedy district of Barcelona to the left of the Ramblas, and we were all accommodated next door in the splendidly-named Investigators’ Residence on Calle Hospital. Barcelona is not a place for those uninterested in food and drink, and we were very well fed, in large quantities, if at strange hours and (on one occasion) after a lengthy walk up town through an unexpected tropical-style deluge. The ravioli stuffed with pears and cheese offered by the resto “En Ville” for lunch was particularly memorable, invidious though it is to single out this one occasion.

More intellectual fare was also on offer, of course. It is always a pleasure to arrive amongst a group of specialists personally unknown to me and from a domain of which I am more or less totally ignorant, and to find that word of the TEI has already reached them, and often in a far from superficial way. So I was made very happy indeed to hear Sabine Thuillier (currently working in Madrid on the Diccionario Griego-Español but ‘formed’, as they say, at the Ecole Nationale des Chartes) evangelise for the TEI as an international open source community, and impressed by the way she is implementing it in a workflow which, though its editors remain obstinately wedded to WordPerfect, is determined to produce a respectable TEI P5 version.
Similarly, the team responsible for the eLexicon Mediae Latinitatis Polonorum led by Krzysztof Nowak from Krakow, while maintaining a proper scepticism about some aspects of the TEI’s conceptual model, was clearly persuaded of its virtues as an open standard, notably as evidenced by both the amount of open source software (they mentioned XTF, PhiloLogic, and TXM) and the number of comparable projects (they mentioned the Anglo Norman Dictionary, the Glossarium DuCange, and several others) using TEI. Their workflow starts with an OCR phase, since they are starting from an extensive library of source texts, and then uses LibreOffice and a customised library of styles to enhance it to the point where it can be automatically converted to TEI, thus (apparently independently) following the same path as is used by Lodel, OxGarage, Agora, and no doubt others, to combine the user friendliness of a word processing style interface with the rigour of a TEI structured maintenance format.

Catalonia has ambitions (as posters everywhere proclaim) to become politically independent of Spain, and certainly its linguistic independence is a well established fact. As a confirmed non-speaker of either Spanish or Catalan (nor of Basque, Galician, or Portuguese for that matter) I regretfully let the interventions in those languages wash over me, and thus missed out, notably, on Jose Manuel de Bustamente’s insights on the relation between textual corpus and dictionary. I did however manage to understand the German colleagues present, since they made the effort to speak in English or French: Alexandra Gorbrecht from the Trier Centre for Digital Humanities, for example, gave a brief overview of the dozen or so dictionaries put together online at woerterbuchnetz.de with a well designed query interface. Allegedly all of these dictionaries are locally stored in TEI XML, but as this is not currently exposed one cannot tell how consistently it has been done. None of the other major TEI dictionary projects in Germany I am aware of was represented here: presumably because none of them is specifically concerned with Mediaeval Latin. I had to console myself for the absence of Werner Wegstein from Würzburg by stealing one of his examples for my own talk.
Bruno Bon and Renaud Alexandre from the IRHT in Paris had the advantage, if advantage it be, of being able to develop their proposals for an over-arching Novum Glossarium Mediae Latinitatis on the basis of the already existing complete Glossarium of Du Cange, which has been freely available in TEI-inspired XML markup for some time now, thanks to the work of Frédéric Glorieux. The idea seems to be to develop a set of proposals able to express the (not inconsiderable) variation in practice amongst these and others working on different lexica of medieval Latin in Europe, and thus create what (inevitably) Bruno suggested would be called NGML (the Novum Glossarium Markup Language). As a first step they have set up an exploratory multilingual wiki with some nice visualisation tools, based on a few sample entries taken from each of the five different lexical projects (specifically, in Barcelona, Prague, Krakow, Munich and Paris), and are inviting more. This could be fun, though I think expressing NGML as a real TEI ODD would be more of a challenge.

Susanna Allés and her graduate student Frédérique Laugrost (on secondment from the Ecole Nationale des Chartes) talked about the specific problems they faced when starting to apply the TEI to the text of their dictionary: the Glossarium Mediae Latinitatis Cataloniae. Many of these are familiar, of course: notably those which derive directly from the wish to preserve the punctuation and use of abbreviation which characterize such sources and at the same time model the logical structure which they determine. Some of these problems do however point to aspects of the current TEI dictionary model which could be improved.

I started making a short list of such points during the workshop, but sadly did not get very far:

  • too many of the proposed TEI dictionary elements relate only to modern lexicographic practice. Deciding which ones to filter out to make a kind of TEI Lite for dictionaries would be very desirable.
  • an element for “translated segment” is desired, even if it is just syntactic sugar for a segment carrying a value for xml:lang other than that of the surrounding text
  • some dictionaries have entries which are large enough to have multiple paragraphs but there is no place for <p> in any model.entryLike element
  • when a term is identical in two or more languages, can xml:lang take more than one value? (I confidently said it could, but I think I am wrong)
  • how should you mark a word which is clearly readable in the text when its meaning is entirely uncertain? (I suggested <orig>, but there must be better ideas)
  • The typology currently used for <form> combines categories from entirely dissimilar taxonomies, e.g. @type=lemma is an entirely different kind of thing from @type=compound. Likewise, the typology one might want to use for <sense> should be more to do with the way the sense has evolved. To both these points I said (in my best French) “Bof”. Or, more precisely, it’s only by receiving proper input from specialists in the field — those best able to define more appropriate typologies — that the TEI progresses…

I’m hoping that the undoubted success of this workshop will encourage the participants to form a SIG on the subject or (as Piotr Banski had previously suggested to several of them) to make an active contribution to the existing LingSig. Plenty of scope for very interesting work to come, not to mention the opportunity of returning to Barcelona for the paella which I somehow failed to find time for on this occasion.

The origins of ODD

I’m moving house this week, which involves packing up thirty years of accumulated junk of various sorts. As a result, every now and then I stumble upon some long lost historic document, like this one. It dates from a lunch that Michael Sperberg-McQueen and I enjoyed at the Lido restaurant in Bergen in November 1991. This being a family restaurant, it was equipped with paper table cloths and wax crayons, Norwegian kids for the use of, which Michael and I were quick to reappropriate to our immediate needs, namely some kind of visual representation of the production system we wanted to create for the editing and processing of the TEI Guidelines, version P2. We knew we were going to write and edit it in some version of TEI SGML; we had faith that anything in SGML could be transformed into anything else. We just had to work out how, and what.

P1 had been produced by some devious hackery that only Michael understood, and more critically which only ran on the mainframe at UIC; we wanted something that would be platform (hardware and software) independent. Such was the promise of TEI SGML, after all. Somewhat to our horror, the only reasonable high level programming language in which we were both reasonably competent and for which there were decent implementations on all the machines we collectively used (IBM CMS, VAX VMS, IBM PC, Macintosh…) seemed to be a now largely-forgotten string handling language called Macro Spitbol, so we decided that our production system (what nowadays we’d call a work flow) would have to be written in that. But of course the heart of everything would be a nice author-friendly TEI SGML dialect, for which we optimistically coined the acronym ODD: One Document Does-it-all. ODD files would be parsed by an SGML parser, and the parser’s output filtered through a variety of Spitbol processors to create other formats. And that, more or less, is what we did.

On this schematic you can see the basic idea in blue. The big blue circle is the ODD format, from which are generated canonical TEI files (with extension .TIN (for Tiny) or .TEI), RL files (extension .TD), and DTD files, the three little blue boxes. DTD files are of course SGML DTD files, which is why you see a green line going back from them to validate individual ODD files (I don’t know why it’s labelled LB though). “Tiny” files would use a subset of the TEI Lite schema defined back in 1988; RL (later renamed .REF) files would use the TEI vocabulary Michael had developed for reference documentation of individual elements (“TD” for tag documentation). Down the middle you see a list of TLAs in blue which I think must have been attempts to decide on a name for the format (WEB, Joe, LAM, RDF, CSP…), though what they expand to I really don’t remember – what a pity we didn’t choose RDF. Or not. And over on the left in red you see some notes which eventually became the canonical structure of the TEI Guidelines: there is a chapter about the “blort”, containing prose paragraphs; there is a documentation element referencing the blort tag, and there is a parameter entity reference which pulls in the definitions for the blort chapter.

What happened next? Well, we did set up a workflow more or less on this model, and we did use three separate filters written in macro Spitbol (mostly by Michael) which turned our ODD SGML into two flavours of straightforward TEI-lite-like SGML, which we called “P2X” and “REF” and also generated SGML DTD fragments. After experimenting with a generic filter called “tf” (also in Spitbol) to translate the generated TEI files into LaTeX, and dallying with a Canadian tool called Omnimark, we finally settled on a rather swish transformation engine called Balise, which was produced by a French company called AIS. Either way we were able to print the fascicles of P2 in something that not only looked quite nice but also looked just the same whether I printed it in Oxford, or Michael in Chicago. Except for the paper size, of course: ain’t standardisation a marvellous thing.

And what happened to ODD? It turned out to be quite a good idea. We gave a presentation about it at the ACH-ALLC conference in 1994, though I cannot remember what we said and we never got round to writing it up. Michael developed the ideas in the “tag documentation” part quite extensively, and (I believe) used them also in his next job working for the W3C, but the TEI’s ODD stayed more or less unchanged until work started on the TEI’s XML reincarnation, at which time the whole system was re-imagined and redesigned as the lean mean generic schema generation system we know and love today. But that’s another story.

Here we go again

It’s ridiculously early for a Sunday morning, but the only plausible train to catch from Oxford if you want to connect with a 1220 Eurostar leaves at 0940. So here I am wondering, along with many others, where on earth is said train. We can see it in the sidings North of Oxford station, but it’s not moving towards us and the announcements are not reassuring. Maybe the stopping service to Ealing Broadway is a better bet: certainly standing around fretting on Oxford station is not pleasant. Some twenty bucolic minutes later, I detrain at Didcot in the hope of something better: which does indeed turn up in the shape of the 0940, now proudly running only 20 minutes late. I spend my uneventful trundle through the morning sunshine trying to work out what I have done to incapacitate tei-emacs on my laptop. Then an unspeakably horrible Circle line train bears me off to St Pancras, and the comparatively civilised space of the Eurostar lounge where I discover that in the general confusion of getting myself ready for this week’s set of French gigs I have failed to check something crucial into my nice new subversion repository. Ah well: no time to agonise over that, it’s time to get my disordered thoughts on the history of the TEI into some sort of plausible order, and to construct an appropriate French narrative around same. Which keeps me happily occupied for the rest of the day: out of London, across the wilds of Kent, under the channel, through Picardy into Paris, my nose barely strays a few inches from my laptop screen, tappety tappety tap, except for a few minutes degustation (I use the term advisedly) of a Eurostar snack lunch, and a few dirty looks in the general direction of some fellow passengers yapping away noisily behind me. Even nastier, but mercifully not noticeably longer than the Circle line is the hop by RER B from Gare du Nord to Gare de Lyon, where I resume work on board a nice peaceful TGV all the way to Lyon. With such good effect that my talk for tomorrow is all ready to go, even before I arrive at Perrache. Such virtue warrants dinner, even though it’s now a little late, so I stride purposefully across the Place Carnot to the brasserie Victor Hugo, order a hamburger a cheval (nothing to do with horses, this is a burger with a fried egg on it) frites, et un pot de cote and phone Marjorie to re-assure her that I am here and ready to boogie, before retiring to bed.

One hasty breakfast later, Dominique Roux and I set off in search of one of the many fine Universities in which Lyon rejoices, more exactly the vaulted basement dungeon in which Marjorie’s seminaire is taking place. The morning was supposed to be a double act, but since Paul Spence couldn’t make it his colleague Guilhem Pépin instead gave us an interesting lecture about medieval history before showing us some of the Gascon Rolls project. Pépin is a French (or more properly, Gascon) historian actually working at Oxford in the History faculty. There was a time when I might have huffed and puffed a bit about Oxford academics who take their TEI digital projects off to King’s College instead of using the local facilities, but these days I have become placid and boring. Anyway, Pépin was a good speaker and clearly an agreeable person to work with; and the material presented all sorts of interesting possibilities for analysis once marked up, even if he was almost aggressively reluctant to claim any expertise in the application of markup. Not for the first time, I wonder why it is perfectly acceptable for academics to profess ignorance of one technology that is essential to their work, whereas ignorance of others (say, bibliography) would seriously damage their career prospects. And then off we all went for a decent lunch, this being France: dos de colin avec ses pommes de terres lyonnaises, if I remember correctly. After which I gave my talk, which seemed (to me at least) to go remarkably well for a first outing: I suspect I will give it again, at least as long as people go on asking me to explain where on earth the TEI came from, and why it has not sunk without trace. It is a good story, with a good moral, I think. After a coffee break, Dominique Roux from the Presses Universitaires de Caen gave a thorough overview of their projects and preoccupations, presenting a variety of cool projects, a TEI-based workflow, some wise remarks about the use of TEI in commercial publishing, and much else besides. It’s a pity he came at the end of a long day with perhaps a touch too much Gasconnade in it, since it would have been good to discuss several of the ideas he presented with the masters students present — who had all been assiduously taking notes earlier in the day, but were clearly flagging somewhat by the end. I was sorry to have to rush off in time to catch the train to my next gig, in Tours.

Preparation for said next gig took up quite a bit of the journey, quelle surprise; indeed I don’t think I looked out of the window once. And yes, it is possible to get from Lyon to Tours without passing through Paris, if only once or twice a day. The TGV concerned stops at a place I have never heard of called Massy, and then at St Pierre des Corps, before zooming on to Caen. St Pierre des Corps is a dismal little junction from which a variety of trains shuttle into the architectural splendour of Tours central, about 5 minutes away. Even when entirely enclosed in scaffolding as a part of its restoration as a patrimonial monument, Tours station is an uplifting spectacle late at night, when everything around it is closed except for Macdonalds. Equally good for the soul is the Grand Hotel of Tours which has retained and lovingly refurbished its charming 1930s decor, all peacock feathers and wooden panelling and geometric patterns. Last time I was here in December, the wifi was misbehaving but everything seems to be fine now, and the breakfast is excellent. Next morning, it’s a quick trot across town to the Centre d’Etudes Superieures de la Renaissance to give my contribution to their Master 2 professionnalisant Patrimoine écrit et édition numérique : initiation à l’encodage des textes patrimoniaux. This is the third or fourth year I have done this, so you would think I had it sorted by now. My contribution this year consisted of a ninety-minute lecture on manuscript encoding (much revised to recognise the existence of the new <sourceDoc> element, as of release 2.0 of TEI P5 — this was the talk I thought I had mislaid, but hadn’t); followed by another 90 minutes on Roma and schemas and such like mysteries, using the Virgolos project as a case study (called TEI a la cartes, geddit?); and finally another 90 minutes attempting to explain XSLT pour les nuls. This last was a rather more quixotic and under-prepared venture: although novices quite quickly grasp the basic ideas and usefulness of XPATH, grasping exactly what an xsl template is and why you might want one is rather more of a challenge. But the punters seemed content to be slightly baffled at the end of a long and varied day, and I am sure that the local team will clarify any residual bewilderment next week. Dinner was at the Odeon, another piece of lovingly restored 1930s kitsch, where the food was excellent (I had the rognons since you ask), and Marie-Luce and I discussed the notion of a weeklong residential formation approfondie sur la TEI under the auspices of the Cahiers consortium, plus anyone else who might like to play.

Tours is in the process of acquiring a tramway, which means that large amounts of it are being dug up and knocked down, notably near the railway station: I observed this with interest over breakfast, before hastening off rather late for a short consultation with the CESR team about how to autogenerate an ODD from their Epistemon corpus (sort of difficult if you don’t have Saxon installed), and some discussion about how best to proceed with their ongoing project of revising the project’s encoding manual. The plan is not only to update but also to generalise this manual for use by other similar projects, which would certainly be useful: there isn’t a lot like it in French, aside from the BFM manual. However, I have a train to catch this morning, so I have to sprint back through the marche des fleurs, looking neither to the right nor the left, regretfully for there is much to see, and resisting the temptation to stop to buy fresh garlic or dried flowers or a sandwich for the journey or even take some photos of the pavements now decorated with a rich and colourful assortment of flowering bedding out plants. Tours is a charming place with much to recommend it. And so, off to Paris where I have a couple of crucial meetings to attend, crucial enough to propel me into an irrational anxiety about the progress of my train which suddenly decides to slow down and stop in the middle of nowhere more frequently than is decent, even for an intercite. In the event, though, we pull into Austerlitz ten minutes early, allowing me to take a pleasantly-paced walk through the Jardin des Plantes and up the hill to the TGE Adonis office in good time for my appointment with my directeur, Jean Luc Pinol. We discuss the coming year’s work plan for MEET; this being satisfactorily resolved, Ariane agrees to release a PUMA forthwith (don’t ask) … I spend the afternoon catching up on the gossip with TGE colleagues before checking into this week’s hotel which is conveniently located opposite a nice bar and round the corner from a rather excellent brasserie. Here I dine, expensively but deliciously on foie de veau patates et encore un pot de rhone. It’s tiring work all this gluttony, you know.

Next morning, I rise at a civilised hour, and catch up on my commitments at the TGE most of the day, taking however an extensive lunch break to discuss with Mathieu Andro from the Bibliotheque Ste Genevieve a wondrous new digital library project which has apparently secured 1.7 million euros of local funding to finance a deposit archive for the digitized outputs of a select bunch of Parisian libraries, and wants to use the TEI. Did I hear that right? The lunch was pretty good too. Finally, I put in place some hasty arrangements for another meeting in Paris next week, and then trek on foot across town to Chatelet (where there seems to be, as usual, a manif going on) to catch the metro to Gare St Lazare (which, post-renovation, seems to be mysteriously disguising itself as the gare de l’est), to take the train to Caen, for the last gig of this tour, namely Matthew Driscoll’s ongoing TEI seminar at the MSH. The Hotel Quatrans is much as I last saw it, and so, I am pleased to report, is the little restaurant called “Les saveurs de la Reunion” just round the corner from it, where Matthew, Eric, and I enjoy some rum, some gateaux piments assortis, two bottles of muscadet, and a tasty carrie cabri before retiring for the evening.

Friday is seminar day. Serge Heiden ARE YOU READING THIS SERGE? from the ENS Lyon opens proceedings with an update and an impressive demonstration of the textometrie project, which goes from strength to strength. They have an equipex in which they will be working with hundreds of Historians, and a number of other collaborations in prospect, some ANR, some DFG-funded. The software is, of course, still available from sourceforge, and they are also in the process of setting up a portal for general access to some demonstration applications of it. Serge discussed the way the software uses TEI and other forms of markup; they have now fixed on a TEI-conformant pivot format, for which an ODD is in preparation. He also demonstrated many XAIRA-like features of the software and reported some work done by Alexei Lavrentev in importing and analysing the markup of a large corpus of texts from Frantext. He was followed by Antoine Widlocher who described the search engine under development at Caen’s Greyc research group, initially for use in the Descartes project. Its data model uses graphs rather than trees, and much of his talk therefore concerned the difference between the two, although he did also present the user interface envisaged for the system; this is, of course, SPARQL-based, and will access a triple store in which XML and other annotations are all represented in RDF. All very interesting if, perhaps, a little computer science oriented. Maud Ingarao commented that the project resembled Edouard Portier’s work on multistructured documents; I should have mentioned Desmond Schmidt, but didn’t. After lunch (in the student canteen; n’en parlons plus) Maud gave a brief overview of a newish XML database system called BaseX, and demonstrated some of its jazzier features: she also noted that a test BaseX server has now been implemented as part of the TGE Grille de services. Frédéric Glorieux then gave a nice talk demonstrating how the presence of detailed markup in his version of François Ganaz’s “XMLittré” project facilitated several interesting searches: he proposed that the average size of text fragment within a TEI document might be an interesting stylistic indicator; and remarked on the high frequency of emotive words like “dieu, homme, roi” in the examples cited by Littré. Finally in this session Marie Bisson demonstrated the current state of the Juxta collation system under Windows, working on three manuscripts of Thomas Le Roy. Juxta apparently has its own XML markup but does now also (more or less) grok TEI.

Last but one session of the day concerned “quantitative codicology“, a term, I learned, which is even older than the TEI, having apparently been invented by someone called Ornato in 1980, according to Matthew, though it is a concept which can be seen to underlie Don McKenzie’s 1985 Panizzi lectures on bibliography as “the sociology of texts”, or the so-called New Philology of Stephen Nichols at the start of the nineties. I liked Matthew’s use of the phrase “the artefactual turn” to describe his increasing certainty that the meaning of text should not be dissociated from its “embodiment” or the historical and social forces that documents manifest, and intend to appropriate it for use when presenting the TEI’s recent reinvention of <sourceDoc>. Matthew and colleagues described the Fornaldarsögur norðurlanda project, which aims to provide an account of the production, dissemination, and reception of the “chirographically transmitted texts” of 36 stories from prehistoric times which can be identified in some 1500 texts presented in over 750 distinct Icelandic manuscripts. These are described using (inter alia) a reduced and tightly constrained schema derived from TEI P5, extended to include information derived from the transcriptions of the mss such as the average written area, the number of abbreviations per line, etc. as well as such features as the presence of decoration, or the types of text included. Sylvia Hufnagel presented some hypotheses about possible connexions between these evidential characteristics and assumptions about the wealth or status of the owner or person believed to have commissioned creation of a manuscript, though there is really insufficient evidence so far to justify any generalisations one might be tempted to make about (say) the emergence of the “prestigious reading manuscript” distinguishing (as it were) “coffee table” manuscripts from “paperbacks”. Eric Haswell described clearly and concisely the technologies used in the project, contrasting the “data centric” and “document centric” notions of relational and xml databases, and also showing how their web-service based implementation based on eXist made it possible very easily to extract query results as CSV for input into traditional spreadsheets or as JSON for use by cooler things such as Simile widgets. Finally, I gave that talk about linguistic annotation and why people say such terrible things about it. Not sure how appropriate it was to the day, but people seemed to be listening anyway. Final dinner of this week of over-eating was at Le Bouchon du Vaugueux where I (and others) tucked into a four course gastronomic menu, including some excellent roast duck, and rather a lot of stewed pears.

And on Saturday, the journey home, which was all very pleasant till I actually got to London: trains cancelled without warning, inadequate fallback facilities, Great British Public mustn’t-grumbling etc etc. It took longer to get from London to Oxford (about 100 km) than from Caen to Paris (about 200 km), and involved a train that was so overcrowded it could not leave the station, not to mention a 30 minute wait for a replacement bus in the cold outside Reading station. Never mind, next week I’m going back to France, where the trains (mostly) run on time and the train crews are (usually) helpful and less demoralised when they don’t.

A day in Lower Normandy

And so to Caen, whose University campus boasts magnificent if vaguely fascist architecture, at the top of a hill, commanding splendid views over the urban sprawl to the countryside beyond, and liberally decked with graffiti to bewilder future epigraphers.

OK Epidoc, encode this.

The University Press of Caen having joined forces with two other departments to offer him a visiting fellowship, my distinguished and white-haired Danish colleague Matthew Driscoll is organising a series of seminars over the next few months, and I am here for the kick-off session “TEI et encodage des sources”. About a dozen or so TEI fans are gathered in the Belvedere Room, which is vast and very cold but still affords delightful prospects (as they say).

First up is Julia Rogers, a local doctorante describing the online edition of Descartes on which she is working under the watchful tutelage of Pierre-Yves Buard inter alia. No manuscript survives of Descartes’ works, and modern editors have played fairly fast and loose with them as a consequence: this impeccable electronic edition returns to the first printed editions as its basis, but uses all the possibilities of digital editing. Text is captured and maintained collaboratively by up to 15 scholarly editors, using a customisation of XML Mind to enforce a simple P5-conformant protocol designed by Pierre-Yves (and built with Roma), allowing for such niceties as the addition of editorial notes, citations, tracking of quotations, mathematical formulae (currently done in TeX though this will change) etc. Elsewhere in the University a fairly sophisticated morphologically-aware search engine is being developed, so that the original text can be queried in Modern French. The online edition will also integrate high quality page images supplied by the BNF, compensating for the decision not to encode all features of the layout. Impeccable, as I said. I was also impressed (as usual) by Sourcencyme, presented by Isabelle Draelants from Nancy and Catherine Jacquemard from Caen. This ongoing project will combine a textual corpus of medieval encyclopaedias (about seven so far) with a sophisticated indexing system tracing the chains of reference and citation amongst them, extending in some cases beyond into the 19th century. As a real hand-built hypertext, it is thus increasingly becoming the thing it represents: a complete encyclopaedia of medieval learning, endowed with tools for collaborative editing and annotation, and also with a specialist journal-like addition published by the ubiquitous revues.org. However, unless I misunderstand, a significant number of the texts it treats are owned by Brepols, which may pose access problems. Next before lunch, we were entertained by Vincent Olivet and Frédéric Glorieux from the Ecole Nationale des Chartes whose home-grown RelaxNG tools continue to advance in the general direction of TEI conformance. They have been working on a direct conversion from ODT to TEI, using the same principles as Sebastian Rahtz’s stylesheets but aiming at a more specific homegrown RelaxNG schema, now expressed (I think) using an ODD. This was all very satisfactory, as is the fact that the tools in their workshop continue to be readily accessible.

Lunch (a three course affair involving some rather good salmon, and a chocolate mousse) was also highly satisfactory, and we reconvened much restored for an afternoon combining three short project presentations with set pieces from Matthew and from myself. Subhasree Pasupathy, from Caen, first of the three, described her use of the TEI mechanism to represent textual variation in her thesis on the projects of the Abbe de St Pierre. Thomas Lebarbe introduced us to the pleasingly heterodox digital Stendhal project at Grenoble during which I wondered not for the first time how hard could it be to write an ODD corresponding with their home grown DTD. Finally, Jorge Fins from Tours showed us how the Bibliotheque Virtuelle des Humanistes at Tours is now using both XTF and Philologic to search its corpus.

And so to the grand old man of TEI-based editing: not me, but Matthew Driscoll. He spoke in English but (as someone said to me afterwards) with such limpidity of discourse as to pose no problem (which sounds even better in French). Citing W. W. Greg’s distinction between “substantive” and “accidental” variation he showed how TEI markup enables one to capture both, but display either, by the judicious tweaking of rather cunning stylesheets developed by Eric Haswell. He also talked about gaiji, news of the existence and facilities of which does not seem to have penetrated everyone’s consciousness to the extent that it probably should have by now. And finally, a good half of the material I had prepared for my own talk having been presented by previous speakers, I was able to close the day in a suitably forward-looking way by focussing mainly on the new concepts proposed for handling l’édition génétique (sourceDoc, mod, change etc.) in TEI P5, which all seemed to go down quite well.

Deplacements Septembre – 4 et finale

I wake in yet another extraordinarily overpriced (but architecturally impressive) hotel, this time in London, to find that the sun is already shining brightly and I have ample time to look for a real breakfast — my first for many days — which I enjoy alfresco, in the much to be recommended cafe in Russell Square. For today really is the last stop on this little tour of duty, and I am down to give a closing address at University College London where Julianne Nyhan and Anne Welsh have jointly organised a symposium on “the Hidden Histories of Digital Humanities”. Sitting in the leaf-dappled sunshine, I mentally resolve to try to take seriously the notion of Digital Humanities as something having an identifiable history, and (with slightly less difficulty) the notion that I myself have contributed to it. I also resolve not to get too much ego on my tie, and not to use that joke to open my talk, since I am scheduled to appear at the end of a long day. My talk is a revamped version of one I have now given twice, both times in French, so doing it in English will be entertaining (for me, at any rate).

I arrive at UCL in good time to pay my respects to Jeremy Bentham still sitting in his glass case, and to be issued with my little folder of publicity for UCL’s new Centre for Digital Humanities, a free pen, a completely empty USB key, and some fairly nasty coffee. Ah yes, English coffee. And sugary English biccies in plastic bags. Never mind, we pile into the massive auditorium, and listen to Professor Willard McCarty, for it is he, reading wise words, and dropping names in an appropriately professorial manner. His title was something about how to write a history of Digital Humanities. Amongst topics mentioned I seem to have noted only the following: a wurlitzer book vending machine; Margaret Masterman; I A Richards; N Mccullock; Jasia Reichardt’s cybernetic serendipity and of course Homer’s Odyssey, xix, lines 149-50. Willard somehow manages to appear both encyclopaedic and parochial. While I share his desire that the usual suspects when pontificating on the “discipline” should meet the highest of academic standards, it also seems to me that writing history is largely about taking revenge. But before it all gets too personal, the charming Claudine Moulin from Trier’s DH Zentrum takes to the floor and delivers a lengthy meditation on the concept of “knowledge spaces”, in particular (I think) the way that such spaces can be regarded as “the spatial concretion of knowledge stocks” (I understood that when I wrote it down, but now I am not so sure).

After a pause for more brown liquid and sugary snacks, during which I traded scurrilous gossip with Lorna Hughes, Edward Vanhoutte, attended by not one but two tall and glamorous Flemish ladies, gave what was probably the best researched presentation of the day, concerning how humanities computing mutated into digital humanities, and what the difference between them might be. Like Fred Gibbs and Patrik Svensson (whose definitional forays he cites), Vanhoutte seems to think that these terms label ontologically distinct fields of activity, or “disciplines” which can be located independently of any social, historical, or political context, and that there is an inherent circularity in defining either of these terms by means of lists or surveys or (one might say) communities of practice. But then he also seems to think that terms exist independently of their application, so he is at least consistent. I learn with interest that the currently foundational text in the field (Schreibman et al) was titled a Digital Humanities Reader, at the suggestion of John Unsworth, because the marketing department of their publisher did not like the originally proposed “Humanities Computing” label; I immediately tweet the proposition that said publisher (Blackwell) was keen to put deep blue water between this product and the immediately preceding foundational texts of the domain, OUP’s “Humanities Computing Yearbook” and “Research in Humanities Computing” series. That’s how historical truth is created, Edward. Next up is the totally charismatic Melissa Terras, back from maternity leave and showing her dedication to the cause by making this seminar her first return gig, and on a Sunday to boot. She does us the compliment of actually addressing the ostensible topic of the seminar by talking (ostensibly) about the implications of “Crowd sourcing: beyond the traditional boundaries of academic history”. I think the idea was to make connexions between the role of the internet beyond the academy, the need to pay attention to ephemera when constructing history, and how crowd sourcing might help us explicate them. She used as real-life example an amusing document from the TEI archives, but Mel has the rare ability to make any document sound fascinating. Then we resumed for more serious matters. Last before lunch, James Cronin from UC Cork spoke fairly impenetrably about what is presumably a cause celebre in Irish architectural history: the elision from history of Lehmann James Oppenheimer, a key figure in the Irish Arts and Crafts movement at the end of the 19th century (I learn), but no longer featuring in any discussion of the churches whose interior he designed, presumably on account of his name (I hypothesize).

After lunch, about which I report only that it was not French, Andrew Flinn from UCL spoke entirely over my head about the information theory behind oral history methodologies (I plead total ignorance of this topic). Also from UCL, Vanda Broughton was much more accessible on the social history of what we now know as facetted searching, which she made a plausible case for having been invented by a real-life group of information scientists inspired by Ranganathan long before its reinvention or rediscovery in shopping systems and super cool web search engines (like Isidore). In the absence of the sadly-missed Claire Warwick (she was off sick) the next item on the programme was a “virtual presentation” by Ray Siemens, i.e. (I think) a previously recorded video of Dr Siemens gazing at us and chatting away very much like a late night radio broadcaster. I say that not just because of Ray’s mellifluously smooth mid-Atlantic phrasing, but also because he sounded unsure that anyone was actually listening to him. He had a number of interesting anecdotes about “pioneers of DH” (the Stanwood brothers, for example), some of which were new to me, but I found it difficult to engage with discourse so carefully inoffensive, and so clearly addressed to no-one in particular. And so, after a brief tea break, to my own presentation, which was (need I say) entirely wonderful, if a bit chaotic. More to the point, I managed to get through it within the time allocated, leaving a bit of time to answer questions before the six o’clock deadline for everyone to decamp to the room where several bottles of a really quite drinkable chardonnay awaited us. I got pleasantly drunk, staggered out into the night, hailed a taxi to Paddington, and thus eventually back to Oxford for a well-earned rest.

Willard and me, then and now

Deplacements, sept 2011 – 3

16 septembre. Wherefore to Exelmans? Because it is within walking distance of the CNRS “campus” at Michel-Ange of course, where I am due to attend a day-long event hosted by the TGE Adonis. Rather like a JISC town meeting, or an EU pre-proposal briefing, the main purpose was partly to inform people about the latest ANR Call for proposals and also of course to present the services offered by the TGE Adonis for the use of such projects. Not surprisingly it was a well subscribed event: between 150 and 200 people turned up, handed in their identity cards (security is strict at the CNRS campus) and made for the coffee and croissants. I met lots of people I know and recognised, and also some I knew but failed to recognise, embarrassingly enough. There were lots of the kind of intense conversations you tend to get in academic contexts when there’s money in the air. Patrice Bourdelais assured us that the overall budget of the Institut des SHS would not be too much affected by current financial anxieties, and then apologised for having to rush off to a crisis budget meeting to discuss precisely how much it would be. Jean-Claude Rabier, from the ANR, gave a much longer talk about the thinking behind this “ANR Corpus” call, the first since 2007, what they were looking for, how to get funded, what to do and not to do, how much money was on the table, etc. This was what the punters had come for, and he was listened to very attentively. Laurent Doucet talked about the TGIR Corpus, which I found interesting since I have been wondering what it is for, for some time. Apparently, it will set up lots of subject-focussed consortia, with small amounts of money for meetings, sharing of expertise, training etc. One or two already exist, and more are planned. Are they like the “centres de ressources numeriques” of the TGE Adonis? You might think so, but they’re not, even though some of the same people and institutions are involved in both. Jean-Luc Pinol, Richard Walter, and Laurence Rageot then each spoke about the services offered by the TGE-ADONIS (grille de service, Isidore, archivage perenne etc) and its guides de bonnes pratiques, with a quick plug also for next month’s ANGD summer school. Asked to be “ponctuelle” in their questions before lunch, some of the audience (notably Serge Woliakowsky) nevertheless threw some googlies, such as “how does this relate to other european initiatives?” or “is there any procedure for automatically enhancing data capture to meet the recommended norms?” or even “what about the Centres de Ressources Numeriques then?”

And so to an excellent buffet lunch and a reprise of intense conversations as above (I mostly learned about Marie-Luce Demonet’s new proposed consortium called Cahier, focussing on literary texts) followed in the afternoon by a briefing on some technical aspects of the Grille de Service, followed by more questions and discussions from the floor. I noted the following comments: yes, multidisciplinary (i.e. not just SHS) projects would be considered; no, there is no charge for the TGE’s services, unless you want more than we are funded to provide, and no, the ANR does not fund anything except “research”; yes, there will be another call; at least one-third of the members of the selection panel are “international” (i.e. non-French) experts; 4% of the ANR overall budget is allocated to infrastructural support (I think this is how one might calculate the level of support that the TGE is “funded to provide”), which is why management costs for individual projects should not be claimed. Finally, Patrice Bourdelais reappeared from his own budgetary meetings, reassured us that the CNRS had plenty of lawyers to defend its interests and said “pour l’instant y a pas de catastrophe”, which is probably as much as one might hope for in these days of ubiquitous financial deficiency. So the meeting broke up very agreeably, and I reclaimed my passport and headed for the Eurostar.

Deplacements, sept 2011 – 2

13 September. The AFLS conference has another half day to go, but I have to move on. I have foolishly agreed to do both opening and closing plenaries at a week-long CNRS-funded Ecole Thematique on linguistic annotation to be held in (I am told) a chateau in Biarritz: who could resist? So I catch the midday train from Nancy into Paris, traverse the city of light by metro ligne 4 once more, emerging at Montparnasse to find that it is infeasibly hot outside as well as beneath. How well I remember leaving rainy Oxford with a suitcase full of autumnal long-sleeved shirts and a raincoat; what folly not to have anticipated the need for cooler clothing. Well, there is time and opportunity to buy a short-sleeved shirt from the C&A store underneath the Tour Montparnasse, and doing so makes me feel cooler already. I even have time to eat something fairly horrible and cheesy at a Flams’ fast food joint, before catching the TGV to Biarritz. This splendid train is sans arret for 3.5 hours to Bordeaux, at which point the climate and the scenery suddenly change, and the train starts pottering through the wonderful Camargue for another hour or two, before finally stopping at Biarritz, where I descend and join the other stragglers hopefully waiting at the taxi rank.

Good lord, I realise, as I descend from the taxi, they were not kidding about the chateau. The Domaine de Francon was built for a wealthy English milord in the 1880s, and is (it says here) “une vaste demeure de style anglo-normand ou Old English, au luxueux décor intérieur très éclectique et très raffiné”. Though now owned by a holiday rental outfit, it still retains most of the original decor (huge wooden staircase, stained glass windows, painted ceilings, Second Empire Japanese-style decorations and marble fittings passim) as well as a very fine tree-filled park, and verandahs from which you can see huge Atlantic breakers banging away on the beach half a mile away. See further my photos. It has the atmosphere and the style of an Agatha Christie country house, as well as an excellent cuisine, though by the time I arrive, there’s not much left to enjoy except the cheese board. Never mind: I retire up the vast creaky staircase, past the glorious stained glass window, into my huge room and sleep soundly.

Next morning, slightly disappointed to find that some French version of Jeeves has not discreetly laid out a morning suit for me, I slip into one anyway, grab a hasty breakfast, and proceed to the salle de conferences, to receive another little conference pack (wooden USB key, another badge, another bag) and listen dutifully to the organisers explain the modalities of this event, which has been in the planning for almost exactly a year. Coffee and buns on the verandah, and then it’s time to wheel out my talk on Linguistic Annotation for a (hopefully slightly improved) second performance. There is some discussion, mostly positive, and afterwards Anne Dister kindly takes me to one side and corrects numerous typos in the slides, which I perversely read also as a positive response. The rest of the day is devoted to brief presentations about oral, written, and video data, which nicely reinforced my comment that the Tower of Babel is still very much with us, and also demonstrated, to my relief, that I had at least chosen relevant topics (variation in transcription practice, mark-up of named entities, co-referencing, etc.). I was also struck by the fact that formats tied to particular software (Praat, Childes, Elan, Transcriber etc.) are seen as de facto standards.

The next day we meet in a custom-built bunker under the lawn where there is a swish conference suite (though the wifi is a bit rubbish). I enjoy listening to two grandes dames d’annotation linguistique — Anne Lacheret and Marie-Paule Pery-Woodley — give their take on the complexities of the field, but it becomes clear that these formal presentations, though each in their way impressive, are not the real business of this event. Instead it is in the extensive and occasionally heated discussions taking place over coffee and during meals and, yes, during early evening excursions to the beach that the intellectual action is taking place. As it should be, at an Ecole, of course. The twenty or so participants are an interesting mix of people at different stages in their careers, including recent doctorates, established scholars, and a scattering of engineers, and they also come from several different parts of the applied linguistics forest despite a shared interest in annotation (for example, video capture, sign language, psycholinguistics…). I struggle to keep up and feel obscurely honoured to be involved, though I am also somewhat preoccupied by more mundane matters like getting the next few talks ready, or getting my washing done.

In retrospect, taking a day out in the middle of the Ecole to go back to Paris was somewhat eccentric, if not barking mad, but I did it all the same. On Monday evening, I found my way to Biarritz airport in time to catch the evening Easyjet flight to Roissy: a fairly nasty experience, not improved by the excessively long walk from the arrival gate at Charles de Gaulle airport all the way to its railway station. I spent the night at a slightly shabby hotel near the Gare du Nord, and on Tuesday morning, I met Celine who guided me by RER out to the wilds of Villetaneuse, and the Universite de Paris XIII. Here, as guest of the UMR “Lexiques, Dictionnaires, Informatique”, I gave a completely different talk about the TEI (the third one of this trip) to a vast and heterogeneous crowd of students. Then I got a lift on the back of Fabrice’s scooter back to the station, took the first train back to Paris, and across to Montparnasse in good time to make the 1540 train back to Biarritz, arriving at the chateau somewhat out of breath, but just in time for dinner.

All of this meant that I missed entirely the chance to learn more from Antoine Widlocher and Yves Mathet about the standoff XML annotation tools developed at Caen, which is a shame, judging from the copy of their presentation on the Ecole’s wiki. But I did get to see Alexei Lavrentev demonstrating txm in action (and successfully breaking it yet again). I continued to worry about the two talks I still hadn’t written, but also went for a good walk around the grounds of the Domaine. I started to feel at home. Next day, we were assured, the weather would improve enough to warrant a picnic, which would be provided at the end of the morning, as indeed it was.

And so to Thursday, which began with a good talk from Sylvain Loiseau, almost but not quite saying “use the TEI”, followed by my final wrap-up talk on why standards might be considered to be a good thing, even in this fragmented field. As requested, I managed to finish this early enough for everyone to go off to the beach with their packed lunches. I however was condemned to sit around finishing my next (and final) talk, before setting off back to the airport. This time I took the Air France flight, which was better in that AF still hand out free drinkies, and don’t quite treat the passengers like refractory parcels, but worse in that the Orly bus, which allegedly comes every 15 mins, did not, on this occasion, come for an hour, which led to much overcrowding and grumpiness on the short hop across to Denfert-Rochereau. I then sleep-walked my way via metro to Exelmans, checked into another overpriced hotel, and collapsed.

Deplacements, sept 2011 – 1

It’s September: the rentrée — when everyone goes back to work, including me even though I am retreated — looms. I had a very nice summer, thank you, sitting in my garden for possibly the last time, enjoying being visited by daughters and grandchildren, and not having to go anywhere, except occasionally down the shops to buy a newspaper or some more mushrooms. And in particular not having to get up in the morning. It couldn’t last. The leaves on my conker tree are brown and as I write the conkers are already starting to fall. Time for some quick blog entries rapidly surveying the first series of displacements I undertook this month: five talks at four different venues in 12 days.

September 8th. I am sitting once more on the Eurostar, waiting for the closure of the doors, when my phone rings. The expensive hotel in the Marais where I am booked in for the night wants to cancel my reservation. I am delighted to discover that lack of usage has not impaired my ability to remonstrate in French, so I remonstrate. How very dare they. Then I spend the journey feverishly correcting the first of the four different talks I have to give on this trip (the others will come in due course) and not thinking about hotels any more. Which seems to work, since when I arrive at the Gare du Nord and switch on my French phone, the hotel meekly apologises for deranging me and assures me that everything has been regulated. Good. For 200 euro a night I expect a bit of servility. Luxury would be nice, but is not (it becomes apparent when I actually get to La Turenne du Marais) on the menu today. Never mind: all I need now is some dinner, which (I am pleased to report) was available from a quite acceptable Italian trattoria just across the street. I wolf down a good tricolour salad, some creamy pasta, some wine, and (a mistake this last) an allegedly Sicilian cannoli before retiring to my absurdly over-crowded hotel room.

Les Archives Nationales

Next morning, after a typical hotel breakfast, it’s a five minute stroll to Les Archives Nationales, which are apparently on strike despite the sunshine. The lovely Anais Wion appears and kindly carries my suitcase up hundreds of stairs, along dozens of corridors, and through all sorts of winding twisty passages which eventually lead me, puffing along behind, to the attic in which Denise Ogilvie hangs out. It boasts splendid views over the rooftops, and its own bathroom. The three of us then spend an agreeable morning debating how to mark up postcards in TEI and gossiping about the other ANGD trainers who haven’t apparently made much progress either. I realise that I still haven’t done the work I should have done on revising the workplan for the “structurer” session; in particular I haven’t written to tell the nice lady from INIST that I think there is too much Dublin Core in her proposed scenario.

No time for lunch. A not very quick taxi ride across town through the traffic to rue Lhomond, where I pop into the office to say hello, pop out again to get a sandwich, pop in again for a meeting with the neighbours at ITEM to discuss the possibility of a TEI-compliant version of their legendary Optima program. I form the doubtless incorrect impression that someone has told P-M de Biasi that he won’t get another round of funding for this software unless it can export TEI. He impresses on me how different it is to everything else I may have seen of the kind, and I assure him it would be utterly delightful to collaborate on such a project, and Daniel Ferrer, who organized the meeting, blinks a bit in a mildly donnish way. Anyway, PM dashes off to his next meeting, and I dash off to mine, which is actually just a brief tea break with Florence outside her office in the place de l’université. We exchange gossip, and drink what passes for tea in France. And then it’s back to the sweaty metro ligne 4 and off to the Gare de l’Est, for the 1809 train to Nancy, on which I continue to work on that wretched talk. Except for the bit where the train achieves its maximum vitesse, appearing determined to shake itself (and me) into pieces in the process.

Bertrand meets me off the train, walks me to my hotel, buys me a beer, and makes sure we get to the right restaurant for dinner not too late. He’s a pal. The hotel, the Akena, is a French take on the American motel — lacking in frills, but actually offering slightly more space than the one in the Marais, with functioning free wifi and a third the price. Dinner is upstairs at the Grand Cafe Foy in the Place Stan’, of course. I go for the obligatory quiche lorraine, followed by entrecote and chips, and a truly delicious tarte à la rhubarbe. Service is very slow, but the food is worth waiting for: so maybe not nickel (can this word be applied to food?) but definitely correct. I slowly warm up to academic discourse in French again: it’s been a while.

Next morning, infeasibly early, I accompany my fellow plenary speaker, a distinguished lady called Catherine Kerbrat-Orecchioni, down to Nancy’s remarkably ugly Palais des Congres (it looks like a carpark) in good time to observe my hosts of last night doing conference-organiser panicky things for a while, get some coffee, get my very own conference bag, check my email, and give my intervention on linguistic annotation (Is it A Good Thing?) yet another final polish. And so eventually to this talk’s first outing, as opening keynote for the annual conference of the Association for French Language Studies, which seems to go down better than it has any right to. I can spend the rest of the day graciously accepting compliments on the clarity of my French and re-acquainting myself with applied corpus linguistics, a field which (had I forgotten?) seems characterised as much by some really very nice people as by any methodology. After lunch, I listen with interest to various presentations about dozens of small oral corpora, and wonder if the time is really ripe for the TEI to be adopted as an interchange format for them. The closing plenary is from a grand homme de corpus linguistique, Bernard Combettes, who extemporizes with no visible visual aids or notes in the impressive way that only French grands hommes can. At the end of the day, we form a disputatious crocodile and trek across town down to the University Library’s vast salle d’honneur in the cours Léopold, which I distinctly recognise from the time back in 2008 when the TEI annual meeting was hosted at ATILF. As on that occasion, there are speeches (largely content-free), lots of champagne, and lots of amuse-gueules. I realise after a while that I am very tired, as well as pleasantly drunk, and stagger back to le motel, missing out the son et lumiere.

Next day, being Monday, I feel I can justifiably skip some sessions in order to deal with some of the minor crises which have popped up in my email, notably a panic about a deliverable for the Agora project. I have a long and interesting discussion with Carol Etienne about the CLAPI project, which seems to have accumulated and (more usefully) makes available a lot of information about the various oral transcription formats currently in use. And I listen to another plenary, this time in English, from a Canadian called Tom Cobb, largely advertising his wonderful teaching software: he apparently runs a company called Linguasoft which (in BNC days) I had occasion to speak to rather sternly.

After lunch, I attend a session largely devoted to people talking about something called “la langue des jeunes”, which is seemingly the politically correct term for the language of the banlieues, aka banlieusart, many of the presenters taking a curiously anthropological approach to the task of collecting and reporting linguistic evidence. Jenny Cheshire’s plenary then reported some results from her ongoing English (well, Hackney) based project on innovation in “Multicultural London English”: an impressive talk for its methodological rigour and thought-provoking analysis; see further this earlier report, though that one doesn’t talk about pronominal “man”.

The evening was rounded off by a celebratory dinner in the downstairs dining room at Nancy’s gloriously art nouveau restaurant Flo, during which I spent a fair amount of time chatting with the ladies who run AFLS, partly because they seemed to like it, partly in the hope they might invite me back some time.

« Exploiter les données structurées en XML »

Here’s a nice way of spending a day in the heart of the Marais. Get together a bunch of people who do actually use the TEI (or some other kind of structured XML markup) to do cool things, and ask them to talk for a maximum of 10 minutes each about the software they use and what they do with it. I claim no credit at all for this idea: the event was masterminded by Anais Wion, Fabrice Melka, and Denise Ogilvie, who just coincidentally have to prepare a workshop on the verb “exploiter” in Aussois later this year. Whatever its origins, this turned out to be a really worthwhile day, and not just because of the venue (the alabaster hall of the Archives Nationales) or the lunch (yum, Lebanese buffet).

A proper account of the proceedings has been promised for a couple of weeks hence, so this note is just the consequence of me jotting down some immediate impressions on the train home. There is already a useful page of links to stuff mentioned at the workshop at http://www.delicious.com/workshopexploiter, which I should probably update with this report.

I kicked off by explaining why the TEI really didn’t ought to have much to do with software production, except for its own nefarious purposes. I conceded, however, that those purposes led ineluctably to the production of Sebastian’s Excellent Stylesheets, and hence to a generic software tool of some importance in the community. Marjorie Burghart then talked about the XML database eXist, showing it in action on her sermones.net site, and also her paleographic exercise site; the main problem with it, for her, was that its installation and maintenance on a local server require a little more technical expertise (for example, fine-tuning a Java environment, recovering Tomcat when it falls over, etc.) than is available to the typical humanities department. This need for infrastructural computing support turned out to be a major theme of the day. Next up was Lauranne Bertrand from the CESR team at Tours, who showed how they currently use XTF to display various versions of their richly encoded texts. Maud Ingaro then introduced us to a new XML database from the University of Konstanz called BaseX, which seems worth a second look, if only for its very sparkly visualisation features, though its main claim to fame is probably its ability to handle REALLY BIG (multi-gigabyte) databases, which (if true) should give several current pontificators pause for thought. Jorge Fins, also from CESR, then talked about Philologic, which provides traditional text searching (full text indexing, concordancing, etc.) capabilities, running on a distinct (and distinctly dumbed down) copy of the Bibliotheques Virtuelles des Humanistes exported to Chicago.

After a brief pause for coffee, Alexei Lavrentev, standing in for Serge Heiden (reportedly recently immobilised by a close encounter with a crampon), showed us the current state of txm, the open source text analysis system developed by the textometrie project at Lyon. Severine Gedzelman, also from Lyon, then described Hypermachiavel, an application for handling multiple aligned corpora (or, to be more exact, one specific set of multiple aligned corpora). I found the difference in software design between these two projects interesting: txm was developed very consciously as a generic text processing framework, incorporating and rationalising features from many other systems; whereas Hypermachiavel was developed (almost from zero) very much to meet the specific needs of a particular research project, but without any particular generic intention.

Does the world need another generic tool for doing textual annotation in XML? Certainly many linguists and computer scientists seem to think so. Cue Antoine Widlocher from the University of Caen, and Glozz, a new platform for distributed linguistic annotation of text segments, overlapping or otherwise, relationships, graphs, etc. etc. Very nice visualisations, as per other Java applications; nice features such as annotation histories; no evidence that any researchers from the humanities had been involved in its design or application up to now. Florence Clavaud, from the Ecole Nationale des Chartes, then spoke very briefly (no, really) about Pleade and her plans to enhance this mainstream EAD-muncher to include TEI capabilities. Pleade is one of the tools of choice in the French archival community, so enhancing it to handle TEI as well as it currently manages EAD and sets of digital images would be very cool. Also from ENC, Vincent Jolivet and Frederick Glorieux showed us diple, a nice simple package written in PHP which transforms complex TEI markup into static web pages, with a complementary suite of stylesheets to render them, and something called xrem, a very glamorous tool for the visualisation and construction of RELAX NG schemas. Fred likes to work directly in RELAX NG rather than via ODD, but the results almost justify such heresy. Nicole Dufournaud, aided and abetted by Denise Ogilvie, told the (possibly) instructive history of how Millefeuille (a nice customized TEI editing and indexing application based on work Nicole pioneered back in the nineties) is now in a suspended state of animation. Following one unsuccessful attempt at reanimation, it appears that another one is proposed as part of a European project. Finally before lunch, Maud Ingaro showed us some CamStudio videos about dinah: this “philological platform for the construction of multi-structured documents” is currently being developed at Lyon in a project studying the manuscripts of Jean-Toussaint Desanti, and seems worth a second look, even though it’s a long way from being stable yet.

After the afore-mentioned very nice lunch, there was a wide-ranging free-form discussion, from which I took away chiefly the following points (as aforesaid, there will be a more complete and correct report later):

  • a general feeling that IT infrastructural support was lacking: in particular, people wanted
    • some kind of sandpit environment in which they could experiment with different tools
    • some easily accessible web-publishing service for e.g. doctoral students to showcase their work
  • a general feeling that development and implementation of XML-based projects was hard work requiring input from specialists, consequently a need for more training
  • a desire to share experience of these and other tools; TEI-FR and the TEI Tools SIG were agreed to be appropriate channels for this.

Some pointed requests were made for the TGE to do more to provide some of these services, which proposal I agreed to go away and investigate.

Quel avenir pour l’édition génétique sans "digital forensics"?

Ce texte représente une intervention au séminaire général de l’ITEM qui a eu lieu à Paris le 31 janvier 2011. Remerciements à ma collègue Nadine Dardenne qui l’a relu pour en corriger les fautes d’orthographe et de syntaxe répandues dans la version originelle; je revendique cependant toute faute intellectuelle résiduelle.

Je souhaiterais vous proposer une brève présentation d’un champ d’études émergent qui se nomme “digital forensics”. Ce terme recouvre un ensemble de techniques et de théories propres aux procédures juridiques, mais probablement également d’une importance incontournable pour l’archivage et l’étude des objets nativement numériques, considérés du point de vue patrimonial. Le besoin de mettre en évidence, d’une manière crédible et certaine, les traces de mots enregistrés sur disque dur ou floppy, même supprimées, et d’associer ces traces avec un écrivain, est un enjeu qui afflige l’éditeur critique autant que l’agent de police ou les services secrets. À chaque fois on a besoin d’une connaissance des affordances des systèmes de stockage numérique, de ce qu’ils rendent possible, et de ce qu’ils cachent. À chaque fois, il est question de balancer des probabilités, de proposer une vérité vraisemblable basée sur des évidences. On pourrait rester aveugle devant ces possibilités, bien sûr. On pourrait dire que l’histoire d’un texte est réduite à l’histoire de ses incarnations multiples, sur ces feuilles de papier que nous aimons si bien. On pourrait renoncer à l’investigation de la manière par laquelle ces incarnations ont été réalisées. Mais dans ce cas il faudrait également renoncer à la majorité du discours artistique actuel, qui est né numérique, vit et évolue dans le numérique, et meurt dans les archives numérisées de M. Google. Car les objets d’étude des humanités et sciences sociales sont de plus en plus conçus et stockés sous forme numérique; il est donc indispensable de revoir et de transformer l’outillage avec lequel on espère les archiver et les analyser. L’ordinateur de l’auteur, ses disques, son téléphone portable, ses espaces virtuels sur le réseau internet, remplacent ses cahiers, ses brouillons, et ses manuscrits. Il faut ré-équiper le chercheur avec une compréhension des principes de l’enregistrement numérique, pour compléter sa compréhension des principes de l’écriture analogique. Le choix est simple: ou bien il faut redéfinir la diplomatique pour le numérique, ou bien il faut renoncer à l’étude de la genèse textuelle des oeuvres modernes.

Comment constituer cette redéfinition? Je propose un réajustement à deux niveaux: intellectuel, et substantif.

Au niveau intellectuel d’abord, il faut affecter une bonne compréhension de l’informatique aux disciplines des SHS. En dépit de deux décennies (au moins) de “humanities computing”, à présent relabelisé comme “digital humanities”, il reste une étonnante ignorance autour de l’ordinateur et de ses capacités à faire (ou à ne pas faire). En partie, c’est une des conséquences de l’émergence de l’informatique grand public, comme phénomène de marché de masse. Des impératifs commerciaux restreignent l’usage de l’ordinateur à des plateformes spécifiques, et transforment ce moteur universel en un jouet uni-fonctionnel. Ce n’est guère surprenant alors d’entendre les gens affirmer que cette technologie réductive pervertit l’intelligence humaine en la transformant en une disposition de bits. Ou, à l’extrême opposé, d’y voir l’éternel attrait du divin se manifestant cette fois dans la tendance à vouloir attribuer une intelligence consciente aux effets d’échelle (par exemple, le crowd sourcing, les réseaux neuronaux, le data mining…). Peut-être y en a-t-il parmi nous qui ont besoin de recalibrer le cadre de leur esprit pour supporter l’ère de l’information, juste comme nos ancêtres ont dû s’ajuster à l’ère de la vapeur… mais un tel ajustement consisterait en une extension de nos perceptions, en aucun cas en une transformation. Dans la langue française, un ordinateur a pour objectif de mettre de l’ordre dans les choses; le mot “ordinateur” porte même des nuances religieuses en rappelant par exemple l’ordination des prêtres. Dans les langues anglo-saxonnes par contre, un “computer” n’est qu’une machine pour calculer. Mais les objets auxquels l’ordinateur apporte un ordre ne sont pas que les chiffres: il est la machine par excellence pour organiser n’importe quelle espèce de signe, pour le ré-encodage des systèmes sémiotiques de toute sorte. Voilà pourquoi j’ai toujours insisté pour que l’informatique soit considérée comme une branche des sciences humaines, plutôt que de l’ingénierie ou de la mathématique.

Au niveau matériel, je propose un élargissement des connaissances attendues de ceux qui veulent faire des études philologiques. On attend de tels gens une compréhension assez intime des technologies typographiques ou paléographiques. Il est maintenant urgent d’élargir ces compétences vers le numérique.

Je termine avec quelques mots sur quelques éléments de ce qu’il faut faire apprendre aux futurs généticiens. Quand j’écris un document sur mon ordinateur, le texte apparaît et disparaît sur l’écran, sous le contrôle d’un logiciel avec lequel j’interagis à travers mon clavier. Les traces propres à mon texte sont de deux sortes: des lettres, et ce que l’on pourrait nommer des “méta-lettres”, c’est-à-dire des codes qui déterminent la façon d’afficher ou de traiter les lettres. (Un autre terme possible serait “markup” ou “balisage”.) Ma conscience de ces méta-lettres est variable: quelques-unes (la ponctuation par exemple) me semblent être un composant de ce système sémiotique que l’on appelle la langue naturelle; d’autres (les retours de chariot, les indications de rature, etc.) me semblent moins visibles, et j’attends que la machine s’en occupe seule. De la même façon, les codes insérés par le logiciel de traitement de texte pour générer des effets spéciaux tels que les changements de police ou de couleur appartiennent, de mon point de vue, à un niveau sémiotique tout à fait différent. Cependant, mon texte est composé de signes appartenant à ces trois niveaux. Le texte numérique que j’ai ainsi composé commence son existence physique comme des changements d’état dans la partie dynamique de la mémoire de mon ordinateur; très rapidement ces changements sont transférés et enregistrés dans un format plus permanent quelque part sur mon disque dur, ou dans une autre mémoire. D’habitude ceci s’effectue automatiquement par l’infrastructure informatique, l’OS: à noter que c’est fait sans aucune intervention de ma part. Même au moment où je me décide consciemment à enregistrer l’état courant de mon texte, bien que je pense savoir où je le mets (dans un fichier nommé, sur un médium spécifique), la manière dont sont organisés à cet emplacement les composants de mon texte — par exemple, les adresses des secteurs concernés, leurs tailles, la disposition des caractères et autres signes dans ces secteurs — est entièrement hors de mon contrôle et de ma connaissance.

Quand j’écris un document sur papier, le texte apparaît, mais ne disparaît que rarement. Je dois utiliser un ensemble assez complexe de “meta-markup” pour indiquer que tel ou tel signe n’existe plus dans mon texte, qu’il a été remplacé par un autre, etc. Le système sémiotique auquel appartient ce markup sera entièrement le mien (exception faite des signes de correction imposés par une maison d’édition). Plus significativement, chacun de mes bouts d’écriture a sa propre existence physique, qu’il m’est impossible d’ignorer, surtout si j’ai un bureau petit ou déjà bien rempli… Par conséquent, il me faut trouver rapidement des stratégies de stockage (ou de recyclage), qui vont déterminer les possibilités de récupérer à l’avenir mes procédures d’écriture. Ces stratégies seront déterminées, bien naturellement, par ce qui me paraît utile, ou ce qui semble approprié dans le contexte institutionnel dans lequel mon écriture prend place. Elles représentent des jugements de valeur considérés justes dans ces contextes, et c’est pour cela qu’on dit que l’histoire est toujours écrite par les gagnants, et que les archives de n’importe quelle société ont tendance à ne contenir que ce qui est valorisé par cette société. Avec l’arrivée des médias numériques, pourtant, les affordances de nos systèmes de stockage se sont transformées d’une manière fondamentale. En dépit des efforts des artistes modernistes, on ne peut lire un bout de papier que d’une seule manière. Mais l’organisation des fragments d’écriture sur un médium numérique de stockage est indépendante de leur écriture; elle peut être lue de plusieurs façons. Les séquences de bits constitutives de ce document peuvent être lues (comme je le suppose assez naïvement) à travers le système de gestion des fichiers sur mon laptop. Mais ce dernier n’est qu’une espèce d’index, comprenant un ensemble de pointeurs sur des segments de stockage éparpillés sur mon disque dur. Ou bien, dans le cas où on récupère mon texte à travers un logiciel plus complexe comme un blog sur le réseau, les traces de mon texte sont hébergées dans une base de données en Californie, sur une machine que j’ignore totalement. Mais il demeure possible de récupérer ces mêmes séquences de bits en adressant n’importe quel système de stockage d’une autre manière, tout à fait différemment du système d’accès prévu, que cela soit le système de fichiers sur mon laptop ou le blog, qui (je croyais) représenterait la seule structuration correcte de mon texte. Au contraire. Pour le texte numérique, la structuration est contingente, protéenne.

Ces morceaux écrits, comme je l’ai déjà souligné, peuvent ne contenir que des matériaux raturés, ou des signes qui ne servent qu’à indiquer la manière dont d’autres signes devraient ou pourraient être affichés ou intégrés dans un texte visible. D’où des problèmes pour l’archiviste, et un défi supplémentaire pour la critique textuelle. En acceptant une boîte de papiers comme dépôt, l’archiviste peut raisonnablement supposer que les parties savent exactement ce qu’elles sont en train d’offrir. Mais, quand l’archiviste accepte en dépôt un disque dur, peut-on envisager que les déposants sachent quelles traces d’activités sur l’internet ou quels fichiers supprimés restent encore à découvrir à l’intérieur, au-delà des matériaux proposés et visibles? Un récent rapport américain du Council on Library and Information Resources s’est interrogé sur ce problème, justement perçu comme un vrai défi pour l’éthique professionnelle, qui nécessite une mise à jour des standards de contrats de dépôt. Mais je demande aux critiques textuels ici présents — si vous pouviez accéder à l’histoire de browsing sur internet de, disons, Joyce ou Flaubert, hésiteriez-vous à y aller, par crainte de la violation de la loi sur la vie privée? Peut-être moins chimériquement, si vous pouviez récupérer chaque étape de l’écriture d’une oeuvre de l’importance des Satanic Verses de Rushdie (ce qui sera en effet le cas) — chaque rature, chaque ajout, chaque déplacement de mot — de quels outils auriez-vous besoin pour gérer une telle richesse? Les outils et les méthodes élaborés jusqu’à présent sont tous à la mesure de ce que nous pouvons comprendre: c’est l’abondance de ces informations dans le monde numérique qui nécessite de repenser ces outils et ces méthodes.

Je termine en soulignant encore que le texte numérique est une construction, pas seulement au sens qu’il est composé de plusieurs séquences fragmentaires de bits, mais aussi au sens que ces séquences portent de l’information à plusieurs niveaux. Les mots seuls ne suffisent pas: les documents numériques contiennent inévitablement un balisage, dont une grande partie est (selon le terme du philosophe anglais J. L. Austin, repris notamment par Allen Renear) performative — il détermine la nature du texte. D’où l’importance pour le critique textuel numérique de comprendre le balisage et les technologies qui y sont associées. Mais vous vous attendiez probablement à ce que je vous dise cela…

Does genetic criticism have a future without digital forensics?

This is the text of a presentation I gave at the ITEM’s general symposium on the future of genetic editing, held in Paris on 31 January 2011. I started writing it in French, switched to English for speed, translated it all into French (with the invaluable assistance of my colleague Nadine Dardenne), and then re-Englished it for this version.

I’d like to introduce you to an emerging field called “digital forensics”. This term covers a set of techniques and theories originating in the domain of criminal justice, but also of major importance for the archiving and study of born-digital objects considered from a cultural heritage perspective. The need to plausibly identify traces of words recorded on hard or floppy disk, and to reliably associate them with a specific writer, even after their deletion, is a goal which torments the textual critic as much as the police officer or secret service agent. In both cases, a knowledge of the affordances of digital storage systems is needed, to know what they make possible and what they conceal. In both cases, there is a need to balance probabilities when seeking to establish plausible evidence-based conclusions. Ignoring these possibilities is also an option, of course. We could consider the history of a text to be no more than the history of its various embodiments on those sheets of paper we like so well. We could abandon any attempt to investigate the means by which those embodiments have been achieved. But in that case, we have to give up on the majority of current artistic discourse, which is born digital, lives and evolves digitally, and dies in the digital archives of Mr Google. The objects studied in the human and social sciences are increasingly conceived and stored only in digital form; that is why it is essential to rethink and transform the toolkit we use to archive and analyse them. The author’s computer and its disks, their portable telephone, and the virtual spaces they use on the Internet are taking over from their notebooks, their drafts and their manuscripts. We must re-equip the researcher with an understanding of the principles of digital storage to complement their understanding of analog writing. The choice is simple: either redefine diplomatic studies to include the digital world, or abandon any attempt to study the textual genesis of modern works.

What are the components of this redefinition? I propose a readjustment at two levels: the intellectual, and the substantive.

At the intellectual level first, we need to re-appropriate a proper understanding of information studies within the humanities disciplines. Despite more than two decades of “humanities computing”, now rebranded as “digital humanities”, there is still an astonishing amount of ignorance about what the computer can and cannot do. Partly this is one of the results of the emergence of computing as a mass-market phenomenon. Commercial imperatives restrict usage of the infinitely plastic computer to certain platforms, transforming a universal engine into a mono-functional toy. Unsurprisingly, therefore, we still hear people assert that this reductive technology perverts human intelligence by reducing it to transient patterns of bits. Or, at the other extreme, we still see evidence of the eternal desire for the divine, now appearing as a tendency to attribute conscious intelligence to effects of scale (for example crowd sourcing, neural nets, data mining…). Maybe some of us need to adjust our mental framework to deal with the information age, just as our ancestors adjusted theirs to deal with the steam age, but such an adjustment is a matter of expanding our perceptions, not transforming them. In the French language, a computer is something which puts things in order: the word ordinateur even has religious overtones, suggesting “ordination” and consecration. In the English and German languages, it is just a machine that “computes”.
But the things that a computer puts in order are not just numbers: it is above all a machine for organizing any kind of sign, for re-encoding semiotic systems of all kinds. This is why I have always maintained that computer science is more a branch of the humanities than it is of engineering or mathematics. At the material level, I propose an extension of the knowledge expected of those undertaking philological study. Such people are expected to acquire a detailed understanding of typographic or paleographic technologies. There is an urgent need to expand those skills to embrace the digital medium.

I conclude with a brief discussion of a few components of the understanding that future genetic editors will need to acquire. When I write a text on my laptop, the text appears and disappears on the screen under the control of some piece of software with which I am interacting via a keyboard. The traces which constitute my text are of two kinds: letters, and what we may call meta-letters, codes which determine how the text should be displayed or processed in some way. (Another word we might use is markup.) I may or may not be aware of all of these — some, the punctuation for example, are almost a part of the semiotic system I call “natural language”, so I am very aware of them; others — the carriage returns, deletion characters, etc. — seem less salient, and I expect the machine to deal with them. In the same way, the codes my word processor inserts to produce special effects such as changes of font or colour seem to belong to some other semiotic level entirely. But signs at all three of these levels are what constitute my text. The digital text I create starts its physical existence as detectable changes of state in the dynamic part of my computer’s memory, but very rapidly is transferred to a more permanent form, somewhere on my hard disk, or on some other store. Usually this will be done automatically by the software environment: critically, it will happen without any knowledge or intervention on my part. Even when I do deliberately request that the current state of my text should be stored away, although I may think I know where I am putting it (in a file with such a name, on a specified physical medium), the way in which the components of my text are organized at that location — the order and number of blocks of characters and other signs represented — is entirely beyond my control or knowledge.

When I write a text on a piece of paper, signs appear, but rarely disappear. I have to deploy quite a complex range of meta-markup to indicate that some sign is no longer significant or has been superseded by another, but the semiotic system to which that meta-markup belongs is entirely my own (unless forced on me by a publisher in the shape of proofreading marks, of course). More significantly, each of my scraps of writing has a physical existence which forces itself on my attention, especially if my desk is small, or my office already crowded. Consequently, I will rapidly adopt recycling or storage strategies, which effectively determine the future re-traceability of my writing processes. Those strategies are naturally determined by what is useful or perceived as appropriate by myself or by the institutional context in which my writing takes place. They represent value judgments deemed appropriate within that context, and that is why (as they say) history is written by the victors, and why the archives of every society represent and maintain what that society values.
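(A concrete aside of my own, not part of the original talk: one quick way to see how many meta-letters surround the actual letters is to look inside an ordinary word-processor file. A .docx file, for instance, is just a zip archive wrapping XML, so two perfectly standard commands are enough to peek at the markup; the file name here is invented for illustration.)

$ unzip -l draft.docx                                    # list the parts of the package
$ unzip -p draft.docx word/document.xml | head -c 400    # show the first few hundred bytes of markup wrapped around the text

Even a one-sentence document turns out to be mostly markup rather than words.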
With the advent of the digital medium, however, the affordances of our storage systems change fundamentally. Despite the best efforts of modernist artists, you can only read a written scrap of paper in one way. But the organization of written fragments on a digital storage medium is independent of their writing, and they can thus be read in many ways. The blocks of storage constituting this text may be read, as I naively think they should be, via the file system on my laptop, which contains a number of pointers indicating more or less contiguous segments of storage scattered across my hard disk. They might be recovered via a more complex piece of software such as a networked blog, which stores my text as records on some database system in California. But it is also possible to recover the same written fragments by addressing those storage systems in an entirely different way, by-passing the intermediate access systems (the file system, the blog) which represent the “organization” of my text. In the digital text, organization is contingent and protean.

Those written fragments, as noted above, may actually contain nothing but material that has been deleted, or signs that serve only to indicate how other signs should be, or might be, displayed or integrated into a visible text. The first case poses problems for the archivist, as well as a challenge for the textual critic. When accepting a box of papers for deposit, the archivist can reasonably assume that both parties know exactly what is being handed over. But when the archivist accepts a hard disk for deposit, is it equally likely that the depositor will know what traces of internet activity or deleted files may remain to be recovered from it, in addition to the intended and apparent materials? A recent American report from the Council on Library and Information Resources agonizes considerably over this problem, which it rightly perceives as a challenge to the maintenance of professional ethics, necessitating a reappraisal of such deposit agreements. But I ask the textual critics here present — if you could have access to (say) Joyce’s or Flaubert’s web browsing history, would you hesitate to examine it on the grounds of a breach of confidence? Less fancifully, if you could (as you will soon be able to) recover every stage of the writing of a great work such as Rushdie’s Satanic Verses, every deletion, insertion, and movement of every word, what tools would you need to make sense of that richness? The tools and methods elaborated so far are all in proportion to what we know how to handle; it is the very abundance of information available to the textual critic in the digital world that necessitates a rethinking of those tools and methods.

I close by underlining again the fact that the digital text is a construction, not only in the sense that it is composed of fragmentary byte sequences, but also in the sense that those byte sequences contain information at many levels. The words alone are not enough: digital documents inevitably contain markup, much of which is (in a term Allen Renear borrows from the English philosopher J. L. Austin) performative — it determines what the text is. Hence the importance of a proper understanding of markup, and of markup technologies, to the digital textual critic. But you probably expected me to say that.
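(Again an aside of my own rather than part of the talk: the low-level reading described above needs nothing more exotic than the oldest of Unix tools. The device name, file name, and search string below are invented for illustration; real forensic practice would add write-blocking, hashing, and proper carving tools, but the principle is the same.)

$ dd if=/dev/sdb of=stick.img bs=4M           # take a raw image of a USB stick, ignoring its file system
$ strings -n 10 stick.img | grep -i satanic   # trawl the raw image for legible fragments, deleted or not

Whatever the file system thinks it is offering, the bytes underneath can always be read another way.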