
An experiment in CLS

Some time ago, I agreed to participate along with several others much smarter than me in COST Action Work Group 3. The goals of this work group were, amongst other things, to run a small experiment in counting verb frequencies on ELTeC texts enhanced with POS and lemma information. It took a surprisingly long time to find out exactly what contribution was required of me, and I make no claim to have got it right even now. But here’s what I thought I was doing.

First, I wrote an insultingly simple XSL stylesheet to produce a list, in descending frequency order, of verbal lemmas in each of the (now) 10 ELTeC level 2 corpora. For example, here’s the start of the file rom/verbFreq.xml:

<frequencies>
 <lemma form="face" freq="30919"/>
 <lemma form="avea" freq="29391"/>
 <lemma form="zice" freq="22673"/>
 <!-- ... and so on for several hundred more lines -->
</frequencies>

… which tells us that in our data Romanian’s favourite verb has the lemma face, and the next favourite is avea. The code for doing this (like all the rest of the code described here) is in the github repo COST-ELteC/ELTeC-data/Scripts if you care: it’s imaginatively called verbFreqs.xsl.
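For the curious, the gist of it looks something like this in Python rather than XSLT; a minimal sketch only, assuming that level 2 tokens are <w> elements in the TEI namespace carrying lemma and pos attributes, with verbs tagged VERB (the real verbFreqs.xsl is the authority here).

# countVerbLemmas.py : a rough Python stand-in for verbFreqs.xsl, not the real thing.
# Assumptions: tokens are <w> elements with @lemma and @pos, verbs carry the tag "VERB".
import sys
from collections import Counter
from pathlib import Path
from lxml import etree

TEI = "{http://www.tei-c.org/ns/1.0}"

def verb_lemmas(corpus_dir):
    counts = Counter()
    for f in sorted(Path(corpus_dir).glob("*.xml")):
        for w in etree.parse(str(f)).iter(TEI + "w"):
            if w.get("pos") == "VERB" and w.get("lemma"):
                counts[w.get("lemma")] += 1
    return counts

if __name__ == "__main__":
    for lemma, freq in verb_lemmas(sys.argv[1]).most_common():
        print(f'<lemma form="{lemma}" freq="{freq}"/>')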

Next, I wrote another simple-minded script to extract from each novel a bag of words, with no markup or punctuation: just all the verbs, for example, or all the nouns, in their order of appearance in the text. So that celebrated work Hard Times, which begins in the original like this

<div type="group">
 <head>BOOK THE FIRST <hi>SOWING</hi> </head>
 <div type="chapter">
  <head>CHAPTER I
     THE ONE THING NEEDFUL</head>
  <p>‘<hi>Now</hi>, what I want is, Facts.  Teach these boys and girls nothing but Facts.  Facts alone are wanted in life.  Plant nothing else, and root out everything else.  You can only form the minds of reasoning animals upon Facts: nothing else will ever be of any service to them.</p>
  <!-- ... -->
 </div>
 <!-- ... -->
</div>

generates a bag of words starting like this

want be teach be want|wanted plant root form be…    

if I ask for VERB lemmas, or like this

book sowing|sow chapter thing fact boy girl fact fact life mind reasoning|reason animal fact    service 

if I ask for NOUN lemmas. You may wish to complain about the behaviour of the lemmatizer here, but I am taking the path of least resistance and using whatever treetagger (in this case) produces without cavil. This deplorable laziness returns to bite me further below…

I wrote some Python to run the XSLT script filter.xsl which does this task: the script is called filter.py, and it uses a Python interface to the Saxon C processor, which I was very pleased with myself about when I got it working (less so later; see below). There’s more mundane detail of how to run it in the README in the Scripts folder.
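For flavour, driving an XSLT like filter.xsl from Python goes roughly like this. This is a minimal sketch, not the actual filter.py: it assumes the saxonche package (the current packaging of Saxon C’s Python bindings, whose API has shifted a little between releases), the file names are illustrative, and the "pos" stylesheet parameter is hypothetical.

# runFilter.py : a sketch of invoking an XSLT transformation via the Saxon C Python bindings.
from saxonche import PySaxonProcessor

def run_filter(source, stylesheet, output, pos="VERB"):
    with PySaxonProcessor(license=False) as proc:
        xslt = proc.new_xslt30_processor()
        executable = xslt.compile_stylesheet(stylesheet_file=stylesheet)
        # hypothetical stylesheet parameter choosing which lemmas to keep
        executable.set_parameter("pos", proc.make_string_value(pos))
        executable.transform_to_file(source_file=source, output_file=output)

if __name__ == "__main__":
    run_filter("FRA00101.xml", "filter.xsl", "FRA00101-verbs.txt")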

If still awake, you are probably wondering what the point of all this was. And here comes the scientific bit. The little workgroup I had signed up for wished to test a Hypothesis, which (if I understand it correctly) might be crudely summarized thusly:

  • The European novel undergoes some sort of seismic shift around the turn of the twentieth century, which is popularly known as The Rise of Modernism
  • Modernism has many stylistic correlatives, but they include notably a focus on the interior life of characters, on sensation and feeling, rather than on objective omniscient narrative
  • If this is true, we should expect to see a change in the frequency with which verbs associated with that ‘inner life’ appear over time.

I hope you can see where we are going with this, now. All we need is a reasonably plausible list of verbs which express aspects of ‘inner life’. And so, for the next few months, with zoom and email and similar modern contrivances, the group theorized how to actually produce such a list. I may have fallen asleep during the process and missed something critical, but eventually (I think) it was decided that we would explore two approaches to identifying our list. Firstly, we’d ask language experts to vote for their top ten “inner” verbs. Secondly, we’d use a statistical procedure (word vector embedding) to identify a list of candidate verbs automagically. Then we’d compare the results, declare victory, and move on.

What could possibly go wrong? Well, at least two things.

Firstly, the ask-an-expert approach turned out to be less successful than it might have been, largely for purely logistical reasons. If we had asked the experts simply to review the existing verb frequency lists for their language and identify in them those verbs which were indubitably and always betokeners of interiority, plus any others which were a bit thus inclined sometimes, then we might have got our results a bit faster. But we didn’t, and the experts, understandably a bit mystified by the whole process, gave us lists which varied widely in their format and scope. So I found myself having to tweak and readjust their contributions, to remove duplicates and ambiguity. As for the automagical procedure, it proved a little challenging for most participants to run, if only because it required access to a machine capable of running Google’s word2vec program which is not meant for your average laptop. In any case, you can see the resulting word lists in the file innerVerbs.xml which I hope is fairly self-explanatory.
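To give an idea of the automagical half, the general shape of the procedure is something like the sketch below, with gensim’s word2vec implementation standing in for Google’s original C program; this is not the group’s actual pipeline, it assumes one whitespace-separated bag-of-verb-lemmas file per novel, and the directory name and seed verbs are invented for the example.

# innerVerbCandidates.py : find verbs distributionally close to some seed 'inner' verbs.
from pathlib import Path
from gensim.models import Word2Vec

def train(bag_dir):
    sentences = [p.read_text(encoding="utf-8").split()
                 for p in Path(bag_dir).glob("*.txt")]
    return Word2Vec(sentences, vector_size=100, window=5, min_count=10, workers=4)

if __name__ == "__main__":
    model = train("eng/bags")                            # hypothetical directory of bag-of-words files
    seeds = ["feel", "think", "believe", "remember"]     # invented stand-ins for expert-nominated verbs
    for lemma, score in model.wv.most_similar(positive=seeds, topn=20):
        print(f"{lemma}\t{score:.3f}")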

Secondly, my simplistic notion of ‘lemma’ turned out to be problematic. As you noticed above, when unable to choose between two alternatives, treetagger obligingly gives you both of them, separated by a vertical bar. That’s no problem for me: I just discard the alternative. But other lemmatizers behave differently. For example, in our Portuguese data, the lemmas for reflexive verbs are suffixed by a # and an indication of person. In our Hungarian data, spelling variations of the same basic lemma are sometimes presented as different lemmas. In the first case, should I simply ignore the part of the lemma after the #? In the second, should I aggregate all the differently spelled variants and consider matches for any of them as equivalent? As usual in computational linguistics, it all depends what you think you’re counting…
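In practice the comparison needs a clean-up step of roughly this kind, sketched here rather than quoted from the real scripts; the '|' and '#' conventions are the ones just described, and everything else (including the Portuguese example) is a per-language judgement call.

# normaliseLemma.py : reduce what-the-lemmatizer-said to something comparable across corpora.
def normalise(lemma, language="eng"):
    lemma = lemma.split("|")[0]       # keep only the first of treetagger's alternatives
    if language == "por":
        lemma = lemma.split("#")[0]   # drop the '#' plus person marker on reflexive verbs
    return lemma.lower()

assert normalise("want|wanted") == "want"
assert normalise("lembrar#3", "por") == "lembrar"       # hypothetical Portuguese example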

Despite these metalinguistic anxieties, I wrote a (needlessly complicated) Python script called verbCount.py to count the frequencies of the inner verbs through time, comparing the things-called-lemmas in our various lists of inner verbs with the things-identified-as-lemmas in the level2-encoded files. Invoking various XSLT scripts and Saxon C as before, this script grudgingly churned out a file for each corpus under examination, with a row for each text and a column for each inner verb, like this:

   extId     year  verbs  innerVerbs  aimer  connaître  croire  entendre  regarder  savoir  sembler  trouver  voir  vouloir
   FRA00101  1860  3889   310         17     9          28      22        18        52      5        47       83    29
   FRA00102  1883  5499   465         112    21         38      16        17        55      32       30       77    67
   FRA00201  1910  7577   682         26     20         41      75        96        63      49       93       128   91

I say ‘grudgingly’ because the script was obliged to process the whole of every file in order to extract a year of publication from its TEI header, and consequently ran with noticeable slowness. If I’d thought to include the year of publication along with other metadata in the filename of the “bag of words” I could have used that instead, which would have been much quicker. Maybe if I get a better set of inner life verbs I’ll revise the scripts to do so.
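For what it’s worth, the header-only part of the job can be done fairly cheaply by abandoning the parse as soon as the TEI header has been read; the sketch below assumes the publication date appears as a <date> somewhere inside <teiHeader>, which is only an approximation of what the ELTeC header actually provides.

# yearFromHeader.py : pull a publication year out of a TEI file without reading the whole novel.
from lxml import etree

TEI = "{http://www.tei-c.org/ns/1.0}"

def publication_year(path):
    # iterparse is lazy, so returning here stops the parse after the header
    for _, header in etree.iterparse(path, events=("end",), tag=TEI + "teiHeader"):
        date = header.find(".//" + TEI + "date")
        if date is not None:
            return date.get("when") or date.text
        return None
    return None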

Anyway, we now have a bunch of CSV files. And why? Because my colleague Diana has produced some R scripts which will plot this data set so everyone can understand it. Or at least look at it. Here’s what we get for some of the Portuguese data:

[Figure: innerVerbs.png, inner-verb frequencies over time for part of the Portuguese corpus]
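(Those allergic to R could get a broadly similar picture with a few lines of Python; the sketch below is not Diana’s script, and the file name and column layout are assumed from the sample table above.)

# plotInnerVerbs.py : plot the share of inner verbs over time from one of the CSV files.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("por/innerVerbCounts.csv")           # hypothetical output of verbCount.py
df["ratio"] = df["innerVerbs"] / df["verbs"]          # inner verbs as a share of all verbs
df.groupby("year")["ratio"].mean().plot(marker="o")
plt.ylabel("inner verbs / all verbs")
plt.title("Relative frequency of 'inner life' verbs over time")
plt.savefig("innerVerbs.png")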

I leave it to the statistically-informed to interpret this and other similar results. The closing conference of the COST Action, taking place next week, includes a paper (on which I am somewhat embarrassingly cited as co-author) presenting the results in more detail.

Interoperability of TEI projects : apotheosis or chimera?

This was the title (it sounds better in French) of the closing talk I gave at an interesting workshop last week. A previous COST-funded meeting in Krakow had brought together Czech, French, Catalan, German, and Polish teams working on several different dictionaries of medieval Latin to elaborate the idea that maybe they could make their various lexica interoperable if only they could agree on a common format, for which the TEI seemed the most plausible candidate. Susanna Allés, the energetic organizer of this workshop, got funding for it from several sources, notably the ALLC (or European Association for Digital Humanities as it now prefers to be known). She also seems to have hit on the wheeze of inviting a number of luminaries to make the case for the TEI dictionary tagset (notably L. Romary, F. Glorieux, P. Banski); alas, in the event only I turned out to be available. Which was useful for me, since preparing for the workshop meant rediscovering all sorts of dusty and neglected (by me, though not by others) parts of the Guidelines.
The workshop was held in the CSIC (Spanish for CNRS) Institucio Milà i Fontanals, which occupies a rather grand building conveniently located in the Raval, a picturesque if slightly seedy district of Barcelona to the left of the Ramblas, and we were all accommodated next door in the splendidly-named Investigators’ Residence on Calle Hopital. Barcelona is not a place for those uninterested in food and drink; and we were very well fed, in large quantities, if at strange hours and (on one occasion) after a lengthy walk up town through an unexpected tropical-style deluge. The ravioli stuffed with pears and cheese offered by the resto “En Ville” for lunch was particularly memorable, invidious though it is to single out this one occasion.

More intellectual fare was also on offer, of course. It is always a pleasure to arrive amongst a group of specialists personally unknown to me, from a domain of which I am more or less totally ignorant, and to find that word of the TEI has already reached them, and often in a far from superficial way. So I was made very happy indeed to hear Sabine Thuillier (currently working in Madrid on the Diccionario Griego-Espanol but ‘formed’ as they say at the Ecole Nationale des Chartes) evangelise for the TEI as an international open source community, and impressed by the way she is implementing it in a workflow which, though its editors remain obstinately based on WordPerfect, remains determined to produce a respectable TEI P5 version.
Similarly, the team responsible for the eLexicon Mediae Latinitatis Polonorum led by Krzysztof Nowak from Krakow, while maintaining a proper scepticism about some aspects of the TEI’s conceptual model, was clearly persuaded of its virtues as an open standard, notably as evidenced by both the amount of open source software (they mentioned XTF, PhiloLogic, and TXM) and the number of comparable projects (they mentioned the Anglo-Norman Dictionary, the Glossarium Du Cange, and several others) using TEI. Their workflow starts with an OCR phase, since they are starting from an extensive library of source texts, and then uses LibreOffice and a customised library of styles to enhance it to the point where it can be automatically converted to TEI, thus (apparently independently) following the same path as is used by Lodel, OxGarage, Agora, and no doubt others, to combine the user friendliness of a word-processing interface with the rigour of a TEI-structured maintenance format.

Catalonia has ambitions (as posters everywhere proclaim) to become politically independent of Spain, and certainly its linguistic independence is a well-established fact. As a confirmed non-speaker of either Spanish or Catalan (nor of Basque, Galician, or Portuguese for that matter) I regretfully let the interventions in those languages wash over me, and thus missed out, notably, on Jose Manuel de Bustamente’s insights on the relation between textual corpus and dictionary. I did however manage to understand the German colleagues present, since they made the effort to speak in English or French: for example, Alexandra Gorbrecht from the Trier Centre for Digital Humanities gave a brief overview of the dozen or so dictionaries put together online at woerterbuchnetz.de with a well-designed query interface. Allegedly all of these dictionaries are locally stored in TEI XML, but as this is not currently exposed one cannot tell how consistently it has been done. None of the other major TEI dictionary projects in Germany I am aware of was represented here, presumably because none of them is specifically concerned with Mediaeval Latin. I had to console myself for the absence of Werner Wegstein from Wurzburg by stealing one of his examples for my own talk.
Bruno Bon and Renaud Alexandre from the IRHT in Paris had the advantage, if advantage it be, of being able to develop their proposals for an over-arching Novum Glossarium Mediae Latinitatis on the basis of the already existing complete Glossarium of Du Cange, which has been freely available in TEI-inspired XML markup for some time now, thanks to the work of Frédéric Glorieux. The idea seems to be to develop a set of proposals able to express the (not inconsiderable) variation in practice amongst these and others working on different lexica of medieval Latin in Europe, and thus create what (inevitably) Bruno suggested would be called NGML (the Novum Glossarium Markup Language). As a first step they have set up an exploratory multilingual wiki with some nice visualisation tools, based on a few sample entries taken from each of the five different lexical projects (specifically, in Barcelona, Prague, Krakow, Munich and Paris), and are inviting more. This could be fun, though I think expressing NGML as a real TEI ODD would be more of a challenge.

Susanna Allés and her graduate student Frédérique Laugrost (on secondment from the Ecole Nationale des Chartes) talked about the specific problems they faced when starting to apply the TEI to the text of their dictionary: the Glossarium Mediae Latinitatis Cataloniae. Many of these are familiar, of course: notably those which derive directly from the wish to preserve the punctuation and use of abbreviation which characterize such sources and at the same time model the logical structure which they determine. Some of these problems do however point to aspects of the current TEI dictionary model which could be improved.

I started making a short list of such points during the workshop, but sadly did not get very far:

  • too many of the proposed TEI dictionary elements relate only to modern lexicographic practice. Deciding which ones to filter out to make a kind of TEI Lite for dictionaries would be very desirable.
  • an element for “translated segment” is desired, even if it is just syntactic sugar for an existing element carrying an xml:lang value other than that of the surrounding text
  • some dictionaries have entries which are large enough to have multiple paragraphs but there is no place for <p> in any model.entryLike element
  • when a term is identical in two or more languages, can xml:lang take more than one value (I confidently said it could, but I think I am wrong)
  • how should you mark a word which is clearly readable in the text when its meaning is entirely uncertain? (I suggested <orig>, but there must be better ideas)
  • The typology currently used for <form> combines categories from entirely dissimilar taxonomies, e.g. @type=lemma is an entirely different kind of thing from @type=compound. Likewise, the typology one might want to use for senses should have more to do with the way the sense has evolved. To both these points I said (in my best French) “Bof”. Or, more precisely: it is only by receiving proper input from specialists in the field — those best able to define more appropriate typologies — that the TEI progresses…

I’m hoping that the undoubted success of this workshop will encourage the participants to form a SIG on the subject or (as Piotr Banski had previously suggested to several of them) to make an active contribution to the existing LingSig. Plenty of scope for very interesting work to come, not to mention the opportunity of returning to Barcelona for the paella which I somehow failed to find time for on this occasion.

TEI++ : une formation avancee

This summer I was invited by the CAHIER consortium, in partnership with the Corpus Ecrits and IRCOM consortia, to organise a four-day workshop on “advanced TEI”. I proposed a mode of operation dividing each topic into two parts (presentation, then hands-on practice), organised along three axes:

  1. modelling the resources and selecting the significant features
  2. encoding the modelled structures and making them explicit in TEI
  3. exploiting and analysing the structured resources

I had also proposed sharing the teaching with a few French experts. The course took place at the Institut de Linguistique Française in Paris from 19 to 22 November 2012.

Here, in summary, is what actually happened …

Day 1

Proceedings began on the fifth floor of the ILF, in a nice light room not quite big enough for the 18 or so participants. This was not altogether bad, since the consequent huddling together encouraged emergence of a cohesive group identity, which I also tried to encourage by getting the participants to place themselves on an improvised graph along two intersecting axes: literature vs linguistics, and researchers vs support staff. Most people turned out to be in the bottom right quadrant, i.e. ingénieur + linguistique, but there was also a smattering in the littéraire + scientifique box, to say nothing of two sociologues who insisted on positioning themselves in the middle of the littéraire vs linguistique axis. The rest of this first introductory session was given over to a rapid review of some fundamentals of encoding, and a sampling of the websites of half a dozen real TEI projects around the world, which might have gone better had I rehearsed it more, but got the message across that quite a few very different projects were doing seriously cool stuff with the TEI.

After coffee, I introduced them very quickly to a spot of data analysis, using as vehicle the celebrated postcard archive of M. Marcel Virgolos, and Lauranne then took over for a refresher on using Oxygen. They marked up a postcard or two, and reviewed commonly used TEI tags for each of some pre-selected texts: a French novel, a poem, and a play. Most students completed all of these, mastering most of the key features of the Oxygen XML Editor quite rapidly, but I think we did not allow enough time for this session, given the mix of abilities present.

Lunch in the form of a large cardboard box containing a “plateau” of cold cuts, salads, bread roll, plastic cutlery etc. duly appeared and was despatched. Thus strengthened, I embarked on an all-singing all-dancing overview of all the TEI modules, and what you can do with some of them. That took about an hour, but helped motivate Lauranne’s traditional exercise, which followed it, on using Roma to make a schema by reduction from TEI-ALL. By the end of the day, everyone seemed quite comfortable with the idea of personnalisation de schéma, and reasonably convinced that they might find what they wanted to mark up somewhere, somehow in the TEI.

Day 2

On this and subsequent days we were displaced to a much better (because bigger) teaching room. I began the day in seriously magisterial mode by explaining (many of) the components of the ODD language, and why you might care to know about it. This was quite punishing, both for me and for some of the less technically-minded participants, but no-one visibly fell asleep. For the subsequent practical, Lauranne had prepared a script in which participants submitted their own texts to analysis by OddByExample, generating a personalised ODD. The majority of course had not come with their own texts, or had texts not in TEI P5, so they ran the exercise on a rather inadequate sample of the Virgolos corpus instead. With a bit more prep, I think this could be a really fun exercise and an excellent way of getting people to learn ODD properly. It also revealed a bug in OddByExample which Sebastian graciously fixed overnight.

Lunch being cardboard plateaux once more, I went for a stroll round the nearby Parc de Montsouris to see some of the grey autumnal Paris daylight while it was available. I then droned on for well over an hour on the subject of the TEI Header, which I am now selling as “metadata for the rest of us”. They made me do it. The exercise we adopted was the French version of creating an ms description for the W. Owen ms, last seen in Berne. This is quite useful for the purpose, especially in combination with the following exercise on marking up a transcription of that manuscript; time permitting, however, I would have preferred to use a different French manuscript for both. If I had one.

Day 3

Wednesday I had carefully billed as “journee des guest stars” since the idea was to make other celebrated French TEI enthusiasts share in the work. So we began with a presentation, given by Alexandre Gefen, about TEI recommendations for dealing with named entities and their names. Since the room contained more than a few French linguists this gave rise immediately to a heated debate on the philosophical basis and nature of nominal reference. The exercise in marking up names was a little under-prepared, and one of the students insisted on asking the Emperor’s New Clothes question (why bother?), which was answered by another participant citing (at length, and with enthusiasm) the work of Nicole Dufournaud inter alia, which was nice. Since Alexandre had to leave early, I filled the gap by moving my brief overview of tool options up, giving a plug for Sebastian’s stylesheets, and letting them experiment with OxGarage, which they loved.

For lunch we went to the brasserie down the road, which was a much much better idea than the plateaux. Everyone got very jolly and there was a fair amount of shouting. Our second invited expert of the day was Bertrand Gaiffe, from ATILF, who delivered an excellent pair of lectures about the encoding of oral and of linguistic data respectively, also involving a fair amount of interaction and discussion, but not much actual tagging, since TEI interfaces for the appropriate tools remain elusive, best efforts of the LingSig notwithstanding.

Day 4

The final day began with a presentation on the various TEI orthodoxies concerned with the editing of primary resources, given by our third invited expert: Alexei Lavrentev from ICAR. Participants were then offered the choice of doing either the reverse transcription exercise or the visual encoding exercise from Berne; both options were taken up, though I was too busy sorting out the website to see how far they got with either.

After another nice brasserie lunch (roast duck), I spent about 15 minutes showing how to use TEI Boilerplate, which went down remarkably well: “Génial!” they cried, as they saw all that tricky encoding in John’s demo file being rendered beautifully by Safari (France is still largely a land of the Mac). The rest of the afternoon was devoted to a more ambitious TEI-savvy piece of software: TXM, from the textometrie project at Lyon. Alexei showed us what it was, and demonstrated how to make it sit up and do tricks with the Graal and Brown corpora, which participants had pre-installed. He also showed it working with a selection of literary texts prepared for use throughout the workshop.

Verdict

I think this workshop worked much better than it deserved to (I always think that). All the participants seemed very happy at the end, and several of them said they had learned more than they expected to. I think the organisation of the programme made good sense, and the balance of exposition and exercise was approximately right, though we probably didn’t do enough to make the practicals consistent and relevant. A few parts suffered from lack of preparation, and I think we could have done more to get a single case study working throughout the course of the four days, in addition to the various more specialised materials we introduced. But next time we’ll definitely get everything right. Thanks are due to the participants, the organisers, and all my co-formateurs, especially Lauranne for calming me down at moments of high anxiety.
All the materials used in the workshop are available in PDF starting from http://meet.tge-adonis.fr/sites/default/files/2012-11-initial.pdf. Dedicated TEI hackers may also be interested in the XML sources of the presentations which are available from my svn repository at http://code.google.com/p/tei-fr/source/browse/#svn%2Ftrunk%2FTalks%2F2012-11-paris

Here we go again

It’s ridiculously early for a Sunday morning, but the only plausible train to catch from Oxford if you want to connect with a 1220 Eurostar leaves at 0940. So here I am wondering, along with many others, where on earth is said train. We can see it in the sidings North of Oxford station, but it’s not moving towards us and the announcements are not reassuring. Maybe the stopping service to Ealing Broadway is a better bet: certainly standing around fretting on Oxford station is not pleasant. Some twenty bucolic minutes later, I detrain at Didcot in the hope of something better: which does indeed turn up in the shape of the 0940, now proudly running only 20 minutes late. I spend my uneventful trundle through the morning sunshine trying to work out what I have done to incapacitate tei-emacs on my laptop. Then an unspeakably horrible Circle line train bears me off to St Pancras, and the comparatively civilised space of the Eurostar lounge where I discover that in the general confusion of getting myself ready for this week’s set of French gigs I have failed to check something crucial into my nice new subversion repository. Ah well: no time to agonise over that, it’s time to get my disordered thoughts on the history of the TEI into some sort of plausible order, and to construct an appropriate French narrative around same. Which keeps me happily occupied for the rest of the day: out of London, across the wilds of Kent, under the channel, through Picardy into Paris, my nose barely strays a few inches from my laptop screen, tappety tappety tap, except for a few minutes degustation (I use the term advisedly) of a Eurostar snack lunch, and a few dirty looks in the general direction of some fellow passengers yapping away noisily behind me. Even nastier, but mercifully not noticeably longer than the Circle line is the hop by RER B from Gare du Nord to Gare de Lyon, where I resume work on board a nice peaceful TGV all the way to Lyon. With such good effect that my talk for tomorrow is all ready to go, even before I arrive at Perrache. Such virtue warrants dinner, even though it’s now a little late, so I stride purposefully across the Place Carnot to the brasserie Victor Hugo, order a hamburger a cheval (nothing to do with horses, this is a burger with a fried egg on it) frites, et un pot de cote and phone Marjorie to re-assure her that I am here and ready to boogie, before retiring to bed.

One hasty breakfast later, Dominique Roux and I set off in search of one of the many fine Universities in which Lyon rejoices, more exactly the vaulted basement dungeon in which Marjorie’s seminaire is taking place. The morning was supposed to be a double act, but since Paul Spence couldn’t make it, his colleague Guilhem Pépin instead gave us an interesting lecture about medieval history before showing us some of the Gascon Rolls project. Pépin is a French (or more properly, Gascon) historian actually working at Oxford in the History faculty. There was a time when I might have huffed and puffed a bit about Oxford academics who take their TEI digital projects off to King’s College instead of using the local facilities, but these days I have become placid and boring. Anyway, Pépin was a good speaker and clearly an agreeable person to work with; and the material presented all sorts of interesting possibilities for analysis once marked up, even if he was almost aggressively reluctant to claim any expertise in the application of markup. Not for the first time, I wonder why it is perfectly acceptable for academics to profess ignorance of one technology that is essential to their work, whereas ignorance of others (say, bibliography) would seriously damage their career prospects. And then off we all went for a decent lunch, this being France: dos de colin avec ses pommes de terres lyonnaises, if I remember correctly. After which I gave my talk, which seemed (to me at least) to go remarkably well for a first outing: I suspect I will give it again, at least as long as people go on asking me to explain where on earth the TEI came from, and why it has not sunk without trace. It is a good story, with a good moral, I think. After a coffee break, Dominique Roux from the Presses Universitaires de Caen gave a thorough overview of their projects and preoccupations, presenting a variety of cool projects, a TEI-based workflow, some wise remarks about the use of TEI in commercial publishing, and much else besides. It’s a pity he came at the end of a long day with perhaps a touch too much Gasconnade in it, since it would have been good to discuss several of the ideas he presented with the masters students present — who had all been assiduously taking notes earlier in the day, but were clearly flagging somewhat by the end. I was sorry to have to rush off in time to catch the train to my next gig, in Tours.

Preparation for said next gig took up quite a bit of the journey, quelle surprise; indeed I don’t think I looked out of the window once. And yes, it is possible to get from Lyon to Tours without passing through Paris, if only once or twice a day. The TGV concerned stops at a place I have never heard of called Massy, and then at St Pierre des Corps, before zooming on to Caen. St Pierre des Corps is a dismal little junction from which a variety of trains shuttle into the architectural splendour of Tours central, about 5 minutes away. Even when entirely enclosed in scaffolding as a part of its restoration as a patrimonial monument, Tours station is an uplifting spectacle late at night, when everything around it is closed except for Macdonalds. Equally good for the soul is the Grand Hotel of Tours which has retained and lovingly refurbished its charming 1930s decor, all peacock feathers and wooden panelling and geometric patterns. Last time I was here in December, the wifi was misbehaving but everything seems to be fine now, and the breakfast is excellent. Next morning, it’s a quick trot across town to the Centre d’Etudes Superieures de la Renaissance to give my contribution to their Master 2 professionnalisant Patrimoine écrit et édition numérique : initiation à l’encodage des textes patrimoniaux. This is the third or fourth year I have done this, so you would think I had it sorted by now. My contribution this year consisted of a ninety-minute lecture on manuscript encoding (much revised to recognise the existence of the new <sourceDoc> element, as of release 2.0 of TEI P5 — this was the talk I thought I had mislaid, but hadn’t); followed by another 90 minutes on Roma and schemas and such like mysteries, using the Virgolos project as a case study (called TEI a la cartes, geddit?); and finally another 90 minutes attempting to explain XSLT pour les nuls. This last was a rather more quixotic and under-prepared venture: although novices quite quickly grasp the basic ideas and usefulness of XPath, grasping exactly what an XSLT template is and why you might want one is rather more of a challenge. But the punters seemed content to be slightly baffled at the end of a long and varied day, and I am sure that the local team will clarify any residual bewilderment next week. Dinner was at the Odeon, another piece of lovingly restored 1930s kitsch, where the food was excellent (I had the rognons since you ask), and Marie-Luce and I discussed the notion of a weeklong residential formation approfondie sur la TEI under the auspices of the Cahiers consortium, plus anyone else who might like to play.

Tours is in the process of acquiring a tramway, which means that large amounts of it are being dug up and knocked down, notably near the railway station: I observed this with interest over breakfast, before hastening off rather late for a short consultation with the CESR team about how to autogenerate an ODD from their Epistemon corpus (sort of difficult if you don’t have Saxon installed), and some discussion about how best to proceed with their ongoing project of revising the project’s encoding manual. The plan is not only to update but also to generalise this manual for use by other similar projects, which would certainly be useful: there isn’t a lot like it in French, aside from the BFM manual. However, I have a train to catch this morning, so I have to sprint back through the marche des fleurs, looking neither to the right nor the left, regretfully for there is much to see, and resisting the temptation to stop to buy fresh garlic or dried flowers or a sandwich for the journey or even take some photos of the pavements now decorated with a rich and colourful assortment of flowering bedding out plants. Tours is a charming place with much to recommend it. And so, off to Paris where I have a couple of crucial meetings to attend, crucial enough to propel me into an irrational anxiety about the progress of my train which suddenly decides to slow down and stop in the middle of nowhere more frequently than is decent, even for an intercite. In the event, though, we pull into Austerlitz ten minutes early, allowing me to take a pleasantly-paced walk through the Jardin des Plantes and up the hill to the TGE Adonis office in good time for my appointment with my directeur, Jean Luc Pinol. We discuss the coming year’s work plan for MEET; this being satisfactorily resolved, Ariane agrees to release a PUMA forthwith (don’t ask) … I spend the afternoon catching up on the gossip with TGE colleagues before checking into this week’s hotel which is conveniently located opposite a nice bar and round the corner from a rather excellent brasserie. Here I dine, expensively but deliciously on foie de veau patates et encore un pot de rhone. It’s tiring work all this gluttony, you know.

Next morning, I rise at a civilised hour, and catch up on my commitments at the TGE most of the day, taking however an extensive lunch break to discuss with Mathieu Andro from the Bibliotheque Ste Genevieve a wondrous new digital library project which has apparently secured 1.7 million euros of local funding to finance a deposit archive for the digitized outputs of a select bunch of Parisian libraries, and wants to use the TEI. Did I hear that right? The lunch was pretty good too. Finally, I put in place some hasty arrangements for another meeting in Paris next week, and then trek on foot across town to Chatelet (where there seems to be, as usual, a manif going on) to catch the metro to Gare St Lazare (which, post-renovation, seems to be mysteriously disguising itself as the gare de l’est), to take the train to Caen, for the last gig of this tour, namely Matthew Driscoll’s ongoing TEI seminar at the MSH. The Hotel Quatrans is much as I last saw it, and so, I am pleased to report, is the little restaurant called “Les saveurs de la Reunion” just round the corner from it, where Matthew, Eric, and I enjoy some rum, some gateaux piments assortis, two bottles of muscadet, and a tasty cari cabri before retiring for the evening.

Friday is seminar day. Serge Heiden (ARE YOU READING THIS SERGE?) from the ENS Lyon opens proceedings with an update and an impressive demonstration of the textometrie project, which goes from strength to strength. They have an Equipex in which they will be working with hundreds of historians, and a number of other collaborations in prospect, some ANR, some DFG-funded. The software is, of course, still available from sourceforge, and they are also in the process of setting up a portal for general access to some demonstration applications of it. Serge discussed the way the software uses TEI and other forms of markup; they have now fixed on a TEI-conformant pivot format, for which an ODD is in preparation. He also demonstrated many XAIRA-like features of the software and reported some work done by Alexei Lavrentev in importing and analysing the markup of a large corpus of texts from Frantext. He was followed by Antoine Widlocher, who described the search engine under development at Caen’s Greyc research group, initially for use in the Descartes project. Its data model uses graphs rather than trees, and much of his talk therefore concerned the difference between the two, although he did also present the user interface envisaged for the system; this is, of course, SPARQL-based, and will access a triple store in which XML and other annotations are all represented in RDF. All very interesting if, perhaps, a little computer science oriented. Maud Ingarao commented that the project resembled Edouard Portier’s work on multistructured documents; I should have mentioned Desmond Schmidt, but didn’t. After lunch (in the student canteen; n’en parlons plus) Maud gave a brief overview of a newish XML database system called BaseX, and demonstrated some of its jazzier features: she also noted that a test BaseX server has now been implemented as part of the TGE Grille de services. Frederic Glorieux then gave a nice talk demonstrating how the presence of detailed markup in his version of Francois Ganaz’s “XMLittré” project facilitated several interesting searches: he proposed that the average size of text fragment within a TEI document might be an interesting stylistic indicator, and remarked on the high frequency of emotive words like “dieu, homme, roi” in the examples cited by Littré. Finally in this session Marie Bisson demonstrated the current state of the Juxta collation system under Windows, working on three manuscripts of Thomas Le Roy. Juxta apparently has its own XML markup but does now also (more or less) grok TEI.

Last but one session of the day concerned “quantitative codicology“, a term, I learned, which is even older than the TEI, having apparently been invented by someone called Ornato in 1980, according to Matthew, though it is a concept which can be seen to underlie Don McKenzie’s 1985 Panizzi lectures on bibliography as “the sociology of texts”, or the so-called New Philology of Stephen Nichols at the start of the nineties. I liked Matthew’s use of the phrase “the artefactual turn” to describe his increasing certainty that the meaning of text should not be dissociated from its “embodiment” or the historical and social forces that documents manifest, and intend to appropriate it for use when presenting the TEI’s recent reinvention of <sourceDoc>. Matthew and colleagues described the Fornaldarsögur norðurlanda project, which aims to provide an account of the production, dissemination, and reception of the “chirographically transmitted texts” of 36 stories from prehistoric times which can be identified in some 1500 texts presented in over 750 distinct Icelandic manuscripts. These are described using (inter alia) a reduced and tightly constrained schema derived from TEI P5, extended to include information derived from the transcriptions of the mss such as the average written area, the number of abbreviations per line, etc. as well as such features as the presence of decoration, or the types of text included. Sylvia Hufnagel presented some hypotheses about possible connexions between these evidential characteristics and assumptions about the wealth or status of the owner or person believed to have commissioned creation of a manuscript, though there is really insufficient evidence so far to justify any generalisations one might be tempted to make about (say) the emergence of the “prestigious reading manuscript” distinguishing (as it were) “coffee table” manuscripts from “paperbacks”. Eric Haswell described clearly and concisely the technologies used in the project, contrasting the “data centric” and “document centric” notions of relational and XML databases, and also showing how their web-service-based implementation on eXist made it possible very easily to extract query results as CSV for input into traditional spreadsheets or as JSON for use by cooler things such as SIMILE widgets. Finally, I gave that talk about linguistic annotation and why people say such terrible things about it. Not sure how appropriate it was to the day, but people seemed to be listening anyway. Final dinner of this week of over-eating was at Le Bouchon du Vaugueux where I (and others) tucked into a four course gastronomic menu, including some excellent roast duck, and rather a lot of stewed pears.

And on Saturday, the journey home, which was all very pleasant till I actually got to London: trains cancelled without warning, inadequate fallback facilities, Great British Public mustn’t-grumbling, etc. etc. It took longer to get from London to Oxford (about 100 km) than from Caen to Paris (about 200 km), and involved a train that was so overcrowded it could not leave the station, not to mention a 30 minute wait for a replacement bus in the cold outside Reading station. Never mind, next week I’m going back to France, where the trains (mostly) run on time and the train crews are (usually) helpful and less demoralised when they don’t.

A day in Lower Normandy

And so to Caen, whose University campus boasts magnificent if vaguely fascist architecture, at the top of a hill, commanding splendid views over the urban sprawl to the countryside beyond, and liberally decked with graffiti to bewilder future epigraphers

OK Epidoc, encode this.

The University Press of Caen having joined forces with two other departments to offer him a visiting fellowship, my distinguished and white-haired Danish colleague Matthew Driscoll is organising a series of seminars over the next few months, and I am here for the kick-off session “TEI et encodage des sources”. About a dozen or so TEI fans are gathered in the Belvedere Room, which is vast and very cold but still affords delightful prospects (as they say).

First up is Julia Rogers, a local doctorante describing the online edition of Descartes on which she is working under the watchful tutelage of Pierre-Yves Buard inter alia. No manuscript survives of Descartes’ works, and modern editors have played fairly fast and loose with them as a consequence: this impeccable electronic edition returns to the first printed editions as its basis, but uses all the possibilities of digital editing. Text is captured and maintained collaboratively by up to 15 scholarly editors, using a customisation of XML Mind to enforce a simple P5-conformant protocol designed by Pierre-Yves (and built with Roma), allowing for such niceties as the addition of editorial notes, citations, tracking of quotations, mathematical formulae (currently done in TeX though this will change) etc. Elsewhere in the University a fairly sophisticated morphologically-aware search engine is being developed, so that the original text can be queried in Modern French. The online edition will also integrate high quality page images supplied by the BNF, compensating for the decision not to encode all features of the layout. Impeccable, as I said. I was also impressed (as usual) by Sourcencyme, presented by Isabelle Draelants from Nancy and Catherine Jacquemard from Caen. This ongoing project will combine a textual corpus of medieval encyclopaedias (about seven so far) with a sophisticated indexing system tracing the chains of reference and citation amongst them, extending in some cases beyond into the 19th century. As a real hand-built hypertext, it is thus increasingly becoming the thing it represents: a complete encyclopaedia of medieval learning, endowed with tools for collaborative editing and annotation, and also with a specialist journal-like addition published by the ubiquitous revues.org. However, unless I misunderstand, a significant number of the texts it treats are owned by Brepols, which may pose access problems. Finally before lunch, we were entertained by Vincent Olivet and Frederic Glorieux from the Ecole Nationale des Chartes whose home-grown RelaxNG tools continue to advance in the general direction of TEI conformance. They have been working on a direct conversion from ODT to TEI, using the same principles as Sebastian Rahtz’s stylesheets but aiming at a more specific homegrown RelaxNG schema, now expressed (I think) using an ODD. This was all very satisfactory, as is the fact that the tools in their workshop continue to be readily accessible.

Lunch (a three course affair involving some rather good salmon, and a chocolate mousse) was also highly satisfactory, and we reconvened much restored for an afternoon combining three short project presentations with set pieces from Matthew and from myself. Subhasree Pasupathy, from Caen, first of the three, described her use of the TEI mechanisms for representing textual variation in her thesis on the projects of the Abbe de St Pierre. Thomas Lebarbe introduced us to the pleasingly heterodox digital Stendhal project at Grenoble, during which I wondered, not for the first time, how hard it could be to write an ODD corresponding to their home-grown DTD. Finally, Jorge Fins from Tours showed us how the Bibliotheque Virtuelle des Humanistes at Tours is now using both XTF and PhiloLogic to search its corpus.

And so to the grand old man of TEI-based editing: not me, but Matthew Driscoll. He spoke in English but (as someone said to me afterwards) with such limpidity of discourse as to pose no problem (which sounds even better in French). Citing W. W. Greg’s distinction between “substantive” and “accidental” variation, he showed how TEI markup enables one to capture both, but display either, by the judicious tweaking of rather cunning stylesheets developed by Eric Haswell. He also talked about gaiji, news of the existence and facilities of which does not seem to have penetrated everyone’s consciousness to the extent that it probably should have by now. And finally, a good half of the material I had prepared for my own talk having been presented by previous speakers, I was able to close the day in a suitably forward-looking way by focussing mainly on the new concepts proposed for handling l’edition genetique (sourceDoc, mod, change etc.) in TEI P5, which all seemed to go down quite well.

Deplacements, sept 2011 – 2

13 September The AFLS conference has another half day to go, but I have to move on. I have foolishly agreed to do both opening and closing plenaries at a week-long CNRS-funded Ecole Thematique on linguistic annotation to be held in (I am told) a chateau in Biarritz: who could resist? So I catch the midday train from Nancy into Paris, traverse the city of light by metro ligne 4 once more, emerging at Montparnasse to find that it is infeasibly hot outside as well as beneath. How well I remember leaving rainy Oxford with a suitcase full of autumnal long-sleeved shirts and a raincoat; what folly not to have anticipated the need for cooler clothing. Well, there is time and opportunity to buy a short-sleeved shirt from the C&A store underneath the Tour Montparnasse, and doing so makes me feel cooler already. I even have time to eat something fairly horrible and cheesy at a Flams’ fast food joint, before catching the TGV to Biarritz. This splendid train is sans arret for 3.5 hours to Bordeaux, at which point the climate and the scenery suddenly change, and the train starts pottering through the wonderful Camargue for another hour or two, before finally stopping at Biarritz, where I descend and join the other stragglers hopefully waiting at the taxi rank.

Good lord, I realise, as I descend from the taxi, they were not kidding about the chateau. The Domaine de Francon was built for a wealthy English milord in the 1880s, and is (it says here) “une vaste demeure de style anglo-normand ou Old English, au luxueux décor intérieur très éclectique et très raffiné”. Though now owned by a holidays rental outfit, it still retains most of the original decor (huge wooden staircase, stained glass windows, painted ceilings, Second Empire Japanese-style decorations and marble fittings passim) as well as a very fine tree-filled park, and verandahs from which you can see huge Atlantic breakers banging away on the beach half a mile away. See further my photos. It has the atmosphere and the style of an Agatha Christie country house, as well as an excellent cuisine, though by the time I arrive, there’s not much left to enjoy except the cheese board. Never mind: I retire up the vast creaky staircase, past the glorious stained glass window, into my huge room and sleep soundly.

Next morning, slightly disappointed to find that some French version of Jeeves has not discreetly laid out a morning suit for me, I slip into one anyway, grab a hasty breakfast, and proceed to the salle de conferences, to receive another little conference pack (wooden USB key, another badge, another bag) and listen dutifully to the organisers explain the modalities of this event, which has been in the planning for almost exactly a year. Coffee and buns on the verandah, and then it’s time to wheel out my talk on Linguistic Annotation for a (hopefully slightly improved) second performance. There is some discussion, mostly positive, and afterwards Anne Dister kindly takes me to one side and corrects numerous typos in the slides, which I perversely read also as a positive response. The rest of the day is devoted to brief presentations about oral, written, and video data, which nicely reinforced my comment that the Tower of Babel is still very much with us, and also demonstrated, to my relief, that I had at least chosen relevant topics (variation in transcription practice, markup of named entities, co-referencing, etc.). I was also struck by the fact that formats tied to particular software (Praat, CHILDES, ELAN, Transcriber, etc.) are seen as de facto standards.

The next day we meet in a custom-built bunker under the lawn where there is a swish conference suite (though the wifi is a bit rubbish). I enjoy listening to two grandes dames d’annotation linguistique — Anne Lacheret and Marie-Paule Pery-Woodley — give their take on the complexities of the field, but it becomes clear that these formal presentations, though each in their way impressive, are not the real business of this event. Instead it is in the extensive and occasionally heated discussions taking place over coffee and during meals and, yes, during early evening excursions to the beach that the intellectual action is taking place. As it should be, at an Ecole, of course. The twenty or so participants are an interesting mix of people at different stages in their careers, including recent doctorates, established scholars, and a scattering of engineers, and they also come from several different parts of the applied linguistics forest despite a shared interest in annotation (for example, video capture, sign language, psycholinguistics…). I struggle to keep up and feel obscurely honoured to be involved, though I am also somewhat preoccupied by more mundane matters like getting the next few talks ready, or getting my washing done.

In retrospect, taking a day out in the middle of the Ecole to go back to Paris was somewhat eccentric, if not barking mad, but I did it all the same. On Monday evening, I found my way to Biarritz airport in time to catch the evening Easyjet flight to Roissy: a fairly nasty experience, not improved by the excessively long walk from the arrival gate at Charles de Gaulle airport all the way to its railway station. I spent the night at a slightly shabby hotel near the Gare du Nord, and on Tuesday morning, I met Celine who guided me by RER out to the wilds of Villetaneuse, and the Universite de Paris XIII. Here, as guest of the UMR “Lexiques, Dictionnaires, Informatique”, I gave a completely different talk about the TEI (the third one of this trip) to a vast and heterogeneous crowd of students. Then I got a lift on the back of Fabrice’s scooter back to the station, took the first train back to Paris, and across to Montparnasse in good time to make the 1540 train back to Biarritz, arriving at the chateau somewhat out of breath, but just in time for dinner.

All of this meant that I missed entirely the chance to learn more from Antoine Widlocher and Yves Mathet about the standoff XML annotation tools developed at Caen, which is a shame, judging from the copy of their presentation on the Ecole’s wiki. But I did get to see Alexei Lavrentev demonstrating TXM in action (and successfully broke it yet again). I continued to worry about the two talks I still hadn’t written, but also went for a good walk around the grounds of the Domaine. I started to feel at home. Next day, we were assured, the weather would improve enough to warrant a picnic, which would be provided at the end of the morning, as indeed it was.

And so to Thursday, which began with a good talk from Sylvain Loiseau, almost but not quite saying “use the TEI”, followed by my final wrap-up talk on why standards might be considered to be a good thing, even in this fragmented field. As requested, I managed to finish this early enough for everyone to go off to the beach with their packed lunches. I however was condemned to sit around finishing my next (and final) talk, before setting off back to the airport. This time I took the Air France flight, which was better in that AF still hand out free drinkies, and don’t quite treat the passengers like refractory parcels, but worse in that, when it arrived, the Orly bus, which allegedly comes every 15 minutes, did not on this occasion come for an hour, which led to much overcrowding and grumpiness on the short hop across to Denfert-Rochereau. I then sleep-walked my way via metro to Exelmans, checked into another overpriced hotel, and collapsed.

Deplacements, sept 2011 – 1

It’s September: the rentrée — when everyone goes back to work, including me even though I am retreated — looms. I had a very nice summer, thank you, sitting in my garden for possibly the last time, enjoying being visited by daughters and grandchildren, and not having to go anywhere, except occasionally down the shops to buy a newspaper or some more mushrooms. And in particular not having to get up in the morning. It couldn’t last. The leaves on my conker tree are brown and as I write the conkers are already starting to fall. Time for some quick blog entries rapidly surveying the first series of displacements I undertook this month: five talks at four different venues in 12 days.

September 8th. I am sitting once more on the Eurostar, waiting for the closure of the doors, when my phone rings. The expensive hotel in the Marais where I am booked in for the night wants to cancel my reservation. I am delighted to discover that lack of usage has not impaired my ability to remonstrate in French, so I remonstrate. How very dare they. Then I spend the journey feverishly correcting the first of the four different talks I have to give on this trip (the others will come in due course) and not thinking about hotels any more. Which seems to work, since when I arrive at the Gare du Nord and switch on my French phone, the hotel meekly apologises for deranging me and assures me that everything has been regulated. Good. For 200 euro a night I expect a bit of servility. Luxury would be nice, but is not (it becomes apparent when I actually get to La Turenne du Marais) on the menu today. Never mind: all I need now is some dinner, which (I am pleased to report) was available from a quite acceptable Italian trattoria just across the street. I wolf down a good tricolour salad, some creamy pasta, some wine, and (a mistake this last) an allegedly Sicilian cannoli before retiring to my absurdly over-crowded hotel room.

Les archives nationales

Next morning, after a typical hotel breakfast, it’s a five-minute stroll to Les Archives Nationales, which are apparently on strike despite the sunshine. The lovely Anais Wion appears and kindly carries my suitcase up hundreds of stairs, along dozens of corridors, and through all sorts of winding twisty passages which eventually lead me, puffing along behind, to the attic in which Denise Ogilvie hangs out. It boasts splendid views over the rooftops, and its own bathroom. The three of us then spend an agreeable morning debating how to mark up postcards in TEI and gossiping about the other ANGD trainers who haven’t apparently made much progress either. I realise that I still haven’t done the work I should have done on revising the workplan for the “structurer” session; in particular I haven’t written to tell the nice lady from INIST that I think there is too much Dublin Core in her proposed scenario.

No time for lunch. A not very quick taxi ride across town through the traffic to rue Lhomond, where I pop into the office to say hello, pop out again to get a sandwich, pop in again for a meeting with the neighbours at ITEM to discuss the possibility of a TEI-compliant version of their legendary Optima program. I form the doubtless incorrect impression that someone has told P-M de Biasi that he won’t get another round of funding for this software unless it can export TEI. He impresses on me how different it is from everything else I may have seen of the kind, and I assure him it would be utterly delightful to collaborate on such a project, while Daniel Ferrer, who organized the meeting, blinks a bit in a mildly donnish way. Anyway, PM dashes off to his next meeting, and I dash off to mine, which is actually just a brief tea break with Florence outside her office in the place de l’université. We exchange gossip, and drink what passes for tea in France. And then it’s back to the sweaty metro ligne 4 and off to the Gare de l’Est, for the 18:09 train to Nancy, on which I continue to work on that wretched talk. Except for the bit where the train achieves its maximum vitesse, appearing determined to shake itself (and me) into pieces in the process.

Bertrand meets me off the train, walks me to my hotel, buys me a beer, and makes sure we get to the right restaurant for dinner not too late. He’s a pal. The hotel, the Akena, is a French take on the American motel — lacking in frills, but actually offering slightly more space than the one in the Marais, with functioning free wifi, at a third of the price. Dinner is upstairs at the Grand Cafe Foy in the Place Stan’, of course. I go for the obligatory quiche lorraine, followed by entrecôte and chips, and a truly delicious tarte à la rhubarbe. Service is very slow, but the food is worth waiting for: so maybe not nickel (can this word be applied to food?) but definitely correct. I slowly warm up to academic discourse in French again: it’s been a while.

Next morning, infeasibly early, I accompany my fellow plenary speaker, a distinguished lady called Catherine Kerbrat-Orecchioni, down to Nancy’s remarkably ugly Palais des Congrès (it looks like a carpark) in good time to observe my hosts of last night doing conference-organiser panicky things for a while, get some coffee, get my very own conference bag, check my email, and give my intervention on linguistic annotation (Is it A Good Thing?) yet another final polish. And so eventually to this talk’s first outing, as opening keynote for the annual conference of the Association of French Language Studies, which seems to go down better than it has any right to. I can spend the rest of the day graciously accepting compliments on the clarity of my French and re-acquainting myself with applied corpus linguistics, a field which (had I forgotten?) seems characterised as much by some really very nice people as by any methodology. After lunch, I listen with interest to various presentations about dozens of small oral corpora, and wonder if the time is really ripe for the TEI to be adopted as an interchange format for them. The closing plenary is from a grand homme de corpus linguistique, Bernard Combettes, who extemporizes with no visible visual aids or notes in the impressive way that only French grands hommes can. At the end of the day, we form a disputatious crocodile and trek across town down to the University Library’s vast salle d’honneur in the cours Léopold, which I distinctly recognise from the time back in 2008 when the TEI annual meeting was hosted at ATILF. As on that occasion, there are speeches (largely content-free), lots of champagne, and lots of amuse-gueules. I realise after a while that I am very tired, as well as pleasantly drunk, and stagger back to le motel, missing out the son et lumière.

Next day, being Monday, I feel I can justifiably skip some sessions in order to deal with some of the minor crises which have popped up in my email, notably a panic about a deliverable for the Agora project. I have a long and interesting discussion with Carol Etienne about the CLAPI project, which seems to have accumulated and (more usefully) makes available a lot of information about the various oral transcription formats currently in use. And I listen to another plenary, this time in English, from a Canadian called Tom Cobb, largely advertising his wonderful teaching software: he apparently runs a company called Linguasoft which (in BNC days) I had occasion to speak to rather sternly.

After lunch, I attend a session largely devoted to people talking about something called “la langue des jeunes”, which is seemingly the politically correct term for the language of the banlieues, aka banlieusard, many of the presenters taking a curiously anthropological approach to the task of collecting and reporting linguistic evidence. Later in the afternoon, Jenny Cheshire’s plenary reported some results from her ongoing English (well, Hackney-based) project on innovation in “Multicultural London English”: an impressive talk for its methodological rigour and thought-provoking analysis; see further this earlier report, though that one doesn’t talk about pronominal “man”.

The evening was rounded off by a celebratory dinner in the downstairs dining room at Nancy’s gloriously art nouveau restaurant Flo, during which I spent a fair amount of time chatting with the ladies who run AFLS, partly because they seemed to like it, partly in the hope they might invite me back some time.

« Exploiter les données structurées en XML »

Here’s a nice way of spending a day in the heart of the Marais. Get together a bunch of people who actually use the TEI (or some other kind of structured XML markup) to do cool things, and ask them to talk for a maximum of 10 minutes each about the software they use and what they do with it. I claim no credit at all for this idea: the event was masterminded by Anais Wion, Fabrice Melka, and Denise Ogilvie, who just coincidentally have to prepare a workshop on the verb “exploiter” in Aussois later this year. Whatever its origins, this turned out to be a really worthwhile day, and not just because of the venue (the alabaster hall of the Archives Nationales) or the lunch (yum, Lebanese buffet).

A proper account of the proceedings has been promised for a couple of weeks hence, so this note is just the consequence of me jotting down some immediate impressions on the train home. There is already a useful page of links to stuff mentioned at the workshop at http://www.delicious.com/workshopexploiter, which I should probably update with this report.

I kicked off by explaining why the TEI really didn’t ought to have much to do with software production, except for its own nefarious purposes. I conceded, however, that those purposes led ineluctably to the production of Sebastian’s Excellent Stylesheets and hence to a generic software tool of some importance in the community. Marjorie Burghart then talked about the XML database eXist, showing it in action on her sermones.net site, and also on her paleographic exercise site; the main problem with it, for her, was that its installation and maintenance on a local server require a little more technical expertise (for example, fine-tuning a Java environment, recovering Tomcat when it falls over, etc.) than is available to the typical humanities department. This need for infrastructural computing support turned out to be a major theme of the day. Next up was Lauranne Bertrand from the CESR team at Tours, who showed how they currently use XTF to display various versions of their richly encoded texts. Maud Ingaro then introduced us to a new XML database from the University of Konstanz called BaseX, which seems worth a second look, if only for its very sparkly visualisation features, though its main claim to fame is probably its ability to handle REALLY BIG (multi-gigabyte) databases, which (if true) should give several current pontificators pause for thought. Jorge Fins, also from CESR, then talked about Philologic, which provides traditional text searching (full text indexing, concordancing, etc.) capabilities, running on a distinct (and distinctly dumbed down) copy of the Bibliotheques Virtuelles des Humanistes exported to Chicago.

After a brief pause for coffee, Alexei Lavrentev, standing in for Serge Heiden (reportedly recently immobilised by a close encounter with a crampon), showed us the current state of txm, the open source text analysis system developed by the textometrie project at Lyon. Severine Gedzelman, also from Lyon, then described Hypermachiavel, an application for handling multiple aligned corpora (or, to be more exact, one specific set of multiple aligned corpora). I found the difference in software design between these two projects interesting: txm was developed very consciously as a generic text processing framework, incorporating and rationalising features from many other systems; whereas Hypermachiavel was developed (almost from zero) very much to meet the specific needs of a particular research project, without any generic intention.

Does the world need another generic tool for doing textual annotation in XML? Certainly many linguists and computer scientists seem to think so. Cue Antoine Widlocher from the University of Caen, and Glozz, a new platform for distributed linguistic annotation of text segments, overlapping or otherwise, relationships, graphs, etc. etc. Very nice visualisations, as per other Java applications; nice features such as annotation histories; no evidence that any researchers from the humanities had been involved in its design or application up to now. Florence Clavaud, from the Ecole Nationale des Chartes, then spoke very briefly (no, really) about Pleade and her plans to enhance this mainstream EAD-muncher to include TEI capabilities. Pleade is one of the tools of choice in the French archival community, so enhancing it to handle TEI as well as it currently manages EAD and sets of digital images would be very cool. Also from ENC, Vincent Jolivet and Frederick Glorieux showed us diple, a nice simple package written in PHP to transform complex TEI markup into static web pages, with a complementary suite of stylesheets to render them, and something called xrem, a very glamorous tool for the visualisation and construction of RELAX NG schemas. Fred likes to work directly in RELAX NG rather than via ODD, but the results almost justify such heresy. Nicole Dufournaud, aided and abetted by Denise Ogilvie, told the (possibly) instructive history of how Millefeuille (a nice customized TEI editing and indexing application based on work Nicole pioneered back in the nineties) is now in a state of suspended animation. Following one unsuccessful attempt at reanimation, it appears that another is proposed as part of a European project. Finally, before lunch, Maud Ingaro showed us some CamStudio videos about dinah: this “philological platform for the construction of multi-structured documents” is currently being developed at Lyon in a project studying the manuscripts of Jean-Toussaint Desanti, and seems worth a second look, even though it’s a long way from being stable yet.

After the afore-mentioned very nice lunch, there was a wide-ranging free-form discussion, from which I took away chiefly the following points (as aforesaid, there will be a more complete and correct report later):

  • a general feeling that IT infrastructural support was lacking: in particular, people wanted
    • some kind of sandpit environment in which they could experiment with different tools
    • some easily accessible web-publishing service for e.g. doctoral students to showcase their work
  • a general feeling that development and implementation of XML-based projects was hard work requiring input from specialists, consequently a need for more training
  • a desire to share experience of these and other tools; TEI-FR and the TEI Tools SIG were agreed to be appropriate channels for this.

Some pointed requests were made for the TGE to do more to provide some of these services, which proposal I agreed to go away and investigate.

Quel avenir pour l’édition génétique sans "digital forensics"?

Ce texte représente une intervention au séminaire général de l’ITEM qui a eu lieu à Paris le 31 janvier 2011. Remerciements à ma collègue Nadine Dardenne qui l’a relu pour en corriger les fautes d’orthographe et de syntaxe répandues dans la version originelle; je revendique cependant toute faute intellectuelle résiduelle.

Je souhaiterais vous proposer une brève présentation d’un champ d’études émergent qui se nomme “digital forensics”. Ce terme recouvre un ensemble de techniques et théories propres aux procédures juridiques, mais probablement également d’une importance incontournable pour l’archivage et l’étude des objets nativement numériques, considérés du point de vue patrimonial. Le besoin de mettre en évidence, d’une manière crédible et certaine, les traces de mots enregistrés sur disque dur ou floppy, même supprimées, et d’associer ces traces avec un écrivain, est un enjeu qui afflige l’éditeur critique autant que l’agent de police ou les services secrets. À chaque fois on a besoin d’une connaissance des affordances des systèmes de stockage numérique, de ce qu’ils rendent possible, et de ce qu’ils cachent. À chaque fois, il est question de balancer des probabilités, de proposer une vérité vraisemblable basée sur des évidences. On pourrait rester aveugle devant ces possibilités, bien sûr. On pourrait dire que l’histoire d’un texte se réduit à l’histoire de ses incarnations multiples, sur ces feuilles de papier que nous aimons si bien. On pourrait renoncer à l’investigation de la manière par laquelle ces incarnations ont été réalisées. Mais dans ce cas il faudrait également renoncer à la majorité du discours artistique actuel, qui est né numérique, vit et évolue dans le numérique, et meurt dans les archives numérisées de M. Google. Car les objets d’étude des humanités et sciences sociales sont de plus en plus conçus et stockés sous forme numérique; il est donc indispensable de revoir et de transformer l’outillage avec lequel on espère les archiver et les analyser. L’ordinateur de l’auteur, ses disques, son téléphone portable, ses espaces virtuels sur le réseau internet, remplacent ses cahiers, ses brouillons, et ses manuscrits. Il faut ré-équiper le chercheur avec une compréhension des principes d’enregistrement numérique, pour compléter sa compréhension des principes de l’écriture analogique. Le choix est simple: ou bien il faut redéfinir la diplomatique pour le numérique, ou bien il faut renoncer à l’étude de la genèse textuelle des oeuvres modernes.

Comment constituer cette redéfinition? Je propose un réajustement à deux niveaux: intellectuel, et substantif.

Au niveau intellectuel d’abord, il faut affecter une bonne compréhension de l’informatique aux disciplines des SHS. En dépit de deux décennies (au moins) de “humanities computing”, à présent relabellisé comme “digital humanities”, il reste une étonnante ignorance autour de l’ordinateur et de ses capacités à faire (ou à ne pas faire). En partie, c’est une des conséquences de l’émergence de l’informatique grand public, comme phénomène de marché de masse. Des impératifs commerciaux restreignent l’usage de l’ordinateur à des plateformes spécifiques, et transforment ce moteur universel en un jouet uni-fonctionnel. Ce n’est guère surprenant alors d’entendre les gens affirmer que cette technologie réductive pervertit l’intelligence humaine en la transformant en une disposition de bits. Ou, à l’extrême opposé, d’y voir l’éternel attrait du divin se manifestant cette fois dans la tendance à vouloir attribuer une intelligence consciente aux effets d’échelle (par exemple, le crowd sourcing, les réseaux neuronaux, le data mining…). Peut-être y en a-t-il parmi nous qui ont besoin de recalibrer le cadre de leur esprit pour supporter l’ère de l’information, juste comme nos ancêtres ont dû s’ajuster à l’ère de la vapeur… mais un tel ajustement consisterait en une extension de nos perceptions, en aucun cas en une transformation. Dans la langue française, un ordinateur a pour objectif de mettre de l’ordre dans les choses; le mot “ordinateur” porte même des nuances religieuses, en rappelant par exemple l’ordination des prêtres. Dans les langues anglo-saxonnes, par contre, un “computer” n’est qu’une machine pour calculer. Mais les objets auxquels l’ordinateur apporte un ordre ne sont pas que les chiffres: il est la machine par excellence pour organiser n’importe quelle espèce de signe, pour le ré-encodage des systèmes sémiotiques de toute sorte. Voilà pourquoi j’ai toujours insisté pour que l’informatique soit considérée comme une branche des sciences humaines, plutôt que de l’ingénierie ou de la mathématique. Au niveau matériel, je propose un élargissement des connaissances attendues de ceux qui veulent faire des études philologiques. On attend de tels gens une compréhension assez intime des technologies typographiques ou paléographiques. Il y a maintenant urgence à élargir ces compétences vers le numérique.

Je termine avec quelques mots sur quelques éléments de ce qu’il faut faire apprendre aux futurs généticiens. Quand j’écris un document sur mon ordinateur, le texte apparaît et disparaît sur l’écran, sous le contrôle d’un logiciel avec lequel j’interagis à travers mon clavier. Les traces propres à mon texte sont de deux sortes: des lettres, et ce que l’on pourrait nommer des “méta-lettres”, c’est-à-dire des codes qui déterminent la façon d’afficher ou de traiter les lettres. (Un autre terme possible serait “markup” ou “balisage”.) Ma conscience de ces méta-lettres est variable: quelques-unes (la ponctuation par exemple) me semblent être un composant de ce système sémiotique que l’on appelle la langue naturelle; d’autres (les retours de chariot, les indications de rature, etc.) me semblent moins visibles, et j’attends que la machine s’en occupe seule. De la même façon, les codes insérés par le logiciel de traitement de texte pour générer des effets spéciaux tels que les changements de police ou de couleur appartiennent, de mon point de vue, à un niveau sémiotique tout à fait différent. Cependant, mon texte est composé de signes appartenant à ces trois niveaux. Le texte numérisé que j’ai ainsi composé commence son existence physique comme des changements d’état dans la partie dynamique de la mémoire de mon ordinateur; très rapidement ces changements sont transférés et enregistrés dans un format plus permanent quelque part sur mon disque dur, ou dans une autre mémoire. D’habitude ceci s’effectue automatiquement par l’infrastructure informatique, l’OS: à noter que c’est fait sans aucune intervention de ma part. Même au moment où je décide consciemment d’enregistrer l’état courant de mon texte, bien que je pense savoir où je le mets (dans un fichier nommé, sur un médium spécifique), la manière dont sont organisés à cet emplacement les composants de mon texte — par exemple, les adresses des secteurs concernés, leurs tailles, la disposition des caractères et autres signes dans ces secteurs — est entièrement hors de mon contrôle et de ma connaissance.

Quand j’écris un document sur papier, le texte apparaît, mais ne disparaît que rarement. Je dois utiliser un ensemble assez complexe de “meta-markup” pour indiquer que tel ou tel signe n’existe plus dans mon texte, qu’il a été remplacé par un autre, etc. Le système sémiotique auquel appartient ce markup sera entièrement le mien (exception faite des signes de correction imposés par une maison d’édition). Plus significativement, chacun de mes bouts d’écriture a sa propre existence physique, qu’il m’est impossible d’ignorer, surtout si j’ai un petit bureau, ou un bureau déjà bien rempli… Par conséquent, il me faut trouver rapidement des stratégies de stockage (ou de recyclage), qui vont déterminer les possibilités de récupérer à l’avenir mes procédures d’écriture. Ces stratégies seront déterminées, bien naturellement, par ce qui me paraît utile, ou ce qui semble approprié dans le contexte institutionnel dans lequel mon écriture prend place. Elles représentent des jugements de valeur considérés justes dans ces contextes, et c’est pour cela qu’on dit que l’histoire est toujours écrite par les gagnants, et que les archives de n’importe quelle société ont tendance à ne contenir que ce qui est valorisé par cette société. Avec l’arrivée des médias numériques, pourtant, les affordances de nos systèmes de stockage se sont transformées d’une manière fondamentale. En dépit des efforts des artistes modernistes, on ne peut lire un bout de papier que d’une seule manière. Mais l’organisation des fragments d’écriture sur un médium numérique de stockage est indépendante de son écriture; elle peut être lue de plusieurs façons. Les séquences de bits constitutives de ce document peuvent être lues (comme je le suppose assez naïvement) à travers le système de gestion des fichiers sur mon laptop. Mais ce dernier n’est qu’une espèce d’index, comprenant un ensemble de pointeurs sur des segments de stockage éparpillés sur mon disque dur. Ou bien, dans le cas où on récupère mon texte à travers un logiciel plus complexe comme un blog sur le réseau, les traces de mon texte sont hébergées dans une base de données en Californie, sur une machine que j’ignore totalement. Mais il demeure possible de récupérer ces mêmes séquences de bits en adressant n’importe quel système de stockage d’une autre manière, tout à fait différente du système d’accès prévu, que ce soit le système de fichiers sur mon laptop ou le blog, qui (je croyais) représenterait la seule structuration correcte de mon texte. Au contraire. Pour le texte numérique, la structuration est contingente, protéenne.

Ces morceaux écrits, comme je l’ai déjà souligné, pouvaient ne contenir que des matériels raturés, ou des signes qui ne servent qu’à indiquer la manière dont d’autres signes devraient ou pourraient être affichés ou intégrés dans un texte visible. D’où des problèmes pour l’archiviste, et un défi supplémentaire pour la critique textuelle. En acceptant une boîte de papiers comme dépôt, l’archiviste peut raisonnablement supposer que les parties savent exactement ce qu’elles sont en train d’offrir. Mais, quand l’archiviste accepte en dépôt un disque dur, peut-on envisager que les déposants sachent quelles traces d’activités sur l’internet ou quels fichiers supprimés restent encore à découvrir à l’intérieur, au-delà des matériaux proposés et visibles? Un récent rapport américain du Council on Library and Information Resources s’est interrogé sur ce problème, justement perçu comme un vrai défi pour l’éthique professionnelle, qui nécessite une mise à jour des standards de contrats de dépôt. Mais je demande aux critiques textuels ici présents — si vous pouviez accéder à l’historique de navigation sur internet de, disons, Joyce ou Flaubert, hésiteriez-vous à y aller, par crainte de la violation de la loi sur la vie privée? Peut-être moins chimériquement, si vous pouviez récupérer chaque étape de l’écriture d’une oeuvre de l’importance du Satanic Verses de Rushdie (ce qui sera en effet le cas) — chaque rature, chaque ajout, chaque déplacement de mot — de quels outils auriez-vous besoin pour gérer une telle richesse? Les outils et les méthodes élaborés jusqu’à présent sont tous à la mesure de ce que nous pouvons comprendre: c’est l’abondance de ces informations dans le monde numérique qui nécessite de repenser ces outils et ces méthodes.

Je termine en soulignant encore que le texte numérique serait une construction, non seulement au sens qu’il est composé de plusieurs séquences fragmentaires de bits, mais aussi au sens que ces séquences reprennent de l’information à plusieurs niveaux. Les mots seuls ne suffisent pas: les documents numériques contiennent inévitablement un balisage, dont une grande partie est (selon le terme du philosophe anglais J. L. Austin, repris notamment par Allen Renear) performative — il détermine la nature du texte. D’où l’importance, pour le critique textuel numérique, de comprendre le balisage et les technologies qui y sont associées. Mais vous vous attendiez probablement à ce que je vous dise cela…

Does genetic criticism have a future without digital forensics?

This is the text of a presentation I gave at the ITEM’s general symposium on the future of genetic editing, held in Paris on 31 January 2011. I started writing it in French, switched to English for speed, translated it all into French (with the invaluable assistance of my colleague Nadine Dardenne), and then re-Englished it for this version.

I’d like to introduce you to an emerging field called "digital forensics". This term covers a set of techniques and theories originating in the domain of criminal justice, but also of major importance for the archiving and study of born-digital objects considered from a cultural heritage perspective. The need to plausibly identify traces of words recorded on hard or floppy disk, and to reliably associate them with a specific writer, even after their deletion, is a goal which torments the textual critic as much as the police officer or secret service agent. In both cases, a knowledge of the affordances of digital storage systems is needed, to know what they make possible and what they conceal. In both cases, there is a need to balance probabilities when seeking to establish plausible evidence-based conclusions. Ignoring these possibilities is also an option, of course. We could consider the history of a text to be no more than the history of its various embodiments on those sheets of paper we like so well. We could abandon any attempt to investigate the means by which those embodiments have been achieved. But in that case, we have to give up on the majority of current artistic discourse, which is born digital, lives and evolves digitally, and dies in the digital archives of Mr Google. The objects studied in the human and social sciences are increasingly conceived and stored only in digital form; that is why it is essential to rethink and transform the toolkit we use to archive and analyse them. The author’s computer and disks, portable telephone, and the virtual spaces they use on the Internet are taking over from their notebooks, their drafts, and their manuscripts. We must re-equip the researcher with an understanding of the principles of digital storage to complement an understanding of analog writing. The choice is simple: either redefine diplomatic studies to include the digital world, or abandon any attempt to study the textual genesis of modern works.

What are the components of this redefinition? I propose a readjustment at two levels: the intellectual, and the substantive.

At the intellectual level first, we need to re-appropriate a proper understanding of information studies within the humanities disciplines. Despite more than two decades of "humanities computing", now rebranded as "digital humanities", there is still an astonishing amount of ignorance about what the computer can and cannot do. Partly this is one of the results of the emergence of computing as a mass market phenomenon. Commercial imperatives restrict usage of the infinitely plastic computer to certain platforms, transforming a universal engine into a mono-functional toy. Unsurprisingly, therefore, we still hear people assert that this reductive technology perverts human intelligence by transforming it into transient patterns of bits. Or, at the other extreme, we still see evidence of the eternal desire for the divine, now appearing as a tendency to attribute conscious intelligence to effects of scale (for example crowd sourcing, neural nets, data mining…). Maybe some of us need to adjust our mental framework to deal with the information age, just as our ancestors adjusted theirs to deal with the steam age, but such an adjustment is a matter of expanding our perceptions, not transforming them. In the French language, a computer is something which puts things in order: the word ordinateur even has religious overtones, suggesting "ordination" and consecration. In the English and German languages, it is just a machine that "computes".
But the things that a computer puts in order are not just numbers: it is a machine above all for organizing any kind of sign, for re-encoding semiotic systems of all kinds. This is why I have always maintained that computer science is more a branch of the humanities than it is of engineering or mathematics. At the material level, I propose an extension of the knowledge expected of those undertaking philological study. Such people are expected to acquire a detailed understanding of typographic or paleographic technologies. There is an urgent need to expand those skills to embrace the digital medium.

I conclude with a brief discussion of a few components of the understanding that future genetic editors need to acquire. When I write a text on my laptop, the text appears and disappears on the screen under the control of some piece of software with which I am interacting via a keyboard. The traces which constitute my text are of two kinds — letters, and what we may call meta-letters: codes which determine how the text should be displayed or processed in some way. (Another word we might use is markup.) I may or may not be aware of all of these — some (the punctuation, for example) are almost a part of the semiotic system I call "natural language", so I am very aware of them; others — the carriage returns, deletion characters, etc. — seem less salient, and I expect the machine to deal with them. In the same way, the codes my word processor inserts to produce special effects such as changes of font or colour seem to belong to some other semiotic level entirely. But signs at all three of these levels are what constitute my text. The digital text I create starts its physical existence as detectable changes of state in the dynamic part of my computer’s memory, but very rapidly it is transferred to a more permanent form, somewhere on my hard disk, or on some other store. Usually this will be done automatically by the software environment: critically, it will happen without any knowledge or intervention on my part. Even when I do deliberately request that the current state of my text should be stored away, although I may think I know where I am putting it (in a file with a particular name, on a specified physical medium), the way in which the components of my text are organized at that location — the order and number of blocks of characters and other signs represented — is entirely beyond my control or knowledge.

When I write a text on a piece of paper, signs appear, but rarely disappear. I have to deploy quite a complex range of meta-markup to indicate that some sign is no longer significant or has been superseded by another, but the semiotic system to which that meta-markup belongs is entirely my own (unless forced on me by a publisher in the shape of proof-reading marks, of course). More significantly, each of my scraps of writing has a physical existence which forces itself on my attention, especially if my desk is small, or my office already crowded. Consequently, I will rapidly adopt recycling or storage strategies, which effectively determine the future re-traceability of my writing processes. Those strategies are naturally determined by what is useful or perceived as appropriate by myself or by the institutional context in which my writing takes place. They represent value judgments deemed appropriate within that context, and that is why (as they say) history is written by the victors, and why the archives of every society represent and maintain what that society values.
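To make the earlier point about letters and meta-letters a little more concrete, here is a minimal sketch of my own (it was not part of the talk; the file name and markup are invented): it simply shows that the byte sequence recorded on disk contains rather more signs than the words I think I wrote, and says nothing at all about where on the medium those bytes actually end up.

# A minimal illustrative sketch (mine, not the talk's): the byte sequence that lands
# on disk mixes the "letters" I typed with "meta-letters" (markup and encoding codes)
# that I may never have consciously typed. File name and markup are invented.
from pathlib import Path

visible_text = "the words I think I wrote"
marked_up = "<p rend='italic'>" + visible_text + "</p>\n"   # markup wrapped around the words

doc = Path("scrap.xml")
doc.write_text(marked_up, encoding="utf-8")   # where and how the bytes land is the OS's business, not mine

stored = doc.read_bytes()                     # what is actually recorded on the medium
print(stored)                                 # letters, angle brackets, attribute codes, a newline...
print(len(stored), "bytes stored, of which the visible words account for only",
      len(visible_text.encode("utf-8")), "bytes")

Even in this toy case, the signs I am conscious of writing are only a subset of the signs that constitute the stored text.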
With the advent of the digital medium, however, the affordances of our storage systems change fundamentally. Despite the best efforts of modernist artists, you can only read a written scrap of paper in one way. But the organization of written fragments on a digital storage medium is independent of their writing, and they can thus be read in many ways. The blocks of storage constituting this text may be read, as I naively think they should be, via the file system on my laptop, which contains a number of pointers indicating more or less contiguous segments of storage scattered across my hard disk. They might be recovered via a more complex piece of software such as a networked blog, which stores my text as records on some database system in California. But it is also possible to recover the same written fragments by addressing those storage systems in an entirely different way, by-passing the intermediate access systems (the file system, the blog) which represent the "organization" of my text. In the digital text, organization is contingent and protean.
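Since I have been talking about reading storage against the grain of its intended access systems, here is an equally hedged sketch of the crudest form such recovery might take (again my own illustration, not a tool discussed in the talk; the disk image name is invented): scanning a raw image of a medium for runs of readable text, without consulting the file system at all, so that fragments of deleted files surface alongside everything else.

# A purely illustrative sketch (not from the talk): "carving" readable fragments out of
# a raw disk image while ignoring the file system, so that deleted material can surface.
# The image name is hypothetical; real forensic work uses dedicated tools and write-blockers.
import re
from pathlib import Path

def carve_text(image_path, min_len=20):
    """Yield (offset, fragment) pairs for runs of printable ASCII found in a raw image."""
    data = Path(image_path).read_bytes()      # the whole medium, not just the "files"
    pattern = rb"[ -~]{%d,}" % min_len        # any long-enough run of printable bytes
    for match in re.finditer(pattern, data):
        yield match.start(), match.group().decode("ascii")

# e.g. an image previously made with something like: dd if=/dev/sdb of=old_floppy.img
for offset, fragment in carve_text("old_floppy.img"):
    print("0x%08x  %s" % (offset, fragment[:72]))

Everything such a scan turns up is, of course, exactly the kind of material whose critical and ethical status I am worrying about here.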
Those written fragments, as noted above, may actually contain nothing but material that has been deleted, or signs that serve only to indicate how other signs should be, or might be, displayed or integrated into a visible text. The first case poses problems for the archivist, as well as a challenge for the textual critic. When accepting a box of papers for deposit, the archivist can reasonably assume that both parties know exactly what is being handed over. But when the archivist accepts a hard disk for deposit, is it equally likely that the depositor will know what traces of internet activity or deleted files may remain to be recovered from it, in addition to the intended and apparent materials? A recent American report from the Council on Library and Information Resources agonizes considerably over this problem, which it rightly perceives as a challenge to the maintenance of professional ethics, necessitating a reappraisal of such deposit agreements. But I ask the textual critics here present — if you could have access to (say) Joyce’s or Flaubert’s web browsing history, would you hesitate to examine it on the grounds of a breach of confidence? Less fancifully, if you could (as you will soon be able to) recover every stage of the writing of a great work such as Rushdie’s Satanic Verses, every deletion, insertion, and movement of every word, what tools would you need to make sense of that richness? The tools and methods elaborated so far have been built to the measure of what we already know how to handle; it is the sheer abundance of information available to the digital textual critic that necessitates a rethinking of those tools and methods.

I close by underlining again the fact that the digitized text is a construction, not only in the sense that it is composed of fragmentary byte sequences, but also in the sense that those byte sequences carry information at many levels. The words alone are not enough: digital documents inevitably contain markup, much of which is (in a term Allen Renear borrows from the English philosopher J. L. Austin) performative — it determines what the text is. Hence the importance of a proper understanding of markup and markup technologies to the digital textual critic. But you probably expected me to say that.