
Encoding the history of the OuLiPo

At the beginning of February, I had the pleasure of co-organising (with Sebastian Rahtz, Camille Bloomfield, and Hélène Campaignolle-Catel) a workshop on data capture, as the second event in the Algorithm Seminar Series which forms part of an interesting ANR-funded project called DifdePo. The project is a collaboration between the BnF and Ecritures de modernité, a research unit located at Paris III, and its objectives include the creation of a TEI-based digital archive of the archives of the OuLiPo, which are currently stashed away in boxes at the Bibliothèque Nationale’s Arsenal depository. The papers include letters, photos, press cuttings, postcards, drafts, and notes of all sorts, but for the purpose of this exercise we decided to focus on the records of the OuLiPo’s regular meetings, which began back in the early 1960s. The archive has already been catalogued, and work is in hand to produce digital images of a sizeable proportion of it. The object of our workshop was to explore ways of transcribing these documents, given that the project has very little funding and will therefore have to rely on the goodwill of volunteer transcribers, enthused by things OuLiPien but maybe a little deficient in TEI knowledge.

About a dozen people participated, most of them surviving to the end of the day. We began by asking them to transcribe a page from a small collection of pre-selected digital page images, using Word. (I freely admit to a degree of smugness on discovering at the last minute that the teaching room was initially equipped only with old-style doc-producing Word, which had to be upgraded to a more modern docx-producing version at rather short notice by the unflappable Joël.) This exercise demonstrated, as we had hoped, quite a bit of variation in what exactly should be transferred from the image to the text, and on what editorial principles, thus motivating a useful initial discussion about the principles and praxis of text encoding. One of the participants proposed (unprompted) the principle of “fidelity” to the source, while another argued repeatedly for “capturing the meaning”.

Once lulled into a false sense of security by this exercise, participants were exposed to the weirdness of an XML-editing environment using everyone’s favourite XML editor oXygen and my usual tutorial — create a document, learn how to tag parts of it, learn how to manipulate the structure, etc. We then offered them a more demanding workflow, involving first capturing a document in Word using a Word Template which defined Styles to highlight a number of significant features (headings, list items, etc., but also personal names etc.), secondly converting this to a TEI form, using OxGarage and a specialised profile, thirdly looking at (and possibly modifying) that in oXygen, and then converting it back to Word to confirm the feasibility of round-tripping. Sebastian Rahtz of Oxford (whom God preserve) invested quite a bit of pre-workshop effort into setting up the necessary infrastructure for this, and making sure that it all worked correctly on the day. He also made it possible for us to inflict on the encoders a third alternative approach, based on an experimental installation of Ben Brumfield’s “From The Page” crowd-sourcing prototype software. I had expected this to be everyone’s favourite, but (maybe because we had already by then sensitized them to the delights of structural markup) our encoders seemed to find that the simplicity of its interface made it hard to take seriously. We had prepared tutorial scripts for each of the three approaches (TEI source code available from my tei-fr repository, if you’re interested), so I was able to spend some of the time wandering about taking photos of hard-working encoders.
By the end of the day, everyone had tried all three approaches, and everyone had produced a couple of TEI XML files conforming to a simple transcription schema I had prepared earlier. We collected them all up, and Sebastian showed how our pretend archive could be displayed on a web page, complete with corresponding page images, vocabulary lists, and a personography. This was (of course) all done with a straightforward customization of the standard TEI-HTML stylesheets, now available in the Stylesheet package as part of the DifdePo profile.
Conclusions? We still don’t really know whether our TEI-XML transcriptions are aiming for “fidelity” or “meaning”, but we have at least demonstrated the possibility of either (or both). And we do know that the participants all seemed more enthusiastic about the customized-Word-template approach than about either raw oXygen or the (possibly over-cooked) From The Page. We didn’t explore the idea of a pre-customised oXygen author-mode interface, which might well repay the necessary investment of effort if there is a lot of metadata to be entered, for example.

 

Joel and I sample the oXygen

I take the liberty of listing the names of the registered participants, for their greater glory:

  • Camille Bloomfield
  • Hélène Campaignolle-Catel
  • Paula Klein (Projet DifdePo)
  • Chris Clarke (Projet DifdePo)
  • Jeanne Devautour
  • Julie Bernard (Poitiers)
  • Marie Bonnot
  • Marianne di Benedetto (ENS Lyon)
  • Guillermo Hector
  • Pradeep Claassen
  • Louise Kari-Merau
  • Leïla Berlot
  • Barbara Servant (Univ Rennes II)
  • Clara de Reigniac
  • Gabrielle Bruzzone (Poitiers)
  • Claire Leroy

All affiliated with Paris III, unless otherwise indicated.

Lodelisation

Lodel (Logiciel d’édition électronique) is the name of the CMS which drives Open Editions, one of Europe’s leading open access publishers. Back in 2009, Marin Dacos announced at the TEI Council meeting in Lyon that Lodel would start using a TEI schema for its internal processing, while continuing to accept manuscripts for publication in any of the commonly used office document formats. Documents would be worked on in ODT, and automatically converted to a simple TEI schema for internal processing, from which they would be converted for publication on the web and on paper.

Documentation subsequently appeared on how to prepare documents in TEI for processing by Lodel (in French at http://lodel.org/701 and also in English). An XSD schema for it is documented at http://lodel.org/715.

This blog entry summarizes what I needed to do to a real TEI document (specifically my forthcoming title What is the TEI?) to get it to work with Lodel: the full story is implicit in an XSLT stylesheet I wrote for the purpose. Actually, when I say ‘I’, I should make clear that the conversion was in fact handled by the nice people at Open Editions, who were remarkably patient with my eccentric use of TEI, and my even more eccentric wish to generate a Lodel document directly with as little manual intervention as possible. My thanks to Jean-François Rivière and Martin Dulong from Open Editions for their helpfulness, both in steering my TEI manuscript through the process, and in responding politely to my inane questions about what on earth was wrong with my lovely tagging.

The following list shows (in no particular order) the chief changes I found necessary and in some cases a bit unexpected.

  1. As might be expected, the Lodel schema doesn’t have any of the following semantic elements, which I have found useful when marking up technical documents: <gi>, <att>, <ident>, <val>. More surprisingly perhaps, it doesn’t seem to have <foreign>, <emph>, <soCalled>, <mentioned>, <q> or <quote> either. My stylesheet turned all of them into <hi rendition="#gi"> and the like, and also generated a <rendition> element with an appropriate default style for them (a sketch of this mapping, and of the <egXML> conversion in point 4, follows this list).
  2. The Lodel schema doesn’t allow lists or quotes to be contained by paragraphs. This is a generic HTML limitation, if I understand aright, but that doesn’t make it any less annoying. Call me verbose if you will, but I often write a single para with a bit of prose, followed by a list, a bit more prose, and another list. My stylesheet had to do some clever fiddling to deal with this (tx SPQR) but this is one case where I think Lodel should be a bit more broad minded.
  3. In fact, the Lodel schema only knows about two kinds of list: type=ordered which are numbered, and type=unordered, which are not. Gloss lists are not supported, so my stylesheet had to tweak <label> elements into a <hi rend=”#label”> child at the start of an unordered list item (but with @rendition of “gloss”).
  4. My TEI documents can have lots of XML examples, which are easier to read if they are wrapped in a differently-namespaced <egXML> container. The Lodel schema requires use of the <code> element, containing either a CDATA marked section inside it to preserve the layout, or XML tagging escaped by entity references. The only problem with this is that <code> is a phrase level element, not a block, which means that some hand tweaking is needed at the Lodel end.
  5. Lodel is intended for journal articles and manages each of them separately as a distinct TEI document. Chapters of a book have to be treated in the same way, which seems a bit odd — for example, each chapter gets its own TEI header. My stylesheet splits things up rather crudely, assuming that each top level <div> within the body is intended to be a separate document.
  6. Lodel insists on having an explicit indication of the nesting level of each subdivision, using (bizarrely) the @subtype attribute on <div> with values level1 level2 etc. My stylesheet grits its teeth and generates these automatically, but I think this is one design aspect of Lodel which might merit a second thought.
  7. The Lodel schema doesn’t allow headings within anything except sections, so you cannot provide them for lists, tables, or figures without some fiddling about.
  8. Lodel doesn’t number headings for you. Even if you supply a number for a section (using the @n attribute on a <div>, as recommended in the Guidelines), Lodel will not use it. My stylesheet does nothing about this: I just decided to live without numbered sections.
  9. Lodel handles cross references, using <ref>, much as you’d expect, provided that the value of @target is a complete URL, i.e. a link outside the current document. This means you cannot cross reference other sections of the document being encoded, which seems rather an odd restriction. Put together with the foregoing lack of automatic section numbering, this can make for quite a lot of rewriting.
  10. Lodel knows about <bibl>, but not <biblStruct> or <biblFull>. Up to a point. Most of the semantic elements defined for the content of bibliographic elements (<publisher>, <biblScope>, etc.) are allowed, but it doesn’t actually do anything with them. To produce a correctly formatted bibliography, such encodings have to be converted to a fully styled version, following the requirements of the Open Edition style guide. I wrote a stylesheet to do (most of) this for one small bibliography: in the general case something much more complicated would be necessary.
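To make points 1 and 4 above more concrete, here is a minimal sketch of the sort of mapping involved; the element names come from the list above, but the sample content and the CSS are invented for illustration, and the real stylesheet does rather more.

My TEI source:

<p>The <gi>div</gi> element can contain paragraphs:</p>
<egXML xmlns="http://www.tei-c.org/ns/Examples">
 <div>
  <p>Hello</p>
 </div>
</egXML>

After conversion for Lodel, with a <rendition> generated in the header:

<rendition xml:id="gi" scheme="css">font-family: monospace;</rendition>
...
<p>The <hi rendition="#gi">div</hi> element can contain paragraphs:
<code><![CDATA[<div>
 <p>Hello</p>
</div>]]></code></p>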

That last caveat is of course true of all the rest: I’ve only tested this process properly on one text, albeit a reasonably large one, and only on a born-digital document. If you’re thinking of authoring documents in TEI though, chances are you won’t do it significantly differently from me, so some of the issues I encountered will affect you too. And, for the avoidance of doubt, let me repeat that none of this is meant to discourage anyone from using Lodel!

Poster Slamming

At the TEI 2013 member conference today, I had the pleasure of participating in the “Poster Slam”, a well established TEI ritual in which each poster-presenter is given one minute (and one slide) to introduce the topic of their poster as a means of persuading people to come to it. Preferably in verse. This year, Syd made the fatal mistake of allowing presentations in languages other than English, providing they were accompanied by a translation. So Nicolas Larrousse and I naturally presented the following poem:

Le Tageur et L’Archiviste

Le tageur ayant tagué tout l’été
Se trouva embarrassé l’avenir étant arrivé.
Pas un seul petit morceau
d’explication claire de ses travaux

Ze tagger having tagged all summer long
Found himself embarassed when the future has arrived
Not one little bit of explanation survived for all his efforts

Il alla chercher des avis malins
Chez l’archiviste son copain
Le priant de lui prêter
De la sémantique pour tout regler.

E went to ask some tricky advice from his friend ze archivist
Begging im to lend him some semantics to sort sings out

Les archivistes ne sont pas créateurs
C’est là leur moindre défaut.
Que faisiez-vous au temps chaud ?
Dit-il à cet emprunteur.

But archivists are not creators, that’s their smallest problem
What did you do during the fine weather?
He asked the borrower

– Nuit et jour à tout venant
Je taguais, ne vous déplaise.
– Vous taguiez ? j’en suis fort aise.
Eh bien! transformez maintenant.

Day and night I was tagging for anyone, if you dont mind
You were tagging? Oh that’s fine. So now you can do transformations!

Nicolas did the English bits, and I did the French bits, under the inspiration of the late great C. Trenet.

Further adventures with ODDs

This post is mostly an aide-memoire, since how to do the ODD things I want to do is not very well documented in the TEI as such.

First challenge

I have an ODD which was produced by webRoma some time ago and which (naturally) uses the traditional “exclude” syntax. I want to convert this to the new “include” format, and also to ensure that it won’t get any of the new elements added to P5 since it was first defined. I proceed as follows:

  1. I look at the source of my ODD and I see the comment Roma inserted in the <sourceDesc> “created by ROMA on Monday 21st June 2010”
  2. I go to the list of releases on the TEI sourceforge site to find which release of P5 must have been in use at that date. Judging by the dates here, it is probably release 1.6 I want
  3. Buried away in the standard release of the TEI Stylesheets there is a cool utility for converting an “exclusion” ODD into an “inclusion” one. It’s called tools/odd2nuodd.xsl and I run it like this:
saxon -p defaultSource=http://www.tei-c.org/Vault/P5/1.6.0/xml/tei/odd/p5subset.xml 
myOldODD.xml tools/odd2nuodd.xsl > myNewODD.xml

Note the inclusion of the 1.6.0 release number as the source directory to be used when the stylesheet starts looking for TEI definitions.

Second challenge

I have two or more new style ODDs and I want to compare their use of the TEI to assess their interoperability. So far, I only have an approximation to an answer for this, in part because I am too lazy to improve the scripts I hacked together for it last time, in part because it’s actually a rather ill-defined problem, and hence hard.

The approximation goes as follows:

  1. Run an XSL transformation on each ODD in turn, appending the results to a big text file listing element names and what happened to them in which ODD (a sketch of the kind of transformation I mean is given after this list);
  2. Run a perl script (ouch) on the results of (1) to produce a summary table which starts like this:
<table>
<row role='label'><cell>Element</cell><cell>lodel</cell><cell>tei</cell><cell>score</cell></row>
<!-- 2 --><row><cell><ref target='http://www.tei-c.org/release/doc/tei-p5-doc/en/html/ref-TEI.html'>TEI</ref></cell> <cell>change</cell> <cell>use</cell><cell>2</cell></row>
<!-- 2 --><row><cell><ref target='http://www.tei-c.org/release/doc/tei-p5-doc/en/html/ref-ab.html'>ab</ref></cell> <cell>use</cell> <cell>use</cell><cell>2</cell></row>
<!-- 2 --><row><cell><ref target='http://www.tei-c.org/release/doc/tei-p5-doc/en/html/ref-abbr.html'>abbr</ref></cell> <cell>use</cell> <cell>use</cell><cell>2</cell></row>

OK, work is needed in this area. But it’s a start.
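For step 1, something along the following lines would do the job. This is a minimal sketch rather than the script I actually hacked together, and the decision to report <elementRef> and <elementSpec> elements, defaulting @mode to "use", is simply one reading of “what happened to them”:

<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:tei="http://www.tei-c.org/ns/1.0" version="2.0">
 <xsl:output method="text"/>
 <!-- one line per element mentioned in the ODD: source file, element name, what the ODD does with it -->
 <xsl:template match="/">
  <xsl:for-each select="//tei:elementRef | //tei:elementSpec">
   <xsl:value-of select="concat(base-uri(.), ' ', (@key, @ident)[1], ' ', (@mode, 'use')[1], '&#10;')"/>
  </xsl:for-each>
 </xsl:template>
</xsl:stylesheet>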


Interoperability of TEI projects: apotheosis or chimera?

This was the title (it sounds better in French) of the closing talk I gave at an interesting workshop last week. A previous COST-funded meeting in Krakow had brought together Czech, French, Catalan, German, and Polish teams working on several different dictionaries of medieval Latin to elaborate the idea that maybe they could make their various lexica interoperable if only they could agree on a common format, for which the TEI seemed the most plausible candidate. Susanna Allés, the energetic organizer of this workshop, got funding for it from several sources, notably the ALLC (or European Association for Digital Humanities as it now prefers to be known). She also seems to have hit on the wheeze of inviting a number of luminaries to make the case for the TEI dictionary tagset (notably L. Romary, F. Glorieux, P. Banski); alas, in the event only I turned out to be available. Which was useful for me, since preparing for the workshop meant rediscovering all sorts of dusty and neglected (by me, though not by others) parts of the Guidelines.
The workshop was held in the CSIC (Spanish for CNRS) Institució Milà i Fontanals, which occupies a rather grand building conveniently located in the Raval, a picturesque if slightly seedy district of Barcelona to the left of the Ramblas, and we were all accommodated next door in the splendidly-named Investigators’ Residence on Calle Hospital. Barcelona is not a place for those uninterested in food and drink, and we were very well fed, in large quantities, if at strange hours and (on one occasion) after a lengthy walk up town through an unexpected tropical-style deluge. The ravioli stuffed with pears and cheese offered by the resto “En Ville” for lunch was particularly memorable, invidious though it is to single out this one occasion.

More intellectual fare was also on offer, of course. It is always a pleasure to arrive amongst a group of specialists personally unknown to me, and from a domain of which I am more or less totally ignorant, to find that word of the TEI has already reached them, and often in a far from superficial way. So I was made very happy indeed to hear Sabine Thuillier (currently working in Madrid on the Diccionario Griego-Español but ‘formed’, as they say, at the Ecole Nationale des Chartes) evangelise for the TEI as an international open source community, and impressed by the way she is implementing it in a workflow which, though its editors remain obstinately wedded to WordPerfect, is still determined to produce a respectable TEI P5 version.
Similarly, the team responsible for the eLexicon Mediae Latinitatis Polonorum, led by Krzysztof Nowak from Krakow, while maintaining a proper scepticism about some aspects of the TEI’s conceptual model, was clearly persuaded of its virtues as an open standard, notably as evidenced by both the amount of open source software (they mentioned XTF, PhiloLogic, and TXM) and the number of comparable projects (they mentioned the Anglo-Norman Dictionary, the Glossarium DuCange, and several others) using TEI. Their workflow starts with an OCR phase, since they are starting from an extensive library of source texts, and then uses LibreOffice and a customised library of styles to enhance the result to the point where it can be automatically converted to TEI, thus (apparently independently) following the same path as is used by Lodel, OxGarage, Agora, and no doubt others, to combine the user friendliness of a word-processing interface with the rigour of a TEI structured maintenance format.

Catalonia has ambitions (as posters everywhere proclaim) to become politically independent of Spain, and certainly its linguistic independence is a well established fact. As a confirmed non-speaker of either Spanish or Catalan (nor of Basque, Galician, or Portuguese for that matter) I regretfully let the interventions in those languages wash over me, and thus missed out, notably, on Jose Manuel de Bustamente’s insights on the relation between textual corpus and dictionary. I did however manage to understand the German colleagues present, since they made the effort to speak in English or French: Alexandra Gorbrecht from the Trier Centre for Digital Humanities, for example, gave a brief overview of the dozen or so dictionaries put together online at woerterbuchnetz.de with a well designed query interface. Allegedly all of these dictionaries are locally stored in TEI XML, but as this is not currently exposed one cannot tell how consistently it has been done. None of the other major TEI dictionary projects in Germany I am aware of was represented here, presumably because none of them is specifically concerned with Mediaeval Latin. I had to console myself for the absence of Werner Wegstein from Würzburg by stealing one of his examples for my own talk.
Bruno Bon and Renaud Alexandre from the IRHT in Paris had the advantage, if advantage it be, of being able to develop their proposals for an over-arching Novum Glossarium Mediae Latinitatis on the basis of the already existing complete Glossarium of Du Cange, which has been freely available in TEI-inspired XML markup for some time now, thanks to the work of Frédéric Glorieux. The idea seems to be to develop a set of proposals able to express the (not inconsiderable) variation in practice amongst these and others working on different lexica of medieval Latin in Europe, and thus create what (inevitably) Bruno suggested would be called NGML (the Novum Glossarium Markup Language). As a first step they have set up an exploratory multilingual wiki with some nice visualisation tools, based on a few sample entries taken from each of the five different lexical projects (specifically, in Barcelona, Prague, Krakow, Munich and Paris), and are inviting more. This could be fun, though I think expressing NGML as a real TEI ODD would be more of a challenge.

Susanna Allés and her graduate student Frédérique Laugrost (on secondment from the Ecole Nationale des Chartes) talked about the specific problems they faced when starting to apply the TEI to the text of their dictionary: the Glossarium Mediae Latinitatis Cataloniae. Many of these are familiar, of course: notably those which derive directly from the wish to preserve the punctuation and use of abbreviation which characterize such sources and at the same time model the logical structure which they determine. Some of these problems do however point to aspects of the current TEI dictionary model which could be improved.

I started making a short list of such points during the workshop, but sadly did not get very far:

  • too many of the proposed TEI dictionary elements relate only to modern lexicographic practice. Deciding which ones to filter out to make a kind of TEI Lite for dictionaries would be very desirable.
  • an element for “translated segment” is desired, even if it is just syntactic sugar for <seg> with a value for @xml:lang other than that of the surrounding text (a one-line sketch follows this list)
  • some dictionaries have entries which are large enough to have multiple paragraphs but there is no place for <p> in any model.entryLike element
  • when a term is identical in two or more languages, can xml:lang take more than one value (I confidently said it could, but I think I am wrong)
  • how should you mark a word which is clearly readable in the text when its meaning is entirely uncertain? (I suggested <orig>, but there must be better ideas)
  • The typology currently used for <form> combines categories from entirely dissimilar taxonomies, e.g. @type=lemma is an entirely different kind of thing from @type=compound. Likewise, the typology one might want to use for <sense> should be more to do with the way the sense has evolved. To both these points I said (in my best French) “Bof”. Or, more precisely: it is only by receiving proper input from specialists in the field — those best able to define more appropriate typologies — that the TEI progresses…
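For the “translated segment” point above, the workaround is indeed trivial; a one-line sketch, in which the use of <seg> is my reading of the sentence and the content is invented:

<def xml:lang="fr">charrue, instrument aratoire <seg xml:lang="en">plough</seg></def>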

I’m hoping that the undoubted success of this workshop will encourage the participants to form a SIG on the subject or (as Piotr Banski had previously suggested to several of them) to make an active contribution to the existing LingSig. Plenty of scope for very interesting work to come, not to mention the opportunity of returning to Barcelona for the paella which I somehow failed to find time for on this occasion.

TEI++: an advanced training course

This summer I was invited by the CAHIER consortium, in partnership with the Corpus Écrits and IRCOM consortia, to organise a four-day workshop on “advanced TEI”. I proposed a mode of operation dividing each session into two parts (presentation, then hands-on practice), and a programme organised along three axes:

  1. modelling the resources and selecting the significant features
  2. encoding those models explicitly in TEI
  3. exploitation and analysis of the structured resources

I had also proposed sharing the teaching with a few French experts. The course took place at the Institut de Linguistique Française in Paris from 19 to 22 November 2012.

Here, in summary, is what actually happened…

Day 1

Proceedings began on the fifth floor of the ILF, a nice light room but not quite big enough for the 18 or so participants. This was not altogether bad, since the consequent huddling together encouraged the emergence of a cohesive group identity, which I also tried to encourage by getting the participants to place themselves on an improvised graph along two intersecting axes: literature vs linguistics, and researchers vs support staff. Most people turned out to be in the bottom right quadrant, i.e. ingénieur + linguistique, but there was also a smattering in the littéraire + scientifique box, to say nothing of two sociologues who insisted on positioning themselves in the middle of the littéraire vs linguistique axis. The rest of this first introductory session was given over to a rapid review of some fundamentals of encoding, and a sampling of the websites of half a dozen real TEI projects around the world, which might have gone better had I rehearsed it more, but got the message across that quite a few very different projects were doing seriously cool stuff with the TEI.

After coffee, I introduced them very quickly to a spot of data analysis, using as vehicle the celebrated postcard archive of M. Marcel Virgolos, and Lauranne then took over for a refresher on using oXygen. They marked up a postcard or two, and reviewed commonly used TEI tags for each of some pre-selected texts: a French novel, a poem, and a play. Most students completed all of these, mastering most of the key features of the oXygen XML editor quite rapidly, but I think we did not allow enough time for this session, given the mix of abilities present.

Lunch in the form of a large cardboard box containing a “plateau” of cold cuts, salads, bread roll, plastic cutlery etc. duly appeared and was despatched. Thus strengthened, I embarked on an all-singing all-dancing overview of all the TEI modules, and what you can do with some of them. That took about an hour, but helped motivate Lauranne’s traditional exercise on using Roma to make a schema by reduction from TEI-ALL which followed it. By the end of the day, everyone seemed quite comfortable with the idea of personnalisation de schéma, and reasonably convinced that they might find what they wanted to mark up somewhere, somehow in the TEI.

Day 2

On this and subsequent days we were displaced to a much better (because bigger) teaching room. I began the day in seriously magisterial mode by explaining (many of) the components of the ODD language, and why you might care to know about it. This was quite punishing, both for me and for some of the less technically-minded participants, but no-one visibly fell asleep. For the subsequent practical, Lauranne had prepared a script in which participants submitted their own texts to analysis by OddByExample, generating a personalised ODD. The majority of course had not come with their own texts, or had texts not in TEI P5, so they ran the exercise on a rather inadequate sample of the Virgolos corpus instead. With a bit more prep, I think this could be a really fun exercise and an excellent way of getting people to learn ODD properly. It also revealed a bug in OddByExample which Sebastian graciously fixed overnight.

Lunch being cardboard plateaux once more, I went for a stroll round the nearby Parc de Montsouris to see some of the gray autumnal Paris daylight while it was available. I then droned on for well over an hour on the subject of the TEI Header, which I am now selling as “metadata for the rest of us”. They made me do it. The exercise we adopted was the French version of creating an ms description for the W. Owen manuscript, last seen in Berne. This is quite useful for the purpose, especially in combination with the following exercise on marking up a transcription of that manuscript; time permitting, however, I would have preferred to use a different French manuscript for both. If I had one.

Day 3

Wednesday I had carefully billed as the “journée des guest stars”, since the idea was to make other celebrated French TEI enthusiasts share in the work. So we began with a presentation about TEI recommendations for dealing with named entities and their names, given by Alexandre Gefen. Since the room contained more than a few French linguists, this gave rise immediately to a heated debate on the philosophical basis and nature of nominal reference. The exercise in marking up names was a little under-prepared, and one of the students insisted on asking the Emperor’s New Clothes question (why bother?), which was answered by another participant citing (at length, and with enthusiasm) the work of Nicole Dufournaud inter alia, which was nice. Since Alexandre had to leave early, I filled the gap by moving my brief overview of tools options up, giving a plug for Sebastian’s stylesheets, and letting them experiment with OxGarage, which they loved.

For lunch we went to the brasserie down the road, which was a much, much better idea than the plateaux. Everyone got very jolly and there was a fair amount of shouting. Our second invited expert of the day was Bertrand Gaiffe, from ATILF, who delivered an excellent pair of lectures about the encoding of oral and linguistic data respectively, also involving a fair amount of interaction and discussion, but not much actual tagging, since TEI interfaces for the appropriate tools remain elusive, best efforts of the LingSig notwithstanding.

Day 4

The final day began with a presentation on the various TEI orthodoxies concerned with the editing of primary resources, given by our third invited expert: Alexei Lavrentev from ICAR. Participants were then offered the choice of doing either the reverse transcription exercise or the visual encoding exercise from Berne; both options were taken up, though I was too busy sorting out the website to see how far they got with either.

After another nice brasserie lunch (roast duck), I spent about 15 minutes showing how to use TEI Boilerplate, which went down remarkably well. “Génial!” they cried, as they saw all that tricky encoding in John’s demo file being rendered beautifully by Safari (France is still largely the land of the Mac). The rest of the afternoon was devoted to a more ambitious TEI-savvy piece of software: TXM, from the textométrie project at Lyon. Alexei showed us what it was, and demonstrated how to make it sit up and do tricks with the Graal and Brown corpora, which participants had pre-installed. He also showed it working with a selection of literary texts prepared for use throughout the workshop.

Verdict

I think this workshop worked much better than it deserved to (I always think that). All the participants seemed very happy at the end, and several of them said they had learned more than they expected to. I think the organisation of the programme made good sense, and the balance of exposition and exercise was approximately right, though we probably didn’t do enough to make the practicals consistent and relevant. A few parts suffered from lack of preparation, and I think we could have done more to get a single case study working throughout the course of the four days, in addition to the various more specialised materials we introduced. But next time we’ll definitely get everything right. Thanks are due to the participants, the organisers, and all my co-formateurs, especially Lauranne for calming me down at moments of high anxiety.
All the materials used in the workshop are available in PDF starting from http://meet.tge-adonis.fr/sites/default/files/2012-11-initial.pdf. Dedicated TEI hackers may also be interested in the XML sources of the presentations which are available from my svn repository at http://code.google.com/p/tei-fr/source/browse/#svn%2Ftrunk%2FTalks%2F2012-11-paris

Digital Palaeography meets Optical Glyph Recognition in Rouen

HDDA2012 (“Historical Documents in the Digital Age”) at the University of Rouen turned out to be unusual (for me at least) in a number of respects. Firstly, it was organised as part of a project (“DocExplore”) funded under the Interreg framework of the EU, and hence attended by people from both sides of the Channel, rather than being exclusively French. As a consequence the presentations were in both English and French, with apparently quite successful simultaneous translation, though I did not test this for more than a few minutes. Secondly, I didn’t have to explain to anyone what the TEI was, and why it might be interesting; everyone seemed to know all about that already, even the informaticiens. And thirdly, there was no-one else from Adonis present, so it fell to me to ask the man from the Archives Nationales why they did not provide an OAI feed into Isidore as well as into Europeana (they’re planning to).

There were about eighty attendees, most of whom survived the full day and a half of invited presentations and round tables. There was a bit of audience interaction, but not much, and surprisingly perhaps only a couple of desultory tweeters, one of whom doesn’t count since it was me. There was however plenty of time for old-fashioned face to face discussion over lengthy pauses for sustenance in Rouen’s rather nice Maison de l’Université. As far as I can tell there were roughly equal numbers of archivistes and informaticiens, but they did not mix a great deal.

Proceedings were kicked off with two very good “state of the art” summaries of what’s going on in the way of cultural heritage digitization in France by J F Moufflet from the Archives de France, and Matthieu Bonicel from the BNF. I particularly liked the latter because of his optimism about using the technology to break down the walls between the silos of digital artefacts being created everywhere, pointing to evidence from maybe half a dozen great projects previously unknown to me. Both of these speakers pushed all the right buttons about open public access and accountability, transparency and integration of resources, respect for standards etc. thus making quite a contrast with the following speaker, from the archive of Canterbury Cathedral, who found herself having to explain why they’d made a deal with Satan in the form of findmypast.co.uk to get their parish records database online, thus perhaps revealing the very different business models in which archivists operate on either side of the channel.

The second session was given over to tools for transcribing and indexing all those lovely digital images. Stephane Nicolas from LITIS, the Rouen team responsible for software development, laid out clearly the challenges and advantages of integrating transcription and images. Two rather more technical presentations followed: one from Franck Lebourgeois which felt a bit like a graduate seminar about the mathematical basis of OCR, and another from Marcal Rusinol from a Spanish lab about vision processing techniques for word recognition or (as it seems it is called in the trade) “word spotting”.

The last session of the day was billed as being about digital paleography proper, and was divided appropriately between two contributions from palaeographers (Elizabeth Lalou from Rouen, and Marc Smith from the Ecole Nationale des Chartes) and two computer engineers (Veronique Eglin from LIRIS and Richard Guest from Kent). The former group clearly understood the potential the technology offered to address some long standing difficulties in the treatment of e.g. allographic variation or the use of frequency statistics in the definition of “writing style” ; the latter group maybe had a harder job in making explicit just what the state of those particular arts currently is.

The second day I arrived a bit late, for some rather odd discussions, again revealing extraordinary differences in attitude on either side of the Channel, about the “ludique” use of IT in cultural heritage applications, i.e. how to make cool exhibits in museums. It began with a moderately dreadful intervention from a professional French developer of such things, but was rescued by a man from the British Library called Clive Izard who gave a historical survey of the BL’s flirtations with technology, from the days of the Information Access programme (which, I may say, was one of the funders of the BNC) up to the present, third, generation of the “Turning the Pages” application. He was followed by another excellent (and splendidly named) speaker, Clotilde Vaissaire-Agard from Le Havre, who reminded us of the need to place the scholar at the centre of the picture (I was reminded of a former OUCS colleague’s plaintive cries of “What about the users?”). She also endeared herself to me forever by citing the Manuscriptorium project (remember ENRICH?) as an outstanding example of what the technology facilitated by making it possible to share metadata and digital resources across institutional boundaries for the benefit of manuscript scholarship.

The final session, though labelled as concerning that old warhorse “Is there such a thing as Digital Humanities?”, actually contained three very good and complementary talks intimately concerned with the themes of the conference. Alison Wiggins from Glasgow’s Bess of Hardwick project gave a convincing account of their attempt to ground the project in practical user-focussed concerns (she cited Claire Warwick et al’s LAIRAH as one of their inspirations); Dominique Stutzmann from IRHT raged, with ample evidence, against the lack of decent interfaces in transcription software; and finally Alixe Bovey from Kent gave a well illustrated overview of the strengths and limitations of various interfaces developed for interacting with the physicality of medieval sources. She concluded by lamenting, in the way people do, the absence of smell associated with digital images, and the mismatch between the haptics of the touchscreen and the codex. I was more impressed by Alison’s comment that it was more useful to know what kind of paper Bess of Hardwick wrote to the Queen on than it was to be able to reproduce it.

The price of a new laptop, or My love affair with Ubuntu

I think the first unix system I installed on a laptop was release 0.8 or thereabouts of Knoppix. You could take an innocuous-looking CD, stick it into a crusty old PC, tweak its BIOS to boot from CD, and bingo, you were running a real Linux without having touched the Windows hard disk itself. How cool is that? OK, it took the best part of a day to load Knoppix from the CD into memory, so I pretty soon found the button for copying Linux itself onto the hard disk of my elderly laptop, but the “Live CD” concept had much to recommend it. You didn’t have to drop the Windows security blanket, and you could go on using your Windows filestore. That must have been around the year 2001 or so: at OUCS we started using Knoppix as the basis for a series of TEI give-aways at workshops and conferences – at first on CD, then on USB sticks, as the technology improved.

And then came Ubuntu. I think my first real Linux laptop was a Thinkpad, on which I installed the first of an entire menagerie, from the Warthog in 2004 to the Quetzal I installed yesterday. Yesterday also, I switched allegiance from Lenovo to Samsung, and installed the Quetzal on a Series 9. It looks a bit like a MacBook Air, but it’s not evil.

To get a new laptop in 2004, I had to write a one page justification for the boss, fill in numerous forms, send them off to the University’s approved supplier, and wait a few weeks for the machine to arrive. Then it would take a few minutes to unpack the machine, and at least 3 days to get Linux installed on it, much of it involving me pestering smarter people with better things to do.

In 2012, it took me just a few minutes to click on a button and order a new laptop, which was delivered to my house in about 24 hours. It took rather more than a few minutes to get it out of the packaging, but installing Linux took a couple of hours max. I downloaded an ISO image from the Ubuntu website. I made it into a bootable USB key using some software recommended by some other website. I stuck the key into the side of my new laptop. I tweaked its BIOS (just like the old days) to boot from the USB drive. And everything Just Worked. OK, I had to make difficult decisions like what language I use, what my name is, what graphic I wanted to represent me, and whether I wanted to wipe out the Windows partition on this machine, so it was maybe a little more than an hour or two, but that’s it. Wifi works, sound and graphics work, wireless mouse works (good, as my fingers don’t understand trackpads) … I can even imagine getting used to the Unity interface (which seems to be a bit more stable than it was under the Pangolin).

And that, dear reader, is when I realise that the real price of a new laptop is yet to be paid. OK, I expect to have to install some favourite bits of software to get my familiar working environment back (digikam, subversion, emacs, oxygen, chrome, dropbox …) : that doesn’t take long. I can remember how to re-configure thunderbird to collect my mail: what I’d forgotten is just how long it takes to get nice new fresh copies of all the old mail which was sitting gathering dust on the IMAP server. I know how to check out all the stuff that actually matters from the Sourceforge and Googlecode TEI repositories : don’t underestimate how long that takes either. It took me over an hour just to remember how to re-set my password on the OUCS subversion repository.

My goal for today was to be able to rebuild TEI P5 from source, and crank out nice PDF slides from TEI source (that’s the sort of thing I do every day, to be honest). On top of what I already had, I needed to download and install the oXygen XML editor, Chrome, and Dropbox. I needed to install nice new packaged 64-bit versions of jing, trang, onvdl, rnv, tei-emacs, openjdk-jdk, latex-beamer, texlive-xetex, and texlive. That all went smoothly, except that the packaged version of rnv was a 32-bit one, and so I had to rebuild it from source. Same problem with Acrobat Reader, but there is, of course, no option to rebuild that from source, so I will probably have to live without Mr Adobe’s fine products for a while.

Conclusion? Nothing surprising I suppose: installing your own system is still a good feeling, seductive enough for enough people that other people put lots of effort into making it much simpler to achieve. Hats off to the unsung labourers in the Canonical salt mines, and the open source community generally, who just go on doing what they have always promised they would, keeping the faith.  It’s a relief that manufacturers like Samsung let them get away with it.

And now, it is the evening of my first full day with psammead and I think another glass of wine is in order. If I could only learn how to use this wretched trackpad…

Teaching TEI in Bern

At the invitation of Bénédicte Vauthier from the University of Bern (whose indefatigable enthusiasm, fund raising expertise, and hard work organizing the event, are all hereby gratefully acknowledged) I spent the last week teaching the French strand of a four and a half day long trilingual TEI training session, aimed primarily at people working on the encoding of modern primary manuscript sources, in collaboration with Christof Schoech and Alejandro Bia, who provided the German strand, and Elena Pierazzo who provided the Italian one. We did try a bit to co-ordinate our efforts so that the material covered in each strand was the same, though our methods of teaching, and some of the materials used naturally varied. The course also included a full day of invited presentations about a wide range of Swiss projects and a series of invited plenary presentations. The first day was hosted by the Swiss National Archive who showed us some of their collection, which includes a splendid exhibit of authors’ typewriters. More details of the whole event are, or will be, available elsewhere; there are also a few photos here. But mostly this blog entry just reports on the French teaching strand, and may be of most interest to others undertaking similar quixotic enterprises.

Each of the three strands contained six sessions, each held in a different teaching room (the French one equipped with PCs, the German and Italian ones with Macs) and combining a 45-60 minute presentation with a 60-90 minute scripted hands-on exercise. The programme, also common to all three strands, was as follows:

  1. Introduction to basic ideas of encoding, the distinction between document and text, and a refresher on XML. In the practical session, students were shown how to use oXygen to create a simple document from scratch, using just the TEI Bare schema.
  2. Introduction to TEI, focussing on how to tag basic components of a text, with a brief overview of some commonly encountered TEI elements, using French examples. Practical session using TEI Lite as schema. In the first part, students used oXygen to add tags to a plain text version of a sonnet by du Bellay; in the second, they used OxGarage to convert a Word file containing a scene from Jarry’s Ubu Roi into TEI, and then used oXygen to improve its tagging.
  3. The TEI Landscape with a slide or two on each of the 23 chapters in the Guidelines, followed by a quick guide to using the TEI website as a means of exploring them. In the practical session, students first use oXygen to create a tei_all file, and then use Roma to create a schema which is better constrained to the encoding of modern mss.
  4. TEI encoding of editorial interventions and simple manuscript transcription. In the practical, students reverse-engineer a rather heavily edited version of the start of a Wilfred Owen manuscript into a more diplomatic TEI transcription in oXygen, using the schema they made in the previous practical (which is also used for the rest of the course).
  5. TEI Header, focussing on its use as a repository for metadata of importance to librarians and archivists, but also to scholars and end-users. In the following practical session, students create a full msDesc for a manuscript letter, using oXygen to tag the various parts of a plain text catalogue entry with which they are supplied.
  6. TEI documentary editing, starting with facsimiles and zones, and moving on to include sourceDoc, line, mod, metamark, and other recent additions to the TEI family, notably change and @change. In the practical session, students used Inkscape to identify various zones within a page from a Dürrenmatt manuscript, transferred these to a basic TEI transcription of the same file, and then ran an XSLT conversion to combine these into an SVG file in which the transcript and the graphic were merged in such a way that a piece of javascript could display the changes documented in the TEI independently (a sketch of the kind of markup involved follows this list).
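By way of illustration, here is a minimal sketch of the kind of markup that final exercise produces; the coordinates, identifiers, and text are invented, and the real Dürrenmatt page is considerably richer:

<sourceDoc>
 <surface ulx="0" uly="0" lrx="1200" lry="1800">
  <graphic url="page1.png"/>
  <zone xml:id="zone1" ulx="100" uly="150" lrx="1100" lry="310">
   <line>erste Fassung dieser Zeile</line>
   <line><mod rend="strikethrough" change="#stage2">gestrichen</mod> neu geschrieben</line>
  </zone>
 </surface>
</sourceDoc>

The @change value points at a <change> element in the header’s <listChange>, which is how the changes documented in the TEI can be picked out and displayed stage by stage.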

Archives of all the materials I used for these sessions are available for download in PDF form (Exercices; Talks).  The TEI XML source is also available, under the usual CC-BY licence, from the MEET subversion repository.

In the final session, students were invited to apply their experience to their own materials, samples of which they’d been instructed to bring with them. Most of them set to immediately and started tagging, which was quite encouraging. I was able to do some hand-holding with the one or two students who had somehow failed to understand what was going on earlier, and the others all started banging away in earnest.

Some further comments on the sources I used for each of these talks, and some things I found it necessary to change follow….

  1. The pattern and content for Session 1 is pretty much traditional now. Even so, I managed to find some typos to correct in the exercise script — the keyboard shortcuts for the Windows, Mac, and Linux versions of oXygen are all subtly different, and my script is indecisive as to which it is using. I noted, as always, that it’s hard for beginners to understand that “I want a new line here” really means “I want to close this paragraph and start a new one”.
  2. I put the talk for session 2 together using some old talks introducing TEI Lite. I included a bit of the Punch exercise for old times’ sake, but mostly it derives from a very old talk introducing TEI Lite, spiced up with French examples taken from those in the Guidelines. Some of these are not tagged in quite the way I would have expected, or don’t demonstrate things as clearly as they might, so the whole lot probably needs to be reviewed for consistency. The first part of the exercise for this session went very well, with everyone suitably impressed by the effect of tweaking the CSS to render the document in “Mode Auteur”. The second part was less successful (though not everyone got to it, through lack of time), largely because of improvements to OxGarage since I last revised the script; however, I remain convinced that this is the right place to introduce the possibility of starting from a Word document.
  3. The “TEI from chap 1 to chap 23” talk in session 3 is quite a stretch to do in 40 minutes, especially if one is easily distracted, but remains useful since it motivates the following Roma practical. It’s essential to get across the notion of TEI modularity and this helps students see the point immediately. I based the tutorial on the  script used at this year’s Oxford Summer School, which I translated more or less faithfully, just removing some of James’s jocularities.
  4. The two talks on textual editing (sessions 4 and 6) with TEI were both derived from one I gave in 2011. The initial part, about encoding critical apparatus, seemed a bit redundant (especially for an audience including e.g. Jean-Louis Lebrave)  since we didn’t proceed to use the tags it introduces, and also because I spent too long on it; though it does provide some useful background. In the practical session, both Christof and I decided to use the Wilfred Owen examples used at this year’s Oxford Summer School, though with some  revisions. In the first exercise on transcription,  rather than making sense of an arbitrary non-XML transcription, students had to work out for themselves what was going on in the manuscript image, using the edited transcript provided purely for reference. They found this initially difficult, but once they’d realised that simply cutting and pasting would not be good enough, they became quite enthusiastic. During this exercise I also learned the interesting linguistic fact that the word raturer can be used for any kind of deletion, whereas the word biffer means specifically a deletion carried out by striking through something with a single line. This means that if I revise this exercise again (as I probably will) I shall be providing @rend values in French. (The word “stroked” used throughout the Oxford material seems to me just wrong, by the way. We’re not in the business of cuddling our texts. Au contraire).
  5. Session five’s talk on the TEI Header was more or less unchanged (except for a few typos) from the version I gave earlier this year, itself translated from plenty of other older versions. I think that providing the context for a <msDesc> is useful; the talk also provides an opportunity to talk about named entities, a topic which was otherwise missing in our programme. The practical was again based on one from this year’s Oxford Summer School, though again with some modification — notably that the manuscript described surely contains two <msItem>s, not one.
  6. For session 6, I used the second part of the textual editing talk more or less unchanged: the underlying concepts when defining zones seemed to go well, particularly once I’d admitted to being nul en math. The practical session was an entirely new one, made by translating Christof’s German version, which he produced on the basis of some notes from Elena, describing how she’d presented her Proust prototype at a conference in Australia earlier this year. In the event, one of the nicest moments of the entire course was seeing one of my students punching the air in excitement after having successfully opened the SVG output from this exercise in Internet Explorer. There was a small hiccup when I realised that the version of Inkscape we were using was in German, not English, which made configuring it correctly rather tricky, but otherwise this all went far more satisfactorily than I expected. Which was, as they say, nice.

Resolving the Durand Conundrum: some ODD thoughts


The Durand Conundrum is a jokey name for a serious question first raised by David Durand when the current TEI ODD XML format was being finalised at an early working group meeting, though I cannot now remember where. In the original TEI ODD system, that used for the production of all versions before P4, content models and system entities were declared using SGML syntax, which was then wrapped in ODD-defined containers of various kinds. When we moved to XML, the line of least resistance was to re-express those same SGML rules using RelaxNG, thus making ODD XML a hybrid beast of a language, in which everything except element content models was expressed in our own XML vocabulary, while element content models (and datatype definitions) used the RelaxNG XML vocabulary. David pointed out, quite reasonably, that this was a lazy compromise of a solution: we could probably express everything using RelaxNG, not just the content models for elements, and so why not ditch the ODD system entirely and use pure RelaxNG?

We called this a conundrum because we couldn’t come up with any entirely convincing argument against it, and perhaps there is none. A hybrid system is just not good engineering. It means that parts of the conceptual model which ODD expresses cannot be manipulated or specified using ODD itself – witness the contortions we go through to facilitate different interpretations of class references, or the recent debate about how to introduce interleaving. But the Durand conundrum can be resolved in two ways: one would indeed be to re-express everything in ODD in RelaxNG; the other would be to expand ODD to enable it to define content models natively, without having recourse to a loosely defined subset of RelaxNG. In this article, I would like to explore the second possibility.

The current set of P5 content models makes use of a (largely undocumented) subset of RelaxNG facilities, mainly as a consequence of the design goal of supporting schema generation in DTD and W3C Schema languages as well as RelaxNG, so it isn’t strictly true that we are currently using RelaxNG. Support for the DTD schema language in particular imposes many limitations on what would otherwise be possible, while the many additional facilities provided by W3C Schema and RelaxNG for content validation are hardly used at all (though equivalent facilities are now provided by the <constraints> element). A few years ago, the demise of DTDs was confidently expected; in 2012, however, the patient remains in rude health, and it seems likely that support for DTDs will continue to be an ongoing requirement, even after support for P4 is formally withdrawn at the end of 2012. We therefore assume that whatever elements we use to specify content models will need to have the following characteristics:

  • the model permits alternation, repetition, and sequencing of individual elements,
    element classes, or sub-models (groups of elements)
  • only one kind of mixed content model — the classic (#PCDATA | foo | bar)* — is
    permitted
  • the SGML ampersand connector, i.e. (a & b) as a shortcut for ((a,b) | (b,a)), is not permitted
  • a parser or validator is not required to do look ahead and consequently the model
    must be deterministic, that is, when applying the model to a document instance, there
    must be only one possible matching label in the model for each point in the document
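
By way of illustration (an example of my own, not taken from the Guidelines), the determinism requirement rules out a model such as ((a, b) | (a, c)): on encountering an initial a, a DTD validator cannot tell which branch it is in, so the common prefix has to be factored out.

<!-- non-deterministic: rejected by a DTD parser -->
<!ELEMENT x ((a, b) | (a, c)) >
<!-- equivalent deterministic form -->
<!ELEMENT x (a, (b | c)) >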

To support repetition, we already have the attributes @minOccurs and @maxOccurs, which are currently defined locally on the <datatype> element. We propose that these should instead be supplied by a new attribute class, att.repeatable, whose members would include <datatype>, and also the existing elements <elementRef>, <classRef>, and <macroRef>, which we repurpose for use within content model declarations.

We also need to add four new elements: <model> to define the content model; <sequence> to indicate that its children form a sequence within a content model; <alternate> to indicate that its children can be alternated within a content model; and <pcdata> to indicate that the content model permits a text node at this point.

<model> might eventually perhaps replace the current <content> element, since it has the same function. It seems safer however to define a new root element for the new proposals in case everything goes horribly wrong. In the testModel.odd test implementation, this new element is added as an alternative to <content> wherever that occurs in the schema.

<sequence> and <alternate> are used to wrap parts of the content model being defined. <pcdata> is used to indicate the presence of a pure text node in the model: making it an explicit XML element makes it possible to write a Schematron constraint to implement the required restriction on mixed content models (though I haven’t done that yet).
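
Such a constraint might look roughly like the following (a sketch of my own, not part of any formal proposal, assuming the usual sch: prefix for the ISO Schematron namespace and the new element names introduced above):

<sch:rule context="pcdata">
  <sch:assert test="parent::alternate
    and parent::alternate/@minOccurs = '0'
    and parent::alternate/@maxOccurs = 'unlimited'">A text node is permitted
    only within a freely repeatable alternation, i.e. the classic
    (#PCDATA | a | b)* pattern.</sch:assert>
  <sch:assert test="not(parent::alternate/sequence) and
    not(parent::alternate/alternate)">Within a mixed content model the
    alternation may contain only element, class, or macro references
    alongside the text node.</sch:assert>
</sch:rule>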

Using these elements, a content model such as ((a, (b|c)*, d+), e?) would be expressed as follows:

<model>
  <sequence>
    <sequence>
      <elementRef key="a"/>
      <alternate minOccurs="0" maxOccurs="unlimited">
        <elementRef key="b"/>
        <elementRef key="c"/>
      </alternate>
      <elementRef key="d" maxOccurs="unlimited"/>
    </sequence>
    <elementRef key="e" minOccurs="0"/>
  </sequence>
</model>

Note that the value for both @minOccurs and @maxOccurs is understood to be 1 unless it is explicitly provided on an element, or on its immediate parent.

Classes are handled in the same way. Thus, a content model such as (model.a, model.b+, (model.c | model.d)*) would be expressed as follows:

<model>
  <sequence>
    <classRef key="model.a"/>
    <classRef key="model.b" maxOccurs="unlimited"/>
    <alternate minOccurs="0" maxOccurs="unlimited">
      <classRef key="model.c"/>
      <classRef key="model.d"/>
    </alternate>
  </sequence>
</model>

When processing this declaration, an ODD processor has to expand each <classRef> to a model which will, by default, match any single member of the specified class.

This behaviour may be varied by supplying a different value for the @expand attribute which we propose to make available on <classRef>. This attribute takes the same possible values as the existing @generate attribute on the <classSpec> element. For example, assuming that elements a and b are the members of the class model.ab, <classRef key="model.ab" expand="sequence"/> is interpreted as (a, b) rather than as (a | b).

A mixed content model such as (#PCDATA | a | model.b)* would be expressed as follows:

<model>
  <alternate minOccurs="0" maxOccurs="unlimited">
    <pcdata/>
    <elementRef key="a"/>
    <classRef key="model.b"/>
  </alternate>
</model>

References to existing TEI-defined macros are handled in the same way, using the existing <macroRef> element. Where the body of a macro is a <content> element rather than a <valList>, it will of course be necessary to replace that by an equivalent <model>.
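
For example (again a sketch of my own, assuming the existing macro macro.paraContent has been given a <model> body), a content model consisting of an optional heading followed by paragraph content might be written:

<model>
  <sequence>
    <elementRef key="head" minOccurs="0"/>
    <macroRef key="macro.paraContent"/>
  </sequence>
</model>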

Report from the Scientific Council of the MRSH

Last week I had the pleasure of attending the first meeting of the recently reconstituted Scientific Council of the Maison de la Recherche en Sciences de l’Homme on the campus of the Université de Caen; this was the occasion for a presentation of the whole range of remarkable and original activities of this innovative and atypical Maison. (Anyone who does not already know the MRSH is strongly advised to consult its online presentation.) Of its six poles of activity (and 27 research teams), we covered only about half, and even so there was enough to keep us busy for three very full days. I also had the pleasure of meeting the other members of the Council, an interesting group of twelve distinguished people of several nationalities (Swiss, English, Norwegian, Italian, Québécois) and of very diverse expertise (geographers, literary scholars, historians, computer scientists…). If there was a preponderance of white hair around the table, this in no way reduced the intensity or the interest of our discussions with the staff of the Maison and with other figures holding important responsibilities in the region; these took place within a very full programme of meetings organised by the director, Pascal Buléon: dinner with the DRAC, breakfast at the regional council in the presence of its president, a session of the Council attended by the deputy mayor of Caen, by the President of the University (and her successor, elected that very day), and by the regional delegate of the CNRS. It was thus, for me, a crash course in French administrative structures, which continue to fascinate and (I must admit) to baffle me. And we therefore had to introduce ourselves (“Je m’appelle Lou Burnard, je suis…”) several times over, with inevitable, not to say random, variations.

The aim of the meeting was not, of course, to carry out a formal evaluation of the Maison’s activities, that having been done very recently by AERES (which gave it a very favourable rating, if I am not mistaken). We were there to discuss, to add our little grains of salt to the rich pudding of its activities and of the intellectual procedures underlying them… nevertheless an informal evaluation is almost inevitable, since our reactions were solicited by our president to help him prepare his report (he is a celebrated geographer, Guy Di Méo of Bordeaux, elected unanimously at our first session). Here, then, are a few small remarks I took away from this visit:

  • The richness and variety of the opportunities for working in an interdisciplinary, indeed multidisciplinary, way (our Québécois colleague explained the distinction to us, but I have not retained it) were clearly demonstrated in the reports from each of the poles of activity and from the directors of the units concerned;
  • The facilities offered by the Maison seem well designed to meet the needs of its users and to encourage a fruitful broadening of activities; the one obstacle is the lack of space, the current premises being overcrowded;
  • There is an evident variety in the levels of activity; some of the actors seem to me to be genuine “leaders” in their fields (notably the “numérique” and “rural” poles), while others seem to me to show nothing exceptional. A university is, of course, a place of variety, but it remains essential, in my view, to promote a culture of shared expertise, and not to be afraid of winding up or rethinking activities which, for whatever reason, fail to organise themselves effectively;
  • We noted, with some surprise, that among the teams presented there seemed to persist a degree of ignorance of the activities of the Maison’s other partners. It would be advantageous, in my view, to promote a somewhat stronger “esprit de maison”, since the possibilities for synergy, given the expertise available, are far from negligible. For example, the activities of the “Risques” pole, significant and multidisciplinary though they are, could still profit from the linguistic competences of the CRILET centre; the very comprehensive (and internationally recognised) publishing policy of the “Rural” pole might likewise benefit from some reflection with colleagues from the digital pole on a possible “digital turn”. I stress that these are, of course, collegial reflections (in the Oxford sense) and not proposals for restructuring structures which are already very complex!

In conclusion, I must admit that there was no lack of sybaritic interludes to complement the intellectual rigours, notably the meals, on which I will not dwell (a few photos are available). We also managed a little tourism, notably in the Maison’s library, which has received the historic collection of the Ministry of Agriculture, and at IMEC, a site equally magnificent in terms of its architecture (it is housed in a former abbey, restored and refitted in a most attractive way) and of its contents (the personal archives of several hundred modern writers and artists are deposited there). Note that this archive could well benefit from the technical expertise of (for example) the digital pole in order better to safeguard that part of its holdings which exists only in digital form; this would testify to the importance of the network of competences facilitated and made available by the MRSH.

Un point hors de discussion

It’s not often that my good friend Jean-Daniel gets indignant enough to post on Facebook a notice of something he’s read rather than an announcement of his current whereabouts, so I feel particularly indebted to him for having alerted me to the existence of a review article appearing in the Bulletin of the Centre d’études médiévales d’Auxerre, written by one Alain Guerreau, a distinguished medieval historian, as I learn from his extensive Wikipedia entry. The entry makes no reference at all to his experience or expertise in the area which forms the topic of the article, but I am not sure that I would in any case trust anything produced by someone feeling the need to resort to UNDERLINED CAPITALS or sudden splatters of bold to make an argument, much less by someone quite so fond of such dogmatic phrases (“il n’y a qu’un choix possible … c’est un point hors de discussion … c’est la seule voie possible … c’est un must absolu”).

Nevertheless, the article does make some sound if hardly controversial recommendations about the need to use Unicode (here called “UTF-8”) and the usefulness of FOSS – Free and Open Source Software. It’s disappointing to come across a French speaker failing to point out that the French language actually boasts two words for “free” (gratuit and libre) corresponding with its two quite different senses – even more to find a francophone systematically choosing the wrong one, but you can’t have everything. I am also grateful for the pointers Guerreau provides to some software of which I would otherwise have been unaware, and for his endorsement of some others which I would certainly second (examples include txm, antconc, and cqp, all of which surely must be a part of any self-respecting text analyst’s armoury these days). It’s a pity that these recommendations come along with a tirade against the TEI, which Guerreau engagingly terms une gaspillage et perte de temps. For him, the efforts of generations of library and information scientists to define ways of classifying and structuring information have been a complete waste of time, if not worse (amongst his politer remarks about them is a reference to “le fantasme aussi ancient que recurrent d’une “mathesius universalis”). Not content with putting the boot into those poor misguided librarians, he then attributes the same fantastic objective to the “poignée d’informaticiens, essentiallement anglo-saxons, dépourvus autant de connaissances historiques que d’esprit critique” which he apparently believes gave rise to the Text Encoding Initiative.

It’s hard to know where to start correcting the fallacies in this part of his article, but for starters, the TEI was not designed by computer scientists, nor by people lacking in historical or critical awareness, but rather emerged from a productive conversation amongst several hundred scholarly users and creators of digitized resources worldwide, a conversation which has been going on for over three decades now and shows no sign of running out of steam. It seems ironic that Guerreau considers learning perl and regexp syntax neither too long nor too complex for the timorous, and yet a page later is busily asserting that no-one could possibly understand more than 25 of the tags proposed by the TEI – which is therefore at all costs to be avoided.

It’s even more ironic to read in section 4 “Ce qui manque” the claim that no-one has ever tried to define a reliable way of recording “de manière structurée toutes les variantes d’un texte”. Really? I think a cursory look at the literature will show that textual editing and textual variation has been an area in which the use of the TEI has established itself over the years. That’s not to say that every textual editor uses it, much less that those who do use it in identical ways, but it is absurd to claim that there is no open source software available to support it or that the TEI has nothing to offer in this domain. To quote M Guerreau himself, “on ne peut que regretter que les historiens et philologues le sachent à peine, et ne les [i.e. les outils FOSS] utilisent qu’à doses homéopathiques”. In his second conclusion (“sur lequel on n’insistera jamais assez”) he rightly prioritizes the intellectual effort of understanding a source above mere technical skills, and rightly insists that “pour chaque corpus il faut bien comprendre et saisir les specifités”. Which is, of course, exactly why the TEI not only offers you more than 25 tags, but also expects you to decide for yourself how to use them.

Encoding documents and collections at Caen

And so, once more, and maybe for the last time, to Caen for Encodage de documents et de collections, the two-day culmination of the seminar series of Caen’s Pôle document numérique, ‘organisé dans le cadre de la chaire d’excellence de Matthew James Driscoll’. We are met this time in the magnificent Belvedere room, affording splendid views over the surrounding countryside, which is bathed in unwonted spring sunshine. Matthew kicked off with an overview of his handrit project, focussing this time on the TEI’s manuscript description module, its evolution, and how it fits the needs of his project (or was adjusted to do so); this was nicely complemented by a description of the manuscript holdings (crossing the frontier between Library and Archive), and of the digitization work flow used by the Icelandic partners in the project, from Örn Hrafnkelsson of the National Library in Reykjavík.

The virtual reconstitution of the great libraries of the middle ages is one of the projects which mass digitization has been promising us for many years. The Bibliothèque virtuelle du Mont Saint-Michel is a classic example: Catherine Jacquemard, from CRAHAM at the Université de Caen Basse-Normandie, Jean-Luc Leservoisier, from the Scriptorial d’Avranches (where many but by no means all of the surviving manuscripts from the Abbey of Saint-Michel are now holed up), and Marie Bisson, the technician responsible for finding ways of pooling and harmonising the scattered records describing that library, gave a good report from the coal face where those actually trying to deliver on that promise have been labouring, stubbing their toes occasionally on the mutually inconsistent cataloguing of manuscripts in various institutions.

We then broke for lunch, noticing en passant that the campus seemed to have acquired a number of students disguised as angels, smurfs, gangsters, and other figures of popular iconography.

After lunch, Marie-Luce Demonet, of the CESR, Université de Tours, gave a whirlwind overview of the activities of the Bibliothèques Virtuelles Humanistes: I noted in particular the way it needs to treat manuscript and printed sources uniformly, the ingenious use of Iconclass as a unifying vocabulary to provide image search facilities across both miniatures and ornamented letters, and the availability of an online lexicon of printers’ marks, but there was much more meat besides.

The SCRIPTA project at Caen uses a traditional MySQL database to catalogue charters, but is now evolving into something more like an XML database by means of the addition of a front end written in XMLmind. This was presented by Pierre Bauduin and Tamiko Fujimoto from the CRAHAM unit at Caen, with technical support from Anne Goloubkoff of the Pôle Document numérique. At about this point in the day, the growing number of angels, smurfs, gangsters etc. outside the building reached a critical mass and started its rather noisy procession around the building, and indeed the town, which made it difficult to follow all of Tamiko’s walk-through of the software. I did note however that the TEI markup deployed was using some rather politically incorrect values for its @type attributes, derived apparently from recommended practice in the archival community.

I rounded off the day with another appearance of my talk on the History of the TEI, which I still haven’t quite got to fit into the confines of a 45 minute presentation, despite two previous attempts. Ah well. If on the other hand, you’re more interested in angels, smurfs, gangsters, etc. then you may prefer to look at my photos.

Next morning bright and early, we listened to Georg Vogeler from Graz (now located in something called the Center for information-modelling in the humanities, I learn: probably one word in German) describing Monasterium.net, which is a kind of collaborative digital library and hence maybe a collaborative research environment. It holds information about thousands of charters and legal documents, either aggregated or syndicated from 99 other archives in a dozen countries worldwide. As such, it is itself arguably (I say arguably because we argued about the meaning of the term) a kind of finding aid. It uses, of course, its own schema, drawing on both EAD and TEI P4, the former for the archive-level description, the latter for the encoding of individual documents. The resulting CEI schema is arguably neither fish nor fowl, but does have quite an impressive implementation, using eXist via XForms and a bunch of Ajax controls to deliver a cool integrated browse and search interface for finding aid and document alike (though only 10% of the documents are transcribed). Clearly any kind of cross-archive search’n’browse facility is a Good Thing, though whether this constitutes any kind of “digital edition”, collaborative or otherwise, is more debatable, as indeed we did debate it.

Lists of documents, and the collections which gave rise to them, were the theme of this second day. Lucien Reynhout from the Bibliothèque Royale de Belgique described a Belgian project to create “Sanderus Electronicus” a digital edition of an important 17th century list of lists of books, made by one Antonius Sanderus: this too was a collaborative project: Sanderus published as a single work about sixty different lists derived from the reports of several correspondents whom he had asked to describe the holdings of several significant libraries and as such exhibits all the problems of inconsistency of description and detail we’re accustomed to in the digital domain, deriving perhaps also from the same ontological anxieties: what are the individual components of such lists? which object in the FRBR model corresponds with their constituents? for example what does “duo Iuvenales” actually mean? Sanderus Electronicus will take the common sense view that it is composed of list items (or so I believe) rather than anything more bibliographic, though it will also use a database called BIBALES, to hold entries for people, places, works etc. referenced.

After coffee, the man from the ministry, an amiable person called Florent Palluault, explained just why every archive in France, if it creates a catalogue at all, will do so using EAD, and how it came about that the digital version of the venerable Catalogue général des manuscrits des bibliothèques, all 116 volumes of it, is being updated and produced to the same standard. He described the workflow, which reminded me of some other large-scale retrodigitization projects: the OCR output from the original printed volumes had been automatically split into separate Word documents for editing; each notice, managed within a database, was then exported as a Word document with some degree of automatic conversion to EAD on the basis of the typography. Jérôme Sirdey (Bibliothèque nationale de France) then described PALME, a new project aiming to convert an existing MARC-based catalogue of 20th-century French literary mss. into EAD, and Palluault then concluded with some speculation about future directions, notably a planned Catalogue collectif de France (CCFr): an ambitious union catalogue of mss holdings across CALAME (funded by CNRS institutions), the BnF, and the new digital CGM, all still based on EAD, which clearly still has a great future in France.

EAD and TEI, and whether there was any hope for a happy marriage between them, was a theme to which Florence Clavaud (École des Chartes) returned after lunch. Florence is a member of the expert group which is currently proposing revisions to the EAD standard and to the accompanying (French) Guide to best practice for its application, as well as being expert in both EAD and TEI, and so she has lots to say on both, unfortunately rather more than she really had time for on this occasion. Anne-Marie Turcan and Hanno Wijsman from IRHT concluded the session by presenting work building on the Biblifram project, notably a database under development at IRHT (and allegedly only accessible there, for IPR reasons) to support research in the history of the book: Libraria et Bibale.

The two days were rounded off in a very satisfactory way by Torsten Schassen, from the Herzog August Bibliothek in Wolfenbüttel, recounting his experience as a participant in the EU-funded Europeana Regia digitisation project, which aims to catalogue and provide access to all the mss from three specific royal collections now dispersed across a number of European libraries, and hence catalogued in a number of different formats (MARC, EAD, TEI, MAB, MXML…) and seven different languages. Possibly an unusual aspect of the project, or one that Torsten chose to emphasize at any rate, was a requirement that the resulting system be both usable and interesting for the general public. As the party responsible for metadata, the HAB had the thankless task of trying to define a kind of Dublin Core minimal set for manuscript description, which is reassuringly a clean subset derived from TEI P5, even if the European Library cannot currently handle TEI format data directly. The minimum data set was also internationalised, even though Europeana itself cannot currently handle multilingual data. There is even a button on the website which sends you the TEI <msDesc> for each manuscript that has one.

The take-away message from this presentation, as from the seminar as a whole, was encouraging: TEI is proving its usefulness in a variety of complex document management situations. I also think some serious investigation of the feasibility of integrating EAD within it is warranted: not much is needed and much would be gained.

The origins of ODD

I’m moving house this week, which involves packing up thirty years of accumulated junk of various sorts. As a result, every now and then I stumble upon some long-lost historic document, like this one. It dates from a lunch that Michael Sperberg-McQueen and I enjoyed at the Lido restaurant in Bergen in November 1991. This being a family restaurant, it was equipped with paper tablecloths and wax crayons, Norwegian kids for the use of, which Michael and I were quick to reappropriate to our immediate needs, namely some kind of visual representation of the production system we wanted to create for the editing and processing of the TEI Guidelines, version P2. We knew we were going to write and edit it in some version of TEI SGML; we had faith that anything in SGML could be transformed into anything else. We just had to work out how, and what.

P1 had been produced by some devious hackery that only Michael understood, and more critically which only ran on the mainframe at UIC; we wanted something that would be platform (hardware and software) independent. Such was the promise of TEI SGML, after all. Somewhat to our horror, the only reasonable high-level programming language in which we were both reasonably competent and for which there were decent implementations on all the machines we collectively used (IBM CMS, VAX VMS, IBM PC, Macintosh…) seemed to be a now largely forgotten string-handling language called Macro Spitbol, so we decided that our production system (what nowadays we’d call a workflow) would have to be written in that. But of course the heart of everything would be a nice author-friendly TEI SGML dialect, for which we optimistically coined the acronym ODD: One Document Does-it-all. ODD files would be parsed by an SGML parser, and its output filtered through a variety of Spitbol processors to create other formats. And that, more or less, is what we did.

On this schematic you can see the basic idea in blue. The big blue circle is the ODD format, from which are generated canonical TEI files (with extension .TIN (for Tiny) or .TEI), RL files (extension .TD), and DTD files, the three little blue boxes. The DTD files are of course SGML DTD files, which is why you see a green line going back from them to validate individual ODD files (I don’t know why it’s labelled LB though). “Tiny” files would use a subset of the TEI Lite schema defined back in 1988; RL (later renamed .REF) files would use the TEI vocabulary Michael had developed for reference documentation of individual elements (“TD” for tag documentation). Down the middle you see a list of TLAs in blue which I think must have been attempts to decide on a name for the format (WEB, Joe, LAM, RDF, CSP…), though what they expand to I really don’t remember – what a pity we didn’t choose RDF. Or not. And over on the left in red you see some notes which eventually became the canonical structure of the TEI Guidelines: there is a chapter about the “blort”, containing prose paragraphs; there is a documentation element referencing the blort tag; and there is a parameter entity reference which pulls in the definitions for the blort chapter.

What happened next? Well, we did set up a workflow more or less on this model, and we did use three separate filters written in Macro Spitbol (mostly by Michael) which turned our ODD SGML into two flavours of straightforward TEI-Lite-like SGML, which we called “P2X” and “REF”, and also generated SGML DTD fragments. After experimenting with a generic filter called “tf” (also in Spitbol) to translate the generated TEI files into LaTeX, and dallying with a Canadian tool called OmniMark, we finally settled on a rather swish transformation engine called Balise, which was produced by a French company called AIS. Either way we were able to print the fascicles of P2 in something that not only looked quite nice but also looked just the same whether I printed it in Oxford, or Michael in Chicago. Except for the paper size, of course: ain’t standardisation a marvellous thing.

And what happened to ODD? It turned out to be quite a good idea. We gave a presentation about it at the ACH-ALLC conference in 1994, though I cannot remember what we said and we never got round to writing it up. Michael developed the ideas in the “tag documentation” part quite extensively, and (I believe) used them also in his next job working for the W3C, but the TEI’s ODD stayed more or less unchanged until work started on the TEI’s XML reincarnation, at which time the whole system was re-imagined and redesigned as the lean mean generic schema generation system we know and love today. But that’s another story.

Here we go again

It’s ridiculously early for a Sunday morning, but the only plausible train to catch from Oxford if you want to connect with a 1220 Eurostar leaves at 0940. So here I am wondering, along with many others, where on earth is said train. We can see it in the sidings North of Oxford station, but it’s not moving towards us and the announcements are not reassuring. Maybe the stopping service to Ealing Broadway is a better bet: certainly standing around fretting on Oxford station is not pleasant. Some twenty bucolic minutes later, I detrain at Didcot in the hope of something better: which does indeed turn up in the shape of the 0940, now proudly running only 20 minutes late. I spend my uneventful trundle through the morning sunshine trying to work out what I have done to incapacitate tei-emacs on my laptop. Then an unspeakably horrible Circle line train bears me off to St Pancras, and the comparatively civilised space of the Eurostar lounge where I discover that in the general confusion of getting myself ready for this week’s set of French gigs I have failed to check something crucial into my nice new subversion repository. Ah well: no time to agonise over that, it’s time to get my disordered thoughts on the history of the TEI into some sort of plausible order, and to construct an appropriate French narrative around same. Which keeps me happily occupied for the rest of the day: out of London, across the wilds of Kent, under the channel, through Picardy into Paris, my nose barely strays a few inches from my laptop screen, tappety tappety tap, except for a few minutes degustation (I use the term advisedly) of a Eurostar snack lunch, and a few dirty looks in the general direction of some fellow passengers yapping away noisily behind me. Even nastier, but mercifully not noticeably longer than the Circle line is the hop by RER B from Gare du Nord to Gare de Lyon, where I resume work on board a nice peaceful TGV all the way to Lyon. With such good effect that my talk for tomorrow is all ready to go, even before I arrive at Perrache. Such virtue warrants dinner, even though it’s now a little late, so I stride purposefully across the Place Carnot to the brasserie Victor Hugo, order a hamburger a cheval (nothing to do with horses, this is a burger with a fried egg on it) frites, et un pot de cote and phone Marjorie to re-assure her that I am here and ready to boogie, before retiring to bed.

One hasty breakfast later, Dominique Roux and I set off in search of one of the many fine universities in which Lyon rejoices, more exactly the vaulted basement dungeon in which Marjorie’s séminaire is taking place. The morning was supposed to be a double act, but since Paul Spence couldn’t make it, his colleague Guilhem Pépin instead gave us an interesting lecture about medieval history before showing us some of the Gascon Rolls project. Pépin is a French (or more properly, Gascon) historian actually working at Oxford in the History faculty. There was a time when I might have huffed and puffed a bit about Oxford academics who take their TEI digital projects off to King’s College instead of using the local facilities, but these days I have become placid and boring. Anyway, Pépin was a good speaker and clearly an agreeable person to work with; and the material presented all sorts of interesting possibilities for analysis once marked up, even if he was almost aggressively reluctant to claim any expertise in the application of markup. Not for the first time, I wonder why it is perfectly acceptable for academics to profess ignorance of one technology that is essential to their work, whereas ignorance of others (say, bibliography) would seriously damage their career prospects. And then off we all went for a decent lunch, this being France: dos de colin avec ses pommes de terres lyonnaises, if I remember correctly. After which I gave my talk, which seemed (to me at least) to go remarkably well for a first outing: I suspect I will give it again, at least as long as people go on asking me to explain where on earth the TEI came from, and why it has not sunk without trace. It is a good story, with a good moral, I think. After a coffee break, Dominique Roux from the Presses Universitaires de Caen gave a thorough overview of their projects and preoccupations, presenting a variety of cool projects, a TEI-based workflow, some wise remarks about the use of TEI in commercial publishing, and much else besides. It’s a pity he came at the end of a long day with perhaps a touch too much Gasconnade in it, since it would have been good to discuss several of the ideas he presented with the masters students present — who had all been assiduously taking notes earlier in the day, but were clearly flagging somewhat by the end. I was sorry to have to rush off in time to catch the train to my next gig, in Tours.

Preparation for said next gig took up quite a bit of the journey, quelle surprise; indeed I don’t think I looked out of the window once. And yes, it is possible to get from Lyon to Tours without passing through Paris, if only once or twice a day. The TGV concerned stops at a place I have never heard of called Massy, and then at St Pierre des Corps, before zooming on to Caen. St Pierre des Corps is a dismal little junction from which a variety of trains shuttle into the architectural splendour of Tours central, about 5 minutes away. Even when entirely enclosed in scaffolding as part of its restoration as a patrimonial monument, Tours station is an uplifting spectacle late at night, when everything around it is closed except for McDonald’s. Equally good for the soul is the Grand Hotel of Tours, which has retained and lovingly refurbished its charming 1930s decor, all peacock feathers and wooden panelling and geometric patterns. Last time I was here in December the wifi was misbehaving, but everything seems to be fine now, and the breakfast is excellent. Next morning, it’s a quick trot across town to the Centre d’Etudes Superieures de la Renaissance to give my contribution to their Master 2 professionnalisant Patrimoine écrit et édition numérique : initiation à l’encodage des textes patrimoniaux. This is the third or fourth year I have done this, so you would think I had it sorted by now. My contribution this year consisted of a ninety-minute lecture on manuscript encoding (much revised to recognise the existence of the new <sourceDoc> element, as of release 2.0 of TEI P5 — this was the talk I thought I had mislaid, but hadn’t); followed by another 90 minutes on Roma and schemas and suchlike mysteries, using the Virgolos project as a case study (called TEI a la cartes, geddit?); and finally another 90 minutes attempting to explain XSLT pour les nuls. This last was a rather more quixotic and under-prepared venture: although novices quite quickly grasp the basic ideas and usefulness of XPath, grasping exactly what an XSL template is and why you might want one is rather more of a challenge. But the punters seemed content to be slightly baffled at the end of a long and varied day, and I am sure that the local team will clarify any residual bewilderment next week. Dinner was at the Odéon, another piece of lovingly restored 1930s kitsch, where the food was excellent (I had the rognons, since you ask), and Marie-Luce and I discussed the notion of a week-long residential formation approfondie sur la TEI under the auspices of the Cahiers consortium, plus anyone else who might like to play.

Tours is in the process of acquiring a tramway, which means that large amounts of it are being dug up and knocked down, notably near the railway station: I observed this with interest over breakfast, before hastening off rather late for a short consultation with the CESR team about how to autogenerate an ODD from their Epistemon corpus (sort of difficult if you don’t have Saxon installed), and some discussion about how best to proceed with their ongoing project of revising the project’s encoding manual. The plan is not only to update but also to generalise this manual for use by other similar projects, which would certainly be useful: there isn’t a lot like it in French, aside from the BFM manual. However, I have a train to catch this morning, so I have to sprint back through the marche des fleurs, looking neither to the right nor the left, regretfully for there is much to see, and resisting the temptation to stop to buy fresh garlic or dried flowers or a sandwich for the journey or even take some photos of the pavements now decorated with a rich and colourful assortment of flowering bedding out plants. Tours is a charming place with much to recommend it. And so, off to Paris where I have a couple of crucial meetings to attend, crucial enough to propel me into an irrational anxiety about the progress of my train which suddenly decides to slow down and stop in the middle of nowhere more frequently than is decent, even for an intercite. In the event, though, we pull into Austerlitz ten minutes early, allowing me to take a pleasantly-paced walk through the Jardin des Plantes and up the hill to the TGE Adonis office in good time for my appointment with my directeur, Jean Luc Pinol. We discuss the coming year’s work plan for MEET; this being satisfactorily resolved, Ariane agrees to release a PUMA forthwith (don’t ask) … I spend the afternoon catching up on the gossip with TGE colleagues before checking into this week’s hotel which is conveniently located opposite a nice bar and round the corner from a rather excellent brasserie. Here I dine, expensively but deliciously on foie de veau patates et encore un pot de rhone. It’s tiring work all this gluttony, you know.

Next morning, I rise at a civilised hour, and catch up on my commitments at the TGE for most of the day, taking however an extensive lunch break to discuss with Mathieu Andro from the Bibliothèque Ste Geneviève a wondrous new digital library project which has apparently secured 1.7 million euros of local funding to finance a deposit archive for the digitized outputs of a select bunch of Parisian libraries, and wants to use the TEI. Did I hear that right? The lunch was pretty good too. Finally, I put in place some hasty arrangements for another meeting in Paris next week, and then trek on foot across town to Châtelet (where there seems to be, as usual, a manif going on) to catch the metro to Gare St Lazare (which, post-renovation, seems to be mysteriously disguising itself as the Gare de l’Est), to take the train to Caen for the last gig of this tour, namely Matthew Driscoll’s ongoing TEI seminar at the MSH. The Hotel Quatrans is much as I last saw it, and so, I am pleased to report, is the little restaurant called “Les saveurs de la Réunion” just round the corner from it, where Matthew, Eric, and I enjoy some rum, some gâteaux piments assortis, two bottles of muscadet, and a tasty cari cabri before retiring for the evening.

Friday is seminar day. Serge Heiden ARE YOU READING THIS SERGE? from the ENS Lyon opens proceedings with an update and an impressive demonstration of the textométrie project, which goes from strength to strength. They have an Equipex in which they will be working with hundreds of historians, and a number of other collaborations in prospect, some ANR- and some DFG-funded. The software is, of course, still available from SourceForge, and they are also in the process of setting up a portal for general access to some demonstration applications of it. Serge discussed the way the software uses TEI and other forms of markup; they have now fixed on a TEI-conformant pivot format, for which an ODD is in preparation. He also demonstrated many XAIRA-like features of the software and reported some work done by Alexei Lavrentev in importing and analysing the markup of a large corpus of texts from Frantext. He was followed by Antoine Widlocher, who described the search engine under development at Caen’s GREYC research group, initially for use in the Descartes project. Its data model uses graphs rather than trees, and much of his talk therefore concerned the difference between the two, although he did also present the user interface envisaged for the system; this is, of course, SPARQL-based, and will access a triple store in which XML and other annotations are all represented in RDF. All very interesting if, perhaps, a little computer-science oriented. Maud Ingarao commented that the project resembled Edouard Portier’s work on multistructured documents; I should have mentioned Desmond Schmidt, but didn’t. After lunch (in the student canteen; n’en parlons plus) Maud gave a brief overview of a newish XML database system called BaseX, and demonstrated some of its jazzier features: she also noted that a test BaseX server has now been implemented as part of the TGE Grille de services. Frédéric Glorieux then gave a nice talk demonstrating how the presence of detailed markup in his version of François Gannaz’s “XMLittré” project facilitated several interesting searches: he proposed that the average size of text fragment within a TEI document might be an interesting stylistic indicator, and remarked on the high frequency of emotive words like “dieu, homme, roi” in the examples cited by Littré. Finally in this session Marie Bisson demonstrated the current state of the Juxta collation system under Windows, working on three manuscripts of Thomas Le Roy. Juxta apparently has its own XML markup but does now also (more or less) grok TEI.

The last-but-one session of the day concerned “quantitative codicology”, a term, I learned, which is even older than the TEI, having apparently been invented by someone called Ornato in 1980, according to Matthew, though it is a concept which can be seen to underlie Don McKenzie’s 1985 Panizzi lectures on bibliography as “the sociology of texts”, or the so-called New Philology of Stephen Nichols at the start of the nineties. I liked Matthew’s use of the phrase “the artefactual turn” to describe his increasing certainty that the meaning of text should not be dissociated from its “embodiment” or the historical and social forces that documents manifest, and intend to appropriate it for use when presenting the TEI’s recent reinvention of <sourceDoc>. Matthew and colleagues described the Fornaldarsögur norðurlanda project, which aims to provide an account of the production, dissemination, and reception of the “chirographically transmitted texts” of 36 stories from prehistoric times which can be identified in some 1500 texts presented in over 750 distinct Icelandic manuscripts. These are described using (inter alia) a reduced and tightly constrained schema derived from TEI P5, extended to include information derived from the transcriptions of the mss, such as the average written area, the number of abbreviations per line, etc., as well as such features as the presence of decoration, or the types of text included. Sylvia Hufnagel presented some hypotheses about possible connexions between these evidential characteristics and assumptions about the wealth or status of the owner or person believed to have commissioned creation of a manuscript, though there is really insufficient evidence so far to justify any generalisations one might be tempted to make about (say) the emergence of the “prestigious reading manuscript”, distinguishing (as it were) “coffee table” manuscripts from “paperbacks”. Eric Haswell described clearly and concisely the technologies used in the project, contrasting the “data centric” and “document centric” notions of relational and XML databases, and also showing how their web-service-based implementation, built on eXist, made it possible very easily to extract query results as CSV for input into traditional spreadsheets, or as JSON for use by cooler things such as SIMILE widgets. Finally, I gave that talk about linguistic annotation and why people say such terrible things about it. Not sure how appropriate it was to the day, but people seemed to be listening anyway. The final dinner of this week of overeating was at Le Bouchon du Vaugueux, where I (and others) tucked into a four-course gastronomic menu, including some excellent roast duck and rather a lot of stewed pears.

And on Saturday, the journey home, which was all very pleasant till I actually got to London: trains cancelled without warning, inadequate fallback facilities, Great British Public mustn’t-grumbling, etc. etc. It took longer to get from London to Oxford (about 100 km) than from Caen to Paris (about 200 km), and involved a train that was so overcrowded it could not leave the station, not to mention a 30-minute wait for a replacement bus in the cold outside Reading station. Never mind, next week I’m going back to France, where the trains (mostly) run on time and the train crews are (usually) helpful and less demoralised when they don’t.