
Digital Palaeography meets Optical Glyph Recognition in Rouen

HDDA2012 (“Historical Documents in the Digital Age”) at the University of Rouen turned out to be unusual (for me at least) in a number of respects. Firstly, it was organised as part of a project (“DocExplore”) funded under the Interreg framework of the EU, and hence attended by people from both sides of the Channel, rather than being exclusively French. As a consequence the presentations were in both English and French, with apparently quite successful simultaneous translation, though I did not test this for more than a few minutes. Secondly, I didn’t have to explain to anyone what the TEI was, and why it might be interesting; everyone seemed to know all about that already, even the informaticiens. And thirdly, there was no-one else from Adonis present, so it fell to me to ask the man from the Archives Nationales why they did not provide an OAI feed into Isidore as well as into Europeana (they’re planning to).

There were about eighty attendees, most of whom survived the full day and a half of invited presentations and round tables. There was a bit of audience interaction, but not much, and, surprisingly perhaps, only a couple of desultory tweeters, one of whom doesn’t count since it was me. There was however plenty of time for old-fashioned face-to-face discussion over lengthy pauses for sustenance in Rouen’s rather nice Maison de l’Université. As far as I can tell there were roughly equal numbers of archivistes and informaticiens, but they did not mix a great deal.

Proceedings were kicked off with two very good “state of the art” summaries of what’s going on in the way of cultural heritage digitization in France, by J F Moufflet from the Archives de France and Matthieu Bonicel from the BNF. I particularly liked the latter because of his optimism about using the technology to break down the walls between the silos of digital artefacts being created everywhere, pointing to evidence from maybe half a dozen great projects previously unknown to me. Both of these speakers pushed all the right buttons about open public access and accountability, transparency and integration of resources, respect for standards, etc., thus making quite a contrast with the following speaker, from the archive of Canterbury Cathedral, who found herself having to explain why they’d made a deal with Satan in the form of findmypast.co.uk to get their parish records database online, thus perhaps revealing the very different business models within which archivists operate on either side of the Channel.

The second session was given over to tools for transcribing and indexing all those lovely digital images. Stephane Nicolas from LITIS, the Rouen team responsible for software development, laid out clearly the challenges and advantages of integrating transcription and images. Two rather more technical presentations followed: one from Franck Lebourgeois which felt a bit like a graduate seminar about the mathematical basis of OCR, and another from Marcal Rusinol from a Spanish lab about vision processing techniques for word recognition or (as it seems it is called in the trade) “word spotting”.

The last session of the day was billed as being about digital palaeography proper, and was divided appropriately between two contributions from palaeographers (Elizabeth Lalou from Rouen, and Marc Smith from the Ecole Nationale des Chartes) and two from computer engineers (Veronique Eglin from LIRIS and Richard Guest from Kent). The former group clearly understood the potential the technology offered to address some long-standing difficulties in the treatment of, for example, allographic variation, or the use of frequency statistics in the definition of “writing style”; the latter group maybe had a harder job in making explicit just what the state of those particular arts currently is.

The second day I arrived a bit late, for some rather odd discussions, again revealing extraordinary differences in attitude on either side of the Channel, about the “ludique” use of IT in cultural heritage applications, i.e. how to make cool exhibits in museums. It began with a moderately dreadful intervention from a professional French developer of such things, but was rescued by a man from the British Library called Clive Izard who gave a historical survey of the BL’s flirtations with technology, from the days of the Information Access programme (which, I may say, was one of the funders of the BNC) up to the present, third, generation of the “Turning the Pages” application. He was followed by another excellent (and splendidly named) speaker, Clotilde Vaissaire-Agard from Le Havre, who reminded us about the need to place the scholar at the centre of the picture (I was reminded of a former OUCS colleague’s plaintive cries of “What about the users?”). She also endeared herself to me forever by citing the Manuscriptorium project (remember ENRICH?) as an outstanding example of what the technology facilitates, making it possible to share metadata and digital resources across institutional boundaries for the benefit of manuscript scholarship.

The final session, though labelled as concerning that old warhorse “Is there such a thing as Digital Humanities?”, actually contained three very good and complementary talks intimately concerned with the themes of the conference. Alison Wiggins from Glasgow’s Bess of Hardwick project gave a convincing account of their attempt to ground the project in practical user-focussed concerns (she cited Claire Warwick et al.’s LAIRAH as one of their inspirations); Dominique Stutzmann from IRHT raged, with ample evidence, against the lack of decent interfaces in transcription software; and finally Alixe Bovey from Kent gave a well illustrated overview of the strengths and limitations of various interfaces developed for interacting with the physicality of medieval sources. She concluded by lamenting, in the way people do, the absence of smell associated with digital images, and the mismatch between the haptics of the touchscreen and the codex. I was more impressed by Alison’s comment that it was more useful to know what kind of paper Bess of Hardwick wrote to the Queen on than it was to be able to reproduce it.

The price of a new laptop, or My love affair with Ubuntu

I think the first Unix system I installed on a laptop was release 0.8 or thereabouts of Knoppix. You could take an innocuous-looking CD, stick it into a crusty old PC, tweak its BIOS to boot from CD, and bingo, you were running a real Linux without having touched the Windows hard disk itself. How cool is that? OK, it took the best part of a day to load Knoppix from the CD into memory, so I pretty soon found the button for copying Linux itself onto the hard disk of my elderly laptop, but the “Live CD” concept had much to recommend it. You didn’t have to drop the Windows security blanket, and you could go on using your Windows filestore. That must have been around the year 2001 or so: at OUCS we started using Knoppix as the basis for a series of TEI give-aways at workshops and conferences – at first on CD, then on USB sticks, as the technology improved.

And then came Ubuntu. I think my first real Linux laptop was a ThinkPad, on which I installed the first of an entire menagerie, from the Warthog in 2004 to the Quetzal I installed yesterday. Yesterday also, I switched allegiance from Lenovo to Samsung, and installed the Quetzal on a Series 9. It looks a bit like a MacBook Air, but it’s not evil.

To get a new laptop in 2004, I had to write a one page justification for the boss, fill in numerous forms, send them off to the University’s approved supplier, and wait a few weeks for the machine to arrive. Then it would take a few minutes to unpack the machine, and at least 3 days to get Linux installed on it, much of it involving me pestering smarter people with better things to do.

In 2012, it took me just a few minutes to click on a button and order a new laptop, which was delivered to my house in about 24 hours. It took rather more than a few minutes to get it out of the packaging, but installing Linux took a couple of hours max. I downloaded an ISO image from the Ubuntu website. I made it into a bootable USB key using some software recommended by some other website. I stuck the key into the side of my new laptop. I tweaked its BIOS (just like the old days) to boot from the USB drive. And everything Just Worked. OK, I had to make difficult decisions like what language I wanted to use, what my name was, what graphic I wanted to represent me, and whether I wanted to wipe out the Windows 7 partition on this machine, so make that a little more than an hour or two in all, and that was it. Wifi works, sound and graphics work, wireless mouse works (good, as my fingers don’t understand trackpads) … I can even imagine getting used to the Unity interface (which seems to be a bit more stable than it was under Pangolin).

And that, dear reader, is when I realise that the real price of a new laptop is yet to be paid. OK, I expect to have to install some favourite bits of software to get my familiar working environment back (digikam, subversion, emacs, oxygen, chrome, dropbox …): that doesn’t take long. I can remember how to re-configure thunderbird to collect my mail: what I’d forgotten is just how long it takes to get nice new fresh copies of all the old mail which was sitting gathering dust on the IMAP server. I know how to check out all the stuff that actually matters from the SourceForge and Google Code TEI repositories: don’t underestimate how long that takes either. It took me over an hour just to remember how to re-set my password on the OUCS subversion repository.

My goal for today was to be able to rebuild TEI P5 from source, and crank out nice PDF slides from TEI source (that’s the sort of thing I do every day, to be honest). On top of what I already had, I needed to download and install the Oxygen XML editor, Chrome, and Dropbox. I needed to install nice new packaged 64-bit versions of jing, trang, onvdl, rnv, tei-emacs, openjdk-jdk, latex-beamer, texlive-xetex, and texlive. That all went smoothly, except that the packaged version of rnv was a 32-bit one, and so I had to rebuild it from source. Same problem with Acrobat Reader, but there is, of course, no option to rebuild that from source, so I will probably have to live without Mr Adobe’s fine products for a while.

Conclusion? Nothing surprising, I suppose: installing your own system is still a good feeling, seductive enough, for enough people, that others put lots of effort into making it much simpler to achieve. Hats off to the unsung labourers in the Canonical salt mines, and the open source community generally, who just go on doing what they have always promised they would, keeping the faith. It’s a relief that manufacturers like Samsung let them get away with it.

And now, it is the evening of my first full day with psammead and I think another glass of wine is in order. If I could only learn how to use this wretched trackpad…

Teaching TEI in Bern

At the invitation of Bénédicte Vauthier from the University of Bern (whose indefatigable enthusiasm, fund-raising expertise, and hard work organizing the event are all hereby gratefully acknowledged) I spent the last week teaching the French strand of a four-and-a-half-day trilingual TEI training session, aimed primarily at people working on the encoding of modern primary manuscript sources, in collaboration with Christof Schoech and Alejandro Bia, who provided the German strand, and Elena Pierazzo, who provided the Italian one. We did try a bit to co-ordinate our efforts so that the material covered in each strand was the same, though our methods of teaching, and some of the materials used, naturally varied. The course also included a full day of invited presentations about a wide range of Swiss projects and a series of invited plenary presentations. The first day was hosted by the Swiss National Archive, who showed us some of their collection, which includes a splendid exhibit of authors’ typewriters. More details of the whole event are, or will be, available elsewhere; there are also a few photos here. But mostly this blog entry just reports on the French teaching strand, and may be of most interest to others undertaking similar quixotic enterprises.

Each of the three strands contained five sessions, each held in a different teaching room (the French one equipped with PCs, the German and Italian ones with Macs) and combining a 45-60 minute presentation with a 60-90 minute scripted hands-on exercise. The programme, also common to all three strands, was as follows:

  1. Introduction to basic ideas of encoding, the distinction between document and text, and a refresher on XML. In the practical session, students were shown how to use oXygen to create a simple document from scratch, using just the TEI-bare schema.
  2. Introduction to TEI, focussing on how to tag basic components of a text, with a brief overview of some commonly encountered TEI elements, using French examples. Practical session using TEI Lite as schema. In the first part, students used oXygen to add tags to a plain text version of a sonnet by du Bellay; in the second, they used oxGarage to convert a Word file containing a scene from Jarry’s Ubu Roi into TEI, and then used oXygen to improve its tagging.
  3. The TEI Landscape, with a slide or two on each of the 23 chapters in the Guidelines, followed by a quick guide to using the TEI website as a means of exploring them. In the practical session, students first used oXygen to create a tei_all file, and then used Roma to create a schema better constrained to the encoding of modern mss.
  4. TEI encoding of editorial interventions and simple manuscript transcription. In the practical session, students reverse-engineered a rather heavily edited version of the start of a Wilfred Owen manuscript into a more diplomatic TEI transcription in oXygen, using the schema they made in the previous practical (which was also used for the rest of the course).
  5. The TEI Header, focussing on its use as a repository for metadata of importance to librarians and archivists, but also to scholars and end-users. In the practical session which followed, students created a full msDesc for a manuscript letter, using oXygen to tag the various parts of a plain text catalogue entry with which they were supplied.
  6. TEI documentary editing, starting with facsimiles and zones, and moving on to include sourceDoc, line, mod, metamark, and other recent additions to the TEI family, notably <change> and @change. In the practical session, students used Inkscape to identify various zones within a page from a Dürrenmatt ms, transferred these to a basic TEI transcription of the same file, and then ran an XSLT conversion to combine these into an SVG file in which the transcript and the graphic were merged in such a way that a piece of JavaScript could display the changes documented in the TEI independently (a minimal sketch of this kind of markup is given just below).
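By way of orientation only, here is the sort of markup this session was aiming at: a tiny invented fragment (not taken from the Dürrenmatt material), showing a zone located on a page image, a modification, and a metamark, with @change pointing at a <change> element that would be declared in the header.

<sourceDoc>
  <surface ulx="0" uly="0" lrx="210" lry="297">
    <graphic url="page1.png"/>
    <!-- a zone located on the page image, with coordinates taken from Inkscape -->
    <zone xml:id="zone1" ulx="25" uly="40" lrx="190" lry="80">
      <!-- #c2 would identify a <change> element declared in the header -->
      <line>Es war <mod rend="strikethrough" change="#c2">einmal</mod> einst</line>
      <!-- a written signal in the source referring to the zone -->
      <metamark function="reorder" target="#zone1"/>
    </zone>
  </surface>
</sourceDoc>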

Archives of all the materials I used for these sessions are available for download in PDF form (Exercices; Talks).  The TEI XML source is also available, under the usual CC-BY licence, from the MEET subversion repository.

In the final session, students were invited to apply their experience to their own materials, samples of which they’d been instructed to bring with them. Most of them set to immediately and started tagging, which was quite encouraging. I was able to do some hand-holding with the one or two students who had somehow failed to understand what was going on earlier, and the others all started banging away in earnest.

Some further comments follow on the sources I used for each of these talks, and on some things I found it necessary to change.

  1. The pattern and content for Session 1 are pretty much traditional by now. Even so, I managed to find some typos to correct in the exercise script: the keyboard shortcuts for the Windows, Mac, and Linux versions of oXygen are all subtly different, and my script is indecisive as to which it is using. I noted, as always, that it’s hard for beginners to understand that “I want a new line here” really means “I want to close this paragraph and start a new one”.
  2. I put the talk for session 2 together using some old talks introducing TEI Lite. I included a bit of the Punch exercise for old times’ sake, but mostly it derives from a very old talk introducing TEI Lite, spiced up with French examples taken from those in the Guidelines. Some of these are not tagged in quite the way I would have expected, or don’t demonstrate things as clearly as they might, so the whole lot probably needs to be reviewed for consistency. The first part of the exercise for this session went very well, with everyone suitably impressed by the effect of tweaking the CSS to render the document in “Mode Auteur”. The second part was less successful (though not everyone got to it, for lack of time), largely because of improvements to oxGarage since I last revised the script; however, I remain convinced that this is the right place to introduce the possibility of starting from a Word document.
  3. The “TEI from chap 1 to chap 23” talk in session 3 is quite a stretch to do in 40 minutes, especially if one is easily distracted, but remains useful since it motivates the following Roma practical. It’s essential to get across the notion of TEI modularity and this helps students see the point immediately. I based the tutorial on the  script used at this year’s Oxford Summer School, which I translated more or less faithfully, just removing some of James’s jocularities.
  4. The two talks on textual editing with TEI (sessions 4 and 6) were both derived from one I gave in 2011. The initial part, about encoding critical apparatus, seemed a bit redundant (especially for an audience including e.g. Jean-Louis Lebrave), since we didn’t proceed to use the tags it introduces, and also because I spent too long on it; though it does provide some useful background. In the practical session, both Christof and I decided to use the Wilfred Owen examples used at this year’s Oxford Summer School, though with some revisions. In the first exercise on transcription, rather than making sense of an arbitrary non-XML transcription, students had to work out for themselves what was going on in the manuscript image, using the edited transcript provided purely for reference. They found this initially difficult, but once they’d realised that simply cutting and pasting would not be good enough, they became quite enthusiastic. During this exercise I also learned the interesting linguistic fact that the word raturer can be used for any kind of deletion, whereas the word biffer means specifically a deletion carried out by striking through something with a single line. This means that if I revise this exercise again (as I probably will) I shall be providing @rend values in French. (The word “stroked” used throughout the Oxford material seems to me just wrong, by the way. We’re not in the business of cuddling our texts. Au contraire.)
  5. Session five’s talk on the TEI Header was more or less unchanged (except for a few typos) from the version I gave earlier this year, itself translated from plenty of other older versions. I think that providing the context for a <msDesc> is useful; the talk also provides an opportunity to talk about named entities, a topic which was otherwise missing from our programme. The practical was again based on one from this year’s Oxford Summer School, though again with some modification, notably that the manuscript described surely contains two <msItem>s, not one (a skeletal example is given just after this list).
  6. For session 6, I used the second part of the textual editing talk more or less unchanged: the underlying concepts when defining zones seemed to go well, particularly once I’d admitted to being nul en math. The practical session was an entirely new one, made by translating Christof’s German version, which he produced on the basis of some notes from Elena describing how she’d presented her Proust prototype at a conference in Australia earlier this year. In the event, one of the nicest moments of the entire course was seeing one of my students punching the air in excitement after having successfully opened the SVG output from this exercise in Internet Explorer. There was a small hiccup when I realised that the version of Inkscape we were using was in German, not English, which made configuring it correctly rather tricky, but otherwise this all went far more satisfactorily than I expected. Which was, as they say, nice.
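To return to the <msDesc> point in session 5: purely by way of illustration (every value below is an invented placeholder, not the actual exercise material), the sort of skeleton I have in mind distinguishes the two items like this:

<msDesc>
  <msIdentifier>
    <settlement>Oxford</settlement>
    <repository>Bodleian Library</repository>
    <idno>MS. 1234</idno>
  </msIdentifier>
  <msContents>
    <!-- first item contained in the manuscript -->
    <msItem n="1">
      <title>First item</title>
    </msItem>
    <!-- second item contained in the manuscript -->
    <msItem n="2">
      <title>Second item</title>
    </msItem>
  </msContents>
</msDesc>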

Resolving the Durand Conundrum: some ODD thoughts


The Durand Conundrum is a jokey name for a serious question first raised by David Durand when the current TEI ODD XML format was being finalised at an early working group meeting, though I cannot now remember where. In the original TEI ODD system, that used for the production of all versions before P4, content models and system entities were declared using SGML syntax, which was then wrapped in ODD-defined containers of various kinds. When we moved to XML, the line of least resistance was to re-express those same SGML rules using RelaxNG, thus making ODD XML a hybrid beast of a language, in which everything except element content models was expressed in our own XML vocabulary, while element content models (and datatype definitions) used the RelaxNG XML vocabulary. David pointed out, quite reasonably, that this was a lazy compromise of a solution: we could probably express everything using RelaxNG, not just the content models for elements, and so why not ditch the ODD system entirely and use pure RelaxNG?

We called this a conundrum because we couldn’t come up with any entirely convincing argument against it, and perhaps there is none. A hybrid system is just not good engineering. It means that parts of the conceptual model which ODD expresses cannot be manipulated or specified using ODD itself: witness the contortions we go through to facilitate different interpretations of class references, or the recent debate about how to introduce interleaving. But the Durand conundrum can be resolved in two ways: one would indeed be to re-express everything in ODD in RelaxNG; the other would be to expand ODD to enable it to define content models natively, without having recourse to a loosely defined subset of RelaxNG. In this article, I would like to explore the second possibility.

The current set of P5 content models makes use of a (largely undocumented) subset of RelaxNG facilities, mainly as a consequence of the design goal of supporting schema generation in DTD and W3C Schema languages as well as RelaxNG, so it isn’t strictly true that we are currently using RelaxNG. Support for the DTD schema language in particular imposes many limitations on what would otherwise be possible, while the many additional facilities provided by W3C Schema and RelaxNG for content validation are hardly used at all (though equivalent facilities are now provided by the <constraintSpec> element). A few years ago, the demise of DTDs was confidently expected; in 2012, however, the patient remains in rude health, and it seems likely that support for DTDs will continue to be an ongoing requirement, even after support for P4 is formally withdrawn at the end of 2012. We therefore assume that whatever elements we use to specify content models will need to have the following characteristics:

  • the model permits alternation, repetition, and sequencing of individual elements, element classes, or sub-models (groups of elements)
  • only one kind of mixed content model, the classic (#PCDATA | foo | bar)*, is permitted
  • the SGML ampersand connector, (a & b) as a shortcut for ((a,b) | (b,a)), is not permitted
  • a parser or validator is not required to do look-ahead; consequently the model must be deterministic, that is, when applying the model to a document instance, there must be only one possible matching label in the model for each point in the document
To support repetition, we already have the attributes @minOccurs and @maxOccurs, which are defined locally on the <datatype> element. We propose that these should instead be supplied by an attribute class att.repeatable, whose members would include <datatype>, and also the existing elements <elementRef>, <classRef> and <macroRef>, which we repurpose for use within content model declarations.
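For orientation, here is a purely illustrative fragment of the current usage (the attribute name is invented, not taken from the Guidelines), in which @minOccurs and @maxOccurs on <datatype> allow between one and three pointer values:

<attDef ident="myPointers" usage="opt">
  <!-- the current P5 idiom: a RelaxNG reference wrapped in <datatype> -->
  <datatype minOccurs="1" maxOccurs="3">
    <rng:ref xmlns:rng="http://relaxng.org/ns/structure/1.0" name="data.pointer"/>
  </datatype>
</attDef>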

We also need to add four new elements: <model> to define the content model; <sequence> to indicate that its children form a sequence within a content model; <alternate> to indicate that its children can be alternated within a content model; and <pcdata> to indicate that the content model permits a text node at this point.

<model> might eventually perhaps replace the current <content> element, since it has the same function. It seems safer however to define a new root element for the new proposals in case everything goes horribly wrong. In the testModel.odd test implementation, this new element is added as an alternative to <content> wherever that occurs in the schema.

<sequence> and <alternate> are used to wrap parts of the content model being defined. <pcdata> is used to indicate the presence of a pure text node in the model: making it an explicit XML element makes it possible to write a Schematron constraint to implement the required restriction on mixed content models. I haven’t done that yet, but a rough sketch of what such a rule might look like is given below.
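Purely as a sketch (this is not part of the testModel.odd implementation, and the exact tests are open to debate), such a constraint might be expressed along these lines, enforcing that text may only alternate, freely and repeatably, with single elements, classes, or macros at the top level of a model:

<pattern xmlns="http://purl.oclc.org/dsdl/schematron">
  <!-- namespace bindings for the ODD elements are omitted for brevity -->
  <rule context="alternate[pcdata]">
    <!-- text may alternate only with individual elements, classes, or macros -->
    <assert test="not(sequence or alternate)">A mixed content model may not contain nested groups.</assert>
    <!-- the alternation must be optional and repeatable, as in (#PCDATA | foo | bar)* -->
    <assert test="@minOccurs = '0' and @maxOccurs = 'unlimited'">A mixed content model must be optional and repeatable.</assert>
    <!-- text is permitted only in the outermost alternation of a model -->
    <assert test="not(ancestor::sequence or ancestor::alternate)">Text may appear only at the top level of a content model.</assert>
  </rule>
</pattern>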

Using these elements, a content model such as ((a, (b|c)*, d+), e?) would be expressed as follows:

<model>
  <sequence>
    <sequence>
      <elementRef key="a"/>
      <alternate minOccurs="0" maxOccurs="unlimited">
        <elementRef key="b"/>
        <elementRef key="c"/>
      </alternate>
      <elementRef key="d" maxOccurs="unlimited"/>
    </sequence>
    <elementRef key="e" minOccurs="0"/>
  </sequence>
</model>

Note that the value for both @minOccurs and @maxOccurs is understood to be 1 unless it is explicitly provided on an element, or on its immediate parent.

Classes are handled in the same way. Thus, a content model such as (model.a, model.b+, (model.c | model.d)*) would be expressed as follows:

<model>
  <sequence>
    <classRef key="model.a"/>
    <classRef key="model.b" maxOccurs="unlimited"/>
    <alternate minOccurs="0" maxOccurs="unlimited">
      <classRef key="model.c"/>
      <classRef key="model.d"/>
    </alternate>
  </sequence>
</model>

When processing this declaration, an ODD processor has to expand each <classRef> to a model which will match any single member of the specified class, by default.

This behaviour may be varied by supplying a different value for the attribute @expand which we propose to make available on <classRef>. This attribute takes the same possible values as the existing @generate attribute on the <classSpec> element. For example, assuming that elements a and b are the members of class model.ab, <classRef key="model.ab" expand="sequence"/> is interpreted as a,b rather than as (a | b).

A mixed content model such as (#PCDATA | a | model.b)* would be expressed as follows:

<model>
  <alternate minOccurs="0" maxOccurs="unlimited">
    <pcdata/>
    <elementRef key="a"/>
    <classRef key="model.b"/>
  </alternate>
</model>

References to existing TEI-defined macros are handled in the same way, using the existing <macroRef> element. Where the body of a macro is a <content> element rather than a <valList>, it will of course be necessary to replace that by an equivalent <model>.
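To give one last invented fragment (only a sketch of the proposed syntax, not an actual TEI definition), a model combining an optional element with one of the existing macros would simply reference the macro by name:

<model>
  <sequence>
    <elementRef key="head" minOccurs="0"/>
    <!-- pulls in the content defined by the existing macro -->
    <macroRef key="macro.paraContent"/>
  </sequence>
</model>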

Report from the Scientific Council of the MRSH

Last week I had the pleasure of attending the first meeting of the recently reconstituted Scientific Council (CS) of the Maison de la Recherche en Sciences de l’Homme on the campus of the Université de Caen; this was the occasion for a presentation of the full range of remarkable and original activities of this innovative and atypical Maison. (Anyone who does not already know the MRSH is strongly advised to consult its online presentation.) Of its six activity poles (and 27 research teams) we worked on only half, and even so there was enough to keep us busy for three very full days. I also had the pleasure of getting to know the other members of the CS, an interesting group of twelve distinguished people of several nationalities (Swiss, English, Norwegian, Italian, Québécois) and very varied expertise (geographers, literary scholars, historians, computer scientists…). If there was a preponderance of white hair around the table, this in no way reduced the intensity and interest of our discussions with the staff of the Maison and with other figures holding important responsibilities in the region; all this took place within a very full programme of meetings organised by the director, Pascal Buléon: dinner with the DRAC, breakfast at the regional council in the presence of its president, a Council session attended by the député-maire of Caen, by the President of the University (and by her successor, elected that very day), and also by the regional delegate of the CNRS. It was thus a crash course for me in the French administrative structures which continue to fascinate and (I must admit) confound me. And so we had to introduce ourselves (“Je m’appelle Lou Burnard, je suis …”) several times over, with inevitable, not to say random, variations.

The purpose of the meeting was not, of course, to carry out a formal evaluation of the Maison’s activities, that having just been done by AERES (which gave it a very favourable rating, if I am not mistaken). We were there to discuss, to add our little grains of salt to the rich pudding of activities and of the intellectual procedures underlying them… Nevertheless an informal evaluation is almost unavoidable, since our reactions were solicited by our president to help him write his report (he is a celebrated geographer, Guy Di Méo of Bordeaux, elected unanimously at our first session). So here are a few small observations I took away from this visit:

  • The richness and variety of the opportunities for working in an interdisciplinary, indeed multidisciplinary, way (our Québécois colleague explained the distinction to us, but I have not retained it) were clearly demonstrated in the reports from each of the activity poles and from the directors of the units concerned;
  • The facilities offered by the Maison seem well designed to meet the needs of its users and to encourage a fruitful broadening of activities; the one obstacle is the lack of space, the current premises being overcrowded;
  • There is an evident variety in levels of activity; some of the actors seem to me genuine leaders in their fields (notably the “digital” and “rural” poles), while others seem to me to show nothing exceptional. A university is, of course, a place of variety, but it remains essential, in my view, to promote a culture of shared expertise, and not to be afraid of winding up or rethinking activities which, for whatever reason, fail to organise themselves effectively;
  • We noted, with some surprise, that among the teams presented there seemed to persist some ignorance of the activities of the Maison’s other partners. It would be advantageous, in my view, to promote a somewhat stronger “esprit de maison”, since the possibilities for synergy, given the expertise available, are far from negligible. For example, the activities of the “Risks” pole, significant and multidisciplinary as they are, could nevertheless benefit from the linguistic competence of the CRILET centre; the very comprehensive (and world-renowned) publishing policy of the “Rural” pole could perhaps benefit from some reflection with colleagues from the digital pole on a possible “digital turn”. I stress that this is of course a matter of collegial reflection (in the Oxford sense) and not of restructuring already very complex structures!

In conclusion, it must be admitted that the occasion did not lack sybaritic interludes to complement the intellectual rigours, notably the meals, on which I shall not dwell (a few photos are available). We were also able to do a little sightseeing, notably in the Maison’s library, which has received the historic collection of the Ministry of Agriculture, and at IMEC, a site magnificent both architecturally (it is housed in a former abbey, restored and refitted in a most attractive way) and in terms of its contents (deposited here are the personal archives of several hundred modern writers and artists). Note that this archive could well benefit from the technical expertise of (for example) the digital pole to better preserve the part of its holdings existing only in digital form; this would testify to the importance of the network of competences facilitated and made available by the MRSH.

Un point hors de discussion

It’s not often that my good friend Jean-Daniel gets indignant enough to post on Facebook a notice of something he’s read rather than an announcement of his current whereabouts, so I feel particularly indebted to him for having alerted me to the existence of a review article appearing in the Bulletin of the centre d’études médiévales d’Auxerre, written by one Alain Guerreau, a distinguished medieval historian, as I learn from his extensive Wikipedia entry. This entry makes no reference at all to his experience or expertise in the area which forms the topic of the article, but I am not sure that I would in any case trust anything produced by someone feeling the need to resort to UNDERLINED CAPITALS or sudden splatters of bold to make an argument, much less by someone quite so fond of such dogmatic phrases (“il n’y a qu’un choix possible … c’est un point hors de discussion … c’est la seule voie possible … c’est un must absolu”).

Nevertheless, the article does make some sound if hardly controversial recommendations about the need to use Unicode (here called “UTF-8”) and the usefulness of FOSS – Free and Open Source Software. It’s disappointing to come across a French speaker failing to point out that the French language actually boasts two words for “free” (gratuit and libre) corresponding with its two quite different senses – even more to find a francophone systematically choosing the wrong one, but you can’t have everything. I am also grateful for the pointers Guerreau provides to some software of which I would otherwise have been unaware, and for his endorsement of some others which I would certainly second (examples include txm, antconc, and cqp, all of which surely must be a part of any self-respecting text analyst’s armoury these days). It’s a pity that these recommendations come along with a tirade against the TEI, which Guerreau engagingly terms une gaspillage et perte de temps. For him, the efforts of generations of library and information scientists to define ways of classifying and structuring information have been a complete waste of time, if not worse (amongst his politer remarks about them is a reference to “le fantasme aussi ancient que recurrent d’une “mathesius universalis”). Not content with putting the boot into those poor misguided librarians, he then attributes the same fantastic objective to the “poignée d’informaticiens, essentiallement anglo-saxons, dépourvus autant de connaissances historiques que d’esprit critique” which he apparently believes gave rise to the Text Encoding Initiative.

It’s hard to know where to start correcting the fallacies in this part of his article, but for starters, the TEI was not designed by computer scientists, nor by people lacking in historical or critical awareness, but rather emerged from a productive conversation amongst several hundred scholarly users and creators of digitized resources worldwide, a conversation which has been going on for over three decades now and shows no sign of running out of steam. It seems ironic that Guerreau considers learning perl and regexp syntax neither too long nor too complex for the timorous, and yet a page later is busily asserting that no-one could possibly understand more than 25 of the tags proposed by the TEI – which is therefore at all costs to be avoided.

It’s even more ironic to read in section 4 “Ce qui manque” the claim that no-one has ever tried to define a reliable way of recording “de manière structurée toutes les variantes d’un texte”. Really? I think a cursory look at the literature will show that textual editing and textual variation has been an area in which the use of the TEI has established itself over the years. That’s not to say that every textual editor uses it, much less that those who do use it in identical ways, but it is absurd to claim that there is no open source software available to support it or that the TEI has nothing to offer in this domain. To quote M Guerreau himself, “on ne peut que regretter que les historiens et philologues le sachent à peine, et ne les [i.e. les outils FOSS] utilisent qu’à doses homéopathiques”. In his second conclusion (“sur lequel on n’insistera jamais assez”) he rightly prioritizes the intellectual effort of understanding a source above mere technical skills, and rightly insists that “pour chaque corpus il faut bien comprendre et saisir les specifités”. Which is, of course, exactly why the TEI not only offers you more than 25 tags, but also expects you to decide for yourself how to use them.

Encoding documents and collections at Caen

And so, once more, and maybe for the last time, to Caen for Encodage de documents et de collections, the two-day culmination of the seminar series of Caen’s Pôle document numérique, ‘organisé dans le cadre de la chaire d’excellence de Matthew James Driscoll’. We are met this time in the magnificent Belvedere room, affording splendid views over the surrounding countryside, which is bathed in unwonted spring sunshine. Matthew kicked off with an overview of his handrit project, focussing this time on the TEI’s manuscript description module, its evolution, and how it fits the needs of his project (or was adjusted to do so); this was nicely complemented by a description, from Örn Hrafnkelsson of the National Library in Reykjavík, of the manuscript holdings (crossing the frontier between Library and Archive) and of the digitization workflow used by the Icelandic partners in the project.

The virtual reconstitution of the great libraries of the middle ages is one of the projects which mass digitization has been promising us for many years. The Bibliothèque virtuelle du Mont Saint-Michel is a classic example: Catherine Jacquemard, from CRAHAM at the Université de Caen Basse-Normandie, Jean-Luc Leservoisier, from the Scriptorial d’Avranches (where many but by no means all of the surviving manuscripts from the Abbey of Saint-Michel are now holed up), and Marie Bisson, the technician responsible for finding ways of pooling and harmonising the scattered records describing that library, gave a good report from the coal face where those actually trying to deliver on that promise have been labouring, stubbing their toes occasionally on the mutually inconsistent cataloguing of manuscripts in various institutions.

We then broke for lunch, noticing en passant that the campus seemed to have acquired a number of students disguised as angels, smurfs, gangsters, and other figures of popular iconography.

After lunch, Marie-Luce Demonet, of the CESR, Université de Tours, gave a whirlwind overview of the activities of the Bibliothèques Virtuelles Humanistes: I noted in particular the way it needs to treat uniformly both manuscript and printed sources, the ingenious use of iconclass as a unifying vocabulary to provide image search facilities across both miniatures and ornamented letters, and the availability of an online lexicon of printers’ marks, but there was much more meat besides.

The SCRIPTA project at Caen uses a traditional mySQL database to catalogue charters, but is now evolving into something more like an XML database by means of the addition of a front end written in XML Mind. This was presented by Pierre Bauduin and Tamiko Fujimoto from the CRAHAM unit at Caen, with technical support from Anne Goloubkoff of the Pôle Document numérique. About this point in the day, the growing number of angels, smurfs, gangsters etc. outside the building reached a critical mass and started its rather noisy procession around the building and indeed the town, which made it difficult to follow all of Tamiko’s walk through the software. I did note however that the TEI markup deployed was using some rather politically incorrect values for its @type attributes, derived apparently from recommended practice in the archival community.

I rounded off the day with another appearance of my talk on the History of the TEI, which I still haven’t quite got to fit into the confines of a 45 minute presentation, despite two previous attempts. Ah well. If on the other hand, you’re more interested in angels, smurfs, gangsters, etc. then you may prefer to look at my photos.

Next morning, bright and early, we listened to Georg Vogeler from Graz (now located in something called the Center for information-modelling in the humanities, I learn: probably one word in German) describing Monasterium.net, which is a kind of collaborative digital library and hence maybe a collaborative research environment. It holds information about thousands of charters and legal documents, either aggregated or syndicated from 99 other archives in a dozen countries worldwide. As such, it is itself arguably (I say arguably because we argued about the meaning of the term) a kind of finding aid. It uses, of course, its own schema, drawing on both EAD and TEI P4, the former for the archive-level description, the latter for the encoding of individual documents. The resulting CEI schema is arguably neither fish nor fowl, but does have quite an impressive implementation, using eXist via XForms and a bunch of Ajax controls to deliver a cool integrated browse and search interface for finding aid and document alike (though only 10% of the documents are transcribed). Clearly any kind of cross-archive search’n’browse facility is a Good Thing, though whether this constitutes any kind of “digital edition”, collaborative or otherwise, is more debatable, and indeed we debated it.

Lists of documents, and the collections which gave rise to them, were the theme of this second day. Lucien Reynhout from the Bibliothèque Royale de Belgique described a Belgian project to create “Sanderus Electronicus”, a digital edition of an important 17th-century list of lists of books, made by one Antonius Sanderus: this too was a collaborative project. Sanderus published as a single work about sixty different lists derived from the reports of several correspondents whom he had asked to describe the holdings of several significant libraries, and as such it exhibits all the problems of inconsistency of description and detail we’re accustomed to in the digital domain, deriving perhaps also from the same ontological anxieties: what are the individual components of such lists? Which object in the FRBR model corresponds with their constituents? For example, what does “duo Iuvenales” actually mean? Sanderus Electronicus will take the common-sense view that it is composed of list items (or so I believe) rather than anything more bibliographic, though it will also use a database called BIBALES to hold entries for the people, places, works, etc. referenced.

After coffee, the man from the ministry, an amiable person called Florent Palluault, explained just why every archive in France, if it creates a catalogue at all, will do so using EAD, and how it came about that the digital version of the venerable Catalogue général des manuscrits des bibliothèques, all 116 volumes of it, is being updated and produced to the same standard. He described the workflow, which reminded me of some other large-scale retrodigitization projects: the original print volumes had been OCRed and automatically split up into separate Word documents for editing; each notice, managed within a database, had then been exported as a Word document with some degree of automatic conversion to EAD on the basis of the typography. Jérôme Sirdey (Bibliothèque nationale de France) then described PALME, a new project aiming to convert an existing MARC-based catalogue of 20th-century French literary mss. into EAD, and Palluault then concluded with some speculation about future directions, notably a planned catalogue collectif de France (CCFR): an ambitious union catalogue of mss holdings across CALAME (funded by CNRS institutions), the BNF, and the new digital CGM, all still based on EAD, which clearly still has a great future in France.

EAD and TEI, and whether there was any hope for a happy marriage between them, was a theme to which Florence Clavaud (Ecole des Chartes) returned after lunch. Florence is a member of the expert group which is currently proposing revisions to the EAD standard and to the accompanying (French) Guide to best practice for its application, as well as being expert in both EAD and TEI, and so she had lots to say on both, unfortunately rather more than she really had time for on this occasion. Anne-Marie Turcan and Hanno Wijsman from IRHT concluded the session by presenting work building on the Biblifram project, notably a database under development at IRHT (and allegedly only accessible there, for IPR reasons) to support research in the history of the book: Libraria et Bibale.

The two days were rounded off in a very satisfactory way by Torsten Schassen, from the Herzog August Bibliothek in Wolfenbüttel, recounting his experience as a participant in the EU-funded digitisation project Europeana Regia, which aims to catalogue and provide access to all the mss from three specific royal collections now dispersed across a number of European libraries, and hence catalogued in a number of different formats (MARC, EAD, TEI, MAB, MXML…) and seven different languages. Possibly an unusual aspect of the project, or one that Torsten chose to emphasize at any rate, was a requirement that the resulting system be both usable and interesting for the general public. As the party responsible for metadata, the HAB had the thankless task of trying to define a kind of Dublin Core minimal set for manuscript description, which is reassuringly a clean subset derived from TEI P5, even if the European Library cannot currently handle TEI format data directly. The minimum data set was also internationalised, even though Europeana itself cannot currently handle multilingual data. There is even a button on the website which sends you the TEI <msDesc> for each manuscript that has one.

The take-away message from this presentation, as from the seminar as a whole, was encouraging: TEI is proving its usefulness in a variety of complex document management situations. I also think some serious investigation of the feasibility of integrating EAD within it is warranted: not much is needed and much would be gained.

A day in Lower Normandy

And so to Caen, whose University campus boasts magnificent if vaguely fascist architecture, at the top of a hill, commanding splendid views over the urban sprawl to the countryside beyond, and liberally decked with graffiti to bewilder future epigraphers.

OK EpiDoc, encode this.

The University Press of Caen having joined forces with two other departments to offer him a visiting fellowship, my distinguished and white-haired Danish colleague Matthew Driscoll is organising a series of seminars over the next few months, and I am here for the kick-off session “TEI et encodage des sources”. About a dozen or so TEI fans are gathered in the Belvedere Room, which is vast and very cold but still affords delightful prospects (as they say).

First up is Julia Rogers, a local doctorante, describing the online edition of Descartes on which she is working under the watchful tutelage of Pierre-Yves Buard, inter alia. No manuscript survives of Descartes’ works, and modern editors have played fairly fast and loose with them as a consequence: this impeccable electronic edition returns to the first printed editions as its basis, but uses all the possibilities of digital editing. Text is captured and maintained collaboratively by up to 15 scholarly editors, using a customisation of XML Mind to enforce a simple P5-conformant protocol designed by Pierre-Yves (and built with Roma), allowing for such niceties as the addition of editorial notes, citations, tracking of quotations, mathematical formulae (currently done in TeX, though this will change), etc. Elsewhere in the University a fairly sophisticated morphologically-aware search engine is being developed, so that the original text can be queried in Modern French. The online edition will also integrate high quality page images supplied by the BNF, compensating for the decision not to encode all features of the layout. Impeccable, as I said. I was also impressed (as usual) by Sourcencyme, presented by Isabelle Draelants from Nancy and Catherine Jacquemard from Caen. This ongoing project will combine a textual corpus of medieval encyclopaedias (about seven so far) with a sophisticated indexing system tracing the chains of reference and citation amongst them, extending in some cases into the 19th century. As a real hand-built hypertext, it is thus increasingly becoming the thing it represents: a complete encyclopaedia of medieval learning, endowed with tools for collaborative editing and annotation, and also with a specialist journal-like addition published by the ubiquitous revues.org. However, unless I misunderstand, a significant number of the texts it treats are owned by Brepols, which may pose access problems. Next, before lunch, we were entertained by Vincent Jolivet and Frederick Glorieux from the Ecole Nationale des Chartes, whose home-grown RelaxNG tools continue to advance in the general direction of TEI conformance. They have been working on a direct conversion from ODT to TEI, using the same principles as Sebastian Rahtz’ stylesheets but aiming at a more specific home-grown RelaxNG schema, now expressed (I think) using an ODD. This was all very satisfactory, as is the fact that the tools in their workshop continue to be readily accessible.

Lunch (a three-course affair involving some rather good salmon, and a chocolate mousse) was also highly satisfactory, and we reconvened much restored for an afternoon combining three short project presentations with set pieces from Matthew and from myself. Subhasree Pasupathy, from Caen, first of the three, described her use of the TEI mechanisms for representing textual variation in her thesis on the projects of the Abbé de St Pierre. Thomas Lebarbe introduced us to the pleasingly heterodox digital Stendhal project at Grenoble, during which I wondered, not for the first time, how hard it could be to write an ODD corresponding to their home-grown DTD. Finally, Jorge Fins from Tours showed us how the Bibliothèques Virtuelles Humanistes is now using both XTF and Philologic to search its corpus.

And so to the grand old man of TEI-based editing: not me, but Matthew Driscoll. He spoke in English but (as someone said to me afterwards) with such limpidity of discourse as to pose no problem (which sounds even better in French). Citing W. S. Greg’s distinction between “substantive” and “accidental” variation, he showed how TEI markup enables one to capture both, but display either, by the judicious tweaking of rather cunning stylesheets developed by Eric Haswell. He also talked about gaiji, news of the existence and facilities of which does not seem to have penetrated everyone’s consciousness to the extent that it probably should have by now. And finally, a good half of the material I had prepared for my own talk having been presented by previous speakers, I was able to close the day in a suitably forward-looking way by focussing mainly on the new concepts proposed for handling l’édition génétique (sourceDoc, mod, change, etc.) in TEI P5, which all seemed to go down quite well.