Report from the Scientific Council of the MRSH

Last week I had the pleasure of attending the first meeting of the recently reconstituted Scientific Council of the Maison de la Recherche en Sciences de l’Homme on the campus of the Université de Caen; this was the occasion for a presentation of the full range of remarkable and original activities of this innovative and atypical Maison. (Anyone not already familiar with the MRSH is strongly advised to consult its online presentation.) Of its six poles of activity (and 27 research teams) we covered only about half, and even so there was enough to keep us busy for three very full days. I also had the pleasure of getting to know the other members of the Council, an interesting group of twelve distinguished people of several nationalities (Swiss, English, Norwegian, Italian, Québécois) and of very varied expertise (geographers, literary scholars, historians, computer scientists…). If there was a preponderance of white hair around the table, this in no way diminished the intensity and interest of our discussions with the staff of the Maison and with other figures holding important responsibilities in the region; these took place over a very full programme of meetings organised by the director, Pascal Buléon: dinner with the DRAC, breakfast at the conseil régional in the presence of its president, and a session of the Council attended by the député-maire of Caen, by the President of the University (and by her successor, elected that very day), and also by the regional delegate of the CNRS. It was thus, for me, a crash course in French administrative structures, which continue to fascinate and (I must admit) to confound me. And so we had to introduce ourselves (“Je m’appelle Lou Burnard, je suis…”) on several occasions, with inevitable, indeed random, variations.

The purpose of the meeting was not, of course, to carry out a formal evaluation of the Maison’s activities, that having been done quite recently by AERES (which gave it a very favourable rating, if I am not mistaken). We were there to discuss, to contribute our little grains of salt to the rich pudding of its activities and of the intellectual procedures underlying them… Nevertheless, an informal evaluation is almost unavoidable, since our reactions were solicited by our president to help him prepare his report (he is a celebrated geographer, Guy Di Méo of Bordeaux, elected unanimously at our first session). Here, then, are a few small observations I took away from the visit:

  • The richness and variety of the opportunities for working in an interdisciplinary, indeed multidisciplinary, way (our Québécois colleague explained the distinction to us, but I have not retained it) were clearly demonstrated in the reports from each of the poles of activity and from the directors of the units concerned;
  • The facilities offered by the Maison seem well designed to meet the needs of its users and to encourage a fruitful broadening of activities; the one obstacle is the lack of space, since the current premises are overcrowded;
  • There is an evident variety in the levels of activity: some of the actors seem to me to be genuine leaders in their fields (notably the “numérique” and “rural” poles), while others seem to me to show nothing exceptional. A university, of course, is a place of variety, but it remains essential, in my view, to promote a culture of shared expertise, and not to be afraid of winding up or rethinking activities which, for whatever reason, fail to organise themselves effectively;
  • We noted, with some surprise, that among the teams presented there seemed to persist a degree of ignorance of the activities of the Maison’s other partners. It would be advantageous, in my view, to promote a somewhat stronger “esprit de maison”, since the possibilities for synergy, given the expertise available, are far from negligible. For example, the activities of the “Risques” pole, significant and multidisciplinary though they are, could nevertheless benefit from the linguistic competences of the CRILET centre; and the very comprehensive (and internationally recognised) publishing policy of the “Rural” pole might likewise benefit from some joint reflection with colleagues from the numérique pole on a possible “digital turn”. I stress that these are, of course, collegial reflections (in the Oxford sense) and not proposals for restructuring what are already very complex structures!

In conclusion, I must admit that there was no shortage of sybaritic interludes to complement the intellectual rigours, notably the meals, on which I shall not dwell (a few photos are available). We were also able to do a little sightseeing, notably in the Maison’s library, which has received the historic collection of the Ministry of Agriculture, and at IMEC, a site magnificent both architecturally (it is housed in a former abbey, restored and refitted in a very attractive way) and in its contents (the personal archives of several hundred modern writers and artists are deposited there). Note that this archive could well benefit from the technical expertise of, for example, the numérique pole, the better to safeguard the part of its holdings that exists only in digital form; this would be further testimony to the importance of the network of competences brought together and made available by the MRSH.


Un point hors de discussion

It’s not often that my good friend Jean-Daniel gets indignant enough to post on Facebook a notice of something he’s read rather than an announcement of his current whereabouts, so I feel particularly indebted to him for having alerted me to the existence of a review article appearing in the Bulletin of the Centre d’études médiévales d’Auxerre, written by one Alain Guerreau, a distinguished medieval historian, as I learn from his extensive Wikipedia entry. This entry makes no reference at all to his experience or expertise in the area which forms the topic of the article, but I am not sure that I would in any case trust anything produced by someone feeling the need to resort to UNDERLINED CAPITALS or sudden splatters of bold to make an argument, much less by someone quite so fond of such dogmatic phrases (“il n’y a qu’un choix possible … c’est un point hors de discussion … c’est la seule voie possible … c’est un must absolu”).

Nevertheless, the article does make some sound if hardly controversial recommendations about the need to use Unicode (here called “UTF-8”) and the usefulness of FOSS – Free and Open Source Software. It’s disappointing to find a French speaker failing to point out that the French language actually boasts two words for “free” (gratuit and libre), corresponding to its two quite different senses – and even more so to find a francophone systematically choosing the wrong one, but you can’t have everything. I am also grateful for the pointers Guerreau provides to some software of which I would otherwise have been unaware, and for his endorsement of some other tools which I would certainly second (examples include TXM, AntConc, and CQP, all of which must surely be part of any self-respecting text analyst’s armoury these days). It’s a pity that these recommendations come along with a tirade against the TEI, which Guerreau engagingly terms “une gaspillage et perte de temps”. For him, the efforts of generations of library and information scientists to define ways of classifying and structuring information have been a complete waste of time, if not worse (amongst his politer remarks about them is a reference to “le fantasme aussi ancien que récurrent d’une ‘mathesis universalis’”). Not content with putting the boot into those poor misguided librarians, he then attributes the same fantastic objective to the “poignée d’informaticiens, essentiellement anglo-saxons, dépourvus autant de connaissances historiques que d’esprit critique” which he apparently believes gave rise to the Text Encoding Initiative.

It’s hard to know where to start correcting the fallacies in this part of his article, but for starters: the TEI was not designed by computer scientists, nor by people lacking in historical or critical awareness; it emerged rather from a productive conversation amongst several hundred scholarly users and creators of digitized resources worldwide, a conversation which has been going on for over three decades now and shows no sign of running out of steam. It seems ironic that Guerreau considers learning Perl and regexp syntax neither too time-consuming nor too complex for the timorous, and yet a page later is busily asserting that no-one could possibly understand more than 25 of the tags proposed by the TEI – which is therefore to be avoided at all costs.

It’s even more ironic to read in section 4 (“Ce qui manque”) the claim that no-one has ever tried to define a reliable way of recording “de manière structurée toutes les variantes d’un texte”. Really? A cursory look at the literature will show that textual editing and textual variation constitute an area in which the use of the TEI has established itself over the years. That’s not to say that every textual editor uses it, much less that those who do all use it in identical ways, but it is absurd to claim that there is no open source software available to support this kind of work, or that the TEI has nothing to offer in this domain. To quote M. Guerreau himself, “on ne peut que regretter que les historiens et philologues le sachent à peine, et ne les [i.e. les outils FOSS] utilisent qu’à doses homéopathiques”. In his second conclusion (“sur lequel on n’insistera jamais assez”) he rightly prioritizes the intellectual effort of understanding a source above mere technical skills, and rightly insists that “pour chaque corpus il faut bien comprendre et saisir les spécificités”. Which is, of course, exactly why the TEI not only offers you more than 25 tags, but also expects you to decide for yourself how to use them.
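For the record, the TEI’s critical apparatus module does exactly the kind of structured recording of variants that Guerreau says no-one has attempted. Here is a minimal sketch of my own, using the parallel-segmentation method (the readings and witness sigla are invented for illustration):

    <app>
      <lem wit="#A">dolor</lem>
      <rdg wit="#B">color</rdg>
      <rdg wit="#C">dolour</rdg>
    </app>

Each @wit value points to a <witness> declared in a <listWit> in the TEI header, so that a reading text for any witness, or a conventional printed apparatus, can be generated from one and the same encoding.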

Encoding documents and collections at Caen

And so, once more, and maybe for the last time, to Caen for Encodage de documents et de collections, the two-day culmination of the seminar series of Caen’s Pôle document numérique, “organisé dans le cadre de la chaire d’excellence de Matthew James Driscoll”. We met this time in the magnificent Belvedere room, affording splendid views over the surrounding countryside, which was bathed in unwonted spring sunshine. Matthew kicked off with an overview of his handrit project, focussing this time on the TEI’s manuscript description module, its evolution, and how it fits the needs of his project (or was adjusted to do so); this was nicely complemented by a description from Örn Hrafnkelsson of the National Library in Reykjavík of the manuscript holdings (crossing the frontier between Library and Archive) and of the digitization workflow used by the Icelandic partners in the project.
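By way of illustration, the heart of that module is the <msDesc> element, which nests identification, contents, physical description, and history. The following minimal sketch is mine – the shelfmark and contents are invented, not taken from handrit.is:

    <msDesc>
      <msIdentifier>
        <settlement>Reykjavík</settlement>
        <repository>Landsbókasafn Íslands</repository>
        <idno>Lbs 0000 4to</idno>
      </msIdentifier>
      <msContents>
        <msItem>
          <title>An example saga</title>
          <textLang mainLang="is">Icelandic</textLang>
        </msItem>
      </msContents>
      <physDesc>
        <objectDesc form="codex">
          <supportDesc material="chart"/>
        </objectDesc>
      </physDesc>
      <history>
        <origin>
          <origDate notBefore="1700" notAfter="1750">First half of the 18th century</origDate>
        </origin>
      </history>
    </msDesc>

Much of the discussion at Caen, as in the handrit project itself, turns on how much further detail to push into such records, and in which of these containers it belongs.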

The virtual reconstitution of the great libraries of the middle ages is one of the things which mass digitization has been promising us for many years. The Bibliothèque virtuelle du Mont Saint-Michel is a classic example: Catherine Jacquemard, from CRAHAM at the Université de Caen Basse-Normandie, Jean-Luc Leservoisier, from the Scriptorial d’Avranches (where many but by no means all of the surviving manuscripts from the abbey of Mont Saint-Michel are now holed up), and Marie Bisson, the technician responsible for finding ways of pooling and harmonising the scattered records describing that library, gave a good report from the coal face where those actually trying to deliver on that promise have been labouring, stubbing their toes occasionally on the mutually inconsistent cataloguing of manuscripts in different institutions.

We then broke for lunch, noticing en passant that the campus seemed to have acquired a number of students disguised as angels, smurfs, gangsters, and other figures of popular iconography.

After lunch, Marie-Luce Demonet, of the CESR, Université de Tours, gave a whirlwind overview of the activities of the Bibliothèques Virtuelles Humanistes: I noted in particular the need to treat manuscript and printed sources uniformly, the ingenious use of Iconclass as a unifying vocabulary to provide image search across both miniatures and ornamented letters, and the availability of an online lexicon of printers’ marks, but there was much more meat besides.

The SCRIPTA project at Caen uses a traditional MySQL database to catalogue charters, but is now evolving into something more like an XML database through the addition of a front end written in XML Mind. This was presented by Pierre Bauduin and Tamiko Fujimoto from the CRAHAM unit at Caen, with technical support from Anne Goloubkoff of the Pôle Document numérique. At about this point in the day, the growing number of angels, smurfs, gangsters, etc. outside the building reached critical mass and began a rather noisy procession around the building, and indeed the town, which made it difficult to follow all of Tamiko’s walk-through of the software. I did note, however, that the TEI markup deployed was using some rather politically incorrect values for its @type attributes, derived apparently from recommended practice in the archival community.

I rounded off the day with another outing for my talk on the History of the TEI, which I still haven’t quite managed to fit into the confines of a 45-minute presentation, despite two previous attempts. Ah well. If, on the other hand, you’re more interested in angels, smurfs, gangsters, etc., then you may prefer to look at my photos.

Next morning, bright and early, we listened to Georg Vogeler from Graz (now located in something called the Center for Information Modelling in the Humanities, I learn: probably one word in German) describing Monasterium.net, a kind of collaborative digital library and hence, maybe, a collaborative research environment. It holds information about thousands of charters and legal documents, either aggregated or syndicated from 99 other archives in a dozen countries worldwide. As such it is itself arguably a kind of finding aid (I say arguably because we argued about the meaning of the term). It uses, of course, its own schema, drawing on both EAD and TEI P4: the former for the archive-level description, the latter for the encoding of individual documents. The resulting CEI schema is arguably neither fish nor fowl, but it does have quite an impressive implementation, using eXist via XForms and a bunch of Ajax controls to deliver a cool integrated browse and search interface for finding aid and document alike (though only 10% of the documents are transcribed). Clearly any kind of cross-archive search-and-browse facility is a Good Thing; whether it constitutes any kind of “digital edition”, collaborative or otherwise, is more debatable – and debate it we did.

Lists of documents, and the collections which gave rise to them, were the theme of this second day. Lucien Reynhout from the Bibliothèque Royale de Belgique described a Belgian project to create “Sanderus Electronicus”, a digital edition of an important 17th-century list of lists of books made by one Antonius Sanderus. This too was a collaborative undertaking: Sanderus published as a single work some sixty different lists derived from the reports of correspondents whom he had asked to describe the holdings of various significant libraries, and as such it exhibits all the problems of inconsistency of description and detail that we’re accustomed to in the digital domain, deriving perhaps also from the same ontological anxieties: what are the individual components of such lists? Which object in the FRBR model corresponds to their constituents? What, for example, does “duo Iuvenales” actually mean? Sanderus Electronicus will take the common-sense view that the work is composed of list items (or so I believe) rather than anything more bibliographic, though it will also use a database called BIBALES to hold entries for the people, places, works, etc. referenced.
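Just to make the modelling question concrete, here is one way such an entry might be encoded as a plain list item, keeping the interpretative guesswork in a note rather than forcing it into bibliographic structure – a sketch of my own, not the project’s actual markup:

    <list type="bookList">
      <item n="12">
        <q>duo Iuvenales</q>
        <note>presumably two copies of the works of Juvenal; whether manuscript or printed is not stated</note>
      </item>
    </list>

The attraction of this approach is that it commits the encoder only to what Sanderus actually wrote, leaving identification of FRBR-style works or items to the linked BIBALES records.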

After coffee, the man from the ministry, an amiable person called Florent Palluault, explained just why every archive in France, if it creates a catalogue at all, will do so using EAD, and how it came about that the digital version of the venerable Catalogue général des manuscrits des bibliothèques, all 116 volumes of it, is being updated and produced to the same standard. He described the workflow, which reminded me of some other large-scale retrodigitization projects: the OCR of the original ancient printed volumes had been automatically split into separate Word documents for editing, one per notice, each managed within a database and then exported with some degree of automatic conversion to EAD on the basis of the typography. Jérôme Sirdey (Bibliothèque nationale de France) then described PALME, a new project aiming to convert an existing MARC-based catalogue of 20th-century French literary manuscripts into EAD, and Palluault concluded with some speculation about future directions, notably a planned Catalogue collectif de France (CCFr): an ambitious union catalogue of manuscript holdings across CALAME (for CNRS-funded institutions), the BNF, and the new digital CGM, all still based on EAD, which clearly still has a great future in France.

EAD and TEI, and whether there is any hope of a happy marriage between them, was a theme to which Florence Clavaud (École des Chartes) returned after lunch. Florence is a member of the expert group currently proposing revisions to the EAD standard and to the accompanying (French) guide to best practice for its application, as well as being expert in both EAD and TEI, so she had lots to say on both – unfortunately rather more than she really had time for on this occasion. Anne-Marie Turcan and Hanno Wijsman from IRHT concluded the session by presenting work building on the Biblifram project, notably a database under development at IRHT (and allegedly only accessible there, for IPR reasons) to support research in the history of the book: Libraria and Bibale.

The two days were rounded off in a very satisfactory way by Torsten Schassen, from the Herzog August Bibliothek in Wolfenbüttel, recounting his experience as a participant in Europeana Regia, an EU-funded digitisation project which aims to catalogue and provide access to all the manuscripts from three specific royal collections now dispersed across a number of European libraries, and hence catalogued in a number of different formats (MARC, EAD, TEI, MAB, MXML…) and seven different languages. Possibly an unusual aspect of the project, or one that Torsten chose to emphasize at any rate, was the requirement that the resulting system be both usable and interesting for the general public. As the party responsible for metadata, the HAB had the thankless task of trying to define a kind of Dublin Core minimal set for manuscript description, which is reassuringly a clean subset derived from TEI P5, even if the European Library cannot currently handle TEI format data directly. The minimum data set was also internationalised, even though Europeana itself cannot currently handle multilingual data. There is even a button on the website which sends you the TEI <msDesc> for each manuscript that has one.

The take-away message from this presentation, as from the seminar as a whole, was encouraging: TEI is proving its usefulness in a variety of complex document management situations. I also think some serious investigation of the feasibility of integrating EAD within it is warranted: not much is needed and much would be gained.

A day in Lower Normandy

And so to Caen, whose university campus boasts magnificent if vaguely fascist architecture at the top of a hill, commanding splendid views over the urban sprawl to the countryside beyond, and liberally decked with graffiti to bewilder future epigraphers.

OK EpiDoc, encode this.

The University Press of Caen having joined forces with two other departments to offer him a visiting fellowship, my distinguished and white-haired Danish colleague Matthew Driscoll is organising a series of seminars over the next few months, and I am here for the kick-off session, “TEI et encodage des sources”. About a dozen TEI fans are gathered in the Belvedere Room, which is vast and very cold but still affords delightful prospects (as they say).

First up was Julia Rogers, a local doctoral student, describing the online edition of Descartes on which she is working under the watchful tutelage of Pierre-Yves Buard, inter alia. No manuscript survives of Descartes’ works, and modern editors have consequently played fairly fast and loose with them: this impeccable electronic edition returns to the first printed editions as its basis, while using all the possibilities of digital editing. Text is captured and maintained collaboratively by up to 15 scholarly editors, using a customisation of XML Mind to enforce a simple P5-conformant protocol designed by Pierre-Yves (and built with Roma), allowing for such niceties as editorial notes, citations, tracking of quotations, mathematical formulae (currently done in TeX, though this will change), and so on. Elsewhere in the University a fairly sophisticated morphologically-aware search engine is being developed, so that the original text can be queried in modern French. The online edition will also integrate high-quality page images supplied by the BNF, compensating for the decision not to encode all features of the layout. Impeccable, as I said.

I was also impressed (as usual) by Sourcencyme, presented by Isabelle Draelants from Nancy and Catherine Jacquemard from Caen. This ongoing project combines a textual corpus of medieval encyclopaedias (about seven so far) with a sophisticated indexing system tracing the chains of reference and citation amongst them, extending in some cases into the 19th century. As a real hand-built hypertext, it is thus increasingly becoming the thing it represents: a complete encyclopaedia of medieval learning, endowed with tools for collaborative editing and annotation, and also with a specialist journal-like addition published by the ubiquitous revues.org. However, unless I misunderstood, a significant number of the texts it treats are owned by Brepols, which may pose access problems.

Next, before lunch, we were entertained by Vincent Olivet and Frederick Glorieux from the École nationale des Chartes, whose home-grown RelaxNG tools continue to advance in the general direction of TEI conformance. They have been working on a direct conversion from ODT to TEI, using the same principles as Sebastian Rahtz’s stylesheets but aiming at a more specific home-grown RelaxNG schema, now expressed (I think) using an ODD. This was all very satisfactory, as is the fact that the tools in their workshop continue to be readily accessible.
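For anyone wondering what a Roma-built customisation of the kind used in the Descartes edition actually looks like, it is essentially an ODD along the following lines – a purely illustrative sketch of mine (the identifier, module choices, and deletions are invented, not taken from the Caen project):

    <schemaSpec ident="tei_descartes_example" start="TEI">
      <moduleRef key="tei"/>
      <moduleRef key="header"/>
      <moduleRef key="core"/>
      <moduleRef key="textstructure"/>
      <!-- keep only the formula element from the figures module, for the TeX formulae -->
      <moduleRef key="figures" include="formula"/>
      <!-- remove an element the editors will never need -->
      <elementSpec ident="sp" mode="delete"/>
    </schemaSpec>

Run through Roma (or the TEI stylesheets), such an ODD yields RelaxNG, DTD, or XSD schemas together with project documentation, and it is a schema of this kind that an editor such as XML Mind can then enforce at the point of data capture.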

Lunch (a three-course affair involving some rather good salmon, and a chocolate mousse) was also highly satisfactory, and we reconvened much restored for an afternoon combining three short project presentations with set pieces from Matthew and from myself. Subhasree Pasupathy, from Caen, first of the three, described her use of the TEI’s mechanisms for representing textual variation in her thesis on the projects of the Abbé de Saint-Pierre. Thomas Lebarbe introduced us to the pleasingly heterodox digital Stendhal project at Grenoble, during which I wondered, not for the first time, how hard it could be to write an ODD corresponding to their home-grown DTD. Finally, Jorge Fins from Tours showed us how the Bibliothèques Virtuelles Humanistes at Tours is now using both XTF and PhiloLogic to search its corpus.

And so to the grand old man of TEI-based editing: not me, but Matthew Driscoll. He spoke in English but (as someone said to me afterwards) with such limpidity of discourse as to pose no problem (which sounds even better in French). Citing W. W. Greg’s distinction between “substantive” and “accidental” variation, he showed how TEI markup enables one to capture both, but display either, by the judicious tweaking of rather cunning stylesheets developed by Eric Haswell. He also talked about gaiji – the TEI module for handling non-standard characters and glyphs – news of the existence and facilities of which does not seem to have penetrated everyone’s consciousness to the extent that it probably should have by now. And finally, a good half of the material I had prepared for my own talk having been presented by previous speakers, I was able to close the day in a suitably forward-looking way by focussing mainly on the new concepts proposed for handling l’édition génétique (sourceDoc, mod, change, etc.) in TEI P5, which all seemed to go down quite well.
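For those who have not yet met these genetic-editing proposals, their general shape is something like the following – a minimal sketch of my own devising, not an extract from any real edition:

    <sourceDoc>
      <surface>
        <zone>
          <line>an opening phrase
            <mod type="substitution" change="#rev-1">
              <del>first thought</del>
              <add>second thought</add>
            </mod>
          </line>
        </zone>
      </surface>
    </sourceDoc>

The point is that the document, rather than the text, is what gets transcribed: surfaces and zones record the topography of the page, while each @change value points to a <change> element listed in the header, so that successive campaigns of revision can be distinguished and, with suitable stylesheets, replayed.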