
How to make a sow’s ear into a silk purse/Comment faire d’une buse un épervier

As the French proverb says, you can’t turn a buzzard into a sparrow-hawk. But here’s how I went about producing a TEI-conformant, minimally encoded edition of the 1611 King James Bible, starting from an all-singing, all-dancing, vastly over-complicated web site whose existence Martin Mueller had alerted me to last Friday in a plaintive posting to TEI-L.

The problem with web sites like this is that they expose only various HTML views of their underlying data, usually heavily infested with JavaScript, often deploying all sorts of nonstandard gizmos to make the pages look just so. I’ve no quarrel with their doing that, I just wish they’d make it a bit easier to get at the real data, i.e. the transcribed text and the pointers to the images that go with it. This site is a case in point: after spending some time looking at some of the gazillion HTML files a simple-minded wget -r starts hoovering up, I hypothesize that the data is actually stashed away somewhere inaccessible, and that all you see on the site is a bunch of variously configured static web pages which have been created from it. The good thing about that is that the static web pages were created by an automaton, and the process can therefore be reversed by another automaton. The bad thing is that (in this case at least) clearly different automata have been used to generate vaguely different versions of the same page. For example, the following three URLs all show subtly different versions of the same first page of the 1611 Bible: https://www.kingjamesbibleonline.org/1611_Genesis-Chapter-1/, https://www.kingjamesbibleonline.org/Genesis-Chapter-1_Original-1611-KJV/, and https://www.kingjamesbibleonline.org/Genesis_1_1611/

These are not simple redirects: each URL delivers a file in a different format. Well, I could have asked whoever runs this site what was going on, but that would have spoiled the fun, so I chose the format I liked best (the last of the three above) and wrote a script to generate requests for just those pages. I had to assume that the URLs would follow a consistent naming pattern: in this I was encouraged by a table of the names of the books of the Bible which I found in one of the chunks of embedded JavaScript, and which I moved into my little perl script. Running this script produced a bash script which grabbed each page using curl and saved it as an HTML file on my machine.
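The generated script was just a long list of curl calls, schematically like the following sketch (books.txt, holding one book name and chapter count per line, is an invention for illustration; the real list came from the embedded JavaScript):

#!/bin/bash
# hypothetical sketch of the generated grabber: books.txt is assumed to
# hold one "BookName chapterCount" pair per line, e.g. "Genesis 50"
mkdir -p webScraped
while read BOOK CHAPTERS; do
  for N in $(seq 1 "$CHAPTERS"); do
    # the third URL pattern from above: /BookName_Chapter_1611/
    curl -s "https://www.kingjamesbibleonline.org/${BOOK}_${N}_1611/" \
         -o "webScraped/${BOOK}_${N}.html"
  done
done < books.txt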
What went wrong with this process? Surprisingly little. I didn’t find out till Sunday lunchtime that not only had I completely overlooked the Apocryphal books, but the file naming conventions used for these were not quite the same as for the canonical chapters (“Judith”, for example, is actually spelled “Iudeth”); for the bulk of the 1300 or so chapters, though, my guesses about the URL to use were spot on. [This was Hubris. See my comment below.]
The real challenge was disentangling the useful data from the resulting screen-scraped mess of pottage. My usual approach here is to run the HTML I get through Dave Raggett’s utterly wonderful tidy utility to turn it into kosher XML, and then write an XSLT script to pick out the bits I want. In this case, however, the pottage I got was so messy that even tidy threw up its metaphorical hands and refused to proceed, so I had to concoct a little perl script to pre-process each file and extract just the useful part, before running it through tidy and processing the resulting XML into conformant TEI by means of saxon and a little XSLT stylesheet. Like this:

for f in webScraped/*.html; do
  FNAME=`basename $f .html`
  echo ${FNAME}
  perl extract.prl $f | \
  tidy -q --error-file tidyerrs --doctype omit --numeric-entities yes -asxml | \
  saxon -o:chaps/${FNAME}.xml -s:- -xsl:postGrab.xsl chapId=${FNAME}
done

And what went wrong with this process? Well, quite a lot, as you might expect, but nothing that couldn’t be fixed by tweaking and rerunning the process (this is why I always put the effort into writing bash scripts for jobs like this: they can be rerun till I get it right). For example, I was deriving an XML id for each chapter from the name of the input file, but some of the files had names beginning with digits. My plan was to put each chapter into a separate XML file and then use XInclude to munge them together into a TEI document (to maximize the flexibility of said mungeing) — but for this to work I had to get namespace declarations in the right place, and as anyone who has used them knows, anything involving namespace declarations never works the first time. To say nothing of unexpected inconsistencies in the data, though these were in fact very few: so far, the only thing that has caught me out was a handful of chunks which looked like biblical data in the HTML markup but were not. Fortunately these were detectable because they wound up with an invalid @n attribute in the XML output, and could therefore be weeded out after the event.
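For the record, the driver file is roughly this shape (a sketch, not my actual file: paths and ids are invented, and the real thing has one xi:include per chapter file):

<TEI xmlns="http://www.tei-c.org/ns/1.0"
     xmlns:xi="http://www.w3.org/2001/XInclude">
 <teiHeader>
  <!-- metadata for the edition as a whole -->
 </teiHeader>
 <text>
  <body>
   <div type="book" xml:id="Genesis">
    <xi:include href="chaps/Genesis_1.xml"/>
    <xi:include href="chaps/Genesis_2.xml"/>
    <!-- ... and so on, chapter by chapter ... -->
   </div>
   <!-- ... one such div per book ... -->
  </body>
 </text>
</TEI>

The namespace gotcha referred to above: each included chapter file must declare the TEI namespace on its own root element, otherwise the included elements end up in no namespace and validation fails.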
While all this was going on, I wrote my driver file and did some further sniffing around on the interwebs for sources from which to complete the work. The 1611 Bible has a title page and loads of prefatory matter not all of which is available on www.kingjamesbibleonline.org (for the record, I found the title page on Wikimedia, and the rest of the front matter at several places, but in page image form only).
How should the Bible be modelled as a TEI document? On my first view, each verse is an <ab>, each chapter a <div>, each book a <text>, and each testament a <group>. This made sense to me, but then I realised that processing would be simpler if each book were instead regarded as a <div> of a different type; hence that is what the current versions of both driver file and ODD say. Considerations of where to put the genealogies, tables for Easter, almanacs, etc., which arguably do not belong in the front matter, may lead me to review that decision, so I have left <group> lurking in my ODD. It should be easy to produce a separate driver file representing that alternative view.
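Schematically, the two candidate structures compare like this (content invented; only the skeleton matters):

<!-- first thought: each book a <text>, each testament a <group> -->
<group>
 <text n="Genesis">
  <body>
   <div type="chapter" n="1">
    <ab n="1">In the beginning ...</ab>
   </div>
  </body>
 </text>
 <!-- ... more books ... -->
</group>

<!-- current choice: each book a typed <div> -->
<body>
 <div type="book" n="Genesis">
  <div type="chapter" n="1">
   <ab n="1">In the beginning ...</ab>
  </div>
 </div>
 <!-- ... more books ... -->
</body>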

A more pressing need, however, is to sort out the placement of the page image links: in the HTML source, and therefore in my XML, links to the page images for the whole of a chapter are given as a sequence at the start of the transcribed text for that chapter, rather than being placed at the point in the transcript where each page begins. In some cases that point is identifiable because the transcribers have chosen to include part of the running page header in the transcription there (it winds up as a <fw> element); but in many it isn’t. Most chapters occupy only a few pages, so sorting this out would not be a major effort, just a rather tedious, and not easily automatable, one.
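In other words, what the scrape gives me is the first of these, and what I want is the second (@facs values invented for illustration):

<!-- as scraped: all page links clumped at the chapter start -->
<div type="chapter" n="1">
 <pb facs="page001.jpg"/>
 <pb facs="page002.jpg"/>
 <ab n="1">In the beginning ...</ab>
 <!-- ... verses running on over the page break ... -->
</div>

<!-- as wanted: each page break located where it occurs -->
<div type="chapter" n="1">
 <pb facs="page001.jpg"/>
 <ab n="1">In the beginning ...</ab>
 <!-- ... -->
 <pb facs="page002.jpg"/>
 <!-- ... rest of the chapter ... -->
</div>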
So what have we learned? Actually it’s not that hard to make a usable TEI document even from something as idiosyncratic as this web site. Which is encouraging. Let me also hasten to add that I mean no disrespect for the hard work (still less for the generosity) which must have gone into the production of that website: everyone is entitled to their own design decisions. I am pleased to express my thanks to the anonymous people who have put in that effort, and decided to share its fruits even with the godless multitude. As the 1611 translators say in their preface ‘Zeale to promote the common good, whether it be by devising any thing our selves, or revising that which hath bene laboured by others, deserveth certainly much respect and esteeme’.

Encoding the history of the OuLiPo

At the beginning of February, I had the pleasure of co-organising (with Sebastian Rahtz, Camille Bloomfield, and Hélène Campaignolle-Catel) a workshop on data capture, the second event in the “Algoritm” seminar series which forms part of an interesting ANR-funded project called DifdePo. The project is a collaboration between the BnF and Écritures de modernité, a research unit located at Paris III, and its objectives include the creation of a TEI-based digital archive of the papers of the OuLiPo, which are currently stashed away in boxes at the Bibliothèque Nationale’s Arsenal depository. The papers include letters, photos, press cuttings, postcards, drafts, and notes of all sorts, but for the purpose of this exercise we decided to focus on the records of the OuLiPo’s regular meetings, which began back in the early 1960s. The archive has already been catalogued, and work is in hand to produce digital images of a sizeable proportion of it. The object of our workshop was to explore ways of transcribing these documents, given that the project has very little funding and will therefore have to rely on the good will of volunteer transcribers, enthused by things OuLiPien but maybe a little deficient in TEI knowledge.

About a dozen people participated, most of them surviving to the end of the day. We began by asking them to transcribe a page from a small collection of pre-selected digital page images, using Word. (I freely admit to a degree of smugness on discovering at the last minute that the teaching room was initially equipped only with old-style doc-producing Word, which had to be upgraded to a more modern docx-producing version at rather short notice by the unflappable Joël.) This exercise demonstrated, as we had hoped, quite a bit of variation in what exactly should be transferred from the image to the text, and on what editorial principles, thus motivating a useful initial discussion about the principles and praxis of text encoding. One of the participants proposed (unprompted) the principle of “fidelity” to the source, while another argued repeatedly for “capturing the meaning”.

Once lulled into a false sense of security by this exercise, participants were exposed to the weirdness of an XML editing environment, using everyone’s favourite XML editor, oXygen, and my usual tutorial: create a document, learn how to tag parts of it, learn how to manipulate the structure, etc. We then offered them a more demanding workflow: first capturing a document in Word using a Word template which defined styles to highlight a number of significant features (headings, list items, etc., but also personal names and the like); secondly, converting this to a TEI form using OxGarage and a specialised profile; thirdly, looking at (and possibly modifying) that in oXygen; and then converting it back to Word to confirm the feasibility of round-tripping. Sebastian Rahtz of Oxford (whom God preserve) invested quite a bit of pre-workshop effort in setting up the necessary infrastructure for this, and making sure that it all worked correctly on the day. He also made it possible for us to inflict on the encoders a third alternative approach, based on an experimental installation of Ben Brumfield’s “From The Page” crowd-sourcing prototype software. I had expected this to be everyone’s favourite, but (maybe because we had already by then sensitized them to the delights of structural markup) our encoders seemed to find that the simplicity of its interface made it hard to take seriously. We had prepared tutorial scripts for each of the three approaches (TEI source code available from my tei-fr repository, if you’re interested), so I was able to spend some of the time wandering about taking photos of hard-working encoders.
By the end of the day, everyone had tried all three approaches, and everyone had produced a couple of TEI XML files conforming to a simple transcription schema I had prepared earlier. We collected them all up and Sebastian showed how our pretend archive could be displayed on a web page, complete with corresponding page images, and vocabulary lists, and personography. This was (of course) all done with a straightforward customization of the standard TEI-HTML stylesheets, now available in the Stylesheet package as part of the Difdepo profile.
Conclusions? We still don’t really know whether our TEI-XML transcriptions are aiming for “fidelity” or for “meaning”, but we have at least demonstrated the possibility of either (or both). And we do know that the participants all seemed more enthusiastic about the customized-Word-template approach than about either raw oXygen or the (possibly over-cooked) From The Page. We didn’t explore the idea of a pre-customised oXygen author-mode interface, which might well repay the necessary investment of effort if there is a lot of metadata to be entered, for example.

 

Joël and I sample the oXygen

I take the liberty of listing the names of the registered participants, for their greater glory:

  • Camille Bloomfield
  • Hélène Campaignolle-Catel
  • Paula Klein (Projet DifdePo)
  • Chris Clarke (Projet DifdePo)
  • Jeanne Devautour
  • Julie Bernard (Poitiers)
  • Marie Bonnot
  • Marianne di Benedetto (ENS Lyon)
  • Guillermo Hector
  • Pradeep Claassen
  • Louise Kari-Merau
  • Leïla Berlot
  • Barbara Servant (Univ Rennes II)
  • Clara de Reigniac
  • Gabrielle Bruzzone (Poitiers)
  • Claire Leroy

All affiliated with Paris III, unless otherwise indicated.

Lodelisation

Lodel (Logiciel d’édition électronique) is the name of the CMS which drives OpenEdition, one of Europe’s leading open access publishers. Back in 2009, Marin Dacos announced at the TEI council meeting in Lyon that Lodel would start using a TEI schema for its internal processing, while continuing to accept manuscripts for publication in any of the commonly used office document formats. Documents would be worked on in ODT, and automatically converted to a simple TEI schema for internal processing, from which they would be converted for publication on the web and on paper.

Documentation subsequently appeared on how to prepare documents in TEI for processing by Lodel (in French at http://lodel.org/701 and also in English). An XSD schema for it is documented at http://lodel.org/715.

This blog entry summarizes what I needed to do to a real TEI document (specifically my forthcoming title What is the TEI?) to get it to work with Lodel: the full story is implicit in an XSLT stylesheet I wrote for the purpose. Actually, when I say ‘I’, I should make clear that the conversion was in fact handled by the nice people at OpenEdition, who were remarkably patient with my eccentric use of TEI, and my even more eccentric wish to generate a Lodel document directly with as little manual intervention as possible. My thanks to Jean-François Rivière and Martin Dulong from OpenEdition for their helpfulness, both in steering my TEI manuscript through the process, and in responding politely to my inane questions about what on earth was wrong with my lovely tagging.

The following list shows (in no particular order) the chief changes I found necessary, some of them a bit unexpected.

  1. As might be expected, the Lodel schema doesn’t have any of the following semantic elements, which I have found useful when marking up technical documents: <gi>, <att>, <ident>, <val>. More surprisingly perhaps, it doesn’t seem to have <foreign>, <emph>, <soCalled>, <mentioned>, <q>, or <quote> either. My stylesheet turned each of them into a <hi> with a @rendition pointer (<hi rendition="#gi"> and so on; see the sketch at the end of this post), and also generated a <rendition> element with an appropriate default style for each.
  2. The Lodel schema doesn’t allow lists or quotes to be contained by paragraphs. This is a generic HTML limitation, if I understand aright, but that doesn’t make it any less annoying. Call me verbose if you will, but I often write a single para with a bit of prose, followed by a list, a bit more prose, and another list. My stylesheet had to do some clever fiddling to deal with this (tx SPQR) but this is one case where I think Lodel should be a bit more broad minded.
  3. In fact, the Lodel schema only knows about two kinds of list: type=ordered, which are numbered, and type=unordered, which are not. Gloss lists are not supported, so my stylesheet had to tweak each <label> element into a <hi rend="#label"> child at the start of an unordered list item (but with a @rendition of “gloss”).
  4. My TEI documents can have lots of XML examples, which are easier to read if they are wrapped in a differently-namespaced <egXML> container. The Lodel schema requires use of the <code> element, containing either a CDATA marked section inside it to preserve the layout, or XML tagging escaped by entity references. The only problem with this is that <code> is a phrase level element, not a block, which means that some hand tweaking is needed at the Lodel end.
  5. Lodel is intended for journal articles and manages each of them separately as a distinct TEI document. Chapters of a book have to be treated in the same way, which seems a bit odd — for example, each chapter gets its own TEI header. My stylesheet splits things up rather crudely, assuming that each top level <div> within the body is intended to be a separate document.
  6. Lodel insists on having an explicit indication of the nesting level of each subdivision, using (bizarrely) the @subtype attribute on <div> with values level1 level2 etc. My stylesheet grits its teeth and generates these automatically, but I think this is one design aspect of Lodel which might merit a second thought.
  7. The Lodel schema doesn’t allow headings within anything except sections, so you cannot provide them for lists, tables, or figures without some fiddling about.
  8. Lodel doesn’t number headings for you. Even if you supply a number for a section (using the @n attribute on a <div>, as recommended in the Guidelines), Lodel will not use it. My stylesheet does nothing about this: I just decided to live without numbered sections.
  9. Lodel handles cross-references using <ref> much as you’d expect, provided that the value of @target is a complete URL, i.e. a link outside the current document. This means you cannot cross-reference other sections of the document being encoded, which seems rather an odd restriction. Put together with the foregoing lack of automatic section numbering, this can make for quite a lot of rewriting.
  10. Lodel knows about <bibl>, but not <biblStruct> or <biblFull>. Up to a point. Most of the semantic elements defined for the content of bibliographic elements (<publisher>, <biblScope>, etc.) are allowed, but it doesn’t actually do anything with them. To produce a correctly formatted bibliography, such encodings have to be converted to a fully styled version, following the requirements of the Open Edition style guide. I wrote a stylesheet to do (most of) this for one small bibliography: in the general case something much more complicated would be necessary.

That last caveat is of course true of all the rest: I’ve only tested this process properly on one text, albeit a reasonably large one, and only on a born-digital document. If you’re thinking of authoring documents in TEI though, chances are you won’t do it significantly differently from me, so some of the issues I encountered will affect you too. And, for the avoidance of doubt, let me repeat that none of this is meant to discourage anyone from using Lodel!
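For concreteness, here is the general shape of the templates that items 1 and 3 above imply. This is a sketch of mine, not the actual conversion stylesheet: the rendition identifiers are invented, and in real life these templates would sit alongside an identity template copying everything else unchanged.

<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:tei="http://www.tei-c.org/ns/1.0"
                xmlns="http://www.tei-c.org/ns/1.0"
                exclude-result-prefixes="tei"
                version="2.0">

 <!-- item 1: demote semantic elements Lodel lacks to styled <hi>s,
      reusing the element name as the rendition pointer -->
 <xsl:template match="tei:gi | tei:att | tei:ident | tei:val
                    | tei:foreign | tei:emph | tei:soCalled
                    | tei:mentioned | tei:q | tei:quote">
  <hi rendition="#{local-name()}">
   <xsl:apply-templates/>
  </hi>
 </xsl:template>

 <!-- item 3: recast a gloss list as an unordered list whose items
      open with a styled copy of their former label -->
 <xsl:template match="tei:list[@type='gloss']">
  <list type="unordered" rendition="#gloss">
   <xsl:for-each select="tei:item">
    <item>
     <hi rend="#label">
      <xsl:value-of select="preceding-sibling::tei:label[1]"/>
     </hi>
     <xsl:text> </xsl:text>
     <xsl:apply-templates/>
    </item>
   </xsl:for-each>
  </list>
 </xsl:template>

</xsl:stylesheet>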

Interoperability of TEI projects : apotheosis or chimera?

This was the title (it sounds better in French) of the closing talk I gave at an interesting workshop last week. A previous COST-funded meeting in Krakow had brought together Czech, French, Catalan, German, and Polish teams working on several different dictionaries of medieval Latin, to elaborate the idea that maybe they could make their various lexica interoperable if only they could agree on a common format, for which the TEI seemed the most plausible candidate. Susanna Allés, the energetic organizer of this workshop, got funding for it from several sources, notably the ALLC (or European Association for Digital Humanities, as it now prefers to be known). She also seems to have hit on the wheeze of inviting a number of luminaries to make the case for the TEI dictionary tagset (notably L. Romary, F. Glorieux, P. Banski); alas, in the event only I turned out to be available. Which was useful for me, since preparing for the workshop meant rediscovering all sorts of dusty and neglected (by me, though not by others) parts of the Guidelines.
The workshop was held in the CSIC (Spanish for CNRS) Institució Milà i Fontanals, which occupies a rather grand building conveniently located in the Raval, a picturesque if slightly seedy district of Barcelona to the left of the Ramblas, and we were all accommodated next door in the splendidly-named Investigators’ Residence on Carrer de l’Hospital. Barcelona is not a place for those uninterested in food and drink; and we were very well fed, in large quantities, if at strange hours and (on one occasion) after a lengthy walk up town through an unexpected tropical-style deluge. The ravioli stuffed with pears and cheese offered by the resto “En Ville” for lunch was particularly memorable, invidious though it is to single out this one occasion.

More intellectual fare was also on offer, of course. It is always a pleasure to arrive amongst a group of specialists personally unknown to me, from a domain of which I am more or less totally ignorant, and to find that word of the TEI has already reached them, often in a far from superficial way. So I was made very happy indeed to hear Sabine Thuillier (currently working in Madrid on the Diccionario Griego-Español, but ‘formed’, as they say, at the Ecole Nationale des Chartes) evangelise for the TEI as an international open source community, and impressed by the way she is implementing it in a workflow which, though its editors remain obstinately wedded to WordPerfect, is still determined to produce a respectable TEI P5 version.
Similarly, the team responsible for the eLexicon Mediae Latinitatis Polonorum, led by Krzysztof Nowak from Krakow, while maintaining a proper scepticism about some aspects of the TEI’s conceptual model, was clearly persuaded of its virtues as an open standard, notably as evidenced by both the amount of open source software (they mentioned XTF, PhiloLogic, and TXM) and the number of comparable projects (they mentioned the Anglo-Norman Dictionary, the Glossarium Du Cange, and several others) using TEI. Their workflow starts with an OCR phase, since they are working from an extensive library of printed source texts, and then uses LibreOffice and a customised library of styles to enhance the result to the point where it can be automatically converted to TEI, thus (apparently independently) following the same path as Lodel, OxGarage, Agora, and no doubt others: combining the user-friendliness of a word-processing interface with the rigour of a TEI-structured maintenance format.

Catalonia has ambitions (as posters everywhere proclaim) to become politically independent of Spain, and certainly its linguistic independence is a well-established fact. As a confirmed non-speaker of either Spanish or Catalan (nor of Basque, Galician, or Portuguese for that matter), I regretfully let the interventions in those languages wash over me, and thus missed out, notably, on Jose Manuel de Bustamante’s insights on the relation between textual corpus and dictionary. I did, however, manage to understand the German colleagues present, since they made the effort to speak in English or French: Alexandra Gorbrecht from the Trier Centre for Digital Humanities, for example, gave a brief overview of the dozen or so dictionaries put together online at woerterbuchnetz.de with a well-designed query interface. Allegedly all of these dictionaries are locally stored in TEI XML, but as this is not currently exposed one cannot tell how consistently it has been done. None of the other major TEI dictionary projects in Germany that I am aware of was represented here: presumably because none of them is specifically concerned with Mediaeval Latin. I had to console myself for the absence of Werner Wegstein from Würzburg by stealing one of his examples for my own talk.
Bruno Bon and Renaud Alexandre from the IRHT in Paris had the advantage, if advantage it be, of being able to develop their proposals for an over-arching Novum Glossarium Mediae Latinitatis on the basis of the already existing complete Glossarium of Du Cange, which has been freely available in TEI-inspired XML markup for some time now, thanks to the work of Frédéric Glorieux. The idea seems to be to develop a set of proposals able to express the (not inconsiderable) variation in practice amongst these and others working on different lexica of medieval Latin in Europe, and thus create what (inevitably) Bruno suggested would be called NGML (the Novum Glossarium Markup Language). As a first step they have set up an exploratory multilingual wiki with some nice visualisation tools, based on a few sample entries taken from each of the five different lexical projects (specifically, those in Barcelona, Prague, Krakow, Munich and Paris), and are inviting more. This could be fun, though I think expressing NGML as a real TEI ODD would be more of a challenge.

Susanna Allés and her graduate student Frédérique Laugrost (on secondment from the Ecole Nationale des Chartes) talked about the specific problems they faced when starting to apply the TEI to the text of their dictionary: the Glossarium Mediae Latinitatis Cataloniae. Many of these are familiar, of course: notably those which derive directly from the wish to preserve the punctuation and use of abbreviation which characterize such sources and at the same time model the logical structure which they determine. Some of these problems do however point to aspects of the current TEI dictionary model which could be improved.

I started making a short list of such points during the workshop, but sadly did not get very far:

  • too many of the proposed TEI dictionary elements relate only to modern lexicographic practice. Deciding which ones to filter out to make a kind of TEI Lite for dictionaries would be very desirable.
  • an element for “translated segment” is desired, even if it is just syntactic sugar for a segment carrying a value for @xml:lang other than that of the surrounding text (see the sketch after this list)
  • some dictionaries have entries which are large enough to have multiple paragraphs but there is no place for <p> in any model.entryLike element
  • when a term is identical in two or more languages, can @xml:lang take more than one value? (I confidently said it could, but I was wrong: it takes exactly one language tag)
  • how should you mark a word which is clearly readable in the text when its meaning is entirely uncertain? (I suggested <orig>, but there must be better ideas)
  • The typology currently used for <form> combines categories from entirely dissimilar taxonomies, e.g. @type=lemma is an entirely different kind of thing from @type=compound. Likewise, the typology one might want to use for <sense> should be more to do with the way the sense has evolved. To both these points I said (in my best French) “Bof”. Or, more precisely: it’s only by receiving proper input from specialists in the field — those best able to define more appropriate typologies — that the TEI progresses…
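On the “translated segment” point, the spelled-out encoding available today uses <cit type="translation">, as in this invented scrap of an entry (purely illustrative, not taken from any of the lexica discussed):

<entry xml:lang="la">
 <form type="lemma"><orth>buteo</orth></form>
 <sense>
  <def>a kind of hawk; a buzzard</def>
  <cit type="translation" xml:lang="en">
   <quote>buzzard</quote>
  </cit>
 </sense>
</entry>

The proposed element would simply be sugar for the <cit>/<quote> pair.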

I’m hoping that the undoubted success of this workshop will encourage the participants to form a SIG on the subject or (as Piotr Banski had previously suggested to several of them) to make an active contribution to the existing LingSig. Plenty of scope for very interesting work to come, not to mention the opportunity of returning to Barcelona for the paella which I somehow failed to find time for on this occasion.

TEI++ : une formation avancée

This summer I was invited by the CAHIER consortium, in partnership with the Corpus Écrits and IRCOM consortia, to organise a four-day “advanced TEI” workshop. I proposed a format dividing each topic into two parts (presentation, then practical work), organised along three axes:

  1. modelling of the resources and selection of the significant features
  2. TEI encoding and explicit markup of the structures so modelled
  3. exploitation and analysis of the structured resources

I had also proposed sharing the teaching with a few French experts. The workshop was held at the Institut de Linguistique Française in Paris, from 19 to 22 November 2012.

Here, in summary, is what actually happened…

Day 1

Proceedings began on the fifth floor of the ILF, in a nice light room not quite big enough for the 18 or so participants. This was not altogether bad, since the consequent huddling together encouraged the emergence of a cohesive group identity, which I also tried to encourage by getting the participants to place themselves on an improvised graph along two intersecting axes: literature vs linguistics, and researchers vs support staff. Most people turned out to be in the bottom right quadrant, i.e. ingénieur + linguistique, but there was also a smattering in the littéraire + scientifique box, to say nothing of two sociologues who insisted on positioning themselves in the middle of the littéraire vs linguistique axis. The rest of this first introductory session was given over to a rapid review of some fundamentals of encoding, and a sampling of the websites of half a dozen real TEI projects around the world, which might have gone better had I rehearsed it properly, but which got the message across that quite a few very different projects are doing seriously cool stuff with the TEI.

After coffee, I introduced them very quickly to a spot of data analysis, using as vehicle the celebrated postcard archive of M. Marcel Virgolos, and Lauranne then took over for a refresher on using oXygen. The participants marked up a postcard or two, and reviewed commonly used TEI tags for each of some pre-selected texts: a French novel, a poem, and a play. Most completed all of these, mastering the key features of the oXygen XML editor quite rapidly, but I think we did not allow enough time for this session, given the mix of abilities present.

Lunch, in the form of a large cardboard box containing a “plateau” of cold cuts, salads, bread roll, plastic cutlery, etc., duly appeared and was despatched. Thus strengthened, I embarked on an all-singing, all-dancing overview of all the TEI modules and what you can do with some of them. That took about an hour, but helped motivate Lauranne’s traditional exercise, which followed it, on using Roma to make a schema by reduction from TEI-ALL. By the end of the day, everyone seemed quite comfortable with the idea of personnalisation de schéma, and reasonably convinced that they might find whatever they wanted to mark up somewhere, somehow, in the TEI.

Day 2

On this and subsequent days we were displaced to a much better (because bigger) teaching room. I began the day in seriously magisterial mode by explaining (many of) the components of the ODD language, and why you might care to know about them. This was quite punishing, both for me and for some of the less technically-minded participants, but no-one visibly fell asleep. For the subsequent practical, Lauranne had prepared a script in which participants submitted their own texts to analysis by OddByExample, generating a personalised ODD. The majority, of course, had not come with their own texts, or had texts not in TEI P5, so they ran the exercise on a rather inadequate sample of the Virgolos corpus instead. With a bit more prep, I think this could be a really fun exercise and an excellent way of getting people to learn ODD properly. It also revealed a bug in OddByExample, which Sebastian graciously fixed overnight.

Lunch being cardboard plateaux once more, I went for a stroll round the nearby Parc de Montsouris to see some of the grey autumnal Paris daylight while it was available. I then droned on for well over an hour on the subject of the TEI Header, which I am now selling as “metadata for the rest of us”. They made me do it. The exercise we adopted was the French version of creating a manuscript description for the W. Owen manuscript, last seen in Berne. This is quite useful for the purpose, especially in combination with the following exercise on marking up a transcription of that manuscript; time permitting, however, I would have preferred to use a different, French, manuscript for both. If I had one.

Day 3

Wednesday I had carefully billed as the journée des guest stars, since the idea was to make other celebrated French TEI enthusiasts share in the work. So we began with a presentation by Alexandre Gefen about TEI recommendations for dealing with named entities and their names. Since the room contained more than a few French linguists, this gave rise immediately to a heated debate on the philosophical basis and nature of nominal reference. The exercise in marking up names was a little under-prepared, and one of the students insisted on asking the Emperor’s New Clothes question (why bother?), which was answered by another participant citing (at length, and with enthusiasm) the work of Nicole Dufournaud inter alia, which was nice. Since Alexandre had to leave early, I filled the gap by moving my brief overview of tool options up, giving a plug for Sebastian’s stylesheets, and letting them experiment with OxGarage, which they loved.

For lunch we went to the brasserie down the road, which was a much, much better idea than the plateaux. Everyone got very jolly and there was a fair amount of shouting. Our second invited expert of the day was Bertrand Gaiffe, from ATILF, who delivered an excellent pair of lectures on the encoding of oral and of linguistic data respectively, also involving a fair amount of interaction and discussion, but not much actual tagging, since TEI interfaces for the appropriate tools remain elusive, best efforts of the LingSig notwithstanding.

Day 4

The final day began with a presentation on the various TEI orthodoxies concerning the editing of primary resources, given by our third invited expert: Alexei Lavrentev from ICAR. Participants were then offered the choice of doing either the reverse transcription exercise or the visual encoding exercise from Berne; both options were taken up, though I was too busy sorting out the website to see how far they got with either.

After another nice brasserie lunch (roast duck), I spent about 15 minutes showing how to use TEI Boilerplate, which went down remarkably well: “Génial!” they cried, as they saw all that tricky encoding in John’s demo file being rendered beautifully by Safari (France is still largely a land of the Mac). The rest of the afternoon was devoted to a more ambitious piece of TEI-savvy software: txm, from the textometrie project at Lyon. Alexei showed us what it was, and demonstrated how to make it sit up and do tricks with the Graal and Brown corpora, which participants had pre-installed. He also showed it working with a selection of literary texts prepared for use throughout the workshop.

Verdict

I think this workshop worked much better than it deserved to (I always think that). All the participants seemed very happy at the end, and several of them said they had learned more than they expected to. I think the organisation of the programme made good sense, and the balance of exposition and exercise was approximately right, though we probably didn’t do enough to make the practicals consistent and relevant. A few parts suffered from lack of preparation, and I think we could have done more to get a single case study working throughout the course of the four days, in addition to the various more specialised materials we introduced. But next time we’ll definitely get everything right. Thanks are due to the participants, the organisers, and all my co-formateurs, especially Lauranne for calming me down at moments of high anxiety.
All the materials used in the workshop are available in PDF starting from http://meet.tge-adonis.fr/sites/default/files/2012-11-initial.pdf. Dedicated TEI hackers may also be interested in the XML sources of the presentations which are available from my svn repository at http://code.google.com/p/tei-fr/source/browse/#svn%2Ftrunk%2FTalks%2F2012-11-paris

Here we go again

It’s ridiculously early for a Sunday morning, but the only plausible train to catch from Oxford if you want to connect with a 1220 Eurostar leaves at 0940. So here I am wondering, along with many others, where on earth is said train. We can see it in the sidings north of Oxford station, but it’s not moving towards us and the announcements are not reassuring. Maybe the stopping service to Ealing Broadway is a better bet: certainly standing around fretting on Oxford station is not pleasant. Some twenty bucolic minutes later, I detrain at Didcot in the hope of something better: which does indeed turn up in the shape of the 0940, now proudly running only 20 minutes late. I spend my uneventful trundle through the morning sunshine trying to work out what I have done to incapacitate tei-emacs on my laptop. Then an unspeakably horrible Circle line train bears me off to St Pancras, and the comparatively civilised space of the Eurostar lounge, where I discover that in the general confusion of getting myself ready for this week’s set of French gigs I have failed to check something crucial into my nice new subversion repository. Ah well: no time to agonise over that, it’s time to get my disordered thoughts on the history of the TEI into some sort of plausible order, and to construct an appropriate French narrative around same. Which keeps me happily occupied for the rest of the day: out of London, across the wilds of Kent, under the Channel, through Picardy into Paris, my nose barely strays a few inches from my laptop screen, tappety tappety tap, except for a few minutes’ dégustation (I use the term advisedly) of a Eurostar snack lunch, and a few dirty looks in the general direction of some fellow passengers yapping away noisily behind me. Even nastier, but mercifully not noticeably longer than the Circle line, is the hop by RER B from the Gare du Nord to the Gare de Lyon, where I resume work on board a nice peaceful TGV all the way to Lyon. With such good effect that my talk for tomorrow is all ready to go, even before I arrive at Perrache. Such virtue warrants dinner, even though it’s now a little late, so I stride purposefully across the Place Carnot to the brasserie Victor Hugo, order a hamburger à cheval (nothing to do with horses: this is a burger with a fried egg on top), frites, et un pot de côte, and phone Marjorie to reassure her that I am here and ready to boogie, before retiring to bed.

One hasty breakfast later, Dominique Roux and I set off in search of one of the many fine universities in which Lyon rejoices, more exactly the vaulted basement dungeon in which Marjorie’s séminaire is taking place. The morning was supposed to be a double act, but since Paul Spence couldn’t make it, his colleague Guilhem Pépin gave us instead an interesting lecture about medieval history before showing us some of the Gascon Rolls project. Pépin is a French (or more properly, Gascon) historian actually working at Oxford in the History faculty. There was a time when I might have huffed and puffed a bit about Oxford academics who take their TEI digital projects off to King’s College instead of using the local facilities, but these days I have become placid and boring. Anyway, Pépin was a good speaker and clearly an agreeable person to work with; and the material presented all sorts of interesting possibilities for analysis once marked up, even if he was almost aggressively reluctant to claim any expertise in the application of markup. Not for the first time, I wonder why it is perfectly acceptable for academics to profess ignorance of one technology that is essential to their work, whereas ignorance of others (say, bibliography) would seriously damage their career prospects. And then off we all went for a decent lunch, this being France: dos de colin avec ses pommes de terre lyonnaises, if I remember correctly. After which I gave my talk, which seemed (to me at least) to go remarkably well for a first outing: I suspect I will give it again, at least as long as people go on asking me to explain where on earth the TEI came from, and why it has not sunk without trace. It is a good story, with a good moral, I think. After a coffee break, Dominique Roux from the Presses Universitaires de Caen gave a thorough overview of their projects and preoccupations, presenting a variety of cool projects, a TEI-based workflow, some wise remarks about the use of TEI in commercial publishing, and much else besides. It’s a pity he came at the end of a long day with perhaps a touch too much Gasconnade in it, since it would have been good to discuss several of the ideas he presented with the masters students present — who had all been assiduously taking notes earlier in the day, but were clearly flagging somewhat by the end. I was sorry to have to rush off in time to catch the train to my next gig, in Tours.

Preparation for said next gig took up quite a bit of the journey, quelle surprise; indeed I don’t think I looked out of the window once. And yes, it is possible to get from Lyon to Tours without passing through Paris, if only once or twice a day. The TGV concerned stops at a place I have never heard of called Massy, and then at St Pierre des Corps, before zooming on to Caen. St Pierre des Corps is a dismal little junction from which a variety of trains shuttle into the architectural splendour of Tours central, about 5 minutes away. Even when entirely enclosed in scaffolding as part of its restoration as a patrimonial monument, Tours station is an uplifting spectacle late at night, when everything around it is closed except for Macdonalds. Equally good for the soul is the Grand Hotel of Tours, which has retained and lovingly refurbished its charming 1930s decor, all peacock feathers and wooden panelling and geometric patterns. Last time I was here in December the wifi was misbehaving, but everything seems to be fine now, and the breakfast is excellent. Next morning, it’s a quick trot across town to the Centre d’Études Supérieures de la Renaissance to give my contribution to their Master 2 professionnalisant Patrimoine écrit et édition numérique : initiation à l’encodage des textes patrimoniaux. This is the third or fourth year I have done this, so you would think I had it sorted by now. My contribution this year consisted of a ninety-minute lecture on manuscript encoding (much revised to recognise the existence of the new <sourceDoc> element, as of release 2.0 of TEI P5 — this was the talk I thought I had mislaid, but hadn’t); followed by another 90 minutes on Roma and schemas and suchlike mysteries, using the Virgolos project as a case study (called TEI à la cartes, geddit?); and finally another 90 minutes attempting to explain XSLT pour les nuls. This last was a rather more quixotic and under-prepared venture: although novices quite quickly grasp the basic ideas and usefulness of XPath, grasping exactly what an XSL template is and why you might want one is rather more of a challenge. But the punters seemed content to be slightly baffled at the end of a long and varied day, and I am sure that the local team will clarify any residual bewilderment next week. Dinner was at the Odéon, another piece of lovingly restored 1930s kitsch, where the food was excellent (I had the rognons, since you ask), and Marie-Luce and I discussed the notion of a week-long residential formation approfondie sur la TEI under the auspices of the CAHIER consortium, plus anyone else who might like to play.

Tours is in the process of acquiring a tramway, which means that large amounts of it are being dug up and knocked down, notably near the railway station: I observed this with interest over breakfast, before hastening off, rather late, for a short consultation with the CESR team about how to autogenerate an ODD from their Epistemon corpus (sort of difficult if you don’t have Saxon installed), and some discussion about how best to proceed with their ongoing project of revising the project’s encoding manual. The plan is not only to update but also to generalise this manual for use by other similar projects, which would certainly be useful: there isn’t a lot like it in French, aside from the BFM manual. However, I have a train to catch this morning, so I have to sprint back through the marché aux fleurs, looking neither to right nor left, regretfully, for there is much to see, resisting the temptation to stop to buy fresh garlic or dried flowers or a sandwich for the journey, or even to take some photos of the pavements, now decorated with a rich and colourful assortment of flowering bedding-out plants. Tours is a charming place with much to recommend it. And so, off to Paris, where I have a couple of crucial meetings to attend: crucial enough to propel me into an irrational anxiety about the progress of my train, which suddenly decides to slow down and stop in the middle of nowhere more frequently than is decent, even for an Intercité. In the event, though, we pull into Austerlitz ten minutes early, allowing me a pleasantly-paced walk through the Jardin des Plantes and up the hill to the TGE Adonis office in good time for my appointment with my directeur, Jean-Luc Pinol. We discuss the coming year’s work plan for MEET; this being satisfactorily resolved, Ariane agrees to release a PUMA forthwith (don’t ask)… I spend the afternoon catching up on the gossip with TGE colleagues before checking into this week’s hotel, which is conveniently located opposite a nice bar and round the corner from a rather excellent brasserie. Here I dine, expensively but deliciously, on foie de veau, patates, et encore un pot de rhône. It’s tiring work, all this gluttony, you know.

Next morning, I rise at a civilised hour, and catch up on my commitments at the TGE for most of the day, taking however an extensive lunch break to discuss, with Mathieu Andro from the Bibliothèque Ste Geneviève, a wondrous new digital library project which has apparently secured 1.7 million euros of local funding to finance a deposit archive for the digitized outputs of a select bunch of Parisian libraries, and wants to use the TEI. Did I hear that right? The lunch was pretty good too. Finally, I put in place some hasty arrangements for another meeting in Paris next week, and then trek on foot across town to Châtelet (where there seems to be, as usual, a manif going on) to catch the metro to the Gare St Lazare (which, post-renovation, seems to be mysteriously disguising itself as the Gare de l’Est), to take the train to Caen for the last gig of this tour, namely Matthew Driscoll’s ongoing TEI seminar at the MSH. The Hôtel Quatrans is much as I last saw it, and so, I am pleased to report, is the little restaurant called “Les Saveurs de la Réunion” just round the corner from it, where Matthew, Eric, and I enjoy some rum, some gâteaux piments assortis, two bottles of muscadet, and a tasty cari cabri before retiring for the evening.

Friday is seminar day. Serge Heiden (ARE YOU READING THIS, SERGE?) from the ENS Lyon opens proceedings with an update and an impressive demonstration of the textometrie project, which goes from strength to strength. They have an Équipex in which they will be working with hundreds of historians, and a number of other collaborations in prospect, some ANR-, some DFG-funded. The software is, of course, still available from SourceForge, and they are also in the process of setting up a portal for general access to some demonstration applications of it. Serge discussed the way the software uses TEI and other forms of markup; they have now fixed on a TEI-conformant pivot format, for which an ODD is in preparation. He also demonstrated many XAIRA-like features of the software and reported some work done by Alexei Lavrentev on importing and analysing the markup of a large corpus of texts from Frantext. He was followed by Antoine Widlocher, who described the search engine under development at Caen’s Greyc research group, initially for use in the Descartes project. Its data model uses graphs rather than trees, and much of his talk therefore concerned the difference between the two, although he did also present the user interface envisaged for the system; this is, of course, SPARQL-based, and will access a triple store in which XML and other annotations are all represented in RDF. All very interesting if, perhaps, a little computer-science oriented. Maud Ingarao commented that the project resembled Edouard Portier’s work on multistructured documents; I should have mentioned Desmond Schmidt, but didn’t. After lunch (in the student canteen; n’en parlons plus) Maud gave a brief overview of a newish XML database system called BaseX and demonstrated some of its jazzier features: she also noted that a test BaseX server has now been implemented as part of the TGE Grille de services. Frédéric Glorieux then gave a nice talk demonstrating how the presence of detailed markup in his version of François Gannaz’s “XMLittré” project facilitated several interesting searches: he proposed that the average size of text fragment within a TEI document might be an interesting stylistic indicator, and remarked on the high frequency of emotive words like “dieu, homme, roi” in the examples cited by Littré. Finally in this session Marie Bisson demonstrated the current state of the Juxta collation system, running under Windows on three manuscripts of Thomas Le Roy. Juxta apparently has its own XML markup but does now also (more or less) grok TEI.

Last but one session of the day concerned “quantitative codicology”, a term, I learned, which is even older than the TEI, having apparently been invented by someone called Ornato in 1980, according to Matthew, though it is a concept which can be seen to underlie Don McKenzie’s 1985 Panizzi lectures on bibliography as “the sociology of texts”, or the so-called New Philology of Stephen Nichols at the start of the nineties. I liked Matthew’s use of the phrase “the artefactual turn” to describe his increasing certainty that the meaning of text should not be dissociated from its “embodiment” or the historical and social forces that documents manifest, and intend to appropriate it for use when presenting the TEI’s recent reinvention of <sourceDoc>. Matthew and colleagues described the Fornaldarsögur norðurlanda project, which aims to provide an account of the production, dissemination, and reception of the “chirographically transmitted texts” of 36 stories from prehistoric times which can be identified in some 1500 texts presented in over 750 distinct Icelandic manuscripts. These are described using (inter alia) a reduced and tightly constrained schema derived from TEI P5, extended to include information derived from the transcriptions of the mss, such as the average written area, the number of abbreviations per line, etc., as well as such features as the presence of decoration, or the types of text included. Sylvia Hufnagel presented some hypotheses about possible connexions between these evidential characteristics and assumptions about the wealth or status of the owner or person believed to have commissioned creation of a manuscript, though there is really insufficient evidence so far to justify any generalisations one might be tempted to make about (say) the emergence of the “prestigious reading manuscript”, distinguishing (as it were) “coffee table” manuscripts from “paperbacks”. Eric Haswell described clearly and concisely the technologies used in the project, contrasting the “data centric” and “document centric” notions of relational and XML databases, and also showing how their web-service-based implementation, built on eXist, made it possible very easily to extract query results as CSV for input into traditional spreadsheets, or as JSON for use by cooler things such as SIMILE widgets. Finally, I gave that talk about linguistic annotation and why people say such terrible things about it. Not sure how appropriate it was to the day, but people seemed to be listening anyway. Final dinner of this week of overeating was at Le Bouchon du Vaugueux, where I (and others) tucked into a four-course gastronomic menu, including some excellent roast duck, and rather a lot of stewed pears.

And on Saturday, the journey home, which was all very pleasant till I actually got to London: trains cancelled without warning, inadequate fallback facilities, Great British Public mustn’t-grumbling, etc., etc. It took longer to get from London to Oxford (about 100 km) than from Caen to Paris (about 200 km), and involved a train so overcrowded that it could not leave the station, not to mention a 30-minute wait for a replacement bus in the cold outside Reading station. Never mind: next week I’m going back to France, where the trains (mostly) run on time and the train crews are (usually) helpful and less demoralised when they don’t.

« Exploiter les données structurées en XML »

Here’s a nice way of spending a day in the heart of the Marais: get together a bunch of people who actually use the TEI (or some other kind of structured XML markup) to do cool things, and ask them to talk for a maximum of 10 minutes each about the software they use and what they do with it. I claim no credit at all for this idea: the event was masterminded by Anaïs Wion, Fabrice Melka, and Denise Ogilvie, who just coincidentally have to prepare a workshop on the verb “exploiter” in Aussois later this year. Whatever its origins, this turned out to be a really worthwhile day, and not just because of the venue (the alabaster hall of the Archives Nationales) or the lunch (yum, Lebanese buffet).

A proper account of the proceedings has been promised for a couple of weeks hence, so this note is just the consequence of me jotting down some immediate impressions on the train home. There is already a useful page of links to stuff mentioned at the workshop at http://www.delicious.com/workshopexploiter, which I should probably update with this report.

I kicked off by explaining why the TEI really didn’t ought to have much to do with software production, except for its own nefarious purposes. I conceded, however, that those purposes led ineluctably to the production of Sebastian’s Excellent Stylesheets, and hence to a generic software tool of some importance in the community. Marjorie Burghart then talked about the XML database eXist, showing it in action on her sermones.net site, and also on her paleographic exercise site; the main problem with it, for her, was that its installation and maintenance on a local server require a little more technical expertise (for example, fine-tuning a Java environment, or recovering Tomcat when it falls over) than is available to the typical humanities department. This need for infrastructural computing support turned out to be a major theme of the day. Next up was Lauranne Bertrand from the CESR team at Tours, who showed how they currently use XTF to display various versions of their richly encoded texts. Maud Ingarao then introduced us to a new XML database from the University of Konstanz called BaseX, which seems worth a second look, if only for its very sparkly visualisation features, though its main claim to fame is probably its ability to handle REALLY BIG (multi-gigabyte) databases, which (if true) should give several current pontificators pause for thought. Jorge Fins, also from CESR, then talked about PhiloLogic, which provides traditional text searching (full text indexing, concordancing, etc.) capabilities, running on a distinct (and distinctly dumbed down) copy of the Bibliothèques Virtuelles Humanistes exported to Chicago.

After a brief pause for coffee, Alexei Lavrentev, standing in for Serge Heiden (reportedly recently immobilised by a close encounter with a crampon), showed us the current state of txm, the open source text analysis system developed by the textometrie project at Lyon. Séverine Gedzelman, also from Lyon, then described Hypermachiavel, an application for handling multiple aligned corpora (or, to be more exact, one specific set of multiple aligned corpora). I found the difference in software design between these two projects interesting: txm was developed very consciously as a generic text processing framework, incorporating and rationalising features from many other systems; whereas Hypermachiavel was developed (almost from zero) very much to meet the specific needs of a particular research project, without any particular generic intention.

Does the world need another generic tool for doing textual annotation in XML? Certainly many linguists and computer scientists seem to think so. Cue Antoine Widlocher from the University of Caen, and Glozz, a new platform for distributed linguistic annotation of text segments, overlapping or otherwise, relationships, graphs, etc. Very nice visualisations, as per other Java applications; nice features such as annotation histories; no evidence that any researchers from the humanities had been involved in its design or application up to now. Florence Clavaud, from the Ecole Nationale des Chartes, then spoke very briefly (no, really) about Pleade and her plans to enhance this mainstream EAD-muncher to include TEI capabilities. Pleade is one of the tools of choice in the French archival community, so enhancing it to handle TEI as well as it currently manages EAD and sets of digital images would be very cool. Also from the ENC, Vincent Jolivet and Frédéric Glorieux showed us diple, a nice simple package written in PHP which transforms complex TEI markup to static web pages, with a complementary suite of stylesheets to render them, and something called xrem, a very glamorous tool for the visualisation and construction of RELAX NG schemas. Fred likes to work directly in RELAX NG rather than via ODD, but the results almost justify such heresy. Nicole Dufournaud, aided and abetted by Denise Ogilvie, told the (possibly) instructive history of how Millefeuille (a nice customized TEI editing and indexing application based on work Nicole pioneered back in the nineties) is now in a suspended state of animation. Following one unsuccessful attempt at reanimation, it appears that another one is proposed as part of a European project. Finally before lunch, Maud Ingarao showed us some CamStudio videos about dinah: this “philological platform for the construction of multi-structured documents” is currently being developed at Lyon in a project studying the manuscripts of Jean-Toussaint Desanti, and seems worth a second look, even though it’s a long way from being stable yet.

After the aforementioned very nice lunch, there was a wide-ranging free-form discussion, from which I took away chiefly the following points (as aforesaid, there will be a more complete and correct report later):

  • a general feeling that IT infrastructural support was lacking: in particular, people wanted
    • some kind of sandpit environment in which they could experiment with different tools
    • some easily accessible web-publishing service for e.g. doctoral students to showcase their work
  • a general feeling that the development and implementation of XML-based projects was hard work requiring specialist input, and consequently a need for more training
  • a desire to share experience of these and other tools; TEI-FR and the TEI Tools SIG were agreed to be the appropriate channels for this.

Some pointed requests were made for the TGE to do more to provide some of these services, a proposal I agreed to go away and investigate.

Tweaking the Agora Stylesheets – 1

The AGORA project (this one, not to be confused with this other one nor even this other one again) has defined a very simple TEI XML schema for scholarly publishing. In this series of blog entries, I report my attempts to process a set of documents which conform to that schema into PDF and other formats, using the TEI stylesheet library. My environment is a laptop running Ubuntu 10.04, on which I have installed the 5.1.4 release of the tei-xsl package and most of the texlive Ubuntu packages (versions dating from July 2009 according to dpkg).

On the train to London this morning, I wrote a Makefile which validates each file and, if valid, processes it using the teitolatex and xelatex commands (a sketch of the sort of thing I mean appears after the list below). This produced something not entirely discouraging, with the following obvious things to fix:

  • some of my files had numbered headings and others didn’t. By default the stylesheets added numbers willy-nilly. I need to switch this behaviour off.
  • some of my files used <byline> in the header to indicate the affiliation for an author, like this:
    <byline><docAuthor>Fred Flintstone </docAuthor>
        Euphoria State University, Kansas</byline>.

    By default, the stylesheets clearly have no idea what to do with the text fragment following the <docAuthor>, and therefore spit it out on a page of its own.
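
For the record, the Makefile is nothing fancy: something along the following lines, where the schema filename (agora.rng) and the choice of jing as validator are illustrative guesses rather than a transcript of what I actually wrote.

SOURCES := $(wildcard *.xml)
PDFS := $(SOURCES:.xml=.pdf)

all: $(PDFS)

# validate first; only if that succeeds, convert to LaTeX and typeset
%.pdf: %.xml
	jing agora.rng $<
	teitolatex $<
	xelatex $*.tex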

I learned at the excellent MUTEC workshop last week that the recommended way of modifying these stylesheets is to set up a new “profile”, so I duly visited the directory /usr/share/xml/tei/stylesheet/profiles and created a new folder there called agora (somewhat to my surprise, this did not require root access). I then copied into it the existing default specifications for each of the target transformations I thought I might use in my Agora work. Like this:

$ cd /usr/share/xml/tei/stylesheet/profiles
$ mkdir agora
$ cp -r default/latex agora
$ cp -r default/docx agora
$ cp -r default/oo agora

The directory names (latex, docx, etc.) are not particularly well publicized: I worked out by inspection that “oo” must be the one invoked by the command “teitoodt”… presumably at some point it will be renamed to match LibreOffice vel sim.

Anyway, this setup should mean that if I now do e.g.

$ teitolatex --profile=agora foo.xml

I should get the same result as I would if I left out the --profile option… and so indeed I do. Good. Time to start messing about.

I take a peek into the contents of my agora/latex folder. It contains just one file, called to.xsl — which presumably controls the conversion from tei to latex. One day maybe some clever person will add a file called from.xsl which does the opposite. Or not.

The file is rather dull: all it does is remind me that the file is copyright TEI Consortium 2008, and that the library it invokes is “distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY”. Fair enough. It also loads the stylesheet at ../../../latex2/tei.xsl, but all it does to modify that is set some mysterious parameter called reencode to false. So clearly I am at liberty to add further modifications in this file… or will be, once I have changed permissions on it.
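
In other words, stripped of the legal throat-clearing, the whole file presumably boils down to something like this (my reconstruction, not a verbatim copy, and I am guessing the loading is done with xsl:import):

<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="2.0">
  <!-- all the real work happens in the imported library -->
  <xsl:import href="../../../latex2/tei.xsl"/>
  <!-- the profile’s only customisation -->
  <xsl:param name="reencode">false</xsl:param>
</xsl:stylesheet>
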
../../../latex2 (i.e. /usr/share/xml/tei/stylesheet/latex2, a sibling of the profiles directory) is the directory with the real biz. It contains files named for most TEI modules, as well as promising-looking files like tei-param.xsl. A little sniffing around, and I have discovered the XSL template for processing the TEI <head> element inside the file core.xsl, which contains the following magic:

<xsl:choose>
  <xsl:when test="ancestor::tei:floatingText">Star</xsl:when>
  <xsl:when test="parent::tei:div/@rend='nonumber'">Star</xsl:when>
  <xsl:when test="ancestor::tei:back and $numberBackHeadings='false'">Star</xsl:when>
  <xsl:when test="$numberHeadings='false' and ancestor::tei:body">Star</xsl:when>
  <xsl:when test="ancestor::tei:front and $numberFrontHeadings='false'">Star</xsl:when>
</xsl:choose>

That looks to me suspiciously like there should be a parameter called numberHeadings which I should set to false in order to suppress those pesky generated section numbers: “Star” presumably being a flag which makes the stylesheet emit the unnumbered (starred) variants of LaTeX’s sectioning commands, \section* and friends. (Of course, I’d have found that out immediately if I’d bothered to read the documentation, but…)

Back in my file profiles/agora/latex/to.xsl, I add the following line

<xsl:param name="numberHeadings">false</xsl:param>

and then regenerate the PDF, using the tweaked stylesheet in my agora profile:

$ teitolatex --profile=agora aaberge_2007.xml
$ xelatex aaberge_2007.tex

Bingo! no numbering. This could maybe be easier than it looks…

My second problem is trickier. The challenge and the delight of the TEI is precisely its open-endedness, and so it often happens that something which looks plausible in TEI has no obvious translation in some other markup system, such as LaTeX. In my case, how *should* the <byline> element be processed? A grep through the LaTeX directory shows me that at present there is no template at all for it, so my hands are comparatively untied. My first thought is just to add a template like the following to my file:

<xsl:template match="tei:byline/text()">
\author{<xsl:value-of select="."/>}
</xsl:template>

on the assumption that any bits of text inside the <byline> element might as well be treated as author names as anything else. But LaTeX is not so liberal: when it finds that I have generated

\title{The Semantic Web in a philosophical perspective}\author{Terje Aaberge}
\author{,
Sogndal, Norway}

it simply ignores the first \author. This suggests that I cannot solve this without learning more about LaTeX than I really want to.

Maybe I can modify the existing template for <docAuthor> to deal with this special case. In the file header.xsl there is a template like this:

<xsl:template match="tei:docAuthor">
<xsl:if test="not(preceding-sibling::tei:docAuthor)">
<xsl:text>\author{</xsl:text>
</xsl:if>
<xsl:apply-templates/>
<xsl:choose>
<xsl:when test="count(following-sibling::tei:docAuthor)=1"> and </xsl:when>
<xsl:when test="following-sibling::tei:docAuthor">, </xsl:when>
</xsl:choose>
<xsl:if test="not(following-sibling::tei:docAuthor)">
<xsl:text>}</xsl:text>
</xsl:if>
</xsl:template>

It’s a horrible kludge, but if I insert the following before the final <xsl:if> element, it should make sure I output any following-sibling text fragment before outputting the closing }:

<xsl:if test="parent::tei:byline and (following-sibling::text())">
<xsl:value-of select="following-sibling::text()"/>
</xsl:if>

I therefore copy the whole of the <xsl:template> for docAuthor into my to.xsl file, add the above clause, and, blow me down, it (nearly) works. I had, of course, forgotten to suppress the second appearance of those pesky text fragments caused by the default processing for <byline>. One more template:

<xsl:template match="tei:byline/text()"/>

fixes that.
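
Putting it all together, my profiles/agora/latex/to.xsl now looks more or less like this. This is reassembled from the pieces above; the wrapper element and the XSLT version are my guesses at what the default profile supplies.

<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:tei="http://www.tei-c.org/ns/1.0" version="2.0">

  <!-- the standard TEI-to-LaTeX library, as loaded by the default profile -->
  <xsl:import href="../../../latex2/tei.xsl"/>
  <xsl:param name="reencode">false</xsl:param>

  <!-- problem 1: switch off the generated section numbers -->
  <xsl:param name="numberHeadings">false</xsl:param>

  <!-- problem 2: copy of the docAuthor template from header.xsl,
       with my extra xsl:if spliced in before the closing brace -->
  <xsl:template match="tei:docAuthor">
    <xsl:if test="not(preceding-sibling::tei:docAuthor)">
      <xsl:text>\author{</xsl:text>
    </xsl:if>
    <xsl:apply-templates/>
    <xsl:choose>
      <xsl:when test="count(following-sibling::tei:docAuthor)=1"> and </xsl:when>
      <xsl:when test="following-sibling::tei:docAuthor">, </xsl:when>
    </xsl:choose>
    <xsl:if test="parent::tei:byline and (following-sibling::text())">
      <xsl:value-of select="following-sibling::text()"/>
    </xsl:if>
    <xsl:if test="not(following-sibling::tei:docAuthor)">
      <xsl:text>}</xsl:text>
    </xsl:if>
  </xsl:template>

  <!-- and suppress the default copying of text fragments inside byline -->
  <xsl:template match="tei:byline/text()"/>

</xsl:stylesheet>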

Of course, the more I look at this, the less I like it. A much better solution would be to tag the affiliation data as such in the XML source, using an element such as <affiliation> perhaps, and then process it correctly into whatever LaTeX provides for the treatment of such things. But that would, as aforesaid, require some research into what LaTeX can do, as well as changing the Agora schema.
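
If I ever do go down that route, I imagine the mapping would be easy enough. Supposing the schema allowed an <affiliation> element inside <docAuthor> (it doesn’t today: this is pure speculation), the obvious move would be to hand it to LaTeX’s \thanks command, which the standard document classes use to hang a footnote off an author name:

<!-- hypothetical source:
     <byline><docAuthor>Fred Flintstone
       <affiliation>Euphoria State University, Kansas</affiliation>
     </docAuthor></byline> -->
<xsl:template match="tei:affiliation">
  <xsl:text>\thanks{</xsl:text>
  <xsl:apply-templates/>
  <xsl:text>}</xsl:text>
</xsl:template>

Because <affiliation> would then be a child of <docAuthor>, the generated \thanks would land inside the \author{…} braces, which is exactly where LaTeX wants it.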

Not a bad way to pass the train journey to Paris, especially when surrounded by kids returning home after the half-term hols.