EEBO TCP in P5 – the return

It seemed a necessary act of piety to respond positively to a request for help from my former colleagues at the Oxford Text Archive, when they finally got around to considering the conversion of the latest (and, one must fear, the last) tranche of EEBO texts from the Text Creation Partnership. The conversion of the vast majority of EEBO-TCP phase 1 and 2 texts into a TEI P5 compatible version, and their subsequent upload to a gazillion GitHub repositories, was accomplished by a team headed by Sebastian, back in the days when TEI Simple Print was new and we were all a bit more bright-eyed and bushy-tailed. Now that the OTA has received its last tranche of TCP phase 2 texts, it should not (surely) be too much of a sweat to crank them through the same conversion process and deposit the results in GitHub too. Though of course nothing is ever quite that simple.

The XSLT script which does the heavy lifting is called tcp2tei and (thank you Sebastian) here it is, safe and sound in the TEI Stylesheets repository. And it still works. There is even a shell script, from the same masterly hand, for creating a new GitHub repo and uploading each file to it; this one only nearly works, GitHub having become a little fussier about authentication mechanisms over the last five years, but that’s not hard to fix. So I should just declare victory and move on.

On closer inspection, however, three issues have surfaced (so far).

Firstly, the catalogue numbers. In the current TCP P5 texts, each TEI header has a string of <idno> elements supplying its identifier in the Michigan DLPS database, the identifiers of one or more MARC records from OCLC or UMI catalogues, its Proquest number, identifiers in one or more standard bibliographies (ESTC, Wing, STC, Evans, etc.), and the number of the image set which was scanned. For some reason I do not understand, not all the new texts supplied to the OTA have their full complement of these identifiers. For example, of the 6498 titles supplied, 3062 have OCLC MARC record identifiers (discounting an additional 187 duplicated OCLC records in which the record identifier is redundantly prefixed by “ocn”). None of the 6498 has an image set number. Only 2987 have a Proquest identifier, and it is always the same as the MARC catalogue number. And 963 have no bibliographic identifier of any sort.

No matter: after my skirmishes with EEBO metadata last summer (reported at https://foxglove.hypotheses.org/date/2020/08), I am confident of being able to recover missing catalogue numbers from at least two different sources: one being Paul Shaffner (whom God preserve)’s eebodat1.xml, and the other being Proquest (whom God has recently abandoned)’s title list. The stylesheet I am working on to do mundane things like change the availability statement in the header is duly expanded to supply the missing <idno>s. I decided to add the new Proquest numbers (the so-called GOID) even though these are not present in the existing files.

Secondly, the image links. One reason for caring about the image set numbers is that they are used as part of the address to which @facs attribute values scattered throughout the texts are mapped. Back in the day, it was possible to link directly to a page image in this way. This facility is, however, no more: Proquest (and presumably their successors) will only allow you to access individual page images by using their own interface, so far as I can tell. It is possible to access the same images via the JISC Historical Dataset sites by judiciously stringing together values from those <idno>s, but I have yet to find a reliable way of doing so for individual pages. For the present, therefore, the @facs values will remain a touching reminder of how things once were. Though I did add a link to the JISC site into the new headers, along with other useful documentation.

And thirdly, the real subject of this entry: what to do about @rend. Now, I have long believed that the TCP P5 texts are not only valid TEI P5 XML but also valid against a specific TEI schema, to wit the schema named (after much argument and in the teeth of some opposition) TEI Simple Print. I distinctly remember (or think I remember) Sebastian and Magdalena putting quite a bit of effort into enhancing that schema with lots of @rendition values to match EEBO-TCP requirements. So when I actually tried validating my nice new files against it, I was a bit puzzled to find that they didn’t validate.

Specifically, the attribute @rend is not available in the TEI Simple schema, and has not been since at least August 2016. In its place, I should be using @rendition to point to one or more of the predefined simple:rendition values. So I spent an hour or two tabulating all the @rend values used in the new files, and finding their simple:equivalents. This proved easy for most cases, but impossible for just a few (7), some of them esoteric (@rend=’upsideDown’ anyone?), but others (e.g. @rend=”margQuotes” and its friends margSglQuotes and margDblQuotes) quite frequent and clearly necessary. I also realised (belatedly) that allowing my script to make this change was going to make my new texts inconsistent with the existing ones. For the existing TCP P5 texts are not valid against TEI Simple either, I discovered, and somewhat to my embarrassment: they use @rend with all sorts of exotic values all over the place. They do use @rendition, but in one way only: some <pb/> elements have @rendition=’simple:additional’. This was entirely mysterious to me for a while (until Paul remembered what it was for). In any case, I will worry about that when a new systematic revision of the whole collection is undertaken, should that day ever dawn. For the moment, I will grit my teeth, stick with @rend, and assure anyone who asks that all TCP P5 texts are valid against, err, TEI ALL. This is known as biting the bullet, I believe.
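For the record, the tabulation itself needs nothing fancier than the usual command-line suspects. A rough sketch (grep is a blunt instrument, and an XPath-aware tool would be more robust, but it serves for a first survey):

grep -ho 'rend="[^"]*"' *.xml | sed 's/rend=//; s/"//g' | tr ' ' '\n' | sort | uniq -c | sort -rn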

Update: 10 June 2021. I uploaded all 6498 new texts to new repositories in the Github textcreationpartnership collection over a period of 24 hours last week. And, at somewhat greater length, I have now updated my repository at eebo-bib to describe more precisely what I did to create a TEI-compatible TCP bibliography. Definitely time to declare victory and move on.

Seven steps to Ossian

A TEI transcription of the 1773 edition of James Macpherson’s “translations” of the works of “Ossian”

Why would anyone want such a thing? I can’t imagine, but here’s how I made this one. It turned out to be a seven step process — so far. You can check out each stage from this github repo, if you’re really curious…

1. Decide which PDF to work from

You might think that one library’s digitized copy of “the 1773 edition of Ossian” would be much the same as another’s. But no. There are variations in the physical state of the originals, and the PDF format in which the digitization is made available may also vary. I downloaded three different digitized versions from the Internet Archive, but mainly I used the PDF version of the copy preserved at the National Library of Scotland (https://ia802302.us.archive.org/33/items/poemsofossiantra11macp/poemsofossiantra11macp.pdf). I say “mainly” because that particular PDF file had a curious glitch in it which made some of the half-titles disappear when extracted as separate image files. I supplied the missing text from the PDF version of the New York Public Library’s copy.

2. Extract images from PDF

$ pdfimages [filename.pdf] [outputPrefix]

I am too lazy to install anything clever, so I use tried and tested ancient command line Unix tools, like pdfimages. Applying this to my chosen PDF file, I find that each page in the NLS PDF produces three files: two in PPM format which appear to be masks, and one in grayscale representing the page, in negative form. I extract the page image and save it in my img folder, ready for the next stage.
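In practice that amounts to something like the following (a sketch only: the -p flag asks pdfimages to include page numbers in the output file names, ImageMagick's convert undoes the negative, and which of the three per-page files is the real page image needs checking by eye before trusting the wildcard):

mkdir -p img
pdfimages -p poemsofossiantra11macp.pdf img/page
for f in img/page-*.pgm; do convert "$f" -negate "${f%.pgm}.png"; done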

3. Do OCR using medieval rules

$ tesseract [inputfile] [outputfile] -l enm

As noted above, I have a preference for old-fashioned command-line Unix tools, and tesseract, once instructed to use an appropriate language model (enm, rather than eng), actually does a pretty good job of recognizing 18th century typography. It consistently fails on ligatured “ct” and a few other oddities, but is much better than I expected at distinguishing long-s from f. Most of its errors seem to be due to poor image quality. At the end of the process, I have about eight hundred text files, each corresponding with one page of the source, and most of them containing plausible text, which I save in the txt folder.
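Looping tesseract over the extracted pages is equally undemanding; something along these lines (paths as in the previous step, and purely illustrative):

mkdir -p txt
for f in img/*.png; do tesseract "$f" "txt/$(basename "$f" .png)" -l enm; done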

4. Hand-check, page by page, introducing minimal non-xml markup

I then (and this is where the time goes) proofread each one, introducing some absolutely minimal markup, of my own invention. The cheatsheet reads as follows:

  • introduce a — line at start and end of text on page
  • introduce a == line at start and end of note-text on page
  • introduce a blank line between paras but otherwise retain linebreaks
  • introduce an extra hyphen following end-of-line hyphens which are to be retained
  • replace * or +1 sigla for notes and note references with @ and a sequence number
  • use entity references for long dash, accented letters etc
  • use “ for open double quotes
  • retain forme work on a single line
  • delimit smallcaps with { … }
  • delimit italicized phrases with {{ … }}
  • use % to mark the start of a dramatic style speech
  • add \ at end of verse lines
  • add $ at end of speech
  • add &end; at end of an argument or other chunk
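
To give a flavour of what this looks like in practice, here is an invented scrap of a page marked up according to these conventions (the text is made up, and the placement of the forme-work line relative to the — markers reflects my own habit rather than a rule):

FINGAL.                              123
—
{Thus} spoke the chief, with an italicised {{phrase}} and a
note reference @1; a compound broken at the end of a line
keeps its hyphen doubled, like the sea--
girt rock.

%A speech in the dramatic style opens with a percent sign,\
each of its verse lines ends with a backslash,\
and the speech as a whole ends with a dollar.$
—
==
@1 The text of the note goes between the == lines; notes
frequently run on to the following page.
==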

I made the corrections using, need you ask, emacs aided and abetted with some perl one-liners for bulk corrections. Reading Ossian is an odd way to pass time during lockdown, but no worse than some of the other sanity-preserving expedients one reads about on Twitter. A good soundtrack seems to be almost any Sibelius symphony.

5. Transform and (slightly) reorganize the textfiles into proper XML, one per text

perl streamer.prl v1files.txt

This is not the most elegant or indeed sanitary code I have ever written; it also took quite a few iterations to get it working acceptably, which I defined as generating well-formed XML. It reads in a list of filenames, interspersed with flags to tell it when to start a new output file, and what its initial page number should be. It then processes each page of transcribed text in succession, building up one string containing all the text chunks for a work, and another containing all the footnotes. Footnotes often span pages, of course. The resulting strings are then output as two separate XML <div> elements. Their contents also acquire some minimal XML tagging (<pb/>, <hi>, <p>, <sp> etc.) before they get flushed out. I gave up trying to overcome some inelegant results of a not particularly elegant process. The code is in the Scripts folder of this repo for the morbidly curious; the results are in the xml folder: at least they are well-formed XML.

6. Run XSLT scripts to convert this stuff to kosher TEI documents and validate same.

Since this version is going to be my contribution to the “Ossian Online” project, it should probably follow that project’s usage and TEI practices. Alas, they do not have an ODD to tell me what that should be, and their files are apparently validated against TEI-All. But they do have a reasonable amount of documentation, and enough files already available online for me to be able to construct an ODD automagically (take a bow, oddbyexample.xsl, a well-kept secret inside the TEI P5 Utilities repository) and thus a schema I can use to validate my TEI files when I have finished licking them into shape.

As ever, the fun part of the project is seeing how much of the remaining data-mungeing can be scripted in XSLT. Quite a lot, it transpires, though it remains necessary to hand-craft the details of titlepages, tables of contents etc. Another complete sweep through the text checking for miscellaneous things like the following is also needed:

  • words broken by a pagebreak but not properly reassembled (happens occasionally)
  • quotations not marked as such
  • verse lines not marked as such
  • code-switching
  • residual OCR errors (there are always residual OCR errors)

Before launching into that campaign, I checked the <pb/> elements introduced at stage 5 against the page numbering of the original as preserved in the paratextual comments of my transcription. Somewhat to my surprise, the page-numbering corresponded exactly with the number of such elements, enabling me to construct both a reliable reference system and reliable page image links as values for the facs attribute on each <pb/>.
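The check itself is easily scripted: something like this (paths illustrative) prints a per-file count of <pb/> elements to set against the page run recorded in the paratextual comments:

for f in xml/*.xml; do printf '%s %s\n' "$f" "$(grep -o '<pb ' "$f" | wc -l)"; done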

7. Decide on the macrostructure

The Ossian Online project uses <div> elements for every subdivision of the 1773 edition, at whatever level, all the way down from its two volumes to the arguments of individual poems. It takes the perfectly reasonable view that every text can be organized as an ordered hierarchy of uniformly nested objects. As a consequence, the @type attribute for <div> has to do quite a lot of heavy lifting. oddbyexample enumerates its values as follows:

  • advertisement
  • argument
  • book
  • contents
  • dedication
  • dissertation
  • duan
  • fragment
  • maintext
  • poem
  • preface

This list combines types that have a structural function (fragment, duan, book) with others that are purely descriptive (advertisement, argument, poem). Nothing wrong with that, but I still find this “divs all the way down” approach somewhat problematic, and that for two reasons. Firstly, a <div> is supposedly something incomplete, which is true of (for example) the argument prefixed to each poem or book, but not of the book or poem itself. Secondly, the relation between the argument and the poem requires that the two be siblings within some larger entity, but the poem is not really an incomplete part of that entity in quite the same way as the argument. Furthermore, in the 1773 edition, we have some texts which are undivided (Carricthura for example) along with others which are divided variously into “duan”s (Cathlona) or “book”s (Fingal). Should each book of Fingal be treated as a single text? Should the whole of Fingal be treated as a single text? Can’t we have both?

Values such as maintext and poem in the list above ring alarm bells indicating that these ontological issues are being evaded. Since the TEI in its wisdom already provides a mechanism for coping with exactly this (not at all unusual) kind of macrostructure, why not use it? I refer of course to the element <group>.

My version of the 1773 edition prefers to treat each distinct work as a <text>, rather than a <div type='maintext'>. Within it, there is a <front> containing a titlepage or half title and the argument, followed by a <body>, if the work is not further subdivided, or by a <group> if it is. A <group> combines a number of lower-level <text> elements, each again with a <front> and a <body>. I also treat each of the two volumes of the 1773 edition as a <group>. The file driver.tei embeds each file in the structure using XInclude; it is commented to explain what’s going on (a bit).
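Schematically, the driver file arranges things something like this (a sketch only: the file names are invented, and the real driver.tei carries rather more header and commentary):

<TEI xmlns="http://www.tei-c.org/ns/1.0" xmlns:xi="http://www.w3.org/2001/XInclude">
 <teiHeader><!-- header for the whole 1773 edition --></teiHeader>
 <text>
  <group>
   <group>
    <!-- volume 1 -->
    <xi:include href="xml/fingal.xml"/>      <!-- a work which is itself a group of books -->
    <xi:include href="xml/carricthura.xml"/> <!-- an undivided work: front plus body -->
    <!-- ... -->
   </group>
   <group>
    <!-- volume 2 -->
    <!-- ... -->
   </group>
  </group>
 </text>
</TEI>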

It remains to be seen what my colleagues in Galway make of this radical re-organisation, to say nothing of my perverse desire to retain the long-s form. But at least changing to a format which matches more exactly that of the (excellent) work already done on the Ossian Online site will be a Simple Matter Of Programming.

An experiment in counting the books

A couple of years ago I spent some time trying to determine which of the titles in the wonderful “At the Circulating Library” (ATCL) database were freely available online in digital form. This was for largely pragmatic reasons to do with building the ELTeC English language collection: other blog entries describe the method I used and some preliminary results. It’s not as easy as you might suppose to download reliable catalogue information from most digital libraries, nor is it always readily tractable when you do. After some experimentation, I hit on the idea of creating a magic key, a kind of fingerprint, derived from the title and author name as specified, which could then be matched against keys in the same format derived from ATCL entries.

More recently, it occurred to me that this data might also provide some interesting numbers to contribute to current debates about digitization priorities. Exactly why some titles make it to Project Gutenberg, or the HathiTrust, or the Internet Archive, and others don’t is not a question to which simple un-nuanced answers are likely or even (maybe) possible, but it is still worth asking. Those responsible for the digitization efforts of major libraries are, for some reason, a little coy about the principles on which books are chosen for digitization, or even about whether they actually have explicit selection policies. I assume that there is a difficult tightrope walk between, on the one hand, practical but purely adventitious matters (the relative locations of volume and scanner, the size and state of the volume, the time of day, the temperament of the scanner operator, etc.) and, on the other, principled criteria aiming to ensure a balance of, say, titles by female and male authors, or high and low brow, date of production, longevity of readership, and so on. It would be surprising if the choices were completely unrelated to characteristics of the population being sampled, or totally failed to reflect the cultural priorities of the scanning operation; the same uncertainties apply, of course, to the collection being sampled for digitization itself.

Anyway, I recently read an interesting article by Allen Riddell and Troy Bassett (“What library digitization leaves out”; preprint available from https://arxiv.org/abs/2009.00513)  which reports that in the data they looked at – the comparatively small sample of surviving English novels published in 1836 and 1838 – shorter books, and books with male authors are disproportionately more likely to be digitized. I naturally wondered whether this applies equally well across the whole of the 19th century.  Which is what led me to revisit my efforts of two years ago. But first, here are the results.

There are 19,912 titles in the current ATCL database. Of these, 9152 (46%) have authors identified in the database as male, 9809 (49%) are identified as female, and 951 (4%) are identified as unknown. These relative proportions are rather different if we look at titles with at least one digital surrogate, of which there are in total 9099 (45%). Of these 9099 digitized texts, we find 5221 (57%) are of male authorship, 3718 (41%) of female authorship, and 160 (2%) are unsexed.

Look at that again. Although there are actually more titles available for digitization from female authors than from male, the number that actually gets digitized is significantly smaller (if, like me, you think a gap of 16 percentage points is pretty significant). Hmmm. These counts of course derive from the whole period covered by ATCL, from 1800 to 1900, so I also calculated them for each decade, only to find that the proportions and their imbalance remain fairly consistent across the century. And this despite huge changes in the numbers: for the last decade of the century ATCL lists nearly 6000 titles, a six-fold increase on (for example) the 1840s. What percentage of those titles were digitized? In both decades, over 51% (538 of 1037 titles in the 1840s; 3092 of 5979 in the 1890s). And what proportion of those digitized titles were male-authored? In both decades, 62%. There is some variability across the decades, but the basic picture remains the same.

One possible explanation might be that titles with unknown or unsexable authorship (e.g. the ubiquitous “Anonymous”) are more likely to have been female, and that hence we are not seeing all the truly female authors. But even were this the case (after all, why should we not equally well hypothesize that male authors might be bashful or crave secrecy?), the digitization rate for books ostensibly male-authored remains stubbornly higher than that for books ostensibly not male-authored (i.e. those classed as either F or U by ATCL). And indeed, the same, mutatis mutandis, is true of the ostensibly-female to ostensibly-not-female comparison.

Here’s a table showing the raw counts (the -dig columns count the titles in each category with at least one digital surrogate):

Decade All “Male” “Female” “U” A-dig M-dig F-dig U-dig
Total 19912 9152 9809 951 9099 5221 3718 160
1830s 482 256 174 52 250 164 85 1
1840s 1037 543 422 72 538 334 202 2
1850s 1483 595 778 110 718 347 358 13
1860s 2341 1019 1093 229 1015 540 456 19
1870s 2866 1189 1514 163 1300 642 633 25
1880s 4126 1693 2287 146 1765 945 782 38
1890s 5979 2995 2863 121 3092 1929 1103 60

 

And here’s another showing the percentages (Ad% = share of all titles digitized; M%, F%, U% = share of all titles by male, female, and unknown authorship; Md%, Fd%, Ud% = share of digitized titles by male, female, and unknown authorship):

Decade Ad% M% Md% F% Fd% U% Ud%
Total 45.70% 45.96% 57.38% 49.26% 40.86% 4.78% 1.76%
1830s 51.87% 53.11% 65.60% 36.10% 34.00% 10.79% 0.40%
1840s 51.88% 52.36% 62.08% 40.69% 37.55% 6.94% 0.37%
1850s 48.42% 40.12% 48.33% 52.46% 49.86% 7.42% 1.81%
1860s 43.36% 43.53% 53.20% 46.69% 44.93% 9.78% 1.87%
1870s 45.36% 41.49% 49.38% 52.83% 48.69% 5.69% 1.92%
1880s 42.78% 41.03% 53.54% 55.43% 44.31% 3.54% 2.15%
1890s 51.71% 50.09% 62.39% 47.88% 35.67% 2.02% 1.94%

 

In an ideal world, you’d expect the percentages for titles with male authors (M%)  and for digitized titles with male authors (Md%)  to be roughly the same, right?  Think on… And feel free to download the csv file behind these tables for your own experimentation.

One should always suspect the data, so I make no excuse for the following detailed blow by blow account of how I got these numbers. Full gruesome details, including the scripts mentioned below, are available from https://github.com/lb42/bookLists

The basic method was to download a complete catalogue of relevant titles available from each target digital library, and then try to match them with records in the ATCL. For Google Books, which seemingly does not provide a complete catalogue online, I tried a different method, discussed further below.

I started by downloading the latest (June 2020) dump of the ATCL database, and converting it to a basic TEI XML format. I then did much the same for the holdings of five digital libraries with good holdings of 19th century novels: the Hathi Trust, the British Library, the Internet Archive, Project Gutenberg, and Google Books. As a control, and for testing purposes, I also looked at a few smaller collections, notably the Victorian Women Writers Project at Indiana University and the (now defunct) University of Adelaide “ebooks” repository. I wanted to provide something similar to John Mark Ockerbloom’s lovely Online Books Pages at https://onlinebooks.library.upenn.edu/ but more precisely tied in to ATCL.

Hathi Trust makes available a monthly dump of their entire collection as a huge tab-delimited file. Working with the most recent dump, dated September 1 2020, I used a simple-minded perl script `hathiProcess.prl` to parse this file and select from it only freely-available English language books published in Great Britain between 1800 and 1920; an XSLT stylesheet `htConv.xsl` then converted the results to the common project format (CPF).

The British Library website makes available an Excel spreadsheet providing metadata for the titles from their collection which were digitized some time ago by the Microsoft Books project. I downloaded this, converted it to TEI with `csvtotei`, and converted the result to CPF (selecting just the 19th century titles) with `blConv.xsl`.

Project Gutenberg makes available several versions of its catalogue data. I worked with the most recently updated one, which is a vast archive of unbelievably verbose RDF files. Despite its complexity, this data doesn’t include any publication data for the source texts concerned (unsurprising really), though it does provide birth and death dates for the authors. To cut down the numbers a little, I rejected titles whose authors were not born during the 19th century, and also those which specified a MARC relator field “edt” (to cut out non-original editions). Once I had remembered how on earth to handle a gazillion tiny files of RDF (I did this back in 2018 ), I used the `gutConvRDF.xsl` script to process them all to CPF, and concatenated the results into a single file.

The Internet Archive, so far as I can see, doesn’t have any generally available or downloadable catalogue, though it does have a really good query interface. The method I used for attacking Google Books would presumably work equally well (or equally badly; see below) in this case, but I haven’t tried it. Instead I just used a predefined collection called `19thcennov` which someone at the University of Illinois at Urbana-Champaign thoughtfully created back in December 2008. This gave me 7828 XML records which were easily converted to CPF using `iaConv.xsl`.

The common project format files all consist of TEI <bibl> elements with either an @xml:id attribute or an <idno> specifying the identifying code for this item in the relevant repository: e.g. `ia:foreignersnovel03pric` identifies the Internet Archive’s digitization of volume three of Eleanor Price’s novel “The Foreigners”. Each <bibl> also has an @n attribute supplying the magic key for the title, which is confected as follows:

  • remove the full stop following Mr or Mrs in any title containing one
  • take the substring of the title up to the first occurrence of one of the punctuation marks . , : ; or /
  • concatenate this with the author’s last name
  • convert to lower case and remove all punctuation characters and spaces

So, for example ATCL lists a work with the title “The Foreigners: A Novel” attributed to author “Eleanor C. Price”. The same work appears in the Internet Archive list, but with the author “Price, Eleanor C. (Eleanor Catherine)” and the title “The foreigners : a novel”. Despite the differing strings, both will get the same magic key “theforeigners|price”. This method is far from bullet proof, but it’s serviceable.
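For the curious, the recipe is easily scripted; here is a shell sketch of it (the real work was done in XSLT, and GNU sed is assumed for the \b word boundary):

makekey () {   # $1 = title, $2 = author's surname (an illustrative helper, not the production code)
  local t a
  t=$(printf '%s' "$1" \
      | sed -E -e 's/\b(Mrs?)\./\1/g' -e 's%[.,:;/].*$%%' \
      | tr '[:upper:]' '[:lower:]' | tr -d '[:punct:][:space:]')
  a=$(printf '%s' "$2" | tr '[:upper:]' '[:lower:]' | tr -d '[:punct:][:space:]')
  printf '%s|%s\n' "$t" "$a"
}
makekey "The Foreigners: A Novel" "Price"   # prints: theforeigners|price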

For Google Books, as noted above, there is no readily downloadable catalogue. But there is an API, which in a moment of madness I thought it might be cool to learn how to use. A day of poking around led me to a neat python script some helpful person had written to look up ISBN numbers (hat tip to AO8’s treasury), which I mercilessly hacked to my own purposes. My version reads a file of URL-encoded search requests like “inTitle:the+inTitle:foreigners+inAuthor:Price”, fires them at the Google API, and processes the returns into a rudimentary bibl, or a comment lamenting the absence or unavailability of the item in question. The file of search requests is rather long (one for each title in ATCL for which I have not yet found any digital version – a total of 11,203), so I make the program sleep for a while after firing off about 40 consecutive requests, to help the Google server catch up. Despite this considerate behaviour on my part, it did not take Mr Google long to decide that my program (or my IP address) was a threat, and to start returning unco-operative HTTP messages like 503 (“Service Unavailable”) and 429 (“Too many requests”). The API Help pages confirm that Google considers “using an app, program or script to perform a large number of searches in a short time” prima facie justification for temporarily blocking the IP address in question; though it’s not clear what exactly is meant by “large” (more than 100?) and “short” (less than a minute?) in that phrase. Furthermore, when I search using my specially-minted API key, there seems to be a hard limit of 1000 queries per day in any case: so this job is not going to be finished very quickly. Still, I do now have an extra 1517 records to show for two days’ work.
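Stripped of its python wrapping, each request boils down to a single call on the Books API volumes endpoint, something like this (the output path and the API key variable are mine; note that the query operators are spelt intitle: and inauthor: by the API itself):

curl -s "https://www.googleapis.com/books/v1/volumes?q=intitle:the+intitle:foreigners+inauthor:price&key=${GOOGLE_API_KEY}" > json/theforeigners-price.json
# ... plus an occasional sleep between batches of requests, to help the Google server catch up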

Once I’ve created all these lists, I run the merger.xsl script to add <ref> elements to the ATCL-TEI file I created in the first step. This makes for some redundancy, for two reasons: firstly, for most of the archives a three volume novel is likely to get a separate entry for each volume; secondly, for many titles, there exist multiple digitizations – which may (or may not) derive from the same source. The following table shows for each archive the number of records selected for processing, the number of references to ATCL titles found, and the number of titles affected. Note that I haven’t yet done any de-duplication to remove overlaps.

Archive Records Refs Titles
British Library 62015 9920 5104
Hathi Trust 460070 18891 5655
Internet Archive 7829 4691 1655
Project Gutenberg 38338 2880 2275
Google Books ? 1517 1517

I haven’t made available the CPF files for each archive, nor the final merged TEI version of the ATCL dump, since this is not really my data to share. But I have made available a file called atcl-links.csv, which is a spreadsheet with a row for each ATCL title digitized in one or more publicly available digital collection, mapping its ATCL identifier to its identifier in each repo. I’ll  update these as and when the data improves.

Building the Eltec (stage 0) … continued

Have at you, Project Gutenberg…

I am for sure not the first person to think it would be nice to try to make the Project Gutenberg metadata more easily machine tractable. Matthew Jockers wrote a python script to hack usable metadata out of the individual texts back in 2010 (see this blog entry); Damon Cavar wrote some java to do something similar, but starting from the RDF form of the Gutenberg catalog, as part of an ambitious (but I think as yet incomplete) Project Gutenberg to TEI XML conversion project, last updated in 2012. More recently, Jonathan Reeve has announced an interesting project which is hacking together various bits of Gutenberg, Gitenberg, and Wikipedia to make a Project Gutenberg database for text mining … one day.

My objectives are not so ambitious, and I like to keep things simple. I just want to know how many Gutenberg titles are listed in the Bassett database of 19th century British fiction. (I’d also like to be able to extract a list of all British novels in English published for the first time between 1902 and 1920, but that’s a separate problem.) Having experimented with other plain text options, I reluctantly decided to start from the Gutenberg RDF catalogue. At least that is expressed using a syntax which XSLT can handle and validate. No claims that its semantics are entirely reliable, of course.

Step 1 is to download and unpack a massive zip file from the Gutenberg site. The RDF format data we want is linked to from a page in the Gutenberg wiki. It is massive because it actually contains nigh on 50,000 subdirectories, each containing a single file describing a single text. So, for example, the RDF format catalogue entry for text number 1234 is in the unpacked file cache/epub/1234/pg1234.rdf. When I looked there was also just one directory called DELETE-55495, which contained a variant of the entry for pg55485.rdf, but I pretended I hadn’t noticed that.

Step 2 is to develop and perfect a simple XSLT script to extract the useful grains from the enormous amount of chaff in each RDF file. This script (rdftotei) is designed to meet the needs of the ELTeC, so it rejects anything which is clearly outside the desired time frame (author born after 1920 or before 1800), or definitely not a novel (some records use the MARC relator code “edt” to show that they are edited compilations). If I could find a way of identifying books which are not in English I would exclude them too. It cranks out simplified TEI bibl records like this:

<bibl xml:id="10037" n="abeautifulpossibility|Black">
<title>A Beautiful Possibility</title>
<author dates="1857 1936">Black, Edith Ferguson</author>
</bibl>

As you can see, this includes a  magic key that I will later use for matching with other ELTeC bibliographic records, notably the Bassett database I blogged about last week.

Step 3 is to find a way of running this script against 50,000 files which does not cause my computer to melt down, and preferably will complete in my lifetime. My first simple-minded approach was a shell script that invokes saxon on each file. But this has to set up a JVM afresh each time it runs, so it takes forever. I considered glomming the individual files together into a smaller number of larger files, so that loading the JVM gets done less frequently, but this is fiddly because each of the individual files begins with an XML declaration that would have to be removed during the glomming process. A question to the oXygen users list elicited three helpful alternative suggestions within ten minutes, the easiest and quickest of which is to use a feature I didn’t even know existed in saxon: specifying a directory as input and as output. So with all my RDF files in the folder RDF and nothing in the directory RDFx, I do the following two shell commands:

saxon -s:RDF -o:RDFx rdftotei.xsl
cat RDFx/* > gutenList.xml

and the whole thing is done in a couple of minutes.

Step 4 is to repeat the process as before: pick out the magic keys and then look for overlaps between those keys and those in the Bassett database, like this:

saxon gutenList.xml getKeys.xsl > gutenKeys.txt
comm -12 <(sort gutenKeys.txt) <(sort bassetKeys.txt)

Result on the first round: 1478 Gutenberg titles are already known to Bassett. Not as many as I’d expected, but not bad. Here are the full results for all the digital collections examined.

Out of 13,859 titles in Bassett’s database,  a total of 2937 appear in at least one of Gutenberg, Internet Archive, Google Books, or VWWP, i.e. more than 20% (which is better than I was expecting).  Here are the counts for the individual collections:

Gutenberg 1478
Internet Archive 1155
Google Books 594
VWWP 32

 

As is also to be expected, there’s a bit of overlap: 2638 titles appear in only one digital collection, 276 in two, and 23 in three. You can probably guess which titles those are, though one of them came as a bit of a surprise: what’s so great about Mary Ward’s “Marcella”?

How to make a sow’s ear into a silk purse/Comment faire d’une buse un épervier

As the proverb says, you can’t turn a buzzard into a sparrow-hawk. But here’s how I went about producing a TEI-conformant, minimally encoded edition of the 1611 King James bible, starting from an all-singing, all-dancing, vastly over-complicated web site to the existence of which Martin Mueller had alerted me last Friday in a plaintive posting to TEI-L.

The problem with web sites like this is that they expose only various HTML views of their underlying data, usually heavily infested with javascript, often deploying all sorts of nonstandard gizmos to make the pages look just so. I’ve no quarrel with their doing that, I just wish they’d make it a bit easier to get at the real data, i.e. the transcribed text and the pointers to the images that go with them. This site is a case in point: after spending some time looking at some of the gazillion HTML files a simple-minded wget -r starts hoovering up, I hypothesize that the data is actually stashed away somewhere inaccessible and all that you see on the site is a bunch of variously configured static web pages which have been created from it. The good thing about that is that the static web pages were created by an automaton, and the process can therefore be reversed by another automaton. The bad thing is that (in this case at least) clearly different automata have been used to generate vaguely different versions of the same page. For example: the following three URLs all show subtly different versions of the same first page of the 1611 bible : https://www.kingjamesbibleonline.org/1611_Genesis-Chapter-1/  https://www.kingjamesbibleonline.org/Genesis-Chapter-1_Original-1611-KJV/ and https://www.kingjamesbibleonline.org/Genesis_1_1611/

These are not simple redirects: each URL delivers a file in a different format. Well, I could have asked whoever runs this site what was going on, but that would have spoiled the fun, so I chose the format I liked best (the last of the three above) and wrote a script to generate requests for just those pages. I had to assume that the URLs would follow a consistent naming format, encouraged by a table of the names of the books of the bible I found in one of the chunks of embedded javascript, and which I moved into my little perl script. Running this script produced a bash script which grabbed each page using curl and saved it as an HTML file on my machine.
What went wrong with this process? Surprisingly little: I didn’t find out till Sunday lunchtime that not only had I completely overlooked the Apocryphal books, but also that the file naming conventions used for these were not quite the same as for the canonical chapters (“Judith” for example is actually spelled “Iudeth”), but for the bulk of the 1300 or so chapters my guesses about the URL to use were spot on.  [This was Hubris. See my comment below]
The real challenge was disentangling the useful data from the resulting screen-scraped mess of pottage. My usual approach here is to run the HTML I get through Dave Raggett’s utterly wonderful tidy utility to turn it into kosher XML, and then write an XSLT script to pick out the bits I want. In this case, however, the pottage I got was so messy even tidy threw up its metaphorical hands and refused to proceed with it, so I had to concoct a little perl script to pre-process each file and extract just the useful part, before running it through tidy, and processing the resulting XML into conformant TEI by means of saxon and a little XSLT stylesheet. Like this:

for f in webScraped/*.html; do
  FNAME=$(basename "$f" .html)
  echo "${FNAME}"
  perl extract.prl "$f" | \
    tidy -q --error-file tidyerrs --doctype omit --numeric-entities yes -asxml | \
    saxon -o:chaps/${FNAME}.xml -s:- -xsl:postGrab.xsl chapId=${FNAME}
done

And what went wrong with this process? Well, quite a lot, as you might expect, but nothing that couldn’t be fixed by tweaking and rerunning the process (this is why I always put the effort into writing bash scripts for jobs like this: they can be rerun till I get it right). For example, I was deriving an XML id for each chapter from the name of the input file, but some of the files had names beginning with digits. My plan was to put each chapter into a separate XML file and then use XInclude to munge them together into a TEI document (to maximize the flexibility of said mungeing) — but for this to work I had to get namespace declarations in the right place, and as anyone who has used them knows, anything involving namespace declarations never works the first time. To say nothing of unexpected inconsistencies in the data: which in fact were really very few. So far, the only thing that has caught me out was a handful of chunks which looked like biblical data in the HTML markup but were not. Fortunately these were detectable because they wound up with an invalid @n attribute in the XML output and could therefore be weeded out after the event.
While all this was going on, I wrote my driver file and did some further sniffing around on the interwebs for sources from which to complete the work. The 1611 Bible has a title page and loads of prefatory matter not all of which is available on www.kingjamesbibleonline.org (for the record, I found the title page on Wikimedia, and the rest of the front matter at several places, but in page image form only).
How should the Bible be modelled as a TEI document? In my first view, each verse is an <ab>, each chapter is a <div>, each book is a <text>, each testament is a <group>. This made sense to me, but then I realised that processing would be simpler if instead each book were regarded as a <div> of a different type; hence, that is what the current versions of both driver file and ODD say. Considerations of where to put the genealogies, tables for Easter, almanacs etc. which arguably do not belong in the front matter may lead me to review that decision, so I have left <group> lurking in my ODD. It should be easy to produce a separate driver file representing that alternative view.
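In outline, then, the current view of the structure looks something like this (a sketch: the @type values and the sample identifiers are mine, not necessarily what the ODD finally settles on):

<text xmlns="http://www.tei-c.org/ns/1.0">
 <front><!-- title page, dedication, translators' preface, etc. --></front>
 <body>
  <div type="testament" n="OT">
   <div type="book" n="Gen">
    <div type="chapter" n="Gen.1">
     <ab n="Gen.1.1"><!-- verse text --></ab>
     <!-- ... -->
    </div>
   </div>
  </div>
  <!-- the New Testament likewise -->
 </body>
</text>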

A more pressing need, however, is to sort out the placement of the page image links: in the HTML source, and therefore in my XML, links to the page images for the whole of a chapter are given as a sequence at the start of the transcribed text for that chapter, rather than being placed at the point in the transcript where that page begins. In some cases that point is identifiable because the transcribers have chosen to include part of the running page header in the transcription there (it winds up as a <fw> element); but in many it isn’t. Most chapters only occupy a few pages, so sorting this out would not be a major effort, just a rather tedious, and not easily automatable, one.
So what have we learned? Actually it’s not that hard to make a usable TEI document even from something as idiosyncratic as this web site. Which is encouraging. Let me also hasten to add that I mean no disrespect for the hard work (still less for the generosity) which must have gone into the production of that website: everyone is entitled to their own design decisions. I am pleased to express my thanks to the anonymous people who have put in that effort, and decided to share its fruits even with the godless multitude. As the 1611 translators say in their preface ‘Zeale to promote the common good, whether it be by devising any thing our selves, or revising that which hath bene laboured by others, deserveth certainly much respect and esteeme’.

Encoding the history of the OuLiPo

At the beginning of February, I had the pleasure of co-organising (with Sebastian Rahtz, Camille Bloomfield, and Hélène Campaignolle-Catel)  a workshop on data capture, as the second event in the Algoritm Seminar Series which forms part of an interesting ANR funded project called DifdePo. The project is a collaboration between the BnF and Ecritures de modernité, a research unit located at Paris III, and its objectives include creation of a TEI-based digital archive of the archives of the OuLiPo, which are currently stashed away in boxes at the Bibliothèque Nationale’s Arsenal depository. The papers include letters, photos, press cuttings, postcards, drafts, and notes of all sorts, but for the purpose of this exercise we decided to focus on the records of the OuLiPo’s regular meetings, which began back in the early 1960s. The Archive has already been catalogued, and work is in hand to produce digital images of a sizeable proportion of it. The object of our workshop was to explore ways of transcribing these documents, given that the project has very little funding, and will therefore have to rely on the good will of volunteer transcribers, enthused by things OuLiPien but maybe a little deficient in TEI knowledge.

About a dozen people participated, most of them surviving to the end of the day. We began by asking them to transcribe a page from a small collection of pre-selected digital page images, using Word. (I freely admit to a degree of smugness on discovering at the last minute that the teaching room was initially equipped only with old-style doc-producing Word, which had to be enhanced to a more modern docx-producing version at rather short notice by the unflappable Joël) This exercise demonstrated, as we had hoped, quite a bit of variation about what exactly should be transferred from the image to the text, and on what editorial principles, thus motivating a useful initial discussion about the principles and praxis of text encoding. One of the participants proposed (unprompted) the principle of “fidelity” to the source, while another argued repeatedly for “capturing the meaning”.

Once lulled into a false sense of security by this exercise, participants were exposed to the weirdness of an XML-editing environment using everyone’s favourite XML editor, oXygen, and my usual tutorial — create a document, learn how to tag parts of it, learn how to manipulate the structure, etc. We then offered them a more demanding workflow, involving first capturing a document in Word using a Word template which defined styles to highlight a number of significant features (headings, list items, etc., but also personal names and the like), secondly converting this to a TEI form, using OxGarage and a specialised profile, thirdly looking at (and possibly modifying) that in oXygen, and then converting it back to Word to confirm the feasibility of round-tripping. Sebastian Rahtz of Oxford (whom God preserve) invested quite a bit of pre-workshop effort into setting up the necessary infrastructure for this, and making sure that it all worked correctly on the day. He also made it possible for us to inflict on the encoders a third alternative approach, based on an experimental installation of Ben Brumfield’s “From The Page” crowd-sourcing prototype software. I had expected this to be everyone’s favourite, but (maybe because we had already by then sensitized them to the delights of structural markup) our encoders seemed to find that the simplicity of its interface made it hard to take seriously. We had prepared tutorial scripts for each of the three approaches (TEI source code available from my tei-fr repository, if you’re interested) so I was able to spend some of the time wandering about taking photos of hard-working encoders.
By the end of the day, everyone had tried all three approaches, and everyone had produced a couple of TEI XML files conforming to a simple transcription schema I had prepared earlier. We collected them all up and Sebastian showed how our pretend archive could be displayed on a web page, complete with corresponding page images, and vocabulary lists, and personography. This was (of course) all done with a straightforward customization of the standard TEI-HTML stylesheets, now available in the Stylesheet package as part of the Difdepo profile.
Conclusions? We still don’t really know whether our TEI-XML transcriptions are aiming for “fidelity” or “meaning”, but we have at least demonstrated the possibility of either (or both) . And we do know that the participants all seemed to be more enthusiastic about using the customized-word-template approach than either raw Oxygen or (possibly over-cooked) From The Page. We didn’t explore the idea of a pre-customised oXygen author-mode interface, which might well repay the necessary investment of effort, if there is a lot of metadata to be entered, for example.

 

Joel and I sample the oXygen

I take the liberty of listing the names of the registered participants, for their greater glory:

  • Camille Bloomfield
  • Hélène Campagnolle-Catel
  • Paula Klein (Projet DifdePo)
  • Chris Clarke (Projet DifdePo)
  • Jeanne Devautour
  • Julie Bernard (Poitiers)
  • Marie Bonnot
  • Marianne di Benedetto (ENS Lyon)
  • Guillermo Hector
  • Pradeep Claassen
  • Louise Kari-Merau
  • Leïla Berlot
  • Barbara Servant (Univ Rennes II)
  • Clara de Reigniac
  • Gabrielle Bruzzone (Poitiers)
  • Claire Leroy

All affiliated with Paris III, unless otherwise indicated.

Lodelisation

Lodel (Logiciel d’édition électronique) is the name of the CMS which drives Open Editions, one of Europe’s leading open access publishers. Back in 2009, Marin Dacos announced at the TEI council meeting in Lyon that Lodel would start using a TEI schema for its internal processing, while continuing to accept manuscripts for publication in any of the commonly used office document formats. Documents would be worked on in ODT, and automatically converted to a simple TEI schema for internal processing, from which they would be converted for publication on the web and on paper.

Documentation subsequently appeared on how to prepare documents in TEI for processing by Lodel (in French at http://lodel.org/701 and also in English). An XSD schema for it is documented at http://lodel.org/715.

This blog entry summarizes what I needed to do to a real TEI document (specifically my forthcoming title What is the TEI?) to get it to work with Lodel: the full story is implicit in an XSLT stylesheet I wrote for the purpose. Actually, when I say ‘I’, I should make clear that the conversion was in fact handled by the nice people at Open Editions, who were remarkably patient with my eccentric use of TEI, and my even more eccentric wish to generate a Lodel document directly with as little manual intervention as possible. My thanks to Jean-François Rivière and Martin Dulong from Open Editions for their helpfulness, both in steering my TEI manuscript through the process, and in responding politely to my inane questions about what on earth was wrong with my lovely tagging.

The following list shows (in no particular order) the chief changes I found necessary and in some cases a bit unexpected.

  1. As might be expected, the Lodel schema doesn’t have any of the following semantic elements, which I have found useful when marking up technical documents: <gi>, <att>, <ident>, <val>.  More surprisingly perhaps, it doesn’t seem to have <foreign>, <emph>, <soCalled>, <mentioned>, <q> or <quote>either. My stylesheet turned all of them into <hi rendition=”#gi”>, and also generated a <rendition> element with an appropriate default style for them.
  2. The Lodel schema doesn’t allow lists or quotes to be contained by paragraphs. This is a generic HTML limitation, if I understand aright, but that doesn’t make it any less annoying. Call me verbose if you will, but I often write a single para with a bit of prose, followed by a list, a bit more prose, and another list. My stylesheet had to do some clever fiddling to deal with this (tx SPQR) but this is one case where I think Lodel should be a bit more broad minded.
  3. In fact, the Lodel schema only knows about two kinds of list: type=ordered which are numbered, and type=unordered, which are not. Gloss lists are not supported, so my stylesheet had to tweak <label> elements into a <hi rend=”#label”> child at the start of an unordered list item (but with @rendition of “gloss”).
  4. My TEI documents can have lots of XML examples, which are easier to read if they are wrapped in a differently-namespaced <egXML> container. The Lodel schema requires use of the <code> element instead, containing either a CDATA marked section to preserve the layout, or XML tagging escaped by entity references (see the sketch after this list). The only problem with this is that <code> is a phrase-level element, not a block, which means that some hand tweaking is needed at the Lodel end.
  5. Lodel is intended for journal articles and manages each of them separately as a distinct TEI document. Chapters of a book have to be treated in the same way, which seems a bit odd — for example, each chapter gets its own TEI header. My stylesheet splits things up rather crudely, assuming that each top level <div> within the body is intended to be a separate document.
  6. Lodel insists on having an explicit indication of the nesting level of each subdivision, using (bizarrely) the @subtype attribute on <div> with values level1 level2 etc. My stylesheet grits its teeth and generates these automatically, but I think this is one design aspect of Lodel which might merit a second thought.
  7. The Lodel schema doesn’t allow headings within anything except sections, so you cannot provide them for lists, tables, or figures without some fiddling about.
  8. Lodel doesn’t number headings for you. Even if you supply a number for a section (using the @n attribute on a <div>, as recommended in the Guidelines), Lodel will not use it. My stylesheet does nothing about this: I just decided to live without numbered sections.
  9. Lodel handles cross references using <ref> much as you’d expect, provided that the value of @target is a complete URL, i.e. a link outside the current document. This means you cannot cross-reference other sections of the document being encoded, which seems rather an odd restriction. Put together with the foregoing lack of automatic section numbering, this can make for quite a lot of rewriting.
  10. Lodel knows about <bibl>, but not <biblStruct> or <biblFull>. Up to a point. Most of the semantic elements defined for the content of bibliographic elements (<publisher>, <biblScope>, etc.) are allowed, but it doesn’t actually do anything with them. To produce a correctly formatted bibliography, such encodings have to be converted to a fully styled version, following the requirements of the Open Edition style guide. I wrote a stylesheet to do (most of) this for one small bibliography: in the general case something much more complicated would be necessary.
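To illustrate the point in item 4 above, here is roughly the shape an XML example ends up taking on the Lodel side (my own minimal reconstruction from the description, not an extract from the converted book):

<p>The element is used like this:
  <code><![CDATA[
    <p rend="noindent">A paragraph, shown as an example.</p>
  ]]></code>
</p>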

That last caveat is of course true of all the rest: I’ve only tested this process properly on one text, albeit a reasonably large one, and only on a born-digital document. If you’re thinking of authoring documents in TEI though, chances are you won’t do it significantly differently from me, so some of the issues I encountered will affect you too. And, for the avoidance of doubt, let me repeat that none of this is meant to discourage anyone from using Lodel!

Interoperability of TEI projects : apotheosis or chimera?

This was the title (sounds better in French) of the closing talk I gave at an interesting workshop last week. A previous COST-funded meeting in Krakow had brought together Czech, French, Catalan, German, and Polish teams working on several different dictionaries of medieval Latin to elaborate the idea that maybe they could make their various lexica interoperable if only they could agree on a common format, for which the TEI seemed the most plausible candidate. Susanna Allés, the energetic organizer of this workshop, got funding for it from several sources, notably the ALLC (or European Association for Digital Humanities as it now prefers to be known). She also seems to have hit on the wheeze of inviting a number of luminaries to make the case for the TEI dictionary tagset (notably L. Romary, F. Glorieux, P. Banski); alas, in the event only I turned out to be available. Which was useful for me, since preparing for the workshop meant rediscovering all sorts of dusty and neglected (by me, though not by others) parts of the Guidelines.
The workshop was held in the CSIC (Spanish for CNRS) Institució Milà i Fontanals, which occupies a rather grand building conveniently located in the Raval, a picturesque if slightly seedy district of Barcelona to the left of the Ramblas, and we were all accommodated next door in the splendidly-named Investigators’ Residence on Calle Hospital. Barcelona is not a place for those uninterested in food and drink; and we were very well fed, in large quantities, if at strange hours and (on one occasion) after a lengthy walk up town through an unexpected tropical-style deluge. The ravioli stuffed with pears and cheese offered by the resto “En Ville” for lunch was particularly memorable, invidious though it is to single out this one occasion.

More intellectual fare was also on offer, of course. It is always a pleasure to arrive amongst a group of specialists personally unknown to me, and from a domain of which I am more or less totally ignorant, and to find that word of the TEI has already reached them, often in a far from superficial way. So I was made very happy indeed to hear Sabine Thuillier (currently working in Madrid on the Diccionario Griego-Español but ‘formed’, as they say, at the Ecole Nationale des Chartes) evangelise for the TEI as an international open source community, and impressed by the way she is implementing it in a workflow which, though its editors remain obstinately based on WordPerfect, remains determined to envisage production of a respectable TEI P5 version.
Similarly, the team responsible for the eLexicon Mediae Latinitatis Polonorum, led by Krzysztof Nowak from Krakow, while maintaining a proper scepticism about some aspects of the TEI’s conceptual model, was clearly persuaded of its virtues as an open standard, notably as evidenced by both the amount of open source software (they mentioned XTF, PhiloLogic, and TXM) and the number of comparable projects (they mentioned the Anglo-Norman Dictionary, the Glossarium DuCange, and several others) using TEI. Their workflow starts with an OCR phase, since they are starting from an extensive library of source texts, and then uses LibreOffice and a customised library of styles to enhance the output to the point where it can be automatically converted to TEI, thus (apparently independently) following the same path as is used by Lodel, OxGarage, Agora, and no doubt others, to combine the user-friendliness of a word-processing interface with the rigour of a TEI-structured maintenance format.

Catalonia has ambitions (as posters everywhere proclaim) to become politically independent of Spain, and certainly its linguistic independence is a well established fact. As a confirmed non-speaker of either Spanish or Catalan (nor of Basque, Galician, or Portuguese for that matter) I regretfully let the interventions in those languages wash over me, and thus missed out, notably, on Jose Manuel de Bustamente’s insights on the relation between textual corpus and dictionary. I did however manage to understand the German colleagues present, since they made the effort to speak in English or French: for example, Alexandra Gorbrecht from the Trier Centre for Digital Humanities gave a brief overview of the dozen or so dictionaries put together online at woerterbuchnetz.de with a well designed query interface. Allegedly all of these dictionaries are locally stored in TEI XML, but as this is not currently exposed one cannot tell how consistently it has been done. None of the other major TEI dictionary projects in Germany I am aware of was represented here, presumably because none of them is specifically concerned with Mediaeval Latin. I had to console myself for the absence of Werner Wegstein from Würzburg by stealing one of his examples for my own talk.
Bruno Bon and Renaud Alexandre from the IRHT in Paris had the advantage, if advantage it be, of being able to develop their proposals for an over-arching Novum Glossarium Mediae Latinitatis on the basis of the already existing complete Glossarium of Du Cange, which has been freely available in TEI-inspired XML markup for some time now, thanks to the work of Frédéric Glorieux. The idea seems to be to develop a set of proposals able to express the (not inconsiderable) variation in practice amongst these and others working on different lexica of medieval Latin in Europe, and thus create what (inevitably) Bruno suggested would be called NGML (the Novum Glossarium Markup Language). As a first step they have set up an exploratory multilingual wiki with some nice visualisation tools, based on a few sample entries taken from each of five different lexical projects (specifically, those in Barcelona, Prague, Krakow, Munich and Paris), and are inviting more. This could be fun, though I think expressing NGML as a real TEI ODD would be more of a challenge.

Susanna Allés and her graduate student Frédérique Laugrost (on secondment from the Ecole Nationale des Chartes) talked about the specific problems they faced when starting to apply the TEI to the text of their dictionary: the Glossarium Mediae Latinitatis Cataloniae. Many of these are familiar, of course: notably those which derive directly from the wish to preserve the punctuation and use of abbreviation which characterize such sources and at the same time model the logical structure which they determine. Some of these problems do however point to aspects of the current TEI dictionary model which could be improved.

I started making a short list of such points during the workshop, but sadly did not get very far:

  • too many of the proposed TEI dictionary elements relate only to modern lexicographic practice. Deciding which ones to filter out to make a kind of TEI Lite for dictionaries would be very desirable.
  • an element for “translated segment” is desired, even if it is just syntactic sugar for (say) a <seg> with a value for xml:lang other than that of the surrounding text (see the sketch after this list)
  • some dictionaries have entries which are large enough to have multiple paragraphs but there is no place for <p> in any model.entryLike element
  • when a term is identical in two or more languages, can xml:lang take more than one value (I confidently said it could, but I think I am wrong)
  • how should you mark a word which is clearly readable in the text when its meaning is entirely uncertain? (I suggested <orig>, but there must be better ideas)
  • The typology currently used for <form> combines categories from entirely dissimilar taxonomies, e.g. @type=lemma is an entirely different kind of thing from @type=compound. Likewise, the typology one might want to use for <sense> should have more to do with the way the sense has evolved. To both these points I said (in my best French) “Bof”. Or, more precisely: it’s only by receiving proper input from specialists in the field — those best able to define more appropriate typologies — that the TEI progresses…
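
To make at least one of these points concrete (the “translated segment” bullet above), here is a hand-made sketch of a dictionary entry, not taken from any of the projects mentioned: the Guidelines’ existing <cit type="translation"> mechanism does roughly the job asked for, at the cost of a little verbosity.

<entry xml:lang="la">
 <form type="lemma"><orth>exemplum</orth></form>
 <sense n="1">
  <def xml:lang="la">specimen, documentum</def>
  <!-- no dedicated "translated segment" element exists, so a <cit> (or a
       plain <seg>) carrying a different xml:lang has to do duty for it -->
  <cit type="translation" xml:lang="en"><quote>example, sample</quote></cit>
  <cit type="translation" xml:lang="fr"><quote>exemple</quote></cit>
 </sense>
</entry>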

I’m hoping that the undoubted success of this workshop will encourage the participants to form a SIG on the subject or (as Piotr Banski had previously suggested to several of them) to make an active contribution to the existing LingSig. Plenty of scope for very interesting work to come, not to mention the opportunity of returning to Barcelona for the paella which I somehow failed to find time for on this occasion.

TEI++ : une formation avancée

This summer I was invited by the CAHIER consortium, in partnership with the Corpus Écrits and IRCOM consortia, to organise a four-day workshop billed as “advanced TEI”. I proposed a format divided into two parts (presentations and hands-on practicals), organised along three axes:

  1. modelling the resources and selecting their significant features
  2. encoding and making explicit the modelled structures in TEI
  3. exploiting and analysing the structured resources

I had also proposed sharing the teaching with a few French experts. The training took place at the Institut de Linguistique Française in Paris, from 19 to 22 November 2012.

Here, in summary (in English, sorry), is what actually happened…

Day 1

Proceedings began on the fifth floor of the ILF, in a nice light room not quite big enough for the 18 or so participants. This was not altogether bad, since the consequent huddling together encouraged the emergence of a cohesive group identity, which I also tried to encourage by getting the participants to place themselves on an improvised graph along two intersecting axes: literature vs linguistics, and researchers vs support staff. Most people turned out to be in the bottom right quadrant, i.e. support staff + linguistics, but there was also a smattering in the literary + researcher box, to say nothing of two sociologists who insisted on positioning themselves in the middle of the literature vs linguistics axis. The rest of this first introductory session was given over to a rapid review of some fundamentals of encoding, and a sampling of the websites of half a dozen real TEI projects around the world, which might have gone better had I rehearsed it more, but which got the message across that quite a few very different projects are doing seriously cool stuff with the TEI.

After coffee, I introduced them very quickly to a spot of data analysis, using as a vehicle the celebrated postcard archive of M. Marcel Virgolos, and Lauranne then took over for a refresher on using Oxygen. They marked up a postcard or two, and reviewed commonly used TEI tags for each of some pre-selected texts: a French novel, a poem, and a play. Most students completed all of these, mastering most of the key features of the Oxygen XML Editor quite rapidly, but I think we did not allow enough time for this session, given the mix of abilities present.

Lunch, in the form of a large cardboard box containing a “plateau” of cold cuts, salads, bread roll, plastic cutlery etc., duly appeared and was despatched. Thus strengthened, I embarked on an all-singing all-dancing overview of all the TEI modules, and what you can do with some of them. That took about an hour, but helped motivate Lauranne’s traditional exercise, which followed it, on using Roma to make a schema by reduction from TEI-ALL. By the end of the day, everyone seemed quite comfortable with the idea of schema customisation, and reasonably convinced that they might find what they wanted to mark up somewhere, somehow in the TEI.
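
For anyone who has not seen one, the ODD that Roma builds by reduction is nothing arcane. Something along the following lines (module and element choices invented here purely for illustration) selects a handful of modules and deletes a few unwanted elements:

<schemaSpec ident="atelier-lite" start="TEI">
 <!-- just the modules needed for postcards, a novel, a poem, and a play -->
 <moduleRef key="tei"/>
 <moduleRef key="header"/>
 <moduleRef key="core"/>
 <moduleRef key="textstructure"/>
 <moduleRef key="verse"/>
 <moduleRef key="drama"/>
 <!-- and away with a couple of elements nobody asked for -->
 <elementSpec ident="divGen" mode="delete"/>
 <elementSpec ident="postscript" mode="delete"/>
</schemaSpec>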

Day 2

On this and subsequent days we were moved to a much better (because bigger) teaching room. I began the day in seriously magisterial mode by explaining (many of) the components of the ODD language, and why you might care to know about it. This was quite punishing, both for me and for some of the less technically-minded participants, but no-one visibly fell asleep. For the subsequent practical, Lauranne had prepared a script in which participants submitted their own texts to analysis by OddByExample, generating a personalised ODD. The majority of course had not come with their own texts, or had texts not in TEI P5, so they ran the exercise on a rather inadequate sample of the Virgolos corpus instead. With a bit more preparation, I think this could be a really fun exercise and an excellent way of getting people to learn ODD properly. It also revealed a bug in OddByExample, which Sebastian graciously fixed overnight.
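
I did not keep the ODDs generated on the day, but the principle is easy enough to show: a corpus-driven customisation simply restricts each module to the elements actually observed in the texts, roughly as below. The element lists here are invented for illustration, and the real OddByExample output is rather more verbose.

<schemaSpec ident="monCorpus" start="TEI">
 <!-- keep from each module only the elements found in the corpus -->
 <moduleRef key="tei"/>
 <moduleRef key="header" include="teiHeader fileDesc titleStmt publicationStmt sourceDesc"/>
 <moduleRef key="core" include="p hi note title author date list item"/>
 <moduleRef key="textstructure" include="TEI text front body back div"/>
</schemaSpec>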

Lunch being cardboard plateaux once more, I went for a stroll round the nearby Parc de Montsouris to see some of the grey autumnal Paris daylight while it was available. I then droned on for well over an hour on the subject of the TEI Header, which I am now selling as “metadata for the rest of us”. They made me do it. The exercise we adopted was the French version of creating a manuscript description for the W. Owen manuscript, last seen in Berne. This is quite useful for the purpose, especially in combination with the following exercise on marking up a transcription of that manuscript; time permitting, however, I would have preferred to use a different, French, manuscript for both. If I had one.
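
For the record, the bare bones of such an exercise look something like the following; every detail (repository, shelfmark, title, extent) is invented for illustration rather than copied from the Owen materials.

<msDesc>
 <msIdentifier>
  <settlement>Oxford</settlement>
  <repository>A library somewhere</repository>
  <idno>MS 1234</idno>
 </msIdentifier>
 <msContents>
  <msItem>
   <author>Wilfred Owen</author>
   <title>Draft of a poem</title>
  </msItem>
 </msContents>
 <physDesc>
  <objectDesc form="leaf">
   <supportDesc material="paper">
    <extent>1 leaf</extent>
   </supportDesc>
  </objectDesc>
 </physDesc>
</msDesc>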

Day 3

Wednesday I had carefully billed as the “journée des guest stars”, since the idea was to make other celebrated French TEI enthusiasts share in the work. So we began with a presentation by Alexandre Gefen about TEI recommendations for dealing with named entities and their names. Since the room contained more than a few French linguists, this immediately gave rise to a heated debate on the philosophical basis and nature of nominal reference. The exercise in marking up names was a little under-prepared, and one of the students insisted on asking the Emperor’s New Clothes question (why bother?), which was answered by another participant citing (at length, and with enthusiasm) the work of Nicole Dufournaud inter alia, which was nice. Since Alexandre had to leave early, I filled the gap by moving my brief overview of tool options forward, giving a plug for Sebastian’s stylesheets, and letting them experiment with OxGarage, which they loved.
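
The mechanics, at least, are less contentious than the philosophy: a name gets tagged and pointed at something which stands in for its referent. A minimal (and entirely invented) illustration:

<p>En 1862, <persName ref="#VH">Victor Hugo</persName> publie
<title>Les Misérables</title> à <placeName ref="#BRX">Bruxelles</placeName>.</p>

<listPerson>
 <person xml:id="VH">
  <persName>Victor Hugo</persName>
  <birth when="1802-02-26">Besançon</birth>
 </person>
</listPerson>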

For lunch we went to the brasserie down the road, which was a much, much better idea than the plateaux. Everyone got very jolly and there was a fair amount of shouting. Our second invited expert of the day was Bertrand Gaiffe, from ATILF, who delivered an excellent pair of lectures about the encoding of oral and linguistic data respectively, also involving a fair amount of interaction and discussion, but not much actual tagging, since TEI interfaces for the appropriate tools remain elusive, best efforts of the LingSig notwithstanding.

Day 4

The final day began with a presentation on the various TEI orthodoxies concerned with the editing of primary resources, given by our third invited expert: Alexei Lavrentev from ICAR. Participants were then offered the choice of doing either the reverse transcription exercise or the visual encoding exercise from Berne; both options were taken up, though I was too busy sorting out the website to see how far they got with either.

After another nice brasserie lunch (roast duck), I spent about 15 minutes showing how to use TEI Boilerplate, which went down remarkably well: “Génial”, they cried, as they saw all that tricky encoding in John’s demo file being rendered beautifully by Safari (France is still largely a land of the Mac). The rest of the afternoon was devoted to a more ambitious TEI-savvy piece of software: txm, from the textometrie project at Lyon. Alexei showed us what it was, and demonstrated how to make it sit up and do tricks with the Graal and Brown corpora, which participants had pre-installed. He also showed it working with a selection of literary texts prepared for use throughout the workshop.

Verdict

I think this workshop worked much better than it deserved to (I always think that). All the participants seemed very happy at the end, and several of them said they had learned more than they expected to. I think the organisation of the programme made good sense, and the balance of exposition and exercise was approximately right, though we probably didn’t do enough to make the practicals consistent and relevant. A few parts suffered from lack of preparation, and I think we could have done more to get a single case study working throughout the course of the four days, in addition to the various more specialised materials we introduced. But next time we’ll definitely get everything right. Thanks are due to the participants, the organisers, and all my fellow trainers, especially Lauranne for calming me down at moments of high anxiety.
All the materials used in the workshop are available in PDF starting from http://meet.tge-adonis.fr/sites/default/files/2012-11-initial.pdf. Dedicated TEI hackers may also be interested in the XML sources of the presentations which are available from my svn repository at http://code.google.com/p/tei-fr/source/browse/#svn%2Ftrunk%2FTalks%2F2012-11-paris

Here we go again

It’s ridiculously early for a Sunday morning, but the only plausible train to catch from Oxford if you want to connect with a 1220 Eurostar leaves at 0940. So here I am wondering, along with many others, where on earth is said train. We can see it in the sidings North of Oxford station, but it’s not moving towards us and the announcements are not reassuring. Maybe the stopping service to Ealing Broadway is a better bet: certainly standing around fretting on Oxford station is not pleasant. Some twenty bucolic minutes later, I detrain at Didcot in the hope of something better: which does indeed turn up in the shape of the 0940, now proudly running only 20 minutes late. I spend my uneventful trundle through the morning sunshine trying to work out what I have done to incapacitate tei-emacs on my laptop. Then an unspeakably horrible Circle line train bears me off to St Pancras, and the comparatively civilised space of the Eurostar lounge, where I discover that in the general confusion of getting myself ready for this week’s set of French gigs I have failed to check something crucial into my nice new subversion repository. Ah well: no time to agonise over that, it’s time to get my disordered thoughts on the history of the TEI into some sort of plausible order, and to construct an appropriate French narrative around same. Which keeps me happily occupied for the rest of the day: out of London, across the wilds of Kent, under the Channel, through Picardy into Paris, my nose barely strays a few inches from my laptop screen, tappety tappety tap, except for a few minutes’ dégustation (I use the term advisedly) of a Eurostar snack lunch, and a few dirty looks in the general direction of some fellow passengers yapping away noisily behind me. Even nastier, but mercifully not noticeably longer than the Circle line, is the hop by RER B from Gare du Nord to Gare de Lyon, where I resume work on board a nice peaceful TGV all the way to Lyon. With such good effect that my talk for tomorrow is all ready to go, even before I arrive at Perrache. Such virtue warrants dinner, even though it’s now a little late, so I stride purposefully across the Place Carnot to the brasserie Victor Hugo, order a hamburger à cheval (nothing to do with horses, this is a burger with a fried egg on it), frites, et un pot de côtes, and phone Marjorie to reassure her that I am here and ready to boogie, before retiring to bed.

One hasty breakfast later, Dominique Roux and I set off in search of one of the many fine universities in which Lyon rejoices, more exactly the vaulted basement dungeon in which Marjorie’s séminaire is taking place. The morning was supposed to be a double act, but since Paul Spence couldn’t make it, his colleague Guilhem Pépin instead gave us an interesting lecture about medieval history before showing us some of the Gascon Rolls project. Pépin is a French (or more properly, Gascon) historian actually working at Oxford in the History faculty. There was a time when I might have huffed and puffed a bit about Oxford academics who take their TEI digital projects off to King’s College instead of using the local facilities, but these days I have become placid and boring. Anyway, Pépin was a good speaker and clearly an agreeable person to work with; and the material presented all sorts of interesting possibilities for analysis once marked up, even if he was almost aggressively reluctant to claim any expertise in the application of markup. Not for the first time, I wonder why it is perfectly acceptable for academics to profess ignorance of one technology that is essential to their work, whereas ignorance of others (say, bibliography) would seriously damage their career prospects. And then off we all went for a decent lunch, this being France: dos de colin avec ses pommes de terre lyonnaises, if I remember correctly. After which I gave my talk, which seemed (to me at least) to go remarkably well for a first outing: I suspect I will give it again, at least as long as people go on asking me to explain where on earth the TEI came from, and why it has not sunk without trace. It is a good story, with a good moral, I think. After a coffee break, Dominique Roux from the Presses Universitaires de Caen gave a thorough overview of their projects and preoccupations, presenting a variety of cool projects, a TEI-based workflow, some wise remarks about the use of TEI in commercial publishing, and much else besides. It’s a pity he came at the end of a long day with perhaps a touch too much Gasconnade in it, since it would have been good to discuss several of the ideas he presented with the master’s students present — who had all been assiduously taking notes earlier in the day, but were clearly flagging somewhat by the end. I was sorry to have to rush off in time to catch the train to my next gig, in Tours.

Preparation for said next gig took up quite a bit of the journey, quelle surprise; indeed I don’t think I looked out of the window once. And yes, it is possible to get from Lyon to Tours without passing through Paris, if only once or twice a day. The TGV concerned stops at a place I have never heard of called Massy, and then at St Pierre des Corps, before zooming on to Caen. St Pierre des Corps is a dismal little junction from which a variety of trains shuttle into the architectural splendour of Tours central, about 5 minutes away. Even when entirely enclosed in scaffolding as part of its restoration as a patrimonial monument, Tours station is an uplifting spectacle late at night, when everything around it is closed except for McDonald’s. Equally good for the soul is the Grand Hotel of Tours, which has retained and lovingly refurbished its charming 1930s decor, all peacock feathers and wooden panelling and geometric patterns. Last time I was here in December, the wifi was misbehaving, but everything seems to be fine now, and the breakfast is excellent. Next morning, it’s a quick trot across town to the Centre d’Études Supérieures de la Renaissance to give my contribution to their Master 2 professionnalisant Patrimoine écrit et édition numérique : initiation à l’encodage des textes patrimoniaux. This is the third or fourth year I have done this, so you would think I had it sorted by now. My contribution this year consisted of a ninety-minute lecture on manuscript encoding (much revised to recognise the existence of the new <sourceDoc> element, as of release 2.0 of TEI P5 — this was the talk I thought I had mislaid, but hadn’t); followed by another 90 minutes on Roma and schemas and suchlike mysteries, using the Virgolos project as a case study (called TEI à la cartes, geddit?); and finally another 90 minutes attempting to explain XSLT pour les nuls. This last was a rather more quixotic and under-prepared venture: although novices quite quickly grasp the basic ideas and usefulness of XPath, grasping exactly what an XSL template is and why you might want one is rather more of a challenge. But the punters seemed content to be slightly baffled at the end of a long and varied day, and I am sure that the local team will clarify any residual bewilderment next week. Dinner was at the Odeon, another piece of lovingly restored 1930s kitsch, where the food was excellent (I had the rognons since you ask), and Marie-Luce and I discussed the notion of a week-long residential formation approfondie sur la TEI under the auspices of the CAHIER consortium, plus anyone else who might like to play.
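
For what it’s worth, the conceptual leap that last session was trying to sell is smaller than it sounds once seen on the page: an XSLT template is just a rule saying “whenever you meet a node matching this pattern, output this”. A minimal example, invented here rather than taken from the course materials:

<xsl:stylesheet version="2.0"
  xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
  xmlns:tei="http://www.tei-c.org/ns/1.0">
 <!-- one rule: every TEI <head> becomes an HTML level-2 heading -->
 <xsl:template match="tei:head">
  <h2><xsl:apply-templates/></h2>
 </xsl:template>
 <!-- everything else is left to the built-in default rules -->
</xsl:stylesheet>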

Tours is in the process of acquiring a tramway, which means that large amounts of it are being dug up and knocked down, notably near the railway station: I observed this with interest over breakfast, before hastening off rather late for a short consultation with the CESR team about how to autogenerate an ODD from their Epistemon corpus (sort of difficult if you don’t have Saxon installed), and some discussion about how best to proceed with their ongoing project of revising the project’s encoding manual. The plan is not only to update but also to generalise this manual for use by other similar projects, which would certainly be useful: there isn’t a lot like it in French, aside from the BFM manual. However, I have a train to catch this morning, so I have to sprint back through the marché des fleurs, looking neither to the right nor the left, regretfully, for there is much to see, and resisting the temptation to stop to buy fresh garlic or dried flowers or a sandwich for the journey, or even take some photos of the pavements now decorated with a rich and colourful assortment of flowering bedding-out plants. Tours is a charming place with much to recommend it. And so, off to Paris, where I have a couple of crucial meetings to attend, crucial enough to propel me into an irrational anxiety about the progress of my train, which suddenly decides to slow down and stop in the middle of nowhere more frequently than is decent, even for an Intercité. In the event, though, we pull into Austerlitz ten minutes early, allowing me to take a pleasantly-paced walk through the Jardin des Plantes and up the hill to the TGE Adonis office in good time for my appointment with my directeur, Jean-Luc Pinol. We discuss the coming year’s work plan for MEET; this being satisfactorily resolved, Ariane agrees to release a PUMA forthwith (don’t ask)… I spend the afternoon catching up on the gossip with TGE colleagues before checking into this week’s hotel, which is conveniently located opposite a nice bar and round the corner from a rather excellent brasserie. Here I dine, expensively but deliciously, on foie de veau, patates, et encore un pot de rhône. It’s tiring work, all this gluttony, you know.

Next morning, I rise at a civilised hour, and catch up on my commitments at the TGE for most of the day, taking however an extensive lunch break to discuss with Mathieu Andro from the Bibliothèque Ste Geneviève a wondrous new digital library project which has apparently secured 1.7 million euros of local funding to finance a deposit archive for the digitized outputs of a select bunch of Parisian libraries, and wants to use the TEI. Did I hear that right? The lunch was pretty good too. Finally, I put in place some hasty arrangements for another meeting in Paris next week, and then trek on foot across town to Châtelet (where there seems to be, as usual, a manif going on) to catch the metro to Gare St Lazare (which, post-renovation, seems to be mysteriously disguising itself as the Gare de l’Est), to take the train to Caen for the last gig of this tour, namely Matthew Driscoll’s ongoing TEI seminar at the MSH. The Hotel Quatrans is much as I last saw it, and so, I am pleased to report, is the little restaurant called “Les saveurs de la Réunion” just round the corner from it, where Matthew, Eric, and I enjoy some rum, some gâteaux piments assortis, two bottles of muscadet, and a tasty cari cabri before retiring for the evening.

Friday is seminar day. Serge Heiden (ARE YOU READING THIS, SERGE?) from the ENS Lyon opens proceedings with an update and an impressive demonstration of the textometrie project, which goes from strength to strength. They have an Equipex in which they will be working with hundreds of historians, and a number of other collaborations in prospect, some ANR-, some DFG-funded. The software is, of course, still available from SourceForge, and they are also in the process of setting up a portal for general access to some demonstration applications of it. Serge discussed the way the software uses TEI and other forms of markup; they have now fixed on a TEI-conformant pivot format, for which an ODD is in preparation. He also demonstrated many XAIRA-like features of the software and reported some work done by Alexei Lavrentev in importing and analysing the markup of a large corpus of texts from Frantext. He was followed by Antoine Widlocher, who described the search engine under development at Caen’s GREYC research group, initially for use in the Descartes project. Its data model uses graphs rather than trees, and much of his talk therefore concerned the difference between the two, although he did also present the user interface envisaged for the system; this is, of course, SPARQL-based, and will access a triple store in which XML and other annotations are all represented in RDF. All very interesting if, perhaps, a little computer-science oriented. Maud Ingarao commented that the project resembled Edouard Portier’s work on multistructured documents; I should have mentioned Desmond Schmidt, but didn’t. After lunch (in the student canteen; n’en parlons plus) Maud gave a brief overview of a newish XML database system called BaseX, and demonstrated some of its jazzier features: she also noted that a test BaseX server has now been implemented as part of the TGE Grille de services. Frédéric Glorieux then gave a nice talk demonstrating how the presence of detailed markup in his version of François Ganaz’s “XMLittré” project facilitated several interesting searches: he proposed that the average size of text fragment within a TEI document might be an interesting stylistic indicator, and remarked on the high frequency of emotive words like “dieu, homme, roi” in the examples cited by Littré. Finally in this session, Marie Bisson demonstrated the current state of the Juxta collation system, running under Windows and working on three manuscripts of Thomas Le Roy. Juxta apparently has its own XML markup but does now also (more or less) grok TEI.

Last but one session of the day concerned “quantitative codicology”, a term, I learned, which is even older than the TEI, having apparently been invented by someone called Ornato in 1980, according to Matthew, though it is a concept which can be seen to underlie Don McKenzie’s 1985 Panizzi lectures on bibliography as “the sociology of texts”, or the so-called New Philology of Stephen Nichols at the start of the nineties. I liked Matthew’s use of the phrase “the artefactual turn” to describe his increasing certainty that the meaning of a text should not be dissociated from its “embodiment”, or from the historical and social forces that documents manifest, and intend to appropriate it for use when presenting the TEI’s recent reinvention of <sourceDoc>. Matthew and colleagues described the Fornaldarsögur norðurlanda project, which aims to provide an account of the production, dissemination, and reception of the “chirographically transmitted texts” of 36 stories from prehistoric times which can be identified in some 1500 texts presented in over 750 distinct Icelandic manuscripts. These are described using (inter alia) a reduced and tightly constrained schema derived from TEI P5, extended to include information derived from the transcriptions of the manuscripts, such as the average written area, the number of abbreviations per line, etc., as well as such features as the presence of decoration, or the types of text included. Sylvia Hufnagel presented some hypotheses about possible connexions between these evidential characteristics and assumptions about the wealth or status of the owner or person believed to have commissioned creation of a manuscript, though there is really insufficient evidence so far to justify any generalisations one might be tempted to make about (say) the emergence of the “prestigious reading manuscript” distinguishing (as it were) “coffee table” manuscripts from “paperbacks”. Eric Haswell described clearly and concisely the technologies used in the project, contrasting the “data centric” and “document centric” notions of relational and XML databases, and also showing how their web-service-based implementation built on eXist made it possible very easily to extract query results as CSV for input into traditional spreadsheets, or as JSON for use by cooler things such as SIMILE widgets. Finally, I gave that talk about linguistic annotation and why people say such terrible things about it. Not sure how appropriate it was to the day, but people seemed to be listening anyway. Final dinner of this week of over-eating was at Le Bouchon du Vaugueux, where I (and others) tucked into a four-course gastronomic menu, including some excellent roast duck, and rather a lot of stewed pears.

And on Saturday, the journey home, which was all very pleasant till I actually got to London: trains cancelled without warning, inadequate fallback facilities, Great British Public mustn’t-grumbling, etc. etc. It took longer to get from London to Oxford (about 100 km) than from Caen to Paris (about 200 km), and involved a train that was so overcrowded it could not leave the station, not to mention a 30-minute wait for a replacement bus in the cold outside Reading station. Never mind, next week I’m going back to France, where the trains (mostly) run on time and the train crews are (usually) helpful and less demoralised when they don’t.

« Exploiter les données structurées en XML »

Here’s a nice way of spending a day in the heart of the Marais. Get together a bunch of people who do actually use the TEI (or some other kind of structured XML markup) to do cool things, and ask them to talk for a maximum of 10 minutes each about the software they use and what they do with it. I claim no credit at all for this idea: the event was masterminded by Anaïs Wion, Fabrice Melka, and Denise Ogilvie, who just coincidentally have to prepare a workshop on the verb “exploiter” in Aussois later this year. Whatever its origins, this turned out to be a really worthwhile day, and not just because of the venue (the alabaster hall of the Archives Nationales) or the lunch (yum, Lebanese buffet).

A proper account of the proceedings has been promised for a couple of weeks hence, so this note is just the consequence of me jotting down some immediate impressions on the train home. There is already a useful page of links to stuff mentioned at the workshop at http://www.delicious.com/workshopexploiter, which I should probably update with this report.

I kicked off by explaining why the TEI really didn’t ought to have much to do with software production, except for its own nefarious purposes. I conceded, however, that those purposes led ineluctably to the production of Sebastian’s Excellent Stylesheets, and hence to a generic software tool of some importance in the community. Marjorie Burghart then talked about the XML database eXist, showing it in action on her sermones.net site, and also her paleographic exercise site; the main problem with it, for her, was that its installation and maintenance on a local server require a little more technical expertise (for example, fine-tuning a Java environment, recovering Tomcat when it falls over, etc.) than is available to the typical humanities department. This need for infrastructural computing support turned out to be a major theme of the day. Next up was Lauranne Bertrand from the CESR team at Tours, who showed how they currently use XTF to display various versions of their richly encoded texts. Maud Ingarao then introduced us to a new XML database from the University of Konstanz called BaseX, which seems worth a second look, if only for its very sparkly visualisation features, though its main claim to fame is probably its ability to handle REALLY BIG (multi-gigabyte) databases, which (if true) should give several current pontificators pause for thought. Jorge Fins, also from CESR, then talked about PhiloLogic, which provides traditional text-searching capabilities (full-text indexing, concordancing, etc.), running on a distinct (and distinctly dumbed-down) copy of the Bibliothèques Virtuelles des Humanistes exported to Chicago.

After a brief pause for coffee, Alexei Lavrentev, standing in for Serge Heiden (reportedly recently immobilised by a close encounter with a crampon), showed us the current state of txm, the open-source text analysis system developed by the textometrie project at Lyon. Séverine Gedzelman, also from Lyon, then described Hypermachiavel, an application for handling multiple aligned corpora (or, to be more exact, one specific set of multiple aligned corpora). I found the difference in software design between these two projects interesting: txm was developed very consciously as a generic text-processing framework, incorporating and rationalising features from many other systems; whereas Hypermachiavel was developed (almost from zero) very much to meet the specific needs of a particular research project, but without any particular generic intention.

Does the world need another generic tool for doing textual annotation in XML? Certainly many linguists and computer scientists seem to think so. Cue Antoine Widlocher from the University of Caen, and Glozz, a new platform for distributed linguistic annotation of text segments (overlapping or otherwise), relationships, graphs, etc. Very nice visualisations, as per other Java applications; nice features such as annotation histories; no evidence that any researchers from the humanities had been involved in its design or application up to now. Florence Clavaud, from the Ecole Nationale des Chartes, then spoke very briefly (no really) about Pleade and her plans to enhance this mainstream EAD-muncher to include TEI capabilities. Pleade is one of the tools of choice in the French archival community, so enhancing it to handle TEI as well as it currently manages EAD and sets of digital images would be very cool. Also from the ENC, Vincent Jolivet and Frédéric Glorieux showed us diple, a nice simple package written in PHP to transform complex TEI markup into static web pages, with a complementary suite of stylesheets to render them; and something called xrem, a very glamorous tool for the visualisation and construction of RELAX NG schemas. Fred likes to work directly in RELAX NG rather than via ODD, but the results almost justify such heresy. Nicole Dufournaud, aided and abetted by Denise Ogilvie, told the (possibly) instructive history of how Millefeuille (a nice customised TEI editing and indexing application based on work Nicole pioneered back in the nineties) is now in a state of suspended animation. Following one unsuccessful attempt at reanimation, it appears that another is proposed as part of a European project. Finally before lunch, Maud Ingarao showed us some CamStudio videos about dinah: this “philological platform for the construction of multi-structured documents” is currently being developed at Lyon in a project studying the manuscripts of Jean-Toussaint Desanti, and seems worth a second look, even though it’s a long way from being stable yet.

After the afore-mentioned very nice lunch, there was a wide-ranging free-form discussion, from which I took away chiefly the following points (as aforesaid, there will be a more complete and correct report later):

  • a general feeling that IT infrastructural support was lacking: in particular, people wanted
    • some kind of sandpit environment in which they could experiment with different tools
    • some easily accessible web-publishing service for e.g. doctoral students to showcase their work
  • a general feeling that the development and implementation of XML-based projects is hard work requiring input from specialists, and consequently that more training is needed
  • a desire to share experience of these and other tools; the existence of TEI-FR, and the TEI Tools SIG were agreed to be appropriate channels.

Some pointed requests were made for the TGE to do more to provide some of these services, which proposal I agreed to go away and investigate.

Tweaking the Agora Stylesheets – 1

The AGORA project (this one, not to be confused with this other one nor even this other one again) has defined a very simple TEI XML schema for scholarly publishing. In this series of blog entries, I report my attempts to process a set of documents which conform to that schema into PDF and other formats, using the TEI stylesheet library. My environment is a laptop running Ubuntu 10.04, on which I have installed the 5.1.4 release of the tei-xsl package and most of the texlive Ubuntu packages (versions dating from July 2009, according to dpkg).

On the train to London this morning, I wrote a Makefile which validates each file and, if valid, then processes it using the teitolatex and xelatex commands. This produced something not entirely discouraging, with the following obvious things to fix:

  • some of my files had numbered headings and others didn’t. By default the stylesheets added numbers willy-nilly. I need to switch this behaviour off.
  • some of my files used <byline> in the header to indicate the affiliation for an author, like this:
    <byline><docAuthor>Fred Flintstone </docAuthor>
        Euphoria State University, Kansas</byline>.

    By default, the stylesheets clearly have no idea what to do with the text fragment following the <docAuthor>, and therefore spit it out on a page of its own.

I learned at the excellent MUTEC workshop last week that the recommended way of modifying these stylesheets is to set up a new “profile”, so I duly visited the directory /usr/share/xml/tei/stylesheet/profiles and created a new folder there called /usr/share/xml/tei/stylesheet/profiles/agora (somewhat to my surprise this did not require root access). I then copied the existing default specifications for each of the target transformations I thought I might use in my Agora work into this folder. Like this:

$cd /usr/share/xml/tei/stylesheet/profiles
$mkdir agora
$cp -r default/latex agora
$cp -r default/docx agora
$cp -r default/oo agora

The directory names (latex, docx, etc.) are not particularly well publicized: I worked out by inspection that “oo” must be the one invoked by the command “teitoodt”… presumably at some point it will be renamed Liboff vel sim.

Anyway, this setup should mean that if I now do e.g.

$teitolatex --profile=agora foo.xml

I should get the same result as I would if I left out the --profile … and so indeed I do. Good. Time to start messing about.

I take a peek into the contents of my agora/latex folder. It contains just one file, called to.xsl — which presumably controls the conversion from tei to latex. One day maybe some clever person will add a file called from.xsl which does the opposite. Or not.

The file is rather dull: all it does is remind me that the file is copyright TEI Consortium 2008, and that the library it invokes is “distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY”. Fair enough. It also loads the stylesheet at ../../../latex2/tei.xsl, but all it does to modify that is set some mysterious parameter called reencode to false. So clearly I am at liberty to add further modifications in this file… or will be once I have changed permissions on the file.
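
Reconstructed from memory rather than copied from the release, the whole of that profile to.xsl amounts to little more than this:

<xsl:stylesheet version="2.0"
  xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
 <!-- pull in the real LaTeX stylesheets... -->
 <xsl:import href="../../../latex2/tei.xsl"/>
 <!-- ...and override the one parameter the default profile cares about -->
 <xsl:param name="reencode">false</xsl:param>
</xsl:stylesheet>
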
../../../latex2 (i.e. /usr/share/xml/tei/stylesheet/latex2, sibling of the profiles directory) is the directory with the real biz. It contains files named for most TEI modules, as well as promising-looking files like tei-param.xsl. A little sniffing around, and I have discovered the XSLT template for processing the TEI <head> element inside the file core.xsl, which contains the following magic:

<xsl:choose>
<xsl:when test="ancestor::tei:floatingText">Star</xsl:when>
<xsl:when test="parent::tei:div/@rend='nonumber'">Star</xsl:when>
<xsl:when test="ancestor::tei:back and $numberBackHeadings='false'">Star</xsl:when>
<xsl:when test="$numberHeadings='false' and      ancestor::tei:body">Star</xsl:when>
<xsl:when test="ancestor::tei:front and $numberFrontHeadings='false'">Star</xsl:when>
</xsl:choose>

That looks to me suspiciously like there should be a parameter called numberHeadings which I should set to false in order to suppress those pesky generated section numbers. (Of course, I’d have found that out immediately if I’d bothered to read the documentation, but …)

Back in my file profiles/agora/latex/to.xsl, I add the following line

<xsl:param name="numberHeadings">false</xsl:param>

and then regenerate the PDF, using the tweaked stylesheet in my agora profile:

teitolatex --profile=agora aaberge_2007.xml
xelatex aaberge_2007.tex

Bingo! no numbering. This could maybe be easier than it looks…

My second problem is trickier. The challenge and the delight of the TEI is precisely its open-endedness, and so it often happens that something which looks plausible in TEI has no obvious translation in some other markup system, such as LaTeX. In my case, how *should* the <byline> element be processed? A grep through the LaTeX directory shows me that at present there is no template at all for it, so my hands are comparatively untied. My first thought is just to add a template like the following to my file:

<xsl:template match="tei:byline/text()">
\author{<xsl:value-of select="."/>}
</xsl:template>

on the assumption that the bit of text inside the <byline> element might as well be treated as part of the author name as anything else. But LaTeX is not so liberal: when it finds that I have generated

\title{The Semantic Web in a philosophical perspective}\author{Terje Aaberge}
\author{,
Sogndal, Norway}

it simply ignores the first \author. This suggests that I cannot solve this without learning more about LaTeX than I really want to.

Maybe I can modify the existing template for <docAuthor> to deal with this special case. In the file header.xsl there is a template like this

<xsl:template match="tei:docAuthor">
<xsl:if test="not(preceding-sibling::tei:docAuthor)">
<xsl:text>\author{</xsl:text>
</xsl:if>
<xsl:apply-templates/>
<xsl:choose>
<xsl:when test="count(following-sibling::tei:docAuthor)=1"> and </xsl:when>
<xsl:when test="following-sibling::tei:docAuthor">, </xsl:when>
</xsl:choose>
<xsl:if test="not(following-sibling::tei:docAuthor)">
<xsl:text>}</xsl:text>
</xsl:if>
</xsl:template>

It’s a horrible kludge, but if I insert the following before the final <xsl:if> element, it should make sure I output any following sibling text fragment before outputting the }

<xsl:if test="parent::tei:byline and (following-sibling::text())">
<xsl:value-of select="following-sibling::text()"/>
</xsl:if>

I therefore copy the whole of the <xsl:template> for docAuthor into my to.xsl file, add the above clause, and blow me down, it (nearly) works. I had, of course, forgotten to suppress a second appearance of those pesky text fragments caused by the default processing for <byline>. One more template:

<xsl:template match="tei:byline/text()"/>

fixes that.

Of course, the more I look at this, the less I like it. A much better solution would be to tag the affiliation data as such in the XML source, using an element such as <affiliation> perhaps, and then process it correctly into whatever LaTeX provides for the treatment of such things. But that would, as aforesaid, require some research into what LaTeX can do, as well as changing the Agora schema.
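
Something like the following, in other words; though, as aforesaid, the Agora schema would need to be persuaded to allow it, and the stylesheets taught what LaTeX to emit for it:

<byline>
 <docAuthor>Fred Flintstone</docAuthor>
 <affiliation>Euphoria State University, Kansas</affiliation>
</byline>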

Not a bad way to pass the train journey to Paris, especially when surrounded by kids returning home after the half term hols.