
EEBO TCP in P5 – the return

It seemed a necessary act of piety to respond positively to a request for help from my former colleagues at the Oxford Text Archive, when they finally got around to considering the conversion of the latest (and one must fear the last) tranche of EEBO texts from the Text Creation Partnership. The conversion into a TEI P5 compatible version of the vast majority of EEBO-TCP phases 1 and 2 texts, and their subsequent upload to a gazillion GitHub repositories, was accomplished by a team headed by Sebastian, back in the days when TEI Simple Print was new and we were all a bit more bright-eyed and bushy-tailed. Now that the OTA has received its last tranche of TCP phase 2 texts, it should not (surely) be too much of a sweat to crank them through the same conversion process and deposit the results in GitHub too. Though of course nothing is ever quite that simple.

The XSLT script which does the heavy lifting is called tcp2tei and (thank you Sebastian) there it is, safe and sound in the TEI Stylesheets repository. And it still works. There is even a shell script from the same masterly hand for creating a new GitHub repo and uploading each file to it; this one only nearly works, GitHub having become a little fussier about authentication mechanisms over the last five years, but that's not hard to fix. So I should just declare victory and move on.

On closer inspection however three issues have surfaced (so far).

Firstly, the catalogue numbers. In the current TCP P5 texts, each TEI header has a string of <idno> elements supplying its identifier in the Michigan DLPS database, the identifiers of one or more MARC records from OCLC or UMI catalogues, its Proquest number, identifiers in one or more standard bibliographies (ESTC, Wing, STC, Evans etc.), and the number of the image set which was scanned. For some reason I do not understand, not all the new texts supplied to the OTA have their full complement of these identifiers. For example, of the 6498 titles supplied, 3062 have OCLC MARC record identifiers (discounting an additional 187 duplicated OCLC records in which the record identifier is redundantly prefixed by “ocn”). None of the 6498 has an image set number. Only 2987 have a Proquest identifier, and it is always the same as the MARC catalogue number. And 963 have no bibliographic identifier of any sort.

No matter: after my skirmishes with EEBO metadata last summer (reported at https://foxglove.hypotheses.org/date/2020/08), I am confident of being able to recover missing catalogue numbers from at least two different sources: one being Paul Shaffner (whom God preserve)’s eebodat1.xml, and the other being Proquest (whom God has recently abandoned)’s title list. The stylesheet I am working on to do mundane things like changing the availability statement in the header has duly been expanded to supply the missing <idno>s. I also decided to add the new Proquest numbers (the so-called GOID), even though these are not present in the existing files.
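
For illustration, the kind of supplementation involved looks something like the following Python sketch (the real work is done in XSLT; the lookup table, file handling, and @type values here are illustrative assumptions rather than what the stylesheet actually does):

```python
from lxml import etree

TEI = "http://www.tei-c.org/ns/1.0"
NS = {"tei": TEI}

# hypothetical lookup table: TCP/DLPS identifier -> Proquest GOID
goids = {"A12345": "2240123456"}

def add_goid(path):
    tree = etree.parse(path)
    pubstmt = tree.find(".//tei:publicationStmt", namespaces=NS)
    tcp_id = tree.findtext(".//tei:idno[@type='DLPS']", namespaces=NS)
    goid = goids.get(tcp_id)
    # only add an <idno> if we know the GOID and none is already present
    if pubstmt is not None and goid \
            and pubstmt.find("tei:idno[@type='GOID']", namespaces=NS) is None:
        idno = etree.SubElement(pubstmt, f"{{{TEI}}}idno", type="GOID")
        idno.text = goid
        tree.write(path, encoding="utf-8", xml_declaration=True)
```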

Secondly, the image links. One reason for caring about the image set numbers is that they are used as part of the address to which @facs attribute values scattered throughout the texts are mapped. Back in the day, it was possible to link directly to a page image in this way. This facility is however no more: so far as I can tell, Proquest (and presumably their successors) will only allow you to access individual page images through their own interface. It is possible to access the same images via the JISC Historical Dataset sites by judiciously stringing together values from those <idno>s, but I have yet to find a reliable way of doing so for individual pages. For the present, therefore, the @facs values will remain a touching reminder of how things once were. Though I did add a link to the JISC site into the new headers, along with other useful documentation.

And thirdly the real subject of this entry: what to do about @rend. Now, I have long believed that the TCP P5 texts are not only valid TEI P5 XML but also valid against a specific TEI schema, to wit the schema named (after much argument and in the teeth of some opposition) TEI Simple Print. I distinctly remember (or think I remember) Sebastian and Magdalena putting quite a bit of effort into enhancing that schema with lots of @rendition values to match EEBO-TCP requirements. So when I actually tried validating my nice new files against it, I was a bit puzzled to find that they didn’t.

Specifically, the attribute @rend is not available in the TEI Simple schema, and has not been since at least August 2016. In its place, I should be using @rendition to point to one or more of the predefined simple:rendition values. So I spent an hour or two tabulating all the @rend values used in the new files and finding their simple:equivalents. This proved easy for most cases, but impossible for just a few (7), some of them esoteric (@rend=’upsideDown’, anyone?), but others (e.g. @rend=”margQuotes” and its friends margSglQuotes and margDblQuotes) quite frequent and clearly necessary. I also realised (belatedly) that allowing my script to make this change was going to make my new texts inconsistent with the existing ones. For the existing TCP P5 texts are not valid against TEI Simple either, I discovered, somewhat to my embarrassment: they use @rend with all sorts of exotic values all over the place. They do use @rendition, but in one way only: some <pb/> elements have @rendition=’simple:additional’. This was entirely mysterious to me for a while (until Paul remembered what it was for). In any case, I will worry about that when a new systematic revision of the whole collection is undertaken, should that day ever dawn. For the moment, I will grit my teeth, stick with @rend, and assure anyone who asks that all TCP P5 texts are valid against, err, TEI ALL. This is known as biting the bullet, I believe.
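
For the record, the tabulation itself is the easy part; something like the following Python sketch (the location of the new files is an assumption, and the actual inventory was done with other tools) will produce a frequency list of @rend values to work from:

```python
import glob
from collections import Counter

from lxml import etree

counts = Counter()
for path in glob.glob("P5/*.xml"):       # assumed location of the new texts
    tree = etree.parse(path)
    # look at every element carrying a @rend, whatever its name
    for el in tree.iter():
        rend = el.get("rend")
        if rend:
            # a @rend value may hold several space-separated tokens
            counts.update(rend.split())

for value, n in counts.most_common():
    print(f"{n}\t{value}")
```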

Update: 10 June 2021. I uploaded all 6498 new texts to new repositories in the GitHub textcreationpartnership collection over a period of 24 hours last week. And, at somewhat greater length, I have now updated my repository at eebo-bib to describe more precisely what I did to create a TEI-compatible TCP bibliography. Definitely time to declare victory and move on.

An experiment in counting the books

A couple of years ago I spent some time trying to determine which of the titles in the wonderful “At the Circulating Library” (ATCL) database were freely available online in digital form. This was for largely pragmatic reasons to do with building the ELTeC English language collection: other blog entries describe the method I used and some preliminary results. It’s not as easy as you might suppose to download reliable catalogue information from most digital libraries, nor is it always readily tractable when you do. After some experimentation, I hit on the idea of creating a magic key, a kind of fingerprint, derived from the title and author name as specified, which could then be matched against keys in the same format derived from ATCL entries.

More recently, it occurred to me that this data might also provide some interesting numbers to contribute to current debates about digitization priorities. Exactly why some titles make it to Project Gutenberg, or the HathiTrust, or the Internet Archive, while others don’t, is not a question to which simple un-nuanced answers are likely or even (maybe) possible, but we should still ask it. Those responsible for the digitization efforts of major libraries are, for some reason, a little coy about the principles on which books are chosen for digitization, or even about whether they actually have explicit selection policies. I assume that there is a difficult tightrope walk between, on the one hand, practical but purely adventitious matters (such as the relative locations of volume and scanner, the size and state of the volume, the time of day, the temperament of the scanner operator, etc.) and, on the other, principled criteria aiming to ensure a balance of, say, titles by female and male authors, or high and low brow, date of production, longevity of readership, and so on. It would be surprising if the choices were completely unrelated to characteristics of the population being sampled, or totally failed to reflect the cultural priorities of the scanning operation; the same uncertainties apply, of course, to the collection being sampled for digitization itself.

Anyway, I recently read an interesting article by Allen Riddell and Troy Bassett (“What library digitization leaves out”; preprint available from https://arxiv.org/abs/2009.00513)  which reports that in the data they looked at – the comparatively small sample of surviving English novels published in 1836 and 1838 – shorter books, and books with male authors are disproportionately more likely to be digitized. I naturally wondered whether this applies equally well across the whole of the 19th century.  Which is what led me to revisit my efforts of two years ago. But first, here are the results.

There are 19,912 titles in the current ATCL database. Of these, 9152 (46%) have authors identified in the database as male, 9809 (49%) are identified as female, and 951 (4%) are identified as unknown. These relative proportions are rather different if we look at titles with at least one digital surrogate, of which there are in total 9099 (45%). Of these 9099 digitized texts, we find 5221 (57%) are of male authorship, 3718 (41%) of female authorship, and 160 (2%) are unsexed.

Look at that again. Although there are actually more titles available for digitization from female authors than from male, the number that actually gets digitized is significantly smaller (if, like me, you think a gap of 16 percentage points is pretty significant). Hmmm. These counts of course derive from the whole period covered by ATCL, from 1800 to 1900, so I also calculated them for each decade, only to find that the proportions and their imbalance remain fairly consistent across the century. And this despite huge changes in the numbers: for the last decade of the century ATCL lists nearly 6000 titles, a six-fold increase on (for example) the fourth decade. What percentage of those titles were digitized? In both decades, over 51%. And what proportion of those digitized titles were male-authored? In both decades, 62%. There is some variability across the decades, but the basic picture remains the same.

One possible explanation might be that titles with unknown or unsexable authorship (e.g. the ubiquitous “Anonymous”) are more likely to have been female, and that hence we are not seeing all the truly female authors. But even were this the case (after all, why should we not equally well hypothesize that male authors might be bashful or crave secrecy?), the digitization rate for books ostensibly male-authored remains stubbornly higher than that for books ostensibly not male-authored (i.e. those classed as either F or U by ATCL). And the same imbalance, mutatis mutandis, shows up if we compare books ostensibly female-authored with books ostensibly not female-authored.

Here’s a table showing the raw counts:

Decade        All     "Male"   "Female"   "U"    A-dig   M-dig   F-dig   U-dig
All decades   19912   9152     9809       951    9099    5221    3718    160
1830s         482     256      174        52     250     164     85      1
1840s         1037    543      422        72     538     334     202     2
1850s         1483    595      778        110    718     347     358     13
1860s         2341    1019     1093       229    1015    540     456     19
1870s         2866    1189     1514       163    1300    642     633     25
1880s         4126    1693     2287       146    1765    945     782     38
1890s         5979    2995     2863       121    3092    1929    1103    60

 

And here’s another showing the percentages (Ad% = percentage of all titles digitized; M%, F%, U% = percentage of all titles with male, female, or unknown authors; Md%, Fd%, Ud% = percentage of digitized titles with male, female, or unknown authors):

               
Decade        Ad%      M%       Md%      F%       Fd%      U%       Ud%
All decades   45.70%   45.96%   57.38%   49.26%   40.86%   4.78%    1.76%
1830s         51.87%   53.11%   65.60%   36.10%   34.00%   10.79%   0.40%
1840s         51.88%   52.36%   62.08%   40.69%   37.55%   6.94%    0.37%
1850s         48.42%   40.12%   48.33%   52.46%   49.86%   7.42%    1.81%
1860s         43.36%   43.53%   53.20%   46.69%   44.93%   9.78%    1.87%
1870s         45.36%   41.49%   49.38%   52.83%   48.69%   5.69%    1.92%
1880s         42.78%   41.03%   53.54%   55.43%   44.31%   3.54%    2.15%
1890s         51.71%   50.09%   62.39%   47.88%   35.67%   2.02%    1.94%

 

In an ideal world, you’d expect the percentages for titles with male authors (M%)  and for digitized titles with male authors (Md%)  to be roughly the same, right?  Think on… And feel free to download the csv file behind these tables for your own experimentation.

One should always suspect the data, so I make no excuse for the following detailed blow-by-blow account of how I got these numbers. Full gruesome details, including the scripts mentioned below, are available from https://github.com/lb42/bookLists

The basic method was to download a complete catalogue of relevant titles from each target digital library, and then try to match them with records in the ATCL. For Google Books, which seemingly does not provide a complete catalogue online, I tried a different method, discussed further below.

I started by downloading the latest (June 2020) dump of the ATCL database and converting it to a basic TEI XML format. I then did much the same for the holdings of five digital libraries with good holdings of 19th century novels: the Hathi Trust, the British Library, the Internet Archive, Project Gutenberg, and Google Books. As a control, and for testing purposes, I also looked at a few smaller collections, notably the Victorian Women Writers Project at Indiana University and the (now defunct) University of Adelaide “ebooks” repository. I wanted to provide something similar to John Mark Ockerbloom’s lovely Online Books Page at https://onlinebooks.library.upenn.edu/ but more precisely tied in to ATCL.

Hathi Trust makes available a monthly dump of their entire collection as a huge tab-delimited file. Working with the most recent dump, dated September 1 2020, I used a simple-minded Perl script `hathiProcess.prl` to parse this file and select from it only freely-available English language books published in Great Britain between 1800 and 1920; an XSLT stylesheet `htConv.xsl` then converted the results to the common project format (CPF).
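
By way of illustration, here is a rough Python equivalent of that selection step (a sketch only: the real work was done by the Perl script). The file name follows the usual hathifiles naming convention, and the column positions and the set of MARC country codes standing in for “Great Britain” are assumptions to be checked against the hathifiles documentation:

```python
import csv
import gzip

# Assumed column positions in the tab-delimited hathifile (no header row);
# check these against the hathifiles documentation before relying on them.
HTID, ACCESS, TITLE, IMPRINT, DATE, PLACE, LANG = 0, 1, 11, 12, 16, 17, 18

# rough set of MARC country codes for Great Britain (also an assumption)
GB_PLACES = {"enk", "stk", "wlk", "nik", "xxk"}

with gzip.open("hathi_full_20200901.txt.gz", "rt", newline="") as dump:
    for row in csv.reader(dump, delimiter="\t", quoting=csv.QUOTE_NONE):
        try:
            year = int(row[DATE])
        except (ValueError, IndexError):
            continue
        if (row[ACCESS] == "allow" and row[LANG] == "eng"
                and row[PLACE].strip() in GB_PLACES and 1800 <= year <= 1920):
            # keep just enough to feed the XSLT conversion step
            print("\t".join([row[HTID], row[TITLE], row[IMPRINT], row[DATE]]))
```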

The British Library website makes available an Excel spreadsheet providing metadata for the titles from their collection which were digitized some time ago by the Microsoft Books project. I downloaded this, converted it to TEI with `csvtotei`, and converted the result to CPF (selecting just the 19th century titles) with `blConv.xsl`.

Project Gutenberg makes available several versions of its catalogue data. I worked with the most recently updated one, which is a vast archive of unbelievably verbose RDF files. Despite its complexity, this data doesn’t include any publication data for the source texts concerned (unsurprising really), though it does provide birth and death dates for the authors. To cut down the numbers a little, I rejected titles whose authors were not born during the 19th century, and also those which specified a MARC relator code “edt” (to cut out non-original editions). Once I had remembered how on earth to handle a gazillion tiny files of RDF (I did this back in 2018), I used the `gutConvRDF.xsl` script to process them all to CPF, and concatenated the results into a single file.

The Internet Archive, so far as I can see, doesn’t have any generally available or downloadable catalogue, though it does have a really good query interface. The method I used for attacking Google Books would presumably work equally well (or equally badly; see below) in this case, but I haven’t tried it. Instead I just used a predefined collection called `19thcennov`, which someone at the University of Illinois at Urbana-Champaign thoughtfully created back in December 2008. This gave me 7828 XML records, which were easily converted to CPF using `iaConv.xsl`.

The common project format files all consist of TEI <bibl> elements with either an @xml:id attribute or an <idno> specifying the identifying code for this item in the relevant repository: for example, `ia:foreignersnovel03pric` identifies the Internet Archive’s digitization of volume three of Eleanor Price’s novel “The Foreigners”. Each <bibl> also has an @n attribute supplying the magic key for the title, which is confected as follows:

  • remove the full stop following Mr or Mrs in any title containing one
  • take the substring of the title up to the first occurrence of one of the punctuation marks . , : ; or /
  • concatenate this with the author’s last name
  • convert to lower case and remove all punctuation characters and spaces

So, for example, ATCL lists a work with the title “The Foreigners: A Novel” attributed to the author “Eleanor C. Price”. The same work appears in the Internet Archive list, but with the author “Price, Eleanor C. (Eleanor Catherine)” and the title “The foreigners : a novel”. Despite the differing strings, both will get the same magic key “theforeigners|price”. This method is far from bullet-proof, but it’s serviceable.
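
Expressed as code, the recipe looks something like this minimal Python sketch (the “|” separator between title part and surname is inferred from the example key; the actual scripts in the bookLists repository may differ in detail):

```python
import re

def magic_key(title: str, author_last: str) -> str:
    # 1. remove the full stop following Mr or Mrs in the title
    t = re.sub(r'\b(Mrs?)\.', r'\1', title)
    # 2. keep only the substring up to the first . , : ; or /
    t = re.split(r'[.,:;/]', t, maxsplit=1)[0]
    # 3. concatenate with the author's last name (separator inferred
    #    from the example key "theforeigners|price")
    key = f"{t}|{author_last}"
    # 4. lower-case and strip punctuation and spaces (keeping the separator)
    return re.sub(r'[^\w|]', '', key.lower())

# both forms of the example title collapse to the same key
assert magic_key("The Foreigners: A Novel", "Price") == "theforeigners|price"
assert magic_key("The foreigners : a novel", "Price") == "theforeigners|price"
```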

For Google Books, as noted above, there is no readily downloadable catalogue. But there is an API, which in a moment of madness I thought it might be cool to learn how to use. A day of poking around led me to a neat Python script some helpful person had written to look up ISBN numbers (hat tip to AO8’s treasury), which I mercilessly hacked for my own purposes. My version reads a file of URL-encoded search requests like this “inTitle:the+inTitle:foreigners+inAuthor:Price”, fires them at the Google API, and processes the returns into a rudimentary bibl or a comment lamenting the absence or unavailability of the item in question. The file of search requests is rather long (one for each title in ATCL for which I have not yet found any digital version – a total of 11,203), so I make the program sleep for a while after firing off about 40 consecutive requests, to help the Google server catch up. Despite this considerate behaviour on my part, it did not take Mr Google long to decide that my program (or my IP address) was a threat, and to start returning unco-operative HTTP messages like 503 (“Service Unavailable”) and 429 (“Too many requests”). The API Help pages confirm that Google considers “using an app, program or script to perform a large number of searches in a short time” prima facie justification for temporarily blocking the IP address in question, though it’s not clear what exactly is meant by “large” (more than 100?) and “short” (less than a minute?) in that phrase. Furthermore, when I search using my specially-minted API key, there seems to be a hard limit of 1000 queries per day in any case: so this job is not going to be finished very quickly. Still, I do now have an extra 1517 records to show for two days’ work.
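
Stripped of the error handling and of the <bibl> construction, the core of such a script looks something like the sketch below. The Google Books volumes endpoint is real, but the input file name, the API key placeholder, and the length of the pause are illustrative guesses rather than anything Google documents:

```python
import json
import time

import requests

API = "https://www.googleapis.com/books/v1/volumes"
API_KEY = "YOUR-KEY-HERE"          # placeholder: your own API key goes here

def lookup(query: str) -> dict:
    """Fire one search at the Google Books API and return the parsed JSON."""
    # the '+' in the stored queries stands for an encoded space
    params = {"q": query.replace("+", " "), "key": API_KEY}
    r = requests.get(API, params=params, timeout=30)
    r.raise_for_status()           # 429/503 mean we are going too fast or too far
    return r.json()

with open("searches.txt") as f:    # hypothetical file of search requests
    queries = [line.strip() for line in f if line.strip()]

for i, q in enumerate(queries, start=1):
    items = lookup(q).get("items", [])
    if not items:
        print(f"<!-- nothing found for {q} -->")
    else:
        info = items[0].get("volumeInfo", {})
        print(json.dumps({"query": q, "title": info.get("title"),
                          "authors": info.get("authors")}))
    if i % 40 == 0:                # pause after every 40 or so requests,
        time.sleep(60)             # to let the Google server catch up
```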

Once I’ve created all these lists, I run the merger.xsl script to add <ref> elements to the ATCL-TEI file I created in the first step. This makes for some redundancy, for two reasons: firstly, for most of the archives a three-volume novel is likely to get a separate entry for each volume; secondly, for many titles there exist multiple digitizations, which may (or may not) derive from the same source. The following table shows, for each archive, the number of records selected for processing, the number of references to ATCL titles found, and the number of titles affected. Note that I haven’t yet done any de-duplication to remove overlaps. (The merging step itself is sketched just after the table.)

Archive             Records processed   ATCL references found   Titles affected
British Library     62015               9920                    5104
Hathi Trust         460070              18891                   5655
Internet Archive    7829                4691                    1655
Project Gutenberg   38338               2880                    2275
Google Books        ?                   1517                    1517
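
As for the merging step itself, it is essentially a join on the magic key. Here is a hedged Python sketch of the idea (merger.xsl is of course XSLT; the file names and the exact shape of the added <ref> are illustrative assumptions):

```python
from collections import defaultdict
from lxml import etree

TEI = "http://www.tei-c.org/ns/1.0"
NS = {"tei": TEI}
XML_ID = "{http://www.w3.org/XML/1998/namespace}id"

# index one archive's CPF records by magic key (held in @n)
digitized = defaultdict(list)
cpf = etree.parse("ia-cpf.xml")                  # hypothetical CPF file name
for bibl in cpf.iterfind(".//tei:bibl[@n]", namespaces=NS):
    ident = bibl.get(XML_ID) or bibl.findtext("tei:idno", namespaces=NS)
    if ident:
        digitized[bibl.get("n")].append(ident)

# add a <ref> to every ATCL record whose key matches one or more CPF records
atcl = etree.parse("atcl-tei.xml")               # hypothetical ATCL-TEI dump
for bibl in atcl.iterfind(".//tei:bibl[@n]", namespaces=NS):
    for ident in digitized.get(bibl.get("n"), []):
        etree.SubElement(bibl, f"{{{TEI}}}ref", target=ident)
atcl.write("atcl-merged.xml", encoding="utf-8", xml_declaration=True)
```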

I haven’t made available the CPF files for each archive, nor the final merged TEI version of the ATCL dump, since this is not really my data to share. But I have made available a file called atcl-links.csv, a spreadsheet with a row for each ATCL title digitized in one or more publicly available digital collections, mapping its ATCL identifier to its identifier in each repo. I’ll update these as and when the data improves.

Encoding the history of the OuLiPo

At the beginning of February, I had the pleasure of co-organising (with Sebastian Rahtz, Camille Bloomfield, and Hélène Campaignolle-Catel) a workshop on data capture, as the second event in the Algorithm Seminar Series which forms part of an interesting ANR-funded project called DifdePo. The project is a collaboration between the BnF and Ecritures de modernité, a research unit located at Paris III, and its objectives include the creation of a TEI-based digital archive of the archives of the OuLiPo, which are currently stashed away in boxes at the Bibliothèque Nationale’s Arsenal depository. The papers include letters, photos, press cuttings, postcards, drafts, and notes of all sorts, but for the purpose of this exercise we decided to focus on the records of the OuLiPo’s regular meetings, which began back in the early 1960s. The archive has already been catalogued, and work is in hand to produce digital images of a sizeable proportion of it. The object of our workshop was to explore ways of transcribing these documents, given that the project has very little funding and will therefore have to rely on the good will of volunteer transcribers, enthused by things OuLiPien but maybe a little deficient in TEI knowledge.

About a dozen people participated, most of them surviving to the end of the day. We began by asking them to transcribe a page from a small collection of pre-selected digital page images, using Word. (I freely admit to a degree of smugness on discovering at the last minute that the teaching room was initially equipped only with old-style doc-producing Word, which had to be upgraded to a more modern docx-producing version at rather short notice by the unflappable Joël.) This exercise demonstrated, as we had hoped, quite a bit of variation in what exactly should be transferred from the image to the text, and on what editorial principles, thus motivating a useful initial discussion about the principles and praxis of text encoding. One of the participants proposed (unprompted) the principle of “fidelity” to the source, while another argued repeatedly for “capturing the meaning”.

Once lulled into a false sense of security by this exercise, participants were exposed to the weirdness of an XML-editing environment, using everyone’s favourite XML editor oXygen and my usual tutorial: create a document, learn how to tag parts of it, learn how to manipulate the structure, and so on. We then offered them a more demanding workflow, involving first capturing a document in Word using a Word template which defined styles to highlight a number of significant features (headings, list items, etc., but also personal names and the like), secondly converting this to a TEI form using OxGarage and a specialised profile, thirdly looking at (and possibly modifying) that in oXygen, and then converting it back to Word to confirm the feasibility of round-tripping. Sebastian Rahtz of Oxford (whom God preserve) invested quite a bit of pre-workshop effort into setting up the necessary infrastructure for this, and making sure that it all worked correctly on the day. He also made it possible for us to inflict on the encoders a third alternative approach, based on an experimental installation of Ben Brumfield’s “From The Page” crowd-sourcing prototype software. I had expected this to be everyone’s favourite, but (maybe because we had already by then sensitized them to the delights of structural markup) our encoders seemed to find that the simplicity of its interface made it hard to take seriously. We had prepared tutorial scripts for each of the three approaches (TEI source code available from my tei-fr repository, if you’re interested), so I was able to spend some of the time wandering about taking photos of hard-working encoders.
By the end of the day, everyone had tried all three approaches, and everyone had produced a couple of TEI XML files conforming to a simple transcription schema I had prepared earlier. We collected them all up and Sebastian showed how our pretend archive could be displayed on a web page, complete with corresponding page images, and vocabulary lists, and personography. This was (of course) all done with a straightforward customization of the standard TEI-HTML stylesheets, now available in the Stylesheet package as part of the Difdepo profile.
Conclusions? We still don’t really know whether our TEI-XML transcriptions are aiming for “fidelity” or “meaning”, but we have at least demonstrated the possibility of either (or both). And we do know that the participants all seemed to be more enthusiastic about using the customized-Word-template approach than either raw oXygen or (possibly over-cooked) From The Page. We didn’t explore the idea of a pre-customised oXygen author-mode interface, which might well repay the necessary investment of effort if there is a lot of metadata to be entered, for example.

 

Joël and I sample the oXygen

I take the liberty of listing the names of the registered participants, for their greater glory:

  • Camille Bloomfield
  • Hélène Campaignolle-Catel
  • Paula Klein (Projet DifdePo)
  • Chris Clarke (Projet DifdePo)
  • Jeanne Devautour
  • Julie Bernard (Poitiers)
  • Marie Bonnot
  • Marianne di Benedetto (ENS Lyon)
  • Guillermo Hector
  • Pradeep Claassen
  • Louise Kari-Merau
  • Leïla Berlot
  • Barbara Servant (Univ Rennes II)
  • Clara de Reigniac
  • Gabrielle Bruzzone (Poitiers)
  • Claire Leroy

All affiliated with Paris III, unless otherwise indicated.

TEI++: an advanced training course

This summer I was invited by the CAHIER consortium, in partnership with the Corpus Ecrits and IRCOM consortia, to organise a four-day workshop on “advanced TEI”. I proposed a format divided into two parts (presentation, then practical work), and an organisation along three axes:

  1. modelling the resources and selecting the significant features
  2. TEI encoding and explicit markup of the modelled structures
  3. exploitation and analysis of the structured resources

I had also proposed sharing the teaching with a few French experts. The course took place at the Institut de Linguistique Française in Paris from 19 to 22 November 2012.

Here, in summary, is what actually happened…

Day 1

Proceedings began on the fifth floor of the ILF, in a nice light room, though not quite big enough for the 18 or so participants. This was not altogether bad, since the consequent huddling together encouraged the emergence of a cohesive group identity, which I also tried to encourage by getting the participants to place themselves on an improvised graph along two intersecting axes: literature vs linguistics, and researchers vs support staff. Most people turned out to be in the bottom right quadrant, i.e. support staff + linguistics, but there was also a smattering in the literary + researcher box, to say nothing of two sociologists who insisted on positioning themselves in the middle of the literature vs linguistics axis. The rest of this first introductory session was given over to a rapid review of some fundamentals of encoding, and a sampling of the websites of half a dozen real TEI projects around the world, which might have gone better had I rehearsed it more, but got the message across that quite a few very different projects are doing seriously cool stuff with the TEI.

After coffee, I introduced them very quickly to a spot of data analysis, using as a vehicle the celebrated postcard archive of M. Marcel Virgolos, and Lauranne then took over for a refresher on using oXygen. They marked up a postcard or two, and reviewed commonly used TEI tags for each of some pre-selected texts: a French novel, a poem, and a play. Most students completed all of these, mastering most of the key features of the oXygen XML editor quite rapidly, but I think we did not allow enough time for this session, given the mix of abilities present.

Lunch, in the form of a large cardboard box containing a “plateau” of cold cuts, salads, bread roll, plastic cutlery etc., duly appeared and was despatched. Thus strengthened, I embarked on an all-singing all-dancing overview of all the TEI modules and what you can do with some of them. That took about an hour, but helped motivate Lauranne’s traditional exercise on using Roma to make a schema by reduction from TEI-ALL, which followed it. By the end of the day, everyone seemed quite comfortable with the idea of schema customisation, and reasonably convinced that they might find what they wanted to mark up somewhere, somehow, in the TEI.

Day 2

On this and subsequent days we were displaced to a much better (because bigger) teaching room. I began the day in seriously magisterial mode by explaining (many of) the components of the ODD language, and why you might care to know about it. This was quite punishing, both for me and for some of the less technically-minded participants, but no-one visibly fell asleep. For the subsequent practical, Lauranne had prepared a script in which participants submitted their own texts for analysis by OddByExample, generating a personalised ODD. The majority of course had not come with their own texts, or had texts not in TEI P5, so they ran the exercise on a rather inadequate sample of the Virgolos corpus instead. With a bit more prep, I think this could be a really fun exercise and an excellent way of getting people to learn ODD properly. It also revealed a bug in OddByExample, which Sebastian graciously fixed overnight.

Lunch being cardboard plateaux once more, I went for a stroll round the nearby Parc de Montsouris to see some of the grey autumnal Paris daylight while it was available. I then droned on for well over an hour on the subject of the TEI Header, which I am now selling as “metadata for the rest of us”. They made me do it. The exercise we adopted was the French version of creating a manuscript description for the W. Owen manuscript, last seen in Berne. This is quite useful for the purpose, especially in combination with the following exercise on marking up a transcription of that manuscript; time permitting, however, I would have preferred to use a different French manuscript for both. If I had one.

Day 3

Wednesday I had carefully billed as the “journée des guest stars”, since the idea was to make other celebrated French TEI enthusiasts share in the work. So we began with a presentation, given by Alexandre Gefen, about TEI recommendations for dealing with named entities and their names. Since the room contained more than a few French linguists, this gave rise immediately to a heated debate on the philosophical basis and nature of nominal reference. The exercise in marking up names was a little under-prepared, and one of the students insisted on asking the Emperor’s New Clothes question (why bother?), which was answered by another participant citing (at length, and with enthusiasm) the work of Nicole Dufournaud inter alia, which was nice. Since Alexandre had to leave early, I filled the gap by moving up my brief overview of tools options, giving a plug for Sebastian’s stylesheets, and letting them experiment with OxGarage, which they loved.

For lunch we went to the brasserie down the road, which was a much, much better idea than the plateaux. Everyone got very jolly and there was a fair amount of shouting. Our second invited expert of the day was Bertrand Gaiffe, from ATILF, who delivered an excellent pair of lectures about the encoding of oral and linguistic data respectively, also involving a fair amount of interaction and discussion, but not much actual tagging, since TEI interfaces for the appropriate tools remain elusive, best efforts of the LingSig notwithstanding.

Day 4

The final day began with a presentation, given by our third invited expert, Alexei Lavrentev from ICAR, on the various TEI orthodoxies concerned with the editing of primary resources. Participants were then offered the choice of doing either the reverse-transcription exercise or the visual encoding exercise from Berne; both options were taken up, though I was too busy sorting out the website to see how far they got with either.

After another nice brasserie lunch (roast duck), I spent about 15 minutes showing how to use TEI Boilerplate, which went down remarkably well: “Génial!” they cried, as they saw all that tricky encoding in John’s demo file being rendered beautifully by Safari (France is still largely the land of the Mac). The rest of the afternoon was devoted to a more ambitious TEI-savvy piece of software: TXM, from the Textométrie project at Lyon. Alexei showed us what it was, and demonstrated how to make it sit up and do tricks with the Graal and Brown corpora, which participants had pre-installed. He also showed it working with a selection of literary texts prepared for use throughout the workshop.

Verdict

I think this workshop worked much better than it deserved to (I always think that). All the participants seemed very happy at the end, and several of them said they had learned more than they expected to. I think the organisation of the programme made good sense, and the balance of exposition and exercise was approximately right, though we probably didn’t do enough to make the practicals consistent and relevant. A few parts suffered from lack of preparation, and I think we could have done more to get a single case study working throughout the course of the four days, in addition to the various more specialised materials we introduced. But next time we’ll definitely get everything right. Thanks are due to the participants, the organisers, and all my fellow trainers, especially Lauranne for calming me down at moments of high anxiety.
All the materials used in the workshop are available in PDF starting from http://meet.tge-adonis.fr/sites/default/files/2012-11-initial.pdf. Dedicated TEI hackers may also be interested in the XML sources of the presentations which are available from my svn repository at http://code.google.com/p/tei-fr/source/browse/#svn%2Ftrunk%2FTalks%2F2012-11-paris