
Consistency is a good thing…

Now that I have all the available files in a form which is at least valid according to a TEI P5 schema, I can start fussing about the consistency of the markup they contain.

Let’s start with an easy one. Attribute names may be marked up using the element <att>, or the element <ident>, or the element <name>, or just flagged in the running text by a preceding @ (I didn’t find any cases of a following =, though I suspect there are some). I really don’t care a lot which is used, but I do care that there should be just one rule to bind them, not four. So how do things stand at present?

There are 298 <att> elements, 212 <ident> elements, and 98 <name> elements. <name> is used both for personal names and for names of attributes, classes, etc. A first step, therefore, is to turn all name[@type] elements into idents, and to simplify the values for type. A second step might be to look at all occurrences of attribute names simply flagged with an @ sign. In the wPressDox files, the regex [[:space:]]@[[:alpha:]][[:alnum:]]+ matches 608 times; in the legacyTEI files, only 18. Some careful regex matching might enable me to turn most of the @thing cases into <att>s. But hold on a second: if consistency is the goal, it would be much easier to turn all <att>thing</att>s into @thing than the reverse. The question really is: is faithfulness to the original tagging goals completely unimportant? We have 16 cases of ident[@type='attr'], 47 of ident[@type='attrName'], and 21 of ident[@type='att'], as well as 32 of name[@type='attr'], and indeed some random occasions where <code> (with no @type value) is used to delimit an attribute name. Making all of these consistently <att> seems to preserve the original encoder’s goals. Turning them all into @xxx, however, seems a rather different proposition.
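Mechanically, that normalization is easy with an identity transform. Here is a minimal sketch (the @type values matched are the ones counted above; the stylesheet itself is my reconstruction of the approach, not the actual script):

<xsl:stylesheet version="2.0"
  xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
  xmlns:tei="http://www.tei-c.org/ns/1.0"
  xmlns="http://www.tei-c.org/ns/1.0"
  exclude-result-prefixes="tei">

  <!-- identity: copy everything not matched more specifically -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>

  <!-- the competing encodings of attribute names all become <att> -->
  <xsl:template match="tei:ident[@type = ('attr', 'attrName', 'att')] | tei:name[@type = 'attr']">
    <att>
      <xsl:apply-templates/>
    </att>
  </xsl:template>

</xsl:stylesheet>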

There are also cases where the original tagging has tried hard to make distinctions we would probably no longer bother with: for example <mentioned> and <soCalled>. I leave these well alone.

Now a more tricky one.

There are 1817 <div> elements, 705 of them being @typed. 619 of those are type='h2', but this does not necessarily reflect their hierarchic position, merely that they were originally indicated by an h2-level heading in the WordPress file. Of the 86 typed divs which are not 'h2', the most frequent are h3 (45), followed by h1 (8). However, of the 45 h3 divs, only 20 have an h2 parent; the remaining 25 are contained by an untyped div. Other very rare values include 'agendaItem' (3), 'appendix' (4), and 'glossary' (3).

It seemed like a good idea to check that <div> elements contain something other than a nested div, and to remove the redundant layer if not. The XPath //div[count(*) eq 1][div] finds 52 items, though this may be an artefact of my retagging script. Somewhat more problematic are <div>s which contain just a <head>: in some cases, these are probably genuine, for example

<div><head> 12:30 – 13:30: Lunch</head></div>
<div type="h2"><head>Goodbye Peter and thank you! 😟❤️</head></div>

but absent supernatural powers, it’s in principle impossible to know how to interpret a sequence of headings in the WordPress files. For example, here’s a snippet from the 2020-10_30824 document:

The “Review…” lines are div/heads containing a link to another part of the document (this kind of transclusion happens nine times in all). Are they siblings or parents of the “SUNDAY…” div/heads? Is the “Proposal on ruby glosses” a child or a sibling of “SUNDAY, 25…”? You tell me. I have mostly left them all as siblings for the moment.
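For the redundant-wrapper case, at least, the fix is mechanical: a further template in the identity transform sketched above, matching the same XPath, which simply emits the inner content:

  <!-- unwrap a <div> whose only child element is another <div> -->
  <xsl:template match="tei:div[count(*) eq 1][tei:div]">
    <xsl:apply-templates/>
  </xsl:template>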

Then there are lists: of the 2456 <list>s, only 109 are typed; the @type values are "unordered" (33), "simple" (21), "ordered" (25), and "gloss" (30).

There are 45 untyped lists which contain one or more items containing a <label>; three of these have labels both as children of the list itself and as children of its items, mercifully all confined to one file (2009-12). The <label> tag is used ambiguously: sometimes it contains a person’s siglum, sometimes a subheading. On further investigation, the siglum usage occurs in only one file (tcm24), so I changed all of those to <rs>. The 30 explicitly gloss lists are all in the legacy TEI files too.
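That last change is another one-template affair, applied to tcm24 only; a sketch (the match pattern and the @type value are my guesses at what the data needs, not a transcript of the actual fix):

  <!-- tcm24 only: sigla tagged as <label> become <rs> -->
  <xsl:template match="tei:list/tei:label | tei:item/tei:label">
    <rs type="siglum">
      <xsl:apply-templates/>
    </rs>
  </xsl:template>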

Further down the slippery slope, there are plenty of minority-interest tagging distinctions. I have turned a scattering of <quote>s into <q>s, and all <ptr> elements into <ref>s, but not yet checked that all the @targets actually go somewhere. I have retained nine cases of <time> which might just as well be <date>s, and six cases of <foreign>, as well as 76 <lb/> elements standing in (mostly) for proper structural markup. I have retained and made valid the one document in which <sp> and <speaker> elements have been used, but not tried to add such markup to the couple of occasions where the minutes launch into dramatic mode.

But rather than continue to polish this pig’s head, I have spent today providing it with some infrastructure, and putting it all in an accessible repository at https://github.com/lb42/theCellar.

I also spent far too much time providing a CETEIcean front end to it, at https://lb42.github.io/theCellar/tcMins/index.html.

Plenty more to do. Of course.

The usual problem

Now that I have all of the TEI Council minutes in XML which is more or less valid against TEI All, I can start worrying about defining a sensible schema for them, oh bliss. One possibility might be just to accept and preserve every tagging decision taken during the long history of this archive, even the silly ones. Another might be to retro-convert everything to a single brutalist vision of how things Ought To Be. Or somewhere between the two extremes, perhaps.

Over the last 23 years, different editors of TC minutes have taken different views in all the places where you might expect them to. Even in the days when minutes were prepared in kosher TEI, mostly conforming to TEI Lite, there was still plenty of scope for different practice. Shall we distinguish soCalled and mentioned, or term and emph? Are we consistent in using emph for linguistic emphasis rather than formatting? Do we distinguish q and quote, and if so, why? If we have gi and att (or occasionally ident type='att'), do we also need tag, and code, and ident?

In more recent times, when such ontological anxieties have become perhaps less feverish, the minutes use a comparatively restricted set of distinctions, mostly to do with whether a snippet of text is in italic or bold, or used as a heading or a link, or is a list item. Indeed, sometimes the tagging decisions we see in the XML file are purely an artefact of the formatting tweaks needed to present the minutes on the WordPress website, and have little to do with document structure or meaning. And sadly, in many cases, if a semantically tagged version of such documents ever existed, it is now lost. Should we, in the interests of consistency, enforce the lowest common denominator across the whole set of documents?

Consistency, at least in the way major components of each document are presented, would surely be advantageous. To take a simple example, every set of minutes begins with a list of the persons participating in the meeting. Sometimes it is presented as a list of items; sometimes as a single paragraph; sometimes as a sequence of paragraphs. Almost always the names of individual attendees are associated with a siglum or set of initials, but the way in which this is all represented in the XML structure varies considerably. This sort of thing is easy, if time-consuming, to make consistent. And something like the current convention, in which each person’s name is given as a distinct <item> within a <list>, is probably what should be aimed for, since the various ways these lists are currently presented are really only accidents of formatting, of much less interest than the ease of processing the list in various ways. Whether or not to deploy the full TEI paraphernalia linking occurrences of a person’s initials in the text to their appearance in the list of attendees is another question.
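Something like the following, then, as a target (names and sigla invented for the purpose; the @type values are illustrative guesses, not an established convention):

<list type="attendance">
  <head>Present</head>
  <item><name>Ann Example</name> (<rs type="siglum">AE</rs>)</item>
  <item><name>Bob Sample</name> (<rs type="siglum">BS</rs>)</item>
</list>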

Of course, if we were starting this exercise from zero, we would follow the textbooks and first carry out a data analysis. What are the important entities in a set of minutes, and what are their properties? Each of these documents relates to a meeting which took place over one or more days, in a specific place or in cyberspace, with a specified set of participants. The minutes indicate the topics discussed, to some extent formalised in terms of identified issues or action points. We might also ask what sorts of research questions our analysis should facilitate: how often do particular individuals or kinds of individual intervene? How long does it take for an issue to be resolved? How many different issues are under consideration at a particular time? Where do issues come from? And so on.

But we are not starting this exercise from scratch. The documents already exist. Moreover, the conceptual entities they are concerned with, and therefore represent, change over time, reflecting the Council’s evolution both in terms of its practice and its sense of purpose. That purpose has always been to maintain and develop the technical content of the TEI Guidelines, of course; but with the availability of sophisticated issue-tracking and reporting software the way in which this is carried out has changed a great deal. Consequently the operational model – the modus operandi – of the Council has also changed a great deal. These changes are necessarily reflected in the organization and content of the minutes.

Writing a full history of the TEI Council’s evolution is not, however, the purpose of this document, tempting though that is. A few salient aspects of that history do affect our document analysis. For example, it’s necessary to understand that when first set up, the Council worked very much in the same way as the original TEI project: its role was largely to initiate, supervise, and integrate work carried out in more or less autonomous working groups. This had worked well for some major expansions of the P5 Guidelines, such as the addition of manuscript description, or character encoding issues following the adoption of Unicode, where the TEI had been able to constitute a motivated and informed group of experts to produce concrete proposals; less well in areas where such a group proved harder to constitute or motivate. For the first five years of the Council’s existence, then, from 2002 to the publication of TEI P5 1.0.0 in 2007, its minutes are full of reports from specific working groups, and actions on someone to pursue them.

This was also a period during which the TEI enjoyed the luxury of two paid editors. The process by which the Council itself took over editorial responsibility probably started with the full-scale review of the first draft of P5, in which each chapter was assigned to a Council member for review, though actual implementation of changes to the Guidelines (which involved a version control system called Perforce) remained a specialised activity, not available to all. The minutes from this period therefore necessarily have many “action points” aimed specifically at the editors.

For releases 1.0.1 to 2.7.0 (2008 to 2014) the following formulation of the Council’s role appeared on the PDF title page:

TEI P5:
Guidelines for Electronic Text
Encoding and Interchange
by the TEI Consortium
Originally edited by C.M. Sperberg-McQueen and Lou
Burnard for the ACH-ALLC-ACL Text Encoding Initiative
Now entirely revised and expanded under the supervision
of the Technical Council of the TEI Consortium
edited by Lou Burnard and Syd Bauman

Only in September 2014, with the 2.7.0 release, did that last line disappear, establishing finally that the Council was now editorially responsible for the whole.

By this date the Council’s modus operandi had also changed considerably. Already, in 2009, we find the Council reviewing and acting on proposals for change to the Guidelines known as “feature requests”, originating from the wider TEI community rather than from the Council or the Board. A key step towards expanding this practice was the adoption of the open-source issue tracker provided by SourceForge, which hosted the TEI Guidelines source from 2007 onwards, and which remains a recognizable forerunner of the current GitHub-based system.

The move to such systems has several implications for the current archival project. Firstly, it means that a substantial amount of the TEI’s intellectual history is now exhaustively documented, including all sorts of crazy ideas, false starts, and frequent repetition, but all on a platform which the TEI itself does not own or control. Secondly, it means that the links into the documentary base provided by those external systems, and the more diplomatic narrative constructions provided by the current minutes, are really quite important if we wish to develop a proper historical understanding. And finally, of course, the availability of this detailed repository of issues and their resolution has dramatically changed the way the TEI Council does its work.

Defining the target

It’s easy to say rather glibly that TEI markup is a good archival format, and in many respects it is: experience shows that a TEI file can nearly always be read without too many assumptions about the platform or software needed to read it. Because a TEI document uses a very basic form of labelled bracketing, developing software to act upon the markup is a breeze; moreover, because the semantics and syntax of the markup are well defined, the software can perform whatever tricks it likes on the basis of an explicit model of the document’s structure and semantics. The tricky part is deciding what exactly the components of that explicit model should be: what (to coin a phrase) is this text, really?

On the screen I am currently typing at I see that the phrase “Defining the target” and the word “really” are both in an italic font. The first is a heading, and the second is a word I wish to emphasize. In neither case is it particularly helpful to state that there’s a font change here: if I lost that information in the first case you would still (probably) recognise the words as a heading on other grounds (it’s not a sentence; it’s a separate block; it’s in a place where a heading is conventionally appropriate, etc.), but in the second, without the signal given by the font style change, you have no easy way of noticing that this word is meant to be more salient than the others, still less of recognising the allusion it makes to a famous journal article. Is recognising (and preserving) this emphasis as essential a part of this document as distinguishing the heading from what follows, or noting the paragraph divisions?
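In TEI terms, the choice at stake is between recording the appearance and recording the intent (my own illustrative encoding):

<hi rend="italic">really</hi>   <!-- what the word looks like -->
<emph>really</emph>             <!-- what the italics mean -->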

Although it’s tricky, it’s something I and others have been doing for decades, this business of deciding which are the “essential” components of a document, independently of its realisation on screen or paper. The claim is not just that this separation of the document from its realisation is meaningful, but that it’s also useful. Certainly it makes it much simpler to process masses of similar but different documents in a reasonably intelligent way if their structural components and semantically salient properties are explicitly and exhaustively flagged in the same way. But there may be a price to pay for that simplicity: we may have to renounce the ability to visualise the document exactly as one or other of its many historical realisations did, just as we do in other cases where such a realisation depended on a specific software infrastructure. Good luck emulating a pixel-perfect WordPerfect 4.2 or WordStar view of your TEI document on the basis of its TEI archival form.

All this by way of prelude to the next stage in my attempts to recover/reconstruct a usable TEI archive of the deliberations of the TEI Council. Those deliberations currently exist (as previous blog entries have shown) in one or more of three different forms: as Google Docs, as WordPress HTML pages, or in one or other legacy TEI format. All of these formats are relatively simple to convert into XML without loss of such information as they already contain: the task is to define a minimal TEI markup scheme to which they can all be reduced, without losing anything essential. It is that classic TEI markup problem: what do you want to distinguish in your documents? With the added constraint that I’d rather not have to introduce distinctions not already explicit (one way or another) in the sources.

I started with the WordPress XML files, since these constitute the official published record, even though they have many shortcomings. I wrote another Perl script to extract a list of all the different XML tags present in the files, and an XSLT stylesheet containing a default template for each of them, mapping format-oriented tags like <h1> and <b> to semantic ones like <head> and <hi>. I then spent a happy hour or three fiddling with that, before deciding that this approach was too labour-intensive to be a general solution.
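The templates themselves were unremarkable; a representative sample (my reconstruction of the pattern, assuming the extracted WordPress tags are in no namespace, with TEI as the default output namespace):

  <!-- headings of any level become <head>; nesting is a separate problem -->
  <xsl:template match="h1|h2|h3|h4">
    <head><xsl:apply-templates/></head>
  </xsl:template>

  <!-- purely presentational tags become <hi>, with @rend recording the formatting -->
  <xsl:template match="b|strong">
    <hi rend="bold"><xsl:apply-templates/></hi>
  </xsl:template>
  <xsl:template match="i|em">
    <hi rend="italic"><xsl:apply-templates/></hi>
  </xsl:template>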

So I moved on to the Google Docs files. I exported them all as docx files, applied the default TEI docxtotei conversion, and started looking at my 100 or so allegedly TEI documents. The first step was to generate an ODD which described their actual tagging practice, for which I used the TEI oddByExample utility. This is a good way of starting the process, but it has some quirks (such as referencing every element class you might use, even though you don’t actually use any of them, and explicitly deleting each attribute supplied by a class rather than deleting the class itself), and one major drawback. The drawback is actually perhaps a virtue: the schema you get from the ODD it generates is a strictly conformant subset of TEI All. So if your data has features which are not valid in TEI All (shall we say @xml:id values of the wrong datatype, or empty <list> elements, or <list> or <table> elements appearing directly inside <front> instead of being decently wrapped inside a <div>), it won’t be valid.
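The class quirk looks something like this in the generated ODD (a hand-made illustration of the pattern, not actual oddByExample output):

<classSpec ident="att.global" mode="change" type="atts">
  <attList>
    <!-- each unused class attribute is deleted individually... -->
    <attDef ident="rend" mode="delete"/>
    <attDef ident="rendition" mode="delete"/>
    <attDef ident="style" mode="delete"/>
    <!-- ...rather than the class simply being dropped -->
  </attList>
</classSpec>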

(Those examples of invalidity were not chosen at random, by the way: they are all consequences of a bug in the current docxtotei tool, reported this morning as https://github.com/TEIC/Stylesheets/issues/604.) Anyway, this means that either the ODD needs to be adapted to be more forgiving, or the data needs to be corrected to be less weird. Doing the former would also mean tweaking the data (to avoid polluting the TEI namespace with the weirdness), so choosing the latter is probably the wiser course. Especially since it’s not so hard to correct the aberrations I have identified so far.

So my first XSLT stylesheet is simplify.xsl, which does just that. If it finds a <list> or a <table> directly inside a <front>, it wraps it in a <p> and looks the other way. When it finds an <anchor>, it sticks an extra letter in front of its identifier. After its ministrations, all 112 generated XML files (bar one, which had an empty <list> element) were valid against the generated ODD schema. Hosanna.
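The relevant templates, reconstructed from that description (they ride on the usual identity transform; the choice of prefixed letter is arbitrary):

  <!-- wrap stray lists and tables in <front> inside a <p> -->
  <xsl:template match="tei:front/tei:list | tei:front/tei:table">
    <p>
      <xsl:copy>
        <xsl:apply-templates select="@*|node()"/>
      </xsl:copy>
    </p>
  </xsl:template>

  <!-- make a dubious identifier valid by prefixing a letter -->
  <xsl:template match="tei:anchor/@xml:id">
    <xsl:attribute name="xml:id" select="concat('a', .)"/>
  </xsl:template>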

That leaves 51 items with no easy XML representation, or 12 items if we assume that the legacy TEI format also counts as potentially easy XML. Sadly, all but one of those 12 are in the ill-formed WordPress XML format, so some (more) manual tweaking will be required before I can safely apply the retagfromWP conversion to them. Then I will have to work out what to do with the legacy TEI files, some of which are still in P4. But I think I see a way forward…

Surveying the Remains

There have been 161 TEI Council meetings up to February 2023. The minutes of each meeting (conference call or face-to-face) – except one – are available on the Council website, but only as WordPress pages.

I have tracked down a P4 or P5 source file for 40 of them, covering meetings up to October 2008. I think there must once have been more, because some of the WordPress pages show clear signs of having been adapted or converted from a TEI original. In several cases, some TEI tags are still present, notably <gi> (which appears in 20 cases between 2009-04 and 2014-06) or publicationstmt (sic), which appears along with other remnants of a TEI header in 38 cases up to 2016-03. But there is no trace of the original source files anywhere on the current website.

From 2016 onwards, the website provides only WordPress-format files, in which HTML tagging is used. However, this tagging is not entirely well-formed: there are many cases where hard line breaks in a table cell are marked by HTML p end-tags, for example, and at least one where the internal structure of a table row has been completely lost.

As a first step, I wrote a Perl script which did its best to extract a single well-formed XML document from each set of WordPress pages. This failed consistently for the 36 pre-2016 pages which contain residual TEI tagging, but worked reasonably well for the remainder, most of the time. Only 13 of the post-2016 files (out of 85) needed hand-editing to make them well-formed, though the tagging still leaves much to be desired. In particular, I realised that some of the WordPress files made no attempt to preserve the often deeply nested structure of the minutes, or to distinguish marginal annotation from the text.

Since 2016 the minutes have been edited in Google Docs, and drafts are therefore (currently) available in Word, ODT, or other formats from the Google Docs website, if you know where to find them. This part (finding them) became much easier when I asked former Council colleagues to share their secret stash of drafts with me. Converting from Google Docs to TEI is comparatively simple and much less error-prone than working with the WordPress pages directly. It really ought to be the WordPress pages which constitute the document of record for these minutes, but …

It seemed like a good idea to do a bit of checking in any case. So here’s what I did:

  1. use curl to download all the WordPress pages to 161 separate files called yyyy-mm.html
  2. use a Perl script `articulate.prl` to extract from each of them a (hopefully) well-formed XML file containing just the ‘article’ recognised by WordPress; save the result in a file called yyyy-mm_dddd.xml (where dddd is the WordPress article number)
  3. check the well-formedness of the resulting files with `xmlwf`, and spend no more than a day or two fiddling with the ill-formed ones to improve them
  4. spend a lot of time downloading and renaming files from Google Docs. The downloading was needed for files not in the zip James sent me; the renaming was essential for my personal sanity.
  5. finally, enrich the XML file made in the previous blog entry with links to all the files collected together.

At the last count, there are 162 entries (this includes one which is mysteriously missing from the current TEI website). Of these,

  • 85 are available as well-formed wpressxml files
  • 37 are ill-formed wpressxml files
  • 41 are only available in a legacy TEI format
  • 115 are available as draft versions from Google Docs

Of the 37 ill-formed WordPress files, 11 are not also available in Google Docs format.

The Google Docs collection lacks anything before 2012-04, and (for no apparent reason) three more recent items: minutes from 2014-01, 2015-10, and 2017-11.

So my next step will probably be to define a target TEI format (with an ODD of course) and set about writing snippets of XSLT.

Yesterday’s Information Tomorrow (maybe)

If you go to the TEI’s website at http://www.tei-c.org you will find, as you might hope, a respectable number of documents tracking the evolution of the Text Encoding Initiative over the last umpteen decades. Curiously, though, the record for the most ancient period (before 2008, shall we say) is a lot easier to find and manipulate than that for more recent times. This posting records my attempts to put together in archival format the full record of the meetings of the TEI Technical Council.

The Council, as any fule kno, met for the first time in 2002, and is still producing regular reports of its debates and its decisions. There is a page on the TEI website (https://tei-c.org/activities/council/Meetings/) which “lists TEI Technical Council meetings and teleconferences, with links to the meeting minutes.”

I downloaded that list (it’s a WordPress HTML file, of course), ran it through HTML Tidy, and processed the result to produce a nice simple TEI file of entries like this:

<list>
  <head>2022</head>
  <item>conference call <date>8 December 2022</date>
    <ref target="https://tei-c.org/activities/council/meetings/tei-technical-council-teleconference-2022-12-08/"> [on website]</ref></item>
  <item>conference call <date>10 November 2022</date>
    <ref target="https://tei-c.org/activities/council/meetings/tei-technical-council-teleconference-2022-11-10/"> [on website]</ref></item>
</list>

My quixotic goal is to enrich this data with links to TEI source files for each set of minutes, preferably in a consistent TEI format.

Now, twenty years ago this would have been quite a reasonable proposition, since (as the current TEI Vault shows) the TEI once had an “eat your own dogfood” policy of producing all of its documents in TEI. Over the years, this policy has varied somewhat, largely as a consequence of changes in the tools available, and the culture that goes with them. These policy changes relate not just to the look and feel of the website itself but also to which versions of its contents are preserved and how. Today, I think it is not unreasonable to say that much of the TEI website exists only as WordPress pages: many of those pages were first created as Google Docs and then converted to WordPress, some of the older ones were created originally in hand-crafted TEI XML, and the very oldest were created in TEI P4 SGML, but the only versions that can reliably be downloaded from the current website are in WordPress HTML.

Much of the time, of course, this is unproblematic. Mostly, we just want to read the stuff, not analyse it. But occasionally, and especially for the older material, whoever or whatever was responsible for producing the WordPress files has really made a hash of it. Consider, for example, the working paper

https://tei-c.org/activities/council/working/tcw02-approaching-the-son-of-odd-source-markup-for-p5/

This working document is quite an important one in the history of ODD. But as currently presented on the TEI website it is badly broken, to the extent that the text has become incomprehensible. Consider this paragraph:

Comparison with earlier versions of the page (thank you, Wayback Machine) shows that this is a recent breakage: here, for example, is the same paragraph as it appeared back in 2005, when it entered the Internet Archive.

The Wayback Machine, of course, can only archive what its crawlers find. They found this page a couple of times between 2005 and 2018, both times looking fine, but thereafter only the WordPress version. This would not matter so much were it not for the fact that the original TEI P5 source has not apparently been archived anywhere, so the breakage cannot easily be fixed.

Such losses in translation occur occasionally in more recent documents too. Here’s a paragraph from the WordPress view of document tcm46 (Minutes of the TEI Council’s April 2011 meeting), for example:

Again, until or unless I track down the original version of this file, there’s no way of filling that particular gap.

Less annoying, but more pervasive, is the fact that the WordPress files rarely try to preserve any structural or semantic information. The markup will mostly contain a long series of list items, some of which may pertain to the same topic, some of which may in fact be headings, and some of which are an accident of formatting. In the text (apart from links) there’s no explicit indication of interesting things you might want to search for, such as names, places, or dates.

Very few WordPress files are well-formed HTML, though the wonderful W3C utility tidy does a good job of pushing them into processable shape. Out of 120 WordPress files, 38 (nearly a third) failed to respond to this treatment, mostly because they contained an unhealthy mixture of HTML and TEI or TEI-like tags.

And finally it has to be said (I’ll be brief) that it seems really sad that the TEI is preserving its deliberations in a proprietary, tool-dependent, presentation-oriented format … the kind of format which the TEI was set up to preserve scholarship from. What kind of apostasy is that?

The origins of ODD

I’m moving house this week, which involves packing up thirty years of accumulated junk of various sorts. As a result, every now and then I stumble upon some long-lost historic document, like this one. It dates from a lunch that Michael Sperberg-McQueen and I enjoyed at the Lido restaurant in Bergen in November 1991. This being a family restaurant, it was equipped with paper tablecloths and wax crayons, Norwegian kids for the use of, which Michael and I were quick to reappropriate to our immediate needs, namely some kind of visual representation of the production system we wanted to create for the editing and processing of the TEI Guidelines, version P2. We knew we were going to write and edit it in some version of TEI SGML; we had faith that anything in SGML could be transformed into anything else. We just had to work out how, and what.

P1 had been produced by some devious hackery that only Michael understood and, more critically, which ran only on the mainframe at UIC; we wanted something that would be platform (hardware and software) independent. Such was the promise of TEI SGML, after all. Somewhat to our horror, the only reasonable high-level programming language in which we were both reasonably competent and for which there were decent implementations on all the machines we collectively used (IBM CMS, VAX VMS, IBM PC, Macintosh…) seemed to be a now largely forgotten string-handling language called Macro Spitbol, so we decided that our production system (what nowadays we’d call a workflow) would have to be written in that. But of course the heart of everything would be a nice author-friendly TEI SGML dialect, for which we optimistically coined the acronym ODD: One Document Does-it-all. ODD files would be parsed by an SGML parser, and the output filtered through a variety of Spitbol processors to create other formats. And that, more or less, is what we did.

On this schematic you can see the basic idea in blue. The big blue circle is the ODD format, from which are generated canonical TEI files (with extension .TIN (for Tiny) or .TEI), RL files (extension .TD), and DTD files: the three little blue boxes. DTD files are of course SGML DTD files, which is why you see a green line going back from them to validate individual ODD files (I don’t know why it’s labelled LB, though). “Tiny” files would use a subset of the TEI Lite schema defined back in 1988; RL (later renamed .REF) files would use the TEI vocabulary Michael had developed for reference documentation of individual elements (“TD” for tag documentation). Down the middle you see a list of TLAs in blue which I think must have been attempts to decide on a name for the format (WEB, Joe, LAM, RDF, CSP…), though what they expand to I really don’t remember – what a pity we didn’t choose RDF. Or not. And over on the left in red you see some notes which eventually became the canonical structure of the TEI Guidelines: there is a chapter about the “blort”, containing prose paragraphs; there is a documentation element referencing the blort tag; and there is a parameter entity reference which pulls in the definitions for the blort chapter.

What happened next? Well, we did set up a workflow more or less on this model, and we did use three separate filters written in Macro Spitbol (mostly by Michael) which turned our ODD SGML into two flavours of straightforward TEI-Lite-like SGML, which we called “P2X” and “REF”, and which also generated SGML DTD fragments. After experimenting with a generic filter called “tf” (also in Spitbol) to translate the generated TEI files into LaTeX, and dallying with a Canadian tool called OmniMark, we finally settled on a rather swish transformation engine called Balise, produced by a French company called AIS. Either way, we were able to print the fascicles of P2 in something that not only looked quite nice but also looked just the same whether I printed it in Oxford or Michael in Chicago. Except for the paper size, of course: ain’t standardisation a marvellous thing.

And what happened to ODD? It turned out to be quite a good idea. We gave a presentation about it at the ACH-ALLC conference in 1994, though I cannot remember what we said and we never got round to writing it up. Michael developed the ideas in the “tag documentation” part quite extensively, and (I believe) used them also in his next job working for the W3C, but the TEI’s ODD stayed more or less unchanged until work started on the TEI’s XML reincarnation, at which time the whole system was re-imagined and redesigned as the lean mean generic schema generation system we know and love today. But that’s another story.