
How to make a sow’s ear into a silk purse/Comment faire d’une buse un épervier

As the proverb says, you can’t turn a buzzard into a sparrow-hawk. But here’s how I went about producing a TEI-conformant, minimally encoded edition of the 1611 King James Bible, starting from an all-singing, all-dancing, vastly over-complicated web site, to the existence of which Martin Mueller had alerted me last Friday in a plaintive posting to TEI-L.

The problem with web sites like this is that they expose only various HTML views of their underlying data, usually heavily infested with javascript, often deploying all sorts of nonstandard gizmos to make the pages look just so. I’ve no quarrel with their doing that; I just wish they’d make it a bit easier to get at the real data, i.e. the transcribed text and the pointers to the images that go with it. This site is a case in point: after spending some time looking at some of the gazillion HTML files a simple-minded wget -r starts hoovering up, I hypothesized that the data is actually stashed away somewhere inaccessible, and that all you see on the site is a bunch of variously configured static web pages which have been created from it. The good thing about that is that the static web pages were created by an automaton, and the process can therefore be reversed by another automaton. The bad thing is that (in this case at least) clearly different automata have been used to generate vaguely different versions of the same page. For example, the following three URLs all show subtly different versions of the same first page of the 1611 bible: https://www.kingjamesbibleonline.org/1611_Genesis-Chapter-1/ https://www.kingjamesbibleonline.org/Genesis-Chapter-1_Original-1611-KJV/ and https://www.kingjamesbibleonline.org/Genesis_1_1611/

These are not simple redirects: each URL delivers a file in a different format. Well, I could have asked whoever runs this site what was going on, but that would have spoiled the fun, so I chose the format I liked best (the last of the three above) and wrote a script to generate requests for just those pages. I had to assume that the URLs would follow a consistent naming format, an assumption encouraged by a table of the names of the books of the bible which I found in one of the chunks of embedded javascript and moved into my little perl script. Running this script produced a bash script which grabbed each page using curl and saved it as an HTML file on my machine.
What went wrong with this process? Surprisingly little. I didn’t find out till Sunday lunchtime that I had completely overlooked the Apocryphal books, nor that the file naming conventions used for these were not quite the same as for the canonical chapters (“Judith”, for example, is actually spelled “Iudeth”); but for the bulk of the 1300 or so chapters my guesses about the URL to use were spot on. [This was Hubris. See my comment below.]
The real challenge was disentangling the useful data from the resulting screen-scraped mess of pottage. My usual approach here is to run the HTML I get through Dave Raggett’s utterly wonderful tidy utility to turn it into kosher XML, and then write an XSLT script to pick out the bits I want. In this case, however, the pottage I got was so messy that even tidy threw up its metaphorical hands and refused to proceed with it, so I had to concoct a little perl script to pre-process each file and extract just the useful part, before running it through tidy and processing the resulting XML into conformant TEI by means of saxon and a little XSLT stylesheet. Like this:

for f in webScraped/*.html; do
  FNAME=`basename $f .html`
  echo ${FNAME}
  # extract the useful part of each page, tidy it into XML,
  # then transform it to TEI, passing the chapter id as a stylesheet parameter
  perl extract.prl $f |
    tidy -q --error-file tidyerrs --doctype omit --numeric-entities yes -asxml |
    saxon -o:chaps/${FNAME}.xml -s:- -xsl:postGrab.xsl chapId=${FNAME}
done

And what went wrong with this process? Well, quite a lot, as you might expect, but nothing that couldn’t be fixed by tweaking and rerunning the process (this is why I always put the effort into writing bash scripts for jobs like this: they can be rerun till I get it right). For example, I was deriving an XML id for each chapter from the name of the input file, but some of the files had names beginning with digits. My plan was to put each chapter into a separate XML file and then use XInclude to munge them together into a TEI document (to maximize the flexibility of said mungeing), but for this to work I had to get the namespace declarations in the right place, and as anyone who has used them knows, anything involving namespace declarations never works the first time. To say nothing of unexpected inconsistencies in the data, which in fact were really very few. So far, the only thing that has caught me out was a handful of chunks which looked like biblical data in the HTML markup but were not. Fortunately these were detectable because they wound up with an invalid @n attribute in the XML output and could therefore be weeded out after the event.
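For the record, the skeleton of such a driver file looks roughly like this (the file names are guesses based on the URL scheme described above, and not necessarily the ones I actually used):

<TEI xmlns="http://www.tei-c.org/ns/1.0"
     xmlns:xi="http://www.w3.org/2001/XInclude">
 <teiHeader>
  <!-- metadata for the whole edition -->
 </teiHeader>
 <text>
  <body>
   <!-- each chapter file must declare the TEI namespace on its own root
        element, and its xml:id must not begin with a digit -->
   <xi:include href="chaps/Genesis_1_1611.xml"/>
   <xi:include href="chaps/Genesis_2_1611.xml"/>
   <!-- ... one inclusion per chapter ... -->
  </body>
 </text>
</TEI>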
While all this was going on, I wrote my driver file and did some further sniffing around on the interwebs for sources from which to complete the work. The 1611 Bible has a title page and loads of prefatory matter, not all of which is available on www.kingjamesbibleonline.org (for the record, I found the title page on Wikimedia, and the rest of the front matter in several places, but in page image form only).
How should the Bible be modelled as a TEI document? My first thought was that each verse is an <ab>, each chapter a <div>, each book a <text>, and each testament a <group>. This made sense to me, but then I realised that processing would be simpler if instead each book were regarded as a <div> of a different type; hence, that is what the current versions of both driver file and ODD say. Considerations of where to put the genealogies, tables for Easter, almanacs, etc., which arguably do not belong in the front matter, may lead me to review that decision, so I have left <group> lurking in my ODD. It should be easy to produce a separate driver file representing that alternative view.
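In concrete terms, that gives each book something like the following shape (the attribute values are mine for illustration, and the verse text is quoted from memory):

<div type="book" n="Genesis">
 <div type="chapter" n="1">
  <ab n="1">In the beginning God created the Heauen, and the Earth.</ab>
  <!-- ... the remaining verses, one <ab> apiece ... -->
 </div>
 <!-- ... the remaining chapters ... -->
</div>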

A more pressing need, however, is to sort out the placement of the page image links: in the HTML source, and therefore in my XML, links to the page images for the whole of a chapter are given as a sequence at the start of the transcribed text for that chapter, rather than being placed at the point in the transcript where each page begins. In some cases that point is identifiable because the transcribers have chosen to include part of the running page header in the transcription there (it winds up as a <fw> element); but in many cases it isn’t. Most chapters occupy only a few pages, so sorting this out would not be a major effort, just a rather tedious and not easily automatable one.
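Concretely, the job is to turn the first of these fragments into something like the second (the @facs values and verse numbers are invented for the example):

<!-- as scraped: every page image link at the head of the chapter -->
<div type="chapter" n="1">
 <pb facs="images/Genesis_1_page1.jpg"/>
 <pb facs="images/Genesis_1_page2.jpg"/>
 <ab n="1">...</ab>
 <!-- ... -->
</div>

<!-- as wanted: each <pb> at the point where that page actually begins -->
<div type="chapter" n="1">
 <pb facs="images/Genesis_1_page1.jpg"/>
 <ab n="1">...</ab>
 <!-- ... -->
 <pb facs="images/Genesis_1_page2.jpg"/>
 <fw type="header">Genesis</fw>
 <ab n="14">...</ab>
 <!-- ... -->
</div>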
So what have we learned? Actually it’s not that hard to make a usable TEI document even from something as idiosyncratic as this web site. Which is encouraging. Let me also hasten to add that I mean no disrespect for the hard work (still less for the generosity) which must have gone into the production of that website: everyone is entitled to their own design decisions. I am pleased to express my thanks to the anonymous people who have put in that effort, and decided to share its fruits even with the godless multitude. As the 1611 translators say in their preface ‘Zeale to promote the common good, whether it be by devising any thing our selves, or revising that which hath bene laboured by others, deserveth certainly much respect and esteeme’.

Notes towards a definition of TEI Conformance

[Figure not reproduced: a diagram of ‘blobs’ described below.]

Each of the blobs here represents three subtly different things:

  • an ODD : that is, a collection of TEI specifications
  • a formal schema generated from that ODD, and its natural language documentation
  • the set of documents considered valid by that schema.
The TEI provides TEI All: a set of over 500 uniquely identifiable elements, classes, attributes, etc., and a schema in which they are all permitted. For all practical purposes a user of the TEI must make a selection from this cornucopia, and we call that selection a TEI subset. Of course there are many, many possible TEI subsets, each making different choices of elements or attributes or classes, but the sets of documents which each consequent schema will validate all have in common that they will also be considered valid by the schema TEI All.
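In ODD terms, a subset is simply a schemaSpec which selects less than everything TEI All provides; a minimal sketch (the identifier and the particular modules and elements chosen here are purely illustrative) might be:

<schemaSpec ident="myTeiSubset" start="TEI">
 <moduleRef key="tei"/>
 <moduleRef key="header"/>
 <moduleRef key="core" include="p head"/>
 <moduleRef key="textstructure" include="TEI text body div"/>
 <moduleRef key="linking" include="ab"/>
</schemaSpec>

Any document valid against the schema generated from such a selection is, by construction, also valid against TEI All.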

A user of the TEI may however do more than simply choose a subset of the provided specifications. They may also provide additional constraints for aspects of an encoding left underspecified by the TEI, for example by requiring that attribute values be taken from a closed list of possible values rather than being any syntactically valid token. They may simply change the datatype of an attribute, for example from a string to an integer or a date. They may also provide an alternative identifier for an element or an attribute, for example to change its canonical English name for one taken from another language. In some cases, such attribute value changes are equivalent to a subsetting operation; in others they are not. Renaming operations never result in a subset: a document in which the element names have all been changed to their French equivalents cannot be validated by an English language version of TEI All. A user of the TEI can also change the content model or the class membership of existing TEI elements, in ways which may or may not be equivalent to a subsetting operation.
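Expressed as ODD declarations, the first and third of these personalisations might look something like this (the element names, values, and alternative identifier are invented for the example):

<!-- constrain @type on <div> to a closed list of values: still a subset -->
<elementSpec ident="div" mode="change">
 <attList>
  <attDef ident="type" mode="change">
   <valList type="closed" mode="replace">
    <valItem ident="book"/>
    <valItem ident="chapter"/>
   </valList>
  </attDef>
 </attList>
</elementSpec>

<!-- give <ab> an alternative French name: documents using it are no longer a subset -->
<elementSpec ident="ab" mode="change">
 <altIdent>bloc</altIdent>
</elementSpec>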

We use the term customised subset for all these kinds of personalisation because they result in something which is not necessarily a further subset of the TEI subset concerned, but a further modification of it. In the general case, their conformance with TEI All can be determined only by inspection, and their validation may require some additional processing.

Finally, a user of the TEI is at liberty to define entirely new elements and attributes, and to make such components members of existing TEI classes so that existing TEI elements may refer to them. They may also modify the content models of existing TEI elements to refer explicitly to such new elements. This results in an extended subset, since it contains elements or attributes additional to those provided by the TEI All schema. Such additional components should always be labelled as belonging to a non-TEI namespace. A processor can then determine that these components may be left out of consideration when determining the validity of a document with respect to TEI All.
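A sketch of such an extension (the element name, namespace, description, and class membership are invented for the example) might be:

<elementSpec ident="verseGroup" ns="http://example.org/ns" mode="add">
 <desc>groups a run of verses sharing some feature of interest</desc>
 <classes>
  <!-- membership of an existing TEI model class lets existing
       content models refer to the new element -->
  <memberOf key="model.divPart"/>
 </classes>
 <content>
  <elementRef key="ab" minOccurs="1" maxOccurs="unbounded"/>
 </content>
</elementSpec>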

In addition to these formal considerations, TEI conformance involves attention to some less easily verifiable constraints, specifically the twin requirements of honesty and explicitness. By honesty we mean that elements in the TEI namespace must respect the semantics which the TEI Guidelines supply as a part of their definition. By explicitness we mean that all modifications (i.e. both customised and extended subsets) should be expressed using an ODD to document exactly how the TEI declarations on which they are based have been derived. (An ODD need not, of course, be based on the TEI at all, but in that case the question of TEI conformance does not arise.)

Formally speaking, we can say of a conformant TEI document:

  • it must be a well-formed XML document, and
  • it must be valid against the TEI All schema:
    • without modification (it is a TEI subset), or
    • after deletion of any elements which are not in the TEI namespace, together with their children irrespective of namespace (it is a TEI extension; a sketch of this canonicalization step appears below), or
    • after application of any canonicalization algorithm specified by its associated ODD (it is a TEI customised subset).
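The deletion step for an extension lends itself to automation: a minimal sketch, by no means official TEI tooling, is an identity transform which drops every element outside the TEI namespace together with its contents:

<xsl:stylesheet version="2.0"
 xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
 <!-- copy everything as it stands ... -->
 <xsl:template match="@*|node()">
  <xsl:copy>
   <xsl:apply-templates select="@*|node()"/>
  </xsl:copy>
 </xsl:template>
 <!-- ... except elements outside the TEI namespace, which disappear
      along with everything inside them -->
 <xsl:template match="*[namespace-uri() != 'http://www.tei-c.org/ns/1.0']"/>
</xsl:stylesheet>

The result can then be validated against TEI All in the usual way.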

The purpose of these and similar rules is to make interchange of documents easier. They do not however guarantee it, and they certainly do not provide any guarantee of interoperability. Unlike many other standards, the goal of the TEI is not to enforce or impose consistency of encoding, but to provide a means by which encoding choices and policies may be more readily understood, and hence (to some extent) algorithmically comparable.