How to make a sow’s ear into a silk purse/Comment faire d’une buse un épervier

As the proverb says, you can’t turn a buzzard into a sparrow-hawk. But here’s how I went about producing a TEI-conformant, minimally encoded edition of the 1611 King James Bible, starting from an all-singing, all-dancing, vastly over-complicated web site to the existence of which Martin Mueller had alerted me last Friday in a plaintive posting to TEI-L.

The problem with web sites like this is that they expose only various HTML views of their underlying data, usually heavily infested with javascript, often deploying all sorts of nonstandard gizmos to make the pages look just so. I’ve no quarrel with their doing that, I just wish they’d make it a bit easier to get at the real data, i.e. the transcribed text and the pointers to the images that go with them. This site is a case in point: after spending some time looking at some of the gazillion HTML files a simple-minded wget -r starts hoovering up, I hypothesize that the data is actually stashed away somewhere inaccessible and all that you see on the site is a bunch of variously configured static web pages which have been created from it. The good thing about that is that the static web pages were created by an automaton, and the process can therefore be reversed by another automaton. The bad thing is that (in this case at least) clearly different automata have been used to generate vaguely different versions of the same page. For example: the following three URLs all show subtly different versions of the same first page of the 1611 bible : https://www.kingjamesbibleonline.org/1611_Genesis-Chapter-1/  https://www.kingjamesbibleonline.org/Genesis-Chapter-1_Original-1611-KJV/ and https://www.kingjamesbibleonline.org/Genesis_1_1611/

These are not simple redirects: each URL delivers a file in a different format. Well, I could have asked whoever runs this site what was going on, but that would have spoiled the fun, so I chose the format I liked best (the last of the three above) and wrote a script to generate requests for just those pages. I had to assume that the URLs would follow a consistent naming scheme, an assumption encouraged by a table of the names of the books of the Bible which I found in one of the chunks of embedded javascript and moved into my little perl script. Running this script produced a bash script which grabbed each page using curl and saved it as an HTML file on my machine.
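In effect the generated script boils down to something like the following sketch (a reconstruction rather than the real thing: only two books are listed, the chapter counts are illustrative, and so is the local file-naming scheme):

# sketch of the kind of script my perl script generated: one curl call per chapter,
# using the Book_Chapter_1611 URL pattern and saving each page under webScraped/
mkdir -p webScraped
declare -A chapters=( [Genesis]=50 [Exodus]=40 )   # ... and so on for the other books
for book in "${!chapters[@]}"; do
  for c in $(seq 1 "${chapters[$book]}"); do
    curl -s "https://www.kingjamesbibleonline.org/${book}_${c}_1611/" \
      -o "webScraped/${book}-${c}.html"
    sleep 1   # be polite to the server
  done
done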
What went wrong with this process? Surprisingly little: I didn’t find out till Sunday lunchtime that not only had I completely overlooked the Apocryphal books, but also that the file naming conventions used for these were not quite the same as for the canonical books (« Judith » for example is actually spelled « Iudeth »), but for the bulk of the 1300 or so chapters my guesses about the URL to use were spot on.
The real challenge was disentangling the useful data from the resulting screen-scraped mess of pottage. My usual approach here is to run the HTML I get through Dave Raggett’s utterly wonderful tidy utility to turn it into kosher XML, and then write an XSLT script to pick out the bits I want. In this case, however, the pottage I got was so messy that even tidy threw up its metaphorical hands and refused to proceed with it, so I had to concoct a little perl script to pre-process each file and extract just the useful part, before running it through tidy, and processing the resulting XML into conformant TEI by means of saxon and a little XSLT stylesheet. Like this:

for f in webScraped/*.html; do
  FNAME=$(basename "$f" .html)
  echo "${FNAME}"
  # extract the useful part of the page, tidy it into XML, then transform it to TEI
  perl extract.prl "$f" | \
    tidy -q --error-file tidyerrs --doctype omit --numeric-entities yes -asxml | \
    saxon -o:chaps/${FNAME}.xml -s:- -xsl:postGrab.xsl chapId=${FNAME}
done

And what went wrong with this process? Well, quite a lot, as you might expect, but nothing that couldn’t be fixed by tweaking and rerunning the process (this is why I always put the effort into writing bash scripts for jobs like this: they can be rerun till I get it right). For example, I was deriving an XML id for each chapter from the name of the input file, but some of the files had names beginning with digits, which an XML identifier may not do. My plan was to put each chapter into a separate XML file and then use XInclude to munge them together into a TEI document (to maximize the flexibility of said mungeing) — but for this to work I had to get namespace declarations in the right place, and as anyone who has used them knows, anything involving namespace declarations never works the first time. To say nothing of unexpected inconsistencies in the data, which in fact were remarkably few. So far the only thing that has caught me out was a handful of chunks which looked like biblical data in the HTML markup but were not. Fortunately these were detectable because they wound up with an invalid @n attribute in the XML output and could therefore be weeded out after the event.
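The assembly step itself is pleasantly brief. A minimal sketch, assuming xmllint and jing are available, and using placeholder file names (driver.xml for the XInclude driver, kjv.rng for the schema generated from my ODD):

# resolve the xi:include references in the driver file into a single TEI document,
# then check the result against the schema generated from the ODD
xmllint --xinclude driver.xml > KJV-1611.xml
jing kjv.rng KJV-1611.xml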
While all this was going on, I wrote my driver file and did some further sniffing around on the interwebs for sources from which to complete the work. The 1611 Bible has a title page and loads of prefatory matter not all of which is available on www.kingjamesbibleonline.org (for the record, I found the title page on Wikimedia, and the rest of the front matter at several places, but in page image form only).
How should the Bible be modelled as a TEI document? In my first view, each verse is an <ab>, each chapter is a <div>, each book is a <text>, each testament is a <group>. This made sense to me, but then I realised that processing would be simpler if instead each book were regarded as a <div> of a different type; hence, that is what the current versions of both driver file and ODD say. Considerations of where to put the genealogies, tables for Easter, almanacs etc., which arguably do not belong in the front matter, may lead me to review that decision, so I have left <group> lurking in my ODD. It should be easy to produce a separate driver file representing that alternative view.

A more pressing need, however, is to sort out the placement of the page image links: in the HTML source, and therefore in my XML, links to the page images for the whole of a chapter are given as a sequence at the start of the transcribed text for that chapter, rather than being placed at the point in the transcript where that page begins. In some cases that point is identifiable because the transcribers have chosen to include part of the running page header in the transcription there (it winds up as a <fw> element); but in many it isn’t. Most chapters occupy only a few pages, so sorting this out would not be a major effort, just a rather tedious and not easily automatable one.
So what have we learned? Actually it’s not that hard to make a usable TEI document even from something as idiosyncratic as this web site. Which is encouraging. Let me also hasten to add that I mean no disrespect for the hard work (still less for the generosity) which must have gone into the production of that website: everyone is entitled to their own design decisions. I am pleased to express my thanks to the anonymous people who have put in that effort, and decided to share its fruits even with the godless multitude. As the 1611 translators say in their preface ‘Zeale to promote the common good, whether it be by devising any thing our selves, or revising that which hath bene laboured by others, deserveth certainly much respect and esteeme’.

Notes towards a definition of TEI Conformance

[diagram of blobs omitted]

Each of the blobs here represents three subtly different things:

  • an ODD : that is, a collection of TEI specifications
  • a formal schema generated from that ODD, and its natural language documentation
  • the set of documents considered valid by that schema.

The TEI provides TEI All: a set of over 500 uniquely identifiable elements, classes, attributes, etc., and a schema in which they are all permitted. For all practical purposes a user of the TEI must make a selection from this cornucopia, and we call that selection a TEI subset. Of course there are many, many possible TEI subsets, each making different choices of elements or attributes or classes, but the sets of documents which each consequent schema will validate all have in common that they will also be considered valid by the schema TEI All.

A user of the TEI may however do more than simply choose a subset of the provided specifications. They may also provide additional constraints for aspects of an encoding left underspecified by the TEI, for example by requiring that attribute values be taken from a closed list of possible values rather than being any syntactically valid token. They may simply change the datatype of an attribute, for example from a string to an integer or a date. They may also provide an alternative identifier for an element or an attribute, for example to replace its canonical English name with one from another language. In some cases, attribute value changes are equivalent to a subsetting operation; in others not. Renaming operations never result in a subset: a document in which the element names have all been changed to their French equivalents cannot be validated by an English language version of TEI All. A user of the TEI can also change the content model or the class membership of existing TEI elements, in ways which may or may not be equivalent to a subsetting operation.

We use the term customised subset for all these kinds of personalisation because they result in something which is not necessarily a further subset of the TEI subset concerned, but a further modification of it. In the general case, their conformance with TEI All can be determined only by inspection, and their validation may require some additional processing.

Finally, a user of the TEI is at liberty to define entirely new elements and attributes, and to make such components members of existing TEI classes so that existing TEI elements may refer to them. They may also modify the content models of existing TEI elements to refer explicitly to such new elements. This results in an extended subset, since it contains elements or attributes additional to those provided by the TEI All schema. Such additional components should always be labelled as belonging to a non-TEI namespace. A processor can then determine that these components may be left out of consideration when determining the validity of a document with respect to TEI All.

In addition to these formal considerations, TEI conformance involves attention to some less easily verifiable constraints, specifically the twin requirements of honesty and explicitness. By honesty we mean that elements in the TEI namespace must respect the semantics which the TEI Guidelines supply as a part of their definition. By explicitness we mean that all modifications (i.e. both customised and extended subsets) should be expressed using an ODD which documents exactly how they derive from the TEI declarations on which they are based. (An ODD need not of course be based on the TEI at all, but in that case the question of TEI conformance does not arise.)

Formally speaking, we can say of a conformant TEI document:

  • it must be a well-formed XML document, and
  • it must be valid against the TEI All schema:
    • without modification (it is a TEI subset), or
    • after deletion of any elements it contains which are not in the TEI namespace, together with their children irrespective of namespace (it is a TEI extension; a sketch of this check follows the list), or
    • after application of any canonicalization algorithm specified by its associated ODD (it is a TEI customised subset).
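A concrete, if entirely unofficial, sketch of the second case, assuming xmlstarlet and jing are to hand and using placeholder file names:

# delete every element (and with it its whole subtree) that is not in the TEI namespace,
# then validate whatever survives against a copy of the TEI All schema
xmlstarlet ed -d "//*[namespace-uri() != 'http://www.tei-c.org/ns/1.0']" mydoc.xml \
  > mydoc-tei-only.xml
jing tei_all.rng mydoc-tei-only.xml

(The same jing invocation, minus the deletion step, covers the first case.)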

The purpose of these and similar rules is to make interchange of documents easier. They do not however guarantee it, and they certainly do not provide any guarantee of interoperability. Unlike many other standards, the TEI does not aim to enforce or impose consistency of encoding, but to provide a means by which encoding choices and policies may be more readily understood, and hence (to some extent) algorithmically compared.

(Another reason) Why I love the Internet

So, at the recent TEI Conference in Vienna, Elisa and I were indulging in a little mutual admiration over our shared knowledge of an obscure work entitled Thalaba the Destroyer by the early English Romantic poet Robert Southey (rhymes, as any fule kno, with « mouthy »). So when I got back home, I went to look for the volume containing said work which I dimly remembered having on my shelves, in the decrepit-but-too-nice-to-throw-away section. And sure enough, there it was. The front board has come loose, but the first three openings look like this:

[images: front board, half-title, title page]

Having scanned those first few pages, I naturally asked Mr Google what he knew about the matter. And was thus able rapidly to confirm :

  • My copy of Thalaba is the cheap reprint (two volumes in one) published
    by Vizetelly and Beeton in 1853. There is a Google-scanned version of the same edition, available from the Internet Archive. They have included with it a couple of pages of advertisements for other works published by Clarke Beeton (pp. 7 and 8) which are, however, missing from mine.
  • What seems like another copy of the same edition is currently on sale at Abe Books for the startling sum of $199.76. Mine is in poor condition, which is why it only cost me half a crown back in 1967, when I used to frequent Oxford’s second hand bookshops (there aren’t any to frequent these days).

As you may have noticed above, my copy also contains several signs of its previous owners. As well as the book plate, and the inscription above, there’s a nice message from Aunty Sarah, the donor,  opposite the preface:

[image: front-1]

and there’s also an intriguing note from « JB » dated some twenty years later, opposite the start of the poem proper.

[image: body-01]

So… what have we learned? Rosamund was given this book by her aunt, Sarah Brent, in 1860. And in 1882, her husband felt compelled to record his own experience of the Eastern exotic in the same book « We met at Persepolis an Arab maiden of most lovely form and features — she was a dream of beauty never to be forgotten ». What she made of it, one can only conjecture.

But the reason I love the Internet is that (pondering these matters after breakfast this morning) it has helped me place these people a little more precisely in time and place. A search for « Rosamund Borrowman » told me that the 1861 Census shows a person of that name, born 1825 in Kent, married to John Borrowman, born 1830 in Midlothian, residing in Middlesex in 1861. The ancestry.co.uk site where I found this record is pay-walled, so no further details are available, but that seems reasonably plausible.

And searching for « Rosamund Borrowman John » I was able to find a record of her death. Some industrious volunteers have been surveying the gravestones of a place called Hambledon in Surrey, and there she is: « Rosamund Vertue the beloved wife of John Borrowman. She died 25th August 1895. Also the above named John Borrowman son of Robert Borrowman born in Edinburgh 3rd April 1830 died at Hambledon 4th July 1906. Also Elizabeth daughter of the above died 22nd October 1932 aged 72 years ». It’s all in the spreadsheet.

My next step, obviously, will be to find out where Hambledon is, and whether you can get there by train. Maybe.

Rompons avec le ronron techno-productiviste des institutions!

Back in August or September, I remember bleating anxiously on this blog about having rashly agreed to give a talk on les Humanités Numériques as part of a seminaire « Avenue Centrale » organised by the MSH in Grenoble. I can now report that eventually (in the English sense) I did manage to get some old slides updated and licked into shape, aided by the four hour T-not-so-GV journey to Grenoble last December, and a week or so of being unbearable to myself and everyone else around me. The slides duly appeared on Slideshare, and the folks at Avenue Centrale have even published a nice video and a podcast of me delivering them, but that’s not nearly as interesting as what actually happened on the day.

[image: Coming to terms with an implausibly pink armchair]

On 20th December, after a meeting of the conseil scientifique du MSH Alpes, I found myself ensconced in an implausibly pink fauteuil, clutching a microphone, and ready to go, having delayed the obligatory 30 minutes for bigwigs to turn up, when there was a minor kerfuffle as the organisers realised that a bunch of scruffy students were busy at the front door handing out an A2-sized pamphlet promisingly titled Humanités Numériques: Gare à la propagande!!!

The source of the pamphlet, which characterizes me as un petit soldat de la conversion au numérique des Humanités, was subsequently tracked down online by one of the French DH twitterati (à savoir, Martin Grandjean) within a few minutes of my tweeting this image of it after the show. Aside from the distribution of the pamphlet, the promised Action-critique took the form of three or four extra persons attending my lecture, one of whom also gave a brief speech deploring the industrial and social cost of mass digitization (I think) during the Q&A session. An agreeable though brief debate ensued, none of which sadly seems to have made it to the published version of the video, and we then all adjourned for coffee and horrible sandwiches downstairs, during which I was able to continue to chat amicably with the protesters, though the term seems barely appropriate. I learned that these were actually eco-warriors with concerns about the way big business was driving technology into inappropriate places (there have been somewhat critically received plans to hand out tablets to all school children, in an interesting reprise of the UK Government’s BBC Micro initiative in the 1980s). On my way out I also tried to take some photos of the activists using my new tablet, which involved much banter and cursing, as I have barely mastered this new device. Out of deference to their desire for anonymity, the photos will have to stay in my personal archive for a few more decades though.

Tidings of this unusual event caused a (very brief) flurry of excitement on twitter. Frederic Clavert was a bit peeved to find that his logo had been appropriated for the pamphlet; others were disappointed to find no coherent plan for action in it. And there were also (tee hee) expressions of extreme jealousy from a few of my DH colleagues — Moi aussi une affiche! A brief sample of my first significant « moment » in the political history of DH in France (Marjorie told me that’s what it was) follows.

[image: tweets]

Data versus Reality

… is not the title of the book I’ve been re-reading this week, though it might well be. Bill Kent’s Data and Reality was first published in 1978, and comes from the heroic age of database design and development, a period when such giants as Astrahan, Chen, Chamberlin, Codd, Date, Nijssen, Senko and Tsichritzis were slugging it out over the relative merits of the relational, network, and binary database models and the abstractions they supposedly modelled: a struggle predominantly over terminology and ways of thought since (as Kent shows) almost all of these differently named and passionately advocated models were fundamentally very similar, differing only in the specific compromises they chose when confronted by the messiness of reality. Whether you call it a relation or an object or a record, the globs of storage handled by every database system were still records, combinations of fields containing representatives of perceptions of reality, chosen and combined for their utility in a specific context. The claim that such systems modelled reality in any complete sense is easy to explode; it’s remarkable though that we still need to be reminded, again and again, that such systems model only what it is (or has been) useful for their creators to believe. Kent is sanguine about this epistemological lacuna: “I can buy food from the grocer, and ask a policeman to chase a burglar, without sharing these people’s view of truth and beauty”, but for us, living in an age of massively interconnected knowledge repositories, which has developed almost accidentally from the world of more or less well-regulated corporate database systems, close attention to their differing underlying assumptions should be a major concern. This applies to the differently constructed communities of practice and knowledge which we call “academic disciplines” just as much as it does to the mechanical information systems those communities use in support of their activities.

In its time, Data and Reality was remarkable for introducing the idea that data representations and the processes carried out with them should be represented in a unified way, the basic idea of what we now call object-oriented processing; yet it also reminds us of some fundamental ambiguities and assumptions swept under the carpet even within that paradigm. Are objects really uniquely identifiable? “What does ‘catching the same plane every Friday’ really mean? It may or may not be the same physical airplane. But if a mechanic is scheduled to service the same plane every Friday, it had better be the same physical airplane.” The way an object is used is not just part of its definition. It may also determine its existence as a distinct object.

Kent’s understanding of the way language works is clearly based on the Sapir-Whorf hypothesis: indeed, he quotes Whorf approvingly “Language has an enormous influence on our perception of reality. Not only does it affect how and what we think about, but also how we perceive things in the first place”. There is an odd overlap between his reminders about the mocking dance which words and their meanings perform together and contemporaneous debates within the emerging field that Wilks has charmingly characterized as “Good Old Fashioned AI”. And we can also see echoes of similar concerns within what was in the 1970s regarded as a new and different scientific discipline called Information Retrieval, concerned with the extraction of facts from documents. Although Kent explicitly rules text out of discussion (“We are not attempting to understand natural language, analyse documents, or retrieve information from documents”) his argument throughout the book reminds us that data is really a special kind of text, subject to all the hermeneutical issues we wrongly consider relevant only to the textual domain.

This is particularly true at the meta-level, of how we talk about our data models, and the systems we use to manipulate them. Because they were designed for the specific rather than the general, and because they were largely developed in commercially competitive contexts, the database systems of the 1970s and 1980s proliferated terms and distinctions amongst many different kinds of entity, to an extent which Kent (like Ockham before him) argues goes well beyond necessity. This applies to such comparatively arcane distinctions as those between entity, attribute, and relationship, or between type and domain, all of which terms have subtly different connotations in different contexts, though all are reducible to a more precise set of simple primitives. It applies also (and here the TEI in me sits up and smirks) to the distinction between data and metadata. Many of the database systems of the eighties and nineties insisted that you should abstract away all the metadata for your systems into a special kind of database variously called a data dictionary, catalogue, or schema, using entirely different tools and techniques from those used to manipulate the data itself. This is a needless obfuscation once you realise that you cannot do much with your data without also processing its metadata. In more recent times, one of the more striking improvements that XML made to SGML was the ability to express a schema and the objects it describes using the same language. Where, and how, what are usually called the semantics of an XML schema should be described remains a matter which only a few current XML systems (notably the TEI) explicitly consider.

Kent seems to have been a modest and likeable man. He retired in 2000, and died five years later, leaving a legacy of accessible and still provocative papers, most of them available from his website. Like those of many other pioneers in computer science, his academic qualifications come from unrelated fields (in his case, chemical engineering and maths); like many others he worked long hours for IBM and HP, but achieved fame and intellectual satisfaction outside the corporate world in the development of industry standards and professional associations. Maybe that experience is also what underlies the much quoted paragraphs which end his book:

[image: the closing paragraphs of Data and Reality]

Unexpected adieux

This sunny Sunday morning sees me setting off for a couple of weeks of TEI workshops, one in Paris, one in Graz. Nothing unusual there, nor in the fact that one is better prepared for than the other. But it has been an unusual week all the same, with two deaths and possibly a new beginning. The deaths first, since they are more difficult to write about. They perturb habitual patterns, making me confront and try to articulate parts of life that are hard to fit into a public blog, yet belong there in the absence of any other personal journal. (I say « public » but doubt that anyone except me reads this).

On Tuesday morning, I received from my friend Guy in Italy a text message saying that his partner Daniela had suffered a stroke and was in a coma; 24 hours later came another announcing her death. It is hard to react adequately to such events at a distance, and particularly so by text message, so I am waiting for a later, less painful time to talk to Guy. I learned from a mutual friend that the funeral was yesterday. I don’t want to obituarize, but Daniela was a very generous and very affectionate person, as well as a fiercely independent one. I am very glad that she did not stay long in her coma, nor return from it badly scarred; I am also very glad that the last time I saw her was at a joyful family occasion in London.

On Saturday evening, yesterday, I received an email informing me of another death, also coincidentally on Tuesday: Chris Sheppard, in whose company I passed my adolescence and early twenties, chasing the same girls, crashing the same teenage parties, growing up to pursue not the same but similar academic careers. Chris was the first person in my school to know where to buy Levi 501s and how to shrink them to fit (in the bath). He introduced me to the works of Raymond Chandler and the collection of cigarette packets. He was far too cool to take fashionable drugs at Oxford, but was on good terms with those who did. It was largely following his example that I returned to Oxford to take my master’s degree in 1969, a year behind him. As graduate students we shared a rented hovel in Stanton St John (chemical toilet, coal fire, wall to wall books) for a year during which Chris taught me almost everything I know about literary scholarship and the love of books, not by precept, but simply by example. I was best man at his wedding back in 1976, but our paths diverged thereafter. At his retirement a couple of years ago, he was head of special collections at Leeds University’s Brotherton Library, where I remember visiting him and being shown some of his more recondite treasures (a lock of Mozart’s hair, Conan Doyle’s photos of faery folk); I think the last time I saw him in person must have been at a lunch with P.N.O. Pullman some time in the 90s. Now of course that it is too late, I regret bitterly even the dwindling flow of Xmas card exchanges and the fact that my last email with him was more than six years ago.

As to the new beginning — well, it seems a small thing in this context, but I am now feeling quite positive about the idea of buying a house some distance from the back of beyond in rural France. A specific house, that is, of which perhaps more anon. But for now, I will go back to worrying about tomorrow’s training course at the EPHE in Paris, and that in Graz a week later.

A trip to La France Profonde

So this week I have mostly been not thinking about writing academic papers at all, which may or may not be a good thing. Instead I spent the first part of the week tidying up materials for the next TEI training course, which is now pretty well polished, and also for the one after that, which is not. The process of thinking about what materials to use follows a fairly recognisable pattern in which ambitious optimism (I’m going to completely revise this bit, make up something new and exciting, strike out into unknown territory) has to eventually give way to pragmatic opportunism (I’ve got this already, it just needs checking, minor tweaks, translating). When I am preparing two courses which are due within a few weeks of each other, this means that the first course moves on to the second stage while the second one is still rejoicing in the first stage. Which was the case this week. Oh, and about the only thing I have in common with Sam Beckett is that I can no longer say whether my material is originally French translated into English or the reverse, since most of it has been through the process both ways several times.

Aside from that, I spent most of the week on trains, or other forms of transport, on an expedition to La Vergne, returning at the weekend via Nottingham. As follows:

Date          Journey                               Depart  Arrive  Mode
Weds 27 Aug   home                                  0910            on foot/bus
              Oxford to Paddington                  1001    1130    train
              London St Pancras to Paris Nord       1225    1547    Eurostar
              Paris Austerlitz to La Souterraine    1652    1921    IC 3655
Thurs 28 Aug  La Souterraine to Guéret              0930    1005    TER bus
              Guéret to La Vergne                   1100    1130    taxi
              La Vergne to Bussières-Dunoise                        nice walk
              Bussières to La Souterraine           1445    1615    taxi
Frid 29 Aug   La Souterraine to Paris Austerlitz    1038    1318    IC 3620
              Paris Nord to St Pancras              1513    1640
              Kings Cross to Grantham               1719    1827
              Grantham to Nottingham                1857    1934
Sund 31 Aug   Nottingham to Oxford                  1310    1546

And what have we learned? It’s possible to get to and from La Vergne by train within a day for about £400 return (less if you cash in some Eurostar points). There’s not much happening in La Souterraine, and even less in Bussières-Dunoise. Guéret seems like a decent-sized town though and is accessible by train from at least two different directions. Generally speaking, the Creuse is not the back of beyond: it’s behind the back of beyond. There are many cows, and many hills. There are probably no decent restaurants for miles. There used to be a railway to transport potatoes and beef to Paris, but they took it away years ago, and now all that’s left is a rather nice rural track which has the merit of avoiding most of the aforesaid hills. And there’s a lake behind the house, where you can fish but not (allegedly) swim.

Genetic editors, please note

After finishing last week’s entry about my rash commitment to write a book chapter, I secretly vowed to monitor my progress by producing weekly reports here. I then spent the entire week (when not shopping, eating, or sleeping) working on next month’s TEI course in Paris, essentially a revision of the one I gave in May. Almost, but not quite, because half way through the week, I received another reminder of another rashly-made commitment, this time to deliver a public lecture in Grenoble in December. I promptly dashed off the following proposition :

Ceci n’est pas une pipe: l’importance de la modélisation aux humanités numériques

Lou Burnard

Récemment, on a vu emerger de l’ombre de la inter-disciplinarité une discipline nouvelle qui s’appelle les Humanités Numériques (Humanités Digitales en Suisse, Digital Humanities ailleurs). Elle represente d’abord la confrontation, et ensuite l’adaptation aux méthodes et possibilités des technologies nouvelles de l’entreprise intellectuel et scientifique de toute la domaine des sciences humaines. Ces technologies comportent notamment l’informatique, mais aussi de la statistique, de la linguistique computationelle, et de la visualisation des données. Mais en effet cet emergence ne serait qu’une évolution, voire une continuation, d’un débat assez vieux – déjà percéptible au 19ème siècle — qui opposerait les sciences dures aux sciences humaines. Dans cette intervention, je propose que cette opposition semble d’origine plus sociale que méthodologique, et que les méthodes des SHS et les méthodes des sciences dites dures ne sont pas tellement loin l’une de l’autre. C’est la création, l’évaluation, et la manipulation des modèles et des hypothèses qui caractérise tout effort d’élargissement de science, et la modélisation comme processus abstrait donc devrait être au centre de nos disciplines, qu’il s’agit de la modélisation des structures textuels et linguistiques, de la modélisation des procédures informatiques, ou de la modelisation du monde physique.

Né en 1946, Lou Burnard a pris son DEA en littérature anglaise du 19e siecle à Oxford en 1971. De 2002 a 2012, il est Directeur-adjoint aux Services informatiques de l’Université d’Oxford où il s’occupait des applications informatiques dans les domaines des sciences humaines depuis des années, surtout en linguistique de corpus (British National Corpus), en bibliothèque numérique (Oxford Text Archive), et en l’encodage de textes. Actuellement retraité, il est reconnu comme expert dans ces domaines. Il a travaillé en France comme prestateur de services aux agences Adonis et Hum-Num et ailleurs en France: il est membre des Comités Scientifiques des Maisons de Sciences de l’Homme à Caen et à Grenoble.

Cunning, or what, I said to myself: if I have to produce a chapter in English on a topic I know nothing about, I might as well repurpose it in French and get good value for money. And then, just to be on the safe side, I ran this text by my friend Marjorie, who is a native French speaker amongst many other good qualities, and thus well placed to tactfully remove the many barbarisms in this first draft. I was duly humbled by her response:

Ceci n’est pas une pipe : l’importance de la modélisation pour les humanités numériques
Lou Burnard

Récemment, on a vu émerger de l’ombre de l’inter-disciplinarité une discipline nouvelle qui s’appelle les Humanités Numériques (Humanités Digitales en Suisse, Digital Humanities ailleurs). Elle représente, pour tout le domaine des sciences humaines, la confrontation puis l’adaptation aux méthodes et possibilités des technologies nouvelles. Ces technologies comprennent notamment l’informatique, mais aussi la statistique, la linguistique computationelle, et la visualisation de données. Mais cette émergence ne serait en fait qu’une évolution, voire une continuation, d’un débat assez ancien – déjà perceptible au 19ème siècle — qui opposerait les sciences dures aux sciences humaines. Dans cette communication, j’avance l’idée que cette opposition semble d’origine plus sociale que méthodologique, et que les méthodes des SHS et celles des sciences dites dures ne sont pas si éloignées. C’est la création, l’évaluation, et la manipulation des modèles et des hypothèses qui caractérise tout effort d’élargissement de la science, et la modélisation comme processus abstrait devrait donc être au centre de nos disciplines, qu’il s’agisse de la modélisation des structures textuelles et linguistiques, de la modélisation des procédures informatiques, ou de la modélisation du monde physique.

Né en 1946, Lou Burnard a obtenu son DEA en littérature anglaise du XIXe siècle à Oxford en 1971. De 2002 à 2012, il a été Directeur-adjoint du Service informatique de l’Université d’Oxford, où il s’occupait depuis des années de l’applications de l’informatique au domaine des sciences humaines, surtout pour la linguistique de corpus (British National Corpus), les bibliothèques numériques (Oxford Text Archive), et l’encodage de textes. Actuellement retraité, il est un expert reconnu de ces domaines. Il a travaillé en France comme prestataire de services auprès d’Adonis et Huma-Num, et ailleurs en France : il est membre du Comités Scientifiques des Maisons de Sciences de l’Homme de Caen et de Grenoble.

Suitably chastened by this salutary reminder that my command of the French language is not as perfect as might be wished for, I removed the green ink, and sent it off to Grenoble, from which I rapidly received the following reminder that sometimes less is more:

Le résumé que vous nous avez envoyé est de fait plus important (environ 1300 caractères), je vous propose donc
(pour la version papier uniquement, la version web pouvant elle rester plus développée) de le réduire quelque peu. Seriez-vous
d'accord pour que, par exemple, nous enlevions la partie finale (cf proposition ci-dessous) et les déclinaisons autour du nommage
((Humanités Digitales en Suisse, Digital Humanities ailleurs)  ou préférez-vous le retoucher vous- même ?

Pour brochure : "Récemment, on a vu émerger de l'ombre de l'inter-disciplinarité une discipline
nouvelle qui s'appelle les Humanités Numériques. Elle représente pour tout le domaine des sciences humaines la confrontation, puis
l'adaptation aux méthodes et possibilités des technologies nouvelles. Ces technologies comprennent notamment l'informatique,
mais aussi la statistique, la linguistique computationelle, et la visualisation de données. Mais cette émergence ne serait en fait
qu'une évolution, voire une continuation, d'un débat assez ancien – déjà perceptible au 19ème siècle -- qui opposerait les sciences
dures aux sciences humaines. Dans cette communication, j'avance l'idée que cette opposition semble d'origine plus sociale que
méthodologique, et que les méthodes des SHS et celles des sciences dites dures ne sont pas si éloignée."

That’ll teach me. Maybe.

Metamodelling through: the prolegomena

So back in February I was asked to contribute a chapter to a new book being confected by some top people in the domain of the digital humanities, an invitation which I naturally accepted with alacrity, and only a small sense of alarm. I admit: I was flattered, though I naturally also felt it was about time my eminence was recognised in such a way.

Dashing off an abstract is an easy task, so I did that, and then forgot all about it. Here’s the abstract. Like other such pieces, it promises much, and even gets mildly polemical towards the end, which seemed to do the trick, as the proposal was, in due course, accepted.


Where do metamodels come from and how do they survive?
Lou Burnard

There is a very old joke about standards which says "Standards are a good thing because there are so many to choose from". Like many old jokes, this plays on an internal contradiction (the structuralist might say "opposition") in its topic. Standards are, on the one hand, of most benefit to the extent that they reflect and facilitate diversity; on the other, they are of necessity managed or even imposed by a centralising authority. This contradiction is particularly noticeable when the process of standardisation has been protracted because the technologies concerned are only gradually establishing themselves. We see this tension even in consumer electronics, where there is a financial, market-driven imperative to establish standards as rapidly as possible; but the same tension underlies the gradual evolution of ways of thought via communities of practice into de facto and (eventually) "real" standards. This article explores the evolution of standards for data modelling methodologies with regard to this tension. It considers some significant early experiments with the application of data modelling techniques to humanities research data (Manfred Thaller; J-C Gardin) and discusses to what extent some researchers simply adopted technical standards emerging in the wider data processing community (relational databases, information modelling), while other communities strove to define their own models (AI, language understanding systems). It will present in some detail the theoretical model (metamodel) underlying the Text Encoding Initiative's approach to standardisation and ask whether, over time, all such community-based efforts are forced further towards convergence and away from diversity. The TEI currently maintains a balance between the "do it like this" and "describe it like this" schools of standardisation; in the long run, it therefore risks being superseded by advocates of the latter who distrust the former, or advocates of the former who are impatient with the latter.
Oxford, 1 Mar 2014

Summer came and summer is now going, and this particular bird is coming home to roost. I received last week a polite reminder that my manuscript should be delivered by the end of the current month, should conform to a defined house style, and would I please sign in blood the form I was sent back in April assigning my rights in this non-existent work to non-existent publishers Snipcock and Tweed? Naturally I replied at once pleading for a stay of execution (but ignoring the rights assignment question), which was graciously accorded, somewhat to my surprise, even unto mid October. So now I really have little excuse not to find out what grand idea this abstract is abstracted from, get down to doing the research it grandly promises to summarise, and write the wretched piece. If only I didn’t have all those other more interesting (or less interesting but more urgent) things to do.

Well, let’s see. I plan to use this blog as a record of the painful process, just so that in years to come I can look back and see where it all went horribly wrong. At least no-one is likely to find me here.


What does the textual scholar require of computer science?

[Here’s the text of the talk I gave under the above title at the Fifth Potsdamer I-Science-Tag « Digital Humanities Meets Information Science » on 19 March. I haven’t revised it properly yet — there’s nothing like reading a text aloud for making you aware of the places where it’s wandering off into the land of waffle — but here it is anyway]

A recent diatribe by Roger Scruton (‘Scientism in the Arts and Humanities’, in New Atlantis, Fall 2013) has got me thinking about that old chestnut “what is the digital humanities”. Scruton argues passionately and persuasively against what he terms “scientism” – the pretension to scientific method – in the humanities, reserving particular disdain for the notion of “research” in the humanities as the term is currently used by cross-disciplinary “xxxx-studies” in humanities departments across the English-speaking world. He points out that “research” in the sciences is concerned with the establishment by scientific method of evidence to support or refute a pre-existing hypothesis about the world, whereas in the Humanities, it is applied to just about any kind of activity that may add to the sum of cultural knowledge at our collective disposal, or may simply act as a substitute for such knowledge. I was struck by his unashamedly Arnoldian appropriation of the term “culture”, and what follows represents some further thoughts along similar lines.


If science aims to deepen our understanding of “the world”, and the humanities to deepen our understanding of “culture”, we do need to find a definition for culture which goes beyond simply saying (as Scruton does) that it is about the “I and I” (probably not so much a hint of Rastafarian influence as an insistence on the subjectivity of cultural thinking), though it is true that any account of culture which ignores its effects on the individual cultural consumer will be sadly deficient. The laws of physics operate whether we know about them or not; the same cannot be said of cultural norms. And yes, of course, culture, particularly “high culture”, is a social and political construct, reflecting or reacting against the social and political power structures of the context in which it is articulated, and thus seemingly entirely contextual and contingent. But such naïve cultural relativism simply ignores the effectiveness with which the very contingency of culture also reveals, often unconsciously, its context, enabling us to construct hypotheses around the social and political norms concerned, and to assess it with reference to a wider context. The pre-occupations of human culture have not changed so much over the centuries, though different reactions to (say) birth, sexual partnership, time, death, and the construction of society are readily discernible, as are different reactions to those reactions. It seems to me that a study of culture, in the sense for which the Germans used to use the term Geisteswissenschaft, is a study of human reactions to, and constructions of, the world, and of those constructions. I further suggest that the relative merits of the various possible explanations it offers may be assessed in the same way as we evaluate purely scientific explanations.


A scientific explanation is valued according to the effectiveness with which it provides evidence in support of a hypothesis. If however the hypothesis is very general, for example that there is a single elegant principle governing the behaviour of space, mass, and time, it may not readily be identifiable as a hypothesis. When Eco says (in Interpretation and overinterpretation, 1992) that we value Copernicus’ model of the Universe more than Ptolemy’s not only because the former explains aspects left mysterious by the latter, but also because Copernicus enables us to understand the reasoning behind Ptolemy, he is not simply applying a humanistic perspective, exercising the hermeneutic meme to rhetorical effect, but demonstrating that evaluation always proceeds in the same way, whether we are considering the motions of the planets, or the relative merits of 19th century pulp fictions. For cultural objects do exist in the real world, and the cultural readings which confer “cultural” status upon them are also phenomena of the real world. Hence there is nothing inherently implausible about using scientific methods to gain some understanding of their behaviour, and how they function.

We should not however fall into the trap of supposing that in applying such methods to generate “scientistic” descriptions we have exhausted all there is of value in understanding a cultural object, a work of art. The history of a cultural object includes the history of its status considered as a work of art, but its meaning goes beyond the aggregation of perceptions of it as manifested by recorded opinion. Some of those perceptions may be ill-conceived or unhelpful, failing the Eco test of greater explanatory power for example, or other conceptual norms. To read King Lear solely as a political argument about kingship ignores the greater resonance of what it has to tell us about family life. To read Hamlet solely as an instance of the vogue for “revenge” tragedies that seems to have occurred on the English stage around the turn of the 16th century seems similarly wide of the mark. Contemporary African readers of Dickens’ Great Expectations sometimes reduce it to a fable in which Pip’s innocent life as a village dweller is corrupted by wealth and social class as soon as he moves to the town. Such a reading is one which Dickens might have recognised, and which the text certainly licenses, but historically-minded critics may still feel that there is something wrong with implicitly equating the experience of a 20th century upwardly-mobile African villager and that of an imagined member of the 19th century rural poor. (Even so, a judgement we might consider inappropriate on the grounds of anachronism when applied to a specific cultural product – for example, the use of racist or sexist terms in early 20th century writings – is surely appropriate when applied to the context in which such writings are created or delivered; indeed, the writings constitute essential evidence warranting such judgements.)

Consider, for example, linguistics. Language is surely the archetypal manifestation of a cultural object, almost a metaphor for culture itself (we talk about the “language of art”, we say that paintings and poets “talk to us” in a particular way, we even talk of a “vernacular” architecture). Over the last few decades, it has become increasingly clear that new technologies have facilitated a new perspective on the ways languages are used, hence how they change, and even perhaps what fundamentally they are. Corpus linguistics emphasizes the performative aspects of language, seeking to identify recurrent, possibly unconscious, regularities of usage, patterns which demand an explanation. Some have even claimed that no linguistic structures exist beyond such regularities of usage and the patterns associated with them, that there is no such thing as “grammar” analogous to the laws of physics in the real world. Even so, some explanation has to be found for these patternings. It is not necessary to subscribe to atavistic Chomskyan theories about innate grammar to seek explanations for them in terms of some general (and falsifiable) hypotheses about how languages function, for example to explain language variation and change by reference to a principle that innovation must always show itself first as deviation and is frequently associated with an assertion of group identity, that language users always value mutual comprehension above formal coherence or adherence to predefined norms, and so on.

So we should avoid depending only on a scientifically-derived and statistically-justified assessment of the facts of cultural reception. The history of digital humanities is dotted with the corpses of over-enthusiastic systematisers, from T.C. Mendenhall’s “characteristic curves of composition” to J.F. Burrows’s reduction of Jane Austen’s style to vectors of frequency data (Computation into Criticism, OUP, 1987). This is not, of course, to say that statistical stylometry has nothing to tell us; just that it can only ever be a means to an end. The most scientific of stylometricians will always use the objective evidence revealed by their analysis in support of an entirely subjective judgment, be it about authorship or about style. As Stanley Fish did not quite say “There is always a text in this class”: in all such judgements, the constructed text, the reading of the evidence, is the end result of the research, whether it is obtained by meticulous statistical methods or good old fashioned introspection. And I tend to agree with Arnold, and with Scruton, that constructing such readings and transmitting them is actually the purpose of the Humanities.


A reading is however by no means the same kind of thing as a model.

[image: model]

When I give introductory talks about the purpose and nature of text encoding, I often use the following schema to represent the distinction:

In its usual context, this schema is meant to show several things:

  1. The process of transforming resources (cultural objects such as books, paintings, historical documents etc.) into digital form is always a form of re-presentation, abstraction, reduction, reinterpretation, or encoding. Or, one might say, reading.
  2. The results of that transformation into digital form can be analysed and re-interpreted, automatically giving rise to an enriched version of that reading, which in turn can continue to be enriched by analysis in a kind of virtuous hermeneutic circle
  3. The process of encoding, and the process/es of analysis, must however be informed by the same abstract model

Perhaps this is merely a long winded way of saying that you cannot get more out of a system than you put into it, but it does suggest that the conceptual model underlying a set of readings is a different kind of thing from any of those readings and operates at a higher level of abstraction.

I freely confess that my ideas about what the humanities are or should be were formed during a distant epoch: the end of the 1960s. And my ideas about what computer science is or should be were formed during one that feels even more remote : the end of the 1970s. I have also lived long enough in the hinterland between the two disciplines to see how the intellectual territory laid claim to by either has evolved.

In the 1960s, the discipline associated with the study of English literature, at least as I experienced it at Oxford University, was going through one of its periodic fits of self-doubt. At other universities, these were tumultuous times, as the waves of what was to become known as Theory (with a capital T) began to sweep away the Arnoldian consensus that works of art existed independently from their creators and consumers and were invested with an innate cultural value. Even at Oxford, traditionally skeptical about such French (or worse, Cambridge) vulgarity as the search for theory, it was advisable to be aware of such theoreticians as Beardsley and Wimsatt, and to be able to shoot down the intentionalist and affective fallacies. We agreed that an understanding of its author’s intention (as far as this could be determined) did not exhaust the meaning of a work of literature, any more than did an itemisation of its effects as recorded by its readers. We felt obscurely that we needed to place works within their historical and social context, to assess the extent to which they deviated from or confirmed reader expectation at different times, but we lacked the tools to do that, other than by the painstaking process of reading and remembering many, many books. We lacked an abstract model of how literature functioned or what it was, and did not know how to construct one.


The computer science I encountered at the end of the 1970s, by contrast, seemed obsessed by ways of representing knowledge, and constructing models. The entity-relationship model was succeeded by the Codasyl network model, which was blown away by the relational model, just as the giant mainframes began to be blown away by distributed networks of “personal” computers. Under the influence of large amounts of money and requirements for increasingly complex centralized information systems, these modelling techniques naturally evolved into methodologies such as SSADM (Structured Systems Analysis and Design Method, a set of standards developed in the early 1980s for systems analysis and application design widely used for UK government computing projects) . It is easy to poke fun at that expansive pre-web era, in which modish re-brandings of essentially identical techniques succeeded each other with confusing regularity, always with extravagant claims of advanced capabilities just around the corner, in the “next generation” architecture. The next generation when it actually arrived in the nineties was distributed, decentralized, and almost entirely uninterested in all of the effort which the database designers and conceptual modellers of previous generations had put into trying to construct and impose a federated approach to the representation and storage of knowledge. (Which is why we now see the reinvention of logic programming in the form of linked data: but that is a different story.)


Nevertheless, like many others at the time, I found that the tools and techniques of computer science, though they might be described in terms of a particular jargon, and though their field of application might seem entirely alien, still had something to offer the humanist. Could it be that an abstract model for the way that texts and documents function – which I take to be the essential business of the humanities – might be expressed using the same language as that used to model the data flows and processing requirements of East Midlands Gas?

It seemed clear to me that texts and documents should be described from at least three perspectives:

[image: txttrin]

as physical objects with a visual representation; as linguistic objects made up of words and phrases drawn from some kind of linguistic system; and as intensional objects with reference to real-world objects, events, or entities. Most computer systems of the time tended to prefer one or other of these aspects. A word processor would help you produce nice printed copies of your documents; an information retrieval system would help you investigate their language; a database would help you describe what they were about. Systems which crossed these frontiers, enabling you to control the appearance of particular words used to describe felonious transactions in a court record, for example, were harder to find, and usually had to be custom-built, with many compromises along the way.

With the arrival of markup languages such as SGML in 1986 and XML a decade later, it became possible at last to describe a document in a detailed way independently of whichever of these three aspects was to predominate in its processing, and hence in a way that facilitated all of them equally. And with the arrival of the Text Encoding Initiative around the same time, an extraordinary adventure in document modelling got underway. Much has been written about the TEI (not all of it by me) and its significance; my favourite comment is that whatever else we may say of the TEI Guidelines, as Basil Bunting said of Pound’s ‘Cantos’, “they resemble the Himalayas: you can ignore them if you like – but you will have to go an awfully long way round.” The TEI’s relevance to the present paper is that it represented the first and so far only time that scholars from across the humanities disciplines were successfully corralled into achieving some kind of consensus about the “significant particularities” of the documents they studied. The TEI was (and perhaps remains) a unique exercise in inventorising the components of the models underlying research in the humanities, from the disparate points of view of lexicographers, linguists, critical editors, manuscript scholars, historians, literary scholars, and librarians. To find an abstract language adequate to represent such divergent perspectives within a single framework we naturally sought to apply data modelling techniques inherited from computer science, expressed not in UML or SQL but using the new text-friendly features of SGML. The rest, as they say, is history.

If the success of the TEI shows us that the modelling techniques inherent to computer science could successfully be imported and made to function within the humanities paradigm, it seems reasonable to conclude by asking whether this is a unique instance of such synergy.

So, parodying Monty Python: what did computer science ever do for us humanists? Unsurprisingly perhaps, the things that working textual scholars seem to appreciate most about the impact of information technologies on their working practices are all things that computer science as a discipline tends to take for granted. When asked a version of the question in my working title for this piece, an English professor of my acquaintance replied:

“I think I owe the discipline a great deal… the advantages of on-line ordering before a visit to the British Library (say), and (from home) easy access to bibliographical and biographical information when preparing a book or essay ms for the press. I’ve regularly used Google to track down unattributed quotations which might otherwise have taken me ages to locate; I also use the electronic databases ECCO and EEBO, although I think the interfaces and general tractability have some way to go. I ought to add, I think, the sheer convenience of being able to assemble large and complex texts – such as editions – electronically, where relevant information comes to hand over a period of time. I only wish word-processing had been available when I completed my Ph.D. in 1979… Finally, I assume that without information science there could be no email, and without email I think that academic exchange as we know it might grind to a halt.”

This reply perhaps demonstrates how deeply embedded information science has become. The aspects selected by my colleague – networked access to information resources both of the kind traditionally held by libraries and of the kind traditionally embodied in one’s peers – constitute a change in the knowledge infrastructure, the context in which work is done. There is much to be added if we are to give an adequate account of that infrastructure: about the politics of open source access, about the alleged democratising (or to use the French word vulgarisation) of access to cultural resources, about all the ways in which the Internet has transformed our ways of knowing about the world, and the world that we know about. “Never before have so many people known so little about so much” … but these changes are driven more by commercial and social imperatives than they are by the interplay of academic disciplines which is my subject.

My colleague’s reference to word processing also hints at a more subtle change in the way that work itself is done. Of course writing on a word processor is only superficially like writing on a typewriter, just as a typewriter is only superficially like a quill pen. The extent of quantitative change in going from a machine in which making corrections is an expensive and limited process to one in which documents are never finished, such is their fluidity and plasticity, really does approximate to a qualitative change. In the 90s this occasioned anxiety about apparently fundamental shifts in the very nature of scholarly communication, even the thinking process itself, induced by the spread of new technologies. A couple of decades later, in a seemingly entirely fragmented and decentred world, drowning in media which seem to be dominated by Twitter and sound bites, we do well to remember that there is a positive side to this transformation.

In placing first the availability of digital resources, however, I think my colleague hits the mark exactly. The challenge for computer science has always been to find better tools for coming to terms with information glut, whether in the form of paper archives or millions of digitized books. The success of Google may have suggested to some that the indexing and cataloguing techniques associated with classical information retrieval were entirely superannuated. But that model of the document as witness to a mode of expression, a particular discourse, suggests that such a view is premature. Indexing techniques are beginning to take on new and more sophisticated clothing, their function rebranded as text mining or text modelling. If it is the words that conspire to form the meaning of a text, we should be able to formulate new, more coherent, and better informed hypotheses about that meaning on the basis of their relative co-occurrences and absences in the immense bodies of digital text now at our disposal. To quote Ted Underwood, “The notion that documents are produced by discourses rather than authors is alien to common sense, but not alien to literary theory.” As we do so, the availability for the first time of massive quantities of digital text structured and organized in terms of our traditional models of text and textuality (rather than their purely visual properties) will enable us to make richer (and thus more explanatory) models against which to judge the salience of individual works, and in terms of which to categorise their context. Rather than looking for the proverbial needle in a haystack, we should start considering why hay is such a good home for it. I do not know whether that is a notion that computer science has fully assimilated as yet.

Encoding the history of the OuLiPo

At the beginning of February, I had the pleasure of co-organising (with Sebastian Rahtz, Camille Bloomfield, and Hélène Campaignolle-Catel) a workshop on data capture, as the second event in the Algoritm Seminar Series which forms part of an interesting ANR-funded project called DifdePo. The project is a collaboration between the BnF and Ecritures de modernité, a research unit located at Paris III, and its objectives include the creation of a TEI-based digital archive of the papers of the OuLiPo, which are currently stashed away in boxes at the Bibliothèque Nationale’s Arsenal depository. The papers include letters, photos, press cuttings, postcards, drafts, and notes of all sorts, but for the purpose of this exercise we decided to focus on the records of the OuLiPo’s regular meetings, which began back in the early 1960s. The archive has already been catalogued, and work is in hand to produce digital images of a sizeable proportion of it. The object of our workshop was to explore ways of transcribing these documents, given that the project has very little funding, and will therefore have to rely on the good will of volunteer transcribers, enthused by things OuLiPien but maybe a little deficient in TEI knowledge.

About a dozen people participated, most of them surviving to the end of the day. We began by asking them to transcribe a page from a small collection of pre-selected digital page images, using Word. (I freely admit to a degree of smugness on discovering at the last minute that the teaching room was initially equipped only with old-style doc-producing Word, which had to be upgraded to a more modern docx-producing version at rather short notice by the unflappable Joël.) This exercise demonstrated, as we had hoped, quite a bit of variation in what exactly should be transferred from the image to the text, and on what editorial principles, thus motivating a useful initial discussion about the principles and praxis of text encoding. One of the participants proposed (unprompted) the principle of « fidelity » to the source, while another argued repeatedly for « capturing the meaning ».

Once lulled into a false sense of security by this exercise, participants were exposed to the weirdness of an XML-editing environment using everyone’s favourite XML editor, oXygen, and my usual tutorial — create a document, learn how to tag parts of it, learn how to manipulate the structure, etc. We then offered them a more demanding workflow, involving first capturing a document in Word using a Word template which defined styles to highlight a number of significant features (headings, list items, etc., but also personal names and the like), secondly converting this to a TEI form, using OxGarage and a specialised profile, thirdly looking at (and possibly modifying) that in oXygen, and then converting it back to Word to confirm the feasibility of round-tripping. Sebastian Rahtz of Oxford (whom God preserve) invested quite a bit of pre-workshop effort into setting up the necessary infrastructure for this, and making sure that it all worked correctly on the day. He also made it possible for us to inflict on the encoders a third alternative approach, based on an experimental installation of Ben Brumfield’s « From The Page » crowd-sourcing prototype software. I had expected this to be everyone’s favourite, but (maybe because we had already by then sensitized them to the delights of structural markup) our encoders seemed to find that the simplicity of its interface made it hard to take seriously. We had prepared tutorial scripts for each of the three approaches (TEI source code available from my tei-fr repository, if you’re interested), so I was able to spend some of the time wandering about taking photos of hard-working encoders.
By the end of the day, everyone had tried all three approaches, and everyone had produced a couple of TEI XML files conforming to a simple transcription schema I had prepared earlier. We collected them all up and Sebastian showed how our pretend archive could be displayed on a web page, complete with corresponding page images, vocabulary lists, and a personography. This was (of course) all done with a straightforward customization of the standard TEI-HTML stylesheets, now available in the Stylesheets package as part of the DifdePo profile.
Conclusions? We still don’t really know whether our TEI-XML transcriptions are aiming for « fidelity » or « meaning », but we have at least demonstrated the possibility of either (or both). And we do know that the participants all seemed to be more enthusiastic about using the customized-Word-template approach than either raw oXygen or the (possibly over-cooked) From The Page. We didn’t explore the idea of a pre-customised oXygen author-mode interface, which might well repay the necessary investment of effort if there is a lot of metadata to be entered, for example.


Joël and I sample the oXygen

I take the liberty of listing the names of the registered participants, for their greater glory:

  • Camille Bloomfield
  • Hélène Campaignolle-Catel
  • Paula Klein (Projet DifdePo)
  • Chris Clarke (Projet DifdePo)
  • Jeanne Devautour
  • Julie Bernard (Poitiers)
  • Marie Bonnot
  • Marianne di Benedetto (ENS Lyon)
  • Guillermo Hector
  • Pradeep Claassen
  • Louise Kari-Merau
  • Leïla Berlot
  • Barbara Servant (Univ Rennes II)
  • Clara de Reigniac
  • Gabrielle Bruzzone (Poitiers)
  • Claire Leroy

All affiliated with Paris III, unless otherwise indicated.

Lodelisation

Lodel (Logiciel d’édition électronique) is the name of the CMS which drives Open Editions, one of Europe’s leading open access publishers. Back in 2009, Marin Dacos announced at the TEI council meeting in Lyon that Lodel would start using a TEI schema for its internal processing, while continuing to accept manuscripts for publication in any of the commonly used office document formats. Documents would be worked on in ODT, and automatically converted to a simple TEI format for internal processing, from which they would be converted for publication on the web and on paper.

Documentation subsequently appeared on how to prepare documents in TEI for processing by Lodel (in French at http://lodel.org/701 and also in English). An XSD schema for it is documented at http://lodel.org/715.

This blog entry summarizes what I needed to do to a real TEI document (specifically my forthcoming title What is the TEI?) to get it to work with Lodel: the full story is implicit in an XSLT stylesheet I wrote for the purpose. Actually, when I say ‘I’, I should make clear that the conversion was in fact handled by the nice people at Open Editions, who were remarkably patient with my eccentric use of TEI, and my even more eccentric wish to generate a Lodel document directly with as little manual intervention as possible. My thanks to Jean-François Rivière and Martin Dulong from Open Editions for their helpfulness, both in steering my TEI manuscript through the process, and in responding politely to my inane questions about what on earth was wrong with my lovely tagging.

The following list shows (in no particular order) the chief changes I found necessary, some of which were a bit unexpected.

  1. As might be expected, the Lodel schema doesn’t have any of the following semantic elements, which I have found useful when marking up technical documents: <gi>, <att>, <ident>, <val>. More surprisingly perhaps, it doesn’t seem to have <foreign>, <emph>, <soCalled>, <mentioned>, <q> or <quote> either. My stylesheet turned each of them into an <hi> with a rendition pointer (<hi rendition="#gi">, for example), and also generated a <rendition> element with an appropriate default style for each (see the sketch at the end of this section).
  2. The Lodel schema doesn’t allow lists or quotes to be contained by paragraphs. This is a generic HTML limitation, if I understand aright, but that doesn’t make it any less annoying. Call me verbose if you will, but I often write a single para with a bit of prose, followed by a list, a bit more prose, and another list. My stylesheet had to do some clever fiddling to deal with this (tx SPQR) but this is one case where I think Lodel should be a bit more broad minded.
  3. In fact, the Lodel schema only knows about two kinds of list: type=ordered, which is numbered, and type=unordered, which is not. Gloss lists are not supported, so my stylesheet had to tweak <label> elements into a <hi rend="#label"> child at the start of an unordered list item (but with a @rendition of « gloss »).
  4. My TEI documents can have lots of XML examples, which are easier to read if they are wrapped in a differently-namespaced <egXML> container. The Lodel schema requires use of the <code> element, containing either a CDATA marked section inside it to preserve the layout, or XML tagging escaped by entity references. The only problem with this is that <code> is a phrase level element, not a block, which means that some hand tweaking is needed at the Lodel end.
  5. Lodel is intended for journal articles and manages each of them separately as a distinct TEI document. Chapters of a book have to be treated in the same way, which seems a bit odd — for example, each chapter gets its own TEI header. My stylesheet splits things up rather crudely, assuming that each top level <div> within the body is intended to be a separate document.
  6. Lodel insists on having an explicit indication of the nesting level of each subdivision, using (bizarrely) the @subtype attribute on <div> with values level1, level2, etc. My stylesheet grits its teeth and generates these automatically (again, see the sketch below), but I think this is one design aspect of Lodel which might merit a second thought.
  7. The Lodel schema doesn’t allow headings within anything except sections, so you cannot provide them for lists, tables, or figures without some fiddling about.
  8. Lodel doesn’t number headings for you. Even if you supply a number for a section (using the @n attribute on a <div>, as recommended in the Guidelines), Lodel will not use it. My stylesheet does nothing about this: I just decided to live without numbered sections.
  9. Lodel handles cross references using <ref> much as you’d expect, provided that the value of @target is a complete URL, i.e. a link outside the current document. This means you cannot cross-reference other sections of the document being encoded, which seems rather an odd restriction. Put together with the foregoing lack of automatic section numbering, this can make for quite a lot of rewriting.
  10. Lodel knows about <bibl>, but not <biblStruct> or <biblFull>. Up to a point. Most of the semantic elements defined for the content of bibliographic elements (<publisher>, <biblScope>, etc.) are allowed, but it doesn’t actually do anything with them. To produce a correctly formatted bibliography, such encodings have to be converted to a fully styled version, following the requirements of the Open Edition style guide. I wrote a stylesheet to do (most of) this for one small bibliography: in the general case something much more complicated would be necessary.

That last caveat is of course true of all the rest: I’ve only tested this process properly on one text, albeit a reasonably large one, and only on a born-digital document. If you’re thinking of authoring documents in TEI though, chances are you won’t do it significantly differently from me, so some of the issues I encountered will affect you too. And, for the avoidance of doubt, let me repeat that none of this is meant to discourage anyone from using Lodel!
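
To give a flavour of what is involved, here is a minimal sketch (mine, for illustration only: it is not the stylesheet actually used, and the identifiers are invented) of the kind of XSLT mapping described in points 1 and 6 above: phrase-level semantic elements become <hi> with a rendition pointer, and each <div> gets an explicit levelN value for @subtype.

<!-- sketch only: not the conversion stylesheet actually used -->
<xsl:stylesheet version="2.0"
  xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
  xmlns:tei="http://www.tei-c.org/ns/1.0"
  xmlns="http://www.tei-c.org/ns/1.0"
  exclude-result-prefixes="tei">

  <!-- copy everything not handled by a more specific template -->
  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>

  <!-- point 1: semantic phrase-level elements become <hi> with a rendition pointer -->
  <xsl:template match="tei:gi|tei:att|tei:ident|tei:val|tei:foreign|tei:emph|tei:soCalled|tei:mentioned|tei:q|tei:quote">
    <hi rendition="#{local-name()}"><xsl:apply-templates/></hi>
  </xsl:template>

  <!-- point 6: make the nesting level of each division explicit as @subtype -->
  <xsl:template match="tei:div">
    <div subtype="level{count(ancestor-or-self::tei:div)}">
      <xsl:apply-templates select="(@* except @subtype)|node()"/>
    </div>
  </xsl:template>

</xsl:stylesheet>

The real stylesheet also has to generate the corresponding <rendition> declarations in the header, split top-level divisions into separate documents, and so on; this fragment is only meant to show the general shape of the mapping.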

Poster Slamming

At the TEI 2013 member conference today, I had the pleasure of participating in the « Poster Slam », a well established TEI ritual in which each poster-presenter is given one minute (and one slide) to introduce the topic of their poster as a means of persuading people to come to it. Preferably in verse. This year, Syd made the fatal mistake of allowing presentations in languages other than English, providing they were accompanied by a translation. So Nicolas Larrousse and I naturally presented the following poem:

Le Tageur et L’Archiviste (The Tagger and the Archivist)

Le tageur ayant tagué tout l’été
Se trouva embarrassé l’avenir étant arrivé.
Pas un seul petit morceau
d’explication claire de ses travaux

Ze tagger having tagged all summer long
Found himself embarrassed when the future had arrived
Not one little bit of explanation survived for all his efforts

Il alla chercher des avis malins
Chez l’archiviste son copain
Le priant de lui prêter
De la sémantique pour tout regler.

E went to ask some tricky advice from his friend ze archivist
Begging im to lend him some semantics to sort sings out

Les archivistes ne sont pas créateurs
C’est là leur moindre défaut.
Que faisiez-vous au temps chaud ?
Dit-il à cet emprunteur.

But archivists are not creators, that’s their smallest problem
What did you do during the fine weather?
He asked the borrower

– Nuit et jour à tout venant
Je taguais, ne vous déplaise.
– Vous taguiez ? j’en suis fort aise.
Eh bien! transformez maintenant.

Day and night I was tagging for anyone, if you don’t mind
You were tagging? Oh that’s fine. So now you can do transformations!

Nicolas did the English bits, and I did the French bits, under the inspiration of the late great C. Trenet.

Further adventures with ODDs

This post is mostly an aide-memoire, since how to do the ODD things I want to do is not very well documented in the TEI as such.

First challenge

I have an ODD which was produced by webRoma some time ago and which (naturally) uses the traditional « exclude » syntax. I want to convert this to the new « include » format and also to ensure that it won’t get any of the new elements added to P5 since it was first defined. I proceed as follows:

  1. I look at the source of my ODD and I see the comment Roma inserted in the <sourceDesc> « created by ROMA on Monday 21st June 2010 »
  2. I go to the list of releases on the TEI sourceforge site to find which release of P5 must have been in use at that date. Judging by the dates there, it is probably release 1.6.0 that I want.
  3. Buried away in the standard release of the TEI Stylesheets there is a cool utility for converting an « exclusion » ODD into an « inclusion » one. It’s called tools/odd2nuodd.xsl and I run it like this:
saxon -p defaultSource=http://www.tei-c.org/Vault/P5/1.6.0/xml/tei/odd/p5subset.xml 
myOldODD.xml tools/odd2nuodd.xsl > myNewODD.xml

Note the inclusion of the 1.6.0 release number as the source directory to be used when the stylesheet starts looking for TEI definitions.
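
For anyone who hasn’t met the distinction before, the two styles look roughly like this (a schematic fragment of my own, not taken from the ODD in question, with invented identifiers and most moduleRefs omitted). The old « exclude » style pulls in a whole module and then deletes what it doesn’t want, which is why anything subsequently added to that module in P5 silently creeps into your schema; the new « include » style names exactly the elements wanted:

<!-- old « exclude » style: take a whole module, then delete -->
<schemaSpec ident="myOldSchema" start="TEI">
  <moduleRef key="core"/>
  <elementSpec ident="said" mode="delete"/>
  <elementSpec ident="stage" mode="delete"/>
</schemaSpec>

<!-- new « include » style: name only the elements actually wanted -->
<schemaSpec ident="myNewSchema" start="TEI">
  <moduleRef key="core" include="p hi list item head note"/>
</schemaSpec>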

Second challenge

I have two or more new style ODDs and I want to compare their use of the TEI to assess their interoperability. So far, I only have an approximation to an answer for this, in part because I am too lazy to improve the scripts I hacked together for it last time, in part because it’s actually a rather ill-defined problem, and hence hard.

The approximation goes as follows:

  1. Run an XSL transformation on each ODD in turn (sketched below, after the sample output), appending the results to a big text file listing element names and what happened to them in which ODD;
  2. Run a perl script (ouch) on the results of (1) to produce a summary table which starts like this:
<table>
<row role='label'><cell>Element</cell><cell>lodel</cell><cell>tei</cell><cell>score</cell></row>
<!-- 2 --><row><cell><ref target='http://www.tei-c.org/release/doc/tei-p5-doc/en/html/ref-TEI.html'>TEI</ref></cell> <cell>change</cell> <cell>use</cell><cell>2</cell></row>
<!-- 2 --><row><cell><ref target='http://www.tei-c.org/release/doc/tei-p5-doc/en/html/ref-ab.html'>ab</ref></cell> <cell>use</cell> <cell>use</cell><cell>2</cell></row>
<!-- 2 --><row><cell><ref target='http://www.tei-c.org/release/doc/tei-p5-doc/en/html/ref-abbr.html'>abbr</ref></cell> <cell>use</cell> <cell>use</cell><cell>2</cell></row>

OK, work is needed in this area. But it’s a start.
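
For the record, the transformation in step (1) amounts to something like the following sketch (illustrative only: the exact output format is an assumption, and a real version would also need to unpick moduleRef/@include lists). For each ODD it writes one tab-delimited line per element, giving the element name, what the ODD does with it, and the name of the ODD file:

<!-- sketch only: not the script actually used -->
<xsl:stylesheet version="2.0"
  xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
  xmlns:tei="http://www.tei-c.org/ns/1.0">

  <xsl:output method="text"/>
  <!-- use the ODD's file name to label each line of output -->
  <xsl:variable name="odd" select="tokenize(document-uri(/), '/')[last()]"/>

  <xsl:template match="/">
    <!-- elements pulled in by reference are simply 'used' -->
    <xsl:for-each select="//tei:elementRef">
      <xsl:value-of select="concat(@key, '&#9;use&#9;', $odd, '&#10;')"/>
    </xsl:for-each>
    <!-- elements added, changed or deleted by an elementSpec report their @mode -->
    <xsl:for-each select="//tei:elementSpec">
      <xsl:value-of select="concat(@ident, '&#9;', (@mode, 'add')[1], '&#9;', $odd, '&#10;')"/>
    </xsl:for-each>
  </xsl:template>

</xsl:stylesheet>

Concatenating that output for all the ODDs gives the big text file which the perl script then pivots into the table above.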


Interoperability of TEI projects: apotheosis or chimera?

This was the title (sounds better in French) of the closing talk I gave at an interesting workshop last week. A previous COST-funded meeting in Krakow had brought together Czech, French, Catalan, German, and Polish teams working on several different dictionaries of medieval Latin to elaborate the idea that maybe they could make their various lexica interoperable if only they could agree on a common format, for which the TEI seemed the most plausible candidate. Susanna Allés, the energetic organizer of this workshop, got funding for it from several sources, notably the ALLC (or European Association for Digital Humanities as it now prefers to be known). She also seems to have hit on the wheeze of inviting a number of luminaries to make the case for the TEI dictionary tagset (notably L. Romary, F. Glorieux, P. Banski); alas, in the event only I turned out to be available. Which was useful for me, since preparing for the workshop meant rediscovering all sorts of dusty and neglected (by me, though not by others) parts of the Guidelines.
The workshop was held in the CSIC (Spanish for CNRS) Institució Milà i Fontanals, which occupies a rather grand building conveniently located in the Raval, a picturesque if slightly seedy district of Barcelona to the left of the Ramblas, and we were all accommodated next door in the splendidly-named Investigators’ Residence on Calle Hospital. Barcelona is not a place for those uninterested in food and drink; and we were very well fed, in large quantities, if at strange hours and (on one occasion) after a lengthy walk up town through an unexpected tropical-style deluge. The ravioli stuffed with pears and cheese offered by the resto « En Ville » for lunch was particularly memorable, invidious though it is to single out this one occasion.

More intellectual fare was also on offer, of course. It is always a pleasure to arrive amongst a group of specialists personally unknown to me, and from a domain of which I am more or less totally ignorant, to find that word of the TEI has already reached them, and often in a far from superficial way. So I was made very happy indeed to hear Sabine Thuillier (currently working in Madrid on the Diccionario Griego-Español but ‘formed’, as they say, at the Ecole Nationale des Chartes) evangelise for the TEI as an international open source community, and impressed by the way she is implementing it in a workflow which, though its editors remain obstinately attached to WordPerfect, is still designed to produce a respectable TEI P5 version.
Similarly, the team responsible for the eLexicon Mediae Latinitatis Polonorum, led by Krzysztof Nowak from Krakow, while maintaining a proper scepticism about some aspects of the TEI’s conceptual model, was clearly persuaded of its virtues as an open standard, notably as evidenced by both the amount of open source software (they mentioned XTF, PhiloLogic, and TXM) and the number of comparable projects (they mentioned the Anglo-Norman Dictionary, the Glossarium DuCange, and several others) using TEI. Their workflow starts with an OCR phase, since they are starting from an extensive library of source texts, and then uses LibreOffice and a customised library of styles to enhance the text to the point where it can be automatically converted to TEI, thus (apparently independently) following the same path as is used by Lodel, OxGarage, Agora, and no doubt others, to combine the user-friendliness of a word-processing-style interface with the rigour of a TEI-structured maintenance format.

Catalonia has ambitions (as posters everywhere proclaim) to become politically independent of Spain, and certainly its linguistic independence is a well-established fact. As a confirmed non-speaker of either Spanish or Catalan (nor of Basque, Galician, or Portuguese for that matter) I regretfully let the interventions in those languages wash over me, and thus missed out notably on Jose Manuel de Bustamente’s insights on the relation between textual corpus and dictionary. I did however manage to understand the German colleagues present, since they made the effort to speak in English or French: Alexandra Gorbrecht from the Trier Centre for Digital Humanities, for example, gave a brief overview of the dozen or so dictionaries put together online at woerterbuchnetz.de with a well-designed query interface. Allegedly all of these dictionaries are locally stored in TEI XML, but as this is not currently exposed one cannot tell how consistently it has been done. None of the other major TEI dictionary projects in Germany I am aware of was represented here, presumably because none of them is specifically concerned with Mediaeval Latin. I had to console myself for the absence of Werner Wegstein from Würzburg by stealing one of his examples for my own talk.
Bruno Bon and Renaud Alexandre from the IRHT in Paris had the advantage, if advantage it be, of being able to develop their proposals for an over-arching Novum Glossarium Mediae Latinitatis on the basis of the already existing complete Glossarium of Du Cange, which has been freely available in TEI-inspired XML markup for some time now, thanks to the work of Frédéric Glorieux. The idea seems to be to develop a set of proposals able to express the (not inconsiderable) variation in practice amongst these and others working on different lexica of medieval Latin in Europe, and thus create what (inevitably) Bruno suggested would be called NGML (the Novum Glossarium Markup Language). As a first step they have set up an exploratory multilingual wiki with some nice visualisation tools, based on a few sample entries taken from each of the five different lexical projects (specifically, in Barcelona, Prague, Krakow, Munich and Paris), and are inviting more. This could be fun, though I think expressing NGML as a real TEI ODD would be more of a challenge.

Susanna Allés and her graduate student Frédérique Laugrost (on secondment from the Ecole Nationale des Chartes) talked about the specific problems they faced when starting to apply the TEI to the text of their dictionary: the Glossarium Mediae Latinitatis Cataloniae. Many of these are familiar, of course: notably those which derive directly from the wish to preserve the punctuation and use of abbreviation which characterize such sources and at the same time model the logical structure which they determine. Some of these problems do however point to aspects of the current TEI dictionary model which could be improved.

I started making a short list of such points during the workshop, but sadly did not get very far:

  • too many of the proposed TEI dictionary elements relate only to modern lexicographic practice. Deciding which ones to filter out to make a kind of TEI Lite for dictionaries would be very desirable.
  • an element for a « translated segment » is desired, even if it is just syntactic sugar for a generic element carrying a value for xml:lang other than that of the surrounding text
  • some dictionaries have entries which are large enough to have multiple paragraphs but there is no place for <p> in any model.entryLike element
  • when a term is identical in two or more languages, can xml:lang take more than one value? (I confidently said it could, but I think I am wrong)
  • how should you mark a word which is clearly readable in the text when its meaning is entirely uncertain? (I suggested <orig>, but there must be better ideas)
  • The typology currently used for <form> combines categories from entirely dissimilar taxonomies, e.g. @type=lemma is an entirely different kind of thing from @type=compound. Likewise, the typology one might want to use for <sense> should have more to do with the way the sense has evolved. To both these points I said (in my best French) « Bof ». Or, more precisely: it is only by receiving proper input from specialists in the field — those best able to define more appropriate typologies — that the TEI progresses…

I’m hoping that the undoubted success of this workshop will encourage the participants to form a SIG on the subject or (as Piotr Banski had previously suggested to several of them) to make an active contribution to the existing LingSig. Plenty of scope for very interesting work to come, not to mention the opportunity of returning to Barcelona for the paella which I somehow failed to find time for on this occasion.