Counting the books

As a follow-up to my previous rather excited posting, I have finally got round to actually trying to count how many copies of each title selected for the ELTeC English collection various prestigious national libraries hold. Here’s how I have operationalised the need for some kind of metric approximating the persistence, or canonicity, of a given title.

First I run a little XSLT script against the corpus to create a file full of lines like the following:

f @and @attr 1=1003 sinclair @attr 1=4 "modern flirtations"
set_marcdump ENG18410.usmarc
show all

This means:

      • find records in which the author field contains “sinclair” and the title contains the words “modern” and “flirtations”.
      • send the output to a file called ENG18410.usmarc
      • display all the results from that query

Creating this query automagically is not without problems. Including words like “the”, or punctuation like the question mark, is ill-advised. Some records include subtitles in their “titles” but most don’t; when they do, the subtitles may produce false hits: see further below.
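
By way of illustration, here is the same sort of clean-up done crudely in the shell (the real thing is a template in the XSLT script; the stop-word list is no more than a guess, and the sed step mimics the trimming of subtitles at the first colon or question mark evident in the query above):

echo "Modern Flirtations: or, A Month at Harrowgate?" \
 | sed 's/[:?].*//' \
 | tr 'A-Z' 'a-z' \
 | tr -d '[:punct:]' \
 | tr ' ' '\n' \
 | grep -v -w -E 'the|a|an|or|of' \
 | tr '\n' ' '
# leaves just: modern flirtations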

Next I throw this at a Z3950 server and go make myself a cup of tea while it chunters away. As noted in my previous posting, getting Z3950 access to the library in question is mostly just a matter of knowing the address of the server and its port, the name of a database, and sometimes (as with the British Library) also wheedling a login and password. The reason I use the recondite syntax above for my query input, and the reason that I accept the results in USMARC 21 format, is … that’s what every Z3950 server I have looked at so far promises to provide. Some have other exotic options for query or for output, but nothing else is universally guaranteed to work.
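
In case it’s not obvious, the batch run itself is then no more than something like this (blQueries.txt is a made-up name for the file of generated commands; the connection details, and the xxxxxxxx login placeholder, are the same ones given in the YAZ posting):

yaz-client -u xxxxxxxx z3950cat.bl.uk:9909/ZBLACU < blQueries.txt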

Returning with my cup of tea, I now have a bunch of inscrutable MARC 21 records tidily filed away. I wasted the best part of an evening yesterday trying but failing to find a simple online tool which would convert them into MARCXML or indeed anything readable; the best I could come up with was a perl utility called marcdump. Here’s the start of the output it gives me for ENG18410.usmarc:

LDR 00535nam a2200181uu 4500
001 006812208
005 20100212180700.0
008 040420s1841 xx || 000 ||eng
019 u  _aG11034382
040    _aUk
       _cUk
082 04 _a823
100 1  _aSinclair, Catherine,
       _d1800-1864.
245 10 _aModern flirtations :
       _bor, A month at Harrowgate /
       _cCatherine Sinclair. Vol. 1.
260    _a[S.l.] :
       _b[s.n.],
       _c1841.
336    _atext
       _2rdacontent
337    _aunmediated
       _2rdamedia
338    _avolume
       _2rdacarrier
852 41 _aBritish Library
       _bDSC
       _jW5/2649

Exciting stuff, eh. The useful bit here is the publication date, which appears as subfield _c of field 260 here (sadly, there are other possibilities), and, even more useful, the following, which appears at the end of the output file:

 Recs  Errs  Filename
-----  ----  --------
    4     0  ENG18410.usmarc

’Tis but a matter of moments to grep through these files and extract a list of record counts for each title, together with a list of publication dates.
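
For the record, the grepping is roughly this (a sketch only: it assumes each marcdump output has been saved as dumps/ENGxxxxx.txt, and that publication dates mostly turn up in the _cYYYY form shown above):

for f in dumps/ENG*.txt; do
  id=$(basename "$f" .txt)
  # the summary table at the end has a line like "4 0 ENG18410.usmarc"
  count=$(awk -v id="$id" '$3 == id ".usmarc" {print $1}' "$f")
  # publication dates, wherever a 260 subfield c (or similar) yields _cYYYY
  dates=$(grep -o '_c1[89][0-9][0-9]' "$f" | tr -d '_c' | paste -sd' ' -)
  echo "$id $count $dates"
done | sort -k2 -n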

Furthermore, and much to my relief, the counts do seem to reflect my initial expectations as to which titles would be highly rated and which not. The top ten titles in my 90 are (drumroll)…

 94 ENG18860 Hardy: The Mayor of Casterbridge
106 ENG18531 Yonge: The Heir of Redclyffe
135 ENG18621 Braddon: Lady Audley’s Secret
143 ENG18481 Dickens: Dombey and Son
148 ENG18610 Eliot: Silas Marner
152 ENG18530 Dickens: Bleak House
157 ENG18540 Dickens: Hard Times
168 ENG18480 Thackeray: Vanity Fair
298 ENG18471 Bronte: Wuthering Heights
664 ENG18652 Carroll: Alice in Wonderland

Nearly all of these would figure on any list of long-lasting 19th c English novels. An eyebrow might be raised by some in the English department about the appearance of Yonge and Braddon, but the explanation is simple: both ladies (or their publishers) were very fond of including the phrase “by the author of ‘Most Famous Title’” on the title pages of their less famous works, and I have not yet worked out how to remove such impostors as “Work you’ve never heard of (by the author of Most Famous Title)” from the results of a search for “Most Famous Title”.

Another eyebrow might be raised at the frequency distribution of the scores found: there is a very long tail, with nearly two-thirds of my 90 titles scoring 20 or less, while the top scorers, as shown above, score very much more. To some extent, this is explained by the crudity of my search technique, which will include musical adaptations, commentaries, versions for the use of slow readers, study notes, etc. etc., provided that “Most Famous Title” appears in the title somewhere. This worries me less, since the existence of such things is surely also testimony to the salience of the title in question. This factor does however have an inflationary effect on the scores, so that titles which don’t benefit from it appear lower than might be expected. “Middlemarch”, for example – widely regarded as amongst the greatest English novels of the period, but not subject to this sort of inflation – scores only 77, ahead of the second Sherlock Holmes novel “The Sign of Four” (72) but behind George Eliot’s closest rival for the depiction of provincial life, Mrs Gaskell’s “Mary Barton” (82).

But these scores should not be subjected to such close scrutiny. If we are looking for a proxy metric for the “impact factor” of these works, it’s not implausible to be guided by the numbers of different editions of them that have accumulated in our great national libraries. If we say that a score of less than (say) 20 suggests a low impact, and anything above (say) 50 a high one, we should not go too far wrong.

So far I have tested this procedure only on the British Library’s collection. An obvious next step is to try a different English-language catalogue (COPAC springs to mind) to check that the ranking is not too widely different. And then to try out a different language: the BnF also has a Z3950 server, so I plan to subject the French collection to the same treatment.

Hoorah for standards (and librarians)

Librarians do it with Z3950. Of course, I knew that all along. I just didn’t know what it implied for the non-librarian. I have been wondering for some time how to get at the bibliographic riches of our national and university library catalogues without using one of the artfully constructed snazzy web interfaces they all seem to have set up in the interests of “usability”. Which interfaces don’t usually include any means of answering such questions as “List dates of publication for all editions of this title which you hold”. BUT (tl;dr) most library catalogues also run a server which exposes their data via an antique API called “Z3950”, originally developed by the Library of Congress and others, apparently so that they could steal catalogue records from each other without the pain of sending round a man with a truck.

When I say antique, believe me I am not joking. Some Z3950 servers I have looked at (eg the Bodleian) will deliver records in funky modern formats like SGML, but the only format you can be sure of getting is US MARC 21. This is an old school binary format, remarkably similar to the ones I used to have to deal with when debugging IDMS databases or alien magnetic tape formats back in the 1970s. You have pointers and variable length fields and a mix of character data and binary and you really don’t want to hack this stuff in Python, honest. Which is why there are dozens of snazzy interfaces and convertors and alternative syntaxes listed in bewildering plenitude on the Library of Congress website. Some of them cost money. Most of them are for Windows only. I took an instant dislike to nearly all of them… until I discovered YAZ.

YAZ is a properly constructed library of utilities offering all sorts of features I don’t claim to understand; what I liked is that it also provides a small number of command line utilities which do things like access a z3950 server, search for records, save the results, and convert from US MARC to something a tad more readable. And it’s open source, and free, and runs on linux, just like that, no need to install lots of bloatware as well. A quick sudo apt install yaz* and I’m done.

Now, let’s say I want a list of the publication data for all copies of Disraeli’s Sybil held at the British Library. Simple. On the unix command line I type:

$ yaz-client -u xxxxxxxx z3950cat.bl.uk:9909/ZBLACU

(the row of xs there is my authentication string, which I promised the BL I wouldn’t share with anyone as a condition of getting access to their server for nothing. The rest is common knowledge though – see http://www.bl.uk/bibliographic/z3950configuration.html)

The yaz client operates a bit like telnet used to: at each prompt I can type special commands like “find” and “show” and see the results. By default it uses something called Prefix Query Notation, or PQN, though I could also use something called Common Command Language (CCL) “an international standard query language often used in libraries” according to Wikipedia.

Z> find @and @attr 1=1003 disraeli @attr 1=4 sybil

Don’t you just love that PQN syntax? It’s like typing in Latin. It means: find records with “disraeli” in the author field (Bib-1 use attribute 1003) and “sybil” in the title field (attribute 4). Who needs high-falutin nonsense like XQuery?

Then I can type:

Z>show 1+100

to see the first 100 records (it’s obvious, surely, that “n+m” must mean “up to m records starting at record number n”)

Even better, I can type

Z>set_marcdump wibble

and have a copy of all the marc records returned saved to the file wibble. Why should I do that? Because I can then process that file into something readable with another little command line utility called yaz-marcdump:

$ yaz-marcdump wibble | grep 260

which will convert wibble to something more readable, and pick out all the lines containing publication data! Yes, you do have to know what those MARC field labels mean, Cynthia.

And yes, there are features for batch mode operation so I could run my commands from a file, save the results, tidy them up and put my own super-simplistic interface on the whole business. I can see this is going to keep me happily occupied for quite a while.

Building the ELTeC : the quest continues

My quest for a reasonably reliable and complete list of 19th c. English novels, together with indications of available electronic versions thereof, has made a large leap forward as a result of the generosity of others.

Firstly, Professor Troy Bassett of Purdue University Fort Wayne, whom God Preserve, tells me that he will now start distributing regular extracts from his database at http://www.victorianresearch.org/atcl/index.php in CSV form for my (and others’) mungeing pleasure. I spent some time testing this out, using Libre Office to convert the CSV into an XML form which I could then reprocess with yet another xslt stylesheet. This worked reasonably well (eventually) and I was able to create a new version of this data in TEI, all 15,682 records of it. To merge in identifiers for VWWP, Internet Archive, Gutenberg, and Google texts I used a simple minded stylesheet which tries to match on the magic keys I created earlier, as discussed in previous blog postings. This worked, though not exactly with blistering speed, and I produced a second version of the Bassett TEI file with added references. Of those 15,682 titles, only 3,430 have at least one digital version and there are a total of 3,813 digital versions in all.

Secondly, and with some trepidation, I approached the sacred grove of Big Data and sent a plea for help to the nice people at HathiTrust, that extraordinary repository of 16 million or so books in digital form. You can download a full list of their holdings in a nice simple processable form from https://www.hathitrust.org/hathifiles : the catch is that their monthly snapshots contain 16 million (and counting) records derived, I think automatically, from contributing libraries’ catalogue systems, not all of them entirely consistent in their metadata usage, and certainly none of them clearly identifying which titles might be considered to be novels or have female authors. I did some initial poking around with a perl script and ascertained that I could whittle the 16,208,265 entries in the May 4th version of the file down to a mere 1,253,950 by selecting only those which are publicly accessible, have a publication date between 1849 and 1920, are books in English, and are not US government documents. These counts are inflated by the fact that we are counting volumes here, rather than works.
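
For the curious, the whittling itself is nothing clever. Done with awk rather than perl, it looks roughly like this; the column numbers are my reading of the Hathifiles documentation (2 = access, 16 = us_gov_doc_flag, 17 = rights_date_used, 19 = lang, 20 = bib_fmt), and the file name pattern is that of the monthly full dump, so both should be checked before trusting the counts:

zcat hathi_full_*.txt.gz | awk -F'\t' '
  # publicly accessible, not a US government document, dated 1849-1920,
  # in English, and catalogued as a book
  $2 == "allow" && $16 == "0" && $17 >= 1849 && $17 <= 1920 && $19 == "eng" && $20 == "BK"
' | wc -l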

Meanwhile, a helpful person on the HathiTrust help desk had got in touch pointing me to the existence of various selections created by other researchers before me, notably Ted Underwood’s file filtered_fiction_metadata.csv [available on Github at https://github.com/tedunderwood/character/blob/master/metadata/filtered_fiction_metadata.csv], which lists 93,708 volumes of fiction published between 1800 and 2007. “It focuses on fiction, but will include works in translation. It’s also not restricted to ‘the novel’; short stories and even folk tales are included. But we have tried to exclude fiction aimed explicitly at a juvenile audience across the whole timeline.” Underwood points out that “There is no authoritative master list of all English-language fiction from 1800 to the present. So scholars who want to do research at scale have to construct their own list” – which is true enough, though I wonder if he’s aware of Bassett’s work. Anyway, I wrote more scripts to process his CSV file into TEI, adding my magic keys to enable me to link his data with the titles in the Bassett database. All of which meant that I could provide Bassett with a list of HathiTrust links as well as those already provided for Gutenberg et al.

Bassett’s data includes some titles published only in the US, which I would like to exclude, as well as some “aimed explicitly at a juvenile audience” (the two sets of course have nothing in common, except that I would like to exclude them both). More annoyingly, it stops in 1901. I therefore went back to thinking about how to extract comparable data from the Hathifiles for the period 1901-1920. The most recent Hathi file (dated 1 June) has entries for 16,370,821 volumes. I tweaked my perl script to extract records for books in English published between 1900 and 1921, as before, tweaking it also to suppress duplicate titles, which gave me a more manageable but still daunting total of 348,232 titles.

The question is, as ever, how to pick out the novels from the chaff? The obvious answer is to apply some basic text analytic tools. I have a list of 15k titles which the obliging Bassett has already identified as novels. I have 348,232 titles extracted from the Hathitrust file. If I had to scan through them choosing the titles which are most likely to be novels, which words would catch my eye? Which words in fact appear significantly more frequently in the first list than their frequency distribution in the second would lead you to suppose? This sort of question is surely what computer linguists deal with ten times a day before breakfast.

I install Laurence Anthony’s excellent AntConc program from http://www.laurenceanthony.net/software.html and battle through its interface a bit. Here are the top twenty “keywords” for the Bassett title list, when I use the aforesaid HathiTrust titles list as the “base corpus” for comparison. The underlying statistic (need you ask) is log likelihood (4 term); the threshold being p < 0.05 (+Bonferroni), i.e. the defaults.
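
For the record, the keyness figure reported in the table below is, as far as I can tell, just the familiar log-likelihood ratio G² computed over a 2×2 contingency table (this word vs. all other words, Bassett titles vs. HathiTrust titles), which is presumably what AntConc’s “4 term” label refers to:

G² = 2 × Σ Oᵢ ln(Oᵢ / Eᵢ), summed over the table’s four cells (Oᵢ the observed counts, Eᵢ the counts expected if the two lists did not differ)

A large value simply means that a word turns up far more (or far less) often in one list than the other would lead you to expect.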

#Keyword Types: 611
#Keyword Tokens: 47261
#Search Hits: 0
Rank  Freq  Keyness     Effect  Keyword
 1    2693  + 15387.79  0.0556  novel
 2    8559  + 10677.59  0.0763  a
 3    1969  + 6571.89   0.0388  or
 4    1262  + 6402.77   0.0266  tale
 5    1426  + 4104.1    0.0283  story
 6     721  + 2599.46   0.0152  romance
 7    2195  + 1957.72   0.0327  s
 8     471  + 1072.51   0.0098  stories
 9     364  + 1058.08   0.0077  love
10     837  + 1019.33   0.016   life
11     355  + 1001.66   0.0075  tales
12    9268  + 875.37    0.0384  the
13     272  + 773.82    0.0058  adventures
14     163  + 594.6     0.0035  daughter
15     224  + 591.09    0.0048  lady
16     217  + 546.42    0.0046  woman
17     150  + 484.95    0.0032  wife
18     570  + 478.07    0.011   other
19     221  + 422.18    0.0047  little
20     131  + 395.37    0.0028  miss

This seems like pretty good evidence that simply looking for titles which contain the words “novel”, “tale”, “story”, “romance” or – this one a surprise – “or” might do pretty well. It also suggests that the Bassett novel is quite concerned about women, but that’s another story.
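
The crude version of that filter is no more than a grep over a one-title-per-line list (hathiTitles.txt is a made-up name, standing in for the extract described in the next paragraph):

grep -c -i -w -E 'novel|tale|story|romance' hathiTitles.txt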

Reassured, I re-tweak my Perl script to extract bibliographic records as before, but this time only if their title contains one of the words “novel”, “tale”, “story”, or “romance”. A glance at the results shows that this approach cuts down the number of titles to a more manageable 16,908 but still has many false positives (“My life and the story of the Gospel hymns and of sacred songs and solos”, for example). A more sophisticated approach is needed, clearly. Time to make a cup of tea.

Building the ELTeC (stage 0) … continued

Have at you, Project Gutenberg…

I am for sure not the first person to think it would be nice to try to make the Project Gutenberg metadata more easily machine-tractable. Matthew Jockers wrote a python script to hack usable metadata out of the individual texts back in 2010 (see this blog entry); Damon Cavar wrote some java to do something similar, but starting from the RDF form of the Gutenberg catalog, as part of an ambitious (but I think as yet incomplete) Project Gutenberg to TEI XML conversion project, last updated 2012. More recently, Jonathan Reeve has announced an interesting project which is hacking together various bits of Gutenberg, Gitenberg, and Wikipedia to make a Project Gutenberg database for text mining … one day.

My objectives are not so ambitious, and I like to keep things simple. I just want to know how many Gutenberg titles are listed in the Bassett database of 19th c. British fiction. (I’d also like to be able to extract a list of all British novels in English published for the first time between 1902 and 1920, but that’s a separate problem.) Having experimented with other plain text options, I reluctantly decided to start from the Gutenberg RDF catalogue. At least that is expressed using a syntax which XSLT can handle and validate. No claims that its semantics are entirely reliable, of course.

Step 1 is to download and unpack a massive zip file from the Gutenberg site. The RDF format data we want is linked to from a page in the Gutenberg wiki. It is massive because it actually contains nigh on 50,000 subdirectories, each containing a single file describing a single text. So, for example, the RDF format catalogue entry for text number 1234 is in the unpacked file cache/epub/1234/pg1234.rdf. When I looked there was also just one directory called DELETE-55495, which contained a variant of the entry for pg55485.rdf, but I pretended I hadn’t noticed that.

Step 2 is to develop and perfect a simple XSLT script to extract the useful grains from the enormous amount of chaff in each RDF file. This script (rdftotei) is designed to meet the needs of the ELTeC, so it rejects anything which is clearly outside the desired period (author born after 1920 or before 1800), or definitely not a novel (some records use a marc edt descriptor to show that they are edited compilations). If I could find a way of identifying books which are not in English I would exclude them too. It cranks out simplified TEI bibl records like this:

<bibl xml:id="10037" n="abeautifulpossibility|Black">
<title>A Beautiful Possibility</title>
<author dates="1857 1936">Black, Edith Ferguson</author>
</bibl>

As you can see, this includes a  magic key that I will later use for matching with other ELTeC bibliographic records, notably the Bassett database I blogged about last week.

Step 3 is to find a way of running this script against 50,000 files which does not cause my computer to melt down, and preferably will complete in my lifetime. My first simple-minded approach was a shell script that invokes saxon on each file. But this has to set up a JVM afresh each time it runs, so it takes forever. I considered glomming the individual files together into a smaller number of larger files, so that loading the JVM gets done less frequently, but this is fiddly because each of the individual files begins with an XML declaration that would have to be removed during the glomming process. A question to the oxygen users list elicits three helpful alternative suggestions in ten minutes, the easiest and quickest of which is to use a feature I didn’t even know existed in saxon: specifying a directory as input and as output. So with all my RDF files in the folder RDF and nothing in the directory RDFx, I do the following two shell commands:

saxon -s:RDF -o:RDFx rdftotei.xsl
cat RDFx/* > gutenList.xml

and the whole thing is done in a couple of minutes.

Step 4 is to repeat the process as before: pick out the magic keys and then look for overlaps between those keys and those in the Bassett database, like this:

saxon gutenList.xml getKeys.xsl > gutenKeys.txt
comm -12 <(sort gutenKeys.txt) <(sort bassetKeys.txt)

Result on the first round: 1478 Gutenberg titles are already known to Bassett. Not as many as I’d expected, but not bad. Here are the full results for the digital collections.

Out of 13,859 titles in Bassett’s database,  a total of 2937 appear in at least one of Gutenberg, Internet Archive, Google Books, or VWWP, i.e. more than 20% (which is better than I was expecting).  Here are the counts for the individual collections:

Gutenberg   Internet Archive   Google Books   VWWP
     1478               1155            594     32

 

Also to be expected, there’s a bit of overlap: 2638 appear in only one digital collection, 276 in two, and 23 in three. You can probably guess which titles those are, though one of them came as a bit of a surprise. What’s so great about Mary Ward’s “Marcella”?

Building the ELTeC : stage 0

Problem: if the ELTeC is supposed to represent in some sense the full range of novel production in a given language (EN in my case) for a given time slot (1850 to  1920, it says here) how do you find out what the population actually is before starting to sample it?

Enter at this point a wonderful database: Bassett, Troy J. At the Circulating Library: A Database of Victorian Fiction, 1837-1901. Victorian Research Web. [accessed 2018-02-09] (http://www.victorianresearch.org/atcl). I say “wonderful” advisedly: I have found nothing comparably complete and usable in a week of scratching around the internet.

According to Bassett’s technical notes, “This website was written on a Macintosh using a MySQL database and PHP.” Consequently its contents are pretty consistently organized and tagged, and consequently screen scraping and munging into a different format are both pretty easy: like this

 #!/bin/bash
 for number in {1850..1901}
 do
 echo "$number "
 wget "http://www.victorianresearch.org/atcl/show_year.php?year=$number" -O $number.html ; tidy -asxml -n --new-empty-tags image $number.html | saxon - bassetConv.xsl > $number.xml
 done
 exit 0

The `bassetConv.xsl` stylesheet generates a TEI format bibl entry for each of the 15,000 plus titles, like this:


<bibl xml:id="11169" n="acounselofperfection|Malet">
<author n="576">Lucas Malet.</author>
1 vol.  London: Kegan Paul.</bibl>
The numbers are the identifiers used by the mySQL database, so they should be unique. The @n attribute on the bibl element is a key I generate for matching purposes (see later).

I would rather like to know how many of these titles are available in digital form, and from where. The current database has links to Google Books (500 or so, it claims: I haven’t worked out how to find them without downloading the entire database) but nothing else so far as I can tell.

How to accomplish this?

My best plan so far is to extract lists of keys or fingerprints, derived from the cataloguing information supplied by each online repository, and then start looking for overlaps with Bassett. My assumption is that for any relevant title not to exist in Bassett would be rather remarkable. Some initial experiments suggest that the first few words of the title combined with the surname of the author is a reasonable approximation to a fingerprint for each title, though obviously not perfect.
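
For the record, the fingerprint amounts to something like the following shell function (a sketch only: the real keys are built inside the various XSLT and perl scripts, and their handling of subtitles is a little more careful than this):

make_key () {
  # keep the main title only (up to the first comma or colon), lower-case it,
  # strip spaces and punctuation, then append "|" and the author's surname
  main=$(printf '%s' "$1" | sed 's/[,:].*//' | tr 'A-Z' 'a-z' | tr -d '[:punct:][:space:]')
  printf '%s|%s\n' "$main" "$2"
}
make_key "A Beautiful Possibility" "Black"        # abeautifulpossibility|Black
make_key "Daphne, or Marriage a la Mode" "Ward"   # daphne|Ward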

So far I have looked at the following four online repositories:

1. Victorian Women Writers Project

This has 75 titles identified as fiction, and all in good TEI format; the following grabs the catalogue info for them

http://webapp1.dlib.indiana.edu/vwwp/search?docsPerPage=100&browseText=fiction&text1=fiction&field1=browse-genre&style=&smode=simple&brand=general

Life is too short to process all the resulting HTML: in any case I am sure if I wanted a proper XML catalogue my friends at Indiana would give me one. So I manually separate out the useful chunk of HTML and mangle it through a simple XSLT stylesheet (`vwwpConv.xsl`), to produce a file of entries like this:

<bibl xml:id="VAB7046" n="daphne|Ward">
<title>Daphne, or Marriage a la Mode</title>
<author>Ward, Humphry, Mrs., 1851–1920</author>
<publisher>London; New York: Cassell, 1909. 315 p.</publisher>
</bibl>

Note the cunningly constructed @n attribute supplying a key which, all things being equal, should match an entry in the Bassett database, if there is one.

2. The ebooks@adelaide project

This is a quirky site hosted by the University of Adelaide library which makes many texts available in a well structured XHTML format which is easy to munge into TEI. For info about this apparently little known but rather splendid project see https://ebooks.adelaide.edu.au/about/
Selecting relevant titles for our purposes is not so easy, so I grabbed a readable version of their entire catalogue in HTML from the website, and then grepped through it for potentially useful entries, resulting in a file full of lines like this:

<li><a href="/a/abbott/edwin/flatland/">Flatland: a romance of many dimensions / Edwin A. Abbott [1884]</a></li>

along with a lot of less useful lines like this

<li><a href="/a/aristotle/meteorology/">Meteorology / Aristotle; translated by E. W. Webster</a></li>

A perl script was the quickest way of munging this into minimal TEI entries like this:

<bibl><title>Flatland: a romance of many dimensions </title><author>Edwin A. Abbott </author> <date>1884</date><ref>/a/abbott/edwin/flatland/</ref></bibl>

I am not sure that there is much wheat of this kind in this chaff however: I will come back to it later.

3. And then there’s Gutenberg

Yes, the Gutenberg Project also has a catalogue: a huge one, which you can download as an incomprehensible RDF file or a plain text monster. I took the latter option, and hacked out of it 49639 lines like this:

<title>100 Desert Wildflowers in Natural Color</title><author>Natt Noyes Dodge</author><idno>54631</idno>

I won’t rehearse here the manifold  inconveniences of using Gutenberg texts. But it would be useful to come back to this and see how many of the Bassett titles are available here. As a first approximation, I first reduced the title list to just the first part of the title and the author’s surname, with no spaces or punctuation, and then used the invaluable unix  comm command to identify common lines in the two files. Obviously, this procedure is vulnerable to typos and inconsistent editorial practices, which are not uncommon, but in my first experiment I found 984 items with identical titles and authors in both the Gutenberg title list and the Bassett title list.

4. Internet Archive

This wonderful collection has a good search interface, spoiled somewhat by the unreliability of the data. I used it to grab all the entries for a collection called “19thcennov” which looked promising. Sorted by descending date, the very first item had a date of “1983” which, on inspection, turned out to be a typo for “1883”, so a good thing I didn’t use “date” to limit my search. OTOH, the second item in the list was something called “Mathematics in urban science, V : Catastrophe theory” published by the “Monticello, Ill. : Council of Planning Librarians”. No matter: at least the output is in XML, and the text identifiers used by the IA bear a striking resemblance to those I thought I had invented. My search gives me 7829 items which look like this:


<doc>
<str name="creator">North, William, d. 1854</str>
<str name="date">1847-01-01T00:00:00Z</str>
<str name="identifier">impostororbornwi03nort</str>
<str name="language">eng</str>
<str name="publisher">London : T.C. Newby</str>
<str name="title">The impostor, or, Born without a conscience</str>
<str name="volume">3</str>
</doc>

which I then munge into

<bibl xml:id="impostororbornwi01nort" n="theimpostor|North">
<author>North, William, d. 1854</author>
<title>The impostor, or, Born without a conscience</title>
London : T.C. Newby</bibl>

The IA gives each volume a separate catalogue entry, so these 7829 entries boil down to only 2739 unique keys which I can compare with the Bassett keys. On a first pass through I identify 1235 matches, i.e. nearly half. I also identify lots of ways of improving the matching procedure, but for the moment this seems like it might be useful. Though I am not sure that a success rate of around 10% in identifying matches is altogether worth shouting from the rooftops about.

Digitalising in Le Mans and eating in Oulx

Monday morning, July 3rd and here I am again at St Pancras… waiting for the Eurostar to Paris which is unexpectedly 30 mins late leaving, and therefore, likewise arriving. But I have plenty of time to get across Paris by metro to Montparnasse Le Bienvenu, and then by surface street, very hot and sunny, to the grand station where the TGVs hang out. How pleasant, a train system which works, I reflect as it zooms across the unutterably boring flat countryside of the Ile de France, and so to Le Mans, where I am due to address a DARIAH “Humanities At Scale” (whatever that means) funded workshop which rejoices in the title of Bibliotheca Digitalis, though I am pretty sure it will have nothing to do with foxgloves. Out of the station, five minutes up the road, and into the Hotel Chantecler, as advertised. Various emails insist I will be taken out for dinner by and with chums from Tours, and so it comes to pass. It’s time to be lionized, for tomorrow I am lecturing.

July 4th

The morning starts with a (guided) walk down to the Médiathèque Louis Aragon, which is a newer and less brutally modernist building than the Maison de Culture we pass en route, and which is also in a state of unexpected chaos because of a major reshelving exercise. All the paperbacks and DVDs and what have you are spread out on tables under the watchful glare of lady librarians. But this is nothing to do with us: we are in a nice lecture room where we are being welcomed in broken English by various dignitaries as per usual. I am second on the bill, after Jean-Yves Ramel from Tours who gives a comprehensible and accessible introduction to OCR and how it doesn’t quite work. After coffee, I pick up the theme and wax eloquent on what sort of not-quite-working we are talking about, the need to go “beyond the page” etc. etc. Then we all depart for lunch in a state of advanced self-satisfaction. Lunch is across town in a nice bistrot and is enlivened by the arrival of Mark Greengrass, whom I haven’t seen in a million years, not since he was starting down a route he seems now to have abandoned, digitizing cultural networks in the Hartlib Papers project at Sheffield. It’s also rather tasty, as French bistrot lunches tend to be when ordered well in advance. Back at the Médiathèque, our local librarian hosts address us in French, with a somewhat hesitant on-the-fly translation from Toshi, and then we get to hear from all of the participants, most of whom turn out to be EU researchers or librarians from Italy, Romania, Bulgaria, Hungary, etc. Impossible to take in fully some thirty or so projects, presented at lightning speed (two minutes each!), thus ensuring that only the outliers stick in the memory, but I gleaned a general impression of keen competence ready to be enlarged, as well as recognizing a couple of TEI acolytes. And so to the first of the formal public lectures: a thoughtful discussion about the relative influences of social networks and of the nascent publishing industry in the production of some major 17th century works. The lecture being public, Mark was obliged to deliver it in French, unlike the rest of the proceedings, and some of the good citizens of Le Mans joined us to benefit from it. Huzzah. And then it’s dinner time in a nice old restaurant, of course.

July 5th

I make no effort to arrive on time this morning, which means that the door of the Mediatheque is locked when I get there, so I have to send imploring text messages to gain entry. But those extra minutes in bed were worth it. Somewhat to my surprise, the two technical briefings being given (one from Aurelian Ruellet on how social networks should really be modelled; the other from Eduard Frunzeanu and Régis Robineau on how IIIF works) are both informative and accessible. I don’t think the former mentioned anything that the TEI does not already support; and the second reminded me how overdue a proper discussion of a bridge between the TEI and IIIF world views is. This led me to interrogate Robineau over lunch, perhaps a tad too aggressively. After lunch, we moved to another room for the first proper workshop session in which participants were invited to do some data modelling based on a 17th c register of permissions to print emanating from the royal Chancellerie in Paris, which they did enthusiastically and in groups. Most groups decided to design a database structure to hold extracts from the document, but a couple of hardy souls did make the effort of thinking about how they’d model the document itself. The evening was a fairly full social programme, starting with a visit to the celebrated Le Mans abbey, followed by a civic reception, and concluding with a guided tour of the old town. I behaved fairly deplorably at all of these, firstly by turning up at the wrong cathedral (I blame Pierre-Yves, who bought the beers); secondly by drinking and eating too much; and thirdly by sloping off early as soon as it became apparent that serious walking was required. Which is a pity, since the bits I did manage to see of the Plantagenet old town looked really rather nice. The same, indeed, might be said of the whole workshop. (See further my photo album.) But I have a scheduling conflict, which means that I have to skip back to Paris the next day.

July 6th

I rise late and take a lingering breakfast, before packing in a leisurely way and accidentally stealing the Hotel Chantecler’s TV remote. And then back to the station for the 1134 to Paris. It is still infeasibly hot and the countryside is still rather dull so I work on the DifDePo schema resolutely and don’t look out the window much. Back in Paris, I catch the bus to Gare de Lyon, walk to my hotel (de Venise, in an interesting little neighbourhood), dump my bag, and only then realise that there isn’t a bus from there to anywhere near rue Monge, where I am due for lunch in, er, 7 minutes’ time. Bother. I suddenly remember the existence of Parisian taxis (quite plentiful around the Gare de Lyon) and persuade one to take me to the village Monge where Helene and the plat du jour are waiting, arriving a mere 30 minutes late, quite normal by Parisian standards I am assured. And so to a somewhat fraught meeting on the 4th floor of Censier, where I plead for some attention to the XML validation of the DifDePo transcriptions. Nice beers with Marc and Chiara afterwards: it’s still very hot. Then to the BHV, where I fail to find a nice cool shirt and am assured that there are no more linen shirts in Paris, because of the heat. And so to the Gare du Nord in time to meet L, and then shepherd her by RER (very hot) to the Gare de Lyon, and then back to the Hotel de Venise. After inspecting the local eateries, we settle on a small Japanese supper nearby and retire to bed.

July 7th

Up early to catch the TGV to Oulx, but we dilly-dally (did I say that it was already quite hot?) and SNCF mysteriously decide to make the train leave four minutes earlier than advertised, just as we arrive at the wrong end of platform 15, and realise that it contains two TGVs stuck together, both of which are about to leave, but only one of which has seats for us. Damn and blast. We pile into the cattle class carriage of the wrong TGV, and resign ourselves to two hours of (actually quite well-behaved) excitable children en route to a colonie de vacances somewhere near Grenoble. We escape at Lyon, and find our comfy seats in the right TGV, though they are facing the wrong way for Lilette to enjoy the (very scenic) route from Grenoble to the Alps. The Residence du Commerce in Oulx is opposite the railway station and next door to a nice bar where we refresh ourselves while waiting for the hotelier lady to appear. All very peaceful. Oulx is a tiny place, boasting a scenic bridge over a river, some snow-topped mountains, and one street of touristic shops, none of which has a shirt to offer me nor even soap to wash one of the numerous dirty ones I have now accumulated, but never mind. On Guy’s recommendation, we reserve a table at the Ristorantino La Stella, down the road from the station, which turns out to be really rather nice. It is run by a serious Sardinian gentleman, aided by two cheerful ladies, and a third who is his wife. There is only space for about six tables, and the food is cooked to order. We had a salad, which reminded me why one should only eat Italian tomatoes, followed by rabbit and polenta, which reminded us that rabbit can be tender and juicy. I revealed my complete ignorance of Italian history by wondering aloud why a Sardinian should be living in Piedmont, and the proprietor’s wife (who comes from Bologna anyway) was too polite to explain.

We have come to Oulx, if you were wondering, not just because it has a curious name, but also because it is a sensible place to break the journey to Liserna (where we are now headed), being the first civilised town you encounter on the Italian side of the border.

How to make a sow’s ear into a silk purse/Comment faire d’une buse un épervier

As the proverb says, you can’t turn a buzzard into a sparrow-hawk. But here’s how I went about producing a TEI-conformant minimally encoded edition of the 1611 King James bible, starting from an all-singing, all-dancing, vastly over-complicated web site to the existence of which Martin Mueller had alerted me last Friday in a plaintive posting to TEI-L.

The problem with web sites like this is that they expose only various HTML views of their underlying data, usually heavily infested with javascript, often deploying all sorts of nonstandard gizmos to make the pages look just so. I’ve no quarrel with their doing that, I just wish they’d make it a bit easier to get at the real data, i.e. the transcribed text and the pointers to the images that go with them. This site is a case in point: after spending some time looking at some of the gazillion HTML files a simple-minded wget -r starts hoovering up, I hypothesize that the data is actually stashed away somewhere inaccessible and all that you see on the site is a bunch of variously configured static web pages which have been created from it. The good thing about that is that the static web pages were created by an automaton, and the process can therefore be reversed by another automaton. The bad thing is that (in this case at least) clearly different automata have been used to generate vaguely different versions of the same page. For example: the following three URLs all show subtly different versions of the same first page of the 1611 bible : https://www.kingjamesbibleonline.org/1611_Genesis-Chapter-1/  https://www.kingjamesbibleonline.org/Genesis-Chapter-1_Original-1611-KJV/ and https://www.kingjamesbibleonline.org/Genesis_1_1611/

These are not simple redirects: each URL delivers a file in a different format. Well, I could have asked whoever runs this site what was going on, but that would have spoiled the fun, so I chose the format I liked best (the last of the three above) and wrote a script to generate requests for just those pages. I had to assume that the URLs would follow a consistent naming format, encouraged by a table of the names of the books of the bible I found in one of the chunks of embedded javascript, and which I moved into my little perl script. Running this script produced a bash script which grabbed each page using curl and saved it as an HTML file on my machine.
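
The script itself is hardly worth showing, but for completeness here is the shape of it (a sketch only: the real thing was a perl-generated bash script driven by the full book and chapter table lifted from the site’s javascript; the three books and chapter counts here are merely illustrative):

mkdir -p webScraped
declare -A chapters=( [Genesis]=50 [Exodus]=40 [Matthew]=28 )
for book in "${!chapters[@]}"; do
  for ((c=1; c<=${chapters[$book]}; c++)); do
    # one page per chapter, following the third URL pattern above
    curl -s "https://www.kingjamesbibleonline.org/${book}_${c}_1611/" \
         -o "webScraped/${book}_${c}.html"
  done
done
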
What went wrong with this process? Surprisingly little: I didn’t find out till Sunday lunchtime that not only had I completely overlooked the Apocryphal books, but also that the file naming conventions used for these were not quite the same as for the canonical chapters (“Judith” for example is actually spelled “Iudeth”), but for the bulk of the 1300 or so chapters my guesses about the URL to use were spot on.  [This was Hubris. See my comment below]
The real challenge was disentangling the useful data from the resulting screen-scraped mess of pottage. My usual approach here is to run the HTML I get through Dave Raggett’s utterly wonderful tidy utility to turn it into kosher XML, and then write an XSLT script to pick out the bits I want. In this case, however, the pottage I got was so messy that even tidy threw up its metaphorical hands and refused to proceed with it, so I had to concoct a little perl script to pre-process each file and extract just the useful part, before running it through tidy, and processing the resulting XML into conformant TEI by means of saxon and a little XSLT stylesheet. Like this:

for f in webScraped/*.html; do
  FNAME=`basename $f .html`
  echo ${FNAME}
  perl extract.prl $f | \
    tidy -q --error-file tidyerrs --doctype omit --numeric-entities yes -asxml | \
    saxon -o:chaps/${FNAME}.xml -s:- -xsl:postGrab.xsl chapId=${FNAME}
done

And what went wrong with this process? Well, quite a lot, as you might expect, but nothing that couldn’t be fixed by tweaking and rerunning the process (this is why I always put the effort into writing bash scripts for jobs like this: they can be rerun till I get it right). For example, I was deriving an XML id for each chapter from the name of the input file, but some of the files had names beginning with digits. My plan was to put each chapter into a separate XML file and then use XInclude to munge them together into a TEI document (to maximize the flexibility of said mungeing) — but for this to work I had to get namespace declarations in the right place, and as anyone who has used them knows, anything involving namespace declarations never works the first time. To say nothing of unexpected inconsistencies in the data, which in fact were really very few. In fact, the only thing that has so far caught me out was a handful of chunks which looked like biblical data in the HTML markup but were not. Fortunately these were detectable because they wound up with an invalid @n attribute in the XML output and could therefore be weeded out after the event.
While all this was going on, I wrote my driver file and did some further sniffing around on the interwebs for sources from which to complete the work. The 1611 Bible has a title page and loads of prefatory matter not all of which is available on www.kingjamesbibleonline.org (for the record, I found the title page on Wikimedia, and the rest of the front matter at several places, but in page image form only).
How should the Bible be modelled as a TEI document? In my first view, each verse is an <ab>, each chapter is a <div>, each book is a <text>, each testament is a <group>. This made sense to me, but then I realised that processing would be simpler if instead each book were regarded as a <div> of a different type; hence, that is what the current versions of both driver file and ODD say. Considerations of where to put the genealogies, tables for Easter, almanacs etc. which arguably do not belong in the front matter may lead me to review that decision, so I have left <group> lurking in my ODD. It should be easy to produce a separate driver file representing that alternative view.

A more pressing need, however, is to sort out the placement of the page image links: in the HTML source, and therefore in my XML, links to the page images for the whole of a chapter are given as a sequence at the start of the transcribed text for that chapter, rather than being placed at the point in the transcript where that page begins. In some cases that point is identifiable because the transcribers have chosen to include part of the running page header in the transcription there (it winds up as a <fw> element); but in many it isn’t. Most chapters only occupy a few pages, so sorting this out would not be a major effort, just a rather tedious and not easily automatable one.
So what have we learned? Actually it’s not that hard to make a usable TEI document even from something as idiosyncratic as this web site. Which is encouraging. Let me also hasten to add that I mean no disrespect for the hard work (still less for the generosity) which must have gone into the production of that website: everyone is entitled to their own design decisions. I am pleased to express my thanks to the anonymous people who have put in that effort, and decided to share its fruits even with the godless multitude. As the 1611 translators say in their preface ‘Zeale to promote the common good, whether it be by devising any thing our selves, or revising that which hath bene laboured by others, deserveth certainly much respect and esteeme’.

Notes towards a definition of TEI Conformance

[Diagram of blobs omitted]

Each of the blobs here represents three subtly different things:

  • an ODD: that is, a collection of TEI specifications
  • a formal schema generated from that ODD, and its natural language documentation
  • the set of documents considered valid by that schema.

The TEI provides TEI All: a set of over 500 uniquely identifiable elements, classes, attributes, etc. and a schema in which they are all permitted. For all practical purposes a user of the TEI must make a selection from this cornucopia, and we call that selection a TEI subset. Of course there are many many possible TEI subsets, each making different choices of elements or attributes or classes, but the sets of documents which each consequent schema will validate all have in common that they will also be considered valid by the schema TEI All.

A user of the TEI may however do more than simply choose a subset of the provided specifications. They may also provide additional constraints for aspects of an encoding left underspecified by the TEI, for example by requiring that attribute values be taken from a closed list of possible values rather than being any syntactically valid token. They may simply change the datatype of an attribute, for example from a string to an integer or a date. They may also provide an alternative identifier for an element or an attribute, for example to change its canonical English name for one from another language. In some cases, attribute value changes are equivalent to a subsetting operation; in others not. Renaming operations never result in a subset: a document in which the element names have all been changed to their French equivalents cannot be validated by an English language version of TEI All. A user of the TEI can also change the content model or the class membership of existing TEI elements, in ways which may or may not be equivalent to a subsetting operation.

We use the term customised subset for all these kinds of personalisation because they result in something which is not necessarily a further subset of the TEI subset concerned, but a further modification of it. In the general case, their conformance with TEI All can be determined only by inspection, and their validation may require some additional processing.

Finally, a user of the TEI is at liberty to define entirely new elements and attributes, and to make such components members of existing TEI classes so that existing TEI elements may refer to them. They may also modify the content models of existing TEI elements to refer explicitly to such new elements. This results in an extended subset, since it contains elements or attributes additional to those provided by the TEI All schema. Such additional components should always be labelled as belonging to a non-TEI namespace. A processor can then determine that these components may be left out of consideration when determining the validity of a document with respect to TEI All.

In addition to these formal considerations, TEI conformance involves attention to some less easily verifiable constraints, specifically the twin requirements of honesty and explicitness. By honesty we mean that elements in the TEI namespace must respect the semantics which the TEI Guidelines supply as a part of their definition. By explicitness we mean that all modifications (i.e. both customized and extended subsets) should be expressed using an ODD to document exactly how the TEI declarations on which they are based have been derived. (An ODD need not of course be based on the TEI at all, but in that case the question of TEI conformance does not arise.)

Formally speaking, we can say of a conformant TEI document:

  • it must be a well formed XML document and
  • it is valid against the TEI All schema:
    • without modification (it is a TEI subset), or
    • after deletion of any elements it contains which are not in the TEI namespace, together with their children irrespective of namespace (it is a TEI extension), or
    • after application of any canonicalization algorithm specified by its associated ODD (it is a TEI customized subset)

The purpose of these and similar rules is to make interchange of documents easier. They do not however guarantee it, and they certainly do not provide any guarantee of interoperability. Unlike many other standards, the goal of the TEI is not to enforce or impose consistency of encoding, but to provide a means by which encoding choices and policies may be more readily understood, and hence (to some extent) algorithmically comparable.

(Another reason) Why I love the Internet

So, at the recent TEI Conference in Vienna, Elisa and I were indulging in a little mutual admiration over our shared knowledge of an obscure work entitled Thalaba the Destroyer, by the early English Romantic poet Robert Southey (rhymes, as any fule kno, with “mouthy”). So when I got back home, I went to look for the volume containing said work which I dimly remembered having on my shelves, in the decrepit-but-too-nice-to-throw-away section. And sure enough, there it was. The front board has come loose, but the first three openings look like this:

[Images: the front board, half-title, and title page]

Having scanned those first few pages, I naturally asked Mr Google what he knew about the matter. And was thus able rapidly to confirm:

  • My copy of Thalaba is the cheap reprint (two volumes in one) published by Vizetelly and Beeton in 1853. There is a Google-scanned version of the same edition, available from the Internet Archive. They have included with it a couple of pages of advertisements for other works published by Clarke Beeton (pp. 7 and 8) which are missing in mine, however.
  • What seems like another copy of the same edition is currently on sale at Abe Books for the startling sum of $199.76. Mine is in poor condition, which is why it only cost me half a crown back in 1967, when I used to frequent Oxford’s second hand bookshops (there aren’t any to frequent these days).

As you may have noticed above, my copy also contains several signs of its previous owners. As well as the bookplate, and the inscription above, there’s a nice message from Aunty Sarah, the donor, opposite the preface:

[Image: Aunty Sarah’s inscription, opposite the preface]

And there’s also an intriguing note from “JB” dated some twenty years later, opposite the start of the poem proper.

[Image: the note from “JB”]

So… what have we learned? Rosamund was given this book by her aunt, Sarah Brent, in 1860. And in 1882, her husband felt compelled to record his own experience of the Eastern exotic in the same book “We met at Persepolis an Arab maiden of most lovely form and features — she was a dream of beauty never to be forgotten”. What she made of it, one can only conjecture.

But why I love the Internet, is that (pondering these matters after breakfast this morning), it has helped me place these people a little more precisely in time and place. A search for “Rosamund Borrowman” told me that  the 1861 Census shows a person of that name, born 1825 in Kent, married to John Borrowman, born 1830 in Midlothian, residing in Middlesex in 1861. The ancestry.co.uk site where I found this record is pay-walled so no further details available, but that seems reasonably plausible.

And searching for “Rosamund Borrowman John” I was able to find a record of her death. Some industrious volunteers have been surveying the gravestones of a place called Hambledon in Surrey, and there she is:   “Rosamund Vertue the beloved wife of John Borrowman. She died 25th August 1895. Also the above named John Borrowman son of Robert Borrowman born in Edinburgh 3rd April 1830 died at Hambledon 4th July 1906. Also Elizabeth daughter of the above died 22nd October 1932 aged 72 years” It’s all in the spreadsheet.

My next step, obviously, will be to find out where Hambledon is, and whether you can get there by train. Maybe.

Rompons avec le ronron techno-productiviste des institutions!

Back in August or September, I remember bleating anxiously on this blog about having rashly accepted to give a talk on les Humanités Numériques as part of a seminaire “Avenue Centrale” organised by the MSH in Grenoble. I can now report that eventually (in the English sense) I did manage to get some old slides updated and licked into shape, aided by the four hour T-not-so-GV journey to Grenoble last December, and a week or so of being unbearable to myself and everyone else around me. The slides duly appeared on Slideshare, and the folks at Avenue Centrale have even published a nice video and a podcast of me delivering them, but that’s not nearly as interesting  as what actually happened on the day.

Coming to terms with an implausibly pink armchair

On 20th December, after a meeting of the conseil scientifique du MSH Alpes, I found myself ensconced in an implausibly pink fauteuil, clutching a microphone, and ready to go, having delayed the obligatory 30 minutes for bigwigs to turn up, when there was a minor kerfuffle as the organisers realised that a bunch of scruffy students were busy at the front door handing out an A2-sized pamphlet promisingly titled Humanités Numériques: Gare à la propagande!!!

The source of the pamphlet, which characterizes me as un petit soldat de la conversion au numérique des Humanités, was subsequently tracked down online by one of the French DH twitterati (à savoir, Martin Grandjean) within a few minutes of my tweeting this image of it after the show. Aside from the distribution of the pamphlet, the promised Action-critique took the form of three or four extra persons attending my lecture, one of whom also gave a brief speech deploring the industrial and social cost of mass digitization (I think) during the Q&A session. An agreeable though brief debate ensued, none of which sadly seems to have made it to the published version of the video, and we then all adjourned for coffee and horrible sandwiches downstairs, during which I was able to continue to chat amicably with the protesters, though the term seems barely appropriate. I learned that these were actually eco-warriors with concerns about the way big business was driving technology into inappropriate places. (There have been somewhat critically received plans to hand out tablets to all school children, in an interesting reprise of the UK Government’s BBC Micro initiative in the 1980s.) On my way out I also tried to take some photos of the activists using my new tablet, which involved much banter and cursing, as I have barely mastered this new device. Out of deference to their desire for anonymity, the photos will have to stay in my personal archive for a few more decades though.

Tidings of this unusual event caused a (very brief) flurry of excitement on twitter. Frederic Clavert was a bit peeved to find that his logo had been appropriated for the pamphlet; others were disappointed to find no coherent plan for action in it. And there were also (tee hee) expressions of extreme jealousy from a few of my DH colleagues — Moi aussi une affiche! A brief sample of my first significant “moment” in the political history of DH in France (Marjorie told me that’s what it was) follows.

[embedded tweets]

Data versus Reality

… is not the title of the book I’ve been re-reading this week, though it might well be. Bill Kent’s Data and Reality was first published in 1978, and comes from the heroic age of database design and development, a period when such giants as Astrahan, Chen, Chamberlin, Codd, Date, Nijssen, Senko and Tsichritzis were slugging it out over the relative merits of the relational, network, and binary database models and the abstractions they supposedly modelled: a struggle predominantly over terminology and ways of thought since, as Kent shows, almost all of these differently named and passionately advocated models were fundamentally very similar, differing only in the specific compromises they chose when confronted by the messiness of reality. Whether you call it a relation or an object or a record, the globs of storage handled by every database system were still records: combinations of fields containing representatives of perceptions of reality, chosen and combined for their utility in a specific context. The claim that such systems modelled reality in any complete sense is easy to explode; it’s remarkable, though, that we still need to be reminded, again and again, that such systems model only what it is (or has been) useful for their creators to believe. Kent is sanguine about this epistemological lacuna: “I can buy food from the grocer, and ask a policeman to chase a burglar, without sharing these people’s view of truth and beauty”. But for us, living in an age of massively interconnected knowledge repositories which has developed almost accidentally from the world of more or less well-regulated corporate database systems, close attention to their differing underlying assumptions should be a major concern. This applies to the differently constructed communities of practice and knowledge which we call “academic disciplines” just as much as it does to the mechanical information systems those communities use in support of their activities.

In its time, Data and Reality was remarkable for introducing the idea that data representations and the processes carried out with them should be represented in a unified way, the basic idea of what we now call object-oriented processing; yet it also reminds us of some fundamental ambiguities and assumptions swept under the carpet even within that paradigm. Are objects really uniquely identifiable? “What does ‘catching the same plane every Friday’ really mean? It may or may not be the same physical airplane. But if a mechanic is scheduled to service the same plane every Friday, it had better be the same physical airplane.” The way an object is used is not just part of its definition. It may also determine its existence as a distinct object.
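Kent’s airplane puzzle translates directly into a question every data modeller still has to answer: which notion of identity does a record carry? The following sketch is mine, not Kent’s (the classes, flight numbers and tail numbers are invented for illustration), but it shows how the same everyday phrase can map onto two quite different objects depending on whose purposes the model serves.

# A minimal sketch of Kent's point about identity: two records can be "equal"
# as schedule entries while referring to different physical aircraft, so whether
# they count as "the same plane" depends entirely on the use being made of them.

from dataclasses import dataclass

@dataclass(frozen=True)
class ScheduledFlight:
    """A flight as the passenger sees it: a flight number and a day of the week."""
    flight_number: str
    day: str

@dataclass(frozen=True)
class Aircraft:
    """A plane as the mechanic sees it: one physical airframe."""
    tail_number: str

# The passenger "catches the same plane every Friday"...
this_friday = ScheduledFlight("BA123", "Friday")
next_friday = ScheduledFlight("BA123", "Friday")
print(this_friday == next_friday)   # True: identical as schedule entries

# ...but the mechanic may be dealing with two different objects entirely.
airframe_a = Aircraft("G-ABCD")
airframe_b = Aircraft("G-EFGH")
print(airframe_a == airframe_b)     # False: two distinct physical aircraft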

Kent’s understanding of the way language works is clearly based on the Sapir-Whorf hypothesis: indeed, he quotes Whorf approvingly “Language has an enormous influence on our perception of reality. Not only does it affect how and what we think about, but also how we perceive things in the first place”. There is an odd overlap between his reminders about the mocking dance which words and their meanings perform together and contemporaneous debates within the emerging field that Wilks has charmingly characterized as “Good Old Fashioned AI”. And we can also see echoes of similar concerns within what was in the 1970s regarded as a new and different scientific discipline called Information Retrieval, concerned with the extraction of facts from documents. Although Kent explicitly rules text out of discussion (“We are not attempting to understand natural language, analyse documents, or retrieve information from documents”) his argument throughout the book reminds us that data is really a special kind of text, subject to all the hermeneutical issues we wrongly consider relevant only to the textual domain.

This is particularly true at the meta-level, of how we talk about our data models and the systems we use to manipulate them. Because they were designed for the specific rather than the general, and because they were largely developed in commercially competitive contexts, the database systems of the 1970s and 1980s proliferated terms and distinctions amongst many different kinds of entity, to an extent which Kent (like Ockham before him) argues goes well beyond necessity. This applies to such comparatively arcane distinctions as those between entity, attribute, and relationship, or between type and domain, all of which terms have subtly different connotations in different contexts, though all are reducible to a more precise set of simple primitives. It applies also (and here the TEI in me sits up and smirks) to the distinction between data and metadata. Many of the database systems of the eighties and nineties insisted that you should abstract away all the metadata for your systems into a special kind of database variously called a data dictionary, catalogue, or schema, using entirely different tools and techniques from those used to manipulate the data itself. This is a needless obfuscation once you realise that you cannot do much with your data without also processing its metadata (see the sketch below). In more recent times, one of the more striking improvements that XML made to SGML was the ability to express a schema and the objects it describes using the same language. Where, and how, what are usually called the semantics of an XML schema should be described remains a matter which only a few current XML systems (notably the TEI) explicitly consider.
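Here is a minimal sketch of that point, assuming nothing from Kent’s book or from any particular database system: the “schema” is just another ordinary data structure, invented for the purpose, and checking records against it is just one more piece of processing carried out with the same tools as the data itself.

# A minimal sketch of "the schema is just more data": the description of the
# records lives in the same kind of structure as the records themselves, and is
# handled by the same code. Field names, rules, and sample records are invented.

schema = {
    "title":  {"type": str, "required": True},
    "author": {"type": str, "required": True},
    "year":   {"type": int, "required": False},
}

records = [
    {"title": "Data and Reality", "author": "William Kent", "year": 1978},
    {"title": "Untitled draft", "year": "next year"},  # no author, year is not a number
]

def validate(record, schema):
    """Return a list of problems found in one record, judged against the schema."""
    problems = []
    for field, rule in schema.items():
        if field not in record:
            if rule["required"]:
                problems.append(f"missing required field '{field}'")
        elif not isinstance(record[field], rule["type"]):
            problems.append(f"field '{field}' should be of type {rule['type'].__name__}")
    return problems

for record in records:
    print(record.get("title", "?"), "->", validate(record, schema) or "ok")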

Kent seems to have been a modest and likeable man. He retired in 2000, and died five years later, leaving a legacy of accessible and still provocative papers, most of them available from his website. Like those of many other pioneers in computer science, his academic qualifications came from unrelated fields (in his case, chemical engineering and maths); like many others he worked long hours for IBM and HP, but achieved fame and intellectual satisfaction outside the corporate world, in the development of industry standards and professional associations. Maybe that experience is also what underlies the much-quoted paragraphs which end his book:

[image of the closing paragraphs of Data and Reality]

Unexpected adieux

This sunny Sunday morning sees me setting off for a couple of weeks of TEI workshops, one in Paris, one in Graz. Nothing unusual there, nor in the fact that one is better prepared for than the other. But it has been an unusual week all the same, with two deaths and possibly a new beginning. The deaths first, since they are more difficult to write about. They perturb habitual patterns, making me confront and try to articulate parts of life that are hard to fit into a public blog, yet belong there in the absence of any other personal journal. (I say “public” but doubt that anyone except me reads this).

On Tuesday morning, I received from my friend Guy in Italy a text message saying that his partner Daniela had suffered a stroke and was in a coma; 24 hours later came another announcing her death. It is hard to react adequately to such events at a distance, and particularly so by text message, so I am waiting for a later, less painful time to talk to Guy. I learned from a mutual friend that the funeral was yesterday. I don’t want to obituarize, but Daniela was a very generous and very affectionate person, as well as a fiercely independent one. I am very glad that she did not stay long in her coma, nor return from it badly scarred; I am also very glad that the last time I saw her was at a joyful family occasion in London.

On Saturday evening, yesterday, I received an email informing me of another death, also coincidentally on Tuesday: Chris Sheppard, in whose company I passed my adolescence and early twenties, chasing the same girls, crashing the same teenage parties, growing up to pursue not the same but similar academic careers. Chris was the first person in my school to know where to buy Levi 501s and how to shrink them to fit (in the bath). He introduced me to the works of Raymond Chandler and the collecting of cigarette packets. He was far too cool to take fashionable drugs at Oxford, but was on good terms with those who did. It was largely following his example that I returned to Oxford to take my masters degree in 1969, a year behind him. As graduate students we shared a rented hovel in Stanton St John (chemical toilet, coal fire, wall-to-wall books) for a year during which Chris taught me almost everything I know about literary scholarship and the love of books, not by precept, but simply by example. I was best man at his wedding back in 1976, but our paths diverged thereafter. At his retirement a couple of years ago, he was head of special collections at Leeds University’s Brotherton Library, where I remember visiting him and being shown some of his more recondite treasures (a lock of Mozart’s hair, Conan Doyle’s photos of faery folk); I think the last time I saw him in person must have been at a lunch with P.N.O. Pullman some time in the 90s. Now of course that it is too late, I regret bitterly even the dwindling flow of Xmas card exchanges, and the fact that my last email exchange with him was more than six years ago.

As to the new beginning — well, it seems a small thing in this context, but I am now feeling quite positive about the idea of buying a house some distance from the back of beyond in rural France. A specific house, that is, of which perhaps more anon. But for now, I will go back to worrying about tomorrow’s training course at the EPHE in Paris, and that in Graz a week later.

A trip to La France Profonde

So this week I have mostly been not thinking about writing academic papers at all, which may or may not be a good thing. Instead I spent the first part of the week tidying up materials for the next TEI training course, which is now pretty well polished, and also for the one after that, which is not. The process of thinking about what materials to use follows a fairly recognisable pattern, in which ambitious optimism (I’m going to completely revise this bit, make up something new and exciting, strike out into unknown territory) eventually has to give way to pragmatic opportunism (I’ve got this already, it just needs checking, minor tweaks, translating). When I am preparing two courses which are due within a few weeks of each other, this means that the first course moves on to the second stage while the second one is still rejoicing in the first. Which was the case this week. Oh, and about the only thing I have in common with Sam Beckett is that I can no longer say whether my material is originally French translated into English or the reverse, since most of it has been through the process both ways several times.

Aside from that, I spent most of the week on trains, or other forms of transport, on an expedition to La Vergne, returning at the weekend via Nottingham. As follows:

                                      depart  arrive
Weds 27 Aug
home                                  0910            on foot/bus
Oxford to Paddington                  1001    1130    train
London St P to Paris Nord             1225    1547    Eurostar
Paris Austerlitz to La Souterraine    1652    1921    IC 3655
Thurs 28 Aug
La Souterraine to Guéret              0930    1005    TER bus
Guéret to La Vergne                   1100    1130    taxi
La Vergne to Bussières-Dunoise                        nice walk
Bussières to La Souterraine           1445    1615    taxi
Frid 29 Aug
La Souterraine to Paris Austerlitz    1038    1318    IC 3620
Paris Nord to St Pancras              1513    1640
Kings Cross to Grantham               1719    1827
Grantham to Nottingham                1857    1934
Sund 31 Aug
Nottingham to Oxford                  1310    1546

And what have we learned? It’s possible to get to and from La Vergne by train within a day for about £400 return (less if you cash in some Eurostar points). There’s not much happening in La Souterraine, and even less in Bussières-Dunoise. Guéret seems like a decent-sized town, though, and is accessible by train from at least two different directions. Generally speaking, the Creuse is not the back of beyond: it’s behind the back of beyond. There are many cows, and many hills. There are probably no decent restaurants for miles. There used to be a railway to transport potatoes and beef to Paris, but they took it away years ago, and now all that’s left is a rather nice rural track which has the merit of avoiding most of the aforesaid hills. And there’s a lake behind the house, where you can fish but not (allegedly) swim.

Genetic editors, please note

After finishing last week’s entry about my rash commitment to write a book chapter, I secretly vowed to monitor my progress by producing weekly reports here. I then spent the entire week (when not shopping, eating, or sleeping) working on next month’s TEI course in Paris, essentially a revision of the one I gave in May. Almost, but not quite, because halfway through the week I received another reminder of another rashly made commitment, this time to deliver a public lecture in Grenoble in December. I promptly dashed off the following proposition:

Ceci n’est pas une pipe: l’importance de la modélisation aux humanités numériques

Lou Burnard

Récemment, on a vu emerger de l’ombre de la inter-disciplinarité une discipline nouvelle qui s’appelle les Humanités Numériques (Humanités Digitales en Suisse, Digital Humanities ailleurs). Elle represente d’abord la confrontation, et ensuite l’adaptation aux méthodes et possibilités des technologies nouvelles de l’entreprise intellectuel et scientifique de toute la domaine des sciences humaines. Ces technologies comportent notamment l’informatique, mais aussi de la statistique, de la linguistique computationelle, et de la visualisation des données. Mais en effet cet emergence ne serait qu’une évolution, voire une continuation, d’un débat assez vieux – déjà percéptible au 19ème siècle — qui opposerait les sciences dures aux sciences humaines. Dans cette intervention, je propose que cette opposition semble d’origine plus sociale que méthodologique, et que les méthodes des SHS et les méthodes des sciences dites dures ne sont pas tellement loin l’une de l’autre. C’est la création, l’évaluation, et la manipulation des modèles et des hypothèses qui caractérise tout effort d’élargissement de science, et la modélisation comme processus abstrait donc devrait être au centre de nos disciplines, qu’il s’agit de la modélisation des structures textuels et linguistiques, de la modélisation des procédures informatiques, ou de la modelisation du monde physique.

Né en 1946, Lou Burnard a pris son DEA en littérature anglaise du 19e siecle à Oxford en 1971. De 2002 a 2012, il est Directeur-adjoint aux Services informatiques de l’Université d’Oxford où il s’occupait des applications informatiques dans les domaines des sciences humaines depuis des années, surtout en linguistique de corpus (British National Corpus), en bibliothèque numérique (Oxford Text Archive), et en l’encodage de textes. Actuellement retraité, il est reconnu comme expert dans ces domaines. Il a travaillé en France comme prestateur de services aux agences Adonis et Hum-Num et ailleurs en France: il est membre des Comités Scientifiques des Maisons de Sciences de l’Homme à Caen et à Grenoble.

Cunning, or what, I said to myself: if I have to produce a chapter in English on a topic I know nothing about, I might as well repurpose it in French and get good value for money. And then, just to be on the safe side, I ran this text by my friend Marjorie, who is a native French speaker amongst many other good qualities, and thus well placed to tactfully remove the many barbarisms in this first draft. I was duly humbled by her response:

Ceci n’est pas une pipe : l’importance de la modélisation pour les humanités numériques
Lou Burnard

Récemment, on a vu émerger de l’ombre de l’inter-disciplinarité une discipline nouvelle qui s’appelle les Humanités Numériques (Humanités Digitales en Suisse, Digital Humanities ailleurs). Elle représente, pour tout le domaine des sciences humaines, la confrontation puis l’adaptation aux méthodes et possibilités des technologies nouvelles. Ces technologies comprennent notamment l’informatique, mais aussi la statistique, la linguistique computationelle, et la visualisation de données. Mais cette émergence ne serait en fait qu’une évolution, voire une continuation, d’un débat assez ancien – déjà perceptible au 19ème siècle — qui opposerait les sciences dures aux sciences humaines. Dans cette communication, j’avance l’idée que cette opposition semble d’origine plus sociale que méthodologique, et que les méthodes des SHS et celles des sciences dites dures ne sont pas si éloignées. C’est la création, l’évaluation, et la manipulation des modèles et des hypothèses qui caractérise tout effort d’élargissement de la science, et la modélisation comme processus abstrait devrait donc être au centre de nos disciplines, qu’il s’agisse de la modélisation des structures textuelles et linguistiques, de la modélisation des procédures informatiques, ou de la modélisation du monde physique.

Né en 1946, Lou Burnard a obtenu son DEA en littérature anglaise du XIXe siècle à Oxford en 1971. De 2002 à 2012, il a été Directeur-adjoint du Service informatique de l’Université d’Oxford, où il s’occupait depuis des années de l’applications de l’informatique au domaine des sciences humaines, surtout pour la linguistique de corpus (British National Corpus), les bibliothèques numériques (Oxford Text Archive), et l’encodage de textes. Actuellement retraité, il est un expert reconnu de ces domaines. Il a travaillé en France comme prestataire de services auprès d’Adonis et Huma-Num, et ailleurs en France : il est membre du Comités Scientifiques des Maisons de Sciences de l’Homme de Caen et de Grenoble.

Suitably chastened by this salutary reminder that my command of the French language is not as perfect as might be wished, I removed the green ink and sent it off to Grenoble, from which I rapidly received the following response, reminding me that sometimes less is more:

Le résumé que vous nous avez envoyé est de fait plus important (environ 1300 caractères), je vous propose donc
(pour la version papier uniquement, la version web pouvant elle rester plus développée) de le réduire quelque peu. Seriez-vous
d'accord pour que, par exemple, nous enlevions la partie finale (cf proposition ci-dessous) et les déclinaisons autour du nommage
((Humanités Digitales en Suisse, Digital Humanities ailleurs)  ou préférez-vous le retoucher vous- même ?

Pour brochure : "Récemment, on a vu émerger de l'ombre de l'inter-disciplinarité une discipline
nouvelle qui s'appelle les Humanités Numériques. Elle représente pour tout le domaine des sciences humaines la confrontation, puis
l'adaptation aux méthodes et possibilités des technologies nouvelles. Ces technologies comprennent notamment l'informatique,
mais aussi la statistique, la linguistique computationelle, et la visualisation de données. Mais cette émergence ne serait en fait
qu'une évolution, voire une continuation, d'un débat assez ancien – déjà perceptible au 19ème siècle -- qui opposerait les sciences
dures aux sciences humaines. Dans cette communication, j'avance l'idée que cette opposition semble d'origine plus sociale que
méthodologique, et que les méthodes des SHS et celles des sciences dites dures ne sont pas si éloignée."

That’ll teach me. Maybe.

Metamodelling through: the prolegomena

So back in February I was asked to contribute a chapter to a new book being confected by some top people in the domain of the digital humanities, an invitation which I naturally accepted with alacrity, and only a small sense of alarm. I admit it: I was flattered, though I naturally also felt it was about time my eminence was recognised in such a way.

Dashing off an abstract is an easy task, so I did that, and then forgot all about it. Here’s the abstract. Like other such pieces, it promises much, and even gets mildly polemical towards the end, which seemed to do the trick, as the proposal was, in due course, accepted.

 

Where do metamodels come from and how do they survive?
Lou Burnard

There is a very old joke about standards which says "Standards are a good thing because there are so many to choose from". Like many old jokes, this plays on an internal contradiction (the structuralist might say "opposition") in its topic. Standards are, on the one hand, of most benefit to the extent that they reflect and facilitate diversity; on the other, they are of necessity managed or even imposed by a centralising authority. This contradiction is particularly noticeable when the process of standardisation has been protracted because the technologies concerned are only gradually establishing themselves. We see this tension even in consumer electronics, where there is a financial, market-driven imperative to establish standards as rapidly as possible; but the same tension underlies the gradual evolution of ways of thought via communities of practice into de facto and (eventually) "real" standards. This article explores the evolution of standards for data modelling methodologies with regard to this tension. It considers some significant early experiments with the application of data modelling techniques to humanities research data (Manfred Thaller; J-C Gardin) and discusses to what extent some researchers simply adopted technical standards emerging in the wider data processing community (relational databases, information modelling), while other communities strove to define their own models (AI, language understanding systems). It will present in some detail the theoretical model (metamodel) underlying the Text Encoding Initiative's approach to standardisation and ask whether, over time, all such community-based efforts are forced further towards convergence and away from diversity. The TEI currently maintains a balance between the "do it like this" and "describe it like this" schools of standardisation; in the long run, it therefore risks being superseded either by advocates of the latter, who distrust the former, or by advocates of the former, who are impatient with the latter.

Oxford, 1 Mar 2014

Summer came and summer is now going, and this particular bird is coming home to roost. I received last week a polite reminder that my manuscript should be delivered by the end of the current month, should conform to a defined house style, and would I please sign in blood the form I was sent back in April, assigning my rights in this non-existent work to the non-existent publishers Snipcock and Tweed? Naturally I replied at once, pleading for a stay of execution (but ignoring the rights assignment question), which was graciously accorded, somewhat to my surprise, even unto mid-October. So now I really have little excuse not to find out what grand idea this abstract is abstracted from, really ought to get down to doing the research it grandly promises to summarise, and write the wretched piece. If only I didn’t have all those other more interesting (or less interesting but more urgent) things to do.

Well, let’s see. I plan to use this blog as a record of the painful process, just so that in years to come I can look back and see where it all went horribly wrong. At least no-one is likely to find me here.