
Building the ELTeC : the quest continues

My quest for a reasonably reliable and complete list of 19th c. English novels, complete with indications of available electronic versions thereof, has made a large leap forward as a result of the generosity of others.

Firstly, Professor Troy Bassett of Purdue University Fort Wayne, whom God Preserve, tells me that he will now start distributing regular extracts from his database at http://www.victorianresearch.org/atcl/index.php in CSV form for my (and others’) mungeing pleasure. I spent some time testing this out, using LibreOffice to convert the CSV into an XML form which I could then reprocess with yet another XSLT stylesheet. This worked reasonably well (eventually) and I was able to create a new version of this data in TEI, all 15,682 records of it. To merge in identifiers for VWWP, Internet Archive, Gutenberg, and Google texts I used a simple-minded stylesheet which tries to match on the magic keys I created earlier, as discussed in previous blog postings. This worked, though not exactly with blistering speed, and I produced a second version of the Bassett TEI file with added references. Of those 15,682 titles, only 3,430 have at least one digital version, and there are 3,813 digital versions in all.

Secondly, and with some trepidation, I approached the sacred grove of Big Data and sent a plea for help to the nice people at Hathi Trust, that extraordinary repository of 16 million or so books in digital form. You can download a full list of their holdings in a nice simple processable form from https://www.hathitrust.org/hathifiles : the catch is that their monthly snapshots contain 16 million (and counting) records derived, I think automatically, from contributing libraries’ catalogue systems, not all of them entirely consistent in their metadata usage, and certainly none of them clearly identifying which titles might be considered to be novels or have female authors. I did some initial poking around with a perl script and ascertained that I could whittle the 16,208,265 entries in the May 4th version of the file down to a mere 1,253,950 by selecting only those which are publicly accessible, have a publication date between 1849 and 1920, are books in English, and are not US government documents. These counts are inflated by the fact that we are counting volumes here, rather than works.
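The whittling itself is nothing exotic. Here is a minimal sketch of the sort of filter involved (not the actual perl; the tab-separated column numbers assumed here, 2 for access, 16 for the US government document flag, 17 for the rights date, and 19 for language, should be checked against the field list published alongside the hathifiles before anyone trusts the output, and the input filename is invented):

 #!/bin/bash
 # keep only publicly accessible, non-government, English-language volumes
 # whose rights date falls within the target window
 zcat hathi_full_20180501.txt.gz |
   awk -F'\t' '$2 == "allow" && $16 == "0" && $19 == "eng" && $17 >= 1850 && $17 <= 1920' > candidate-volumes.tsv
 wc -l candidate-volumes.tsv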

Meanwhile, a helpful person on the Hathitrust help desk had got in touch pointing me to the existence of various selections created by other researchers before me, notably Ted Underwood’s file filtered_fiction_metadata.csv [available on Github at https://github.com/tedunderwood/character/blob/master/metadata/filtered_fiction_metadata.csv]
which lists 93,708 volumes of fiction, published between 1800 and 2007. “It focuses on fiction, but will include works in translation. It’s also not restricted to ‘the novel’; short stories and even folk tales are included. But we have tried to exclude fiction aimed explicitly at a juvenile audience across the whole timeline.” Underwood points out that “There is no authoritative master list of all English-language fiction from 1800 to the present. So scholars who want to do research at scale have to construct their own list.” which is true enough, though I wonder if he’s aware of Bassett’s work. Anyway, I wrote more scripts to process his CSV file into TEI, adding my magic keys to enable me to link his data with the titles in the Bassett database. All of which meant that I could provide Bassett with a list of HathiTrust links as well as those already provided for Gutenberg et al.

Bassett’s data includes some titles published only in the US, which I would like to exclude, as well as some “aimed explicitly at a juvenile audience” (the two sets of course have nothing in common, except that I would like to exclude them both). More annoyingly, it stops in 1901. I therefore went back to thinking about how to extract comparable data from the Hathifiles for the period 1901-1920. The most recent Hathi file (dated 1 June) has entries for 16,370,821 volumes. I tweaked my perl script to extract records for books in English published between 1900 and 1921, as before, this time also suppressing duplicate titles, which gave me a more manageable but still daunting total of 348,232 titles.
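The duplicate suppression is essentially a one-liner. A sketch (again with an assumed column number, 12, for the title field, and the invented filenames of the sketch above):

 # keep the first volume seen for each (lower-cased) title
 awk -F'\t' '!seen[tolower($12)]++' candidate-volumes.tsv > candidate-titles.tsv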

The question is, as ever, how to pick out the novels from the chaff? The obvious answer is to apply some basic text analytic tools. I have a list of 15k titles which the obliging Bassett has already identified as novels. I have 348,232 titles extracted from the Hathitrust file. If I had to scan through them choosing the titles which are most likely to be novels, which words would catch my eye? Which words in fact appear significantly more frequently in the first list than their frequency distribution in the second would lead you to suppose? This sort of question is surely what computational linguists deal with ten times a day before breakfast.

I install Laurence Anthony’s excellent AntConc program from http://www.laurenceanthony.net/software.html and battle through its interface a bit. Here are the top twenty “keywords” for the Bassett title list, when I use the aforesaid Hathitrust titles list as the “base corpus” for comparison. The underlying statistic (need you ask) is log likelihood (4 term); the threshold being p < 0.05 (+Bonferroni), i.e. the defaults.
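For what it is worth, my understanding of that statistic is the familiar Dunning log-likelihood computed over the four cells of the 2x2 contingency table of word versus everything-else in each of the two title lists (hence, I assume, “4 term”):

 G^2 = 2 \sum_{i=1}^{4} O_i \ln(O_i / E_i)

where the O_i are the four observed counts and the E_i the counts you would expect if the word were spread across the two lists in proportion to their sizes.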

#Keyword Types: 611
#Keyword Tokens: 47261
#Search Hits: 0

Rank  Freq      Keyness  Effect  Keyword
1     2693  +  15387.79  0.0556  novel
2     8559  +  10677.59  0.0763  a
3     1969  +   6571.89  0.0388  or
4     1262  +   6402.77  0.0266  tale
5     1426  +   4104.1   0.0283  story
6      721  +   2599.46  0.0152  romance
7     2195  +   1957.72  0.0327  s
8      471  +   1072.51  0.0098  stories
9      364  +   1058.08  0.0077  love
10     837  +   1019.33  0.016   life
11     355  +   1001.66  0.0075  tales
12    9268  +    875.37  0.0384  the
13     272  +    773.82  0.0058  adventures
14     163  +    594.6   0.0035  daughter
15     224  +    591.09  0.0048  lady
16     217  +    546.42  0.0046  woman
17     150  +    484.95  0.0032  wife
18     570  +    478.07  0.011   other
19     221  +    422.18  0.0047  little
20     131  +    395.37  0.0028  miss

This seems like pretty good evidence that simply looking for titles which contain the words “novel”, “tale”, “story”, “romance” or – this one a surprise – “or” might do pretty well. It also suggests that the Bassett novel is quite concerned about women, but that’s another story.

Reassured, I re-tweak my Perl script to extract bibliographic records as before, but this time only if their title contains one of the words “novel”, “tale”, “story”, or “romance”. A glance at the results shows that this approach cuts down the number of titles to a more manageable 16,908 but still has many false positives (“My life and the story of the Gospel hymns and of sacred songs and solos”, for example). A more sophisticated approach is needed, clearly. Time to make a cup of tea.
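For the record, the crudest form of that filter amounts to something like this (a sketch rather than the perl itself, with the same assumed title column and invented filenames as before):

 # keep only records whose title contains one of the candidate keywords
 awk -F'\t' 'tolower($12) ~ /novel|tale|story|romance/' candidate-titles.tsv > maybe-novels.tsv
 wc -l maybe-novels.tsv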

Building the ELTeC : stage 0

Problem: if the ELTeC is supposed to represent in some sense the full range of novel production in a given language (EN in my case) for a given time slot (1850 to 1920, it says here), how do you find out what the population actually is before starting to sample it?

Enter at this point a wonderful database: Bassett, Troy J. At the Circulating Library: A Database of Victorian Fiction, 1837-1901. Victorian Research Web. [accessed 2018-02-09] (http://www.victorianresearch.org/atcl). I say “wonderful” advisedly: I have found nothing comparably complete and usable in a week of scratching around the internet.

According to Bassett’s technical notes, “This website was written on a Macintosh using a MySQL database and PHP.” Consequently its contents are pretty consistently organized and tagged, and so screen scraping and munging them into a different format are both pretty easy, like this:

 #!/bin/bash
 # fetch one page per year from the ATCL database, tidy the HTML into XML,
 # and run it through the bassetConv.xsl stylesheet to produce TEI
 for number in {1850..1901}
 do
 echo "$number "
 wget "http://www.victorianresearch.org/atcl/show_year.php?year=$number" -O "$number.html"
 tidy -asxml -n --new-empty-tags image "$number.html" | saxon - bassetConv.xsl > "$number.xml"
 done
 exit 0

The `bassetConv.xsl` stylesheet generates a TEI format bibl entry for each of the 15,000 plus titles, like this:


<bibl xml:id="11169" n="acounselofperfection|Malet">
<author n="576">Lucas Malet.</author>
1 vol.  London: Kegan Paul.</bibl>
The numbers are the identifiers used by the MySQL database, so they should be unique. The @n attribute on the bibl element is a key I generate for matching purposes (see later).

I would rather like to know how many of these titles are available in digital form, and from where. The current database has links to Google Books (500 or so, it claims: I haven’t worked out how to find them without downloading the entire database) but nothing else so far as I can tell.

How to accomplish this?

My best plan so far is to extract lists of keys or fingerprints, derived from the cataloguing information supplied by each online repository, and then start looking for overlaps with Bassett. My assumption is that for any relevant title not to exist in Bassett would be rather remarkable. Some initial experiments suggest that the first few words of the title combined with the surname of the author is a reasonable approximation to a fingerprint for each title, though obviously not perfect.
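In shell terms the recipe is roughly as follows (a sketch only: the real thing lives inside the various conversion stylesheets, and exactly how much of the title to keep is still open to negotiation):

 make_key () {
   # lower-case the title, throw away anything that is not a letter,
   # then append the author's surname after a "|" separator
   echo "$(echo "$1" | tr 'A-Z' 'a-z' | tr -cd 'a-z')|$2"
 }
 make_key "A Counsel of Perfection" "Malet"   # acounselofperfection|Malet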

So far I have looked at the following four online repositories:

1. Victorian Women Writers Project

This has 75 titles identified as fiction, all of them in good TEI format; the following URL grabs the catalogue info for them:

http://webapp1.dlib.indiana.edu/vwwp/search?docsPerPage=100&browseText=fiction&text1=fiction&field1=browse-genre&style=&smode=simple&brand=general

Life is too short to process all the resulting HTML: in any case I am sure that if I wanted a proper XML catalogue my friends at Indiana would give me one. So I manually separate out the useful chunk of HTML and mangle it through a simple XSLT stylesheet (`vwwpConv.xsl`), to produce a file of entries like this:

<bibl xml:id="VAB7046" n="daphne|Ward">
<title>Daphne, or Marriage a la Mode</title>
<author>Ward, Humphry, Mrs., 1851–1920</author>
<publisher>London; New York: Cassell, 1909. 315 p.</publisher>
</bibl>

Note the cunningly constructed @n attribute supplying a key which, all things being equal, should match an entry in the Bassett database, if there is one.

2. The ebooks@adelaide project

This is a quirky site hosted by the University of Adelaide library which makes many texts available in a well-structured XHTML format that is easy to munge into TEI. For info about this apparently little-known but rather splendid project see https://ebooks.adelaide.edu.au/about/
Selecting relevant titles for our purposes is not so easy, so I grabbed a readable version of their entire catalogue in HTML from the website, and then grepped through it for potentially useful entries, resulting in a file full of lines like this:

<li><a href="/a/abbott/edwin/flatland/">Flatland: a romance of many dimensions / Edwin A. Abbott [1884]</a></li>

along with a lot of less useful lines like this

<li><a href="/a/aristotle/meteorology/">Meteorology / Aristotle; translated by E. W. Webster</a></li>

A perl script was the quickest way of munging this into minimal TEI entries like this:

<bibl><title>Flatland: a romance of many dimensions </title><author>Edwin A. Abbott </author> <date>1884</date><ref>/a/abbott/edwin/flatland/</ref></bibl>
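Something along these lines does the trick (a sketch, not the actual perl; the regular expression assumes the markup is exactly as regular as the examples above, that titles never contain a slash, and that the grepped file is called adelaide-catalogue.html):

 sed -nE 's|.*<a href="([^"]+)">([^/]+) / ([^[<]+)\[?([0-9]{4})?]?</a>.*|<bibl><title>\2</title><author>\3</author> <date>\4</date><ref>\1</ref></bibl>|p' adelaide-catalogue.html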

I am not sure that there is much wheat of this kind in this chaff however: I will come back to it later.

3. And then there’s Gutenberg

Yes, the Gutenberg Project also has a catalogue: a huge one, which you can download as an incomprehensible RDF file or a plain text monster. I took the latter option, and hacked out of it 49,639 lines like this:

<title>100 Desert Wildflowers in Natural Color</title><author>Natt Noyes Dodge</author><idno>54631</idno>

I won’t rehearse here the manifold inconveniences of using Gutenberg texts. But it would be useful to come back to this and see how many of the Bassett titles are available here. As a first approximation, I first reduced the title list to just the first part of the title and the author’s surname, with no spaces or punctuation, and then used the invaluable unix comm command to identify common lines in the two files. Obviously, this procedure is vulnerable to typos and inconsistent editorial practices, which are not uncommon, but in my first experiment I found 984 items with identical titles and authors in both the Gutenberg title list and the Bassett title list.
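In outline (with invented filenames), the comparison is just:

 # one normalised key per line in each file; comm needs its input sorted
 sort -u bassett-keys.txt > bassett-keys.sorted
 sort -u gutenberg-keys.txt > gutenberg-keys.sorted
 # print only the lines common to both files, and count them
 comm -12 bassett-keys.sorted gutenberg-keys.sorted | wc -l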

4. Internet Archive

This wonderful collection has a good search interface, spoiled somewhat by the unreliability of the data. I used it to grab all the entries for a collection called “19thcennov” which looked promising. Sorted by descending date, the very first item had a date of “1983” which, on inspection, turned out to be a typo for “1883”, so a good thing I didn’t use “date” to limit my search. OTOH, the second item in the list was something called “Mathematics in urban science, V : Catastrophe theory” published by the “Monticello, Ill. : Council of Planning Librarians”. No matter: at least the output is in XML, and the text identifiers used by the IA bear a striking resemblance to those I thought I had invented. My search gives me 7829 items which look like this:


<doc>
<str name="creator">North, William, d. 1854</str>
<str name="date">1847-01-01T00:00:00Z</str>
<str name="identifier">impostororbornwi03nort</str>
<str name="language">eng</str>
<str name="publisher">London : T.C. Newby</str>
<str name="title">The impostor, or, Born without a conscience</str>
<str name="volume">3</str>
</doc>

which I then munge into

<bibl xml:id="impostororbornwi01nort" n="theimpostor|North">
<author>North, William, d. 1854</author>
<title>The impostor, or, Born without a conscience</title>
London : T.C. Newby</bibl>

The IA gives each volume a separate catalogue entry, so these 7829 entries boil down to only 2739 unique keys which I can compare with the Bassett keys. On a first pass through I identify 1235 matches, i.e. nearly half. I also identify lots of ways of improving the matching procedure, but for the moment this seems like it might be useful. Though I am not sure that a success rate of around 10% (relative to the full Bassett list, that is) in identifying matches is altogether worth shouting from the rooftops about.
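For the record, the search itself was a single request to the Archive’s advanced search endpoint, something along these lines (reconstructed from memory, so the parameter spellings and the rows limit are assumptions to be checked against the current API documentation):

 wget -O 19thcennov.xml 'https://archive.org/advancedsearch.php?q=collection%3A19thcennov&fl%5B%5D=identifier&fl%5B%5D=creator&fl%5B%5D=title&fl%5B%5D=date&fl%5B%5D=publisher&fl%5B%5D=language&fl%5B%5D=volume&rows=10000&output=xml'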

Digitalising in Le Mans and eating in Oulx

Monday morning, July 3rd and here I am again at St Pancras… waiting for the Eurostar to Paris which is unexpectedly 30 mins late leaving, and therefore, likewise arriving. But I have plenty of time to get across Paris by metro to Montparnasse Le Bienvenu, and then by surface street, very hot and sunny, to the grand station where the TGVs hang out. How pleasant, a train system which works, I reflect as it zooms across the unutterably boring flat countryside of the Ile de France, and so to Le Mans, where I am due to address a DARIAH “Humanities At Scale” (whatever that means) funded workshop which rejoices in the title of Bibliotheca Digitalis, though I am pretty sure it will have nothing to do with foxgloves. Out of the station, five minutes up the road, and into the Hotel Chantecler, as advertised. Various emails insist I will be taken out for dinner by and with chums from Tours, and so it comes to pass. It’s time to be lionized, for tomorrow I am lecturing.

July 4th

The morning starts with a (guided) walk down to the Médiathèque Louis Aragon, which is a newer and less brutally modernist building than the Maison de Culture we pass en route, and which is also in a state of unexpected chaos because of a major reshelving exercise. All the paperbacks and DVDs and what have you are spread out on tables under the watchful glare of lady librarians. But this is nothing to do with us: we are in a nice lecture room where we are being welcomed in broken English by various dignitaries as per usual. I am second on the bill, after Jean-Yves Ramel from Tours who gives a comprehensible and accessible introduction to OCR and how it doesn’t quite work. After coffee, I pick up the theme and wax eloquent on what sort of not-quite-working we are talking about, the need to go “beyond the page” etc etc. Then we all depart for lunch in a state of advanced self-satisfaction. Lunch is across town in a nice bistrot and is enlivened by the arrival of Mark Greengrass whom I haven’t seen in a million years, when he was starting down a route he seems now to have abandoned, digitizing cultural networks in the Hartlib Papers project at Sheffield. It’s also rather tasty, as French bistrot lunches tend to be when ordered well in advance. Back at the Mediatheque, our local librarian hosts address us in French, with a somewhat hesitant on-the-fly translation from Toshi, and then we get to hear from all of the participants, most of whom turn out to be EU researchers or librarians from Italy, Romania, Bulgaria, Hungary, etc. Impossible to take in fully some thirty or so projects, presented at lightning speed (two minutes each!), thus ensuring that only the outliers stick in the memory, but I gleaned a general impression of keen competence ready to be enlarged, as well as recognizing a couple of TEI acolytes. And so to the first of the formal public lectures: a thoughtful discussion about the relative influences of social networks and of the nascent publishing industry in the production of some major 17th century works. Being public, Mark was obliged to deliver this lecture in French unlike the rest of the proceedings, and some of the good citizens of Le Mans joined us to benefit from it. Huzzah. And then it’s dinner time in a nice old restaurant, of course.

July 5th

I make no effort to arrive on time this morning, which means that the door of the Mediatheque is locked when I get there, so I have to send imploring text messages to gain entry. But those extra minutes in bed were worth it. Somewhat to my surprise, the two technical briefings being given (one from Aurelian Ruellet on how social networks should really correctly be modelled; the other from Eduard Frunzeanu and Régis Robineau on how IIIF works) are both informative and accessible. I don’t think the former mentioned anything that the TEI does not already support; and the second reminded me how overdue is proper discussion of a bridge between TEI and IIIF world views. This led me to interrogate Robineau over lunch, perhaps a tad too aggressively. After lunch, we moved to another room for the first proper workshop session in which participants were invited to do some data modelling based on a 17th c register of permissions to print emanating from the royal Chancellerie in Paris, which they did enthusiastically and in groups. Most groups decided to design a database structure to hold extracts from the document, but a couple of hardy souls did make the effort of thinking about how they’d model the document itself. The evening offered a fairly full social programme, starting with a visit to the celebrated Le Mans abbey, followed by a civic reception, and concluding with a guided tour of the old town. I behaved fairly deplorably at all of these, firstly by turning up at the wrong cathedral (I blame Pierre-Yves, who bought the beers); secondly by drinking and eating too much; and thirdly by sloping off early as soon as it became apparent that serious walking was required. Which is a pity, since the bits I did manage to see of the Plantagenet old town looked really rather nice. The same, indeed, might be said of the whole workshop. (See further my photo album.) But I have a scheduling conflict, which means that I have to skip back to Paris the next day.

July 6th

I rise late and take a lingering breakfast, before packing in a leisurely way and accidentally stealing the Hotel Chantecler’s TV remote. And then back to the station for the 1134 to Paris. It is still infeasibly hot and the countryside is still rather dull so I work on the DifDePo schema resolutely and don’t look out the window much. Back in Paris, I catch the bus to Gare de Lyon, walk to my hotel (de Venise, in an interesting little neighbourhood), dump my bag, and only then realise that there isn’t a bus from there to anywhere near rue Monge, where I am due for lunch in, er, 7 minutes’ time. Bother. I suddenly remember the existence of Parisian taxis (quite plentiful around the gare de Lyon) and persuade one to take me to the village Monge where Helene and the plat du jour are waiting, arriving a mere 30 minutes late, quite normal by Parisian standards, I am assured. And so to a somewhat fraught meeting on the 4th floor of Censier, where I plead for some attention to the XML validation of the DifDePo transcriptions. Nice beers with Marc and Chiara afterwards: it’s still very hot. Then to BVH where I fail to find a nice cool shirt and am assured that there are no more linen shirts in Paris, because of the heat. And so to the Gare du Nord in time to meet L, and then shepherd her by RER (very hot) to the Gare de Lyon, and then back to the Hotel de Venise. After inspecting the local eateries, we settle on a small Japanese supper nearby and retire to bed.

July 7th

Up early to catch TGV to Oulx, but we dilly-dally (did I say that it was already quite hot?) and SNCF mysteriously decide to make the train leave four minutes earlier than advertised, just as we arrive at the wrong end of platform 15, and realise that it contains two TGVs stuck together, both of which are about to leave, but only one of which has seats for us. Damn and blast. We pile into the cattle-class carriage of the wrong TGV, and resign ourselves to two hours of (actually quite well-behaved) excitable children en route to a colonie de vacances somewhere near Grenoble. We escape at Lyon, and find our comfy seats in the right TGV, though they are facing the wrong way for Lilette to enjoy the (very scenic) route from Grenoble to the Alps. The Residence du Commerce in Oulx is opposite the railway station and next door to a nice bar where we refresh ourselves while waiting for the hotelier lady to appear. All very peaceful. Oulx is a tiny place, boasting a scenic bridge over a river, some snow-topped mountains, and one street of touristic shops, none of which has a shirt to offer me nor even soap to wash one of the numerous dirty ones I have now accumulated, but never mind. On Guy’s recommendation, we reserve a table at the Ristorantino La Stella, down the road from the station, which turns out to be really rather nice. It is run by a serious Sardinian gentleman, aided by two cheerful ladies, and a third who is his wife. There is only space for about six tables, and the food is cooked to order. We had a salad, which reminded me why one should only eat Italian tomatoes, followed by rabbit and polenta, which reminded us that rabbit can be tender and juicy. I revealed my complete ignorance of Italian history by wondering aloud why a Sardinian should be living in Piedmont, and the proprietor’s wife (who comes from Bologna anyway) was too polite to explain.

We have come to Oulx, if you were wondering, not just because it has a curious name, but also because it is a sensible place to break the journey to Liserna (where we are now headed), being the first civilised town you encounter on the Italian side of the border.

Notes towards a definition of TEI Conformance

[diagram of blobs not reproduced here]

Each of the blobs here represents three subtly different things:

  • an ODD : that is, a collection of TEI specifications
  • a formal schema generated from that ODD, and its natural language documentation
  • the set of documents considered valid by that schema.

The TEI provides TEI All : a set of over 500 uniquely identifiable elements, classes, attributes, etc., and a schema in which they are all permitted. For all practical purposes a user of the TEI must make a selection from this cornucopia, and we call that selection a TEI subset. Of course there are many, many possible TEI subsets, each making different choices of elements or attributes or classes, but the sets of documents which each consequent schema will validate all have in common that they will also be considered valid by the schema TEI All.

A user of the TEI may however do more than simply choose a subset of the provided specifications. They may also provide additional constraints for aspects of an encoding left underspecified by the TEI, for example by requiring that attribute values be taken from a closed list of possible values rather than being any syntactically valid token. They may simply change the datatype of an attribute, for example from a string to an integer or a date. They may also provide an alternative identifier for an element or an attribute, for example to change its canonical English name for one from another language. In some cases, attribute value changes are equivalent to a subsetting operation; in others not. Renaming operations never result in a subset: a document in which the element names have all been changed to their French equivalents cannot be validated by an English language version of TEI All. A user of the TEI can also change the content model or the class membership of existing TEI elements, in ways which may or may not be equivalent to a subsetting operation.
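By way of concrete illustration, here is the kind of ODD one might write for such a customised subset: a small selection of modules, plus a closed value list for the @type attribute on div. Everything in it (the module choices, the attribute values, the identifier) is invented for the example; it is a sketch of the mechanism, not a recommendation.

<schemaSpec ident="TEI-novels" start="TEI">
 <moduleRef key="tei"/>
 <moduleRef key="header"/>
 <moduleRef key="core"/>
 <moduleRef key="textstructure"/>
 <elementSpec ident="div" module="textstructure" mode="change">
  <attList>
   <attDef ident="type" mode="change" usage="req">
    <valList type="closed" mode="replace">
     <valItem ident="chapter"/>
     <valItem ident="preface"/>
     <valItem ident="letter"/>
    </valList>
   </attDef>
  </attList>
 </elementSpec>
</schemaSpec>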

We use the term customised subset for all these kinds of personalisation because they result in something which is not necessarily a further subset of the TEI subset concerned, but a further modification of it. In the general case, their conformance with TEI All can be determined only by inspection, and their validation may require some additional processing.

Finally, a user of the TEI is at liberty to define entirely new elements and attributes, and to make such components members of existing TEI classes so that existing TEI elements may refer to them. They may also modify the content models of existing TEI elements to refer explicitly to such new elements. This results in an extended subset, since it contains elements or attributes additional to those provided by the TEI All schema. Such additional components should always be labelled as belonging to a non-TEI namespace. A processor can then determine that these components may be left out of consideration when determining the validity of a document with respect to TEI All.

In addition to these formal considerations, TEI conformance involves attention to some less easily verifiable constraints, specifically the twin requirements of honesty and explicitness. By honesty we mean that elements in the TEI namespace must respect the semantics which the TEI Guidelines supply as a part of their definition. By explicitness we mean that all modifications (i.e. both customized and extended subsets) should be expressed using an ODD to document exactly how the TEI declarations on which they are based have been derived. (An ODD need not of course be based on the TEI at all, but in that case the question of TEI conformance does not arise.)

Formally speaking, we can say of a conformant TEI document :

  • it must be a well formed XML document and
  • it is valid against the TEI All schema :
    • without modification (it is a TEI subset), or
    • after deletion of any elements it contains which are not in the TEI namespace, together with their children irrespective of namespace (it is a TEI extension), or
    • after application of any canonicalization algorithm specified by its associated ODD (it is a TEI customized subset)

The purpose of these and similar rules is to make interchange of documents easier. They do not however guarantee it, and they certainly do not provide any guarantee of interoperability. Unlike many other standards, the goal of the TEI is not to enforce or impose consistency of encoding, but to provide a means by which encoding choices and policies may be more readily understood, and hence (to some extent) algorithmically comparable.

(Another reason) Why I love the Internet

So, at the recent TEI Conference in Vienna, Elisa and I were indulging in a little mutual admiration over our knowledge of an obscure work entitled Thalaba the Destroyer by the early English Romantic poet called Robert Southey (rhymes, as any fule kno, with “mouthy”). So when I got back home, I went to look for the volume containing said work which I dimly remembered having on my shelves, in the decrepit-but-too-nice-to-throw-away section. And sure enough, there it was. The front board has come loose, but the first three openings look like this:

[images: front board, half title, and title page]

Having scanned those first few pages, I naturally asked Mr Google what he knew about the matter. And was thus able rapidly to confirm :

  • My copy of Thalaba is the cheap reprint (two volumes in one) published
    by Vizetelly and Beeton in 1853. There is a Google-scanned version of the same edition, available from the Internet Archive. They have included with it a couple of pages of advertisements for other works published by Clarke Beeton (pp. 7 and 8) which are missing in mine, however.
  • What seems like another copy of the same edition is currently on sale at Abe Books for the startling sum of $199.76. Mine is in poor condition, which is why it only cost me half a crown back in 1967, when I used to frequent Oxford’s second-hand bookshops (there aren’t any to frequent these days).

As you may have noticed above, my copy also contains several signs of its previous owners. As well as the bookplate and the inscription above, there’s a nice message from Aunty Sarah, the donor, opposite the preface:

[image: Aunty Sarah’s message opposite the preface]

And there’s also an intriguing note from “JB” dated some twenty years later, opposite the start of the poem proper.

[image: the note from JB opposite the start of the poem]

So… what have we learned? Rosamund was given this book by her aunt, Sarah Brent, in 1860. And in 1882, her husband felt compelled to record his own experience of the Eastern exotic in the same book: “We met at Persepolis an Arab maiden of most lovely form and features — she was a dream of beauty never to be forgotten”. What she made of it, one can only conjecture.

But why I love the Internet is that (pondering these matters after breakfast this morning) it has helped me place these people a little more precisely in time and place. A search for “Rosamund Borrowman” told me that the 1861 Census shows a person of that name, born 1825 in Kent, married to John Borrowman, born 1830 in Midlothian, residing in Middlesex in 1861. The ancestry.co.uk site where I found this record is pay-walled, so no further details are available, but that seems reasonably plausible.

And searching for “Rosamund Borrowman John” I was able to find a record of her death. Some industrious volunteers have been surveying the gravestones of a place called Hambledon in Surrey, and there she is: “Rosamund Vertue the beloved wife of John Borrowman. She died 25th August 1895. Also the above named John Borrowman son of Robert Borrowman born in Edinburgh 3rd April 1830 died at Hambledon 4th July 1906. Also Elizabeth daughter of the above died 22nd October 1932 aged 72 years”. It’s all in the spreadsheet.

My next step, obviously, will be to find out where Hambledon is, and whether you can get there by train. Maybe.

Rompons avec le ronron techno-productiviste des institutions!

Back in August or September, I remember bleating anxiously on this blog about having rashly agreed to give a talk on les Humanités Numériques as part of a seminaire “Avenue Centrale” organised by the MSH in Grenoble. I can now report that eventually (in the English sense) I did manage to get some old slides updated and licked into shape, aided by the four-hour T-not-so-GV journey to Grenoble last December, and a week or so of being unbearable to myself and everyone else around me. The slides duly appeared on Slideshare, and the folks at Avenue Centrale have even published a nice video and a podcast of me delivering them, but that’s not nearly as interesting as what actually happened on the day.

Coming to terms with an implausibly pink armchair

On 20th December, after a meeting of the conseil scientifique du MSH Alpes, I found myself ensconced in an implausibly pink fauteuil, clutching a microphone, and ready to go, having delayed the obligatory 30 minutes for bigwigs to turn up, when there was a minor kerfuffle as the organisers realised that a bunch of scruffy students were busy at the front door handing out an A2-sized pamphlet promisingly titled Humanités Numériques: Gare à la propagande!!!

The source of the pamphlet, which characterizes me as un petit soldat de la conversion au numérique des Humanités, was subsequently tracked down online by one of the French DH twitterati (à savoir, Martin Grandjean) within a few minutes of my tweeting this image of it after the show. Aside from the distribution of the pamphlet, the promised Action-critique took the form of three or four extra persons attending my lecture, one of whom also gave a brief speech deploring the industrial and social cost of mass digitization (I think) during the Q&A session. An agreeable though brief debate ensued, none of which sadly seems to have made it to the published version of the video, and we then all adjourned for coffee and horrible sandwiches downstairs, during which I was able to continue to chat amicably with the protesters, though the term seems barely appropriate. I learned that these were actually eco-warriors with concerns about the way big business was driving technology into inappropriate places (there have been somewhat critically received plans to hand out tablets to all school children, in an interesting reprise of the UK Government’s BBC Micro initiative in the 1980s). On my way out I also tried to take some photos of the activists using my new tablet, which involved much banter and cursing, as I have barely mastered this new device. Out of deference to their desire for anonymity, the photos will have to stay in my personal archive for a few more decades though.

Tidings of this unusual event caused a (very brief) flurry of excitement on twitter. Frederic Clavert was a bit peeved to find that his logo had been appropriated for the pamphlet; others were disappointed to find no coherent plan for action in it. And there were also (tee hee) expressions of extreme jealousy from a few of my DH colleagues — Moi aussi une affiche! A brief sample of my first significant “moment” in the political history of DH in France (Marjorie told me that’s what it was) follows.

[screenshots of tweets]

Data versus Reality

… is not the title of the book I’ve been re-reading this week, though it might well be. Bill Kent’s Data and Reality was first published in 1978, and comes from the heroic age of database design and development, a period when such giants as Astrahan, Chen, Chamberlin, Codd, Date, Nijssen, Senko and Tschritzis were slugging it out over the relative merits of the relational, network, and binary database models and the abstractions they supposedly modelled : a struggle predominantly over terminology and ways of thought since (as Kent shows) almost all of these differently named and passionately advocated models were fundamentally very similar, differing only in the specific compromises they chose when confronted by the messiness of reality. Whether you call it a relation or an object or a record, the globs of storage handled by every database system were still records, combinations of fields containing representatives of perceptions of reality, chosen and combined for their utility in a specific context. The claim that such systems modelled reality in any complete sense is easy to explode; it’s remarkable though that we still need to be reminded, again and again, that such systems model only what it is (or has been) useful for their creators to believe. Kent is sanguine about this epistemological lacuna : “I can buy food from the grocer, and ask a policeman to chase a burglar, without sharing these people’s view of truth and beauty”, but for us, living in an age of massively interconnected knowledge repositories, which has developed almost accidentally from the world of more or less well-regulated corporate database systems, close attention to their differing underlying assumptions should be a major concern. This applies to the differently constructed communities of practice and knowledge which we call “academic disciplines” just as much as it does to the mechanical information systems those communities use in support of their activities.

In its time, Data and Reality was remarkable for introducing the idea that data representations and the processes carried out with them should be represented in a unified way, the basic idea of what we now call object-oriented processing; yet it also reminds us of some fundamental ambiguities and assumptions swept under the carpet even within that paradigm. Are objects really uniquely identifiable? “What does ‘catching the same plane every Friday’ really mean? It may or may not be the same physical airplane. But if a mechanic is scheduled to service the same plane every Friday, it had better be the same physical airplane.” The way an object is used is not just part of its definition. It may also determine its existence as a distinct object.

Kent’s understanding of the way language works is clearly based on the Sapir-Whorf hypothesis: indeed, he quotes Whorf approvingly: “Language has an enormous influence on our perception of reality. Not only does it affect how and what we think about, but also how we perceive things in the first place”. There is an odd overlap between his reminders about the mocking dance which words and their meanings perform together and contemporaneous debates within the emerging field that Wilks has charmingly characterized as “Good Old Fashioned AI”. And we can also see echoes of similar concerns within what was in the 1970s regarded as a new and different scientific discipline called Information Retrieval, concerned with the extraction of facts from documents. Although Kent explicitly rules text out of discussion (“We are not attempting to understand natural language, analyse documents, or retrieve information from documents”), his argument throughout the book reminds us that data is really a special kind of text, subject to all the hermeneutical issues we wrongly consider relevant only to the textual domain.

This is particularly true at the meta-level, of how we talk about our data models, and the systems we use to manipulate them. Because they were designed for the specific rather than the general, and because they were largely developed in commercially competitive contexts, the database systems of the 1970s and 1980s proliferated terms and distinctions amongst many different kinds of entity, to an extent which Kent (like Ockham before him) argues goes well beyond necessity. This applies to such comparatively arcane distinctions as those between entity, attribute, and relationship, or between type and domain, all of which terms have subtly different connotations in different contexts, though all are reducible to a more precise set of simple primitives. It applies also (and here the TEI in me sits up and smirks) to the distinction between data and metadata. Many of the database systems of the eighties and nineties insisted that you should abstract away all the metadata for your systems into a special kind of database variously called a data dictionary, catalogue, or schema, using entirely different tools and techniques from those used to manipulate the data itself. This is a needless obfuscation once you realise that you cannot do much with your data without also processing its metadata. In more recent times, one of the more striking improvements that XML made to SGML was the ability to express a schema and the objects it describes using the same language. Where what are usually called the semantics of an XML schema should be described and how remains a matter which only a few current XML systems (notably the TEI) explicitly consider.

Kent seems to have been a modest and likeable man. He retired in 2000, and died five years later, leaving a legacy of accessible and still provocative papers, most of them available from his website. Like many other pioneers in computer science, his academic qualifications come from unrelated fields (in his case, chemical engineering and maths); like many others he worked long hours for IBM and HP, but achieved fame and intellectual satisfaction outside the corporate world in the development of industry standards and professional associations. Maybe that experience is also what underlies the much-quoted paragraphs which end his book:

[image: the closing paragraphs of Data and Reality]

Unexpected adieux

This sunny Sunday morning sees me setting off for a couple of weeks of TEI workshops, one in Paris, one in Graz. Nothing unusual there, nor in the fact that one is better prepared for than the other. But it has been an unusual week all the same, with two deaths and possibly a new beginning. The deaths first, since they are more difficult to write about. They perturb habitual patterns, making me confront and try to articulate parts of life that are hard to fit into a public blog, yet belong there in the absence of any other personal journal. (I say “public” but doubt that anyone except me reads this).

On Tuesday morning, I received from my friend Guy in Italy a text message saying that his partner Daniela had suffered a stroke and was in a coma; 24 hours later came another announcing her death. It is hard to react adequately to such events at a distance, and particularly so by text message, so I am waiting for a later, less painful time to talk to Guy. I learned from a mutual friend that the funeral was yesterday. I don’t want to obituarize, but Daniela was a very generous and very affectionate person, as well as a fiercely independent one. I am very glad that she did not stay long in her coma, nor return from it badly scarred; I am also very glad that the last time I saw her was at a joyful family occasion in London.

On Saturday evening, yesterday, I received an email informing me of another death, also coincidentally on Tuesday: Chris Sheppard, in whose company I passed my adolescence and early twenties, chasing the same girls, crashing the same teenage parties, growing up to pursue not the same but similar academic careers. Chris was the first person in my school to know where to buy Levi 501s and how to shrink them to fit (in the bath). He introduced me to the works of Raymond Chandler and the collection of cigarette packets. He was far too cool to take fashionable drugs at Oxford, but was on good terms with those who did. It was largely following his example that I returned to Oxford to take my master’s degree in 1969, a year behind him. As graduate students we shared a rented hovel in Stanton St John (chemical toilet, coal fire, wall-to-wall books) for a year during which Chris taught me almost everything I know about literary scholarship and the love of books, not by precept, but simply by example. I was best man at his wedding back in 1976, but our paths diverged thereafter. At his retirement a couple of years ago, he was head of special collections at Leeds University’s Brotherton Library, where I remember visiting him and being shown some of his more recondite treasures (a lock of Mozart’s hair, Conan Doyle’s photos of faery folk); I think the last time I saw him in person must have been at a lunch with P.N.O. Pullman some time in the 90s. Now, of course, that it is too late, I regret bitterly even the dwindling flow of Xmas Card exchanges and the fact that my last email with him was more than six years ago.

As to the new beginning — well, it seems a small thing in this context, but I am now feeling quite positive about the idea of buying a house some distance from the back of beyond in rural France. A specific house, that is, of which perhaps more anon. But for now, I will go back to worrying about tomorrow’s training course at the EPHE in Paris, and that in Graz a week later.

A trip to La France Profonde

So this week I have mostly been not thinking about writing academic papers at all, which may or may not be a good thing. Instead I spent the first part of the week tidying up materials for the next TEI training course, which is now pretty well polished, and also for the one after that, which is not. The process of thinking about what materials to use follows a fairly recognisable pattern in which ambitious optimism (I’m going to completely revise this bit, make up something new and exciting, strike out into unknown territory) has to eventually give way to pragmatic opportunism (I’ve got this already, it just needs checking, minor tweaks, translating). When I am preparing two courses which are due within a few weeks of each other, this means that the first course moves on to the second stage while the second one is still rejoicing in the first stage. Which was the case this week. Oh, and about the only thing I have in common with Sam Beckett is that I can no longer say whether my material is originally French translated into English or the reverse, since most of it has been through the process both ways several times.

Aside from that, I spent most of the week on trains, or other forms of transport, on an expedition to La Vergne, returning at the weekend via Nottingham. As follows:

              journey                             depart  arrive  by
Weds 27 Aug   home                                0910            on foot/bus
              Oxford to Paddington                1001    1130    train
              London St P to Paris Nord           1225    1547    eurostar
              Paris Austerlitz to La Souterraine  1652    1921    IC 3655
Thurs 28 Aug  La Souterraine to Gueret            0930    1005    TER bus
              Gueret to La Vergne                 1100    1130    taxi
              La V to Bussieres Dunoise                           nice walk
              Bussieres to La Souterraine         1445    1615    taxi
Fri 29 Aug    La Souterraine to Paris A           1038    1318    IC 3620
              Paris Nord to St Pancras            1513    1640
              Kings Cross to Grantham             1719    1827
              Grantham to Nottingham              1857    1934
Sun 31 Aug    Nottingham to Oxford                1310    1546

And what have we learned? It’s possible to get to and from La Vergne by train within a day for about £400 return (less if you cash in some Eurostar points). There’s not much happening in La Souterraine, and even less in Bussières-Dunoise. Guéret seems like a decent-sized town, though, and is accessible by train from at least two different directions. Generally speaking, the Creuse is not the back of beyond: it’s behind the back of beyond. There are many cows, and many hills. There are probably no decent restaurants for miles. There used to be a railway to transport potatoes and beef to Paris, but they took it away years ago, and now all that’s left is a rather nice rural track which has the merit of avoiding most of the aforesaid hills. And there’s a lake behind the house, where you can fish but not (allegedly) swim.

Genetic editors, please note

After finishing last week’s entry about my rash commitment to write a book chapter, I secretly vowed to monitor my progress by producing weekly reports here. I then spent the entire week (when not shopping, eating, or sleeping) working on next month’s TEI course in Paris, essentially a revision of the one I gave in May. Almost, but not quite, because halfway through the week, I received another reminder of another rashly-made commitment, this time to deliver a public lecture in Grenoble in December. I promptly dashed off the following proposition :

Ceci n’est pas une pipe: l’importance de la modélisation aux humanités numériques

Lou Burnard

Récemment, on a vu emerger de l’ombre de la inter-disciplinarité une discipline nouvelle qui s’appelle les Humanités Numériques (Humanités Digitales en Suisse, Digital Humanities ailleurs). Elle represente d’abord la confrontation, et ensuite l’adaptation aux méthodes et possibilités des technologies nouvelles de l’entreprise intellectuel et scientifique de toute la domaine des sciences humaines. Ces technologies comportent notamment l’informatique, mais aussi de la statistique, de la linguistique computationelle, et de la visualisation des données. Mais en effet cet emergence ne serait qu’une évolution, voire une continuation, d’un débat assez vieux – déjà percéptible au 19ème siècle — qui opposerait les sciences dures aux sciences humaines. Dans cette intervention, je propose que cette opposition semble d’origine plus sociale que méthodologique, et que les méthodes des SHS et les méthodes des sciences dites dures ne sont pas tellement loin l’une de l’autre. C’est la création, l’évaluation, et la manipulation des modèles et des hypothèses qui caractérise tout effort d’élargissement de science, et la modélisation comme processus abstrait donc devrait être au centre de nos disciplines, qu’il s’agit de la modélisation des structures textuels et linguistiques, de la modélisation des procédures informatiques, ou de la modelisation du monde physique.

Né en 1946, Lou Burnard a pris son DEA en littérature anglaise du 19e siecle à Oxford en 1971. De 2002 a 2012, il est Directeur-adjoint aux Services informatiques de l’Université d’Oxford où il s’occupait des applications informatiques dans les domaines des sciences humaines depuis des années, surtout en linguistique de corpus (British National Corpus), en bibliothèque numérique (Oxford Text Archive), et en l’encodage de textes. Actuellement retraité, il est reconnu comme expert dans ces domaines. Il a travaillé en France comme prestateur de services aux agences Adonis et Hum-Num et ailleurs en France: il est membre des Comités Scientifiques des Maisons de Sciences de l’Homme à Caen et à Grenoble.

Cunning, or what, I said to myself : if I have to produce a chapter in English on a topic I know nothing about, I might as well repurpose it in French and get good value for money. And then, just to be on the safe side, I ran this text by my friend Marjorie, who is a native French speaker amongst many other good qualities, and thus well placed to tactfully remove the many barbarisms in this first draft. I was duly humbled by her response :

Ceci n’est pas une pipe : l’importance de la modélisation pour les humanités numériques
Lou Burnard

Récemment, on a vu émerger de l’ombre de l’inter-disciplinarité une discipline nouvelle qui s’appelle les Humanités Numériques (Humanités Digitales en Suisse, Digital Humanities ailleurs). Elle représente, pour tout le domaine des sciences humaines, la confrontation puis l’adaptation aux méthodes et possibilités des technologies nouvelles. Ces technologies comprennent notamment l’informatique, mais aussi la statistique, la linguistique computationelle, et la visualisation de données. Mais cette émergence ne serait en fait qu’une évolution, voire une continuation, d’un débat assez ancien – déjà perceptible au 19ème siècle — qui opposerait les sciences dures aux sciences humaines. Dans cette communication, j’avance l’idée que cette opposition semble d’origine plus sociale que méthodologique, et que les méthodes des SHS et celles des sciences dites dures ne sont pas si éloignées. C’est la création, l’évaluation, et la manipulation des modèles et des hypothèses qui caractérise tout effort d’élargissement de la science, et la modélisation comme processus abstrait devrait donc être au centre de nos disciplines, qu’il s’agisse de la modélisation des structures textuelles et linguistiques, de la modélisation des procédures informatiques, ou de la modélisation du monde physique.

Né en 1946, Lou Burnard a obtenu son DEA en littérature anglaise du XIXe siècle à Oxford en 1971. De 2002 à 2012, il a été Directeur-adjoint du Service informatique de l’Université d’Oxford, où il s’occupait depuis des années de l’applications de l’informatique au domaine des sciences humaines, surtout pour la linguistique de corpus (British National Corpus), les bibliothèques numériques (Oxford Text Archive), et l’encodage de textes. Actuellement retraité, il est un expert reconnu de ces domaines. Il a travaillé en France comme prestataire de services auprès d’Adonis et Huma-Num, et ailleurs en France : il est membre du Comités Scientifiques des Maisons de Sciences de l’Homme de Caen et de Grenoble.

Suitably chastened by this salutary reminder that my command of the French language is not as perfect as might be wished for, I removed the green ink, and sent it off to Grenoble, from which I rapidly received the following reminder that sometimes less is more :

Le résumé que vous nous avez envoyé est de fait plus important (environ 1300 caractères), je vous propose donc
(pour la version papier uniquement, la version web pouvant elle rester plus développée) de le réduire quelque peu. Seriez-vous
d'accord pour que, par exemple, nous enlevions la partie finale (cf proposition ci-dessous) et les déclinaisons autour du nommage
((Humanités Digitales en Suisse, Digital Humanities ailleurs)  ou préférez-vous le retoucher vous- même ?

Pour brochure : "Récemment, on a vu émerger de l'ombre de l'inter-disciplinarité une discipline
nouvelle qui s'appelle les Humanités Numériques. Elle représente pour tout le domaine des sciences humaines la confrontation, puis
l'adaptation aux méthodes et possibilités des technologies nouvelles. Ces technologies comprennent notamment l'informatique,
mais aussi la statistique, la linguistique computationelle, et la visualisation de données. Mais cette émergence ne serait en fait
qu'une évolution, voire une continuation, d'un débat assez ancien – déjà perceptible au 19ème siècle -- qui opposerait les sciences
dures aux sciences humaines. Dans cette communication, j'avance l'idée que cette opposition semble d'origine plus sociale que
méthodologique, et que les méthodes des SHS et celles des sciences dites dures ne sont pas si éloignée."

That’ll teach me. Maybe.

Metamodelling through : the prolegomena

So back in February I was asked to contribute a chapter to a new book being confected by some top people in the domain of the digital humanities, an invitation which I naturally accepted with alacrity, and only a small sense of alarm. I admit: I was flattered, though naturally also felt it was about time my eminence was recognised in such a way.

Dashing off an abstract is an easy task, so I did that, and then forgot all about it. Here’s the abstract. Like other such pieces, it promises much, and even gets mildly polemical towards the end, which seemed to do the trick, as the proposal was, in due course, accepted.

 

Where do metamodels come from and how do they survive?
Lou Burnard

There is a very old joke about standards which says "Standards are a
 good thing because there are so many to choose from". Like many old
 jokes, this plays on an internal contradiction (the structuralist
 might say "opposition") in its topic. Standards are, on the one hand,
 of most benefit to the extent that they reflect and facilitate
 diversity ; on the other, they are of necessity managed or even
 imposed by a centralising authority. This contradiction is
 particularly noticeable when the process of standardisation has been
 protracted because the technologies concerned are only gradually
 establishing themselves. We see this tension even in consumer
 electronics where there is a financial market-driven imperative to
 establish standards as rapidly as possible; but the same tension
 underlies the gradual evolution of ways of thought via communities of
 practice into de facto and (eventually) "real" standards. This article
 explores the evolution of standards for data modelling methodologies
 with regard to this tension. It considers some significant early
 experiments with the application of data modelling techniques to
 humanities research data (Manfred Thaller; J-C Gardin) and discusses
 to what extent some researchers simply adopted technical standards
 emerging in the wider data processing community (relational databases,
 information modelling), while other communities strove to define their
 own models (AI, language understanding systems). It will present in
 some detail the theoretical model (metamodel) underlying the Text
 Encoding Initiative's approach to standardisation and ask the question
 whether, over time, all such community-based efforts are forced
 further towards convergence and away from diversity. The TEI currently
 maintains a balance between "do it like this" and "describe it like
 this" schools of standardisation; in the long run, it therefore risks
 being superseded by advocates of the latter who distrust the former,
 or advocates of the former, who are impatient with the latter. 
Oxford, 1 Mar 2014

Summer came and summer is now going, and this particular bird is coming home to roost. I received last week a polite reminder that my manuscript should be delivered by the end of the current month, should conform to a defined house style, and would I please sign in blood the form I was sent back in April assigning my rights in this non-existent work to non-existent publishers Snipcock and Tweed ? Naturally I replied at once pleading for a stay of execution (but ignoring the rights assignment question) which was graciously accorded, somewhat to my surprise, even unto mid October. So now I really have little excuse not to find out what grand idea this abstract is abstracted from, really ought to get down to doing the research it grandly promises to summarise, and write the wretched piece. If only I didn’t have all those other more interesting (or less interesting but more urgent) things to do.

Well, let’s see. I plan to use this blog as a record of the painful process, just so that in years to come I can look back and see where it all went horribly wrong. At least no-one is likely to find me here.

 

 

 

What does the textual scholar require of computer science?

[Here’s the text of the talk I gave under the above title at the Fifth Potsdamer I-Science-Tag “Digital Humanities Meets Information Science” on 19 March. I haven’t revised it properly yet — there’s nothing like reading a text aloud for making you aware of the places where it’s wandering off into the land of waffle — but here it is anyway]

A recent diatribe by Roger Scruton (‘Scientism in the Arts and Humanities’, in The New Atlantis, Fall 2013) has got me thinking about that old chestnut “what is the digital humanities”. Scruton argues passionately and persuasively against what he terms “scientism” – the pretension to scientific method – in the humanities, reserving particular disdain for the notion of “research” in the humanities as the term is currently used by cross-disciplinary “xxxx-studies” in humanities departments across the English-speaking world. He points out that “research” in the sciences is concerned with the establishment by scientific method of evidence to support or refute a pre-existing hypothesis about the world, whereas in the Humanities it is applied to just about any kind of activity that may add to the sum of cultural knowledge at our collective disposal, or may simply act as a substitute for such knowledge. I was struck by his unashamedly Arnoldian appropriation of the term “culture”, and what follows represents some further thoughts along similar lines.

 

If science aims to deepen our understanding of “the world”, and the humanities to deepen our understanding of “culture”, we do need to find a definition for culture which goes beyond simply saying (as Scruton does) that it is about the “I and I” (probably not so much a hint of Rastafarian influence as an insistence on the subjectivity of cultural thinking), though it is true that any account of culture which ignores its effects on the individual cultural consumer will be sadly deficient. The laws of physics operate whether we know about them or not; the same cannot be said of cultural norms. And yes, of course, culture, particularly “high culture”, is a social and political construct, reflecting or reacting against the social and political power structures of the context in which it is articulated, and thus seemingly entirely contextual and contingent. But such naïve cultural relativism simply ignores the effectiveness with which the very contingency of culture also reveals, often unconsciously, its context, enabling us to construct hypotheses around the social and political norms concerned, and to assess it with reference to a wider context. The pre-occupations of human culture have not changed so much over the centuries, though different reactions to (say) birth, sexual partnership, time, death, and the construction of society are readily discernible, as are different reactions to those reactions. It seems to me that a study of culture, in the sense for which the Germans used to use the term Geisteswissenschaft, is a study of human reactions to, and constructions of, the world, and of our reactions to those constructions in turn. I further suggest that the relative merits of the various possible explanations it offers may be assessed in the same way as we evaluate purely scientific explanations.

 

A scientific explanation is valued according to the effectiveness with which it provides evidence in support of a hypothesis. If however the hypothesis is very general, for example that there is a single elegant principle governing the behaviour of space, mass, and time, it may not readily be identifiable as a hypothesis. When Eco says (in Interpretation and Overinterpretation, 1992) that we value Copernicus’ model of the Universe more than Ptolemy’s not only because the former explains aspects left mysterious by the latter, but also because Copernicus enables us to understand the reasoning behind Ptolemy, he is not simply applying a humanistic perspective, exercising the hermeneutic meme to rhetorical effect, but demonstrating that evaluation always proceeds in the same way, whether we are considering the motions of the planets, or the relative merits of 19th century pulp fictions. For cultural objects do exist in the real world, and the cultural readings which confer “cultural” status upon them are also phenomena of the real world. Hence there is nothing inherently implausible about using scientific methods to gain some understanding of their behaviour, and how they function.

We should not however fall into the trap of supposing that in applying such methods to generate “scientistic” descriptions we have exhausted all there is of value in understanding a cultural object, a work of art. The history of a cultural object includes the history of its status considered as a work of art, but its meaning goes beyond the aggregation of perceptions of it as manifested by recorded opinion. Some of those perceptions may be ill-conceived or unhelpful, failing the Eco test of greater explanatory power for example, or other conceptual norms. To read King Lear solely as a political argument about kingship ignores the greater resonance of what it has to tell us about family life. To read Hamlet solely as an instance of the vogue for “revenge” tragedies that seems to have occurred on the English stage around the turn of the 16th century seems similarly wide of the mark. Contemporary African readers of Dickens’ Great Expectations sometimes reduce it to a fable in which Pip’s innocent life as a village dweller is corrupted by wealth and social class as soon as he moves to the town. Such a reading is one which Dickens might have recognised, and which the text certainly licenses, but historically-minded critics may still feel that there is something wrong with implicitly equating the experience of a 20th century upwardly-mobile African villager with that of an imagined member of the 19th century rural poor. (Even so, a judgement we might consider inappropriate on the grounds of anachronism when applied to a specific cultural product – for example, the use of racist or sexist terms in early 20th century writings – is surely appropriate when applied to the context in which such writings are created or delivered; indeed, the writings constitute essential evidence warranting such judgements.)

Consider, for example, linguistics. Language is surely the archetypal manifestation of a cultural object, almost a metaphor for culture itself (we talk about the “language of art”, we say that paintings and poets “talk to us” in a particular way, we even talk of a “vernacular” architecture). Over the last few decades, it has become increasingly clear that new technologies have facilitated a new perspective on the ways languages are used, hence how they change, and even perhaps what fundamentally they are. Corpus linguistics emphasizes the performative aspects of language, seeking to identify recurrent, possibly unconscious, regularities of usage, patterns which demand an explanation. Some have even claimed that no linguistic structures exist beyond such regularities of usage and the patterns associated with them, that there is no such thing as “grammar” analogous to the laws of physics in the real world. Even so, some explanation has to be found for these patternings. It is not necessary to subscribe to atavistic Chomskyan theories about innate grammar to seek explanations for them in terms of some general (and falsifiable) hypotheses about how languages function, for example to explain language variation and change by reference to a principle that innovation must always show itself first as deviation and is frequently associated with an assertion of group identity, that language users always value mutual comprehension above formal coherence or adherence to predefined norms, and so on.

So we should avoid depending only on a scientifically-derived and statistically-justified assessment of the facts of cultural reception. The history of digital humanities is dotted with the corpses of over-enthusiastic systematisers, from T.C. Mendenhall’s “characteristic curves of composition” to J.F. Burrows’s reduction of Jane Austen’s style to vectors of frequency data (Computation into Criticism, OUP, 1987). This is not, of course, to say that statistical stylometry has nothing to tell us; just that it can only ever be a means to an end. The most scientific of stylometricians will always use the objective evidence revealed by their analysis in support of an entirely subjective judgment, be it about authorship or about style. As Stanley Fish did not quite say, “There is always a text in this class”: in all such judgements, the constructed text, the reading of the evidence, is the end result of the research, whether it is obtained by meticulous statistical methods or good old-fashioned introspection. And I tend to agree with Arnold, and with Scruton, that constructing such readings and transmitting them is actually the purpose of the Humanities.

 

A reading is however by no means the same kind of thing as a model.

 

[Figure: the encoding/analysis schema discussed below]

 

When I give introductory talks about the purpose and nature of text encoding, I often use the following schema to represent the distinction:

In its usual context, this schema is meant to show several things:

  1. The process of transforming resources (cultural objects such as books, paintings, historical documents etc.) into digital form is always a form of re-presentation, abstraction, reduction, reinterpretation, or encoding. Or, one might say, reading.
  2. The results of that transformation into digital form can be analysed and re-interpreted, automatically giving rise to an enriched version of that reading, which in turn can continue to be enriched by analysis in a kind of virtuous hermeneutic circle.
  3. The process of encoding, and the process(es) of analysis, must however be informed by the same abstract model.

Perhaps this is merely a long-winded way of saying that you cannot get more out of a system than you put into it, but it does suggest that the conceptual model underlying a set of readings is a different kind of thing from any of those readings and operates at a higher level of abstraction.

I freely confess that my ideas about what the humanities are or should be were formed during a distant epoch: the end of the 1960s. And my ideas about what computer science is or should be were formed during one that feels even more remote: the end of the 1970s. I have also lived long enough in the hinterland between the two disciplines to see how the intellectual territory laid claim to by either has evolved.

In the 1960s, the discipline associated with the study of English literature, at least as I experienced it at Oxford University, was going through one of its periodic fits of self-doubt. At other universities, these were tumultuous times, as the waves of what was to become known as Theory (with a capital T) began to sweep away the Arnoldian consensus that works of art existed independently from their creators and consumers and were invested with an innate cultural value. Even at Oxford, traditionally sceptical about such French (or worse, Cambridge) vulgarity as the search for theory, it was advisable to be aware of such theoreticians as Beardsley and Wimsatt, and to be able to shoot down the intentional and affective fallacies. We agreed that an understanding of its author’s intention (as far as this could be determined) did not exhaust the meaning of a work of literature, any more than did an itemisation of its effects as recorded by its readers. We felt obscurely that we needed to place works within their historical and social context, to assess the extent to which they deviated from or confirmed reader expectation at different times, but we lacked the tools to do that, other than by the painstaking process of reading and remembering many, many books. We lacked an abstract model of how literature functioned or what it was, and did not know how to construct one.

 

The computer science I encountered at the end of the 1970s, by contrast, seemed obsessed by ways of representing knowledge, and constructing models. The Codasyl network model was succeeded by the entity-relationship model, and both were blown away by the relational model, just as the giant mainframes began to be blown away by distributed networks of “personal” computers. Under the influence of large amounts of money and requirements for increasingly complex centralized information systems, these modelling techniques naturally evolved into methodologies such as SSADM (Structured Systems Analysis and Design Method, a set of standards developed in the early 1980s for systems analysis and application design widely used for UK government computing projects). It is easy to poke fun at that expansive pre-web era, in which modish re-brandings of essentially identical techniques succeeded each other with confusing regularity, always with extravagant claims of advanced capabilities just around the corner, in the “next generation” architecture. The next generation, when it actually arrived in the nineties, was distributed, decentralized, and almost entirely uninterested in all of the effort which the database designers and conceptual modellers of previous generations had put into trying to construct and impose a federated approach to the representation and storage of knowledge. (Which is why we now see the reinvention of logic programming in the form of linked data: but that is a different story.)

 

Nevertheless, like many others at the time, I found that the tools and techniques of computer science, though they might be described in terms of a particular jargon, and though their field of application might seem entirely alien, still had something to offer the humanist. Could it be that an abstract model for the way that texts and documents function – which I take to be the essential business of the humanities – might be expressed using the same language as that used to model the data flows and processing requirements of East Midlands Gas?

It seemed clear to me that texts and documents should be described from at least three perspectives:

[Figure: a text viewed as physical object, linguistic object, and intensional object]

 

as physical objects with a visual representation; as linguistic objects made up of words and phrases drawn from some kind of linguistic system; and as intensional objects with reference to real world objects, events, or entities. Most computer systems of the time tended to prefer one or other of these aspects. A word processor would help you produce nice printed copies of your documents; an information retrieval system would help you investigate their language; a database would help you describe what they were about. Systems which crossed these frontiers, enabling you to control the appearance of particular words used to describe felonious transactions in a court record, for example, were harder to find, and usually required to be custom-built, with many compromises along the way.

With the arrival of markup languages such as SGML in 1986 and XML a decade later, it became possible at last to describe a document in a detailed way independently of whichever of these three aspects was to predominate in its processing, and hence in a way that facilitated all of them equally. And with the arrival of the Text Encoding Initiative around the same time, an extraordinary adventure in document modelling got underway. Much has been written about the TEI (not all of it by me) and its significance; my favourite comment is that whatever else we may say of the TEI Guidelines, as Basil Bunting said of Pound’s ‘Cantos’, “they resemble the Himalayas: you can ignore them if you like – but you will have to go an awfully long way round.” The TEI’s relevance to the present paper is that it represented the first and so far only time that scholars from across the humanities disciplines were successfully corralled into achieving some kind of consensus about the “significant particularities” of the documents they studied. The TEI was (and perhaps remains) a unique exercise in inventorising the components of the models underlying research in the humanities, from the disparate points of view of lexicographers, linguists, critical editors, manuscript scholars, historians, literary scholars, and librarians. To find an abstract language adequate to represent such divergent perspectives within a single framework we naturally sought to apply data modelling techniques inherited from computer science, expressed not in UML or SQL but using the new text-friendly features of SGML. The rest, as they say, is history.

If the success of the TEI shows us that the modelling techniques inherent to computer science could successfully be imported and made to function within the humanities paradigm, it seems reasonable to conclude by asking whether this is a unique instance of such synergy.

So, parodying Monty Python: what did computer science ever do for us humanists? Unsurprisingly perhaps, the things that working textual scholars seem to most appreciate about the impact of information technologies on their working practices are all things that computer science as a discipline tends to take for granted. When asked a version of my working title for this piece, an English professor of my acquaintance replied:

“I think I owe the discipline a great deal… the advantages of on-line ordering before a visit to the British Library (say), and (from home) easy access to bibliographical and biographical information when preparing a book or essay ms for the press. I’ve regularly used Google to track down unattributed quotations which might otherwise have taken me ages to locate; I also use the electronic databases ECCO and EEBO, although I think the interfaces and general tractability have some way to go. I ought to add, I think, the sheer convenience of being able to assemble large and complex texts–such as editions—electronically, where relevant information comes to hand over a period of time. I only wish word-processing had been available when I completed my Ph.D. in 1979… Finally, I assume that without information science there could be no email, and without email I think that academic exchange as we know it might grind to a halt.”

This reply perhaps demonstrates how deeply embedded information science has become. The aspects selected by my colleague – networked access to information resources both of the kind traditionally held by libraries and of the kind traditionally embodied in one’s peers – constitute a change in the knowledge infrastructure, the context in which work is done. There is much to be added if we are to give an adequate account of that infrastructure: about the politics of open source access, about the alleged democratising (or to use the French word vulgarisation) of access to cultural resources, about all the ways in which the Internet has transformed our ways of knowing about the world, and the world that we know about. “Never before have so many people known so little about so much” … but these changes are driven more by commercial and social imperatives than they are by the interplay of academic disciplines which is my subject.

My colleague’s reference to word processing also hints at a more subtle change in the way that work is done itself. Of course writing on a word processor is only superficially like writing on a typewriter, just as a typewriter is only superficially like a quill pen. The extent of quantitative change in going from a machine in which making corrections is an expensive and limited process to one in which documents are never finished, such is their fluidity and plasticity, really does approximate to a qualitative change. In the 90s this occasioned anxiety about apparently fundamental shifts in the very nature of scholarly communication, even the thinking process itself, induced by the spread of new technologies. A couple of decades later, in a seemingly entirely fragmented and decentred world, drowning in media which seem to be dominated by twitter and sound bites, we do well to remember that there is a positive side to this transformation.

In placing first the availability of digital resources, however, I think my colleague hits the mark exactly. The challenge for computer science has always been to find better tools for coming to terms with information glut, whether in the form of paper archives or millions of digitized books. The success of Google may have suggested to some that the indexing and cataloguing techniques associated with classical information retrieval were entirely superannuated. But that model of the document as witness to a mode of expression, a particular discourse, suggests that such a view is premature. Indexing techniques are beginning to take on new and more sophisticated clothing, their function rebranded as text mining or text modelling. If it is the words that conspire to form the meaning of a text, we should be able to formulate new, more coherent, and better informed hypotheses about that meaning on the basis of their relative co-occurrences and absences in the immense bodies of digital text now at our disposal. To quote Ted Underwood, “The notion that documents are produced by discourses rather than authors is alien to common sense, but not alien to literary theory.” As we do so, the availability for the first time of massive quantities of digital text structured and organized in terms of our traditional models of text and textuality (rather than their purely visual properties) will enable us to make richer (and thus more explanatory) models against which to judge the salience of individual works, and in terms of which to categorise their context. Rather than looking for the proverbial needle in a haystack, we should start considering why hay is such a good home for it. I do not know whether that is a notion that computer science has fully assimilated as yet.

Poster Slamming

At the TEI 2013 member conference today, I had the pleasure of participating in the “Poster Slam”, a well established TEI ritual in which each poster-presenter is given one minute (and one slide) to introduce the topic of their poster as a means of persuading people to come to it. Preferably in verse. This year, Syd made the fatal mistake of allowing presentations in languages other than English, providing they were accompanied by a translation. So Nicolas Larrousse and I naturally presented the following poem:

Le Tageur et L’Archiviste

Le tageur ayant tagué tout l’été
Se trouva embarrassé l’avenir étant arrivé.
Pas un seul petit morceau
d’explication claire de ses travaux

Ze tagger having tagged all summer long
Found himself embarrassed when the future arrived
Not one little bit of explanation survived for all his efforts

Il alla chercher des avis malins
Chez l’archiviste son copain
Le priant de lui prêter
De la sémantique pour tout regler.

E went to ask some tricky advice from his friend ze archivist
Begging im to lend him some semantics to sort sings out

Les archivistes ne sont pas créateurs
C’est là leur moindre défaut.
Que faisiez-vous au temps chaud ?
Dit-il à cet emprunteur.

But archivists are not creators, that’s their smallest problem
What did you do during the fine weather?
He asked the borrower

– Nuit et jour à tout venant
Je taguais, ne vous déplaise.
– Vous taguiez ? j’en suis fort aise.
Eh bien! transformez maintenant.

Day and night I was tagging for anyone, if you don’t mind
You were tagging? Oh that’s fine. So now you can do transformations!

Nicolas did the English bits, and I did the French bits, under the inspiration of the late great C. Trenet.

Further adventures with ODDs

This post is mostly an aide-memoire, since how to do the ODD things I want to do is not very well documented in the TEI as such.

First challenge

I have an ODD which was produced by webRoma some time ago and which (naturally) uses the traditional "exclude" syntax. I want to convert this to the new "include" format and also to ensure that it won't get any of the new elements added to P5 since it was first defined. I proceed as follows:

  1. I look at the source of my ODD and I see the comment Roma inserted in the <sourceDesc>: "created by ROMA on Monday 21st June 2010".
  2. I go to the list of releases on the TEI sourceforge site to find which release of P5 must have been in use at that date. Judging by the dates here, it is probably release 1.6 I want.
  3. Buried away in the standard release of the TEI Stylesheets there is a cool utility for converting an "exclusion" ODD into an "inclusion" one. It's called tools/odd2nuodd.xsl and I run it like this:
saxon -p defaultSource=http://www.tei-c.org/Vault/P5/1.6.0/xml/tei/odd/p5subset.xml myOldODD.xml tools/odd2nuodd.xsl > myNewODD.xml

Note the inclusion of the 1.6.0 release number as the source directory to be used when the stylesheet starts looking for TEI definitions.
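For the record, here is a schematic illustration of what the conversion amounts to (the module and element choices below are invented for the example, not taken from my actual ODD). An old-style "exclusion" ODD pulls in whole modules and then deletes what it does not want:

<schemaSpec ident="myOldSchema" start="TEI">
 <moduleRef key="tei"/>
 <moduleRef key="header"/>
 <moduleRef key="core"/>
 <moduleRef key="textstructure"/>
 <!-- anything unwanted has to be deleted explicitly -->
 <elementSpec ident="ptr" module="core" mode="delete"/>
 <elementSpec ident="said" module="core" mode="delete"/>
</schemaSpec>

The new-style "inclusion" ODD instead lists exactly the elements wanted from each module:

<schemaSpec ident="myNewSchema" start="TEI">
 <moduleRef key="tei"/>
 <moduleRef key="header" include="teiHeader fileDesc titleStmt publicationStmt sourceDesc"/>
 <moduleRef key="core" include="p title abbr expan hi ref"/>
 <moduleRef key="textstructure" include="TEI text body div"/>
</schemaSpec>

Because the @include lists are generated against the 1.6.0 p5subset specified above, the resulting schema should (as far as I can see) remain blind to any elements added to those modules in later releases, which is exactly the insurance I was after.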

Second challenge

I have two or more new style ODDs and I want to compare their use of the TEI to assess their interoperability. So far, I only have an approximation to an answer for this, in part because I am too lazy to improve the scripts I hacked together for it last time, in part because it’s actually a rather ill-defined problem, and hence hard.

The approximation goes as follows:

  1. Run an XSL transformation on each ODD in turn, appending the results to a big text file listing element names and what happened to them in which ODD (a sketch of what such a transformation might look like is given below);
  2. Run a perl script (ouch) on the results of (1) to produce a summary table which starts like this:
<table>
<row role='label'><cell>Element</cell><cell>lodel</cell><cell>tei</cell><cell>score</cell></row>
<!-- 2 --><row><cell><ref target='http://www.tei-c.org/release/doc/tei-p5-doc/en/html/ref-TEI.html'>TEI</ref></cell> <cell>change</cell> <cell>use</cell><cell>2</cell></row>
<!-- 2 --><row><cell><ref target='http://www.tei-c.org/release/doc/tei-p5-doc/en/html/ref-ab.html'>ab</ref></cell> <cell>use</cell> <cell>use</cell><cell>2</cell></row>
<!-- 2 --><row><cell><ref target='http://www.tei-c.org/release/doc/tei-p5-doc/en/html/ref-abbr.html'>abbr</ref></cell> <cell>use</cell> <cell>use</cell><cell>2</cell></row>

OK, work is needed in this area. But it’s a start.
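For what it's worth, the transformation in step (1) need not be anything elaborate. Here is a minimal sketch (not the script I actually used, and it assumes new-style ODDs built with @include throughout): it writes one tab-delimited line per element, recording which ODD it came from, the element name, and whether that ODD uses, changes, deletes, or adds it.

<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:tei="http://www.tei-c.org/ns/1.0"
                version="2.0">
 <xsl:output method="text"/>
 <xsl:template match="/">
  <!-- label every output line with the ODD it came from -->
  <xsl:variable name="odd" select="document-uri(.)"/>
  <!-- elements pulled in wholesale via moduleRef/@include count as "use" -->
  <xsl:for-each select="//tei:moduleRef">
   <xsl:for-each select="tokenize(@include, '\s+')[. ne '']">
    <xsl:value-of select="concat($odd, '&#9;', ., '&#9;use&#10;')"/>
   </xsl:for-each>
  </xsl:for-each>
  <!-- elements referenced individually also count as "use" -->
  <xsl:for-each select="//tei:elementRef">
   <xsl:value-of select="concat($odd, '&#9;', @key, '&#9;use&#10;')"/>
  </xsl:for-each>
  <!-- elementSpecs report their @mode: change, delete, or (by default) add -->
  <xsl:for-each select="//tei:elementSpec">
   <xsl:value-of select="concat($odd, '&#9;', @ident, '&#9;', (@mode, 'add')[1], '&#10;')"/>
  </xsl:for-each>
 </xsl:template>
</xsl:stylesheet>

Saved as (say) listElementUsage.xsl and run once per ODD in the manner of the command above, appending to a single file (saxon myFirstODD.xml listElementUsage.xsl >> element-usage.txt), it gives the perl script in step (2) all it needs: for each element name, count the ODDs in which it appears and note its status in each, which is where the "score" column above comes from.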

 

 

 

TEI++: an advanced training course

This summer I was invited by the CAHIER consortium, in partnership with the Corpus Écrits and IRCOM consortia, to organise a four-day workshop billed as "advanced TEI". I proposed a format combining presentations with hands-on practical sessions, organised along three axes:

  1. modelling the resources and selecting the significant features
  2. encoding the modelled structures and making them explicit in TEI
  3. exploitation and analysis of the structured resources

I had also proposed sharing the teaching with a few French experts. The workshop took place at the Institut de Linguistique Française in Paris from 19 to 22 November 2012.

Here is a summary (in English, sorry) of what actually happened …

Day 1

Proceedings began on the fifth floor of the ILF, a nice light room but not quite big enough for the 18 or so participants. This was not altogether bad, since the consequent huddling together encouraged the emergence of a cohesive group identity, which I also tried to encourage by getting the participants to place themselves on an improvised graph along two intersecting axes: literature vs linguistics, and researchers vs support staff. Most people turned out to be in the bottom right quadrant, i.e. support staff + linguistics, but there was also a smattering in the literary + researcher box, to say nothing of two sociologists who insisted on positioning themselves in the middle of the literature vs linguistics axis. The rest of this first introductory session was given over to a rapid review of some fundamentals of encoding, and a sampling of the websites of half a dozen real TEI projects around the world, which might have gone better had I rehearsed it better, but got the message across that quite a few very different projects were doing seriously cool stuff with the TEI.

After coffee, I introduced them very quickly to a spot of data analysis, using as a vehicle the celebrated postcard archive of M Marcel Virgolos, and Lauranne then took over for a refresher on using Oxygen. They marked up a postcard or two, and reviewed commonly used TEI tags for each of some pre-selected texts: a French novel, a poem, and a play. Most students completed all of these, mastering most of the key features of the Oxygen XML Editor quite rapidly, but I think we did not allow enough time for this session, given the mix of abilities present.

Lunch in the form of a large cardboard box containing a “plateau” of cold cuts, salads, bread roll, plastic cutlery etc. duly appeared and was despatched. Thus strengthened, I embarked on an all-singing all-dancing overview of all the TEI modules, and what you can do with some of them. That took about an hour, but helped motivate Lauranne’s traditional exercise on using Roma to make a schema by reduction from TEI-ALL which followed it. By the end of the day, everyone seemed quite comfortable with the idea of schema customisation, and reasonably convinced that they might find what they wanted to mark up somewhere, somehow in the TEI.

Day 2

On this and subsequent days we were displaced to a much better (because bigger) teaching room. I began the day in seriously magisterial mode by explaining (many of) the components of the ODD language, and why you might care to know about it. This was quite punishing, both for me and for some of the less technically-minded participants, but no-one visibly fell asleep. For the subsequent practical, Lauranne had prepared a script in which participants submitted their own texts to analysis by OddByExample, generating a personalised ODD. The majority of course had not come with their own texts, or had texts not in TEI P5, so they ran the exercise on a rather inadequate sample of the Virgolos corpus instead. With a bit more prep, I think this could be a really fun exercise and an excellent way of getting people to learn ODD properly. It also revealed a bug in OddByExample which Sebastian graciously fixed overnight.

Lunch being cardboard plateaux once more, I went for a stroll round the nearby Parc de Montsouris to see some of the grey autumnal Paris daylight while it was available. I then droned on for well over an hour on the subject of the TEI Header, which I am now selling as being “metadata for the rest of us”. They made me do it. The exercise we adopted was the French version of creating an ms description for the W. Owen ms, last seen in Berne. This is quite useful for the purpose, especially in combination with the following exercise on marking up a transcription of that manuscript; time permitting however I would have instead preferred to use a different French manuscript for both. If I had one.

Day 3

Wednesday I had carefully billed as “journee des guest stars” since the idea was to make other celebrated French TEI enthusiasts share in the work. So we began with a presentation by Alexandre Gefen about TEI recommendations for dealing with named entities and their names. Since the room contained more than a few French linguists this gave rise immediately to a heated debate on the philosophical basis and nature of nominal reference. The exercise in marking up names was a little under-prepared and one of the students insisted on asking the Emperor’s New Clothes question (why bother?), which was answered by another participant citing (at length, and with enthusiasm) the work of Nicole Dufournaud inter alia, which was nice. Since Alexandre had to leave early, I filled the gap by moving my brief overview of tools options up, giving a plug for Sebastian’s stylesheets, and letting them experiment with OxGarage, which they loved.

For lunch we went to the brasserie down the road, which was a much much better idea than the plateaux. Everyone got very jolly and there was a fair amount of shouting. Our second invited expert of the day was Bertrand Gaiffe, from ATILF, who delivered an excellent pair of lectures about encoding of oral and linguistic data respectively, also involving a fair amount of interaction and discussion, but not much actual tagging, since TEI interfaces for the appropriate tools remain elusive, best efforts of the LingSig notwithstanding.

Day 4

The final day began with a presentation on the various TEI orthodoxies concerned with the editing of primary resources given by our third invited expert: Alexei Lavrentev from ICAR. Participants were then offered the choice of doing either the reverse transcription exercise or the visual encoding exercise from Berne; both options were taken up, though I was too busy sorting out the website to see how far they got with either.

After another nice brasserie lunch (roast duck), I spent about 15 minutes showing how to use TEIBoilerplate, which went down remarkably well. “Génial” they cried, as they saw all that tricky encoding in John’s demo file being rendered beautifully by Safari (France is still largely the land of the Mac). The rest of the afternoon was devoted to a more ambitious TEI-savvy piece of software: TXM, from the textometrie project at Lyon. Alexei showed us what it was, and demonstrated how to make it sit up and do tricks with the Graal and Brown corpora, which participants had pre-installed. He also showed it working with a selection of literary texts prepared for use throughout the workshop.

Verdict

I think this workshop worked much better than it deserved to (I always think that). All the participants seemed very happy at the end, and several of them said they had learned more than they expected to. I think the organisation of the programme made good sense, and the balance of exposition and exercise was approximately right, though we probably didn’t do enough to make the practicals consistent and relevant. A few parts suffered from lack of preparation, and I think we could have done more to get a single case study working throughout the course of the four days, in addition to the various more specialised materials we introduced. But next time we’ll definitely get everything right. Thanks are due to the participants, the organisers, and all my fellow trainers, especially Lauranne for calming me down at moments of high anxiety.
All the materials used in the workshop are available in PDF starting from http://meet.tge-adonis.fr/sites/default/files/2012-11-initial.pdf. Dedicated TEI hackers may also be interested in the XML sources of the presentations which are available from my svn repository at http://code.google.com/p/tei-fr/source/browse/#svn%2Ftrunk%2FTalks%2F2012-11-paris