Counting the books: yes, there’s more

My efforts to find links to digitized versions of all the titles in ATCL made one huge methodological leap forward last week, and are now poised on the brink of another.

Going through the titles I had managed to extract from a rather uncooperative Google Books interface last week, I noticed that rather a lot of them were marked as “not available” for some reason. More precisely, although my 11,104 searches, each corresponding to an entry in ATCL for which I had not yet found a digitized version, had succeeded in identifying 2186 previously unseen titles, they had also thrown up 3885 titles which Google considered inaccessible, presumably for copyright reasons, and a further 5033 of which it flatly denied any knowledge. Yet when I looked up a few of these same titles (whether allegedly “inaccessible” or “non-existent”) in SOLO – the Bodleian’s wizard student-friendly query interface to its catalogue – there they were, page images downloadable in PDF, no sweat.

Now, amongst other delights, SOLO allows quite rich facetted searching, so it is easy to formulate a query like “find me all titles classed as fiction published in London or Scotland between 1830 and 1900, which have also been digitized by Google”, which made me think for a few moments that my work was now done. But as with many other classy library interfaces, SOLO stops short of allowing a mere automaton to carry out any searching: you have to sit at a keyboard and type, though it will grudgingly allow you to save and download the results of your query … provided it contains no more than 50 (FIFTY!) hits. Which (as I politely pointed out to the harassed librarian on online-chat duty last week), is almost entirely useless for my purposes.

Then I remembered that Real Librarians Do It With Z39.50 and dusted off my YAZ skills. The Bodleian, like all real libraries, has a perfectly good Z39.50 interface, which is not only entirely unbothered by a succession of several hundred queries but also happy to send back directly as many full catalogue entries for the hits as you can (err) handle. The only catch is that the queries have to be expressed in some antique syntax called PQN (Prefix Query Notation) and the results come back in MARC 21. I cut my programming teeth on Fortran IV, so these ancient tongues scare me a lot less than, say, JSON. I turned my list of queries unsatisfied by Google Books into PQN, fired them at library.ox.ac.uk:210/ALEPH and put the kettle on for a nice cup of tea. PQN is not very discriminating, or not in my hands at any rate, and my queries massively overgenerated. But once my 11297 results had passed through a couple of utilities (yaz-marcdump to produce MARCXML, and my very own `marctotei` to identify and fillet the relevant records) I had a set of 780 CPF format records to add to the ATCL database list, and the tea wasn’t even cold (774 once I’d weeded out some duplicates and mismatches).
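For anyone tempted to try the same trick, generating the PQN input is mostly a matter of string formatting. Here’s a minimal Python sketch of the kind of thing involved – not the script I actually used – with the Bib-1 attributes (1003 for author, 4 for title) and the yaz-client commands as used elsewhere in these postings; the input file and its column names are made up for the example.

import csv
import re

def pqn_query(surname, title):
    """Build a yaz-client 'find' command in PQN: author surname ANDed with title words."""
    words = [w.lower() for w in re.findall(r"[A-Za-z]+", title)]
    words = [w for w in words if w not in ("the", "a", "an")][:4]
    query = f"@attr 1=1003 {surname.lower()}"    # Bib-1 attribute 1003 = author
    for w in words:                              # prefix notation: each @and takes two operands
        query = f'@and {query} @attr 1=4 "{w}"'  # Bib-1 attribute 4 = title
    return "f " + query

# hypothetical input: one row per unmatched ATCL entry, with columns id, surname, title
with open("unmatched.csv", newline="", encoding="utf-8") as src, \
        open("queries.txt", "w", encoding="utf-8") as out:
    for row in csv.DictReader(src):
        out.write(pqn_query(row["surname"], row["title"]) + "\n")
        out.write(f"set_marcdump {row['id']}.usmarc\n")
        out.write("show all\n")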

A natural question is: can we do the same trick with the British Library? Or any other library offering a Z39.50 interface? In principle, yes. But of course the Bodleian’s use of MARC fields may not be entirely the same as everyone else’s, and so the script I wrote to fillet the results of a query may need fine tuning. For example, the BL does not seem to use MARC field 856 (which I rely on) at all: its digital texts are stored in something called the Digital Store, and their identifiers there don’t seem to map directly to anything like a URL. And while I was thinking about that, something unexpected happened.

A tweet arrived, alerting me to the existence online of the “OpenTexts.world” search engine: a search interface to a much more ambitious and much more comprehensive view of the world’s digital resources, namely the Global Digitised Dataset Network (GDD Network), originally a research project into the feasibility of creating a global catalogue of digitised texts. At the end of this project’s first funding year it has made available not only a nice search interface but also (applause) the underlying complete dataset. The latter looks a bit like the HT snapshot dumps I have processed before, though it is missing quite a few useful fields, such as type of text, place of publication, etc. And the nice search interface so far has only limited functionality: nice if you are exploring the data, and really quite annoying if you know exactly what you want to find. On the bright side, it allows you to download the results of the query as a CSV file and even has a sort of API, apparently supporting Lucene-style queries to be passed in via a URL to a SOLR-indexed version of the data. This could well be the answer…

Counting the Books contd.

A couple of days ago I reported here on some imbalance in the representation of male and female novelists in current digital archives, written while I was still trying to persuade the Google Books server to do tricks for me. I can now report further progress. After 21 iterations, I did finally manage to confect a complete list of all the ATCL titles freely available from Google Books. Putting this together with data already stored in ATCL for Google, I have now identified 2823 ATCL titles in Google Books, which brings the total number of known digitizations up to 11,510: 58% of all available titles. This seemed a good enough pretext to revisit the summary table I produced last time, so here it is again in a new and hopefully slightly more comprehensible form:

Revised counts

As might have been predicted, with more data the situation becomes more nuanced. Note first that the percentages of available titles which get digitized (column “%dig”) apparently decrease as the actual number of texts available for digitization (column “All”) increases, suggesting that the more titles you have available the less likely you are to deal with any one of them. Only tentative conclusions are warranted, since we are lacking so much data for the later part of the century. That said, comparing the columns M-dig and F-dig suggests that throughout the century, digitizers are consistently and disproportionately more likely to go for a male-authored text. Even in titles from the 1850s, where there are substantially more female-authored titles available than male (778 as opposed to 595), the proportion of them which get digitized is still lower than the proportion of male-authored titles (79% as opposed to 84%). In the 1880s, 55% of titles are explicitly female-authored, as opposed to 41% male (the remainder being unspecified); yet the male authors are still sampled for digitization at a far higher rate (59% as opposed to 37%).

My previous accusations of sexism amongst the digitizers en masse thus vindicated, I next considered the practice of individual archives. The following table shows the numbers of ATCL titles I found in each of five major archives, and the proportions attributed to male and female authors in each.

Archive            A-dig   M-dig   F-dig   U-dig   %Male   %Fem
All                11510   6207    5050    253     54%     44%
Hathi Trust        5655    3568    2022    65      63%     36%
Internet Archive   1665    889     748     28      53%     45%
Google Books       2823    1138    1580    105     40%     56%
British Library    5104    2742    2252    110     54%     44%
Gutenberg          2275    1682    590     3       74%     26%
Digitization choices by archive

Overall, the balance is comparable with that shown in the previous table: a small preference for male as opposed to female authored titles (54% to 44%). But this is perhaps concealing a marked variation in practice amongst the archives. At one extreme, Project Gutenberg has nearly three times as many male authored titles as female, while at the other Google Books actually has significantly more female authors than male (56% as opposed to 40%). In between is the British Library Microsoft collection, which matches exactly the proportions for all the archives combined.

Irrespective of gender, how much variation is there in the holdings of these archives? Here’s a frequency distribution showing how many ATCL titles are available from 1, 2, or more archives (note that I cannot distinguish how many of these are actually copies of the same digital version).

Archives   Titles   Percent
1          7247     63%
2          2780     24%
3          1201     10%
4          267      2.3%
5          16       0.7%
Archive overlap: how many titles are available from how many archives?

Encouragingly, this suggests that there is little overlap amongst the holdings of the main digital archives: 63% of all 11,511 digitized titles listed occur in only one archive, 87% in one or two.
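For what it’s worth, the overlap distribution is trivial to recompute once you have a mapping from each digitized ATCL title to the set of archives holding it. A minimal Python sketch, with a made-up data structure standing in for the real mapping:

from collections import Counter

# hypothetical structure: ATCL identifier -> set of archives holding a digitization
holdings = {
    "11169": {"Google Books"},
    "10037": {"Hathi Trust", "Internet Archive"},
    # ... one entry per digitized ATCL title
}

distribution = Counter(len(archives) for archives in holdings.values())
total = sum(distribution.values())
for n in sorted(distribution):
    print(f"{n}\t{distribution[n]}\t{100 * distribution[n] / total:.1f}%")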

Which titles get digitized most frequently? This is hard to tell, for several reasons. Some archives list multi-volume titles as multiple copies; some archives list items simply copied from other archives. For my Google Books listing I excluded titles which were already listed by another archive. But for what it’s worth, here, in no particular order, are the fifteen titles listed as available from all five archives I looked at:

  • Caine, Hall (1853-1931). The Deemster: A Romance (1887)
  • Caine, Hall (1853-1931). A Son of Hagar: A Romance of Our Time (1887)
  • Hamerton, Philip Gilbert (1834-1894). Wenderholme: A Story of Lancashire and Yorkshire. Edinburgh: Blackwood 1869
  • Collins, Wilkie (1824-1889). Antonina: or, The Fall of Rome. A Romance of the Fifth Century London: Bentley 1850
  • Collins, Wilkie (1824-1889). The Woman in White. London: Sampson Low 1860
  • Eliot, George (pseud.) (1819-1880). The Mill on the Floss. Edinburgh: Blackwood 1860
  • Dickens, Charles (1812-1870). Oliver Twist: or, The Parish Boy’s Progress. London: Bentley 1838
  • Eliot, George (pseud.) (1819-1880). Middlemarch: A Study of Provincial Life. Edinburgh: Blackwood 1872
  • Gaskell, Elizabeth Cleghorn (1810-1865). Mary Barton: A Tale of Manchester Life. London: Chapman and Hall 1848
  • Grant, James (1822-1887). The Romance of War: or, The Highlanders in Spain London: Henry Colburn 1847
  • Dickens, Charles (1812-1870). Barnaby Rudge: A Tale of the Riots of ’Eighty. London: Chapman and Hall 1841
  • Dickens, Charles (1812-1870). Bleak House. London: Bradbury and Evans 1853
  • Wood, Mrs. (-). It May be True: A Novel. London: T. C. Newby 1865
  • Oliphant, Margaret (1828-1897). Harry Jocelyn. London: Hurst and Blackett 1881
  • Ouida, (pseud.) (1839-1908). Folle-Farine. London: Chapman and Hall 1871

No, it makes no sense to me either. I expected to see Charles Dickens and George Eliot and Mrs Gaskell on the list, but Hall Caine and Philip Hamerton? Clearly one needs to be very careful in interpreting this data.

See previous bloggage for details of how the numbers were obtained. Supporting data and scripts have been updated in my github repo.

An experiment in counting the books

A couple of years ago I spent some time trying to determine which of the titles in the wonderful “At the Circulating Library” (ATCL) database were freely available online in digital form. This was for largely pragmatic reasons to do with building the ELTeC English language collection: other blog entries describe the method I used and some preliminary results. It’s not as easy as you might suppose to download reliable catalogue information from most digital libraries, nor is it always readily tractable when you do. After some experimentation, I hit on the idea of creating a magic key, a kind of fingerprint, derived from the title and author name as specified, which could then be matched against keys in the same format derived from ATCL entries.

More recently, it occurred to me that this data might also provide some interesting numbers to contribute to current debates about digitization priorities. Exactly why some titles make it to Project Gutenberg, or the HathiTrust, or the Internet Archive and others don’t is not a question to which simple un-nuanced answers are likely or even (maybe) possible, but we should still ask it. Those responsible for the digitization efforts of major libraries are, for some reason, a little coy about the principles on which books are chosen for digitization, or even about whether they actually have explicit selection policies. I assume that there is a difficult tightrope walk between, on the one hand, practical but purely adventitious matters (such as the relative locations of volume and scanner, the size and state of the volume, the time of day, the temperament of the scanner operator, etc.) and, on the other, principled criteria aiming to ensure a balance of, say, titles by female and male authors, or high and low brow, date of production, longevity of readership, and so on. It would be surprising if the choices were completely unrelated to characteristics of the population being sampled, or totally failed to reflect the cultural priorities of the scanning operation; the same uncertainties apply, of course, to the collection being sampled for digitization itself.

Anyway, I recently read an interesting article by Allen Riddell and Troy Bassett (“What library digitization leaves out”; preprint available from https://arxiv.org/abs/2009.00513) which reports that, in the data they looked at – the comparatively small sample of surviving English novels published in 1836 and 1838 – shorter books and books with male authors are disproportionately more likely to be digitized. I naturally wondered whether this applies equally well across the whole of the 19th century, which is what led me to revisit my efforts of two years ago. But first, here are the results.

There are 19,912 titles in the current ATCL database. Of these, 9152 (46%) have authors identified in the database as male, 9809 (49%) are identified as female, and 951 (4%) are identified as unknown. These relative proportions are rather different if we look at titles with at least one digital surrogate, of which there are in total 9099 (45%). Of these 9099 digitized texts, we find 5221 (57%) are of male authorship, 3718 (41%) of female authorship, and 160 (2%) are unsexed.

Look at that again. Although there are actually more titles available for digitization from female authors than from male, the number that actually gets digitized is significantly smaller (if, like me, you think a gap of 16 percentage points is pretty significant). Hmmm. These counts of course derive from the whole period covered by ATCL, from 1800 to 1900, so I also calculated them for each decade, only to find that the proportions and their imbalance remain fairly consistent across the century. And this despite huge changes in the numbers: for the last decade of the century ATCL lists nearly 6000 titles, a six-fold increase on (for example) the fourth decade. What percentage of those titles were digitized? In both decades, over 51%. And what proportion of those digitized titles were male-authored? In both decades, 62%. There is some variability across the decades, but the basic picture remains the same.

One possible explanation might be that titles with unknown or unsexable authorship (e.g. the ubiquitous “Anonymous”) are more likely to have been female, and that hence we are not seeing all the truly female authors. But even were this the case (after all, why should we not equally well hypothesize that male authors might be bashful or crave secrecy?), the digitization rate for books ostensibly male-authored remains stubbornly higher than that for books ostensibly not male-authored (i.e. those classed as either F or U by ATCL). And indeed, the same, mutatis mutandis, is true of the ostensibly-female to ostensibly-not-female comparison.

Here’s a table showing the raw counts:

Decade   All     Male    Female   U      A-dig   M-dig   F-dig   U-dig
Total    19912   9152    9809     951    9099    5221    3718    160
1830s    482     256     174      52     250     164     85      1
1840s    1037    543     422      72     538     334     202     2
1850s    1483    595     778      110    718     347     358     13
1860s    2341    1019    1093     229    1015    540     456     19
1870s    2866    1189    1514     163    1300    642     633     25
1880s    4126    1693    2287     146    1765    945     782     38
1890s    5979    2995    2863     121    3092    1929    1103    60


And here’s another showing the percentages:

Decade   Ad%      M%       Md%      F%       Fd%      U%       Ud%
Total    45.70%   45.96%   57.38%   49.26%   40.86%   4.78%    1.76%
1830s    51.87%   53.11%   65.60%   36.10%   34.00%   10.79%   0.40%
1840s    51.88%   52.36%   62.08%   40.69%   37.55%   6.94%    0.37%
1850s    48.42%   40.12%   48.33%   52.46%   49.86%   7.42%    1.81%
1860s    43.36%   43.53%   53.20%   46.69%   44.93%   9.78%    1.87%
1870s    45.36%   41.49%   49.38%   52.83%   48.69%   5.69%    1.92%
1880s    42.78%   41.03%   53.54%   55.43%   44.31%   3.54%    2.15%
1890s    51.71%   50.09%   62.39%   47.88%   35.67%   2.02%    1.94%


In an ideal world, you’d expect the percentages for titles with male authors (M%)  and for digitized titles with male authors (Md%)  to be roughly the same, right?  Think on… And feel free to download the csv file behind these tables for your own experimentation.
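If you’d rather not take my word for it, the percentage columns can be recomputed directly from the raw counts: here’s a minimal Python sketch, using two rows copied from the first table above.

# decade -> (All, Male, Female, U, A-dig, M-dig, F-dig, U-dig), copied from the raw counts table
counts = {
    "Total": (19912, 9152, 9809, 951, 9099, 5221, 3718, 160),
    "1890s": (5979, 2995, 2863, 121, 3092, 1929, 1103, 60),
}

for decade, (all_, male, female, unk, a_dig, m_dig, f_dig, u_dig) in counts.items():
    m_pct = 100 * male / all_      # M% : share of available titles that are male-authored
    md_pct = 100 * m_dig / a_dig   # Md%: share of digitized titles that are male-authored
    print(f"{decade}: M% = {m_pct:.2f}%, Md% = {md_pct:.2f}%")
# prints Total: M% = 45.96%, Md% = 57.38% and 1890s: M% = 50.09%, Md% = 62.39%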

One should always suspect the data, so I make no excuse for the following detailed blow-by-blow account of how I got these numbers. Full gruesome details, including the scripts mentioned below, are available from https://github.com/lb42/bookLists

The basic method was to download a complete catalogue of relevant titles available from each target digital library, and then try to match them with records in the ATCL. For Google Books, which does not seemingly provide a complete catalogue online, I tried a different method, discussed further below.

I started by downloading the latest (June 2020) dump of the ATCL database, and converting it to a basic TEI XML format. I then did much the same for the holdings of five digital libraries with good coverage of 19th century novels: the Hathi Trust, the British Library, the Internet Archive, Project Gutenberg, and Google Books. As a control, and for testing purposes, I also looked at a few smaller collections, notably the Victorian Women Writers Project at Indiana University and the (now defunct) University of Adelaide “ebooks” repository. I wanted to provide something similar to John Mark Ockerbloom’s lovely Online Books Page at https://onlinebooks.library.upenn.edu/ but more precisely tied in to ATCL.

Hathi Trust makes available a monthly dump of their entire collection as a huge tab-delimited file. Working with the most recent dump, dated September 1 2020, I used a simple-minded Perl script `hathiProcess.prl` to parse this file and select from it only freely-available English language books published in Great Britain between 1800 and 1920; an XSLT stylesheet `htConv.xsl` then converted the results to the common project format (CPF).
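For the curious, the selection step is no more than a line-by-line scan of that tab-delimited dump. Here’s a minimal Python sketch of the same idea – not the actual `hathiProcess.prl` – in which the column names and the MARC country codes standing for “published in Great Britain” are placeholders to be checked against the current hathifile documentation.

import csv
import gzip

# illustrative field names only: the real dump has no header row, and its field
# list should be taken from the hathifile documentation
FIELDS = ["htid", "access", "rights", "bib_key", "description", "source",
          "source_bib_num", "oclc", "isbn", "issn", "lccn", "title", "imprint",
          "rights_code", "rights_timestamp", "us_gov_doc", "rights_date",
          "pub_place", "lang"]

GB = {"enk", "stk", "wlk", "xxk"}   # assumed MARC codes: England, Scotland, Wales, UK

def wanted(row):
    """Keep freely available English-language books published in GB, 1800-1920."""
    try:
        year = int(row["rights_date"])
    except (TypeError, ValueError):
        return False
    return (row["access"] == "allow" and row["lang"] == "eng"
            and row["pub_place"].strip() in GB and 1800 <= year <= 1920)

with gzip.open("hathi_full.txt.gz", "rt", encoding="utf-8", newline="") as src:
    reader = csv.DictReader(src, fieldnames=FIELDS, delimiter="\t",
                            quoting=csv.QUOTE_NONE, restkey="rest")
    keep = [row for row in reader if wanted(row)]

print(len(keep), "candidate records")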

The British Library website makes available an Excel spreadsheet providing metadata for the titles from their collection which were digitized some time ago by the Microsoft Books project. I downloaded this, converted it to TEI with `csvtotei`, and converted the result to CPF (selecting just the 19th century titles) with `blConv.xsl`.

Project Gutenberg makes available several versions of its catalogue data. I worked with the most recently updated one, which is a vast archive of unbelievably verbose RDF files. Despite its complexity, this data doesn’t include any publication data for the source texts concerned (unsurprising really), though it does provide birth and death dates for the authors. To cut down the numbers a little, I rejected titles whose authors were not born during the 19th century, and also those which specified a MARC relator field “edt” (to cut out non-original editions). Once I had remembered how on earth to handle a gazillion tiny files of RDF (I did this back in 2018), I used the `gutConvRDF.xsl` script to process them all to CPF, and concatenated the results into a single file.

The Internet Archive, so far as I can see, doesn’t have any generally available or downloadable catalogue, though it does have a really good query interface. The method I used for attacking Google Books would presumably work equally well (or equally badly; see below) in this case, but I haven’t tried it. Instead I just used a predefined collection called `19thcennov` which someone at the University of Illinois at Urbana-Champaign thoughtfully created back in December 2008. This gave me 7828 XML records which were easily converted to CPF using `iaConv.xsl`.

The common project format files all consist of TEI <bibl> elements with either an @xml:id attribute or an <idno> specifying the identifying code for this item in the relevant repository: e.g. `ia:foreignersnovel03pric` identifies the Internet Archive’s digitization of volume three of Eleanor Price’s novel “The Foreigners”. Each <bibl> also has an @n attribute supplying the magic key for the title, which is confected as follows:

  • remove the full stop following Mr or Mrs in any title containing one
  • take the substring of the title up to the first occurrence of one of the punctuation marks . , : ; or /
  • concatenate this with the author’s last name
  • convert to lower case and remove all punctuation characters and spaces

So, for example, ATCL lists a work with the title “The Foreigners: A Novel” attributed to author “Eleanor C. Price”. The same work appears in the Internet Archive list, but with the author “Price, Eleanor C. (Eleanor Catherine)” and the title “The foreigners : a novel”. Despite the differing strings, both will get the same magic key “theforeigners|price”. This method is far from bullet-proof, but it’s serviceable.
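In Python, the recipe comes down to something like this (a paraphrase of the steps above, not the script actually used):

import re

def magic_key(title, surname):
    """Fingerprint a title/author pair as described above."""
    t = re.sub(r"\b(Mrs?)\.", r"\1", title)       # drop the full stop after Mr or Mrs
    t = re.split(r"[.,:;/]", t)[0]                # keep the title up to the first . , : ; or /
    key = (t + "|" + surname).lower()             # bolt on the author's last name
    return re.sub(r"[^\w|]", "", key)             # lose punctuation and spaces (keeping the |)

print(magic_key("The Foreigners: A Novel", "Price"))
print(magic_key("The foreigners : a novel", "Price"))
# both print theforeigners|price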

For Google Books, as noted above, there is no readily downloadable catalogue. But there is an API, which in a moment of madness I thought it might be cool to learn how to use. A day of poking around led me to a neat python script some helpful person had written to look up ISBN numbers (hat tip to AO8’s treasury), which I mercilessly hacked to my own purposes. My version reads a file of URL-encoded search requests like this “inTitle:the+inTitle:foreigners+inAuthor:Price”, fires them at the Google API, and processes the returns into a rudimentary bibl or a comment lamenting the absence or unavailability of the item in question. The file of search requests is rather long (one for each title in ATCL for which I have not yet found any digital version – a total of 11,203) so I make the program sleep for a while after firing off about 40 consecutive requests, to help the Google server catch up. Despite this considerate behaviour on my part, it did not take Mr Google long to decide that my program (or my IP address) was a threat, and then to start returning unco-operative HTTP messages like 503 (“Service Unavailable”) and 429 (“Too many requests”). The API Help pages confirm that Google considers “using an app, program or script to perform a large number of searches in a short time” prima facie justification for temporarily blocking the IP address in question; though it’s not clear what exactly is meant by “large” (more than 100?) and “short” (less than a minute?) in that phrase. Furthermore, when I search using my specially-minted API key, there seems to be a hard limit of 1000 queries per day in any case: so this job is not going to be finished very quickly. Still, I do now have an extra 1517 records to show for two days’ work.
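For the record, here is a stripped-down sketch of the kind of request loop involved, using the requests library – not the hacked-up ISBN script itself. The volumes endpoint and the intitle:/inauthor: qualifiers are part of the public Google Books API; the pacing numbers are as arbitrary as mine were.

import time
import requests

API = "https://www.googleapis.com/books/v1/volumes"
API_KEY = "..."   # your own key here

def lookup(title, surname):
    """Fire one search at the Google Books API; return the decoded JSON, or None if rebuffed."""
    q = f'intitle:"{title}" inauthor:{surname}'
    r = requests.get(API, params={"q": q, "key": API_KEY}, timeout=30)
    if r.status_code in (429, 503):   # rate-limited or unavailable: back off and give up on this one
        time.sleep(60)
        return None
    r.raise_for_status()
    return r.json()

pending = [("The Foreigners", "Price")]   # in reality, one pair per still-unmatched ATCL title
for i, (title, surname) in enumerate(pending):
    result = lookup(title, surname)
    if result and result.get("totalItems", 0):
        items = result.get("items", [])   # build a rudimentary bibl record from these
    if i % 40 == 39:                      # pause every 40 or so requests to placate the server
        time.sleep(30)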

Once I’ve created all these lists, I run the merger.xsl script to add <ref> elements to the ATCL-TEI file I created in the first step. This makes for some redundancy, for two reasons: firstly, for most of the archives a three volume novel is likely to get a separate entry for each volume; secondly, for many titles, there exist multiple digitizations – which may (or may not) derive from the same source. The following table shows for each archive the number of records selected for processing, the number of references to ATCL titles found, and the number of titles affected. Note that I haven’t yet done any de-duplication to remove overlaps.

Archive            Records selected   References found   Titles affected
British Library    62015              9920               5104
Hathi Trust        460070             18891              5655
Internet Archive   7829               4691               1655
Project Gutenberg  38338              2880               2275
Google Books       ?                  1517               1517

I haven’t made available the CPF files for each archive, nor the final merged TEI version of the ATCL dump, since this is not really my data to share. But I have made available a file called atcl-links.csv, which is a spreadsheet with a row for each ATCL title digitized in one or more publicly available digital collection, mapping its ATCL identifier to its identifier in each repo. I’ll  update these as and when the data improves.

Can we trust the ATCL database?

I’ve been enthusiastic about the database behind the At the Circulating Library website ever since I discovered it nigh on three years ago. Troy Bassett, its creator, deserves much respect and lots of credit, both for the work put into creating and maintaining it, and for his generous open-minded policy of making the data itself freely available in snapshot form for nerds like me to play with. I think of the ATCL as the nearest thing we are ever likely to get to an authoritative catalogue of the 19th century novel, and use it as such in other work which I’ll report on later. But (you guessed there was going to be a but) just how reliable is its coverage? Does it cover each decade of the 19th century equally well, or are there gaps?

When asked, Troy assures me that he’s fully aware that there are gaps. He estimates that the true size of the database should be well over 25,000 titles, rather than its current 18,000. He and his team are slowly and meticulously filling the gaps, by hoovering up and checking data from resources such as published catalogues and bibliographies, or online collections like Proquest’s NSTC. It’s tedious and fiddly work, ideally suited to a small farm of graduate students, should you have such a resource at your disposal.

My interest in ATCL is to use it as a reference point when assessing the coverage of other resources, in particular the catalogues of digital libraries. Take, for example, the current Hathi Trust catalogue. This lists 17,433,331 titled volumes in all; 235,035 of them being volumes published in the UK between 1800 and 1900. Removing obvious duplicates (Hathi catalogues each volume of a three-decker separately, usually though not always) brings this down to an only slightly more manageable 129,817 titles. How good is its coverage of 19th century novels? The difficulty of course is to winnow out just the entries which are novels. As I suggested on my blog back in June 2018, the word “novel” in a title turns out to be a very good indicator (other good words include “or” and “tale”, “history”, etc.), so I extracted from the HT records just those titles containing the word “novel” for further investigation: there are 953 of them.

In my earlier work, I’d just assumed that if a title in HT didn’t appear in ATCL then it wasn’t a novel. But how complete is ATCL? A good way of finding out might be to work the other way round: first get a list of things which are definitely (probably) novels — and then check to see whether  they also appear in ATCL.

On my first pass, 284 of my 953 “novels” (i.e. titles containing that word) did not appear in ATCL, which surprised me. But this was mostly a result of my matching procedure being insufficiently robust to cope with the amount of variation in cataloguing practice within the HT records and their occasional divergence from ATCL practice. I spent a happy day or two going through the list of delinquents by hand, taking the opportunity to discard 19th c. reprints of earlier works, translations from other languages (mostly scandalous Zola), and a handful which were clearly not novels at all (e.g. “Photographic amusements : including a description of a number of novel effects obtainable with the camera / by Walter E. Woodbury.”) – all of which brought the number down to 191. I then fixed the “magic keys” used to identify each title in ATCL as necessary, which brought me to a manageable 40 titles apparently missing from ATCL, with which to torment Troy Bassett and his team. 40 titles missing from a sample of 953 is pretty good; even 191 missing is not so bad in this always very approximate work. I remain persuaded that the ATCL is a reliable approximation to a representative sampling of the 19th century novel.

A brief report on my attempts to understand the make-up and cataloguing of EEBO-TCP

Q. What exactly is EEBO-TCP made of?

A. (digital) copies of (microfilm) copies of works.

Q. “Works”?

A. Well, no. Specific copies of works, held in libraries. What I think FRBR calls “instances”.

Q. FRBR makes me think of unhappy cats. Can you give an example?

A. Consider Jeremy Taylor’s 1668 page-turner “XXV sermons preached at Golden-Grove being for the winter half-year beginning on Advent-Sunday until Whit-Sunday”. Two copies of this book from the British Library’s collection, as printed by one E. Tyler for book-seller Richard Royston, have been microfilmed, one as image set 199641 and one as image set 45789. Both copies are of the same book and therefore have the same identifier (T410) in Wing. Check ’em out at https://search.proquest.com/eebo/docview/2248511138 and https://search.proquest.com/eebo/docview/248511188 and see if you can tell them apart (I can’t, in this case). And yes, EEBO also has images of two different copies of the 1655 edition of this book: same author, title, and publisher, but completely different bibliographic entity, with a different Wing identifier (T409). And probably many others. He was big in the 17th century, that Jeremy Taylor.

Q. So?

A. Perhaps the primary key to the EEBO catalogue should be the image sets, since each catalogue entry concerns one set of images. However, given the choice, no-one would prefer a title like “microfilm no 42” to one like “amusing title of work”. Consequently, Proquest provides a unique identifier for each bibliographic work, which is suffixed with an image set identifier if there are multiple copies of the same work (which happens a lot), or if the work exists in multiple volumes which have been scanned separately (fortunately, less frequent). And, not to be outdone, TCP also provides a unique identifier, but in their case it identifies a specific copy of a specific work. The URLs I quoted above behave similarly: they uniquely identify a specific copy of a specific work.

Q. Wait, how many identifiers are there now?

A. Sticking with the 1668 third edition of Taylor’s XXV Sermons, we have two image sets (identifiers 45789 and 199641). Since these correspond with a single bibliographic entity, also known as Wing T410, Proquest gives them the same catalogue number (10772247). (You may wonder why they don’t just use the Wing number, unless you are a librarian). But we can’t have two entries in our catalogue with the same identifier, so Proquest appends the image set identifier to the catalogue number for one of them (I don’t know how they decide which, nor why they don’t do it for both). TCP, as noted above, just gives each distinct item a distinct identifier: in this case the transcription of image set 45789 is in TCP text A64140, and the other is in TCP text B30404. All clear now?

Q. There must be an easier way of doing this.

A. Yes. Amongst a raft (or maybe a coracle) of other improvements this year when Proquest moved the online access to EEBO to a new platform, they also introduced a new identifier called a “GOID”, which is essentially a unique number for every entry in the catalogue. That’s what I used in the URLs quoted above. Hoorah! We can now access anything in the EEBO catalogue using a simple numeric code, just like we can in the TCP subset of it.

Q. What about the TCP identifiers though?

A. Alas, these are currently not included in the dataset which underlies the Proquest online catalogue. As noted here earlier, I am working on a super TEI-compliant version of said dataset, and that will assuredly include the TCP identifiers. More on that anon.

My humble thanks to Paul Schaffner for patiently explaining all this to me. Any residual errors are mine, not his.

EEBO Bib

There you are idly scrolling through Twitter when someone announces a tempting resource that’s just crying out for TEI-ification. And there goes the afternoon and most of the evening.

Anyway, creating my EEBO Bibliography in TEI was insultingly easy. I just grabbed the Excel spreadsheet (34 Mb) thoughtfully created by those lovely people at Proquest, and even more thoughtfully publicized by the even more lovely Heather Froehlich on twitter this morning. I opened it up in Open Office. I exported it as a CSV file (85 Mb) and munged same through the standard TEI csvtotei stylesheet to generate 175 Mb of TEI compatible data. Then I sat down to consider how to make it actually TEI conformant, i.e. how to make the title of a bibliographic entry appear not as the content of <cell n="5"> but as the content of a … <title>. As you might suppose, defining the right mapping was easy for some things, but less so for others of the 17 cells in each of the 146,323 rows of the spreadsheet. There’s a table showing the mapping I decided on at the end of this blog, for those unwilling to read my pellucid XSLT code which actually uses it.

The resulting TEI file isn’t quite complete, because it doesn’t yet have a TEI header (needed to define the prefixes I use to save space in the URLs); and at 120 Mb it’s too big for github, so it’s now available at https://app.box.com/s/r8sxc68239g6pen09blzmul93tqs8rbv for your xpathing pleasure.

Here’s the table. The whole spreadsheet is a <listBibl> and each row becomes a <bibl>. I like simple solutions. I’m not proud of the <note type="foo">s, but that’s the best I could think of without getting far too complicated.

1   MARC identifier        @xml:id : prefixed by eebo:
2   Image set identifier   @facs : prefixed by eeboIs:
3   Publication type       @type (always either Book or Issue)
4   Collection             <series>
5   Title                  <title>
6   Author                 <author>
7   Publication Date       <pubDate>
8   Publisher              <publisher>
9   Country name           <pubPlace>
10  Publication language   @xml:lang gives ISO code equivalent; text goes in a <note type="langNote">
11  Accession number       <idno>
12  Source Library         <note type="sourceLibrary">
13  Full text image        if "Y", <note type="transcriptType"> contains "image"
14  Full text              if "Y", <note type="transcriptType"> contains "text"
15  USTC Classification    <note type="keywords">
16  Release date           Too boring to include
17  URL                    @ref with prefix proquest:
Mapping EEBO spreadsheet fields to bits of a TEI <bibl>
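By way of illustration (the real work was done in XSLT, of course), here’s a minimal Python sketch of that mapping applied to a single 17-cell row; the eebo:, eeboIs: and proquest: prefixes rely on the prefixDef declarations destined for the still-missing TEI header.

from xml.sax.saxutils import escape, quoteattr

def bibl_from_row(cells):
    """Map one 17-cell EEBO spreadsheet row onto a rudimentary TEI <bibl>, per the table above."""
    (marc_id, image_set, pub_type, collection, title, author, pub_date, publisher,
     country, language, accession, library, full_image, full_text, ustc,
     _release_date, url) = cells                 # release date: too boring to include
    transcript = " ".join(kind for flag, kind in ((full_image, "image"), (full_text, "text"))
                          if flag == "Y")
    return "\n".join([
        f'<bibl xml:id="eebo:{marc_id}" facs="eeboIs:{image_set}" '
        f'type={quoteattr(pub_type)} ref="proquest:{url}">',
        f"  <series>{escape(collection)}</series>",
        f"  <title>{escape(title)}</title>",
        f"  <author>{escape(author)}</author>",
        f"  <pubDate>{escape(pub_date)}</pubDate>",
        f"  <publisher>{escape(publisher)}</publisher>",
        f"  <pubPlace>{escape(country)}</pubPlace>",
        # the real conversion also derives an @xml:lang ISO code from the language name
        f'  <note type="langNote">{escape(language)}</note>',
        f"  <idno>{escape(accession)}</idno>",
        f'  <note type="sourceLibrary">{escape(library)}</note>',
        f'  <note type="transcriptType">{transcript}</note>',
        f'  <note type="keywords">{escape(ustc)}</note>',
        "</bibl>",
    ])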

Counting the books

As a follow-up to my previous rather excited posting, I have finally got round to actually trying to count how many copies various prestigious national libraries hold of each title selected for the ELTeC English collection. Here’s how I have operationalised the need for some kind of metric approximating to the persistence or canonicity of a given title.

First I run a little XSLT script against the corpus to create a file full of lines like the following:

f @and @attr 1=1003 sinclair @attr 1=4 "modern flirtations"
set_marcdump ENG18410.usmarc
show all

This means:

      • find records in which the author field contains “sinclair” and the title contains the words “modern” and “flirtations”.
      • send the output to a file called ENG18410.usmarc
      • display all the results from that query

Creating this query automagically is not without problems. Including words like “the” or punctuation like the question mark is ill-advised. Some records include subtitles in their “titles” but most don’t. When the records do contain subtitles they may result in false hits: see further below.

Next I throw this at a z3950 server and go make myself a cup of tea while it chunters away. As noted in my previous posting, getting Z3950 access to a library in question is mostly just a matter of knowing the address of the server and its port, the name of a database, and sometimes (as with the British Library) also wheedling a login and password. The reason I use the recondite syntax above for my query input, and the reason that I accept the results in usmarc 21 format is … that’s what every z3950 server I have looked at so far promises to provide. Some have other exotic options for query or for output, but nothing else is universally guaranteed to work.

Returning with my cup of tea, I now have a bunch of inscrutable marc21 records tidily filed away. I wasted the best part of an evening yesterday trying but failing to find a simple online tool which would convert them into marcxml or indeed anything readable, but the best I could come up with was a perl utility called marcdump. Here’s the start of the output it gives me for ENG18410.usmarc

LDR 00535nam a2200181uu 4500
001 006812208
005 20100212180700.0
008 040420s1841 xx || 000 ||eng
019 u _aG11034382
040 _aUk
_cUk
082 04 _a823
100 1 _aSinclair, Catherine,
_d1800-1864.
245 10 _aModern flirtations :
_bor, A month at Harrowgate /
_cCatherine Sinclair. Vol. 1.
260 _a[S.l.] :
_b[s.n.],
_c1841.
336 _atext
_2rdacontent
337 _aunmediated
_2rdamedia
338 _avolume
_2rdacarrier
852 41 _aBritish Library
_bDSC
_jW5/2649

Exciting stuff, eh? The useful bit here is the publication date, which appears as subfield _c of field 260 (sadly, there are other possibilities), and, even more useful, the following, which appears at the end of the output file:

Recs Errs Filename
----- ----- --------
4 0 ENG18410.usmarc

Tis but a matter of moments to grep through these files and extract a list of record counts for each title, together with a list of publication dates.
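If grep feels too terse, the same harvesting can be done in a few lines of Python, keyed to the marcdump output shown above (the file naming convention is the one used by my query generator):

import glob
import re
import subprocess

for marcfile in sorted(glob.glob("ENG*.usmarc")):
    proc = subprocess.run(["marcdump", marcfile], capture_output=True, text=True)
    dump = proc.stdout + proc.stderr
    # record count, from the Recs/Errs summary block at the end of the output
    m = re.search(r"^\s*(\d+)\s+\d+\s+" + re.escape(marcfile), dump, re.MULTILINE)
    records = int(m.group(1)) if m else 0
    # candidate publication dates: four-digit years in _c subfields (cf. field 260 above)
    dates = sorted(set(re.findall(r"_c.*?(\d{4})", dump)))
    print(marcfile, records, dates)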

Furthermore, and much to my relief, the counts do seem to reflect my initial expectations as to which titles would be highly rated and which not. The top ten titles in my 90 are (drumroll)…

94 ENG18860 Hardy: The Mayor of Casterbridge
106 ENG18531 Yonge: The Heir of Redclyffe
135 ENG18621 Braddon: Lady Audley’s Secret
143 ENG18481 Dickens: Dombey and son
148 ENG18610 Eliot: Silas Marner
152 ENG18530 Dickens: Bleak House
157 ENG18540 Dickens: Hard Times
168 ENG18480 Thackeray: Vanity Fair
298 ENG18471 Bronte: Wuthering Heights
664 ENG18652 Carroll: Alice in Wonderland

Nearly all of these would figure on any list of long-lasting 19th c English novels. An eyebrow might be raised by some in the English department about the appearance of Yonge and Braddon, but the explanation is simple: both ladies (or their publishers) were very fond of including the phrase “by the author of ‘Most Famous Title’ ” on the title pages of their less famous works, and I have not yet worked out how to remove such imposters as “Work you’ve never heard of (by the author of Most Famous Title) ” from the results of a search for “Most Famous Title”.

Another eyebrow might be raised at the frequency distribution of the scores found: there is a very long tail, with nearly two-thirds of my 90 titles scoring 20 or less, while the top scorers, as shown above, score very much more. To some extent, this is explained by the crudity of my search technique, which will include musical adaptations, commentaries, versions for the use of slow readers, study notes, etc. etc. provided that “Most Famous Title” appears in the title somewhere. This worries me less, since the existence of such things is surely also testimony to the salience of the title in question. This factor does however have an inflationary effect on the scores, so that titles which don’t benefit from it appear lower than might be expected. “Middlemarch” for example – widely regarded as amongst the greatest English novels of the period, but not subject to this sort of inflation – scores only 77, ahead of the second Sherlock Holmes novel “The Sign of Four” (72) but behind George Eliot’s closest rival for the depiction of provincial life, Mrs Gaskell’s “Mary Barton” (82).

But these scores should not be subjected to such close scrutiny. If we are looking for a proxy metric for the “impact factor” of these works, it’s not implausible to be guided by the numbers of different editions of them that have accumulated in our great national libraries. If we say that a score of less than (say) 20 suggests a low impact, and anything above (say) 50 a high one, we should not go too far wrong.

So far I have tested this procedure only on the British Library’s collection. An obvious next step is to try a different English-language library (COPAC springs to mind) to check that the ranking is not too widely different. And then to try out a different language: the BnF also has a z3950 server, so I plan to subject the French collection to the same treatment.

Hoorah for standards (and librarians)

Librarians do it with Z3950. Of course, I knew that all along. I just didn’t know what it implied for the non-librarian. I have been wondering for some time how to get at the bibliographic riches of our national and university library catalogues without using one of the artfully constructed snazzy web interfaces they all seem to have set up in the interests of “usability”. Which interfaces don’t usually include any means of answering such questions as “List dates of publication for all editions of this title which you hold”. BUT (tl;dr) most library catalogues also run a server which exposes their data via an antique API called “Z3950”, originally developed by the Library of Congress and others, apparently so that they could steal catalogue records from each other without the pain of sending round a man with a truck.

When I say antique, believe me I am not joking. Some Z3950 servers I have looked at (e.g. the Bodleian) will deliver records in funky modern formats like SGML, but the only format you can be sure of getting is US MARC 21. This is an old-school binary format, remarkably similar to the ones I used to have to deal with when debugging IDMS databases or alien magnetic tape formats back in the 1970s. You have pointers and variable length fields and a mix of character data and binary, and you really don’t want to hack this stuff in Python, honest. Which is why there are dozens of snazzy interfaces and convertors and alternative syntaxes listed in bewildering plenitude on the Library of Congress website. Some of them cost money. Most of them are for Windows only. I took an instant dislike to nearly all of them… until I discovered YAZ.

YAZ is a properly constructed library of utilities offering all sorts of features I don’t claim to understand; what I liked is that it also provides a small number of command line utilities which do things like access a z3950 server, search for records, save the results, and convert from US Marc to something a tad more readable. And it’s open source, and free, and runs on linux, just like that, no need to install lots of bloatware as well. Sudo apt install yaz* and I’m done.

Now, let’s say I want a list of the publication data for all copies of Disraeli’s Sybil held at the British Library. Simple. On the unix command line I type:

$ yaz-client -u xxxxxxxx z3950cat.bl.uk:9909/ZBLACU

(the row of xs there is my authentication string, which I promised the BL I wouldn’t share with anyone as a condition of getting access to their server for nothing. The rest is common knowledge though – see http://www.bl.uk/bibliographic/z3950configuration.html)

The yaz client operates a bit like telnet used to: at each prompt I can type special commands like “find” and “show” and see the results. By default it uses something called Prefix Query Notation, or PQN, though I could also use something called Common Command Language (CCL) “an international standard query language often used in libraries” according to Wikipedia.

Z> find @and @attr 1=1003 disraeli @attr 1=4 sybil

Don’t you just love that PQN syntax? It’s like typing in Latin. It means: find records with “disraeli” in the author field (Bib-1 use attribute 1003) and “sybil” in the title field (attribute 4). Who needs high-falutin nonsense like XQuery?

Then I can type:

Z>show 1+100

to see the first 100 records (it’s obvious, surely, that “n+m” must mean “up to m records starting at record number n”)

Even better, I can type

Z>set_marcdump wibble

and have a copy of all the MARC records returned saved to the file wibble. Why should I do that? Because I can then process that file into something readable with another little command line utility called yaz-marcdump:

$ yaz-marcdump wibble | grep 260

which will convert wibble to something more readable, and pick out all the lines containing publication data! Yes, you do have to know what those Marc field labels mean, Cynthia.

And yes, there are features for batch mode operation so I could run my commands from a file, save the results, tidy them up and put my own super-simplistic interface on the whole business. I can see this is going to keep me happily occupied for quite a while.
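By way of a taster, here’s a minimal sketch of that batch idea in Python: yaz-client reads its commands from standard input, so no special options are needed. The target, query and filenames are the ones used above; authentication is left out.

import subprocess

commands = "\n".join([
    "open z3950cat.bl.uk:9909/ZBLACU",
    "set_marcdump wibble",
    "find @and @attr 1=1003 disraeli @attr 1=4 sybil",
    "show 1+100",
    "quit",
]) + "\n"

# NB: the BL server also wants the -u authentication option shown above
subprocess.run(["yaz-client"], input=commands, text=True)

# then make the saved MARC readable and pull out the publication data (field 260)
dump = subprocess.run(["yaz-marcdump", "wibble"], capture_output=True, text=True).stdout
print("\n".join(line for line in dump.splitlines() if line.startswith("260")))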

Building the ELTeC : the quest continues

My quest for a reasonably reliable and complete list of 19th c. English novels, together with indications of available electronic versions thereof, has made a large leap forward as a result of the generosity of others.

Firstly, Professor Troy Bassett of Purdue University Fort Wayne, whom God Preserve, tells me that he will now start distributing regular extracts from his database at http://www.victorianresearch.org/atcl/index.php in CSV form for my (and others’) mungeing pleasure. I spent some time testing this out, using LibreOffice to convert the CSV into an XML form which I could then reprocess with yet another XSLT stylesheet. This worked reasonably well (eventually) and I was able to create a new version of this data in TEI, all 15,682 records of it. To merge in identifiers for VWWP, Internet Archive, Gutenberg, and Google texts I used a simple-minded stylesheet which tries to match on the magic keys I created earlier, as discussed in previous blog postings. This worked, though not exactly with blistering speed, and I produced a second version of the Bassett TEI file with added references. Of those 15,682 titles, only 3,430 have at least one digital version, and there are a total of 3,813 digital versions in all.

Secondly, and with some trepidation, I approached the sacred grove of Big Data and sent a plea for help to the nice people at Hathi Trust, that extraordinary repository of 16 million or so books in digital form. You can download a full list of their holdings in a nice simple processable form from https://www.hathitrust.org/hathifiles : the catch is that their monthly snapshots contain 16 million (and counting) records derived, I think automatically, from contributing libraries’ catalogue systems, not all of them entirely consistent in their metadata usage, and certainly none of them clearly identifying which titles might be considered to be novels or have female authors. I did some initial poking around with a Perl script and ascertained that I could whittle the 16,208,265 entries in the May 4th version of the file down to a mere 1,253,950 by selecting only those which are publicly accessible, have a publication date between 1849 and 1920, are books in English, and are not US government documents. These counts are inflated by the fact that we are counting volumes here, rather than works.

Meanwhile, a helpful person on the Hathitrust help desk had got in touch pointing me to the existence of various selections created by other researchers before me, notably Ted Underwood’s file filtered_fiction_metadata.csv [available on Github at https://github.com/tedunderwood/character/blob/master/metadata/filtered_fiction_metadata.csv]
which lists 93,708 volumes of fiction, published between 1800 and 2007. “It focuses on fiction, but will include works in translation. It’s also not restricted to ‘the novel’; short stories and even folk tales are included. But we have tried to exclude fiction aimed explicitly at a juvenile audience across the whole timeline.” Underwood points out that “There is no authoritative master list of all English-language fiction from 1800 to the present. So scholars who want to do research at scale have to construct their own list.” which is true enough, though I wonder if he’s aware of Bassett’s work. Anyway, I wrote more scripts to process his CSV file into TEI, adding my magic keys to enable me to link his data with the titles in the Bassett database. All of which meant that I could provide Bassett with a list of Hathtrust links as well as those already provided for Gutenberg et al.

Bassett’s data includes some titles published only in the US, which I would like to exclude, as well as some “aimed explicitly at a juvenile audience” (the two sets of course have nothing in common, except that I would like to exclude them both). More annoyingly, it stops in 1901. I therefore went back to thinking about how to extract comparable data from the Hathifiles for the period 1901-1920. The most recent Hathi file (dated 1 June) has entries for 16,370,821 volumes. I tweaked my perl script to extract records for books in English published between 1900 and 1921, as before, tweaking it also to suppress duplicate titles, which gave me a more manageable but still daunting total of 348,232 titles.

The question is, as ever, how to pick out the novels from the chaff? The obvious answer is to apply some basic text analytic tools. I have a list of 15k titles which the obliging Bassett has already identified as novels. I have 348,232 titles extracted from the Hathitrust file. If I had to scan through them choosing the titles which are most likely to be novels, which words would catch my eye? Which words in fact appear significantly more frequently in the first list than their frequency distribution in the second would lead you to suppose? This sort of question is surely what computer linguists deal with ten times a day before breakfast.

I install Laurence Anthony’s excellent AntConc program from http://www.laurenceanthony.net/software.html and battle through its interface a bit. Here are the top twenty “keywords” for the Bassett title list, when I use the aforesaid Hathitrust titles list as the “base corpus” for comparison. The underlying statistic (need you ask) is log likelihood (4 term); the threshold being p < 0.05 (+Bonferroni), i.e. the defaults.

#Keyword Types: 611
#Keyword Tokens: 47261
#Search Hits: 0
1 2693 + 15387.79 0.0556 novel
2 8559 + 10677.59 0.0763 a
3 1969 + 6571.89 0.0388 or
4 1262 + 6402.77 0.0266 tale
5 1426 + 4104.1 0.0283 story
6 721 + 2599.46 0.0152 romance
7 2195 + 1957.72 0.0327 s
8 471 + 1072.51 0.0098 stories
9 364 + 1058.08 0.0077 love
10 837 + 1019.33 0.016 life
11 355 + 1001.66 0.0075 tales
12 9268 + 875.37 0.0384 the
13 272 + 773.82 0.0058 adventures
14 163 + 594.6 0.0035 daughter
15 224 + 591.09 0.0048 lady
16 217 + 546.42 0.0046 woman
17 150 + 484.95 0.0032 wife
18 570 + 478.07 0.011 other
19 221 + 422.18 0.0047 little
20 131 + 395.37 0.0028 miss

This seems like pretty good evidence that simply looking for titles which contain the words “novel”, “tale”, “story”, “romance” or – this one a surprise – “or” might do pretty well. It also suggests that the Bassett novel is quite concerned about women, but that’s another story.

Reassured, I re-tweak my Perl script to extract bibliographic records as before, but this time only if their title contains one of the words “novel”, “tale”, “story”, or “romance”. A glance at the results shows that this approach cuts down the number of titles to a more manageable 16,908 but still has many false positives (“My life and the story of the Gospel hymns and of sacred songs and solos” for example). A more sophisticated approach is needed, clearly. Time to make a cup of tea.

Building the ELTeC (stage 0) … continued

Have at you, Project Gutenberg…

I am for sure not the first person to think it would be nice to try to make the Project Gutenberg metadata more easily machine tractable. Matthew Jockers wrote a Python script to hack usable metadata out of the individual texts back in 2010 (see this blog entry); Damon Cavar wrote some Java to do something similar, but starting from the RDF form of the Gutenberg catalog, as part of an ambitious (but I think as yet incomplete) Project Gutenberg to TEI XML conversion project, last updated 2012. More recently, Jonathan Reeve has announced an interesting project which is hacking together various bits of Gutenberg, Gitenberg, and Wikipedia to make a Project Gutenberg database for text mining … one day.

My objectives are not so ambitious, and I like to keep things simple. I just want to know how many Gutenberg titles are listed in the Bassett database of 19th c British fiction. (I’d also like to be able to extract a list of all British novels in English published for the first time between 1902 and 1920, but that’s a separate problem.) Having experimented with other plain text options, I reluctantly decided to start from the Gutenberg RDF catalogue. At least that is expressed using a syntax which XSLT can handle and validate. No claims that its semantics are entirely reliable, of course.

Step 1 is to download and unpack a massive zip file from the Gutenberg site: the RDF format data linked to from a page in the Gutenberg wiki. It is massive because it actually contains nigh on 50,000 subdirectories, each containing a single file describing a single text. So, for example, the RDF format catalogue entry for text number 1234 is in the unpacked file cache/epub/1234/pg1234.rdf. When I looked there was also just one directory called DELETE-55495, which contained a variant of the entry for pg55485.rdf, but I pretended I hadn’t noticed that.

Step 2 is to develop and perfect a simple XSLT script to extract the useful grains from the enormous amount of chaff in each RDF file. This script (rdftotei) is designed to meet the needs of the ELTeC, so it rejects anything which is clearly outside the desired time frame (author born after 1920 or before 1800), or definitely not a novel (some records use a MARC “edt” relator to show that they are edited compilations). If I could find a way of identifying books which are not in English I would exclude them too. It cranks out simplified TEI bibl records like this:

<bibl xml:id="10037" n="abeautifulpossibility|Black">
<title>A Beautiful Possibility</title>
<author dates="1857 1936">Black, Edith Ferguson</author>
</bibl>

As you can see, this includes a  magic key that I will later use for matching with other ELTeC bibliographic records, notably the Bassett database I blogged about last week.

Step 3 is to find a way of running this script against 50,000 files which does not cause my computer to melt down, and preferably will complete in my lifetime. My first simple-minded approach was a shell script that invokes saxon on each file. But this has to set up a JVM afresh each time it runs, so it takes forever. I considered glomming the individual files together into a smaller number of larger files, so that loading the JRE gets done less frequently, but this is fiddly because each of the individual files begins with an XML declaration that would have to be removed during the glomming process. A question to the oxygen users list elicits 3 helpful alternative suggestions in ten minutes: the easiest and quickest of which is to use a feature I didn’t even know existed in saxon: specifying a directory as input and as output. So with all my RDF files in the folder RDF and nothing in the directory RDFx, I do the following two shell commands:

saxon -s:RDF -o:RDFx rdftotei.xsl
cat RDFx/* > gutenList.xml

and the whole thing is done in a couple of minutes.

Step 4 is to repeat the process as before: pick out the magic keys and then look for overlaps between those keys and those in the Bassett database, like this:

saxon gutenList.xml getKeys.xsl > gutenKeys.txt
comm -12 <(sort gutenKeys.txt) <(sort bassetKeys.txt)

Result on the first round: 1478 Gutenberg titles are already known to Bassett. Not as many as I’d expected, but not bad. Here are the full results for all the digital collections.

Out of 13,859 titles in Bassett’s database,  a total of 2937 appear in at least one of Gutenberg, Internet Archive, Google Books, or VWWP, i.e. more than 20% (which is better than I was expecting).  Here are the counts for the individual collections:

Gutenberg InternetArchive Google Books VWWP
1478 1155 594 32


Also to be expected, there’s a bit of overlap. 2638 appear in only one digital collection, 276 in two, and 23 in three. You can probably guess which titles those are, though one of them came as a bit of a surprise. What’s so great about Mary Ward’s “Marcella”?

Building the ELTeC : stage 0

Problem: if the ELTeC is supposed to represent in some sense the full range of novel production in a given language (EN in my case) for a given time slot (1850 to  1920, it says here) how do you find out what the population actually is before starting to sample it?

Enter at this point a wonderful database: Bassett, Troy J. At the Circulating Library: A Database of Victorian Fiction, 1837-1901. Victorian Research Web. [accessed 2018-02-09] (http://www.victorianresearch.org/atcl). I say “wonderful” advisedly: I have found nothing comparably complete and usable in a week of scratching around the internet.

According to Bassett’s technical notes, “This website was written on a Macintosh using a MySQL database and PHP.” Consequently its contents are pretty consistently organized and tagged, and so screen scraping and munging into a different format are both pretty easy: like this

#!/bin/bash
for number in {1850..1901}
do
  echo "$number"
  wget "http://www.victorianresearch.org/atcl/show_year.php?year=$number" -O $number.html
  tidy -asxml -n --new-empty-tags image $number.html | saxon - bassetConv.xsl > $number.xml
done
exit 0

The `bassetConv.xsl` stylesheet generates a TEI format bibl entry for each of the 15,000 plus titles, like this:


<bibl xml:id="11169" n="acounselofperfection|Malet">
<title>A Counsel of Perfection</title>
<author n="576">Lucas Malet.</author>
1 vol.  London: Kegan Paul.</bibl>

The numbers are the identifiers used by the MySQL database, so they should be unique. The @n attribute on the bibl element is a key I generate for matching purposes (see later).

I would rather like to know how many of these titles are available in
digital form, and from where. The current database has links to
Google Books (500 or so, it claims: I haven’t worked out how to find
them without downloading the entire database) but nothing else so far
as I can tell.

How to accomplish this?

My best plan so far is to extract lists of keys or fingerprints derived from the cataloguing information supplied by each online repository, and then start looking for overlaps with Bassett. My assumption is that it would be rather remarkable for any relevant title not to exist in Bassett. Some initial experiments suggest that the first few words of the title combined with the surname of the author make a reasonable approximation to a fingerprint for each title, though obviously not a perfect one (see the sketch below).
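
To make the idea concrete, here is a minimal sketch (not the actual code, which lives inside the various conversion scripts) of how such a key might be built in the shell: take the first few words of the title, lower-case them, strip everything that is not a letter, and append the author’s surname.

makeKey () {
  # first four words of the title, lower-cased and reduced to letters, plus surname
  key=$(echo "$1" | cut -d' ' -f1-4 | tr '[:upper:]' '[:lower:]' | tr -cd 'a-z')
  echo "${key}|$2"
}
makeKey "A Beautiful Possibility" "Black"   # prints abeautifulpossibility|Black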

So far I have looked at the following four online repositories:

1. Victorian Women Writers Project

This has 75 titles identified as fiction, all of them in good TEI format; the following URL grabs the catalogue info for them:

http://webapp1.dlib.indiana.edu/vwwp/search?docsPerPage=100&browseText=fiction&text1=fiction&field1=browse-genre&style=&smode=simple&brand=general

Life is too short to process all the resulting HTML: in any case, I am sure that if I wanted a proper XML catalogue my friends at Indiana would give me one. So I manually separate out the useful chunk of HTML and mangle it through a simple XSLT stylesheet (`vwwpConv.xsl`) to produce a file of entries like this:

<bibl xml:id="VAB7046" n="daphne|Ward">
<title>Daphne, or Marriage a la Mode</title>
<author>Ward, Humphry, Mrs., 1851–1920</author>
<publisher>London; New York: Cassell, 1909. 315 p.</publisher>
</bibl>

Note the cunningly constructed @n attribute supplying a key which, all things being equal, should match an entry in the Bassett database, if there is one.
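
For the record, the mangling is nothing more exotic than the usual tidy-then-saxon pipeline; something along these lines would do it, assuming the useful chunk has been saved as vwwp-fiction.html (both file names here are mine, not the project’s):

tidy -q -asxml -n vwwp-fiction.html | saxon -s:- -xsl:vwwpConv.xsl > vwwpList.xml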

2. The ebooks@adelaide project

This is a quirky site hosted by the University of Adelaide library which makes many texts available in a well-structured XHTML format that is easy to munge into TEI. For info about this apparently little-known but rather splendid project see https://ebooks.adelaide.edu.au/about/

Selecting relevant titles for our purposes is not so easy, so I grabbed a readable version of their entire catalogue in HTML from the website, and then grepped through it for potentially useful entries, resulting in a file full of lines like this:

<li><a href="/a/abbott/edwin/flatland/">Flatland: a romance of many dimensions / Edwin A. Abbott [1884]</a></li>

along with a lot of less useful lines like this

<li><a href="/a/aristotle/meteorology/">Meteorology / Aristotle; translated by E. W. Webster</a></li>

A perl script was the quickest way of munging these lines into minimal TEI entries like this:

<bibl><title>Flatland: a romance of many dimensions </title><author>Edwin A. Abbott </author> <date>1884</date><ref>/a/abbott/edwin/flatland/</ref></bibl>
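
The perl itself is not worth reproducing here; a sed one-liner in the same spirit would do much the same job, on the assumption (borne out by the two examples above) that the useful entries are exactly those ending in a bracketed date. The file names are of course mine:

sed -nE 's|<li><a href="([^"]+)">(.*) / (.*) \[([0-9]{4})\]</a></li>|<bibl><title>\2</title><author>\3</author> <date>\4</date><ref>\1</ref></bibl>|p' \
  adelaide-catalogue.html > adelaideList.xml

Lines without a bracketed date, like the Aristotle one, simply disappear.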

I am not sure that there is much wheat of this kind in this chaff however: I will come back to it later.

3. And then there’s Gutenberg

Yes, the Gutenberg Project also has a catalogue: a huge one, which you
can download as an incomprehensible RDF file or a plain text
monster. I took the latter option, and hacked out of it 49,639 lines
like this:

<title>100 Desert Wildflowers in Natural Color</title><author>Natt Noyes Dodge</author><idno>54631</idno>

I won’t rehearse here the manifold inconveniences of using Gutenberg texts. But it would be useful to come back to this and see how many of the Bassett titles are available here. As a first approximation, I reduced the title list to just the first part of the title and the author’s surname, with no spaces or punctuation, and then used the invaluable unix comm command to identify the lines common to the two files. Obviously this procedure is vulnerable to typos and inconsistent editorial practices, which are not uncommon, but in my first experiment I found 984 items with identical titles and authors in both the Gutenberg title list and the Bassett title list.

4 Internet Archive

This wonderful collection has a good search interface, spoiled somewhat by the unreliability of the data. I used it to grab all the entries for a collection called « 19thcennov » which looked promising. Sorted by descending date, the very first item had a date of « 1983 » which, on inspection, turned out to be a typo for « 1883 », so a good thing I didn’t use « date » to limit my search. OTOH, the second item in the list was something called « Mathematics in urban science, V : Catastrophe theory » published by the « Monticello, Ill. : Council of Planning Librarians ». No matter: at least the output is in XML, and the text identifiers used by the IA bear a striking resemblance to those I thought I had invented. My search gives me 7829 items which look like this:


<doc>
<str name="creator">North, William, d. 1854</str>
<str name="date">1847-01-01T00:00:00Z</str>
<str name="identifier">impostororbornwi03nort</str>
<str name="language">eng</str>
<str name="publisher">London : T.C. Newby</str>
<str name="title">The impostor, or, Born without a conscience</str>
<str name="volume">3</str>
</doc>

which I then munge into

<bibl xml:id="impostororbornwi01nort" n="theimpostor|North">
<author>North, William, d. 1854</author>
<title>The impostor, or, Born without a conscience</title>
London : T.C. Newby</bibl>

The IA gives each volume a separate catalogue entry, so these 7829 entries boil down to only 2739 unique keys which I can compare with the Bassett keys. On a first pass through, I identify 1235 matches, i.e. nearly half of them. I also identify lots of ways of improving the matching procedure, but for the moment this seems like it might be useful, though I am not sure that a success rate of around 10% of Bassett’s titles matched is altogether worth shouting from the rooftops about.
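
Incidentally, the XML above looks very much like what the Internet Archive’s advancedsearch interface returns, so the harvesting step itself is a one-liner. Something along these lines should reproduce it (the field list and the output file name are my guesses, based on the record shown above):

curl -G 'https://archive.org/advancedsearch.php' \
  --data-urlencode 'q=collection:19thcennov' \
  --data-urlencode 'fl[]=identifier' --data-urlencode 'fl[]=creator' \
  --data-urlencode 'fl[]=title' --data-urlencode 'fl[]=date' \
  --data-urlencode 'fl[]=publisher' --data-urlencode 'fl[]=volume' \
  --data-urlencode 'fl[]=language' \
  --data-urlencode 'rows=10000' --data-urlencode 'output=xml' \
  -o 19thcennov.xml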

Digitalising in Le Mans and eating in Oulx

Monday morning, July 3rd and here I am  again at St Pancras…  waiting for the   Eurostar to Paris which is unexpectedly 30 mins late leaving, and   therefore, likewise arriving. But I have plenty of time to get across Paris by metro to Montparnasse Le Bienvenu, and then by surface street, very hot and sunny, to the grand station where the TGVs hang out. How pleasant, a train system which works, I reflect as it zooms across the unutterably boring flat countryside of the Ile de France, and so to Le Mans, where I am due to address a DARIAH “Humanities At Scale” (whatever that means) funded workshop which rejoices in the title of Bibliotheca Digitalis, though I am pretty sure it will have nothing to do with foxgloves. Out of the station, five minutes up the road, and into the Hotel Chantecler, as advertised. Various emails insist I will be taken out for dinner by and with chums from Tours, and so it comes to pass. It’s time to be lionized for tomorrow I am lecturing.

July 4th

The morning starts with a (guided) walk down to the Médiathèque Louis Aragon, which is a newer and less brutally modernist building than the Maison de Culture we pass en route, and which is also in a state of unexpected chaos because of a major reshelving exercise. All the paperbacks and DVDs and what have you are spread out on tables under the watchful glare of lady librarians. But this is nothing to do with us: we are in a nice lecture room where we are being welcomed in broken English by various dignitaries, as per usual. I am second on the bill, after Jean-Yves Ramel from Tours, who gives a comprehensible and accessible introduction to OCR and how it doesn’t quite work. After coffee, I pick up the theme and wax eloquent on what sort of not-quite-working we are talking about, the need to go “beyond the page”, etc. Then we all depart for lunch in a state of advanced self-satisfaction. Lunch is across town in a nice bistrot and is enlivened by the arrival of Mark Greengrass, whom I haven’t seen in a million years, since the days when he was starting down a route he seems now to have abandoned, digitizing cultural networks in the Hartlib Papers project at Sheffield. It’s also rather tasty, as French bistrot lunches tend to be when ordered well in advance. Back at the Médiathèque, our local librarian hosts address us in French, with a somewhat hesitant on-the-fly translation from Toshi, and then we get to hear from all of the participants, most of whom turn out to be EU researchers or librarians from Italy, Romania, Bulgaria, Hungary, etc. It is impossible to take in fully some thirty or so projects presented at lightning speed (two minutes each!), which ensures that only the outliers stick in the memory, but I gleaned a general impression of keen competence ready to be enlarged, as well as recognizing a couple of TEI acolytes. And so to the first of the formal public lectures: a thoughtful discussion of the relative influences of social networks and of the nascent publishing industry in the production of some major 17th-century works. Being public, this lecture had to be delivered by Mark in French, unlike the rest of the proceedings, and some of the good citizens of Le Mans joined us to benefit from it. Huzzah. And then it’s dinner time in a nice old restaurant, of course.

July 5th

I make no effort to arrive on time this morning, which means that the door of the Médiathèque is locked when I get there, so I have to send imploring text messages to gain entry. But those extra minutes in bed were worth it. Somewhat to my surprise, the two technical briefings being given (one from Aurelian Ruellet on how social networks should properly be modelled; the other from Eduard Frunzeanu and Régis Robineau on how IIIF works) are both informative and accessible. I don’t think the former mentioned anything that the TEI does not already support; and the second reminded me how overdue a proper discussion of a bridge between the TEI and IIIF world views is. This led me to interrogate Robineau over lunch, perhaps a tad too aggressively. After lunch, we moved to another room for the first proper workshop session, in which participants were invited to do some data modelling based on a 17th-century register of permissions to print emanating from the royal Chancellerie in Paris, which they did enthusiastically and in groups. Most groups decided to design a database structure to hold extracts from the document, but a couple of hardy souls did make the effort of thinking about how they’d model the document itself. The evening brought a fairly full social programme, starting with a visit to the celebrated Le Mans abbey, followed by a civic reception, and concluding with a guided tour of the old town. I behaved fairly deplorably at all of these: firstly by turning up at the wrong cathedral (I blame Pierre-Yves, who bought the beers); secondly by drinking and eating too much; and thirdly by sloping off early as soon as it became apparent that serious walking was required. Which is a pity, since the bits I did manage to see of the Plantagenet old town looked really rather nice. The same, indeed, might be said of the whole workshop. (See further my photo album.) But I have a scheduling conflict, which means that I have to skip back to Paris the next day.

July 6th

I rise late and take a lingering breakfast, before packing in a leisurely way and accidentally stealing the Hotel Chantecler’s TV remote. And then back to the station for the 1134 to Paris. It is still infeasibly hot and the countryside is still rather dull, so I work on the DifDePo schema resolutely and don’t look out of the window much. Back in Paris, I catch the bus to the Gare de Lyon, walk to my hotel (de Venise, in an interesting little neighbourhood), dump my bag, and only then realise that there isn’t a bus from there to anywhere near rue Monge, where I am due for lunch in, er, 7 minutes’ time. Bother. I suddenly remember the existence of Parisian taxis (quite plentiful around the Gare de Lyon) and persuade one to take me to the village Monge, where Helene and the plat du jour are waiting, arriving a mere 30 minutes late, which I am assured is quite normal by Parisian standards. And so to a somewhat fraught meeting on the 4th floor of Censier, where I plead for some attention to the XML validation of the DifDePo transcriptions. Nice beers with Marc and Chiara afterwards: it’s still very hot. Then to BVH, where I fail to find a nice cool shirt and am assured that there are no more linen shirts in Paris, because of the heat. And so to the Gare du Nord in time to meet L, and then shepherd her by RER (very hot) to the Gare de Lyon, and then back to the Hotel de Venise. After inspecting the local eateries, we settle on a small Japanese supper nearby and retire to bed.

7 July

Up early to catch the TGV to Oulx, but we dilly-dally (did I say that it was already quite hot?) and SNCF mysteriously decide to make the train leave four minutes earlier than advertised, just as we arrive at the wrong end of platform 15 and realise that it contains two TGVs stuck together, both of which are about to leave, but only one of which has seats for us. Damn and blast. We pile into the cattle-class carriage of the wrong TGV, and resign ourselves to two hours of (actually quite well-behaved) excitable children en route to a colonie de vacances somewhere near Grenoble. We escape at Lyon, and find our comfy seats in the right TGV, though they are facing the wrong way for Lilette to enjoy the (very scenic) route from Grenoble to the Alps. The Residence du Commerce in Oulx is opposite the railway station and next door to a nice bar, where we refresh ourselves while waiting for the hotelier lady to appear. All very peaceful. Oulx is a tiny place, boasting a scenic bridge over a river, some snow-topped mountains, and one street of tourist shops, none of which has a shirt to offer me, nor even soap to wash one of the numerous dirty ones I have now accumulated, but never mind. On Guy’s recommendation, we reserve a table at the Ristorantino La Stella, down the road from the station, which turns out to be really rather nice. It is run by a serious Sardinian gentleman, aided by two cheerful ladies and a third who is his wife. There is only space for about six tables, and the food is cooked to order. We had a salad, which reminded me why one should only eat Italian tomatoes, followed by rabbit and polenta, which reminded us that rabbit can be tender and juicy. I revealed my complete ignorance of Italian history by wondering aloud why a Sardinian should be living in Piedmont, and the proprietor’s wife (who comes from Bologna anyway) was too polite to explain.

We have come to Oulx, if you were wondering, not just because it has a curious name, but also because it is a sensible place to break the journey to Liserna (where we are now headed), being the first civilised town you encounter on the Italian side of the border.

How to make a sow’s ear into a silk purse/Comment faire d’une buse un épervier

As the proverb says, you can’t turn a buzzard into a sparrow-hawk. But here’s how I went about producing a TEI-conformant, minimally encoded edition of the 1611 King James bible, starting from an all-singing, all-dancing, vastly over-complicated web site whose existence Martin Mueller had alerted me to last Friday in a plaintive posting to TEI-L.

The problem with web sites like this is that they expose only various HTML views of their underlying data, usually heavily infested with javascript, often deploying all sorts of nonstandard gizmos to make the pages look just so. I’ve no quarrel with their doing that, I just wish they’d make it a bit easier to get at the real data, i.e. the transcribed text and the pointers to the images that go with them. This site is a case in point: after spending some time looking at some of the gazillion HTML files a simple-minded wget -r starts hoovering up, I hypothesize that the data is actually stashed away somewhere inaccessible and all that you see on the site is a bunch of variously configured static web pages which have been created from it. The good thing about that is that the static web pages were created by an automaton, and the process can therefore be reversed by another automaton. The bad thing is that (in this case at least) clearly different automata have been used to generate vaguely different versions of the same page. For example: the following three URLs all show subtly different versions of the same first page of the 1611 bible : https://www.kingjamesbibleonline.org/1611_Genesis-Chapter-1/  https://www.kingjamesbibleonline.org/Genesis-Chapter-1_Original-1611-KJV/ and https://www.kingjamesbibleonline.org/Genesis_1_1611/

These are not simple redirects: each URL delivers a file in a different format. Well, I could have asked whoever runs this site what was going on, but that would have spoiled the fun, so I chose the format I liked best (the last of the three above) and wrote a script to generate requests for just those pages. I had to assume that the URLs would follow a consistent naming format, encouraged by a table of the names of the books of the bible I found in one of the chunks of embedded javascript, and which I moved into my little perl script. Running this script produced a bash script which grabbed each page using curl and saved it as an HTML file on my machine.
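
The generated script amounted to little more than one curl request per chapter; its shape was roughly this (a sketch only: the three book names and chapter counts here are a token sample, not the full table lifted from the site’s javascript):

# harvest one page per chapter; the real script enumerated every book of the bible
declare -A chapters=( [Genesis]=50 [Exodus]=40 [Matthew]=28 )
mkdir -p webScraped
for book in "${!chapters[@]}"; do
  for (( c=1; c<=${chapters[$book]}; c++ )); do
    curl -s "https://www.kingjamesbibleonline.org/${book}_${c}_1611/" -o "webScraped/${book}_${c}.html"
    sleep 1   # no need to hammer the server
  done
done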
What went wrong with this process? Surprisingly little. I didn’t find out till Sunday lunchtime that not only had I completely overlooked the Apocryphal books, but the file naming conventions used for these were not quite the same as for the canonical chapters (« Judith », for example, is actually spelled « Iudeth »); for the bulk of the 1300 or so chapters, though, my guesses about the URL to use were spot on. [This was Hubris. See my comment below]
The real challenge was disentangling the useful data from the resulting screen-scraped mess of pottage. My usual approach here is to run the HTML I get through Dave Raggett’s utterly wonderful tidy utility to turn it into kosher XML, and then write an XSLT script to pick out the bits I want. In this case, however, the pottage I got was so messy that even tidy threw up its metaphorical hands and refused to proceed with it, so I had to concoct a little perl script to pre-process each file and extract just the useful part, before running it through tidy and processing the resulting XML into conformant TEI by means of saxon and a little XSLT stylesheet. Like this:

for f in webScraped/*.html; do
  FNAME=`basename $f .html`
  echo ${FNAME}
  perl extract.prl $f | \
    tidy -q --error-file tidyerrs --doctype omit --numeric-entities yes -asxml | \
    saxon -o:chaps/${FNAME}.xml -s:- -xsl:postGrab.xsl chapId=${FNAME}
done

And what went wrong with this process? Well, quite a lot, as you might expect, but nothing that couldn’t be fixed by tweaking and rerunning the process (this is why I always put the effort into writing bash scripts for jobs like this: they can be rerun till I get it right). For example, I was deriving an XML id for each chapter from the name of the input file, but some of the files had names beginning with digits. My plan was to put each chapter into a separate XML file and then use XInclude to munge them together into a TEI document (to maximize the flexibility of said mungeing); but for this to work I had to get namespace declarations in the right place, and as anyone who has used them knows, anything involving namespace declarations never works the first time. To say nothing of unexpected inconsistencies in the data, which in fact were really very few. So far, the only thing that has caught me out was a handful of chunks which looked like biblical data in the HTML markup but were not. Fortunately these were detectable because they wound up with an invalid @n attribute in the XML output and could therefore be weeded out after the event.
While all this was going on, I wrote my driver file and did some further sniffing around on the interwebs for sources from which to complete the work. The 1611 Bible has a title page and loads of prefatory matter not all of which is available on www.kingjamesbibleonline.org (for the record, I found the title page on Wikimedia, and the rest of the front matter at several places, but in page image form only).
How should the Bible be modelled as a TEI document? In my first view, each verse is an <ab>, each chapter a <div>, each book a <text>, and each testament a <group>. This made sense to me, but then I realised that processing would be simpler if each book were instead regarded as a <div> of a different type; hence that is what the current versions of both the driver file and the ODD say. Considerations of where to put the genealogies, tables for Easter, almanacs, etc., which arguably do not belong in the front matter, may lead me to review that decision, so I have left <group> lurking in my ODD. It should be easy to produce a separate driver file representing that alternative view.

A more pressing need, however, is to sort out the placement of the page image links: in the HTML source, and therefore in my XML, links to the page images for the whole of a chapter are given as a sequence at the start of the transcribed text for that chapter, rather than being placed at the point in the transcript where each page begins. In some cases that point is identifiable because the transcribers have chosen to include part of the running page header in the transcription (it winds up as a <fw> element); but in many cases it isn’t. Most chapters occupy only a few pages, so sorting this out would not be a major effort, just a rather tedious and not easily automatable one.
So what have we learned? Actually it’s not that hard to make a usable TEI document even from something as idiosyncratic as this web site. Which is encouraging. Let me also hasten to add that I mean no disrespect for the hard work (still less for the generosity) which must have gone into the production of that website: everyone is entitled to their own design decisions. I am pleased to express my thanks to the anonymous people who have put in that effort, and decided to share its fruits even with the godless multitude. As the 1611 translators say in their preface ‘Zeale to promote the common good, whether it be by devising any thing our selves, or revising that which hath bene laboured by others, deserveth certainly much respect and esteeme’.

Notes towards a definition of TEI Conformance

[diagram of blobs omitted]

Each of the blobs here represents three subtly different things:

  • an ODD : that is, a collection of TEI specifications
  • a formal schema generated from that ODD, and its natural language documentation
  • the set of documents considered valid by that schema.

The TEI provides TEI All: a set of over 500 uniquely identifiable elements, classes, attributes, etc. and a schema in which they are all permitted. For all practical purposes a user of the TEI must make a selection from this cornucopia, and we call that selection a TEI subset. Of course there are many many possible TEI subsets, each making different choices of elements or attributes or classes, but the sets of documents which each consequent schema will validate all have in common that they will also be considered valid by the schema TEI All.

A user of the TEI may however do more than simply choose a subset of the provided specifications. They may also provide additional constraints for aspects of an encoding left underspecified by the TEI, for example by requiring that attribute values be taken from a closed list of possible values rather than being any syntactically valid token. They may simply change the datatype of an attribute, for example from a string to an integer or a date. They may also provide an alternative identifier for an element or an attribute, for example to change its canonical English name for one from another language. In some cases, attribute value changes are equivalent to a subsetting operation; in others not. Renaming operations never result in a subset: a document in which the element names have all been changed to their French equivalents cannot be validated by an English language version of TEI All. A user of the TEI can also change the content model or the class membership of existing TEI elements, in ways which may or may not be equivalent to a subsetting operation.

We use the term customised subset for all these kinds of personalisation because they result in something which is not necessarily a further subset of the TEI subset concerned, but a further modification of it. In the general case, their conformance with TEI All can be determined only by inspection, and their validation may require some additional processing.

Finally, a user of the TEI is at liberty to define entirely new elements and attributes, and to make such components members of existing TEI classes so that existing TEI elements may refer to them. They may also modify the content models of existing TEI elements to refer explicitly to such new elements. This results in an extended subset, since it contains elements or attributes additional to those provided by the TEI All schema. Such additional components should always be labelled as belonging to a non-TEI namespace. A processor can then determine that these components may be left out of consideration when determining the validity of a document with respect to TEI All.

In addition to these formal considerations, TEI conformance involves attention to some less easily verifiable constraints, specifically the twin requirements of honesty and explicitness. By honesty we mean that elements in the TEI namespace must respect the semantics which the TEI Guidelines supply as a part of their definition. By explicitness we mean that all modifications (i.e. both customized and extended subsets) should be expressed using an ODD to document exactly how the TEI declarations on which they are based have been derived. (An ODD need not of course be based on the TEI at all, but in that case the question of TEI conformance does not arise.)

Formally speaking, we can say of a conformant TEI document :

  • it must be a well formed XML document and
  • it is valid against the TEI All schema (a quick mechanical check is sketched after this list):
    • without modification (it is a TEI subset), or
    • after deletion of any elements it contains which are not in the TEI namespace, together with their children irrespective of namespace (it is a TEI extension), or
    • after application of any canonicalization algorithm specified by its associated ODD (it is a TEI customized subset)
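
The first two of these checks are entirely mechanical: for example (a sketch, assuming you have xmllint and the jing validator to hand, together with a locally generated copy of the tei_all schema):

# check well-formedness, then validity against TEI All
xmllint --noout mydocument.xml
jing tei_all.rng mydocument.xml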

The purpose of these and similar rules is to make interchange of documents easier. They do not however guarantee it, and they certainly do not provide any guarantee of interoperability. Unlike many other standards, the goal of the TEI is not to enforce or impose consistency of encoding, but to provide a means by which encoding choices and policies may be more readily understood, and hence (to some extent) algorithmically comparable.

(Another reason) Why I love the Internet

So, at the recent TEI Conference in Vienna, Elisa and I were indulging in a little mutual admiration over our knowledge of an obscure work entitled Thalaba the Destroyer by the early English Romantic poet Robert Southey (rhymes, as any fule kno, with « mouthy »). When I got back home, I went to look for the volume containing said work, which I dimly remembered having on my shelves in the decrepit-but-too-nice-to-throw-away section. And sure enough, there it was. The front board has come loose, but the first three openings look like this:

[images: front board, half-title, and title page]

Having scanned those first few pages, I naturally asked Mr Google what he knew about the matter. And was thus able rapidly to confirm :

  • My copy of Thalaba is the cheap reprint (two volumes in one) published
    by Vizetelly and Beeton in 1853. There is a Google-scanned version of the same edition, available from the Internet Archive. They have included with it a couple of pages of  advertisements for other works published by Clarke Beeton (p 7 and 8) which are missing in mine however.
  • What seems like another copy of the same edition is currently on sale at Abe Books for the startling sum of $199.76. Mine is in poor condition, which is why it only cost me half a crown back in 1967, when I used to frequent Oxford’s second-hand bookshops (there aren’t any to frequent these days).

As you may have noticed above, my copy also contains several signs of its previous owners. As well as the book plate and the inscription above, there’s a nice message from Aunty Sarah, the donor, opposite the preface:

[image: Aunty Sarah’s inscription]

And there’s also an intriguing note from « JB » dated some twenty years later, opposite the start of the poem proper.

[image: the note from JB]

So… what have we learned? Rosamund was given this book by her aunt, Sarah Brent, in 1860. And in 1882, her husband felt compelled to record his own experience of the Eastern exotic in the same book « We met at Persepolis an Arab maiden of most lovely form and features — she was a dream of beauty never to be forgotten ». What she made of it, one can only conjecture.

But why I love the Internet, is that (pondering these matters after breakfast this morning), it has helped me place these people a little more precisely in time and place. A search for « Rosamund Borrowman » told me that  the 1861 Census shows a person of that name, born 1825 in Kent, married to John Borrowman, born 1830 in Midlothian, residing in Middlesex in 1861. The ancestry.co.uk site where I found this record is pay-walled so no further details available, but that seems reasonably plausible.

And searching for « Rosamund Borrowman John » I was able to find a record of her death. Some industrious volunteers have been surveying the gravestones of a place called Hambledon in Surrey, and there she is:   « Rosamund Vertue the beloved wife of John Borrowman. She died 25th August 1895. Also the above named John Borrowman son of Robert Borrowman born in Edinburgh 3rd April 1830 died at Hambledon 4th July 1906. Also Elizabeth daughter of the above died 22nd October 1932 aged 72 years » It’s all in the spreadsheet.

My next step, obviously, will be to find out where Hambledon is, and whether you can get there by train. Maybe.