An experiment in counting the books

A couple of years ago I spent some time trying to determine which of the titles in the wonderful “At the Circulating Library” (ATCL) database were freely available online in digital form. This was for largely pragmatic reasons to do with building the ELTeC English language collection: other blog entries describe the method I used and some preliminary results. It’s not as easy as you might suppose to download reliable catalogue information from most digital libraries, nor is it always readily tractable when you do. After some experimentation, I hit on the idea of creating a magic key, a kind of fingerprint, derived from the title and author name as specified, which could then be matched against keys in the same format derived from ATCL entries.

More recently, it occurred to me that this data might also provide some interesting numbers to contribute to current debates about digitization priorities. Exactly why some titles make it to Project Gutenberg, the HathiTrust, or the Internet Archive and others don’t is not a question to which simple, un-nuanced answers are likely or even (maybe) possible, but we should still ask it. Those responsible for the digitization efforts of major libraries are, for some reason, a little coy about the principles on which books are chosen for digitization, or even about whether they actually have explicit selection policies. I assume that there is a difficult tightrope walk between, on the one hand, practical but purely adventitious matters (the relative locations of volume and scanner, the size and state of the volume, the time of day, the temperament of the scanner operator, etc.) and, on the other, principled criteria aiming to ensure a balance of, say, titles by female and male authors, or high and low brow, date of production, longevity of readership, and so on. It would be surprising if the choices made were completely unrelated to the characteristics of the population being sampled, or totally failed to reflect the cultural priorities of the scanning operation; the same uncertainties apply, of course, to the collection being sampled for digitization itself.

Anyway, I recently read an interesting article by Allen Riddell and Troy Bassett (“What library digitization leaves out”; preprint available from https://arxiv.org/abs/2009.00513) which reports that in the data they looked at – the comparatively small sample of surviving English novels published in 1836 and 1838 – shorter books and books with male authors are disproportionately more likely to have been digitized. I naturally wondered whether this applies equally well across the whole of the 19th century, which is what led me to revisit my efforts of two years ago. But first, here are the results.

There are 19,912 titles in the current ATCL database. Of these, 9152 (46%) have authors identified in the database as male, 9809 (49%) are identified as female, and 951 (5%) are identified as unknown. These relative proportions are rather different if we look at titles with at least one digital surrogate, of which there are in total 9099 (46%). Of these 9099 digitized texts, we find 5221 (57%) are of male authorship, 3718 (41%) of female authorship, and 160 (2%) are unsexed.

Look at that again. Although there are actually more titles available for digitization by female authors than by male, the proportion that actually gets digitized is significantly smaller (if, like me, you think a gap of 16 percentage points is pretty significant). Hmmm. These counts of course derive from the whole period covered by ATCL, from 1800 to 1900, so I also calculated them for each decade, only to find that the proportions and their imbalance remain fairly consistent across the century. And this despite huge changes in the numbers: for the last decade of the century ATCL lists nearly 6000 titles, a six-fold increase on (for example) the 1840s. What percentage of those titles were digitized? In both decades, over 51%. And what proportion of those digitized titles were male-authored? In both decades, 62%. There is some variability across the decades, but the basic picture remains the same.

One possible explanation might be that titles with unknown or unsexable authorship (e.g. the ubiquitous “Anonymous”) are more likely to have been female, and that hence we are not seeing all the truly female authors. But even were this the case (after all, why should we not equally well hypothesize that male authors might be bashful or crave secrecy?), the digitization rate for books ostensibly male-authored remains stubbornly higher than that for books ostensibly not male-authored (i.e. those classed as either F or U by ATCL). And the same, mutatis mutandis, is true of the ostensibly-female to ostensibly-not-female comparison.

Here’s a table showing the raw counts (the Total row covers the whole period; decades before the 1830s are not shown separately):

| Decade | All | “Male” | “Female” | “U” | A-dig | M-dig | F-dig | U-dig |
|--------|-------|------|------|-----|------|------|------|-----|
| Total  | 19912 | 9152 | 9809 | 951 | 9099 | 5221 | 3718 | 160 |
| 1830s  | 482   | 256  | 174  | 52  | 250  | 164  | 85   | 1   |
| 1840s  | 1037  | 543  | 422  | 72  | 538  | 334  | 202  | 2   |
| 1850s  | 1483  | 595  | 778  | 110 | 718  | 347  | 358  | 13  |
| 1860s  | 2341  | 1019 | 1093 | 229 | 1015 | 540  | 456  | 19  |
| 1870s  | 2866  | 1189 | 1514 | 163 | 1300 | 642  | 633  | 25  |
| 1880s  | 4126  | 1693 | 2287 | 146 | 1765 | 945  | 782  | 38  |
| 1890s  | 5979  | 2995 | 2863 | 121 | 3092 | 1929 | 1103 | 60  |

And here’s another showing the percentages (Ad% is the proportion of all titles digitized; M%, F%, and U% are each group’s share of all titles; Md%, Fd%, and Ud% are each group’s share of the digitized titles):

| Decade | Ad% | M% | Md% | F% | Fd% | U% | Ud% |
|--------|--------|--------|--------|--------|--------|--------|-------|
| Total  | 45.70% | 45.96% | 57.38% | 49.26% | 40.86% | 4.78%  | 1.76% |
| 1830s  | 51.87% | 53.11% | 65.60% | 36.10% | 34.00% | 10.79% | 0.40% |
| 1840s  | 51.88% | 52.36% | 62.08% | 40.69% | 37.55% | 6.94%  | 0.37% |
| 1850s  | 48.42% | 40.12% | 48.33% | 52.46% | 49.86% | 7.42%  | 1.81% |
| 1860s  | 43.36% | 43.53% | 53.20% | 46.69% | 44.93% | 9.78%  | 1.87% |
| 1870s  | 45.36% | 41.49% | 49.38% | 52.83% | 48.69% | 5.69%  | 1.92% |
| 1880s  | 42.78% | 41.03% | 53.54% | 55.43% | 44.31% | 3.54%  | 2.15% |
| 1890s  | 51.71% | 50.09% | 62.39% | 47.88% | 35.67% | 2.02%  | 1.94% |

In an ideal world, you’d expect the percentages for titles with male authors (M%) and for digitized titles with male authors (Md%) to be roughly the same, right? Think on… And feel free to download the csv file behind these tables for your own experimentation.

One should always suspect the data, so I make no excuse for the following detailed blow-by-blow account of how I got these numbers. Full gruesome details, including the scripts mentioned below, are available from https://github.com/lb42/bookLists

The basic method was to download a complete catalogue of relevant titles available from each target digital library, and then try to match the entries with records in the ATCL. For Google Books, which seemingly does not provide a complete catalogue online, I tried a different method, discussed further below.

I started by downloading the latest (June 2020) dump of the ATCL database, and converting it to a basic TEI XML format. I then did much the same for the holdings of five digital libraries with good holdings of 19th century novels: the Hathi Trust, the British Library, the Internet Archive, Project Gutenberg, and Google Books. As a control, and for testing purposes, I also looked at a few smaller collections, notably the Victorian Women Writers Project at Indiana University and the (now defunct) University of Adelaide “ebooks” repository. I wanted to provide something similar to John Mark Ockerbloom’s lovely Online Books Page at https://onlinebooks.library.upenn.edu/ but more precisely tied in to ATCL.

Hathi Trust makes available a monthly dump of their entire collection as a huge tab-delimited file. Working with the most recent dump, dated September 1 2020, I used a simple-minded perl script `hathiProcess.prl` to parse this file and select from it only freely-available English-language books published in Great Britain between 1800 and 1920; an XSLT stylesheet `htConv.xsl` then converted the results to the common project format (CPF).
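The perl script itself is in the repo, but the gist of the filtering is easy to sketch in Python. In this sketch the column positions follow my reading of the published Hathifiles field list, and the MARC country codes used to approximate “published in Great Britain” are likewise assumptions to verify:

```python
# Sketch (not hathiProcess.prl itself): filter the Hathifiles dump down to
# freely-available English-language books published in GB, 1800-1920.
# Column positions and MARC country codes are assumptions to verify.
import csv

ACCESS, RIGHTS_DATE, PUB_PLACE, LANG = 1, 16, 17, 18  # assumed 0-based fields
GB_CODES = {"enk", "stk", "wlk", "nik"}  # England, Scotland, Wales, NI

with open("hathi_full_20200901.txt", newline="", encoding="utf-8") as f:
    for row in csv.reader(f, delimiter="\t", quoting=csv.QUOTE_NONE):
        try:
            year = int(row[RIGHTS_DATE])
        except (ValueError, IndexError):
            continue  # no usable date; skip the record
        if (row[ACCESS] == "allow" and row[LANG] == "eng"
                and row[PUB_PLACE].strip() in GB_CODES
                and 1800 <= year <= 1920):
            print("\t".join(row))
```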

The British Library website makes available an Excel spreadsheet providing metadata for the titles from their collection which were digitized some time ago by the Microsoft Books project. I downloaded this, converted it to TEI with `csvtotei`, and converted the result to CPF (selecting just the 19th-century titles) with `blConv.xsl`.
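I haven’t reproduced `blConv.xsl` here, but the shape of the conversion is simple enough to sketch in Python; the spreadsheet column names used below (“Title”, “Name”, “Date”, “BL record ID”) are hypothetical stand-ins for whatever the BL file actually uses:

```python
# Sketch: turn rows of the BL metadata spreadsheet (saved as CSV) into
# minimal CPF-style <bibl> records. All column names are hypothetical.
import csv
from xml.sax.saxutils import escape

with open("bl_microsoft_books.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        date = row.get("Date", "")
        if not date.startswith("18"):  # keep 19th-century titles only
            continue
        print(f'<bibl xml:id="bl:{escape(row["BL record ID"])}">'
              f'<title>{escape(row["Title"])}</title>'
              f'<author>{escape(row["Name"])}</author>'
              f'<date>{escape(date)}</date></bibl>')
```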

Project Gutenberg makes available several versions of its catalogue data. I worked with the most recently updated one, which is a vast archive of unbelievably verbose RDF files. Despite its complexity, this data doesn’t include any publication data for the source texts concerned (unsurprising really), though it does provide birth and death dates for the authors. To cut down the numbers a little, I rejected titles whose authors were not born during the 19th century, and also those which specified a MARC relator field “edt” (to cut out non-original editions). Once I had remembered how on earth to handle a gazillion tiny files of RDF (I did this back in 2018), I used the `gutConvRDF.xsl` script to process them all to CPF, and concatenated the results into a single file.

The Internet Archive, so far as I can see, doesn’t have any generally available or downloadable catalogue, though it does have a really good query interface. The method I used for attacking Google Books would presumably work equally well (or equally badly; see below) in this case, but I haven’t tried it. Instead I just used a predefined collection called `19thcennov` which someone at the University of Illinois at Urbana-Champaign thoughtfully created back in December 2008. This gave me 7828 XML records, which were easily converted to CPF using `iaConv.xsl`.
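Should anyone want to try the query-interface route, the Archive’s advanced-search endpoint can be paged through programmatically. Here is a sketch, with the endpoint and parameter names given as I understand them, so verify against the current documentation:

```python
# Sketch: page through the Internet Archive's advancedsearch endpoint for
# the 19thcennov collection. Endpoint and parameter names should be checked
# against the current API documentation before relying on them.
import requests

URL = "https://archive.org/advancedsearch.php"
params = {
    "q": "collection:19thcennov",
    "fl[]": ["identifier", "title", "creator"],
    "rows": 500,
    "page": 1,
    "output": "json",
}
while True:
    docs = requests.get(URL, params=params).json()["response"]["docs"]
    if not docs:
        break  # no more pages
    for doc in docs:
        print(doc.get("identifier"), "|", doc.get("title"))
    params["page"] += 1
```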

The common project format files all consist of TEI <bibl> elements with either an @xml:id attribute or an <idno> specifying the identifying code for this item in the relevant repository: for example, `ia:foreignersnovel03pric` identifies the Internet Archive’s digitization of volume three of Eleanor Price’s novel “The Foreigners”. Each <bibl> also has an @n attribute supplying the magic key for the title, which is confected as follows:

  • remove the full stop following Mr or Mrs in any title containing one
  • take the substring of the title up to the first occurrence of one of the punctuation marks . , : ; or /
  • concatenate this with the author’s last name, separated by “|”
  • convert to lower case and remove all punctuation characters and spaces

So, for example, ATCL lists a work with the title “The Foreigners: A Novel” attributed to author “Eleanor C. Price”. The same work appears in the Internet Archive list, but with the author “Price, Eleanor C. (Eleanor Catherine)” and the title “The foreigners : a novel”. Despite the differing strings, both will get the same magic key “theforeigners|price”. This method is far from bulletproof, but it’s serviceable.
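Expressed in Python, the recipe might look something like this (a sketch of the rules above, not the project’s actual code):

```python
import re

def magic_key(title, last_name):
    """Confect a matching key from a title and an author's last name."""
    # remove the full stop following Mr or Mrs
    title = re.sub(r"\b(Mr|Mrs)\.", r"\1", title)
    # keep only the substring up to the first . , : ; or /
    title = re.split(r"[.,:;/]", title, maxsplit=1)[0]
    # concatenate with the author's last name
    key = f"{title}|{last_name}"
    # lower-case and strip punctuation and spaces (keeping the separator)
    return re.sub(r"[^\w|]", "", key.lower())

# both cataloguings of the example title yield the same key
assert magic_key("The Foreigners: A Novel", "Price") == "theforeigners|price"
assert magic_key("The foreigners : a novel", "Price") == "theforeigners|price"
```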

For Google Books, as noted above, there is no readily downloadable catalogue. But there is an API, which in a moment of madness I thought it might be cool to learn how to use. A day of poking around led me to a neat python script some helpful person had written to look up ISBN numbers (hat tip to AO8’s treasury), which I mercilessly hacked to my own purposes. My version reads a file of URL-encoded search requests like this “inTitle:the+inTitle:foreigners+inAuthor:Price”, fires them at the Google API, and processes the returns into a rudimentary bibl, or a comment lamenting the absence or unavailability of the item in question. The file of search requests is rather long (one for each title in ATCL for which I have not yet found any digital version – a total of 11,203), so I make the program sleep for a while after firing off about 40 consecutive requests, to help the Google server catch up. Despite this considerate behaviour on my part, it did not take Mr Google long to decide that my program (or my IP address) was a threat, and to start returning unco-operative HTTP messages like 503 (“Service Unavailable”) and 429 (“Too many requests”). The API Help pages confirm that Google considers “using an app, program or script to perform a large number of searches in a short time” prima facie justification for temporarily blocking the IP address in question, though it’s not clear what exactly is meant by “large” (more than 100?) and “short” (less than a minute?) in that phrase. Furthermore, when I search using my specially-minted API key, there seems to be a hard limit of 1000 queries per day in any case: so this job is not going to be finished very quickly. Still, I do now have an extra 1517 records to show for two days’ work.
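The hacked-up script is in the GitHub repo; its core loop looks roughly like the following sketch (simplified: the file name, pause lengths, and retry logic here are illustrative guesses, not the script’s actual values):

```python
# Sketch of the query loop: read prepared search strings, fire them at the
# Books API, and pause periodically so as not to annoy the server.
import time
import requests

API = "https://www.googleapis.com/books/v1/volumes"
API_KEY = "..."  # my specially-minted API key

def lookup(query):
    r = requests.get(API, params={"q": query, "key": API_KEY})
    if r.status_code in (429, 503):  # rate-limited: back off, retry once
        time.sleep(60)
        r = requests.get(API, params={"q": query, "key": API_KEY})
    return r.json() if r.ok else None

with open("searchRequests.txt") as f:
    for n, line in enumerate(f, start=1):
        result = lookup(line.strip())
        if result and result.get("totalItems", 0) > 0:
            info = result["items"][0]["volumeInfo"]
            print(info.get("title"), "|", info.get("authors"))
        else:
            print("<!-- no match for", line.strip(), "-->")
        if n % 40 == 0:  # let the Google server catch up
            time.sleep(30)
```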

Once I’ve created all these lists, I run the `merger.xsl` script to add <ref> elements to the ATCL-TEI file I created in the first step. This makes for some redundancy, for two reasons: firstly, for most of the archives a three-volume novel is likely to get a separate entry for each volume; secondly, for many titles there exist multiple digitizations, which may (or may not) derive from the same source. The following table shows for each archive the number of records selected for processing, the number of references to ATCL titles found, and the number of titles affected. Note that I haven’t yet done any de-duplication to remove overlaps.

| Archive | Records processed | References found | Titles affected |
|---------|-------------------|------------------|-----------------|
| British Library | 62015 | 9920 | 5104 |
| Hathi Trust | 460070 | 18891 | 5655 |
| Internet Archive | 7829 | 4691 | 1655 |
| Project Gutenberg | 38338 | 2880 | 2275 |
| Google Books | ? | 1517 | 1517 |

I haven’t made available the CPF files for each archive, nor the final merged TEI version of the ATCL dump, since this is not really my data to share. But I have made available a file called atcl-links.csv, which is a spreadsheet with a row for each ATCL title digitized in one or more publicly available digital collections, mapping its ATCL identifier to its identifier in each repo. I’ll update these as and when the data improves.
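If you download it, counting titles per repository takes only a few lines of pandas; note that the column names used below are hypothetical stand-ins, so check the CSV header for the real ones:

```python
# Sketch: per-repository counts from atcl-links.csv. The repository column
# names ("bl", "ht", "ia", "pg", "gb") are hypothetical placeholders.
import pandas as pd

links = pd.read_csv("atcl-links.csv")
for repo in ["bl", "ht", "ia", "pg", "gb"]:
    print(repo, links[repo].notna().sum())  # titles with an id in this repo
```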

Building the ELTeC (stage 0) … continued

Have at you, Project Gutenberg…

I am for sure not the first person to think it would be nice to make the Project Gutenberg metadata more easily machine-tractable. Matthew Jockers wrote a python script to hack usable metadata out of the individual texts back in 2010 (see this blog entry); Damon Cavar wrote some java to do something similar, but starting from the RDF form of the Gutenberg catalog, as part of an ambitious (but I think as yet incomplete) Project Gutenberg to TEI XML conversion project, last updated 2012. More recently, Jonathan Reeve has announced an interesting project which is hacking together various bits of Gutenberg, Gitenberg, and Wikipedia to make a Project Gutenberg database for text mining … one day.

My objectives are not so ambitious, and I like to keep things simple. I just want to know how many Gutenberg titles are listed in the Bassett database of 19th-century British fiction. (I’d also like to be able to extract a list of all British novels in English published for the first time between 1902 and 1920, but that’s a separate problem.) Having experimented with other plain-text options, I reluctantly decided to start from the Gutenberg RDF catalogue. At least that is expressed using a syntax which XSLT can handle and validate. No claims that its semantics are entirely reliable, of course.

Step 1 is to download and unpack a massive zip file from the Gutenberg site; the RDF format data is linked to from a page in the Gutenberg wiki. It is massive because it actually contains nigh on 50,000 subdirectories, each containing a single file describing a single text. So, for example, the RDF format catalogue entry for text number 1234 is in the unpacked file cache/epub/1234/pg1234.rdf. When I looked there was also just one directory called DELETE-55495, which contained a variant of the entry for pg55485.rdf, but I pretended I hadn’t noticed that.

Step 2 is to develop and perfect a simple XSLT script to extract the useful grains from the enormous amount of chaff in each RDF file. This script (`rdftotei`) is designed to meet the needs of the ELTeC, so it rejects anything which is clearly outside the desired time span (author born after 1920 or before 1800), or definitely not a novel (some records use a MARC “edt” descriptor to show that they are edited compilations). If I could find a way of identifying books which are not in English I would exclude them too. It cranks out simplified TEI bibl records like this:

<bibl xml:id="10037" n="abeautifulpossibility|Black">
<title>A Beautiful Possibility</title>
<author dates="1857 1936">Black, Edith Ferguson</author>
</bibl>

As you can see, this includes a magic key that I will later use for matching with other ELTeC bibliographic records, notably the Bassett database I blogged about last week.
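If XSLT isn’t your thing, the same filtering logic can be sketched in Python with lxml. The namespace URIs and element paths below are my reading of the PG RDF files, so treat them as assumptions to verify against a real file:

```python
# Sketch of the rdftotei logic: pull title, author and dates out of one
# pgXXXX.rdf file, rejecting authors outside the desired time span.
# Namespace URIs and element paths are assumptions to check.
from lxml import etree

NS = {
    "pgterms": "http://www.gutenberg.org/2009/pgterms/",
    "dcterms": "http://purl.org/dc/terms/",
}

def extract(path):
    tree = etree.parse(path)
    title = tree.findtext(".//dcterms:title", namespaces=NS)
    name = tree.findtext(".//pgterms:agent/pgterms:name", namespaces=NS)
    birth = tree.findtext(".//pgterms:agent/pgterms:birthdate", namespaces=NS)
    death = tree.findtext(".//pgterms:agent/pgterms:deathdate", namespaces=NS)
    # reject anything outside the desired time span
    if birth is None or not (1800 <= int(birth) <= 1920):
        return None
    # (the real script also rejects records carrying a MARC "edt" relator)
    return title, name, birth, death

print(extract("cache/epub/10037/pg10037.rdf"))
```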

Step 3 is to find a way of running this script against 50,000 files which does not cause my computer to melt down, and preferably will complete in my lifetime. My first simple-minded approach was a shell script that invokes saxon on each file. But this has to set up a JVM afresh each time it runs, so it takes forever. I considered glomming the individual files together into a smaller number of larger files, so that loading the JVM gets done less frequently, but this is fiddly because each of the individual files begins with an XML declaration that would have to be removed during the glomming process. A question to the oxygen users list elicited three helpful alternative suggestions within ten minutes, the easiest and quickest of which was to use a feature I didn’t even know existed in saxon: specifying a directory as input and as output. So with all my RDF files in the folder RDF and nothing in the directory RDFx, I do the following two shell commands:

saxon -s:RDF -o:RDFx rdftotei.xsl
cat RDFx/* > gutenList.xml

and the whole thing is done in a couple of minutes.

Step 4 is to repeat the process as before: pick out the magic keys and then look for overlaps between those keys and those in the Bassett database, like this:

saxon gutenList.xml getKeys.xsl > gutenKeys.txt
comm -12 <(sort gutenKeys.txt) <(sort bassetKeys.txt)

Result on the first round: 1478 Gutenberg titles are already known to Bassett. Not as many as I’d expected, but not bad. Here are the full results for all the digital collections.

Out of 13,859 titles in Bassett’s database, a total of 2937 appear in at least one of Gutenberg, Internet Archive, Google Books, or VWWP, i.e. more than 20% (which is better than I was expecting). Here are the counts for the individual collections:

| Gutenberg | Internet Archive | Google Books | VWWP |
|-----------|------------------|--------------|------|
| 1478      | 1155             | 594          | 32   |

As is also to be expected, there’s a bit of overlap: 2638 titles appear in only one digital collection, 276 in two, and 23 in all three. You can probably guess which titles those are, though one of them came as a bit of a surprise. What’s so great about Mary Ward’s “Marcella”?