
Building the ELTeC (stage 0) … continued

Have at you, Project Gutenberg…

I am for sure not the first person to think it would be nice to try to make the Project Gutenberg metadata more easily machine-tractable. Matthew Jockers wrote a Python script to hack usable metadata out of the individual texts back in 2010 (see this blog entry); Damon Cavar wrote some Java to do something similar, but starting from the RDF form of the Gutenberg catalog, as part of an ambitious (but I think as yet incomplete) Project Gutenberg to TEI XML conversion project, last updated in 2012. More recently, Jonathan Reeve has announced an interesting project which is hacking together various bits of Gutenberg, Gitenberg, and Wikipedia to make a Project Gutenberg database for text mining … one day.

My objectives are not so ambitious, and I like to keep things simple. I just want to know how many Gutenberg titles are listed in the Bassett database of 19th-century British fiction. (I’d also like to be able to extract a list of all British novels in English published for the first time between 1902 and 1920, but that’s a separate problem.) Having experimented with other plain-text options, I reluctantly decided to start from the Gutenberg RDF catalogue. At least that is expressed using a syntax which XSLT can handle and validate. No claims that its semantics are entirely reliable, of course.

Step 1 is to download and unpack a massive zip file from the Gutenberg site. The RDF format data we want is linked to from a page in the Gutenberg wiki. It is massive because it actually contains nigh on 50,000 subdirectories, each containing a single file describing a single text. So, for example, the RDF format catalogue entry for text number 1234 is in the unpacked file cache/epub/1234/pg1234.rdf. When I looked there was also just one directory called DELETE-55495, which contained a variant of the entry for pg55485.rdf, but I pretended I hadn’t noticed that.
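For the record, the fetch-and-unpack step is just a couple of shell commands; the URL and file names below are from memory and may have moved, so treat this as a sketch rather than gospel:

wget 'https://www.gutenberg.org/cache/epub/feeds/rdf-files.tar.zip'
unzip rdf-files.tar.zip      # assumes the zip wraps a tar archive of this name
tar -xf rdf-files.tar        # unpacks the cache/epub/NNNN/pgNNNN.rdf hierarchy, one directory per text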

Step 2 is to develop and perfect a simple XSLT script to extract the useful grains from the enormous amount of chaff in each RDF file. This script (rdftotei) is designed to meet the needs of the ELTeC, so it rejects anything which is clearly outside the desired period (author born after 1920 or before 1800), or definitely not a novel (some records use the MARC edt relator code to show that they are edited compilations). If I could find a way of identifying books which are not in English I would exclude them too. It cranks out simplified TEI bibl records like this:

<bibl xml:id="10037" n="abeautifulpossibility|Black">
<title>A Beautiful Possibility</title>
<author dates="1857 1936">Black, Edith Ferguson</author>
</bibl>

As you can see, this includes a magic key that I will later use for matching with other ELTeC bibliographic records, notably the Bassett database I blogged about last week.
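As for the date test itself, it is no more than a range check on the author’s birth year. This is not the actual XSLT, just a crude shell approximation of the rule, and the pgterms:birthdate element name is my assumption about how the RDF is laid out:

# sketch only: look at the first birth date found in one RDF file and apply the 1800-1920 rule
born=$(grep -o '<pgterms:birthdate[^>]*>[0-9]\{4\}' cache/epub/1234/pg1234.rdf | grep -o '[0-9]\{4\}$' | head -1)
if [ -n "$born" ] && [ "$born" -ge 1800 ] && [ "$born" -le 1920 ]; then
  echo "pg1234: author born $born, keep"
else
  echo "pg1234: reject"
fi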

Step 3 is to find a way of running this script against 50,000 files which does not cause my computer to melt down, and preferably will complete in my lifetime. My first simple-minded approach was a shell script that invokes Saxon on each file. But this has to set up a JVM afresh each time it runs, so it takes forever. I considered glomming the individual files together into a smaller number of larger files, so that starting the JVM gets done less frequently, but this is fiddly because each of the individual files begins with an XML declaration that would have to be removed during the glomming process. A question to the oXygen users list elicits three helpful alternative suggestions within ten minutes, the easiest and quickest of which is to use a feature I didn’t even know existed in Saxon: specifying a directory as input and as output. So with all my RDF files in the folder RDF and nothing in the directory RDFx, I do the following two shell commands:

saxon -s:RDF -o:RDFx rdftotei.xsl
cat RDFx/* > gutenList.xml

and the whole thing is done in a couple of minutes.

Step 4 is to repeat the process as before: pick out the magic keys and then look for overlaps between those keys and those in the Bassett database, like this:

saxon gutenList.xml getKeys.xsl > gutenKeys.txt
comm -12 <(sort gutenKeys.txt) <(sort bassetKeys.txt)

Result on the first round: 1478 Gutenberg titles are already known to Bassett. Not as many as I’d expected, but not bad. Here are the full results for all four digital collections.

Out of 13,859 titles in Bassett’s database, a total of 2937 appear in at least one of Gutenberg, Internet Archive, Google Books, or VWWP, i.e. more than 20% (which is better than I was expecting). Here are the counts for the individual collections:

Gutenberg: 1478
Internet Archive: 1155
Google Books: 594
VWWP: 32

As is also to be expected, there’s a bit of overlap: 2638 titles appear in only one digital collection, 276 in two, and 23 in three. You can probably guess which titles those are, though one of them came as a bit of a surprise. What’s so great about Mary Ward’s “Marcella”?

Building the ELTeC: stage 0

Problem: if the ELTeC is supposed to represent in some sense the full range of novel production in a given language (EN in my case) for a given time slot (1850 to 1920, it says here), how do you find out what the population actually is before starting to sample it?

Enter at this point a wonderful database: Bassett, Troy J. At the Circulating Library: A Database of Victorian Fiction, 1837-1901. Victorian Research Web. [accessed 2018-02-09] (http://www.victorianresearch.org/atcl). I say “wonderful” advisedly: I have found nothing comparably complete and usable in a week of scratching around the internet.

According to Bassett’s technical notes, “This website was written on a Macintosh using a MySQL database and PHP.” Consequently its contents are pretty consistently organized and tagged, which makes screen scraping and munging them into a different format both pretty easy, like this:

#!/bin/bash
for number in {1850..1901}
do
  echo "$number"
  wget "http://www.victorianresearch.org/atcl/show_year.php?year=$number" -O "$number.html"
  tidy -asxml -n --new-empty-tags image "$number.html" | saxon - bassetConv.xsl > "$number.xml"
done
exit 0

The `bassetConv.xsl` stylesheet generates a TEI format bibl entry for each of the 15,000 plus titles, like this:


<bibl xml:id="11169" n="acounselofperfection|Malet">
<title>A Counsel of Perfection</title>
<author n="576">Lucas Malet.</author>
1 vol.  London: Kegan Paul.</bibl>
The numbers are the identifiers used by the MySQL database, so they should be unique. The @n attribute on the bibl element is a key I generate for matching purposes (see later).

I would rather like to know how many of these titles are available in
digital form, and from where. The current database has links to
Google Books (500 or so, it claims: I haven’t worked out how to find
them without downloading the entire database) but nothing else so far
as I can tell.

How to accomplish this?

My best plan so far is to extract lists of keys or fingerprints, derived from the cataloguing information supplied by each online repository, and then start looking for overlaps with Bassett. My assumption is that it would be rather remarkable for any relevant title not to exist in Bassett. Some initial experiments suggest that the first few words of the title combined with the surname of the author make a reasonable approximation to a fingerprint for each title, though obviously not a perfect one.
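To make that concrete, here is a minimal shell sketch of the sort of key I have in mind (my own illustration, not code taken from any of the repositories): lower-case the first part of the title, strip spaces and punctuation, and bolt the author’s surname on the end.

makeKey () {
  # take the title up to its first comma, colon or semicolon, lower-case it,
  # keep only the letters, then append the surname after a "|"
  stub=$(echo "$1" | sed 's/[,:;].*//' | tr '[:upper:]' '[:lower:]' | tr -cd 'a-z')
  echo "${stub}|$2"
}
makeKey "The Impostor, or, Born without a conscience" "North"   # -> theimpostor|North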

So far I have looked at the following four online repositories:

1. Victorian Women Writers Project

This has 75 titles identified as fiction, all of them in good TEI format; the following query grabs the catalogue info for them:

http://webapp1.dlib.indiana.edu/vwwp/search?docsPerPage=100&browseText=fiction&text1=fiction&field1=browse-genre&style=&smode=simple&brand=general
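For the record, that listing can be pulled down from the command line in the same way as the Bassett pages (the output filename here is just my own choice):

wget -O vwwp-fiction.html 'http://webapp1.dlib.indiana.edu/vwwp/search?docsPerPage=100&browseText=fiction&text1=fiction&field1=browse-genre&style=&smode=simple&brand=general'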

Life is too short to process all the resulting HTML: in any case I am sure if I wanted a proper XML catalogue my friends at Indiana would give me one. So I manually separate out the useful chunk of HTML and mangle it through a simple XSLT stylesheet (`vwwpConv.xsl`), to produce a file of entries like this:

<bibl xml:id="VAB7046" n="daphne|Ward">
<title>Daphne, or Marriage a la Mode</title>
<author>Ward, Humphry, Mrs., 1851–1920</author>
<publisher>London; New York: Cassell, 1909. 315 p.</publisher>
</bibl>

Note the cunningly constructed @n attribute supplying a key which, all things being equal, should match an entry in the Bassett database, if there is one.

2. The ebooks@adelaide project

This is a quirky site hosted by the University of Adelaide library which makes many texts available in a well structured XHTML format which is easy to munge into TEI. For info about this apparently little known but rather splendid project see https://ebooks.adelaide.edu.au/about/
Selecting relevant titles for our purposes is not so easy, so I grabbed a readable version of their entire catalogue in HTML from the website, and then grepped through it for potentially useful entries, resulting in a file full of lines like this:

<li><a href="/a/abbott/edwin/flatland/">Flatland: a romance of many dimensions / Edwin A. Abbott [1884]</a></li>

along with a lot of less useful lines like this:

<li><a href="/a/aristotle/meteorology/">Meteorology / Aristotle; translated by E. W. Webster</a></li>

A Perl script was the quickest way of munging this into minimal TEI entries like this:

<bibl><title>Flatland: a romance of many dimensions </title><author>Edwin A. Abbott </author> <date>1884</date><ref>/a/abbott/edwin/flatland/</ref></bibl>
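In fact a single sed substitution gets most of the way there, and conveniently skips the lines which lack an author and a bracketed date; this is only an illustrative stand-in for the Perl, and the input filename is invented:

sed -n 's|<li><a href="\([^"]*\)">\(.*\) / \(.*\) \[\([0-9]*\)\]</a></li>|<bibl><title>\2</title><author>\3</author> <date>\4</date><ref>\1</ref></bibl>|p' adelaide-catalogue.html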

I am not sure that there is much wheat of this kind in this chaff however: I will come back to it later.

3. And then there’s Gutenberg

Yes, the Gutenberg Project also has a catalogue: a huge one, which you
can download as an incomprehensible RDF file or a plain text
monster. I took the latter option, and hacked out of it 49639 lines
like this:

<title>100 Desert Wildflowers in Natural Color</title><author>Natt Noyes Dodge</author><idno>54631</idno>

I won’t rehearse here the manifold inconveniences of using Gutenberg texts. But it would be useful to come back to this and see how many of the Bassett titles are available here. As a first approximation, I reduced the title list to just the first part of the title and the author’s surname, with no spaces or punctuation, and then used the invaluable Unix comm command to identify lines common to the two files. Obviously this procedure is vulnerable to typos and inconsistent editorial practices, which are not uncommon, but in my first experiment I found 984 items with identical titles and authors in both the Gutenberg title list and the Bassett title list.
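As a rough illustration of that reduction and comparison (the filenames here are my own, and the real key-building was done in XSLT rather than in the shell):

# build a key from each <title>...<author>... line, then count the overlap with the Bassett keys
while read -r line
do
  title=$(echo "$line" | sed 's|.*<title>\([^<]*\)</title>.*|\1|')
  author=$(echo "$line" | sed 's|.*<author>\([^<]*\)</author>.*|\1|')
  stub=$(echo "$title" | sed 's/[,:;].*//' | tr '[:upper:]' '[:lower:]' | tr -cd 'a-z0-9')   # keep digits for titles like "100 Desert Wildflowers"
  echo "${stub}|${author##* }"    # surname taken as the last word of the author field
done < gutenbergTitles.txt | sort -u > gutenbergKeys.txt
comm -12 gutenbergKeys.txt <(sort -u bassetKeys.txt) | wc -l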

4. Internet Archive

This wonderful collection has a good search interface, spoiled somewhat by the unreliability of the data. I used it to grab all the entries for a collection called “19thcennov” which looked promising. Sorted by descending date, the very first item had a date of “1983” which, on inspection, turned out to be a typo for “1883”, so a good thing I didn’t use “date” to limit my search. OTOH, the second item in the list was something called “Mathematics in urban science, V : Catastrophe theory” published by the “Monticello, Ill. : Council of Planning Librarians”. No matter: at least the output is in XML, and the text identifiers used by the IA bear a striking resemblance to those I thought I had invented. My search gives me 7829 items which look like this:


<doc>
<str name="creator">North, William, d. 1854</str>
<str name="date">1847-01-01T00:00:00Z</str>
<str name="identifier">impostororbornwi03nort</str>
<str name="language">eng</str>
<str name="publisher">London : T.C. Newby</str>
<str name="title">The impostor, or, Born without a conscience</str>
<str name="volume">3</str>
</doc>

which I then munge into

<bibl xml:id="impostororbornwi01nort" n="theimpostor|North">
<author>North, William, d. 1854</author>
<title>The impostor, or, Born without a conscience</title>
London : T.C. Newby</bibl>
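For completeness, the search itself can be scripted against the Internet Archive’s advanced-search endpoint; the field list and row limit below are my reconstruction rather than the exact query I used:

# hedged sketch: pull the whole 19thcennov collection as Solr-style XML
wget -O 19thcennov.xml 'https://archive.org/advancedsearch.php?q=collection%3A19thcennov&fl%5B%5D=identifier&fl%5B%5D=creator&fl%5B%5D=title&fl%5B%5D=date&fl%5B%5D=publisher&fl%5B%5D=volume&fl%5B%5D=language&rows=10000&output=xml'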

The IA gives each volume a separate catalogue entry, so these 7829 entries boil down to only 2739 unique keys which I can compare with the Bassett keys. On a first pass through I identify 1235 matches, i.e. nearly half. I also identify lots of ways of improving the matching procedure, but for the moment this seems like it might be useful. Though I am not sure that a success rate of around 10% in identifying matches is altogether worth shouting from the rooftops about.