
Another Fine Mess…

As previously mentioned, I have been trying to mangle Allardyce Nicoll’s Handlists into a tractable database for what seems like forever. Here’s the latest and hopefully last update.

Some of the entries are just disambiguating cross references: these are (or should be) marked as eType="note". Some of them are partial entries including a reference to another entry which may or may not contain the same data: these are (or should be) marked as eType="ref". This classification of entries was carried out by the addAtts script early on in the pipeline; the same script also added a magic key for each entry to facilitate matching up Lacy and Nicoll entries, but for some reason ignored entries with eType="ref". I did not notice this gaffe till later, much later, after I had spent weeks on the next stage of the pipeline (the clever bit of matching up Lacy and Nicoll entries, which involved a lot of manual intervention).

Here’s what I did to fix that blunder…

  1. `saxon -xi entries.xml addAttsAgain.xsl > oops.xml` (run a corrected version of the addAtts script, renamed addAttsAgain, to generate a file of corrected entry elements for the entries of eType="ref", now renamed eType="part")
  2. `saxon oops.xml addWhen.xsl > oops2.xml` (run the existing addWhen script to add a @when for these new entries)
  3. `saxon allEntries.xml attributePatch.xsl > temp.xml` (run the attributePatch script to produce an improved version of allEntries.xml; a sketch of the patching approach follows)
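For the record, attributePatch.xsl is essentially an identity transform with the corrected entries swapped in. Here is a minimal sketch of the approach (a reconstruction, not the script itself; in particular the assumption that entries are matched on @xml:id is mine):
~~~
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                version="2.0">
  <!-- corrected entries produced by the first two steps -->
  <xsl:variable name="patches" select="document('oops2.xml')"/>
  <!-- index them by identifier (assuming @xml:id is the match point) -->
  <xsl:key name="fixed" match="entry" use="@xml:id"/>

  <!-- identity template: copy everything through unchanged -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>

  <!-- ... except entries for which a corrected version exists -->
  <xsl:template match="entry">
    <xsl:variable name="fix" select="key('fixed', @xml:id, $patches)"/>
    <xsl:choose>
      <xsl:when test="$fix">
        <xsl:copy-of select="$fix"/>
      </xsl:when>
      <xsl:otherwise>
        <xsl:copy>
          <xsl:apply-templates select="@*|node()"/>
        </xsl:copy>
      </xsl:otherwise>
    </xsl:choose>
  </xsl:template>
</xsl:stylesheet>
~~~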

The text is now quite intelligently tagged, and there is a (non-TEI) schema to describe its markup. I need to do more on its documentation, but there is an ODD.

Dates

Round about now, I realised that @when values were missing for many titles, and that those present were mostly not in ISO format. This matters partly because I can now use my ODD-defined schema to validate the file, but mainly because it would be nice to sort entries correctly by date. So I embarked on the long tedious process of dating the entries a bit more consistently.

Nicoll represents dates in one of three different ways.

  1. Where the full date of a performance or a license is known, it is given as DD/MM/YY or (occasionally) DD/MM/YYYY. This is easy to identify and extract to the @when attribute for the entry.
  2. Where the date is only partial, it may appear in the form MM/YYYY. This is more problematic.
  3. Where the only date available is that of a publication, it will usually be in the form YYYY, possibly in brackets. I wrote a script to extract these to the @when attribute too.

There are quite a lot of OCR errors to correct (I instead of 1, u instead of 11, s instead of 5, redundant blanks or nonexistent punctuation, and so on). Many of these could be fixed with regexp search and replace. I also found cases where the end of a printed line had simply been ignored, which were more difficult to detect.

Eventually, I have plausible dates for as many as possible of the datable entries, in one or other of the three formats specified. I run another script to convert them all to a kosher ISO format, i.e. YYYY, YYYY-MM, or YYYY-MM-DD, and then validate. A surprisingly large number trip at this last hurdle, mostly because of previously unspotted OCR errors, but this does throw up five cases which can only be attributed to lax proof reading at Cambridge University Press. These five include obvious nonsense like “32/2/1822” given as the date for the Drury Lane performance of Edward P. Knight’s “The Veteran Soldier”, and more tangled cases such as “29/2/1823” given as the date for a performance at the Adelphi of Moncrieff’s “The Secret”. Sorry, Allardyce, but 1823 was not a leap year, so this cannot be true. Moreover, according to the Adelphi Calendar (https://www.umass.edu/AdelphiTheatreCalendar/auth.htm), on 28 Feb 1823 the theatre was dark for Lent… and the same source is stubbornly silent on the existence of a play of this title and authorship anywhere. So someone is mistaken.
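The conversion itself is easily sketched. Something along these lines handles the full DD/MM/YY(YY) shape (a sketch, not the actual script: it assumes all two-digit years are 18xx, and the MM/YYYY and YYYY shapes need analogous rules):
~~~
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                version="2.0">
  <!-- identity transform: copy everything else unchanged -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>

  <!-- rewrite @when values of the form DD/MM/YY or DD/MM/YYYY -->
  <xsl:template match="entry/@when">
    <xsl:analyze-string select="normalize-space(.)"
        regex="^(\d\d?)/(\d\d?)/(\d\d(\d\d)?)$">
      <xsl:matching-substring>
        <!-- two-digit years are all 18xx in this corpus -->
        <xsl:variable name="y"
            select="if (string-length(regex-group(3)) eq 2)
                    then concat('18', regex-group(3))
                    else regex-group(3)"/>
        <xsl:attribute name="when"
            select="concat($y, '-',
                    format-number(number(regex-group(2)), '00'), '-',
                    format-number(number(regex-group(1)), '00'))"/>
      </xsl:matching-substring>
      <xsl:non-matching-substring>
        <!-- leave anything unrecognised alone, for manual inspection -->
        <xsl:attribute name="when" select="."/>
      </xsl:non-matching-substring>
    </xsl:analyze-string>
  </xsl:template>
</xsl:stylesheet>
~~~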

Just to put those peccadillos into perspective: by my reckoning, there are now 24,351 distinct entries in the Nicoll database, of which 24,301 are apparently now correctly dated. Fifty are genuinely undated; five have impossible datings. A pretty good error rate.

Multiple authorship

As I may have remarked before, the entries in Nicoll’s Handlists are of quite a few different types. Some of them are just cross references, supplying the name under which a pseudonym has been indexed but not documenting any particular performance; others (quite a few) are partial entries, associating a performance or publication for one author with an entry for the same performance or publication listed under the name of the “main” author in a collaboration. Nicoll supplies the following definition: “I have adopted the principle of placing the main entry of any particular play under the name of that author whose name appeared first in the play-bill, newspaper advertisement or review from which information regarding authorship was obtained”. For my purposes however, all these additional entries simply inflate the number of performances etc. (by nearly 10%) and are unnecessary for a resource in digital form. I therefore tag them differently, and process the multi-author entries so that all the authorship information is accessible in the same place. For example, here is the “main” entry for a play with multiple authorship:

<entry when="1897-09-06" xml:id="N08816" eType="multi" n="ohsusannah_AMBIENT">
<class group="FARCE">F.C.</class>
<author>AMBIENT, MARK </author>
<title>Oh! Susannah! </title>
<perf>Eden, Brighton, 6/9/97; Roy. 5/10/97.</perf>
<lic>L.C. </lic>
<bib>French </bib>
<note type="auth">[Written in collaboration with A. ATWOOD and R. VAUN .]</note>
</entry>

The Handlist also contains two fragmentary entries, one for each of the two co-authors:

<entryFrag when="1897-09-06" xml:id="N08946" eType="part" n="ohsusannah_ATWOOD">
<author>ATWOOD, ALBAN </author>
<title>Oh! Susannah! </title>
<perf>Eden, Brighton, 6/9/97.</perf>
<note>See M. AMBIENT.</note>
</entryFrag>

<entryFrag when="1897-09-06" xml:id="N19381" eType="part" n="ohsusannah1_VAUN">
<author>VAUN, RUSSELL </author>
<title>Oh! Susannah 1 </title>
<perf>Eden, Brighton, 6/9/97.</perf>
<note>See M. AMBIENT.</note>
</entryFrag>

I wrote a script to combine all these to produce a new multi-author entry, like this:

<entry when="1897-09-06" xml:id="N08816" eType="multi" n="ohsusannah_AMBIENT">
<class group="FARCE">F.C.</class>
<author>AMBIENT, MARK </author>
<author type="also">ATWOOD, ALBAN </author>
<author type="also">VAUN, RUSSELL </author>
<title>Oh! Susannah! </title>
<perf>Eden, Brighton, 6/9/97; Roy. 5/10/97.</perf>
<lic>L.C. </lic>
<bib>French </bib>
<note type="auth">[Written in collaboration with A. ATWOOD and R. VAUN .]</note>
</entry>

Note that e.g. “R. VAUN” now appears as “VAUN, RUSSELL”. Achieving that particular coup de main involved quite a lot of XSLT juggling before I found a 99% successful solution.

In the process, I found only the following five cases where the author name referenced by Nicoll was hard to find in the Handlists.

  • “TAIT” (but I found him in the errata list for vol 4)
  • “PINCROFT” Confusingly, this exists as a pseudonym for BANERO, J.M., which is also the name of the main author. Something wrong there: Nicoll nodded.
  • “MOUNTJOY” No other sign of this pseudonym.
  • “Corri” must be Clarence Collingwood Corri, who supplied the music for George Sims’ 1899 farce In Gay Piccadilly. Dan Leno was in it.
  • “CARGILL, G.B.” I have not yet found any other sign of this co-author.

The next challenge

It is definitely time to revisit the Lacy catalogue. What’s to do with the 196 catalogue entries for which no corresponding entry has shown up in either of Nicoll’s Handlists? Are they all old stuff, first published or performed long before 1800, and therefore reasonably omitted from the Handlists? Or are there some rogue components amongst them? Time will tell. Meanwhile, here’s the current state of affairs, according to my reportCounts script.

~~~
Today there are 24351 entries in this file, of which ....
24350 are classified
22995 are plain old entries
  some of which are unclassified
1356 multi-authored entries
1750 fragmentary entries constructed from cross references
30 fragmentary entries constructed from notes
51 notes, xrefs, and comments
18784 with perf data
15896 with lic data
4158 with bib data
1169 have been matched out of 1300 Lacy references
1327 catalogue entries linked to a Nicoll entry out of 1511

391 entries have a bib referencing Lacy but no @matches
493 have a female author
~~~

Multiple authorship

As noted previously, Nicoll’s Handlists are organized by author name, which makes them manageable, but can also be seriously misleading. In particular, where a play is to be credited to more than one author, Nicoll’s practice is to repeat the information about the play in a second, slightly degenerate, entry, thus inflating the number of entries in the Handlist. Here for example is the “main” entry for a play co-authored by A’Beckett and Lemon:

<entry type="Bsq.">
<author>A'BECKETT, GILBERT ABBOTT</author>
<title>The Knight and the Sprite </title>
<note type="perf">Strand, M. 11/11/1844</note>
<note>L.C. 9/11/1844.</note>
<note type="auth">[Written in collaboration with M. LEMON]</note>
</entry>

And here is what I have unkindly termed the “degenerate” entry for same:

<entry type="Bsq.">
<author>LEMON, MARK</author>
<title>The Knight and the Sprite </title>
<note type="perf">Strand, M. 11/11/1844</note>
<note>See G. A. A BECKETT.</note>
</entry>

I assume Nicoll’s rationale for this redundancy is to make it easier to find everything written by a given author when flipping through the pages of a printed volume. But this makes much less sense in a digital resource. What we would rather see (I think) is an entry which makes explicit its multiple authorship: like this

<entry type="multi">
<class>Bsq</class>
<author>A'BECKETT, GILBERT ABBOTT</author>
<author type="also">LEMON, MARK</author>
<title>The Knight and the Sprite </title>
<note type="perf">Strand, M. 11/11/1844</note>
<note>L.C. 9/11/1844.</note>
<note type="auth">[Written in collaboration with M. LEMON]</note>
</entry>

(Note that to get there I have had to rethink the way I encode Nicoll’s genre tags, initially by moving them to an element of their own rather than using the @type attribute of the <entry> element. And note also that I am preserving those arguably redundant <note type="auth"> elements so I can tell if something goes wrong.)

I have spent the last week or two slowly making this possible. Slowly because I am slow, but also because it is not entirely straightforward to translate the string “M. LEMON” (as given in the note in the main entry for A’Beckett) into “LEMON, MARK”, which is the handle used on other main entries for the distinguished editor of Punch. (The same would apply, of course, if I decided to use the note within the degenerate entry to effect the join: I would then have to map “G.A. A. BECKETT” to “A’BECKETT, GILBERT ABBOTT.” ) And these are easy cases: Nicoll’s canonical format for names can get quite complicated. Consider, for example, “YORKE, ELIZABETH, Countess of HARDWICKE” or “ADDISON, Captain (later Lieutenant-Colonel) HENRY ROBERT” … Anyway, I made the job easier for myself by extracting from the entries a lookup table mapping name components (as given by notes within main entries) to canonical full names: like this

<author f="49">
<s>LEMON</s>
<w>MARK</w>
<str>LEMON, MARK</str>
</author>
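Given that table, turning “M. LEMON” into “LEMON, MARK” reduces to a lookup on surname plus first initial. A minimal sketch of the idea (the function and file names here are illustrative, not those of the actual script):
~~~
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:xs="http://www.w3.org/2001/XMLSchema"
                xmlns:my="urn:nicoll" exclude-result-prefixes="#all"
                version="2.0">
  <!-- the lookup table extracted from the main entries -->
  <xsl:variable name="names" select="document('authorNames.xml')"/>

  <!-- map e.g. "M. LEMON" to "LEMON, MARK" -->
  <xsl:function name="my:canonical" as="xs:string">
    <xsl:param name="short" as="xs:string"/>
    <!-- the surname is the last space-delimited token -->
    <xsl:variable name="surname"
        select="tokenize(normalize-space($short), ' ')[last()]"/>
    <xsl:variable name="initial"
        select="substring(normalize-space($short), 1, 1)"/>
    <xsl:variable name="hit"
        select="$names//author[s = $surname and starts-with(w, $initial)]"/>
    <!-- fall back to the string as given when there is no unique match -->
    <xsl:sequence
        select="if (count($hit) eq 1) then string($hit/str) else $short"/>
  </xsl:function>
</xsl:stylesheet>
~~~
The complications mentioned above (titles of nobility, military ranks, pseudonyms in quotation marks) are exactly the cases where this simple-minded surname-plus-initial match breaks down.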

This all worked quite satisfactorily for the 1800-1850 entries, for which there are only 58 additional name entries to handle, though getting to the point of being reasonably confident in that number took much longer than you might think, involving as it did quite a lot of OCR error correction.

However, things got much more challenging when I looked into the 1850-1900 entries. Firstly, there are many more entries to deal with: 1299 cases of “collaboration”. Secondly, some cases (34 to be exact) use an abbreviated form like this:

<entry type="P.">
<author>BYAM, MARTIN </author>
<title>The Babes in the Wood </title>
<note type="perf">R.A. Woolwich, 14/12/57.</note> L.C.
<note type="auth">[Written in collaboration with F. GRAHAM and W. T. VINCENT.]</note>
</entry>

This main entry will need to get two additional author elements, one for “F. GRAHAM” and one for “W.T. VINCENT”, not just one – which means revising my simple-minded XSLT script yet again. And it will also have to handle notes like this without too much fuss:

<note type="auth">[Written in collaboration with A. R. SMITH, F. TALFOURD and W. P. HALE.]</note>

The script does a good job of alerting me to cases where Allardyce has apparently nodded, and named as a collaborator someone who does not appear anywhere in the rest of the Handlist. This happens precisely once in the 1800-1850 volume, but seemingly many times more in the later volume. However, on examination, many of these discrepancies are a consequence of my cavalier editing praxis. Things like the kinds of quotation marks used to flag up pseudonyms, or whether or not surnames can contain spaces, return to bite me. Others are caused by OCR failures – occasionally lines seem to have just dropped out.

And, further to keep me on my toes, I have now discovered that there are three cases in which Nicoll gives up entirely on this painstaking method of documenting multiple authorship. The first concerns 18 titles to be attributed to the pseudonymous “Richard Henry”: these all appear once only under “HENRY, RICHARD”, like this

<entry type="Bsq.">
<author>“HENRY, RICHARD" [RICHARD BUTLER and H. CHANCE NEWTON] </author>
<title>Lancelot the Lovely; or, The Idol of the King </title>
<note type="perf">(Aven. 22/4/89).</note> L.C.
<note type="music">[Music by J. Crook.]</note>
</entry>

None of these 18 titles is listed, however, under NEWTON, nor indeed under BUTLER. A further, and apparently disjoint, batch of titles is listed under “NEWTON, H. CHANCE (‘RICHARD HENRY’)”. I think I am going to pretend I haven’t noticed them. Likewise this one:

<entry type="D.Sk.">
<author>GORDON-CLIFFORD, E. and H. </author>
<title>A Black Dove </title>
<note type="perf">P’s. H. Kew, 12/9/94.</note>
</entry>

Hand Lists – The Return

Three weeks ago, I wrote an interim report on the work I was doing to make Allardyce Nicoll’s Handlists more machine tractable. I didn’t actually spend all of the previous month correcting OCR errors, writing bits of XSLT to manipulate the OCRd text, figuring out what had gone wrong with my matching algorithm etc. It just feels that way.

Anyway, here’s a result:

This camembert shows how all the 25,000+ entries in the two Handlists are classified. The categories used (Drama, Farce, Panto, etc.) are ones I made up by grouping together the much finer-grained but trickier text types Nicoll provides (of which there are more than a hundred values) into the 15 basic classes you see above. More of that another day.

The size of each wedge is, as you might expect, proportionate to the number of entries so classified, and (reading anti-clockwise) they are in descending order. As I noted last month, the top six categories together account for three-quarters of the data.

I also said last month that my next mission would be to see how these proportions change over time. And indeed they do. Like this:

Each column here represents a decade for which the Handlists provide data, from the 1810s on the left to the 1890s on the right. Each column summarizes theatrical events recorded for that decade, using the same 15 crude classifications as the camembert. The size of each coloured blob is proportionate to the percentage of events in that decade classified in that way. For example, in the 1860s column, the pale blue blob is much bigger than any of the others, because nearly half (48.6% to be exact) of the available theatrical events that decade are classified as “Drama”. In the same decade, the pale green blob above it is smaller because “Farce” accounted for a smaller proportion (15%). I haven’t included the numbers in the graphic to make it easier to read, but they are available.

Note that all the blobs are stacked on top of each other in alphabetical order, so you can detect changes over time for a given category by reading from left to right. For example, a blue blob for “Panto” appears near the top (row 4) in each decade, demonstrating that this particular form of theatre formed part of each decade’s offerings, getting perhaps a little more popular as the century wears on, but never disappearing. Contrast that with “Melodrama” (the purple blobs in row seven) or “Burletta” (the dark yellow blobs near the bottom), both of which are flourishing in the decades before mid-century, and almost entirely eclipsed thereafter.

Now, I am certainly not claiming to have discovered that melodrama and burletta were both seriously unfashionable from round about 1850 onwards, despite their earlier mode-ishness. But it is always satisfying (and reassuring) to find “common knowledge” backed up by actual observed data.

Handlists made handier

I have been down a deep deep rabbit hole for the last week or two trying to get my XML-tagged versions of Allardyce Nicoll’s two Handlists into shape. Here is an interim report.

What is an entry?

One problem has to do with the actual content of Nicoll’s Handlists. What exactly do they list? Although this is essentially a record of performances, it is organized very much by author. There are, for example, about 30 entries which don’t refer to any specific play, but are merely there to indicate the preferred form of an author’s name: like this one

<entry>“LAWRENCE, SLINGSBY.” See G. H. LEWES</entry>

Multiple authorship is also a problem. An entry like this one is straightforward enough:

<entry type="D."><author>ANDERSON, JAMES R. </author><title>The Robbers </title><note type="perf" >D.L. 21/4/51</note>. L.C. D.L. 26/12/45.</entry>

Inter alia, this tells us that there was a performance of a drama called “The Robbers” at Drury Lane theatre on 21 April 1851, and that the author of the piece is recorded to be James R. Anderson.

But there are also entries like this one:

<entry type="F."><author>ATWELL, E. </author><title>A Stuffed Dog </title><note type="perf">Park. H. Camden Town, 2/11/89</note>. See J. A. KNOX.</entry>

This one tells us that “A Stuffed Dog”, written by E. Atwell, was making them roar at the Park Theatre in Camden Town in November 1889. If we look for Mr J. A. Knox, we find another entry, apparently for the same performance:

<entry type="F."><author>KNOX, J. ARMORY </author><title>A Stuffed Dog </title><note type="perf" >Park H. Camden Town, 2/11/89, copy.</note>. L.C. [Written in collaboration with E. ATWELL .]</entry>

On the face of it, if I want to determine how many Farces are listed in the Handlists, for example to determine the waxing or waning of this particular type of performance over time, I need to be wary of cases like this one, where a single farce has multiple authors, and therefore gives rise to multiple entries: both these entries refer to a single performance, so should only be counted once.

How serious a problem is this? Out of 25 thousand-plus entries (25,632 to be exact), I find that there are 1346 entries containing the word “collaboration” and 1738 containing the word “See ”. Most, but not all, of them point to a collaboration entry which references the same performance of the same play. There are only two entries in which the word “See ” really appears as part of a title, but there are maybe a dozen or more other types of cross references, for example to plays renamed or whose authorship Nicoll has resolved. The number of cross references which go nowhere, or to an entry which documents a different title or performance, is unknown, but not zero. I spent some time trying to check automatically but did not finish: other bits of the rabbit hole (like checking and fixing OCR errors in the dates) seemed more useful.

Although ostensibly organized by author, a good two-fifths of the entries in the Handlists record performances for which there is no author (10,374 out of 25,662 entries, to be precise). Usually, but not always, distinct entries are given for the same title performed on different occasions or at different venues – but occasionally an entry will provide a list of performances: like this

<entry type="C.O."><author>MANCHESTER, G. </author><title> The School Girl </title><note type="perf">Grand, Cardiff, 2/9/95; Stand. 14/10/95</note>. L.C. [Music by A. Maurice.]</entry>

I have not checked, but I suspect that this happens when a play has its first performance in the provinces: in this example, we may conjecture that “The School Girl” went down well enough in Cardiff for the management to risk bringing it to the Standard in Shoreditch a month later.

What sort of play is this?

Nicoll thoughtfully provides lists of the abbreviated codes he uses to indicate “the nature of the play itself” for both the Handlist 1800-1850 and the Handlist 1850-1900. There are a few codes present in only one or other of the two lists (B.O. for Ballad Opera is in the earlier list only, for example). An investigation of his usage of these codes over the two hand lists indicates the justice of the warning he also provides that “these designations are in no way final, and are often indefinite”. The two lists propose a total of 87 different codes including some very general categories (D for Drama, F for Farce, P for Pantomime etc.) as well as many more nuanced classifications such as “Military Drama”, “Operatic Drama”, “Poetic Drama”, “Romantic Comedy”, “Romantic Comedy Drama”, and “Romantic Drama”. Nicoll further remarks “Where possible, the designation employed in the original bills has here been followed” – so we should take these characterisations as indicative of the language in which Victorian Theatre chose to describe itself, not as a formally organized taxonomy.

I did some counting up of the actual usage of these codes, and found a further 23 codes not specified in either list. However, most of these are used very infrequently: fewer than 10 times for all except two of them. The exceptions are the em dash, which is used for 15 entries that describe non-performance items such as published collections of plays, and the code “Bsq. O.”, which is used for 22 entries, all of them presumably “burlesque operas”.
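(Counting code usage like this is a one-liner with XSLT 2.0 grouping; a sketch, assuming the codes are still carried on entry/@type at this stage:)
~~~
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                version="2.0">
  <xsl:output method="text"/>
  <!-- count how often each category code is used, commonest first -->
  <xsl:template match="/">
    <xsl:for-each-group select="//entry" group-by="normalize-space(@type)">
      <xsl:sort select="count(current-group())" order="descending"/>
      <xsl:value-of select="concat(current-grouping-key(), ' (',
                            count(current-group()), ')&#10;')"/>
    </xsl:for-each-group>
  </xsl:template>
</xsl:stylesheet>
~~~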

For what it’s worth, here’s a colourful camembert to show the distributional statistics of these categorisations. Of the 110 different codes used, more than half (64) are used fewer than 10 times. Or, to put it another way, the top six codes between them account for 17,242 out of the total number of 25,632 entries – over 67%; the top eight codes (labelled in the picture) account for more than three-quarters of the whole population.

My next project will be to see if these proportions change significantly over time.

Categorising Lacy

 Collecting the data

Allardyce Nicoll’s monumental “History of English Drama 1660-1900” (Cambridge University Press, 1955) has two volumes containing “hand lists” of theatrical titles produced in the first and second halves of the 19th century respectively. The list in Volume 4, covering 1800-1850, can be downloaded from CUP’s site in PDF format if you have an appropriate institutional login; it even has a DOI: https://doi.org/10.1017/CBO9780511897764.010.

I downloaded and worked on the PDF version (handlist18001850.pdf) with the following workflow:

  • Generate DOCX version using ABBYY (thanks, HumaNum) (handlist18001850_hnOCR.docx)
  • Process this with `docxtoTEI` (thanks TEI) (handlist18001850_hnOCR.xml); some hand edits too to get rid of unnecessary clutter such as forme work and simplify the subsequent processing
  • Process this with `addMilestones.xsl` to delimit entries and author sections
  • Process output with `writeList.xsl` to produce structured list of all titles (entries1800-1850.xml)
  • Process with `addKey.xsl` to select entries mentioning Lacy and add a magic key derived from the title (LacyEntries1.xml) – see the sketch after this list
  • Process this with `checkEntryList.xsl` to determine whether there is a match for this magic key in the current Lacy Catalog file, and add an attribute @matched indicating the result (LacyEntriesChecked1.xml)
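The “magic key” of the penultimate step is just the title squashed into something forgiving of OCR noise and punctuation, with a surname appended – compare @n values like “ohsusannah_AMBIENT” quoted elsewhere on this blog. A sketch (the function name is illustrative):
~~~
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:xs="http://www.w3.org/2001/XMLSchema"
                xmlns:my="urn:lacy" exclude-result-prefixes="#all"
                version="2.0">
  <!-- e.g. my:magicKey('Oh! Susannah!', 'AMBIENT') -> 'ohsusannah_AMBIENT' -->
  <xsl:function name="my:magicKey" as="xs:string">
    <xsl:param name="title" as="xs:string"/>
    <xsl:param name="surname" as="xs:string"/>
    <!-- lower-case the title and throw away everything but letters and digits -->
    <xsl:sequence select="concat(
        replace(lower-case(normalize-space($title)), '[^a-z0-9]', ''),
        '_', $surname)"/>
  </xsl:function>
</xsl:stylesheet>
~~~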

The text of Volume 5, covering titles from 1850 to 1900, is not available in digital form from CUP for some unknown reason. I did however discover in the Internet Archive a less than perfect scan of it, provided by the Digital Library of India (https://archive.org/details/in.ernet.dli.2015.40678). I used a grubby perl script (`tagPlayList.pl`) to process the fairly unreliable OCR plain text version of this to produce output compatible with that produced by `addMilestones.xsl`, and fed this into the toolchain listed above to produce LacyEntries2.xml and LacyEntriesChecked2.xml.

Results

– entries1800-1850.xml has 8495 titles of which 456 (5.4%) mention Lacy; of these, 307 could be matched, and 149 not.

– entries1850-1900.xml has 17202 titles of which 754 (4.4%) mention Lacy; of these, 454 could be matched and 300 not.

Failing a bit better

I next tried a different way of checking for matches, viz. the ever reliable unix utility `comm` applied to a pair of text files, one containing all the values for title/@n in the LacyCatalog and the other containing all the values for entry/@n in the tweaked Nicoll entry lists. First time round, unsurprisingly this gave different results: 537 matches in all, with 652 values unique to the Nicoll-derived list, and 961 unique to the Lacy catalogue.

I noticed however that both files had some duplicate entries, which were causing confusion in the matching process, though unsurprisingly, the most frequent cause of disagreement seemed to be OCR error or inconsistent formatting.

I manually added suffixes to handle the discrepancies I noticed in @n values. The author’s surname was used to distinguish identically titled but different works; the suffix “-bis” for actual repetitions.

Removing the duplicate entries and some of the more egregious OCR errors gave more plausible values: 760 matches in all, with 429 values unique to the Nicoll-derived list, and 738 unique to the Lacy catalogue. This was further improved by checking the 23 titles in Nicoll’s list which indicated that the Lacy publication used a different title. The output from comm after this modification gave me 772 matched titles, with 410 unique to Nicoll, and 726 unique to Lacy.
~~~
saxon ~/Public/Lacy/newcatalog.xml listKeys.xsl > lacyKeys.txt
saxon lacyEntries.xml listEntryKeys.xsl > nicollKeys.txt
comm --total lacyKeys.txt nicollKeys.txt
~~~
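The two key-listing stylesheets are trivial – one key per line – though it is worth remembering that comm expects sorted input. A sketch of listKeys.xsl (assuming the catalogue keys live in title/@n, and leaving locale-dependent sort quirks to one side):
~~~
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                version="2.0">
  <xsl:output method="text"/>
  <!-- emit one key per line, sorted, ready for comm -->
  <xsl:template match="/">
    <xsl:for-each select="//title/@n">
      <xsl:sort select="."/>
      <xsl:value-of select="concat(., '&#10;')"/>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>
~~~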


This exercise also revealed that there are a couple of genuine double entries in the Lacy Catalogue. Henry Byron’s “Bluebeard” extravaganza appears both in volume 19.3 (L0273) and in volume 49.9 (L0729); likewise Selby’s “Witch of Windermere” appears both in volume 84.4 (L1248) and in volume 3.6 (L0036).

After more tweaking, I decided to declare victory with 753 matched entries, and 436 not matching. Nicoll’s catalog includes entries for Lacy titles from volumes after 100, which I do not include in this project, so I expected some matches to fail.

Categorizing the titles

One of the reasons for being interested in Nicoll’s Handlists is that they assign each title a code such as ‘D’ for drama. How useful are these codes as a way of categorizing the contents of LAE?

Nicoll’s category codes are at once very delicate and very vague. Clearly, they are derived from the way the piece describes itself on its title page (if it says it’s an “extravaganza” then that’s what it is), but at the same time these often ornate descriptions have clearly been rationalised to make up a smaller number of unique category labels than simply using the words of the title would. But only to a degree: “burletta” and “burlesque” are distinguished, with roughly equal numbers of each, but there are plenty of titles which describe themselves as “a burlesque burletta”. In an attempt to simplify these descriptions further, I decided to map Nicoll’s codes to just four main classes: Comedy, Drama, Musical, and Spectacle — even though items frequently cross these very broad categories: should “comic drama” go under “comedy” or “drama”, for example? “Musical” is particularly problematic since many comedies include songs, as does almost anything classed as a “Spectacle”: my intention was to limit it to pieces clearly operatic, or at least operetta-ic.

Undeterred, I tried my own experiment in classification, based on the words appearing on the titlepages. I used the following regexp to collect categorizing strings :

([Cc]omedy|[Cc]omic|[Ff]arce|[Tt]ragedy|[Dd]rama|[Bb]urle|[Pp]antomime|[Cc]omedietta|[Ee]xtravagan|[Oo]per[ae]|[Vv]audeville|[Pp]lay|[Ss]ketch|[Ii]nterlude|[Mm]usic)

and concatenated all the matching strings for each title. Prefixed with the number of acts in the play, this became the value for a @type attribute on each catalogue entry. For example, a one act play whose subtitle contained the phrase “A Burlesque Burletta” would be given the category label “1_BurleBurle”. A script `listTypes.xsl` counted up unique category labels (disregarding the number of acts) producing a list that begins:
~~~
Burle (40)
BurleBurle (3)
BurleBurleOpera (1)
BurleDrama (7)
BurleExtravagan (21)
BurleExtravaganComicPantomime (1)
~~~

(Note that the counts are for the whole string of categorizers: the count for “Burle” does not include the count for “BurleBurle”)
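The label-building step itself is a straightforward application of analyze-string; here is a minimal sketch of the idea (my reconstruction, with the act-count prefix left to the caller):
~~~
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:xs="http://www.w3.org/2001/XMLSchema"
                xmlns:my="urn:lacy" exclude-result-prefixes="#all"
                version="2.0">
  <!-- concatenate every categorizing substring found in a subtitle -->
  <xsl:function name="my:label" as="xs:string">
    <xsl:param name="subtitle" as="xs:string"/>
    <xsl:variable name="hits" as="xs:string*">
      <xsl:analyze-string select="$subtitle"
          regex="[Cc]omedy|[Cc]omic|[Ff]arce|[Tt]ragedy|[Dd]rama|[Bb]urle|[Pp]antomime|[Cc]omedietta|[Ee]xtravagan|[Oo]per[ae]|[Vv]audeville|[Pp]lay|[Ss]ketch|[Ii]nterlude|[Mm]usic">
        <xsl:matching-substring>
          <!-- normalise case so "burlesque" and "Burlesque" collapse -->
          <xsl:sequence select="concat(upper-case(substring(., 1, 1)),
                                       lower-case(substring(., 2)))"/>
        </xsl:matching-substring>
      </xsl:analyze-string>
    </xsl:variable>
    <xsl:sequence select="string-join($hits, '')"/>
  </xsl:function>
</xsl:stylesheet>
~~~
So a subtitle containing “A Burlesque Burletta” yields “Burle” twice, hence the label “BurleBurle” above.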

In many cases, it’s easy to map these category labels to the four basic categories identified above: all of the above would count as “Spectacle” — except perhaps “BurleDrama” and “BurleBurleOpera” (the latter however proves to be a mistake in segmentation: the “opera” part of the characterization concerns the source, not the play itself.)

There are 57 titles which lack any of these categorizing substrings, many of them preferring humorous variants on traditional title page discourse, such as “An Original Irish Stew” or “A new and original, aerial, floreal and conchological fairy tale (Of which the most striking feature is borrowed from the Countess D’Aulnois)“. These I (regretfully) left to one side for the moment.

My first (rough) simplification produced the following figures for all 1500 titles in Lacy’s Acting Edition:

Count   Percent   Category
765     51        COMEDY
426     28.4      DRAMA
55      3.6       MUSICAL
197     13.1      SPECTACLE
Categorizations by subtitle, for all 1500 titles

The relative proportions of these gross categories seem to be much the same as those derived from the more finely-grained Nicoll analysis of half the number of titles:

Count   Percent   Category
370     50.6      COMEDY
155     21.2      DRAMA
76      10.3      MUSICAL
130     17        SPECTACLE
Categorizations by Nicoll, for Lacy titles only

The story looks rather different however when we count up the categories for all 15,021 categorized entries in the two Nicoll handlists, not just those for Lacy titles:

Count   Percent   Category
5081    33.82     COMEDY
5513    36.7      DRAMA
2032    13.52     MUSICAL
2395    15.94     SPECTACLE
Categorizations by Nicoll, all titles

It has to be said that these counts are all fairly unreliable — aside from encoding problems caused by flakey OCR and my post-processing, Nicoll’s records count performances of the same title as different items where a title is of unknown authorship: this has the effect (probably) of inflating the counts for some categories such as Pantomimes. And of course my crude four part classification really needs much more thought. But putting all that to one side, it does seem that in selecting titles for his Acting Edition, Lacy preferred comedy over drama.

How old is that play?

Nearly every play in the Lacy catalogue – 1468 of them to be exact – now has a date of first performance, either explicitly given in the front matter of the text, or (for about 100 other cases) diligently extracted by me from Nicoll’s “Handlist”. These dates supply a terminus ad quem for the play’s composition: it cannot have been written after its first performance. Similarly, although the individual volumes are not dated, we may reasonably assume that the volume itself cannot have been printed before the latest “first performance” date it contains. This is not an entirely satisfactory procedure if we want to track changes over time, since the number of volumes allocated to a particular year varies over the 38-year period, but it is the best I can do.
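Since the @when values are ISO-formatted strings, the latest first-performance date in a volume falls out of a plain string max(); a sketch (the volume/title structure assumed here is illustrative, not the actual catalogue markup):
~~~
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                xmlns:xs="http://www.w3.org/2001/XMLSchema"
                xmlns:my="urn:lacy" exclude-result-prefixes="#all"
                version="2.0">
  <!-- earliest plausible printing date for a volume: the latest
       first-performance date among the titles it contains -->
  <xsl:function name="my:volumeDate" as="xs:string?">
    <xsl:param name="vol" as="element()"/>
    <!-- ISO date strings (YYYY, YYYY-MM, YYYY-MM-DD) sort
         lexicographically, so string max() is good enough for a terminus -->
    <xsl:sequence select="max($vol//title/@when/string())"/>
  </xsl:function>
</xsl:stylesheet>
~~~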

Nevertheless, I thought it might be interesting to plot for each volume how many of the plays it contains are recent, not so recent, or positively antediluvian. One hypothesis might be that the proportion of recently composed material declines over time, whether because less of it is available for Lacy to reprint, or because the bourgeois drawing room for which the later volumes are primarily intended prefers its drama antiquated. Another might be that the proportion of old warhorses in each volume is pretty much consistent over the whole life time of the Acting Edition.

Here’s my first attempt at visualising the data. It shows that there are a few volumes round about the start and end of the 1860s when the quantity of older material seems to shoot up, but that for the most part each volume contains a majority of material less than 10 years old. It also, however, suggests that the amount of new material in the 1870s starts to decline.

How balanced a sample is the VPP?

The full catalogue of Lacy’s Acting Edition comprises some 1500 titles, produced by just over 320 different authors. Over a third (583 to be exact) of all titles are produced by a small group of a dozen or so recidivists, each of them accounting for more than 25 titles. These include some predictable exceptions like “Anon” (65 titles), but also some extraordinarily prolific writers like John Maddison Morton (82 titles or 5% of the LAE), J.R. Planché (69 titles), and Henry James Byron (51 titles). In the second rank of creativity, there are 20 authors each of whom is responsible for producing between 10 and 25 titles, and who collectively account for 346, about a fifth of the whole. These include such familiar names as William Shakespeare (24 titles), just ahead of the less famous Thomas Egerton Wilks (23 titles) and some distance from George Colman (12 titles). At the other end of the scale, only a tenth of titles (171) are the product of an author otherwise unrepresented.

One of my first questions when looking at the Victorian Plays Project catalogue was the extent to which it might be considered a representative sample of the whole LAE. That of course depends on the basis on which you are sampling: as a first exercise, I consider here authorship. The VPP sample contains 343 titles, which are the product of 130 authors, only 8 of whom produce more than 10 titles, and nearly half of whom (74) produce only one title. This seems like a markedly different frequency distribution. Moreover, the ranking of authors within a “top twenty” list for the two corpora shows some surprising differences. Some authors who appear high in the upper half of the LAE list, e.g. Williams and Selby, trail near the bottom of the VPP list. It is unsurprising to find that titles low down the VPP list are also low down the LAE list; what does surprise me is the disparity in ranking for the comparatively frequent authors. Tom Taylor, the highest ranking VPP author of all, is only the 10th most frequent author in LAE; and John Palgrave Simpson, who ranks 12th in LAE, only just scrapes into the 25th row of VPP. Some of these oddities may be attributed to editorial decisions by the VPP: for example to exclude entirely titles by one William Shakespeare, even though these are ranked 14th in LAE.

Anyway, here are the Lacy Acting Edition Top Twenty authors, ranked by the number of titles attributed to them.

LAE rank  VPP rank  Titles (LAE/VPP)  Author  SDA  Dates  Age group
1 3 82/15 Morton, John Maddison * 1811-1891 A1
2 4 69/14 Planché, J.R. * 1796-1880 A1
3 6 65/13 [Anon.]    
4 5 51/14 Byron, Henry James * 1835-1884 A2
5 7 41/12 Suter, William E.   1811-1882 A1
6 2 40/17 Brough, William * 1826-1870 A2
7 16 38/5 Williams, Thomas J. * 1824-1874 A2
8 19 37/5 Selby, Charles * 1802-1863 A1
9 11 36/7 Burnand, Francis C. * 1836-1917 A2
10= 1 34/20 Taylor, Tom * 1817-1880 A2
10= 8 34/12 Coyne, Joseph Stirling * 1803-1868 A1
12= 25 28/4 Simpson, John Palgrave * 1807-1887 A1
12= 20 28/5 Oxenford, John * 1812-1877 A1
14 0 24/0 Shakespeare, William   1564-1616 A1
15 24 23/4 Wilks, Thomas Egerton   1812-1854 A1
16 14 19/6 Stirling, Edward   1809-1894 A1
17= 0 18/1 Wooler, John Pratt   1824-1868 A2
17= 18 18/6 Talfourd, Francis * 1828-1862 A2
17= 21 18/5 Jerrold, Douglas * 1803-1857 A1
17= 11 18/7 Halliday, Andrew * 1830-1877 A2
LAE Top 20 Authors

 

There is of course much more one might wish to say about these authors. It is unsurprising to find that they are all males, and equally that they are mostly members of the Dramatic Authors Society, the agency which had been founded to ensure their copyrights were observed, and which also required payment of a fee for provincial representation. Their dates, with four exceptions, are taken from Wikipedia, where much else is to be found. (The exceptions yet to be immortalized on Wikipedia are William Suter, Thomas J. Williams, Thomas Egerton Wilks, and John Pratt Wooler: their dates are taken from the Hathi Trust catalogue record). Just for fun, I decided to categorize them into two age groups on the following basis:

A1: born before the Battle of Waterloo (1815)

A2: born after Waterloo but before the Great Reform Act (1832)

Unexpectedly there are equal numbers in each group.

In the interests of full disclosure, I should add that the list of plays so far converted to TEI format demonstrates a tiny and even more divergent sampling of these authors. The most frequent author so far converted is J Maddison Morton with 6 titles, which corresponds well with the LAE ranking, but the next three in that ranking are all so far missing entirely. In fact, of the authors in the LAE top twenty, the following are all so far missing: Planche, Byron, Suter, Williams, Selby, Shakespeare, Wilks, Stirling, Wooler, Talfourd, and Halliday. Only five authors are so far represented by more than one title (Morton, Coyne, Courtney, Oxenford, and W.S. Gilbert).

By way of comparison, I also took a look at the author counts for the 45 or so LAE titles selected for inclusion in the Chadwyck-Healey “English Drama” collections. Only 10 authors appear here more than once, all of them represented by no more than 2 titles, except Simpson, who clocks in with three. Only four of these authors also appear in the LAE Top Twenty (the inescapable John Maddison Morton, J.R. Planche, John Palgrave Simpson, and Thomas Egerton Wilks). Clearly these titles were selected on some other grounds than their frequency in the LAE.

Hunting for Lacy traces in the digital world


Lacy’s Acting Edition was published in a series of 100 volumes, each containing up to 15 plays, between 1850 and 1874. (All dates approximate and unreliable). In addition to the collected volumes, Lacy sold individual play titles in cheap (6d) paper copies, many of which also found their way into private collections and public libraries. Consequently, copies of various components of the Lacy Acting Editions are now scattered across many research libraries. In some cases, they also exist in digital form, usually as scanned page images.

It is relatively easy to recover details of a library’s holdings from an online catalogue, for example by searching for the string “Lacy’s Acting Edition” or by specifying “Thomas Hailes Lacy” as publisher. It is less easy to restrict the search to generally available digital versions, as there is still no reliable joint catalogue of digitized texts in major public collections, combining the digital holdings of, say, the British Library, the Bodleian, and other UK libraries, in the same way as has been done for many US libraries by the Hathi Trust, or more generally by the Internet Archive. (A project at the National Library of Scotland did set up such a site, under the name opentexts.world, a few years back, but its status is currently unclear and it appears to be unsupported.)

The ease with which the results of such searches can be obtained in a machine-tractable form (rather than simply displayed on a web page) is also quite variable. One is usually forced to fall back on web-scraping techniques and quite a lot of manual post-editing. This note documents my fairly uneven progress towards a definitive collection of links to existing and freely available digital copies of the plays constituting the Acting Edition on various sites. The fairly good news is that, as of today, of the 1498 titles making up the 100 volume Acting Edition, I have identified 586 which are freely available in some digital form somewhere. Track progress by looking at my online catalogue.

Hathi Trust

A search for the string “Lacy’s Acting Edition” anywhere in the catalogue record at https://catalog.hathitrust.org/ produces 294 hits, of which 246 are available in “full view” (i.e. should be downloadable without formality). A search for the string “Thomas Hailes Lacy” as publisher somewhat counter-intuitively produces only 94 hits. The web page displaying results looks like this:

  1. Results from a HT search. Setting page length to the maximum allowed (100) makes it feasible in this case to download all pages with minimal scrolling.

As usual, the easiest way to screen scrape is to save the HTML page as a file, use tidy to make it into well-formed XML, and then write XSLT to extract the useful information. In this case, the generated XML uses an undefined prefix “xlink:”, which I had to remove by hand, but apart from that everything needful was done by the XSLT stylesheet htScraper.xsl, resulting in a document (htListFull.xml) containing entries like this:

<bibl>
 <title>The first night; a comic drama in one
   act.</title>
 <pubDate>1800</pubDate>
 <author>Lacy, Thomas Hailes, 1809-73.</author>
</bibl>
<bibl>
 <title>After the party; a comedy in one act.</title>
 <pubDate>1870</pubDate>
 <author>Lacy,
   Thomas Hailes, 1809-1873.</author>
 <ref target="https://hdl.handle.net/2027/hvd.32044072039373">HT</ref>
</bibl>

No <ref> element is generated for entries which are not accessible in “full view” mode. Also note that the handle quoted above is for the HathiTrust index page; to download the whole text as a single PDF file you must visit that page, and wait while the PDF is constructed. Oh, and yes, you must also be logged in at a HathiTrust member institution. So much for “full view” access.

Open Texts

I blogged about this now sadly un-maintained site back in October 2020. The site was dark for a while, but seems to be back for the moment: this morning I visited and was able to download a list of 106 hits in CSV, XML, or JSON in one click, which was nice.

This is what I like to see at the foot of my first page of results

Individual results looking like this:

<doc>
 <str name="organisation">Bodleian Libraries</str>
 <str name="idLocal">016930688</str>
 <str name="title">King of the Merrows, or, The prince and the piper : a fairy extravaganza / written by F.C. Burnand, from an original plot constructed by J. Palgrave Simpson.</str>
 <str name="urlMain">http://purl.ox.ac.uk/uuid/42d1488e99284fef9421268c33e4730d</str>
 <int name="year">1862</int>
 <arr name="date">
  <str>1862</str>
 </arr>
 <arr name="publisher">
  <str>Thomas Hailes Lacy</str>
 </arr>
 <arr name="creator">
  <str>Burnand, F. C.</str>
 </arr>
 <arr name="description">
  <str>First performed at the Royal Olympic Theatre, 26th December, 1861.</str>
 </arr>
 <arr name="placeOfPublication">
  <str>London</str>
 </arr>
 <str name="catLink">http://solo.bodleian.ox.ac.uk/permalink/f/89vilt/oxfaleph016930688</str>
 <str name="language">English</str>
</doc>

are easily converted (e.g. by my stylesheet opentexts-conv.xsl) to produce

<bibl>
 <title>King of the Merrows, or, The prince and the piper : a fairy extravaganza / written by F.C.
   Burnand, from an original plot constructed by J. Palgrave Simpson.</title>
 <pubDate>1862</pubDate>
 <author>Burnand, F. C.</author>
 <note>First performed at the Royal Olympic Theatre, 26th December, 1861.</note>
 <ref target="http://purl.ox.ac.uk/uuid/42d1488e99284fef9421268c33e4730d"/>
</bibl>

which is easily merged into the main Lacy catalogue.
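The heart of that stylesheet is a single template keyed on the @name attributes visible above; a minimal sketch:
~~~
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                version="2.0">
  <!-- one bibl per result record -->
  <xsl:template match="doc">
    <bibl>
      <title><xsl:value-of select="str[@name='title']"/></title>
      <pubDate><xsl:value-of select="int[@name='year']"/></pubDate>
      <author><xsl:value-of select="arr[@name='creator']/str"/></author>
      <!-- not every record carries a description -->
      <xsl:for-each select="arr[@name='description']/str">
        <note><xsl:value-of select="."/></note>
      </xsl:for-each>
      <ref target="{str[@name='urlMain']}"/>
    </bibl>
  </xsl:template>
</xsl:stylesheet>
~~~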

Moreover, in this case (hoorah for the Bodleian), a visit to the publicly available URL actually downloads the whole of the PDF file without further ado.

Sadly, PURLs are available for only three of the items in the Open Texts list of 106; the vast majority (90) being handles from HathiTrust, and the rest (13) links to archive.org. Moreover, the data has not apparently been updated since October 2020, which is presumably why it does not have anything like the 316 handles I found in the Hathi Trust catalogue for myself. In fact, every one of the handles it supplies exists also in the htListFull.xml list.

Google Books

A cauchemar. Google has digitized (almost certainly) all of the Lacy Acting Edition volumes, but it seems to be entirely arbitrary which ones you can access via Google Books. I have tried various approaches to searching (there is something called a `bibliogroup` for Lacy), and then reprocessing the resulting (very obscure) HTML, but cannot say I have succeeded in cracking this code. The file gbSearch.xml contains the screen-scraped-and-converted-to-XML output from a query for this; the stylesheet gbSearch.xsl filters out from this the 37 useful links it provides to files you can actually download from Google Books (but you still have to go through a captcha check, of course).

Searching specifically for “Lacy Acting Edition” on Google Books will provide an exciting list of entries for each of the first 93 volumes in the LAE — but only two of them (volumes 77 and 93) actually have anything you can download. (I belatedly discovered that this annoying behaviour can be modified by selecting “Full View” from the drop down menu at top left of the query screen, which hides the titles you cannot have). On the other hand, there are also a few occasions where the text actually digitized for a specific title is the whole of the volume in which that title appears. Thus, searching Google Books for The Half Caste will provide you with a link for the whole of volume 97, in which that title appears. Likewise a search for In Three Volumes actually gives you a link to the whole of Volume 91. Anyway, once you have a reliable link to Google’s equivalent of the Internet Archive’s “details” page (at the moment, it looks like https://www.google.co.uk/books/edition/Oberon_An_opera_in_four_acts_in_prose_an/IoFaWP1TQgkC) you can pass that to Google, and get back a nice “New” Google Books page in the middle of which is a nice “Download PDF” button. Which works — once you have completed the annoying captcha test of course.

All very well if you have the time to spend cutting and pasting links: but why couldn’t Google have provided a simple download in a form I can script? I assume it’s for the same reason they want to control access to these resources — to stop unscrupulous entrepreneurs in the “Print On Demand” industry from making a swift buck. And we all know how effective that policy is, don’t we?

Bodley

Real librarians do it with Z39.50. But my results (bodleyTexts.xml) show only 9 titles available in digital form.

The Hall Collection

Every now and then, serendipitous searching pays off. The Hall Collection contains approximately 600 English plays mostly from the late 18th and early 19th centuries, originally used as prompt books by a professional actress called Clara St. Casse. The Collection was donated to the University of Warwick Library by a Mrs G. F. Hall of Leamington Spa, together with a collection of other printed plays. Naturally it includes quite a few (102 to be exact) Lacy titles. Although the Warwick site (https://wdc.contentdm.oclc.org/digital/collection/hall) seems to provide only downloads and browsing of individual pages, someone, presumably from the Library, has also had the good sense and generosity to deposit the whole collection at archive.org, from which I was able to obtain an XML file (hallColl.xml) which can be readily processed to produce links to the 102 Lacy published titles: see hallCollTitles.xml

Internet Archive

This archive has an excellent search interface and will also deliver results in any tractable form you like, including json or xml. It cannot however perform magic to overcome variant cataloguing practices amongst the collections it has incorporated. So, for example, a search for “Lacy Acting Edition” throws up precisely one hit (“a copy graciously made available by Fordham University”). A more general search for “Thomas Hailes Lacy” gets me 125 hits, 102 of which come from the Hall Collection. A search (thomas hailes lacy) AND -collection:(hallcollection) finds me the 23 titles not included in the Hall Collection. On the other hand, a search for “T.H. Lacy AND -collection:(hallcollection)” finds 66 titles, not included in the Hall Collection, but not included in the foregoing either.

On the bright side, the hits can be downloaded in a format which is more or less identical to that generated by the XML option quoted for the Open Texts server above, so mungeing the results lists together is a Simple Matter Of Programming, resulting in iaList.xml.

Reviving the VPP: a start

The Victorian theatre has not been documented or digitized as systematically as has the Victorian novel, reflecting perhaps scholarly perception of their comparative artistic significance. Yet it is a truism that the influence of the Victorian popular theatre on the development of the novel during this period was by no means limited to the efforts of dedicated amateur enthusiasts such as Dickens and Collins and their circle. In Emily Allen’s words, “Victorian theatre was the novel’s ally, inspiration, and competitor”. As an ongoing expression of popular culture, nineteenth century theatre has deep roots and many branches; its lineage runs from the high gothic of romantic melodrama to the memes of cinema and modern day television, embracing both the theatre of sensational spectacle and that of domestic realism. Yet for those wishing to see the phenomenon as a whole, to perform a kind of distant reading of its texts, there is nothing approximating to Bassett’s At the Circulating Library database of Victorian fiction (http://www.victorianresearch.org/atcl/search.php) in terms of completeness or coverage. Such attempts to document the Victorian theatre as do exist have generally done so in terms of the careers of individual actors, writers, or institutions. Although collections of the primary source materials exist in a few libraries, this is a consequence of individual collections or bequests, rather than any attempt at systematic coverage.

One notable exception is Richard Pearson’s Victorian Plays Project (VPP), originally funded by the AHRC 2005-2007, and still hosted at the National University of Ireland in Galway. A key deliverable of this project was an online catalogue of the approximately 1500 titles making up Lacy’s Acting Edition of Plays, derived from the (apparently unique) surviving copies of that edition preserved in what was then the Birmingham Central Library.

Thomas Hailes Lacy began publishing contemporary plays at his Covent Garden printing house shortly after the Theatre Regulation Act of 1843, which removed the duopoly previously enjoyed by the Covent Garden and Drury Lane theatres. In a far-sighted move, Lacy acquired the rights to print plays from the theatrical managers, ostensibly to protect their copyrights, though he was not averse to a little piracy himself. These “Acting Editions” contained everything needful to produce a play: details of costumes, settings, blocking, accompanying business etc., as well as cast lists and the text of the play itself. New titles appeared every year until the 1870s, when Lacy sold the whole collection to Samuel French, an American publisher with whom he had exchanged plays for publication for the previous two decades.

According to the existing VPP website (http://victorian.nuigalway.ie/modx/index.php?id=187), in addition to producing this on-line catalogue, the project aimed to “generate e-texts in .pdf format that replicate the original texts re-edited for electronic usage” and also to “create a database of plays marked up using TEI encoding in XML that will be searchable”. The website also states that “Transcription of the Lacy’s Catalogue, and editing and encoding of the texts was undertaken by the Victorian Plays Project using OxyGen TEI mark-up software and Acrobat Professional. ” (http://victorian.nuigalway.ie/modx/index.php?id=182).

As of today, the website does provide a list of all 1428 titles in the Acting Edition, including basic data about their authorship and performance history. It also makes available a set of 239 titles which have been transcribed and reformatted as PDF files preserving much of the typography of the originals. Other formats, if they exist, are not visible on the website, though a small number of titles have clearly been annotated and indexed at some point in the past with separate lists of named entities and striking phrases. (Some further information on this and a closely related sister project concerned with the records of the Lord Chamberlain’s Office is provided by Radcliffe, C. & Mattacks, K., (2009) “From Analogues to Digital: New Resources in Nineteenth-Century Theatre”, 19: Interdisciplinary Studies in the Long Nineteenth Century 8. doi: https://doi.org/10.16995/ntn.499 )

However, the VPP website does not seem to have been developed since 2015, and the untimely death of Professor Richard Pearson at the end of 2018 (https://bavs.ac.uk/uncategorized/obituary-richard-pearson/) casts its future development into serious doubt. As is all too often the case, preservation of a digital archive turns out to depend as much on individual personal support as on technological constraints.

I have therefore applied for funding to carry out an initial scoping study investigating the feasibility of reviving and bringing up to date the Victorian Plays Project. If accepted (and there’s no reason to suppose it will be) this would naturally begin by reviewing any additional digital materials which have been archived, and by interviewing personnel associated with the original project at Galway. The inventory resulting from this review would be extended with a survey of other digital versions of the Lacy Acting Edition now available online (for example, in transcribed form at Project Gutenberg and elsewhere and in digital facsimile via the Hathi Trust or the Internet Archive). Contacts at Galway and elsewhere (for example in the library and special collections community, and in the professional Victorian studies networks) would be approached for information about existing related endeavours, and to raise awareness of the project.

If sufficient suitable materials can be found, the next step will be to design, document, and implement procedures to convert them all to a single simple TEI encoding, consistent with (for example) that used by the DraCor project, or the ELTeC. Following these de facto community standards has many advantages, such as the ability to re-use existing software tools, or the ability to leverage existing community familiarity with the format. The resulting digital archive would be initially maintained as an open repository on GitHub, with all converted materials made available under a CC-BY licence.

It is probable that automatic conversion to this (or any other) target format will be much easier for texts already transcribed than for texts only available in digital image format. In a second phase of the project it is planned to explore and report on the applicability of “machine learning” techniques to enhance the performance of existing OCR platforms. By comparison with novels and other print material from this period, the Acting Edition texts are unusual in the complexity and variety of their typography. This complexity, derived from the need to clearly distinguish speaking parts, stage directions etc., is however regular and systematic and should thus be potentially beneficial in the task of automatic markup.

The availability of a consistently organized and encoded corpus of Victorian play texts will make possible the application of emerging distant reading methods and tools to a component of Victorian cultural history which has been curiously neglected, if not undervalued, hitherto.

In the meantime, I have been tracking down other existing online resources for the description of the 19th-century theatre. But that, as they say, is matter for another blog posting.

EEBO TCP in P5 – the return

It seemed a necessary act of piety to respond positively to a request for help from my former colleagues at the Oxford Text Archive, when they finally got around to considering the conversion of the latest (and one must fear the last) tranche of EEBO texts from the Text Creation Partnership. The conversion into a TEI P5-compatible version of the vast majority of EEBO-TCP phases 1 and 2 texts, and their subsequent upload to a gazillion GitHub repositories, was accomplished by a team headed by Sebastian, back in the days when TEI Simple Print was new and we were all a bit more bushy-tailed and bright-eyed. Now that the OTA has received its last tranche of TCP phase 2 texts, it should not (surely) be too much of a sweat to crank them through the same conversion process and deposit the results in GitHub too. Though of course nothing is ever quite that simple.

The XSLT script which does the heavy lifting is called tcp2tei and (thank you Sebastian) here it is, safe and sound in the TEI Stylesheets repository. And it still works. There is even a shell script for creating a new GitHub repo and uploading each file to it, from the same masterly hand; this one nearly works: GitHub has grown a little fussier about authentication mechanisms in the last five years, but that's not hard to fix. So I should just declare victory and move on.

On closer inspection, however, three issues have surfaced (so far).

Firstly, the catalogue numbers. In the current TCP P5 texts, each TEI header has a string of <idno> elements supplying its identifier in the Michigan DLPS database, the identifiers of one or more MARC records from OCLC or UMI catalogues, its Proquest number, identifiers in one or more standard bibliographies (ESTC, Wing, STC, Evans, etc.), and the number of the image set which was scanned. For some reason I do not understand, not all the new texts supplied to the OTA have their full complement of these identifiers. For example, of the 6498 titles supplied, only 3062 have OCLC MARC record identifiers (discounting an additional 187 duplicated OCLC records in which the record identifier is redundantly prefixed by “ocn”). None of the 6498 has an image set number. Only 2987 have a Proquest identifier, and it is always the same as the MARC catalogue number. And 963 have no bibliographic identifier of any sort.
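By way of illustration, a typical cluster looks something like this; the values here are invented, but the @type names follow those in the existing TCP P5 headers as I remember them:

<idno type="DLPS">B12345</idno>
<idno type="STC">Wing B9999</idno>
<idno type="EEBO-CITATION">99887766</idno>
<idno type="PROQUEST">99887766</idno>
<idno type="VID">123456</idno>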

No matter: after my skirmishes with EEBO metadata last summer (reported at https://foxglove.hypotheses.org/date/2020/08), I am confident of being able to recover missing catalogue numbers from at least two different sources: one being Paul Shaffner (whom God preserve)’s eebodat1.xml, and the other being Proquest (whom God has recently abandoned)’s title list. The stylesheet I am working on to do mundane things like change the availability statement in the header is duly expanded to supply the missing <idno>s. I decided to add the new Proquest numbers (the so-called GOID) even though these are not present in the existing files.

Secondly, the image links. One reason for caring about the image set numbers is that they are used as part of the address to which @facs attribute values scattered throughout the texts are mapped. Back in the day, it was possible to link directly to a page image in this way. This facility is, however, no more: so far as I can tell, Proquest (and presumably their successors) will only allow you to access individual page images through their own interface. It is possible to access the same images via the JISC Historical Dataset sites by judiciously stringing together values from those <idno>s, but I have yet to find a reliable way of doing so for individual pages. For the present, therefore, the @facs values will remain a touching reminder of how things once were. Though I did add a link to the JISC site into the new headers, along with other useful documentation.

And thirdly, the real subject of this entry: what to do about @rend. Now, I have long believed that the TCP P5 texts are not only valid TEI P5 XML but also valid against a specific TEI schema, to wit the schema named (after much argument, and in the teeth of some opposition) TEI Simple Print. I distinctly remember (or think I remember) Sebastian and Magdalena putting quite a bit of effort into enhancing that schema with lots of @rendition values to match EEBO-TCP requirements. So when I actually tried validating my nice new files against it, I was a bit puzzled to find that they didn't.

Specifically, the attribute @rend is not available in the TEI Simple schema, and has not been since at least August 2016. In its place, I should be using @rendition to point to one or more of the predefined simple:rendition values. So I spent an hour or two tabulating all the @rend values used in the new files and finding their simple: equivalents. This proved easy for most cases, but impossible for just a few (7), some of them esoteric (@rend='upsideDown', anyone?), but others (e.g. @rend="margQuotes" and its friends margSglQuotes and margDblQuotes) quite frequent and clearly necessary. I also realised (belatedly) that allowing my script to make this change was going to make my new texts inconsistent with the existing ones. For the existing TCP P5 texts are not valid against TEI Simple either, I discovered, somewhat to my embarrassment: they use @rend with all sorts of exotic values all over the place. They do use @rendition, but in one way only: some <pb/> elements have @rendition='simple:additional'. This was entirely mysterious to me for a while (until Paul remembered what it was for). In any case, I will worry about that when a new systematic revision of the whole collection is undertaken, should that day ever dawn. For the moment, I will grit my teeth, stick with @rend, and assure anyone who asks that all TCP P5 texts are valid against, err, TEI ALL. This is known as biting the bullet, I believe.
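For the record, the mechanical part of the change I decided not to make is easy enough. Here is a sketch of the abandoned stylesheet, with just two illustrative mappings (the full table had many more rows, and the simple: values shown should not be taken as gospel):

<xsl:stylesheet version="2.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- identity transform: everything unmapped, including other @rend
       values, is simply copied through unchanged -->
  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>
  <!-- two illustrative rewrites from @rend to @rendition -->
  <xsl:template match="@rend[. = 'italic']">
    <xsl:attribute name="rendition">simple:italic</xsl:attribute>
  </xsl:template>
  <xsl:template match="@rend[. = 'center']">
    <xsl:attribute name="rendition">simple:centre</xsl:attribute>
  </xsl:template>
</xsl:stylesheet>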

Update: 10 June 2021. I uploaded all 6498 new texts to new repositories in the Github textcreationpartnership collection over a period of 24 hours last week. And, at somewhat greater length, I have now updated my repository at eebo-bib to describe more precisely what I did to create a TEI-compatible TCP bibliography. Definitely time to declare victory and move on.

Seven steps to Ossian

A TEI transcription of the 1773 edition of James Macpherson’s “translations” of the works of “Ossian”

Why would anyone want such a thing? I can’t imagine, but here’s how I made this one. It turned out to be a seven step process — so far. You can check out each stage from this github repo, if you’re really curious…

1. Decide which PDF to work from

You might think that one library’s digitized copy of “the 1773 edition of Ossian” would be much the same as another’s. But no. There are variations in the physical state of the originals, and the PDF format in which the digitization is made available may also vary. I downloaded three different digitized versions from the Internet Archive, but mainly I used the PDF version of the copy preserved at the National Library of Scotland (https://ia802302.us.archive.org/33/items/poemsofossiantra11macp/poemsofossiantra11macp.pdf). I say “mainly” because that particular PDF file had a curious glitch in it which made some of the half-titles disappear when extracted as separate image files. I supplied the missing text from the PDF version of the New York Public Library’s copy.

2. Extract images from PDF

$ pdfimages [filename.pdf] [outputPrefix]

I am too lazy to install anything clever, so I use tried and tested ancient command line Unix tools, like pdfimages. Applying this to my chosen PDF file, I find that each page in the NLS PDF produces three files: two in PPM format which appear to be masks, and one in grayscale representing the page, in negative form. I extract the page image and save it in my img folder, ready for the next stage.
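For anyone wanting to replicate this, the whole of stage 2 comes down to something like the following; the use of ImageMagick’s convert to turn the negative greyscale images into positive PNGs is my assumption about how you would do it, not a transcript of what I ran:

mkdir -p tmp img
# extract everything, then keep just the greyscale page scans (.pgm),
# inverting them from negative to positive as we go
pdfimages poemsofossiantra11macp.pdf tmp/page
for f in tmp/page-*.pgm; do
  convert "$f" -negate "img/$(basename "$f" .pgm).png"
done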

3. Do OCR using medieval rules

$ tesseract [inputfile] [outputfile] -l enm

As noted above, I have a preference for old-fashioned command-line Unix tools, and tesseract, once instructed to use an appropriate language model (enm, rather than eng), actually does a pretty good job of recognizing 18th century typography. It consistently fails on ligatured “ct” and a few other oddities, but is much better than I expected at distinguishing long-s from f. Most of its errors seem to be due to poor image quality. At the end of the process, I have about eight hundred text files, each corresponding with one page of the source, and most of them containing plausible text, which I save in the txt folder.
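Concretely, the page-by-page run is a one-liner (a sketch, assuming the page images from stage 2 are PNGs in the img folder and that a txt folder exists):

for f in img/*.png; do
  tesseract "$f" "txt/$(basename "$f" .png)" -l enm
done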

4. Hand-check, page by page, introducing minimal non-xml markup

I then (and this is where the time goes) proofread each one, introducing some absolutely minimal markup, of my own invention. The cheatsheet reads as follows:

  • introduce a — line at start and end of text on page
  • introduce a == line at start and end of note-text on page
  • introduce a blank line between paras but otherwise retain linebreaks
  • introduce an extra hyphen following end-of-line hyphens which are to be retained
  • replace * or +1 sigla for notes and note references with @ and a sequence number
  • use entity references for long dash, accented letters etc
  • use the character “ for open double quotes
  • retain forme work on a single line
  • delimit smallcaps with { … }
  • delimit italicized phrases with {{ … }}
  • use % to mark the start of a dramatic style speech
  • add \ at end of verse lines
  • add $ at end of speech
  • add &end; at end of an argument or other chunk
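To make the notation concrete, here is what a (wholly invented) page might look like when marked up according to these rules; the text is pastiche, not Macpherson:

276 {FINGAL} Book IV.
—
%{Ossian}. Why bursts the sigh of {{Ossian}}?\
The king of swords is low@1;\
my soul is sad for thee, O {Fingal}!$
—
==
@1 A note on the fallen king would appear here.
==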

I made the corrections using, need you ask, emacs aided and abetted with some perl one-liners for bulk corrections. Reading Ossian is an odd way to pass time during lockdown, but no worse than some of the other sanity-preserving expedients one reads about on Twitter. A good soundtrack seems to be almost any Sibelius symphony.

5. Transform and (slightly) reorganize the textfiles into proper XML, one per text

perl streamer.prl v1files.txt

This is not the most elegant or indeed sanitary code I have ever written; it also took quite a few iterations to get it working acceptably, which I defined as generating well-formed XML. It reads in a list of filenames, interspersed with flags to tell it when to start a new output file and what its initial page number should be. Then it processes each page of transcribed text in succession, building up one string containing all the text chunks for a work, and another containing all the footnotes. Footnotes often span pages, of course. The resulting strings are then output as two separate XML <div> elements. Their contents also acquire some minimal XML tagging (<pb/>, <hi>, <p>, <sp> etc.) before they get flushed out. I gave up trying to overcome some inelegant results of a not particularly elegant process. The code is in the Scripts folder of this repo for the morbidly curious; the results are in the xml folder: at least they are well-formed XML.

6. Run XSLT scripts to convert this stuff to kosher TEI documents and validate same.

Since this version is going to be my contribution to the “Ossian Online” project, it should probably follow that project’s usage and TEI practices. Alas, they do not have an ODD to tell me what those should be, and their files are apparently validated against TEI-All. But they do have a reasonable amount of documentation, and enough files already available online for me to be able to construct an ODD automagically (take a bow, oddbyexample.xsl, a well-kept secret inside the TEI P5 Utilities repository) and thus a schema I can use to validate my TEI files when I have finished licking them into shape.
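The mechanics go something like this; I give the commands from memory, so the initial-template and parameter names, and the teitorelaxng wrapper from the Stylesheets repo, should be checked against the scripts’ own documentation:

# generate an ODD from a folder of sample TEI files already online
saxon -it:main oddbyexample.xsl corpus=ossianOnline/ > ossian.odd
# compile the ODD to RELAX NG, then validate my own files against it
teitorelaxng ossian.odd ossian.rng
jing ossian.rng xml/*.xml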

As ever, the fun part of the project is seeing how much of the remaining data-mungeing can be scripted in XSLT. Quite a lot, it transpires, though it remains necessary to hand-craft the details of titlepages, tables of contents etc. Another complete sweep through the text checking for miscellaneous things like the following is also needed:

  • words broken by a pagebreak but not properly reassembled (happens occasionally)
  • quotations not marked as such
  • verse lines not marked as such
  • code-switching
  • residual OCR errors (there are always residual OCR errors)

Before launching into that campaign, I checked the <pb/> elements introduced at stage 5 against the page numbering of the original as preserved in the paratextual comments of my transcription. Somewhat to my surprise, the page-numbering corresponded exactly with the number of such elements, enabling me to construct both a reliable reference system and reliable page image links as values for the facs attribute on each <pb/>.
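For anyone minded to repeat the check, counting the <pb/> elements per file is a one-liner; this sketch assumes xmlstarlet is available, though any XPath-aware tool would do:

for f in xml/*.xml; do
  printf '%s\t%s\n' "$f" \
    "$(xmlstarlet sel -N t='http://www.tei-c.org/ns/1.0' -t -v 'count(//t:pb)' "$f")"
done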

7. Decide on the macrostructure

The Ossian Online project uses <div> elements for every subdivision of the 1773 edition, at whatever level, all the way down from its two volumes to the arguments of individual poems. It takes the perfectly reasonable view that every text can be organized as an ordered hierarchy of uniformly nested objects. As a consequence, the @type attribute for <div> has to do quite a lot of heavy lifting. oddbyexample enumerates its values as follows:

  • advertisement
  • argument
  • book
  • contents
  • dedication
  • dissertation
  • duan
  • fragment
  • maintext
  • poem
  • preface

This list combines types that have a structural function (fragment, duan, book) with others that are purely descriptive (advertisement, argument, poem). Nothing wrong with that, but I still find this “divs all the way down” approach somewhat problematic, and that for two reasons. Firstly, a <div> is supposedly something incomplete, which is true of (for example) the argument prefixed to each poem or book, but not of the book or poem itself. Secondly, the relation between the argument and the poem requires that the two be siblings within some larger entity, but the poem is not really an incomplete part of that entity in quite the same way as the argument. Furthermore, in the 1773 edition, we have some texts which are undivided (Carricthura, for example) along with others which are divided variously into “duan”s (Cathlona) or “book”s (Fingal). Should each book of Fingal be treated as a single text? Should the whole of Fingal be treated as a single text? Can’t we have both?

Values such as maintext and poem in the list above ring alarm bells indicating that these ontological issues are being evaded. Since the TEI in its wisdom already provides a mechanism for coping with exactly this (not at all unusual) kind of macrostructure, why not use it? I refer of course to the element <group>.

My version of the 1773 edition prefers to treat each distinct work as a <text>, rather than as a <div type='maintext'>. Within it, there is a <front> containing a titlepage or half-title and the argument, followed by a <body> if the work is not further subdivided, or by a <group> if it is. A <group> combines a number of lower-level <text> elements, each again with a <front> and a <body>. I also treat each of the two volumes of the 1773 edition as a <group>. The file driver.tei embeds each file in the structure using xInclude; it is commented to explain what’s going on (a bit).
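In skeleton form (an invented fragment, contents elided, rather than actual project data), a volume therefore looks like this:

<group>                          <!-- volume I -->
  <text>                         <!-- an undivided work, e.g. Carricthura -->
    <front><!-- half-title and argument --></front>
    <body><!-- the poem --></body>
  </text>
  <text>                         <!-- a subdivided work, e.g. Fingal -->
    <front><!-- title and argument --></front>
    <group>
      <text>
        <front><!-- argument --></front>
        <body><!-- book I --></body>
      </text>
      <!-- ... further books ... -->
    </group>
  </text>
</group>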

It remains to be seen what my colleagues in Galway make of this radical re-organisation, to say nothing of my perverse desire to retain the long-s form. But at least changing to a format which matches more exactly that of the (excellent) work already done on the Ossian Online site will be a Simple Matter Of Programming.

An experiment in counting the books

A couple of years ago I spent some time trying to determine which of the titles in the wonderful “At the Circulating Library” (ATCL) database were freely available online in digital form. This was for largely pragmatic reasons to do with building the ELTeC English language collection: other blog entries describe the method I used and some preliminary results. It’s not as easy as you might suppose to download reliable catalogue information from most digital libraries, nor is it always readily tractable when you do. After some experimentation, I hit on the idea of creating a magic key, a kind of fingerprint, derived from the title and author name as specified, which could then be matched against keys in the same format derived from ATCL entries.

More recently, it occurred to me that this data might also provide some interesting numbers to contribute to current debates about digitization priorities. Exactly why some titles make it to Project Gutenberg, or the HathiTrust, or the Internet Archive while others don’t is not a question to which simple un-nuanced answers are likely or even (maybe) possible, but we should still ask it. Those responsible for the digitization efforts of major libraries are, for some reason, a little coy about the principles on which books are chosen for digitization, or even about whether they actually have explicit selection policies. I assume that there is a difficult tightrope walk between, on the one hand, practical but purely adventitious matters (the relative locations of volume and scanner, the size and state of the volume, the time of day, the temperament of the scanner operator, etc.) and, on the other, principled criteria aiming to ensure a balance of, say, titles by female and male authors, or high and low brow, date of production, longevity of readership, and so on. It would be surprising if the choices were completely unrelated to the characteristics of the population being sampled, or totally failed to reflect the cultural priorities of the scanning operation; the same uncertainties apply, of course, to the collection being sampled for digitization itself.

Anyway, I recently read an interesting article by Allen Riddell and Troy Bassett (“What library digitization leaves out”; preprint available from https://arxiv.org/abs/2009.00513) which reports that in the data they looked at – the comparatively small sample of surviving English novels published in 1836 and 1838 – shorter books and books with male authors are disproportionately more likely to be digitized. I naturally wondered whether this applies equally well across the whole of the 19th century, which is what led me to revisit my efforts of two years ago. But first, here are the results.

There are 19,912 titles in the current ATCL database. Of these, 9152 (46%) have authors identified in the database as male, 9809 (49%) are identified as female, and 951 (4%) are identified as unknown. These relative proportions are rather different if we look at titles with at least one digital surrogate, of which there are in total 9099 (45%). Of these 9099 digitized texts, we find 5221 (57%) are of male authorship, 3718 (41%) of female authorship, and 160 (2%) are unsexed.

Look at that again. Although there are actually more titles available for digitization from female authors than from male, the number that actually gets digitized is significantly smaller (if, like me, you think a gap of 16 percentage points is pretty significant). Hmmm. These counts of course derive from the whole period covered by ATCL, from 1800 to 1900, so I also calculated them for each decade, only to find that the proportions and their imbalance remain fairly consistent across the century. And this despite huge changes in the numbers: for the last decade of the century ATCL lists nearly 6000 titles, a six-fold increase on (for example) the fourth decade. What percentage of those titles were digitized? In both decades, over 51%. And what proportion of those digitized titles were male-authored? In both decades, 62%. There is some variability across the decades, but the basic picture remains the same.

One possible explanation might be that titles with unknown or unsexable authorship (e.g. the ubiquitous “Anonymous”) are more likely to have been female, and that hence we are not seeing all the truly female authors. But even were this the case (after all, why should we not equally well hypothesize that male authors might be bashful or crave secrecy?), the proportions for books ostensibly male-authored with respect to books ostensibly not male-authored (i.e. those classed as either F or U by ATCL) remain stubbornly higher than the proportions for books definitely not male-authored. And indeed, the same mutatis mutandis is true for the ostensibly-female to ostensibly-not-female ratio.

Here’s a table showing the raw counts:

Decade   All     “Male”  “Female”  “U”   A-dig  M-dig  F-dig  U-dig
All      19912   9152    9809      951   9099   5221   3718   160
1830s    482     256     174       52    250    164    85     1
1840s    1037    543     422       72    538    334    202    2
1850s    1483    595     778       110   718    347    358    13
1860s    2341    1019    1093      229   1015   540    456    19
1870s    2866    1189    1514      163   1300   642    633    25
1880s    4126    1693    2287      146   1765   945    782    38
1890s    5979    2995    2863      121   3092   1929   1103   60

And here’s another showing the percentages. (Key: M%, F%, and U% are shares of all titles in the decade; Ad% is the share of all titles digitized; Md%, Fd%, and Ud% are shares of the digitized titles.)

Decade   Ad%      M%       Md%      F%       Fd%      U%      Ud%
All      45.70%   45.96%   57.38%   49.26%   40.86%   4.78%   1.76%
1830s    51.87%   53.11%   65.60%   36.10%   34.00%   10.79%  0.40%
1840s    51.88%   52.36%   62.08%   40.69%   37.55%   6.94%   0.37%
1850s    48.42%   40.12%   48.33%   52.46%   49.86%   7.42%   1.81%
1860s    43.36%   43.53%   53.20%   46.69%   44.93%   9.78%   1.87%
1870s    45.36%   41.49%   49.38%   52.83%   48.69%   5.69%   1.92%
1880s    42.78%   41.03%   53.54%   55.43%   44.31%   3.54%   2.15%
1890s    51.71%   50.09%   62.39%   47.88%   35.67%   2.02%   1.94%

In an ideal world, you’d expect the percentages for titles with male authors (M%) and for digitized titles with male authors (Md%) to be roughly the same, right? Think on… And feel free to download the csv file behind these tables for your own experimentation.

One should always suspect the data, so I make no excuse for the following detailed blow-by-blow account of how I got these numbers. Full gruesome details, including the scripts mentioned below, are available from https://github.com/lb42/bookLists

The basic method was to download a complete catalogue of relevant titles available from each target digital library, and then try to match them with records in the ATCL. For Google Books, which does not seemingly provide a complete catalogue online, I tried a different method, discussed further below.

I started by downloading the latest (June 2020) dump of the ATCL database and converting it to a basic TEI XML format. I then did much the same for the holdings of five digital libraries with good holdings of 19th-century novels: the Hathi Trust, the British Library, the Internet Archive, Project Gutenberg, and Google Books. As a control, and for testing purposes, I also looked at a few smaller collections, notably the Victorian Women Writers Project at Indiana University and the (now defunct) University of Adelaide “ebooks” repository. I wanted to provide something similar to John Mark Ockerbloom’s lovely Online Books Page at https://onlinebooks.library.upenn.edu/ but more precisely tied in to ATCL.

Hathi Trust makes available a monthly dump of their entire collection as a huge tab-delimited file. Working with the most recent dump, dated 1 September 2020, I used a simple-minded perl script `hathiProcess.prl` to parse this file and select from it only freely-available English-language books published in Great Britain between 1800 and 1920; an XSLT stylesheet `htConv.xsl` then converted the results to the common project format (CPF).
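For the curious, the selection amounts to a single filter over the tab-delimited dump. The column positions below are my reading of the hathifiles documentation (2 = access, 17 = rights_date_used, 18 = pub_place, 19 = lang), and the MARC country code “enk” catches only England, so treat this as a sketch rather than a transcript of hathiProcess.prl:

# keep freely viewable ("allow"), English-language books published 1800-1920
awk -F'\t' '$2 == "allow" && $19 == "eng" && $18 == "enk" &&
            $17 >= 1800 && $17 <= 1920' hathi_full_20200901.txt > ht-selected.tsv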

The British Library website makes available an Excel spreadsheet providing metadata for the titles from their collection which were digitized some time ago by the Microsoft Books project. I downloaded this, converted it to TEI with `csvtotei`, and converted the result to CPF (selecting just the 19th-century titles) with `blConv.xsl`.

Project Gutenberg makes available several versions of its catalogue data. I worked with the most recently updated one, which is a vast archive of unbelievably verbose RDF files. Despite its complexity, this data doesn’t include any publication data for the source texts concerned (unsurprising really), though it does provide birth and death dates for the authors. To cut down the numbers a little, I rejected titles whose authors were not born during the 19th century, and also those which specified a MARC relator field “edt” (to cut out non-original editions). Once I had remembered how on earth to handle a gazillion tiny files of RDF (I did this back in 2018), I used the `gutConvRDF.xsl` script to process them all to CPF, and concatenated the results into a single file.

The Internet Archive, so far as I can see, doesn’t have any generally available or downloadable catalogue, though it does have a really good query interface. The method I used for attacking Google Books (see below) would presumably work equally well, or equally badly, in this case, but I haven’t tried it. Instead I just used a predefined collection called `19thcennov`, which someone at the University of Illinois at Urbana-Champaign thoughtfully created back in December 2008. This gave me 7828 XML records, which were easily converted to CPF using `iaConv.xsl`.

The common project format files all consist of TEI <bibl> elements with either an @xml:id attribute or an <idno> specifying the identifying code for this item in the relevant repository: for example, `ia:foreignersnovel03pric` identifies the Internet Archive’s digitization of volume three of Eleanor Price’s novel “The Foreigners”. Each <bibl> also has an @n attribute supplying the magic key for the title, which is confected as follows:

  • remove the full stop following Mr or Mrs in any title containing one
  • take the substring of the title up to the first occurrence of one of the punctuation marks . , : ; or /
  • concatenate this with the author’s last name
  • convert to lower case and remove all punctuation characters and spaces

So, for example, ATCL lists a work with the title “The Foreigners: A Novel” attributed to author “Eleanor C. Price”. The same work appears in the Internet Archive list, but with the author “Price, Eleanor C. (Eleanor Catherine)” and the title “The foreigners : a novel”. Despite the differing strings, both will get the same magic key “theforeigners|price”. This method is far from bullet-proof, but it’s serviceable.
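Expressed as an XSLT function, the recipe comes out as follows; this is a sketch (function and namespace names invented), not the actual code of my conversion scripts:

<xsl:stylesheet version="2.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns:my="http://example.org/ns">
  <xsl:function name="my:magicKey" as="xs:string">
    <xsl:param name="title" as="xs:string"/>
    <xsl:param name="surname" as="xs:string"/>
    <!-- drop the full stop after Mr or Mrs -->
    <xsl:variable name="t1" select="replace($title, '(Mrs?)\.', '$1')"/>
    <!-- keep only what precedes the first . , : ; or / -->
    <xsl:variable name="t2" select="replace($t1, '[.,:;/].*$', '')"/>
    <!-- lower-case, strip punctuation and spaces, append |surname -->
    <xsl:sequence select="concat(
      replace(lower-case($t2), '[\p{P}\s]+', ''), '|',
      replace(lower-case($surname), '[\p{P}\s]+', ''))"/>
  </xsl:function>
</xsl:stylesheet>

Fed either of the two title forms above, together with the surname Price, it returns theforeigners|price.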

For Google Books, as noted above, there is no readily downloadable catalogue. But there is an API, which in a moment of madness I thought it might be cool to learn how to use. A day of poking around led me to a neat python script some helpful person had written to look up ISBN numbers (hat tip to AO8’s treasury), which I mercilessly hacked to my own purposes. My version reads a file of URL-encoded search requests like “inTitle:the+inTitle:foreigners+inAuthor:Price”, fires them at the Google API, and processes the returns into a rudimentary <bibl>, or into a comment lamenting the absence or unavailability of the item in question. The file of search requests is rather long (one for each title in ATCL for which I have not yet found any digital version: a total of 11,203), so I make the program sleep for a while after firing off about 40 consecutive requests, to help the Google server catch up. Despite this considerate behaviour on my part, it did not take Mr Google long to decide that my program (or my IP address) was a threat, and to start returning unco-operative HTTP messages like 503 (“Service Unavailable”) and 429 (“Too many requests”). The API help pages confirm that Google considers “using an app, program or script to perform a large number of searches in a short time” prima facie justification for temporarily blocking the IP address in question, though it’s not clear what exactly is meant by “large” (more than 100?) or “short” (less than a minute?). Furthermore, when I search using my specially-minted API key, there seems in any case to be a hard limit of 1000 queries per day: so this job is not going to be finished very quickly. Still, I do now have an extra 1517 records to show for two days’ work.
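Stripped of error handling, the loop at the heart of my hacked script amounts to the following; the endpoint is Google’s public Books API, but the batch size of 40 and the length of the nap are just my guesses at politeness:

n=0
while read -r q; do
  # one URL-encoded search request per line of the input file
  curl -s "https://www.googleapis.com/books/v1/volumes?q=${q}&key=${GOOGLE_API_KEY}" \
    >> googleHits.json
  n=$((n+1))
  (( n % 40 == 0 )) && sleep 60   # give the Google server a chance to catch up
done < searchRequests.txt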

Once I’ve created all these lists, I run the merger.xsl script to add <ref> elements to the ATCL-TEI file I created in the first step. This makes for some redundancy, for two reasons: firstly, for most of the archives a three-volume novel is likely to get a separate entry for each volume; secondly, for many titles there exist multiple digitizations, which may (or may not) derive from the same source. The following table shows, for each archive, the number of records selected for processing, the number of references to ATCL titles found, and the number of titles affected. Note that I haven’t yet done any de-duplication to remove overlaps.

Archive             Records selected   ATCL references   ATCL titles
British Library     62015              9920              5104
Hathi Trust         460070             18891             5655
Internet Archive    7829               4691              1655
Project Gutenberg   38338              2880              2275
Google Books        ?                  1517              1517

I haven’t made available the CPF files for each archive, nor the final merged TEI version of the ATCL dump, since this is not really my data to share. But I have made available a file called atcl-links.csv, which is a spreadsheet with a row for each ATCL title digitized in one or more publicly available digital collections, mapping its ATCL identifier to its identifier in each repo. I’ll update these as and when the data improves.

Building the ELTeC (stage 0) … continued

Have at you, Project Gutenberg…

I am for sure not the first person to think it would be nice to make the Project Gutenberg metadata more easily machine-tractable. Matthew Jockers wrote a python script to hack usable metadata out of the individual texts back in 2010 (see this blog entry); Damon Cavar wrote some java to do something similar, but starting from the RDF form of the Gutenberg catalog, as part of an ambitious (but I think as yet incomplete) Project Gutenberg to TEI XML conversion project, last updated 2012. More recently, Jonathan Reeve has announced an interesting project which is hacking together various bits of Gutenberg, Gitenberg, and Wikipedia to make a Project Gutenberg database for text mining… one day.

My objectives are not so ambitious, and I like to keep things simple. I just want to know how many Gutenberg titles are listed in the Bassett database of 19th-century British fiction. (I’d also like to be able to extract a list of all British novels in English published for the first time between 1902 and 1920, but that’s a separate problem.) Having experimented with other plain-text options, I reluctantly decided to start from the Gutenberg RDF catalogue. At least that is expressed using a syntax which XSLT can handle and validate. No claims that its semantics are entirely reliable, of course.

Step 1 is to download and unpack a massive zip file from the Gutenberg site: the RDF format data we want is linked from a page in the Gutenberg wiki. It is massive because it actually contains nigh on 50,000 subdirectories, each containing a single file describing a single text. So, for example, the RDF format catalogue entry for text number 1234 is in the unpacked file cache/epub/1234/pg1234.rdf. When I looked, there was also just one directory called DELETE-55495, which contained a variant of the entry for pg55485.rdf, but I pretended I hadn’t noticed that.

Step 2 is to develop and perfect a simple XSLT script to extract the useful grains from the enormous amount of chaff in each RDF file. This script (rdftotei) is designed to meet the needs of the ELTeC, so it rejects anything which is clearly outside the desired period (author born after 1920 or before 1800), or definitely not a novel (some records use a MARC “edt” descriptor to show that they are edited compilations). If I could find a way of identifying books which are not in English, I would exclude them too. It cranks out simplified TEI <bibl> records like this:

<bibl xml:id="10037" n="abeautifulpossibility|Black">
<title>A Beautiful Possibility</title>
<author dates="1857 1936">Black, Edith Ferguson</author>
</bibl>

As you can see, this includes a magic key that I will later use for matching with other ELTeC bibliographic records, notably the Bassett database I blogged about last week.

Step 3 is to find a way of running this script against 50,000 files which does not cause my computer to melt down, and which will preferably complete in my lifetime. My first simple-minded approach was a shell script invoking saxon on each file; but this has to set up a JVM afresh each time it runs, so it takes forever. I considered glomming the individual files together into a smaller number of larger files, so that starting the JVM gets done less frequently, but this is fiddly because each of the individual files begins with an XML declaration which would have to be removed during the glomming process. A question to the oxygen users list elicited three helpful suggestions within ten minutes, the easiest and quickest of which was a feature I didn’t even know existed in saxon: specifying a directory as input and as output. So with all my RDF files in the folder RDF, and nothing in the directory RDFx, I do the following two shell commands:

saxon -s:RDF -o:RDFx rdftotei.xsl
cat RDFx/* > gutenList.xml

and the whole thing is done in a couple of minutes.

Step 4 is to repeat the process as before: pick out the magic keys and then look for overlaps between those keys and those in the Bassett database, like this:

saxon gutenList.xml getKeys.xsl > gutenKeys.txt
comm -12 <(sort gutenKeys.txt) <(sort bassetKeys.txt)

Result on the first round: 1478 Gutenberg titles are already known to Bassett. Not as many as I’d expected, but not bad. Here are the full results for all the digital collections.

Out of 13,859 titles in Bassett’s database, a total of 2937 appear in at least one of Gutenberg, the Internet Archive, Google Books, or VWWP: more than 20%, which is better than I was expecting. Here are the counts for the individual collections:

Gutenberg   Internet Archive   Google Books   VWWP
1478        1155               594            32

Also to be expected, there’s a bit of overlap: 2638 titles appear in only one digital collection, 276 in two, and 23 in three. You can probably guess which titles those are, though one of them came as a bit of a surprise. What’s so great about Mary Ward’s “Marcella”?

How to make a sow’s ear into a silk purse/Comment faire d’une buse un épervier

As the proverb says, you can’t turn a buzzard into a sparrow-hawk. But here’s how I went about producing a TEI-conformant, minimally encoded edition of the 1611 King James bible, starting from an all-singing, all-dancing, vastly over-complicated web site whose existence Martin Mueller had alerted me to last Friday in a plaintive posting to TEI-L.

The problem with web sites like this is that they expose only various HTML views of their underlying data, usually heavily infested with javascript, often deploying all sorts of nonstandard gizmos to make the pages look just so. I’ve no quarrel with their doing that; I just wish they’d make it a bit easier to get at the real data, i.e. the transcribed text and the pointers to the images that go with it. This site is a case in point: after spending some time looking at some of the gazillion HTML files a simple-minded wget -r starts hoovering up, I hypothesize that the data is actually stashed away somewhere inaccessible, and that all you see on the site is a bunch of variously configured static web pages which have been created from it. The good thing about that is that the static web pages were created by an automaton, and the process can therefore be reversed by another automaton. The bad thing is that (in this case at least) clearly different automata have been used to generate vaguely different versions of the same page. For example, the following three URLs all show subtly different versions of the same first page of the 1611 bible: https://www.kingjamesbibleonline.org/1611_Genesis-Chapter-1/, https://www.kingjamesbibleonline.org/Genesis-Chapter-1_Original-1611-KJV/, and https://www.kingjamesbibleonline.org/Genesis_1_1611/

These are not simple redirects: each URL delivers a file in a different format. Well, I could have asked whoever runs this site what was going on, but that would have spoiled the fun, so I chose the format I liked best (the last of the three above) and wrote a script to generate requests for just those pages. I had to assume that the URLs would follow a consistent naming format, an assumption encouraged by a table of the names of the books of the bible which I found in one of the chunks of embedded javascript, and which I moved into my little perl script. Running this script produced a bash script which grabbed each page using curl and saved it as an HTML file on my machine.
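The generated script is simply a long list of curl invocations, one per chapter; these two lines are hypothetical examples following the URL pattern above:

curl -s 'https://www.kingjamesbibleonline.org/Genesis_1_1611/' -o webScraped/Genesis_1.html
curl -s 'https://www.kingjamesbibleonline.org/Exodus_3_1611/' -o webScraped/Exodus_3.html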
What went wrong with this process? Surprisingly little: I didn’t find out till Sunday lunchtime that not only had I completely overlooked the Apocryphal books, but also that the file naming conventions used for these were not quite the same as for the canonical chapters (“Judith” for example is actually spelled “Iudeth”), but for the bulk of the 1300 or so chapters my guesses about the URL to use were spot on.  [This was Hubris. See my comment below]
The real challenge was disentangling the useful data from the resulting screen-scraped mess of pottage. My usual approach here is to run the HTML I get through Dave Raggett’s utterly wonderful tidy utility to turn it into kosher XML, and then write an XSLT script to pick out the bits I want. In this case, however, the pottage I got was so messy that even tidy threw up its metaphorical hands and refused to proceed, so I had to concoct a little perl script to pre-process each file and extract just the useful part, before running it through tidy and processing the resulting XML into conformant TEI by means of saxon and a little XSLT stylesheet. Like this:

for f in webScraped/*.html; do
  FNAME=$(basename "$f" .html)
  echo "${FNAME}"
  perl extract.prl "$f" | \
  tidy -q --error-file tidyerrs --doctype omit --numeric-entities yes -asxml | \
  saxon -o:chaps/${FNAME}.xml -s:- -xsl:postGrab.xsl chapId=${FNAME}
done

And what went wrong with this process? Well, quite a lot, as you might expect, but nothing that couldn’t be fixed by tweaking and rerunning the process (this is why I always put the effort into writing bash scripts for jobs like this: they can be rerun till I get it right). For example, I was deriving an XML id for each chapter from the name of the input file, but some of the files had names beginning with digits. My plan was to put each chapter into a separate XML file and then use xInclude to munge them together into a TEI document (to maximize the flexibility of said mungeing), but for this to work I had to get namespace declarations in the right place, and as anyone who has used them knows, anything involving namespace declarations never works the first time. To say nothing of unexpected inconsistencies in the data: which in fact were really very few. So far, the only thing that has caught me out was a handful of chunks which looked like biblical data in the HTML markup but were not. Fortunately these were detectable because they wound up with an invalid @n attribute in the XML output and could therefore be weeded out after the event.
While all this was going on, I wrote my driver file and did some further sniffing around on the interwebs for sources from which to complete the work. The 1611 Bible has a title page and loads of prefatory matter, not all of which is available on www.kingjamesbibleonline.org (for the record, I found the title page on Wikimedia, and the rest of the front matter at several places, but in page-image form only).
How should the Bible be modelled as a TEI document? On my first view, each verse is an <ab>, each chapter a <div>, each book a <text>, and each testament a <group>. This made sense to me, but then I realised that processing would be simpler if instead each book were regarded as a <div> of a different type; hence that is what the current versions of both driver file and ODD say. Considerations of where to put the genealogies, tables for Easter, almanacks, etc., which arguably do not belong in the front matter, may lead me to review that decision, so I have left <group> lurking in my ODD. It should be easy to produce a separate driver file representing that alternative view.
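In skeleton form, then, the current model is this (an invented fragment: the @type values are mine, and only the first verse is quoted):

<body>
 <div type="book" n="Genesis">
  <div type="chapter" n="1">
   <ab n="1">In the beginning God created the Heauen, and the Earth.</ab>
   <ab n="2"><!-- ... --></ab>
  </div>
 </div>
</body>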

A more pressing need, however, is to sort out the placement of the page-image links: in the HTML source, and therefore in my XML, links to the page images for a whole chapter are given as a sequence at the start of the transcribed text for that chapter, rather than being placed at the points in the transcript where each page begins. In some cases that point is identifiable because the transcribers have chosen to include part of the running page header in the transcription there (it winds up as a <fw> element); but in many it isn’t. Most chapters only occupy a few pages, so sorting this out would not be a major effort, just a rather tedious and not easily automatable one.
So what have we learned? Actually it’s not that hard to make a usable TEI document even from something as idiosyncratic as this web site. Which is encouraging. Let me also hasten to add that I mean no disrespect for the hard work (still less for the generosity) which must have gone into the production of that website: everyone is entitled to their own design decisions. I am pleased to express my thanks to the anonymous people who have put in that effort, and decided to share its fruits even with the godless multitude. As the 1611 translators say in their preface ‘Zeale to promote the common good, whether it be by devising any thing our selves, or revising that which hath bene laboured by others, deserveth certainly much respect and esteeme’.