Author: foxglove
Another Fine Mess…
As previously mentioned, I have been trying to mangle Allardyce Nicoll’s Handlists into a tractable database for what seems like forever. Here’s the latest and hopefully last update.
Some of the entries are just disambiguating cross references: these are (or should be) marked as eType='note'. Some of them are partial entries including a reference to another entry which may or may not contain the same data: these are (or should be) marked as eType='ref'. This classification of entries was carried out by the addAtts script early on in the pipeline; the same script also added a magic key for each entry to facilitate matching up Lacy and Nicoll entries, but ignored entries with eType='ref' for some reason. I did not notice this gaffe till later, much later, after I had spent weeks on the next stage of the pipeline (the clever bit of matching up Lacy and Nicoll entries, which involved a lot of manual intervention).
Here’s what I did to fix that blunder…
saxon -xi entries.xml addAttsAgain.xsl > oops.xml
(run a corrected version of the addAtts script, renamed addAttsAgain, to generate a file of corrected entry elements for the entries of eType='ref', now renamed eType='part')
saxon oops.xml addWhen.xsl > oops2.xml
(run the existing addWhen script to add a @when for these new entries)
saxon allEntries.xml attributePatch.xsl > temp.xml
(run the attributePatch script to produce an improved version of allEntries.xml)
The text is now quite intelligently tagged, and there is a (non-TEI) schema to describe its markup. I need to do more on its documentation, but there is an ODD.
Dates
Round about now, I realised that @when values were missing for many titles, and that those present were mostly not in ISO format. This matters partly because I can now use my ODD-defined schema to validate the file, but mainly because it would be nice to sort entries correctly by date. So I embarked on the long tedious process of dating the entries a bit more consistently.
Nicoll represents dates in one of three different ways.
- Where the full date of a performance or a license is known, it is given as DD/MM/YY or (occasionally) DD/MM/YYYY. This is easy to identify and extract to the @when attribute for the entry.
- Where the date is only partial, it may appear in the form MM/YYYY. This is more problematic.
- Where the only date available is that of a publication, it will usually be in the form YYYY, possibly in brackets. I wrote a script to extract these to the @when attribute too.
There are quite a lot of OCR errors to correct (I instead of 1, u instead of 11, s instead of 5, redundant blanks or nonexistent punctuation, and so on). Many of these could be fixed with regexp search and replace. I also found cases where the end of a printed line had simply been ignored, which were more difficult to detect.
Eventually, I have plausible dates for as many as possible of the datable entries, in one or other of the three formats specified. I run another script to convert them all to a kosher ISO format, i.e. YYYY, YYYY-MM, or YYYY-MM-DD, and then validate. A surprisingly large number trip at this last hurdle, mostly because of a previously unspotted OCR error, but this does throw up five cases which can only be attributed to lax proofreading at Cambridge University Press. These five include obvious nonsense like “32/2/1822” given as the date for the Drury Lane performance of Edward P. Knight’s “The Veteran Soldier”, and more tangled cases such as “29/2/1823” given as the date for a performance at the Adelphi of Moncrieff’s “The Secret”. Sorry, Allardyce, but 1823 was not a leap year, so this cannot be true. Moreover, according to the Adelphi Calendar (https://www.umass.edu/AdelphiTheatreCalendar/auth.htm), on 28 Feb 1823 the theatre was dark for Lent… and the same source is stubbornly silent on the existence of a play of this title and authorship anywhere. So someone is mistaken.
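For the record, the heart of that conversion can be sketched in a few lines of XSLT 2.0. This is my own reconstruction, not the actual script; in particular it assumes that two-digit years all belong to the nineteenth century, which the real script has to be more careful about:

<!-- A sketch only: normalize dates like 21/4/51 or 21/4/1851 to ISO form -->
<xsl:template match="@when" mode="iso">
 <xsl:analyze-string select="normalize-space(.)"
     regex="^(\d\d?)/(\d\d?)/(\d\d\d\d|\d\d)$">
  <xsl:matching-substring>
   <!-- assumption: two-digit years are 18xx -->
   <xsl:variable name="y" select="if (string-length(regex-group(3)) eq 2)
     then concat('18', regex-group(3)) else regex-group(3)"/>
   <xsl:attribute name="when" select="string-join(($y,
     format-number(number(regex-group(2)), '00'),
     format-number(number(regex-group(1)), '00')), '-')"/>
  </xsl:matching-substring>
  <xsl:non-matching-substring>
   <!-- leave anything unrecognised as it was, for manual inspection -->
   <xsl:attribute name="when" select="."/>
  </xsl:non-matching-substring>
 </xsl:analyze-string>
</xsl:template>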
Just to put those peccadillos into perspective: by my reckoning, there are now 24,351 distinct entries in the Nicoll database, of which 24,301 are apparently now correctly dated. Fifty are genuinely undated; five have impossible datings. A pretty good error rate.
Multiple authorship
As I may have remarked before, the entries in Nicoll’s Handlists are of quite a few different types. Some of them are just cross references, supplying the name under which a pseudonym has been indexed but not documenting any particular performance; others (quite a few) are partial entries, associating a performance or publication for one author with an entry for the same performance or publication listed under the name of the “main” author in a collaboration. Nicoll supplies the following definition: “I have adopted the principle of placing the main entry of any particular play under the name of that author whose name appeared first in the play-bill, newspaper advertisement or review from which information regarding authorship was obtained”. For my purposes however, all these additional entries simply inflate the number of performances etc. (by nearly 10%) and are unnecessary for a resource in digital form. I therefore tag them differently, and process the multi-author entries so that all the authorship information is accessible in the same place. For example, here is the “main” entry for a play with multiple authorship:
<entry when="1897-09-06" xml:id="N08816" eType="multi" n="ohsusannah_AMBIENT">
<class group="FARCE">F.C.</class>
<author>AMBIENT, MARK </author>
<title>Oh! Susannah! </title>
<perf>Eden, Brighton, 6/9/97; Roy. 5/10/97.</perf>
<lic>L.C. </lic>
<bib>French </bib>
<note type="auth">[Written in collaboration with A. ATWOOD and R. VAUN .]</note>
</entry>
The Handlist also contains two fragmentary entries, one for each of the two co-authors:
<entryFrag when="1897-09-06" xml:id="N08946" eType="part" n="ohsusannah_ATWOOD">
<author>ATWOOD, ALBAN </author>
<title>Oh! Susannah! </title>
<perf>Eden, Brighton, 6/9/97.</perf>
<note>See M. AMBIENT.</note>
</entryFrag>
<entryFrag when="1897-09-06" xml:id="N19381" eType="part" n="ohsusannah1_VAUN">
<author>VAUN, RUSSELL </author>
<title>Oh! Susannah 1 </title>
<perf>Eden, Brighton, 6/9/97.</perf>
<note>See M. AMBIENT.</note>
</entryFrag>
I wrote a script to combine all these to produce a new multi-author entry, like this:
<entry when="1897-09-06" xml:id="N08816" eType="multi" n="ohsusannah_AMBIENT">
<class group="FARCE">F.C.</class>
<author>AMBIENT, MARK </author>
<author type="also">ATWOOD, ALBAN </author>
<author type="also">VAUN, RUSSELL </author>
<title>Oh! Susannah! </title>
<perf>Eden, Brighton, 6/9/97; Roy. 5/10/97.</perf>
<lic>L.C. </lic>
<bib>French </bib>
<note type="auth">[Written in collaboration with A. ATWOOD and R. VAUN .]</note>
</entry>
Note that e.g. “R. VAUN” now appears as “VAUN, RUSSELL”. Achieving that particular coup de main involved quite a lot of XSLT juggling before I found a 99% successful solution.
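The essence of the eventual approach is pattern-matching the collaboration note and resolving each name it yields. Something like the following (a sketch only: the template is my own reconstruction, and local:canonicalName, the name-table lookup, is imagined rather than quoted from the actual code):

<!-- A sketch: expand a collaboration note into <author type="also"> elements -->
<xsl:template match="entry[note/@type='auth']">
 <xsl:copy>
  <xsl:copy-of select="@* | class | author"/>
  <!-- pick out names of the form "A. ATWOOD" or "W. T. VINCENT" -->
  <xsl:analyze-string select="note[@type='auth']" regex="([A-Z]\.\s*)+[A-Z][A-Z'-]+">
   <xsl:matching-substring>
    <author type="also">
     <!-- resolve e.g. "R. VAUN" to "VAUN, RUSSELL" via the lookup table -->
     <xsl:value-of select="local:canonicalName(.)"/>
    </author>
   </xsl:matching-substring>
  </xsl:analyze-string>
  <xsl:copy-of select="* except (class | author)"/>
 </xsl:copy>
</xsl:template>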
In the process, I found only the following five cases where the author name referenced by Nicoll was hard to find in the Handlists.
- “TAIT” (but I found him in the errata list for vol 4)
- “PINCROFT” Confusingly, this exists as a pseudonym for BANERO J.M., which is also the name of the main author. Something wrong there: Nicoll nodded.
- “MOUNTJOY” No other sign of this pseudonym.
- “Corri” must be Clarence Collingwood Corri, who supplied the music for George Sims’ 1899 farce In Gay Piccadilly. Dan Leno was in it.
- “CARGILL, G.B.” I have not yet found any other sign of this co-author.
The next challenge
It is definitely time to revisit the Lacy catalogue. What’s to do with the 196 catalogue entries for which no corresponding entry has shown up in either of Nicoll’s Handlists? Are they all old stuff, first published or performed long before 1800, and therefore reasonably omitted from the Handlists? Or are there some rogue components amongst them? Time will tell. Meanwhile, here’s the current state of affairs, according to my reportCounts script.
~~~
Today there are 24351 entries in this file, of which ....
  24350 are classified
  22995 are plain old entries
     some of which are unclassified
  1356 multi-authored entries
  1750 fragmentary entries constructed from cross references
  30 fragmentary entries constructed from notes
  51 notes, xrefs, and comments
  18784 with perf data
  15896 with lic data
  4158 with bib data
  1169 have been matched out of 1300 Lacy references
  1327 catalogue entries linked to a Nicoll entry out of 1511
  391 entries have a bib referencing Lacy but no @matches
  493 have a female author
~~~
Multiple authorship
As noted previously Nicoll’s Handlists are organized by author name, which makes them manageable, but also can be seriously misleading. In particular, where a play is to be credited to more than one author, Nicoll’s practice is to repeat the information about the play in a second slightly degenerate entry, thus inflating the number of entries in the Handlist. Here for example is the “main” entry for a play co-authored by A’Beckett and Lemon:
<entry type="Bsq.">
<author>A'BECKETT, GILBERT ABBOTT</author>
<title>The Knight and the Sprite </title>
<note type="perf">Strand, M. 11/11/1844</note>
<note>L.C. 9/11/1844.</note>
<note type="auth">[Written in collaboration with M. LEMON]</note>
</entry>
And here is what I have unkindly termed the “degenerate” entry for same:
<entry type="Bsq.">
<author>LEMON, MARK</author>
<title>The Knight and the Sprite </title>
<note type="perf">Strand, M. 11/11/1844</note>
<note>See G. A. A BECKETT.</note>
</entry>
I assume Nicoll’s rationale for this redundancy is to make it easier to find everything written by a given author when flipping through the pages of a printed volume. But this makes much less sense in a digital resource. What we would rather see (I think) is an entry which makes explicit its multiple authorship: like this:
<entry type="multi">
<class>Bsq</class>
<author>A'BECKETT, GILBERT ABBOTT</author>
<author type="also">LEMON, MARK</author>
<title>The Knight and the Sprite </title>
<note type="perf">Strand, M. 11/11/1844</note>
<note>L.C. 9/11/1844.</note>
<note type="auth">[Written in collaboration with M. LEMON]</note>
</entry>
(Note that to get there I have had to rethink the way I encode Nicoll’s genre tags, initially by moving them to an element of their own rather than using the @type attribute of the <entry> element. And note also that I am preserving those arguably redundant <note type="auth"> elements so I can tell if something goes wrong.)
I have spent the last week or two slowly making this possible. Slowly because I am slow, but also because it is not entirely straightforward to translate the string “M. LEMON” (as given in the note in the main entry for A’Beckett) into “LEMON, MARK”, which is the handle used on other main entries for the distinguished editor of Punch. (The same would apply, of course, if I decided to use the note within the degenerate entry to effect the join: I would then have to map “G. A. A BECKETT” to “A’BECKETT, GILBERT ABBOTT”.) And these are easy cases: Nicoll’s canonical format for names can get quite complicated. Consider, for example, “YORKE, ELIZABETH, Countess of HARDWICKE” or “ADDISON, Captain (later Lieutenant-Colonel) HENRY ROBERT”… Anyway, I made the job easier for myself by extracting from the entries a lookup table mapping name components (as given by notes within main entries) to canonical full names: like this:
<author f="49">
<s>LEMON</s>
<w>MARK</w>
<str>LEMON, MARK</str>
</author>
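Given such a table, the join itself reduces to a keyed lookup. Roughly like this (a sketch with invented names: I assume the table is saved as authorTable.xml, and that the local: and xs: prefixes are declared on the stylesheet; this is the sort of function the sketch earlier in this blog relied on):

<!-- A sketch: resolve "M. LEMON" to the canonical "LEMON, MARK" -->
<xsl:variable name="nameTable" select="doc('authorTable.xml')"/>
<xsl:key name="bySurname" match="author" use="s"/>

<xsl:function name="local:canonicalName" as="xs:string?">
 <xsl:param name="ref" as="xs:string"/>
 <!-- strip the leading initial(s) to isolate the surname -->
 <xsl:variable name="surname" select="replace($ref, '^([A-Z]\.\s*)+', '')"/>
 <xsl:variable name="hits" select="key('bySurname', $surname, $nameTable)"/>
 <!-- if the surname is ambiguous, prefer a candidate whose forename
      matches the first initial -->
 <xsl:sequence select="($hits[starts-with(w, substring($ref, 1, 1))], $hits)[1]/str/string()"/>
</xsl:function>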
This all worked quite satisfactorily for the 1800-1850 entries, for which there are only 58 additional name entries to handle, though getting to the point of being reasonably confident in that number took much longer than you might think, involving as it did quite a lot of OCR error correction.
However, things got much more challenging when I looked into the 1850-1900 entries. Firstly, there are many more entries to deal with: 1299 cases of “collaboration”. Secondly, some cases (34 to be exact) use an abbreviated form like this:
<entry type="P.">
<author>BYAM, MARTIN </author>
<title>The Babes in the Wood </title>
<note type="perf">R.A. Woolwich, 14/12/57.</note> L.C.
<note type="auth">[Written in collaboration with F. GRAHAM and W. T. VINCENT.]</note>
</entry>
This main entry will need to get two additional author elements, one for “F. GRAHAM” and one for “W. T. VINCENT”, not just one – which means revising my simple-minded XSLT script yet again. And it will also have to handle notes like this without too much fuss:
<note type="auth">[Written in collaboration with A. R. SMITH, F. TALFOURD and W. P. HALE.]</note>
The script does a good job of alerting me to cases where Allardyce has apparently nodded, and named as a collaborator someone who does not appear anywhere in the rest of the Handlist. This happens precisely once in the 1800-1850 volume, but seemingly many times more in the later volume. However, on examination, many of these discrepancies are a consequence of my cavalier editing praxis. Things like the kinds of quotation marks used to flag up pseudonyms, or whether or not surnames can contain spaces, return to bite me. Others are caused by OCR failures – occasionally lines seem to have just dropped out.
And, further to keep me on my toes, I have now discovered that there are three cases in which Nicoll gives up entirely on this painstaking method of documenting multiple authorship. The first concerns 18 titles to be attributed to the pseudonymous “Richard Henry”: these all appear once only under “HENRY, RICHARD”, like this:
<entry type="Bsq.">
<author>“HENRY, RICHARD” [RICHARD BUTLER and H. CHANCE NEWTON] </author>
<title>Lancelot the Lovely; or, The Idol of the King </title>
<note type="perf">(Aven. 22/4/89).</note> L.C.
<note type="music">[Music by J. Crook.]</note>
</entry>
None of these 18 titles is listed, however, under NEWTON, nor indeed under BUTLER. A further, and apparently disjoint, batch of titles is listed under “NEWTON, H. CHANCE (“RICHARD HENRY”)”. I think I am going to pretend I haven’t noticed them. Likewise this one:
<entry type="D.Sk.">
<author>GORDON-CLIFFORD, E. and H. </author>
<title>A Black Dove </title>
<note type="perf">P’s. H. Kew, 12/9/94.</note>
</entry>
Hand Lists – The Return
Three weeks ago, I wrote an interim report on the work I was doing to make Allardyce Nicoll’s Handlists more machine tractable. I didn’t actually spend all of the previous month correcting OCR errors, writing bits of XSLT to manipulate the OCRd text, figuring out what had gone wrong with my matching algorithm etc. It just feels that way.
Anyway, here’s a result:
This camembert shows how all the 25,000+ entries in the two Handlists are classified. The categories used (Drama, Farce, Panto, etc.) are ones I made up by grouping together the much finer-grained but trickier text types Nicoll provides (of which there are more than a hundred values) into the 15 basic classes you see above. More of that another day.
The size of each wedge is, as you might expect, proportionate to the number of entries so classified, and (reading anti-clockwise) they are in descending order. As I noted last month, the top six categories together account for three-quarters of the data.
I also said last month that my next mission would be to see how these proportions change over time. And indeed they do. Like this:
Each column here represents a decade for which the Handlists provide data, from the 1810s on the left to the 1890s on the right. Each summarizes theatrical events recorded for that decade, using the same 15 crude classifications as the camembert. The size of each coloured blob is proportionate to the percentage of events in that decade classified in that way. For example, in the 1860s column, the pale blue blob is much bigger than any of the others, because nearly half (48.6% to be exact) of the available theatrical events that decade are classified as “Drama”. In the same decade, the pale green blob above it is smaller because “Farce” accounted for a smaller proportion (15%). I haven’t included the numbers in the graphic, to make it easier to read, but they are available.
Note that all the blobs are stacked on top of each other in alphabetical order, so you can detect changes over time for a given category by reading from left to right. For example, a blue blob for “Panto” appears near the top (row 4) in each decade, demonstrating that this particular form of theatre formed part of each decade’s offerings, getting perhaps a little more popular as the century wears on, but never disappearing. Contrast that with “Melodrama” (the purple blobs in row seven) or “Burletta” (the dark yellow blobs near the bottom), both of which are flourishing in the decades before mid-century, and almost entirely eclipsed thereafter.
Now, I am certainly not claiming to have discovered that melodrama and burletta were both seriously unfashionable from round about 1850 onwards, despite their earlier mode-ishness. But it is always satisfying (and reassuring) to find “common knowledge” backed up by actual observed data.
Handlists made handier
I have been down a deep deep rabbit hole for the last week or two trying to get my XML-tagged versions of Allardyce Nicoll’s two Handlists into shape. Here is an interim report.
What is an entry?
One problem has to do with the actual content of Nicoll’s Handlists. What exactly do they list? Although this is essentially a record of performances, it is organized very much by author. There are, for example, about 30 entries which don’t refer to any specific play, but are merely there to indicate the preferred form of an author’s name: like this one
<entry>“LAWRENCE, SLINGSBY.” See G. H. LEWES</entry>
Multiple authorship is also a problem. An entry like this one is straightforward enough:
<entry type="D."><author>ANDERSON, JAMES R. </author><title>The Robbers </title><note type="perf" >D.L. 21/4/51</note>. L.C. D.L. 26/12/45.</entry>
Inter alia, this tells us that there was a performance of a drama called “The Robbers” at Drury Lane theatre on 21 April 1851, and that the author of the piece is recorded to be James R. Anderson.
But there are also entries like this one:
<entry type="F."><author>ATWELL, E. </author><title>A Stuffed Dog </title><note type="perf">Park. H. Camden Town, 2/11/89</note>. See J. A. KNOX.</entry>
This one tells us that “A Stuffed Dog”, written by E. Atwell, was making them roar at the Park Theatre in Camden Town in November 1889. If we look for Mr J. A. Knox, we find another entry, apparently for the same performance:
<entry type="F."><author>KNOX, J. ARMORY </author><title>A Stuffed Dog </title><note type="perf" >Park H. Camden Town, 2/11/89, copy.</note>. L.C. [Written in collaboration with E. ATWELL .]</entry>
On the face of it, if I want to determine how many Farces are listed in the Handlists, for example to determine the waxing or waning of this particular type of performance over time, I need to be wary of cases like this one, where a single farce has multiple authors, and therefore gives rise to multiple entries: both these entries refer to a single performance, so should only be counted once.
How serious a problem is this? Out of 25-thousand-plus entries (25,632 to be exact), I find that there are 1346 entries containing the word “collaboration” and 1738 containing the word “See ”. Most, but not all, of them point to a collaboration entry which references the same performance of the same play. There are only two entries in which the word “See ” really appears as part of a title, but there are maybe a dozen or more other types of cross references, for example to plays renamed or whose authorship Nicoll has resolved. The number of cross references which go nowhere, or to an entry which documents a different title or performance, is unknown, but not zero. I spent some time trying to check automatically but did not finish: other bits of the rabbit hole (like checking and fixing OCR errors in the dates) seemed more useful.
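Counts like these are XPath one-liners over the combined entry file, along the lines of:

count(//entry[contains(., 'collaboration')])   (: 1346 :)
count(//entry[contains(., 'See ')])            (: 1738 :)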
Although ostensibly organized by author, a good two-fifths of the entries in the Handlists record performances for which there is no author (10,374 out of 25,662 entries, to be precise). Usually, but not always, distinct entries are given for the same title performed on different occasions or at different venues – but occasionally an entry will provide a list of performances: like this:
<entry type="C.O."><author>MANCHESTER, G. </author><title> The School Girl </title><note type="perf">Grand, Cardiff, 2/9/95; Stand. 14/10/95</note>. L.C. [Music by A. Maurice.]</entry>
I have not checked, but I suspect that this happens when a play has its first performance in the provinces: in this example, we may conjecture that “The School Girl” went down well enough in Cardiff for the management to risk bringing it to the Standard in Shoreditch a month later.
What sort of play is this?
Nicoll thoughtfully provides lists of the abbreviated codes he uses to indicate “the nature of the play itself” for both the Handlist 1800-1850 and the Handlist 1850-1900. There are a few codes present in only one or other of the two lists (B.O. for Ballad Opera is in the earlier list only, for example). An investigation of his usage of these codes over the two hand lists indicates the justice of the warning he also provides that “these designations are in no way final, and are often indefinite”. The two lists propose a total of 87 different codes, including some very general categories (D for drama, F for Farce, P for Pantomime etc.) as well as many more nuanced classifications such as “Military Drama”, “Operatic Drama”, “Poetic Drama”, “Romantic Comedy”, “Romantic Comedy Drama”, and “Romantic Drama”. Nicoll further remarks “Where possible, the designation employed in the original bills has here been followed” – so we should take these characterisations as indicative of the language in which Victorian Theatre chose to describe itself, not as a formally organized taxonomy.
I did some counting up of the actual usage of these codes, and found a further 23 codes not specified in either list. However, most of these are used very infrequently: fewer than 10 times for all except two of them. The exceptions are the em dash, which is used for 15 entries that describe non-performance items such as published collections of plays, and the code “Bsq. O.”, which is used for 22 entries, all of them presumably “burlesque operas”.
For what it’s worth, here’s a colourful camembert to show the distributional statistics of these categorisations. Of the 110 different codes used, more than half (64) are used fewer than 10 times. Or, to put it another way, the top six codes between them account for 17,242 out of the total number of 25,632 entries – over 67%; the top eight codes (labelled in the picture) account for more than three-quarters of the whole population.
My next project will be to see if these proportions change significantly over time.
Categorising Lacy
Collecting the data
Allardyce Nicoll’s monumental “History of English Drama 1660-1900” (Cambridge University Press, 1955) has two volumes containing “hand lists” of theatrical titles produced in the first and second halves of the 19th century respectively. The list in Volume 4, covering 1800-1850, can be downloaded from CUP’s site in PDF format if you have an appropriate institutional login; it even has a DOI: https://doi.org/10.1017/CBO9780511897764.010.
I downloaded and worked on the PDF version (handlist18001850.pdf) with the following workflow:
- Generate DOCX version using ABBYY (thanks, HumaNum) (handlist18001850_hnOCR.docx)
- Process this with `docxtoTEI` (thanks, TEI) (handlist18001850_hnOCR.xml); some hand edits too, to get rid of unnecessary clutter such as forme work and to simplify the subsequent processing
- Process this with `addMilestones.xsl` to delimit entries and author sections
- Process the output with `writeList.xsl` to produce a structured list of all titles (entries1800-1850.xml)
- Process with `addKey.xsl` to select entries mentioning Lacy and add a magic key derived from the title (LacyEntries1.xml)
- Process this with `checkEntryList.xsl` to determine whether there is a match for this magic key in the current Lacy Catalog file, and add an attribute `@matched` indicating the result (LacyEntriesChecked1.xml)
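In terms of actual invocations, the chain looks roughly like this (the intermediate file name milestoned.xml is my own invention, and the exact options may have differed):

saxon handlist18001850_hnOCR.xml addMilestones.xsl > milestoned.xml
saxon milestoned.xml writeList.xsl > entries1800-1850.xml
saxon entries1800-1850.xml addKey.xsl > LacyEntries1.xml
saxon LacyEntries1.xml checkEntryList.xsl > LacyEntriesChecked1.xml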
The text of Volume 5 covering titles from 1850 to 1900, is not available in digital form from CUP for some unknown reason. I did however discover in the Internet Archive a less than perfect scan of it, provided by the Digital Library of India (https://archive.org/details/in.ernet.dli.2015.40678). I used a grubby perl script (`tagPlayList.pl`) to process the fairly unreliable OCR plain text version of this to produce output compatible with that produced by `addMilestones.xsl`,
and fed this into the toolchain listed above to produce LacyEntries2.xml and LacyEntriesChecked2.xml.
Results
– entries1800-1850.xml has 8495 titles, of which 456 (5.36%) mention Lacy; of these, 307 could be matched, and 149 not.
– entries1850-1900.xml has 17202 titles, of which 754 (4.38%) mention Lacy; of these, 454 could be matched and 300 not.
Failing a bit better
I next tried a different way of checking for matches, viz. the ever-reliable Unix utility `comm`, applied to a pair of text files: one containing all the values for title/@n in the LacyCatalog, and the other containing all the values for entry/@n in the tweaked Nicoll entry lists. First time round, unsurprisingly, this gave different results: 537 matches in all, with 652 values unique to the Nicoll-derived list, and 961 unique to the Lacy catalogue.
I noticed however that both files had some duplicate entries, which were causing confusion in the matching process, though unsurprisingly, the most frequent cause of disagreement seemed to be OCR error or inconsistent formatting.
I manually added suffixes to handle the discrepancies I noticed in @n values. The author’s surname was used to distinguish identically titled but different works; the suffix “-bis” for actual repetitions.
Removing the duplicate entries and some of the more egregious OCR errors gave more plausible values: 760 matches in all, with 429 values unique to the Nicoll-derived list, and 738 unique to the Lacy catalogue. This was further improved by checking the 23 titles in Nicoll’s list which indicated that the Lacy publication used a different title. The output from comm after this modification gave me 772 matched titles, with 410 unique to Nicoll, and 726 unique to Lacy.

saxon ~/Public/Lacy/newcatalog.xml listKeys.xsl > lacyKeys.txt
saxon lacyEntries.xml listEntryKeys.xsl > nicollKeys.txt
comm --total lacyKeys.txt nicollKeys.txt
This exercise also revealed that there are a couple of genuine double entries in the Lacy Catalogue. Henry Byron’s “Bluebeard” extravaganza appears both in volume 19.3 (L0273) and in volume 49.9 (L0729); likewise Selby’s “Witch of Windermere” appears both in volume 84.4 (L1248) and in volume 3.6 (L0036).
After more tweaking, I decided to declare victory with 753 matched entries, and 436 not matching. Nicoll’s catalog includes entries for Lacy titles from volumes after 100, which I do not include in this project, so I expected some matches to fail.
Categorizing the titles
One of the reasons for being interested in Nicoll’s Handlists is that they assign each title a code such as ‘D’ for drama. How useful are these codes as a way of categorizing the contents of the LAE (Lacy’s Acting Edition)?
Nicoll’s category codes are at once very delicate and very vague. Clearly, they are derived from the way the piece describes itself on its title page (if it says it’s an “extravaganza” then that’s what it is), but at the same time these often ornate descriptions have clearly been rationalised to make up a smaller number of unique category labels than simply using the words of the title would. But only to a degree: “burletta” and “burlesque” are distinguished, with roughly equal numbers of each, but there are plenty of titles which describe themselves as “a burlesque burletta”. In an attempt to simplify these descriptions further, I decided to map Nicoll’s codes to just four main classes: Comedy, Drama, Musical, and Spectacle — even though items frequently cross these very broad categories: should “comic drama” go under “comedy” or “drama”, for example? “Musical” is particularly problematic, since many comedies include songs, as does almost anything classed as a “Spectacle”: my intention was to limit it to pieces clearly operatic or at least operetta-ic.
Undeterred, I tried my own experiment in classification, based on the words appearing on the title pages. I used the following regexp to collect categorizing strings:
([Cc]omedy|[Cc]omic|[Ff]arce|[Tt]ragedy|[Dd]rama|[Bb]urle|[Pp]antomime|[Cc]omedietta|[Ee]xtravagan|[Oo]per[ae]|[Vv]audeville|[Pp]lay|[Ss]ketch|[Ii]nterlude|[Mm]usic)
and concatenated all the matching strings for each title. Prefixed with the number of acts in the play, this became the value for a @type attribute on each catalogue entry. For example, a one act play whose subtitle contained the phrase “A Burlesque Burletta” would be given the category label “1_BurleBurle”. A script `listTypes.xsl` counted up unique category labels (disregarding the number of acts) producing a list that begins:
~~~
Burle (40)
BurleBurle (3)
BurleBurleOpera (1)
BurleDrama (7)
BurleExtravagan (21)
BurleExtravaganComicPantomime (1)
~~~
(Note that the counts are for the whole string of categorizers: the count for “Burle” does not include the count for “BurleBurle”)
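The heart of this labelling can be sketched as an XSLT function. This is my own reconstruction rather than the actual `listTypes.xsl` code; in particular the idea of passing in the act count as a string, and the capitalisation step, are my assumptions:

<!-- A sketch: build a label like "1_BurleBurle" from a title string -->
<xsl:function name="local:typeLabel" as="xs:string">
 <xsl:param name="title" as="xs:string"/>
 <xsl:param name="acts" as="xs:string"/>
 <xsl:variable name="cats" as="xs:string*">
  <xsl:analyze-string select="$title"
      regex="([Cc]omedy|[Cc]omic|[Ff]arce|[Tt]ragedy|[Dd]rama|[Bb]urle|[Pp]antomime|[Cc]omedietta|[Ee]xtravagan|[Oo]per[ae]|[Vv]audeville|[Pp]lay|[Ss]ketch|[Ii]nterlude|[Mm]usic)">
   <xsl:matching-substring>
    <!-- normalise to an initial capital so that "burle" and "Burle" merge -->
    <xsl:sequence select="concat(upper-case(substring(., 1, 1)), substring(., 2))"/>
   </xsl:matching-substring>
  </xsl:analyze-string>
 </xsl:variable>
 <xsl:sequence select="concat($acts, '_', string-join($cats, ''))"/>
</xsl:function>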
In many cases, it’s easy to map these category labels to the four basic categories identified above: all of the above would count as “Spectacle” — except perhaps “BurleDrama” and “BurleBurleOpera” (the latter however proves to be a mistake in segmentation: the “opera” part of the characterization concerns the source, not the play itself.)
There are 57 titles which lack any of these categorizing substrings, many of them preferring humorous variants on traditional title page discourse, such as “An Original Irish Stew” or “A new and original, aerial, floreal and conchological fairy tale (Of which the most striking feature is borrowed from the Countess D’Aulnois)”. These I (regretfully) left to one side for the moment.
My first (rough) simplification produced the following figures for all 1500 titles in Lacy’s Acting Edition:
Count | Percent | Category |
---|---|---|
765 | 51 | COMEDY |
426 | 28.4 | DRAMA |
55 | 3.6 | MUSICAL |
197 | 13.1 | SPECTACLE |
The relative proportions of these gross categories seem to be much the same as those derived from the more finely-grained Nicoll analysis of half the number of titles:
Count | Percent | Category |
---|---|---|
370 | 50.6 | COMEDY |
155 | 21.2 | DRAMA |
76 | 10.3 | MUSICAL |
130 | 17 | SPECTACLE |
The story looks rather different however when we count up the categories for all 15,021 categorized entries in the two Nicoll handlists, not just those for Lacy titles:
Count | Percent | Category |
---|---|---|
5081 | 33.82 | COMEDY |
5513 | 36.7 | DRAMA |
2032 | 13.52 | MUSICAL |
2395 | 15.94 | SPECTACLE |
It has to be said that these counts are all fairly unreliable — aside from encoding problems caused by flakey OCR and my post-processing, Nicoll’s records count performances of the same title as different items where a title is of unknown authorship: this has the effect (probably) of inflating the counts for some categories such as Pantomimes. And of course my crude four part classification really needs much more thought. But putting all that to one side, it does seem that in selecting titles for his Acting Edition, Lacy preferred comedy over drama.
How old is that play?
Nearly every play in the Lacy catalogue – 1468 of them to be exact – now has a date of first performance, either explicitly given in the front matter of the text, or (for about 100 other cases) diligently extracted by me from Nicoll’s “Handlist”. These dates supply a terminus ad quem for the play’s composition: it cannot have been written after its first performance. Similarly, although the individual volumes are not dated, we may reasonably assume that the volume itself cannot have been printed before the latest “first performance” date it contains. This is not an entirely satisfactory procedure if we want to track changes over time, since the number of volumes allocated to a particular year varies over the 38-year period, but it is the best I can do.
Nevertheless, I thought it might be interesting to plot for each volume how many of the plays it contains are recent, not so recent, or positively antediluvian. One hypothesis might be that the proportion of recently composed material declines over time, whether because less of it is available for Lacy to reprint, or because the bourgeois drawing room for which the later volumes are primarily intended prefers its drama antiquated. Another might be that the proportion of old warhorses in each volume is pretty much consistent over the whole lifetime of the Acting Edition.
Here’s my first attempt at visualising the data. It shows that there are a few volumes round about the start and end of the 1860s when the quantity of older material seems to shoot up, but that for the most part each volume contains a majority of material less than 10 years old. It also, however, suggests that the amount of new material in the 1870s starts to decline.
How balanced a sample is the VPP?
The full catalogue of Lacy’s Acting Edition comprises some 1500 titles, produced by just over 320 different authors. Over a third (583 to be exact) of all titles are produced by a small group of a dozen or so recidivists, each of them accounting for more than 25 titles. These include some predictable exceptions like “Anon” (65 titles), but also some extraordinarily prolific writers like John Maddison Morton (82 titles or 5% of the LAE), J.R. Planché (69 titles), and Henry James Byron (51 titles). In the second rank of creativity, there are 20 authors each of whom is responsible for producing between 10 and 25 titles, and who collectively account for 346, about a fifth of the whole. These include such familiar names as William Shakespeare (24 titles), just ahead of the less famous Thomas Egerton Wilks (23 titles) and some distance from George Colman (12 titles). At the other end of the scale, only a tenth of titles (171) are the product of an author otherwise unrepresented.
One of my first questions when looking at the Victorian Plays Project catalogue was the extent to which it might be considered a representative sample of the whole LAE. That of course depends on the basis on which you are sampling: as a first exercise, I consider here authorship. The VPP sample contains 343 titles, which are the product of 130 authors, only 8 of whom produce more than 10 titles, and nearly half of whom (74) produce only one title. This seems like a markedly different frequency distribution. Moreover, the ranking of authors within a “top twenty” list for the two corpora shows some surprising differences. Some authors who appear high in the LAE list, e.g. Williams and Selby, trail near the bottom of the VPP list. It is unsurprising to find that titles low down the VPP list are also low down the LAE list; what does surprise me is the disparity in ranking for the comparatively frequent authors. Tom Taylor, the highest-ranking VPP author of all, is only the 10th most frequent author in LAE; and John Palgrave Simpson, who ranks 12th in LAE, only just scrapes into the 25th row of VPP. Some of these oddities may be attributed to editorial decisions by the VPP: for example, to exclude entirely titles by one William Shakespeare, even though these are ranked 14th in LAE.
Anyway, here are the Lacy Acting Edition Top Twenty authors, ranked by the number of titles attributed to them.
LAE rank | VPP rank | Titles (LAE/VPP) | Author | SDA | dates |
---|---|---|---|---|---|
1 | 3 | 82/15 | Morton, John Maddison | * | 1811-1891 A1 |
2 | 4 | 69/14 | Planché, J.R. | * | 1796-1880 A1 |
3 | 6 | 65/13 | [Anon.] | | |
4 | 5 | 51/14 | Byron, Henry James | * | 1835-1884 A2 |
5 | 7 | 41/12 | Suter, William E. | | 1811-1882 A1 |
6 | 2 | 40/17 | Brough, William | * | 1826-1870 A2 |
7 | 16 | 38/5 | Williams, Thomas J. | * | 1824-1874 A2 |
8 | 19 | 37/5 | Selby, Charles | * | 1802-1863 A1 |
9 | 11 | 36/7 | Burnand, Francis C. | * | 1836-1917 A2 |
10= | 1 | 34/20 | Taylor, Tom | * | 1817-1880 A2 |
10= | 8 | 34/12 | Coyne, Joseph Stirling | * | 1803-1868 A1 |
12= | 25 | 28/4 | Simpson, John Palgrave | * | 1807-1887 A1 |
12= | 20 | 28/5 | Oxenford, John | * | 1812-1877 A1 |
14 | 0 | 24/0 | Shakespeare, William | | 1564-1616 A1 |
15 | 24 | 23/4 | Wilks, Thomas Egerton | | 1812-1854 A1 |
16 | 14 | 19/6 | Stirling, Edward | | 1809-1894 A1 |
17= | 0 | 18/1 | Wooler, John Pratt | | 1824-1868 A2 |
17= | 18 | 18/6 | Talfourd, Francis | * | 1828-1862 A2 |
17= | 21 | 18/5 | Jerrold, Douglas | * | 1803-1857 A1 |
17= | 11 | 18/7 | Halliday, Andrew | * | 1830-1877 A2 |
There is of course much more one might wish to say about these authors. It is unsurprising to find that they are all males, and equally that they are mostly members of the Dramatic Authors Society, the agency which had been founded to ensure their copyrights were observed, and which also required payment of a fee for provincial representation. Their dates, with four exceptions, are taken from Wikipedia, where much else is to be found. (The exceptions yet to be immortalized on Wikipedia are William Suter, Thomas J. Williams, Thomas Egerton Wilks, and John Pratt Wooler: their dates are taken from the Hathi Trust catalogue record.) Just for fun, I decided to categorize them into two age groups on the following basis:
A1: born before the Battle of Waterloo (1815)
A2: born after Waterloo but before the Great Reform Act (1832)
Unexpectedly there are equal numbers in each group.
In the interests of full disclosure, I should add that the list of plays so far converted to TEI format shows a tiny and even more divergent sampling of these authors. The most frequent author so far converted is J. Maddison Morton with 6 titles, which corresponds well with the LAE ranking, but the next three in that ranking are all so far missing entirely. In fact, of the authors in the LAE top twenty, the following are all so far missing: Planché, Byron, Suter, Williams, Selby, Shakespeare, Wilks, Stirling, Wooler, Talfourd, and Halliday. Only five authors are so far represented by more than one title (Morton, Coyne, Courtney, Oxenford, and W.S. Gilbert).
By way of comparison, I also took a look at the author counts for the 45 or so LAE titles selected for inclusion in the Chadwyck-Healey “English Drama” collections. Only 10 authors appear here more than once, all of them represented by no more than 2 titles, except Simpson, who clocks in with three. Only four of these authors also appear in the LAE Top Twenty (the inescapable John Maddison Morton, J.R. Planché, John Palgrave Simpson, and Thomas Egerton Wilks). Clearly these titles were selected on some other grounds than their frequency in the LAE.
Nobody talks like that: a stylometric exercise
Back in March 2022, I was asked if I’d like to be interviewed as part of a research project concerning editing in the 21st century. What the hell, I said: I have close to no real experience of digital editing (unless you count my lovely digital edition of “Through Beatnik Eyeballs”), though I have made a reasonably satisfactory career out of telling other people how they should do it. One thing they really do teach well at Oxford is the ability to sound as if you know what you’re talking about… Anyway, I signed up, and after some vicissitudes was duly interviewed via Skype, sitting in my birdsong-filled garden, some time in June. Some considerable time later (at the end of December, to be precise) I was invited to revise the transcript someone had made of my interview, and did so. I removed some of the more egregious hesitations and a few garden-path sentences, but left the bullshit intact, collection of that being, after all, the object of the exercise. And later still, I learned, my transcribed interview joined a group of fifty or so others on a website, in a proper digital archive no less. This was all very satisfactory, though I was a little disappointed to find that the edited transcriptions were being made available in PDF or RTF only. Are those really now considered to be appropriate long-term preservation formats? I suppose in a world where Boris Johnson can become prime minister, nothing should be surprising. And what happened to the audio? The design goals of this project were firmly based in a part of the forest of the Digital Humanities somewhat removed from linguistic analysis, discourse semantics, problems of speech transcription, or the textual analysis of academic talk. The project was to deliver something readable by human beings, like a book. Quite enough for one grant.
But hoorah for the open-minded spirit of open access, which makes it possible for me (and anyone else so inclined) to play with the resulting resources and do at least some things not originally envisaged in the project design! Entirely unsurprisingly, I spent a happy few days last week downloading the RTF files and converting them to TEI (the scripts and the results are now available in a github repo). TEI because that’s what I do, but also because I wanted to be able to do textual analysis properly.
My resulting TEI corpus contains 46 small documents, one for each interview, each consisting of a sequence of TEI <sp> elements, with a @who attribute to indicate a unique code for the speaker. Each element contains one or more paragraphs of text, preceded by the speaker code as given in the transcription (there are a few differences). Like this:
<sp who="#LB">
<speaker>LB</speaker>
<p>I would have liked to have been better paid.</p></sp>
<sp who="#MK"><speaker>MK</speaker><p>Sure.</p></sp>
Each transcript was prefixed by a paragraph of background information about the interviewee, which I banished to the <front> of the document, as a source of metadata. I also created a rudimentary TEI Header for each document.
Importing my 46 documents into TXM, I found the corpus had a total of 237,271 words. I made a partition on the basis of the @who attribute, so that all the words for each distinct speaker were grouped together. Here’s the bar graph from TXM showing how many words were associated with each of the 51 distinct speakers. It shows that one speaker (JOS) talks a lot more than anyone else: but this is unsurprising, since he is one of the two interviewers, and I have simply aggregated his side of each discussion irrespective of participant. I did the same for the other interviewer, MK, who has fewer interviews (14 as opposed to 32 for JOS); at 6000 words, he actually talks less than the three most garrulous interviewees (JC, AG, and RR), all of whom manage more than 6500 words. At the other end of the scale, there are five interviewees who hover around 2000 words apiece. The bulk of respondents fall comfortably between these extremes.
What can I do with this data? Well, treating it just as data, it might be interesting to see whether the frequency with which words, or lemmas, or POS codes appear in each speaker’s chunk is much the same, or whether some stylometric statistic can be used to group like-speaking speakers together. Does everyone talk in more or less the same way, or (ex hypothesi) do professors and old lags like me talk differently from early career researchers? It’s not quite the typical stylometric use-case (which tries to establish probable authorship on the basis of similarities) but close. Fortunately for the mathematically challenged, there exists a fairly well established range of tools designed to explore such matters. Unfortunately for the mathemagically challenged (amongst whom I unreservedly place myself), you do need to know what you’re doing with these really quite sharp-edged tools. So please forgive any idiocies in what follows…
I played with TXM and with Stylo, both of which claim to be usable by the non-specialist, and both of which have (interestingly different, but that’s another story) user interfaces. TXM has the great advantage of accepting TEI XML as input and treating it sensibly. Stylo requires me to pre-process my XML text into dumbed-down plain ASCII, using arbitrary naming conventions to provide metadata. Both produce fancy graphics.
Here, for example, is a dendrogram from Stylo, showing how my 50 locuteurs cluster together, if we look just at the highest-frequency lemmas. If I interpret this correctly, it shows the interviewers (“allJOS” and “allMK”) grouped together and distinct from all other respondents, which seems reassuring.
This is even more evident in the Principal Component Analysis, also produced with Stylo, which shows JOS and MK as complete outliers.
And TXM provides further confirmation of this from a lexical perspective:
I think what this is telling me is not only that JOS and MK are outliers, miles from all the other documents (shown here only as a red splodge in the blue cloud), but also that the words they favour are characteristically to do with their role as interviewers (What, future, question, projects, maybe, etc.). Or so I believe. But clearly I am going to have to do a lot more background reading before I can say anything really interesting about this little dataset…
Consistency is a good thing…
Now that I have all the available files in a form which is at least valid according to a TEI P5 schema, I can start fussing about the consistency of the markup they contain.
Let’s start with an easy one. Attribute names may be marked up using the element <att>, or using the element <ident>, or using the element <name>, or just flagged in the running text, by a preceding @ (I didn’t find any cases of a following =, though I suspect there are some). I really don’t care a lot which is used, but I do care that there should be just one rule to bind them, not four. So how do things stand at present?
There are 298 <att> elements, 212 <ident> elements, and 98 <name> elements. <name> is used both for personal names and for names of attributes, classes etc. A first step therefore is to turn all name[@type] elements into idents, and to simplify the values for type. A second step might be to look at all occurrences of attribute names simply flagged with an @ sign: in wPressDox, the regex [[:space:]]@[[:alpha:]][[:alnum:]]+ matches 608 times; in legacyTEI files, only 18. Some careful regex matching might enable me to turn the @thing cases into <att>s in most cases. But hold on a second: if consistency is the goal, it would be much easier to turn all <att>thing</att>s into @thing than the reverse. The question really is: is faithfulness to the original tagging goals completely unimportant? We have 16 cases of ident[@type='attr'] and 47 of ident[@type='attrName'] and 21 cases of ident[@type='attr'], as well as 32 of name[@type='attr'], and indeed some random occasions where <code> (with no @type value) is used to delimit an attribute name. Making all of these consistently <att> seems to preserve the original encoder's goals. Turning them all into @xxx, however, seems somewhat different.
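The typed ident and name cases at least can be folded together mechanically; something like the following sketch (the list of @type values caught, and the idea of scanning text nodes for @-prefixed names, are my assumptions, and the second template would need guarding against code contexts):

<!-- A sketch: fold variously tagged attribute names into <att> -->
<xsl:template match="ident[@type = ('attr', 'attrName')] | name[@type = 'attr']">
 <att><xsl:value-of select="."/></att>
</xsl:template>

<!-- and catch @thing in running text -->
<xsl:template match="text()">
 <xsl:analyze-string select="." regex="@(\p{{L}}[\p{{L}}\d]*)">
  <xsl:matching-substring>
   <att><xsl:value-of select="regex-group(1)"/></att>
  </xsl:matching-substring>
  <xsl:non-matching-substring>
   <xsl:value-of select="."/>
  </xsl:non-matching-substring>
 </xsl:analyze-string>
</xsl:template>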
There are also cases where the original tagging has tried hard to make distinctions we would probably no longer bother with: for example <mentioned> and <soCalled>. I leave these well alone.
Now a more tricky one.
There are 1817 <div> elements, 705 of them being @typed. 619 of those are type=’h2’, but this does not necessarily reflect their hierarchic position, merely that they were originally indicated by an h2 level heading in the WordPress file. Of the 83 typed divs which are not “h2”, the most frequent are h3 (45), followed by h1 (8). However, of the 45 h3 divs, only 20 have an h2 parent; the remaining 25 are contained by an untyped div. Other very rare values include “agendaItem” (3), appendix (4), and glossary (3).
It seemed like a good idea to check that <div> elements contain something other than a nested div, and remove the redundant layer if not. The xpath //div[count(*) eq 1][div] finds 52 items, though this may be an artefact of my retagging script. Somewhat more problematic are <div>s which contain just a <head>: in some cases, these are probably genuine: for example
<div><head> 12:30 – 13:30: Lunch</head></div>
<div type="h2"><head>Goodbye Peter and thank you! 😟❤️</head></div>
but absent supernatural powers, it’s in principle impossible to know how to interpret a sequence of headings in the WordPress files. For example, here’s a snippet from the 2020-10_30824 document:
The “Review…” lines are div/head elements containing a link to another part of the document (this kind of transclusion happens nine times in all). Are they siblings or parents of the “SUNDAY…” div/head elements? Is the “Proposal on ruby glosses” a child or a sibling of “SUNDAY, 25…”? You tell me. I have mostly left them all as siblings for the moment.
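Going back to those genuinely redundant wrappers: splicing them out is a one-template job, assuming an identity transform handles everything else:

<!-- A sketch: remove a div whose only child is another div -->
<xsl:template match="div[count(*) eq 1][div]">
 <xsl:apply-templates/>
</xsl:template>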
Then there’s lists: there are 2456 <list>s, only 109 of them typed; the @type values are “unordered” (33), “simple” (21), “ordered” (25) and “gloss” (30).
There are 45 untyped lists which contain one or more items containing a label; three of these contain labels as children of the list itself as well as labels as children of the list/item; mercifully all confined to one file (2009-12). The label tag is used ambiguously: sometimes it contains a person’s siglum; sometimes a subheading. On further investigation, the siglum usage occurs only in one file (tcm24), so I changed all these to <rs>. The 30 explicitly gloss lists are all in the TEI legacy files too.
Further down the slippery slope: there are plenty of minority interest tagging distinctions: I have turned a scattering of <quote>s into <q>s, and all <ptr> elements into <ref>s, but not yet checked that all the @targets actually go somewhere. I have retained 9 cases of <time> which might just as well be <date>s; and six cases of <foreign>; also 76 <lb/> elements standing in (mostly) for proper structural markup. I have retained and made valid the one document in which <sp> and <speaker> elements have been used, but not tried to add such markup to the couple of occasions where the minutes launch into dramatic mode.
But rather than continue to polish this pig’s head, I have spent today providing it with some infrastructure, and putting it all in an accessible repository at https://github.com/lb42/theCellar
I also spent far too much time providing a CETEIcean front end to it, at https://lb42.github.io/theCellar/tcMins/index.html
Plenty more to do. Of course.
The usual problem
Now that I have all of the TEI Council minutes in XML which is more or less valid against TEI-All, I can start worrying about defining a sensible schema for them, oh bliss. One possibility might be just to accept and preserve every tagging decision taken during the long history of this archive, even the silly ones. Another might be to retro-convert everything to a single brutalist vision of how things Ought To Be. Or somewhere between the two extremes, perhaps.
Over the last 23 years, different editors of TC minutes have taken different views in all the places where you might expect them to. Even in the days when minutes were prepared in kosher TEI, mostly conforming to TEI Lite, there was still plenty of scope for different practice. Shall we distinguish soCalled, and mentioned, q and quote, term and emph? Are we consistent in using emph for linguistic emphasis rather than formatting? Do we distinguish q and quote, and if so, for why? If we have gi and att (or occasionally ident type=’att’) do we also need tag, and code and ident?
In more recent times, when such ontological anxieties have become perhaps less feverish, the minutes use a comparatively restricted set of distinctions, mostly to do with whether a snippet of text is in italic or bold, or used as a heading or a link, or is a list item. Indeed, sometimes the tagging decisions we see in the XML file are purely an artefact of the formatting tweaks needed to present the minutes on the WordPress website and have little to do with document structure or meaning. And in sadly many cases, if a semantically tagged version of such documents ever existed, it is now lost. Should we, in the interests of consistency, enforce the lowest common denominator across the whole set of documents?
Consistency at least in the way major components of each document are presented would surely be advantageous. To take a simple example, every set of minutes begins with a list of the persons participating in the meeting. Sometimes it is presented as a list of items; sometimes as a single paragraph; sometimes as a sequence of paragraphs. Almost always the names of individual attendees are associated with a siglum or set of initials, but the way in which this is all represented in the XML structure varies considerably. This sort of thing is easy, if time-consuming, to make consistent. And probably something like the current conventions, in which each person’s name is given as a distinct <item> within a <list>, should be aimed for, since it is clear that the various ways these lists are currently presented are really only an accident of formatting, of much less interest than the ease of processing the list in various ways. Whether or not to deploy the full TEI paraphernalia linking occurrences of a person’s initials in the text to their appearance in the list of attendees is another question.
Of course, if we were starting this exercise from zero, we would follow the textbooks and first carry out a data analysis. What are the important entities in a set of minutes, and what are their properties? Each of these documents relates to a meeting which took place over one or more days, in a specific place, or in cyberspace, with a specified set of participants. The minutes indicate the topics discussed, to some extent formalised in terms of identified issues, or action points. We might also ask what sorts of research questions should our analysis facilitate: how often do particular individuals or kinds of individual intervene? How long does it take for an issue to be resolved? How many different issues are under consideration at a particular time? Where do issues come from? And so on.
But we are not starting this exercise from scratch. The documents already exist. Moreover, the conceptual entities they are concerned with, and therefore represent, change over time, reflecting the Council’s evolution both in terms of its practice and its sense of purpose. That purpose has always been to maintain and develop the technical content of the TEI Guidelines, of course; but with the availability of sophisticated issue-tracking and reporting software the way in which this is carried out has changed a great deal. Consequently the operational model – the modus operandi – of the Council has also changed a great deal. These changes are necessarily reflected in the organization and content of the minutes.
Writing a full history of the TEI Council’s evolution is not however the purpose of this document, tempting though it is. A few salient aspects of that history do however affect our document analysis. For example, it’s necessary to understand that when first set up, the Council worked very much in the same way as the original TEI project: its role was largely to initiate, supervise, and integrate work carried out in more or less autonomous working groups. This had worked well for some major expansions of the P5 Guidelines, such as the addition of manuscript description, or character encoding issues following the adoption of Unicode, where the TEI had been able to constitute a motivated and informed group of experts to produce concrete proposals; less well in areas where such a group proved harder to constitute or motivate. For the first five years of its existence, from 2002 to the publication of TEI P5 1.0.0 in 2007, however, the Council’s minutes are full of reports from specific working groups, and actions on someone to pursue them.
This was also a period during which the TEI enjoyed the luxury of two paid editors. The process by which the Council itself took over editorial responsibility probably started with the full-scale review of the first draft of P5, in which each chapter was assigned to a Council member for review, though actual implementation of changes to the Guidelines (which involved a content management system called Perforce) remained a specialised activity, not available to all. The minutes from this period necessarily therefore have many “action points” aimed specifically at the editors.
For releases 1.0.1 to 2.7.0 (2008 to 2014) the following formulation of the Council’s role appeared on the PDF title page:
TEI P5:
Guidelines for Electronic Text
Encoding and Interchange
by the TEI Consortium
Originally edited by C.M. Sperberg-McQueen and Lou
Burnard for the ACH-ALLC-ACL Text Encoding Initiative
Now entirely revised and expanded under the supervision
of the Technical Council of the TEI Consortium
edited by Lou Burnard and Syd Bauman
Only in September 2014, with the 2.7.0 release, did that last line disappear, establishing finally that the Council was now editorially responsible for the whole.
By this date the Council's modus operandi had also changed considerably. Already, in 2009, we find the Council reviewing and acting on proposals for change to the Guidelines known as "feature requests", originating from the wider TEI community rather than from the Council or the Board. A key step towards expanding this practice was the adoption of the open-source issue tracker provided by SourceForge, which hosted the TEI Guidelines source from 2007 onwards and was a recognizable forerunner of the current GitHub-based system.
The move to such systems has several implications for the current archival project. Firstly it means that a substantial amount of the TEI’s intellectual history is now exhaustively documented, including all sorts of crazy ideas and false starts and frequent repetition, but all on a platform which the TEI itself does not own or control. Secondly, it means that the links into the documentary base provided by those external systems and the more diplomatic narrative constructions provided by the current minutes are really quite important if we wish to develop a proper historical understanding. And finally, of course, the availability of this detailed repository of issues and their resolution has changed dramatically the way the TEI Council does its work.
Defining the target
It's easy to say rather glibly that TEI markup is a good archival format, and in many respects it is: experience shows that a TEI file can nearly always be read without too many assumptions about the platform or software needed to read it. Because a TEI document uses a very basic form of labelled bracketing, developing software to act upon the markup is a breeze; moreover, because the semantics and syntax of the markup are well defined, the software can perform whatever tricks it likes on the basis of an explicit model of the document's structure and semantics. The tricky part is deciding what exactly the components of that explicit model should be: what (to coin a phrase) is this text really?
On the screen I am currently typing at, I see that the phrase "Defining the target" and the word "really" are both in an italic font. The first is a heading, and the second is a word I wish to emphasize. In neither case is it particularly helpful merely to state that there's a font change: if I lost that information in the first case you would still (probably) recognise the words as a heading on other grounds (it's not a sentence; it's a separate block; it's in a place where a heading is conventionally appropriate, etc.), but in the second, without the signal given by the font style change, you have no easy way of noticing that this word is meant to be more salient than the others, still less of recognising the allusion it makes to a famous journal article. Is recognising (and preserving) this emphasis as essential a part of this document as distinguishing the heading from what follows, or noting the paragraph divisions?
Tricky though it is, this business of deciding which are the "essential" components of a document, independently of its realisation on screen or paper, is something I and others have been doing for decades. The claim is not just that this separation of the document from its realisation is meaningful, but that it's also useful. Certainly it makes it much simpler to process masses of similar but different documents in a reasonably intelligent way if their structural components and semantically salient properties are explicitly and exhaustively flagged in the same way. And certainly there may be a price to pay for that simplicity: we may have to renounce the ability to visualise the document exactly as one or other of its many historical realisations did, just as we do in other cases where such a realisation depended on a specific software infrastructure. Good luck emulating a pixel-perfect WordPerfect 4.2 or WordStar view of your TEI document on the basis of its TEI archival form.
All this by way of prelude to the next stage in my attempts to recover/reconstruct a usable TEI archive of the deliberations of the TEI Council. Those deliberations currently exist (as previous blog entries have shown) in one or more of three different forms: as Google Docs, as WordPress HTML pages, or in one or other legacy TEI format. All of these formats are relatively simple to convert into XML without loss of such information as they already contain: the task is to define a minimal TEI markup scheme to which they can all be reduced, without losing anything essential. It is that classic TEI markup problem: what do you want to distinguish in your documents? With the added constraint that I’d rather not have to introduce distinctions not already explicit (one way or another) in the sources.
I started with the WordPress XML files, since these constitute the official published record, even though they have many shortcomings. I wrote another perl script to extract a list of all the different XML tags present in the files, and an XSLT stylesheet containing a default template for each of them, mapping format-oriented tags like <h1> and <b> to semantic ones like <head> and <hi>. I then spent a happy hour or three fiddling with that, before deciding that this approach was too labour-intensive to be a general solution.
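Each generated template was a rule of roughly the following shape (a minimal sketch of the approach, not the actual stylesheet):

<xsl:stylesheet version="2.0"
  xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
  xmlns="http://www.tei-c.org/ns/1.0">
  <!-- map format-oriented HTML tags to semantic TEI equivalents -->
  <xsl:template match="h1">
    <head><xsl:apply-templates/></head>
  </xsl:template>
  <xsl:template match="b">
    <hi rend="bold"><xsl:apply-templates/></hi>
  </xsl:template>
  <!-- any tag without a rule of its own: keep the content, drop the tag -->
  <xsl:template match="*">
    <xsl:apply-templates/>
  </xsl:template>
</xsl:stylesheet>

The labour-intensive part is of course deciding, tag by tag and file by file, which semantic element each formatting tag should become.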
So I moved on to the Google Docs files. I exported them all as docx files, applied the default TEI docxtotei conversion, and started looking at my 100 or so allegedly TEI documents. The first step was to generate an ODD which described their actual tagging practice, for which I used the TEI oddByExample utility. This is a good way of starting the process, but it has some quirks (such as specifying every element class you might use, even though you don't actually use any of them, and explicitly deleting each attribute supplied by a class rather than deleting the class itself), and one major drawback. The drawback is actually perhaps a virtue: the schema you get from the ODD it generates is a strictly conformant subset of TEI All. So if your data has features which are not valid in TEI All – shall we say @xml:id values which are of the wrong datatype, or empty <list> elements, or <list> or <table> elements appearing directly inside <front> instead of being decently wrapped inside a <div> – it won't be valid.
(These examples were not chosen at random, by the way: they are all the consequence of a bug – issue https://github.com/TEIC/Stylesheets/issues/604, reported this morning – in the current docxtotei tool.) Anyway, this means that either the ODD needs to be adapted to be more forgiving, or the data needs to be corrected to be less weird. Doing the former would also mean tweaking the data (to avoid polluting the TEI namespace with the weirdness), so choosing the latter course of action is probably the wiser decision. Especially since it's not so hard to correct the aberrations I have identified so far.
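To give the flavour of the oddByExample quirk mentioned above: where a hand-written ODD would simply not reference an unwanted attribute class, the generated ODD instead deletes each of its attributes individually, with declarations along these lines (my reconstruction for illustration, not verbatim oddByExample output):

<elementSpec module="core" ident="hi" mode="change">
  <attList>
    <!-- each unused class attribute is deleted one by one -->
    <attDef ident="rend" mode="delete"/>
    <attDef ident="rendition" mode="delete"/>
    <attDef ident="style" mode="delete"/>
  </attList>
</elementSpec>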
So my first XSLT stylesheet is simplify.xsl, which does just that. If it finds a <list> or a <table> directly inside a <front>, it wraps it in a <p> and looks the other way. When it finds an <anchor>, it sticks an extra letter in front of its identifier. After its ministrations, all 112 generated XML files (bar one, which had an empty <list> element) were valid against the generated ODD schema. Hosannah.
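In outline, simplify.xsl amounts to an identity transform with two extra templates, something like this sketch (the choice of prefix letter here is arbitrary, and the real stylesheet may differ in detail):

<xsl:stylesheet version="2.0"
  xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
  xmlns:tei="http://www.tei-c.org/ns/1.0"
  xmlns="http://www.tei-c.org/ns/1.0">
  <!-- default: copy everything through unchanged -->
  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>
  <!-- a stray list or table directly inside front gets wrapped in a p -->
  <xsl:template match="tei:front/tei:list | tei:front/tei:table">
    <p><xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy></p>
  </xsl:template>
  <!-- an anchor identifier gets an extra letter stuck in front of it -->
  <xsl:template match="tei:anchor/@xml:id">
    <xsl:attribute name="xml:id" select="concat('a', .)"/>
  </xsl:template>
</xsl:stylesheet>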
That leaves 51 items with no easy XML representation, or 12 items if we assume that the legacy TEI format also counts as potentially easy XML. Sadly all but one of those 12 are in the "ill-formed" WordPress XML format, so some (more) manual tweaking will be required before I can safely apply the retagfromWP conversion to them. Then I will have to work out what to do with the legacy TEI files, some of which are still in P4. But I think I see a way forward…
Surveying the Remains
There have been 161 TEI Council meetings up to February 2023. The minutes of each meeting (conference call or face to face) – except one – are available on the Council website, but only as WordPress pages.
I have tracked down a P4 or P5 source file for 40 of them, covering meetings up to October 2008. I think there must once have been more, because some of the WordPress pages show clear signs of having been adapted or converted from a TEI original. In several cases, some TEI tags are still present, notably <gi> (appears in 20 cases between 2009-04 and 2014-06) or publicationstmt (sic), which appears along with other remnants of a TEI header in 38 cases up to 2016-03. But there is no trace of the original source files anywhere on the current website.
From 2016 onwards, the website provides only WordPress format files, in which HTML tagging is used. However, this tagging is not entirely well formed: there are many cases where hard line breaks in a table cell are marked by HTML p end-tags, for example, and at least one where the internal structure of a table row has been completely lost.
As a first step, I wrote a perl script which did its best to extract a single well-formed XML document from each set of WordPress pages. This failed consistently for the 36 pre-2016 pages which contain residual TEI tagging, but worked reasonably well for the remainder, most of the time. Only 13 of the post-2016 files (out of 85) needed hand-editing to make them well formed, though the tagging still leaves much to be desired. In particular, I realised that some of the WordPress files made no attempt to preserve the often deeply-nested structure of the minutes, or to distinguish marginal annotation from the text.
Since 2016 the minutes have been edited in Google Docs, and drafts are therefore (currently) available in Word, ODT, or other formats from the Google Docs website, if you know where to find them. This part (finding them) became much easier when I asked former Council colleagues to share their secret stash of drafts with me. Converting from Google Docs to TEI is comparatively simple and much less error-prone than working with the WordPress pages directly. It really ought to be the WordPress pages which constitute the document of record for these minutes, but …
It seemed like a good idea to do a bit of checking in any case. So here’s what I did:
- use curl to download all the WordPress pages to 161 separate files called yyyy-dd.html
- use a perl script `articulate.prl` to extract from each of them a (hopefully) well-formed XML file containing just the 'article' recognised by WordPress; save the result in a file called yyyy-dd_dddd.xml (where dddd is the WordPress article number)
- check the well-formedness of the resulting files with `xmlwf` and spend no more than a day or two fiddling with the ill-formed ones to improve them
- spend a lot of time downloading and renaming files from Google Docs. The downloading was needed for files not in the zip James sent me; the renaming was essential for my personal sanity.
- finally, enrich the XML file made in the previous blog entry with links to all the files collected together.
At the last count, there are 162 entries (this includes one which is mysteriously missing from the current TEI website). Of these,
- 85 are available as well-formed WordPress XML files
- 37 only as ill-formed WordPress XML files
- 41 only in a legacy TEI format
- 115 are also available as draft versions from Google Docs
Of the 37 ill-formed WordPress files, 11 are not also available in Google Docs format.
The Google Docs collection lacks anything before 2012-04, and (for no apparent reason) three more recent items: minutes from 2014-01, 2015-10, and 2017-11.
So my next step will probably be to define a target TEI format (with an ODD of course) and set about writing snippets of XSLT.
Yesterday’s Information Tomorrow (maybe)
If you go to the TEI's website at http://www.tei-c.org you will find, as you might hope, a respectable number of documents tracking the evolution of the Text Encoding Initiative over the last umpteen decades. Curiously, though, the record for the most ancient period (before 2008, shall we say) is a lot easier to find and manipulate than that for more recent times. This posting records my attempts to put together in archival format the full record of the meetings of the TEI Technical Council.
The Council, as any fule kno, met for the first time in 2002, and is still producing regular reports of its debates and its decisions. There is a page on the TEI website (https://tei-c.org/activities/council/Meetings/) which “lists TEI Technical Council meetings and teleconferences, with links to the meeting minutes.”
I downloaded that list (it's a WordPress HTML file, of course), ran it through HTML Tidy, and processed the result to produce a nice simple TEI file of entries like this:
<list>
<head>2022</head>
<item> conference call <date>8 December 2022</date>
  <ref target="https://tei-c.org/activities/council/meetings/tei-technical-council-teleconference-2022-12-08/"> [on website]</ref></item>
<item> conference call <date>10 November 2022</date>
  <ref target="https://tei-c.org/activities/council/meetings/tei-technical-council-teleconference-2022-11-10/"> [on website]</ref></item>
My quixotic goal is to enrich this data with links to TEI source files for each set of minutes, preferably in a consistent TEI format.
Now, twenty years ago this would have been quite a reasonable proposition since (as the current TEI Vault shows), the TEI once had an “eat your own dogfood” policy of producing all of its documents in TEI. Over the years, this policy has varied somewhat, largely as a consequence of changes in the tools available, and the culture that goes with them. These policy changes relate not just to the look and feel of the website itself but also to which versions of its contents are preserved and how. Today, I think it is not unreasonable to say that much of the TEI website exists only as WordPress pages: many of those pages were first created as Google Docs and then converted to WordPress, some of the older ones were created originally in hand-crafted TEI XML, and the very oldest were created in TEI P4 SGML, but the only versions that can reliably be downloaded from the current website are in WordPress HTML.
Much of the time, of course, this is unproblematic. Mostly, we just want to read the stuff, not analyse it. But occasionally, and especially for the older material, whoever or whatever was responsible for producing the WordPress files has really made a hash of it. Consider, for example, the working paper
This working document is quite an important one in the history of ODD. But as currently presented on the TEI web site it is badly broken, to the extent that the text has become incomprehensible. Consider this paragraph:
Comparison with earlier versions of the page (thank you, Wayback Machine) shows that this is a recent breakage: here for example is the same paragraph as it appeared back in 2005, when it entered the Internet Archive.
The Wayback Machine, of course, can only archive what its crawlers find. They found this page a couple of times between 2005 and 2018, both times looking fine, but thereafter only the WordPress version. This would not matter so much were it not for the fact that the original TEI P5 source has apparently not been archived anywhere, so the breakage cannot easily be fixed.
Such losses in translation occur occasionally in more recent documents too. Here’s a paragraph from the WordPress view of document tcm46 (Minutes of the TEI Council’s April 2011 meeting) for example:
Again, until or unless I track down the original version of this file, there’s no way of filling that particular gap.
Less annoying, but more pervasive is the fact that the WordPress files rarely try to preserve any structural or semantic information. The markup will mostly contain a long series of list items, some of which may pertain to the same topic, some of which may in fact be headings, some of which are an accident of formatting. In the text (apart from links) there’s no explicit indication of interesting things you might want to search for, such as names, places, or dates.
Very few WordPress files are well-formed HTML, though the wonderful W3C utility tidy does a good job of pushing them into a processable shape. Out of 120 WordPress files, 38 (nearly a third) failed to respond to this treatment, mostly because they contained an unhealthy mixture of HTML and TEI or TEI-like tags.
And finally it has to be said (I'll be brief) that it seems really sad that the TEI is preserving its deliberations in a proprietary, tool-dependent, presentation-oriented format … the kind of format which the TEI was set up to preserve scholarship from. What kind of apostasy is that?
Hunting for Lacy traces in the digital world
Lacy’s Acting Edition was published in a series of 100 volumes, each containing up to 15 plays, between 1850 and 1874. (All dates approximate and unreliable). In addition to the collected volumes, Lacy sold individual play titles in cheap (6d) paper copies, many of which also found their way into private collections and public libraries. Consequently, copies of various components of the Lacy Acting Editions are now scattered across many research libraries. In some cases, they also exist in digital form, usually as scanned page images.
It is relatively easy to recover details of a library's holdings from an online catalogue, for example by searching for the string "Lacy's Acting Edition" or by specifying "Thomas Hailes Lacy" as publisher. It is less easy to restrict the search to generally available digital versions, since there is still no reliable joint catalogue of digitized texts in major public collections, combining the digital holdings of, say, the British Library, the Bodleian, and other UK libraries, in the way that has been done for many US libraries by the Hathi Trust, or more generally by the Internet Archive. (A project at the National Library of Scotland did set up such a site, under the name opentexts.world, a few years back, but its status is currently unclear and it appears to be unsupported.)
The ease with which the results of such searches can be obtained in a machine-tractable form (rather than simply displayed on a web page) is also quite variable. One is usually forced to fall back on web-scraping techniques and quite a lot of manual post-editing. This note documents my fairly uneven progress towards a definitive collection of links to existing and freely available digital copies, on various sites, of the plays constituting the Acting Edition. The fairly good news is that, as of today, of the 1498 titles making up the 100-volume Acting Edition, I have identified 586 which are freely available in some digital form somewhere. Track progress by looking at my online catalogue.
Hathi Trust
A search for the string “Lacy’s Acting Edition” anywhere in the catalogue record at https://catalog.hathitrust.org/ produces 294 hits, of which 246 are available in “full view” (i.e. should be downloadable without formality). A search for the string “Thomas Hailes Lacy” as publisher somewhat counter-intuitively produces only 94 hits. The web page displaying results looks like this:
[Screenshot: results from a HT search. Setting the page length to the maximum allowed (100) makes it feasible in this case to download all pages with minimal scrolling.]
As usual, the easiest way to screen scrape is to save the HTML page as a file, use tidy to make it into well-formed XML, and then write XSLT to extract the useful information. In this case, the generated XML uses an undefined prefix “xlink:”, which I had to remove by hand, but apart from that everything needful was done by the XSLT stylesheet htScraper.xsl, resulting in a document (htListFull.xml) containing entries like this:
<bibl>
  <title>The first night; a comic drama in one act.</title>
  <pubDate>1800</pubDate>
  <author>Lacy, Thomas Hailes, 1809-73.</author>
</bibl>
<bibl>
  <title>After the party; a comedy in one act.</title>
  <pubDate>1870</pubDate>
  <author>Lacy, Thomas Hailes, 1809-1873.</author>
  <ref target="https://hdl.handle.net/2027/hvd.32044072039373">HT</ref>
</bibl>
No <ref> element is generated for entries which are not accessible in "full view" mode. Also note that the handle quoted above is for the HathiTrust index page; to download the whole text as a single PDF file you must visit that page, and wait while the PDF is constructed. Oh, and yes, you must also be logged in at a HathiTrust member institution. So much for "full view" access.
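For completeness, here is the general shape of htScraper.xsl – though I should stress that the element and class names below are invented for illustration, the markup HathiTrust actually serves being considerably more convoluted:

<xsl:stylesheet version="2.0"
  xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
  xmlns="http://www.tei-c.org/ns/1.0"
  xpath-default-namespace="http://www.w3.org/1999/xhtml">
  <!-- hypothetical result markup: one div per catalogue record -->
  <xsl:template match="div[@class='record']">
    <bibl>
      <title><xsl:value-of select=".//span[@class='title']"/></title>
      <pubDate><xsl:value-of select=".//span[@class='date']"/></pubDate>
      <author><xsl:value-of select=".//span[@class='author']"/></author>
      <!-- a ref is generated only when a full-view link is present -->
      <xsl:if test=".//a[@class='fullview']">
        <ref target="{.//a[@class='fullview']/@href}">HT</ref>
      </xsl:if>
    </bibl>
  </xsl:template>
  <!-- suppress everything else -->
  <xsl:template match="text()"/>
</xsl:stylesheet>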
Open Texts
I blogged about this now sadly unmaintained site back in October 2020. The site was dark for a while, but seems to be back for the moment: this morning I visited and was able to download a list of 106 hits in CSV, XML, or JSON in one click, which was nice.
A one-click download option at the foot of the first page of results is exactly what I like to see. Individual results look like this:
<doc>
  <str name="organisation">Bodleian Libraries</str>
  <str name="idLocal">016930688</str>
  <str name="title">King of the Merrows, or, The prince and the piper : a fairy extravaganza / written by F.C. Burnand, from an original plot constructed by J. Palgrave Simpson.</str>
  <str name="urlMain">http://purl.ox.ac.uk/uuid/42d1488e99284fef9421268c33e4730d</str>
  <int name="year">1862</int>
  <arr name="date">
    <str>1862</str>
  </arr>
  <arr name="publisher">
    <str>Thomas Hailes Lacy</str>
  </arr>
  <arr name="creator">
    <str>Burnand, F. C.</str>
  </arr>
  <arr name="description">
    <str>First performed at the Royal Olympic Theatre, 26th December, 1861.</str>
  </arr>
  <arr name="placeOfPublication">
    <str>London</str>
  </arr>
  <str name="catLink">http://solo.bodleian.ox.ac.uk/permalink/f/89vilt/oxfaleph016930688</str>
  <str name="language">English</str>
</doc>
are easily converted (e.g. by my stylesheet opentexts-conv.xsl) to produce
<bibl>
  <title>King of the Merrows, or, The prince and the piper : a fairy extravaganza / written by F.C. Burnand, from an original plot constructed by J. Palgrave Simpson.</title>
  <pubDate>1862</pubDate>
  <author>Burnand, F. C.</author>
  <note>First performed at the Royal Olympic Theatre, 26th December, 1861.</note>
  <ref target="http://purl.ox.ac.uk/uuid/42d1488e99284fef9421268c33e4730d"/>
</bibl>
which is easily merged into the main Lacy catalogue.
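For the curious, the essential move in opentexts-conv.xsl is a single template along these lines (a simplified sketch of mine, ignoring the handling of missing fields):

<xsl:stylesheet version="2.0"
  xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
  xmlns="http://www.tei-c.org/ns/1.0">
  <!-- one TEI bibl per Solr-style doc element -->
  <xsl:template match="doc">
    <bibl>
      <title><xsl:value-of select="str[@name='title']"/></title>
      <pubDate><xsl:value-of select="int[@name='year']"/></pubDate>
      <author><xsl:value-of select="arr[@name='creator']/str"/></author>
      <note><xsl:value-of select="arr[@name='description']/str"/></note>
      <ref target="{str[@name='urlMain']}"/>
    </bibl>
  </xsl:template>
</xsl:stylesheet>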
Moreover, in this case (hoorah for the Bodleian), a visit to the publicly available URL actually downloads the whole of the PDF file without further ado.
Sadly, PURLs are available for only three of the items in the Open Texts list of 106; the vast majority (90) are handles from HathiTrust, and the rest (13) are links to archive.org. Moreover, the data has apparently not been updated since October 2020, which is presumably why it has nothing like the 316 handles I found in the Hathi Trust catalogue for myself. In fact, every one of the handles it supplies exists also in the htListFull.xml list.
Google Books
A nightmare. Google has digitized (almost certainly) all of the Lacy Acting Edition volumes, but it seems to be entirely arbitrary which ones you can access via Google Books. I have tried various approaches to searching (there is something called a `bibliogroup` for Lacy), and then reprocessing the resulting (very obscure) HTML, but cannot say I have succeeded in cracking this code. The file gbSearch.xml contains the screen-scraped-and-converted-to-XML output from a query for this bibliogroup; the stylesheet gbSearch.xsl filters out from it the 37 useful links it provides to files you can actually download from Google Books (but you still have to go through a captcha check, of course).
Searching specifically for "Lacy Acting Edition" on Google Books will provide an exciting list of entries for each of the first 93 volumes in the LAE – but only two of them (volumes 77 and 93) actually have anything you can download. (I belatedly discovered that this annoying behaviour can be modified by selecting "Full View" from the drop-down menu at top left of the query screen, which hides the titles you cannot have.) On the other hand, there are also a few occasions where the text actually digitized for a specific title is the whole of the volume in which that title appears. Thus, searching Google Books for The Half Caste will provide you with a link for the whole of volume 97, in which that title appears. Likewise a search for In Three Volumes actually gives you a link to the whole of volume 91. Anyway, once you have a reliable link to Google's equivalent of the Internet Archive's "details" page (at the moment, it looks like https://www.google.co.uk/books/edition/Oberon_An_opera_in_four_acts_in_prose_an/IoFaWP1TQgkC) you can pass that to Google, and get back a "New" Google Books page in the middle of which is a nice "Download PDF" button. Which works – once you have completed the annoying captcha test, of course.
All very well if you have the time to spend cutting and pasting links: but why couldn’t Google have provided a simple download in a form I can script? I assume it’s for the same reason they want to control access to these resources — to stop unscrupulous entrepreneurs in the “Print On Demand” industry from making a swift buck. And we all know how effective that policy is, don’t we?
Bodley
Real librarians do it with Z39.50. But my results (bodleyTexts.xml) show only 9 titles available in digital form.
The Hall Collection
Every now and then, serendipitous searching pays off. The Hall Collection contains approximately 600 English plays, mostly from the late 18th and early 19th centuries, originally used as prompt books by a professional actress called Clara St. Casse. The Collection was donated to the University of Warwick Library by a Mrs G. F. Hall of Leamington Spa, together with a collection of other printed plays. Naturally it includes quite a few (102 to be exact) Lacy titles. Although the Warwick site (https://wdc.contentdm.oclc.org/digital/collection/hall) seems to provide only downloads and browsing of individual pages, someone, presumably from the Library, has also had the good sense and generosity to deposit the whole collection at archive.org, from which I was able to obtain an XML file (hallColl.xml). This can readily be processed to produce links to the 102 Lacy published titles: see hallCollTitles.xml.
Internet Archive
This archive has an excellent search interface and will also deliver results in any tractable form you like, including JSON or XML. It cannot however perform magic to overcome variant cataloguing practices amongst the collections it has incorporated. So, for example, a search for "Lacy Acting Edition" throws up precisely one hit ("a copy graciously made available by Fordham University"). A more general search for "Thomas Hailes Lacy" gets me 125 hits, 102 of which come from the Hall Collection. A search for `(thomas hailes lacy) AND -collection:(hallcollection)` finds me the 23 titles not included in the Hall Collection. On the other hand, a search for `T.H. Lacy AND -collection:(hallcollection)` finds 66 titles which are not in the Hall Collection, but which do not appear in the foregoing results either.
On the bright side, the hits can be downloaded in a format which is more or less identical to that generated by the XML option quoted for the Open Texts server above, so munging the results lists together is a Simple Matter Of Programming, resulting in iaList.xml.