Hand Lists – The Return

Three weeks ago, I wrote an interim report on the work I was doing to make Allardyce Nicoll’s Handlists more machine tractable. I didn’t actually spend all of the previous month correcting OCR errors, writing bits of XSLT to manipulate the OCRd text, figuring out what had gone wrong with my matching algorithm etc. It just feels that way.

Anyway, here’s a result:

This camembert shows how all the 25,000+ entries in the two Handlists are classified. The categories used (Drama, Farce, Panto, etc.) are ones I made up by grouping together the much finer-grained but trickier text types Nicoll provides (of which there are more than a hundred values) into the 15 basic classes you see above. More of that another day.

The size of each wedge is, as you might expect, proportionate to the number of entries so classified, and (reading anti-clockwise) they are in descending order. As I noted last month, the top six categories together account for three-quarters of the data.

I also said last month that my next mission would be to see how these proportions change over time. And indeed they do. Like this:

Each column here represents a decade for which the Handlists provide data, from 1810s on the left to 1890s on the right. Each column summarizes theatrical events recorded for that decade, using the same 15 crude classifications as the camembert. The size of each coloured blob is proportionate to the percentage of events in that decade classified in that way. For example, in the 1860s column, the pale blue blob is much bigger than any of the others, because nearly half (48.6% to be exact) of the available theatrical events that decade are classified as “Drama”. In the same decade, the pale green blob above it is smaller because “Farce” accounted for a smaller proportion (15%). I haven’t included the numbers in the graphic to make it easier to read, but they are available.

Note that all the blobs are stacked on top of each other in alphabetical order, so you can detect changes over time for a given category by reading from left to right. For example, a blue blob for “Panto” appears near the top (row 4) in each decade, showing that this particular form of theatre formed part of each decade’s offerings, getting perhaps a little more popular as the century wears on, but never disappearing. Contrast that with “Melodrama” (the purple blobs in row seven) or “Burletta” (the dark yellow blobs near the bottom), both of which are flourishing in the decades before mid-century, and almost entirely eclipsed thereafter.

Now, I am certainly not claiming to have discovered that melodrama and burletta were both seriously unfashionable from round about 1850 onwards, despite their earlier mode-ishness. But it is always satisfying (and reassuring) to find “common knowledge” backed up by actual observed data.

Handlists made handier

I have been down a deep deep rabbit hole for the last week or two trying to get my XML-tagged versions of Allardyce Nicoll’s two Handlists into shape. Here is an interim report.

What is an entry?

One problem has to do with the actual content of Nicoll’s Handlists. What exactly do they list? Although this is essentially a record of performances, it is organized very much by author. There are, for example, about 30 entries which don’t refer to any specific play, but are merely there to indicate the preferred form of an author’s name: like this one:

<entry>“LAWRENCE, SLINGSBY.” See G. H. LEWES</entry>

Multiple authorship is also a problem. An entry like this one is straightforward enough:

<entry type="D."><author>ANDERSON, JAMES R. </author><title>The Robbers </title><note type="perf" >D.L. 21/4/51</note>. L.C. D.L. 26/12/45.</entry>

Inter alia, this tells us that there was a performance of a drama called “The Robbers” at Drury Lane theatre on 21 April 1851, and that the author of the piece is recorded to be James R. Anderson.

But there are also entries like this one:

<entry type="F."><author>ATWELL, E. </author><title>A Stuffed Dog </title><note type="perf">Park. H. Camden Town, 2/11/89</note>. See J. A. KNOX.</entry>

This one tells us that “A Stuffed Dog”, written by E. Atwell, was making them roar at the Park Theatre in Camden Town in November 1889. If we look for Mr J. A. Knox, we find another entry, apparently for the same performance:

<entry type="F."><author>KNOX, J. ARMORY </author><title>A Stuffed Dog </title><note type="perf" >Park H. Camden Town, 2/11/89, copy.</note>. L.C. [Written in collaboration with E. ATWELL .]</entry>

On the face of it, if I want to determine how many Farces are listed in the Handlists, for example to determine the waxing or waning of this particular type of performance over time, I need to be wary of cases like this one, where a single farce has multiple authors, and therefore gives rise to multiple entries: both these entries refer to a single performance, so should only be counted once.

How serious a problem is this? Out of 25 thousand-plus entries (25,632 to be exact), I find that there are 1346 entries containing the word “collaboration” and 1738 containing the word “See ”. Most, but not all, of them point to a collaboration entry which references the same performance of the same play. There are only two entries in which the word “See ” really appears as part of a title, but there are maybe a dozen or more other types of cross references, for example to plays renamed or whose authorship Nicoll has resolved. The number of cross references which go nowhere or to an entry which documents a different title or performance is unknown, but not zero. I spent some time trying to check automatically but did not finish: other bits of the rabbit hole (like checking and fixing OCR errors in the dates) seemed more useful.
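If I ever do finish that automated check, it will be something along these lines: a minimal XSLT 2.0 sketch which assumes the entry structure shown above (an <author> child with the surname before the comma, a <title> child, and a trailing “See SOMEONE” in the running text), and which would certainly need fuzzier matching to cope with OCR errors and variant titles.

~~~
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="2.0">
  <xsl:output method="text"/>
  <!-- For each entry with a title and a "See X" cross-reference, look for another
       entry under the surname X with the same title, and report any that fail. -->
  <xsl:template match="/">
    <xsl:for-each select="//entry[title][contains(., ' See ')]">
      <xsl:variable name="target" select="normalize-space(substring-after(., ' See '))"/>
      <!-- take the last word of the cross-reference as the surname, dropping punctuation -->
      <xsl:variable name="surname"
        select="replace(tokenize($target, '\s+')[last()], '[^A-Z]', '')"/>
      <xsl:variable name="hits"
        select="//entry[normalize-space(substring-before(author, ',')) = $surname]
                       [normalize-space(title) = normalize-space(current()/title)]"/>
      <xsl:if test="empty($hits)">
        <xsl:value-of
          select="concat(normalize-space(title), ': no matching entry under ', $surname, '&#10;')"/>
      </xsl:if>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>
~~~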

Although ostensibly organized by author, some two-fifths of the entries in the Handlists record performances for which there is no author (10,374 out of 25,662 entries, to be precise). Usually, but not always, distinct entries are given for the same title performed on different occasions or at different venues – but occasionally an entry will provide a list of performances: like this one:

<entry type="C.O."><author>MANCHESTER, G. </author><title> The School Girl </title><note type="perf">Grand, Cardiff, 2/9/95; Stand. 14/10/95</note>. L.C. [Music by A. Maurice.]</entry>

I have not checked, but I suspect that this happens when a play has its first performance in the provinces: in this example, we may conjecture that “The School Girl” went down well enough in Cardiff for the management to risk bringing it to the Standard in Shoreditch a month later.

What sort of play is this?

Nicoll thoughtfully provides lists of the abbreviated codes he uses to indicate “the nature of the play itself” for both the Handlist 1800-1850 and the Handlist 1850-1900. There are a few codes present in only one or other of the two lists (B.O. for Ballad Opera is in the earlier list only, for example). An investigation of his usage of these codes over the two Handlists indicates the justice of the warning he also provides that “these designations are in no way final, and are often indefinite”. The two lists propose a total of 87 different codes including some very general categories (D for drama, F for Farce, P for Pantomime etc.) as well as many more nuanced classifications such as “Military Drama”, “Operatic Drama”, “Poetic Drama”, “Romantic Comedy”, “Romantic Comedy Drama”, and “Romantic Drama”. Nicoll further remarks “Where possible, the designation employed in the original bills has here been followed” – so we should take these characterisations as indicative of the language in which Victorian Theatre chose to describe itself, not as a formally organized taxonomy.

I did some counting up of the actual usage of these codes, and found a further 23 codes not specified in either list. However, most of these are used very infrequently: fewer than 10 times for all except two of them. The exceptions are the em dash, which is used for 15 entries that describe non-performance items such as published collections of plays, and the code “Bsq. O.”, which is used for 22 entries, all of them presumably “burlesque operas”.

For what it’s worth, here’s a colourful camembert to show the distributional statistics of these categorisations. Of the 110 different codes used, more than half (64) are used fewer than 10 times. Or, to put it another way, the top six codes between them account for 17,242 out of the total number of 25,632 entries – over 67%; the top eight codes (labelled in the picture) account for more than three-quarters of the whole population.

My next project will be to see if these proportions change significantly over time.

Categorising Lacy

Collecting the data

Allardyce Nicoll’s monumental “History of English Drama 1660-1900” (Cambridge University Press, 1955) has two volumes containing “hand lists” of theatrical titles produced in the first and second halves of the 19th century respectively. The list in Volume 4, covering 1800-1850, can be downloaded from CUP’s site in PDF format if you have an appropriate institutional login; it even has a DOI: https://doi.org/10.1017/CBO9780511897764.010.

I downloaded and worked on the PDF version (handlist18001850.pdf) with the following workflow:

  • Generate DOCX version using ABBYY (thanks, HumaNum) (handlist18001850_hnOCR.docx)
  • Process this with `docxtoTEI` (thanks TEI) (handlist18001850_hnOCR.xml); some hand edits too to get rid of unnecessary clutter such as forme work and simplify the subsequent processing
  • Process this with `addMilestones.xsl` to delimit entries and author sections
  • Process output with `writeList.xsl` to produce structured list of all titles (entries1800-1850.xml)
  • Process with `addKey.xsl` to select entries mentioning Lacy and add a magic key derived from the title (LacyEntries1.xml); see the sketch after this list
  • Process this with `checkEntryList.xsl` to determine whether there is a match for this magic key in the current Lacy Catalog file, and add an attribute @matched indicating the result (LacyEntriesChecked1.xml)
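The “magic key” mentioned above is nothing very clever. The real addKey.xsl may normalize differently, but it amounts to something like this conjectural sketch: lower-case the title, drop a leading article, strip everything except letters and digits, and record the result as @n on the entry.

~~~
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
  xmlns:xs="http://www.w3.org/2001/XMLSchema"
  xmlns:local="http://example.org/local"
  exclude-result-prefixes="xs local" version="2.0">
  <!-- Conjectural reconstruction of the "magic key" derivation -->
  <xsl:function name="local:magicKey" as="xs:string">
    <xsl:param name="title" as="xs:string"/>
    <xsl:variable name="t"
      select="replace(lower-case(normalize-space($title)), '^(the|a|an) ', '')"/>
    <xsl:sequence select="replace($t, '[^a-z0-9]', '')"/>
  </xsl:function>
  <!-- identity transform, adding the key as @n to every entry which has a title -->
  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>
  <xsl:template match="entry[title]">
    <entry n="{local:magicKey(title[1])}">
      <xsl:apply-templates select="@*|node()"/>
    </entry>
  </xsl:template>
</xsl:stylesheet>
~~~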

The text of Volume 5, covering titles from 1850 to 1900, is not available in digital form from CUP for some unknown reason. I did however discover in the Internet Archive a less than perfect scan of it, provided by the Digital Library of India (https://archive.org/details/in.ernet.dli.2015.40678). I used a grubby perl script (`tagPlayList.pl`) to process the fairly unreliable OCR plain text version of this to produce output compatible with that produced by `addMilestones.xsl`, and fed this into the toolchain listed above to produce LacyEntries2.xml and LacyEntriesChecked2.xml.

Results

– entries1800-1850.xml has 8495 titles of which 456 (5.4%) mention Lacy; of these, 307 could be matched, and 149 not.

– entries1850-1900.xml has 17202 titles of which 754 (4.4%) mention Lacy; of these, 454 could be matched and 300 not.

Failing a bit better

I next tried a different way of checking for matches, viz. the ever reliable unix utility `comm` applied to a pair of text files, one containing all the values for title/@n in the LacyCatalog and the other containing all the values for entry/@n in the tweaked Nicoll entry lists. First time round, unsurprisingly this gave different results: 537 matches in all, with 652 values unique to the Nicoll-derived list, and 961 unique to the Lacy catalogue.

I noticed however that both files had some duplicate entries, which were causing confusion in the matching process, though unsurprisingly, the most frequent cause of disagreement seemed to be OCR error or inconsistent formatting.

I manually added suffixes to handle the discrepancies I noticed in @n values. The author’s surname was used to distinguish identically titled but different works; the suffix “-bis” for actual repetitions.

Removing the duplicate entries and some of the more egregious OCR errors gave more plausible values: 760 matches in all, with 429 values unique to the Nicoll-derived list, and 738 unique to the Lacy catalogue. This was further improved by checking the 23 titles in Nicoll’s list which indicated that the Lacy publication used a different title. The output from comm after this modification gave me 772 matched titles, with 410 unique to Nicoll, and 726 unique to Lacy.
~~~
saxon ~/Public/Lacy/newcatalog.xml listKeys.xsl > lacyKeys.txt
saxon lacyEntries.xml listEntryKeys.xsl > nicollKeys.txt
comm --total lacyKeys.txt nicollKeys.txt
~~~


This exercise also revealed that there are a couple of genuine double entries in the Lacy Catalogue. Henry Byron’s “Bluebeard” extravaganza appears both in volume 19.3 (L0273) and in volume 49.9 (L0729); likewise Selby’s “Witch of Windermere” appears both in volume 84.4 (L1248) and in volume 3.6 (L0036).

After more tweaking, I decided to declare victory with 753 matched entries, and 436 not matching. Nicoll’s catalog includes entries for Lacy titles from volumes after 100, which I do not include in this project, so I expected some matches to fail.

Categorizing the titles

One of the reasons for being interested in Nicoll’s Handlists is that they assign each title a code such as ‘D’ for drama. How useful are these codes as a way of categorizing the contents of the Lacy Acting Edition (LAE)?

Nicoll’s category codes are at once very delicate and very vague. Clearly, they are derived from the way the piece describes itself on its title page (if it says it’s an “extravaganza” then that’s what it is), but at the same time these often ornate descriptions have clearly been rationalised to make up a smaller number of unique category labels than simply using the words of the title would. But only to a degree: “burletta” and “burlesque” are distinguished, with roughly equal numbers of each, but there are plenty of titles which describe themselves as “a burlesque burletta”. In an attempt to further simplify these descriptions, I decided to map Nicoll’s codes to just four main classes: Comedy, Drama, Musical, and Spectacle — even though items frequently cross these very broad categories: should “comic drama” go under “comedy” or “drama”, for example? “Musical” is particularly problematic since many comedies include songs, as does almost anything classed as a “Spectacle”: my intention was to limit it to pieces clearly operatic or at least operetta-ic .
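To give a flavour of the mapping, here is a minimal sketch of the lookup involved; the handful of codes shown is purely illustrative, and the assignments to classes are mine, not Nicoll’s.

~~~
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
  xmlns:xs="http://www.w3.org/2001/XMLSchema"
  xmlns:local="http://example.org/local" version="2.0">
  <xsl:output method="text"/>
  <!-- A few of the codes, grouped into my four crude classes; the full mapping
       covers many more codes and involves plenty of debatable choices. -->
  <xsl:function name="local:class" as="xs:string">
    <xsl:param name="code" as="xs:string"/>
    <xsl:variable name="c" select="normalize-space($code)"/>
    <xsl:sequence select="
      if ($c = ('C.', 'F.'))         then 'COMEDY'
      else if ($c = ('D.', 'T.'))    then 'DRAMA'
      else if ($c = ('C.O.', 'B.O.'))then 'MUSICAL'
      else if ($c = ('P.', 'Ext.'))  then 'SPECTACLE'
      else 'UNCLASSIFIED'"/>
  </xsl:function>
  <!-- count up the entries in each class, reading the code from entry/@type as above -->
  <xsl:template match="/">
    <xsl:for-each-group select="//entry[@type]" group-by="local:class(@type)">
      <xsl:value-of
        select="concat(current-grouping-key(), ': ', count(current-group()), '&#10;')"/>
    </xsl:for-each-group>
  </xsl:template>
</xsl:stylesheet>
~~~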

Undeterred, I tried my own experiment in classification, based on the words appearing on the title pages. I used the following regexp to collect categorizing strings:

([Cc]omedy|[Cc]omic|[Ff]arce|[Tt]ragedy|[Dd]rama|[Bb]urle|[Pp]antomime|[Cc]omedietta|[Ee]xtravagan|[Oo]per[ae]|[Vv]audeville|[Pp]lay|[Ss]ketch|[Ii]nterlude|[Mm]usic)

and concatenated all the matching strings for each title. Prefixed with the number of acts in the play, this became the value for a @type attribute on each catalogue entry. For example, a one act play whose subtitle contained the phrase “A Burlesque Burletta” would be given the category label “1_BurleBurle”. A script `listTypes.xsl` counted up unique category labels (disregarding the number of acts) producing a list that begins:
~~~
Burle (40)
BurleBurle (3)
BurleBurleOpera (1)
BurleDrama (7)
BurleExtravagan (21)
BurleExtravaganComicPantomime (1)
~~~

(Note that the counts are for the whole string of categorizers: the count for “Burle” does not include the count for “BurleBurle”)
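For what it’s worth, the label-building step amounts to little more than the following sketch (the function name is mine, and the real script also prefixes the act count and pulls the subtitle out of the catalogue markup):

~~~
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
  xmlns:xs="http://www.w3.org/2001/XMLSchema"
  xmlns:local="http://example.org/local" version="2.0">
  <xsl:output method="text"/>
  <!-- the regexp given above, as a reusable variable -->
  <xsl:variable name="catWords"
    select="'([Cc]omedy|[Cc]omic|[Ff]arce|[Tt]ragedy|[Dd]rama|[Bb]urle|[Pp]antomime|[Cc]omedietta|[Ee]xtravagan|[Oo]per[ae]|[Vv]audeville|[Pp]lay|[Ss]ketch|[Ii]nterlude|[Mm]usic)'"/>
  <!-- collect every categorizing substring in a subtitle and concatenate them -->
  <xsl:function name="local:label" as="xs:string">
    <xsl:param name="subtitle" as="xs:string"/>
    <xsl:variable name="bits" as="xs:string*">
      <xsl:analyze-string select="$subtitle" regex="{$catWords}">
        <xsl:matching-substring>
          <!-- normalize case so "burlesque" and "Burlesque" both yield "Burle" -->
          <xsl:sequence
            select="concat(upper-case(substring(., 1, 1)), lower-case(substring(., 2)))"/>
        </xsl:matching-substring>
      </xsl:analyze-string>
    </xsl:variable>
    <xsl:sequence select="string-join($bits, '')"/>
  </xsl:function>
  <xsl:template match="/">
    <!-- e.g. 'A Burlesque Burletta, in One Act' yields 'BurleBurle' -->
    <xsl:value-of select="local:label('A Burlesque Burletta, in One Act')"/>
  </xsl:template>
</xsl:stylesheet>
~~~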

In many cases, it’s easy to map these category labels to the four basic categories identified above: all of the above would count as “Spectacle” — except perhaps “BurleDrama” and “BurleBurleOpera” (the latter however proves to be a mistake in segmentation: the “opera” part of the characterization concerns the source, not the play itself.)

There are 57 titles which lack any of these categorizing substrings, many of them preferring humorous variants on traditional title page discourse, such as “An Original Irish Stew” or “A new and original, aerial, floreal and conchological fairy tale (Of which the most striking feature is borrowed from the Countess D’Aulnois)“. These I (regretfully) left to one side for the moment.

My first (rough) simplification produced the following figures for all 1500 titles in Lacy’s Acting Edition:

| Count | Percent | Category |
|-------|---------|----------|
| 765 | 51.0 | COMEDY |
| 426 | 28.4 | DRAMA |
| 55 | 3.6 | MUSICAL |
| 197 | 13.1 | SPECTACLE |

Categorizations by subtitle, for all 1500 titles

The relative proportions of these gross categories seem to be much the same as those derived from the more finely-grained Nicoll analysis, which covers only about half as many titles:

| Count | Percent | Category |
|-------|---------|----------|
| 370 | 50.6 | COMEDY |
| 155 | 21.2 | DRAMA |
| 76 | 10.3 | MUSICAL |
| 130 | 17 | SPECTACLE |

Categorizations by Nicoll, for Lacy titles only

The story looks rather different however when we count up the categories for all 15,021 categorized entries in the two Nicoll handlists, not just those for Lacy titles:

| Count | Percent | Category |
|-------|---------|----------|
| 5081 | 33.82 | COMEDY |
| 5513 | 36.7 | DRAMA |
| 2032 | 13.52 | MUSICAL |
| 2395 | 15.94 | SPECTACLE |

Categorizations by Nicoll, all titles

It has to be said that these counts are all fairly unreliable — aside from encoding problems caused by flakey OCR and my post-processing, Nicoll’s records count performances of the same title as different items where a title is of unknown authorship: this has the effect (probably) of inflating the counts for some categories such as Pantomimes. And of course my crude four part classification really needs much more thought. But putting all that to one side, it does seem that in selecting titles for his Acting Edition, Lacy preferred comedy over drama.

How old is that play?

Nearly every play in the Lacy catalogue – 1468 of them to be exact – now has a date of first performance, either explicitly given in the front matter of the text, or (for about 100 other cases) diligently extracted by me from Nicoll’s “Handlist”. These dates supply a terminus ad quem for the play’s composition: it cannot have been written after its first performance. Similarly, although the individual volumes are not dated, we may reasonably assume that the volume itself cannot have been printed before the latest “first performance” date it contains. This is not an entirely satisfactory procedure if we want to track changes over time, since the number of volumes allocated to a particular year varies over the 38-year period, but it is the best I can do.
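The volume-dating step is then just a grouping and a max(). The sketch below assumes, purely for illustration, that each catalogue <title> carries a @vol number and an ISO-format @firstPerf date; the real catalogue markup is less obliging, but the principle is the same.

~~~
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
  xmlns:xs="http://www.w3.org/2001/XMLSchema" version="2.0">
  <xsl:output method="text"/>
  <!-- For each volume, the latest first-performance date it contains is the
       earliest date at which the volume can have been printed. -->
  <xsl:template match="/">
    <xsl:for-each-group select="//title[@vol][@firstPerf]" group-by="@vol">
      <xsl:sort select="number(current-grouping-key())"/>
      <xsl:value-of select="concat('vol ', current-grouping-key(), ': not before ',
        max(for $d in current-group()/@firstPerf return xs:date($d)), '&#10;')"/>
    </xsl:for-each-group>
  </xsl:template>
</xsl:stylesheet>
~~~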

Nevertheless, I thought it might be interesting to plot for each volume how many of the plays it contains are recent, not so recent, or positively antediluvian. One hypothesis might be that the proportion of recently composed material declines over time, whether because less of it is available for Lacy to reprint, or because the bourgeois drawing room for which the later volumes are primarily intended prefers its drama antiquated. Another might be that the proportion of old warhorses in each volume is pretty much consistent over the whole life time of the Acting Edition.

Here’s my first attempt at visualising the data. It shows that there are a few volumes round about the start and end of the 1860s when the quantity of older material seems to shoot up, but that for the most part each volume contains a majority of material less than 10 years old. It also, however, suggests that the amount of new material in the 1870s starts to decline.

How balanced a sample is the VPP?

The full catalogue of Lacy’s Acting Edition comprises some 1500 titles, produced by just over 320 different authors. Over a third (583 to be exact) of all titles are produced by a small group of a dozen or so recidivists, each of them accounting for more than 25 titles. These include some predictable exceptions like “Anon” (65 titles), but also some extraordinarily prolific writers like John Maddison Morton (82 titles or 5% of the LAE), J.R. Planché (69 titles), and Henry James Byron (51 titles). In the second rank of creativity, there are 20 authors each of whom is responsible for producing between 10 and 25 titles, and who collectively account for 346 titles, about a fifth of the whole. These include such familiar names as William Shakespeare (24 titles), just ahead of the less famous Thomas Egerton Wilks (23 titles) and some distance from George Colman (12 titles). At the other end of the scale, only a tenth of titles (171) are the product of an author otherwise unrepresented.

One of my first questions when looking at the Victorian Plays Project catalogue was the extent to which it might be considered a representative sample of the whole LAE. That of course depends on the basis on which you are sampling: as a first exercise, I consider here authorship. The VPP sample contains 343 titles, which are the product of 130 authors, only 8 of whom produce more than 10 titles, and nearly half of whom (74) produce only one title. This seems like a markedly different frequency distribution. Moreover, the ranking of authors within a “top twenty” list for the two corpora shows some surprising differences. Some authors who appear in the upper half of the LAE list, e.g. Williams and Selby, trail near the bottom of the VPP list. It is unsurprising to find that authors low down the VPP list are also low down the LAE list; what does surprise me is the disparity in ranking for the comparatively frequent authors. Tom Taylor, the highest ranking VPP author of all, is only the 10th most frequent author in LAE; and John Palgrave Simpson, who ranks 12th in LAE, only just scrapes into the 25th row of VPP. Some of these oddities may be attributed to editorial decisions by the VPP: for example to exclude entirely titles by one William Shakespeare, even though these are ranked 14th in LAE.

Anyway, here are the Lacy Acting Edition Top Twenty authors, ranked by the number of titles attributed to them.

| LAE rank | VPP rank | Titles (LAE/VPP) | Author | SDA | Dates | Age group |
|----------|----------|------------------|--------|-----|-------|-----------|
| 1 | 3 | 82/15 | Morton, John Maddison | * | 1811-1891 | A1 |
| 2 | 4 | 69/14 | Planché, J.R. | * | 1796-1880 | A1 |
| 3 | 6 | 65/13 | [Anon.] | | | |
| 4 | 5 | 51/14 | Byron, Henry James | * | 1835-1884 | A2 |
| 5 | 7 | 41/12 | Suter, William E. | | 1811-1882 | A1 |
| 6 | 2 | 40/17 | Brough, William | * | 1826-1870 | A2 |
| 7 | 16 | 38/5 | Williams, Thomas J. | * | 1824-1874 | A2 |
| 8 | 19 | 37/5 | Selby, Charles | * | 1802-1863 | A1 |
| 9 | 11 | 36/7 | Burnand, Francis C. | * | 1836-1917 | A2 |
| 10= | 1 | 34/20 | Taylor, Tom | * | 1817-1880 | A2 |
| 10= | 8 | 34/12 | Coyne, Joseph Stirling | * | 1803-1868 | A1 |
| 12= | 25 | 28/4 | Simpson, John Palgrave | * | 1807-1887 | A1 |
| 12= | 20 | 28/5 | Oxenford, John | * | 1812-1877 | A1 |
| 14 | 0 | 24/0 | Shakespeare, William | | 1564-1616 | A1 |
| 15 | 24 | 23/4 | Wilks, Thomas Egerton | | 1812-1854 | A1 |
| 16 | 14 | 19/6 | Stirling, Edward | | 1809-1894 | A1 |
| 17= | 0 | 18/1 | Wooler, John Pratt | | 1824-1868 | A2 |
| 17= | 18 | 18/6 | Talfourd, Francis | * | 1828-1862 | A2 |
| 17= | 21 | 18/5 | Jerrold, Douglas | * | 1803-1857 | A1 |
| 17= | 11 | 18/7 | Halliday, Andrew | * | 1830-1877 | A2 |

LAE Top 20 Authors


There is of course much more one might wish to say about these authors. It is unsurprising to find that they are all males, and equally that they are mostly members of the Dramatic Authors Society, the agency which had been founded to ensure their copyrights were observed, and which also required payment of a fee for provincial representation. Their dates, with four exceptions, are taken from Wikipedia, where there is much else to be found. (The exceptions yet to be immortalized on Wikipedia are William Suter, Thomas J. Williams, Thomas Egerton Wilks, and John Pratt Wooler: their dates are taken from the Hathi Trust catalogue record). Just for fun, I decided to categorize them into two age groups on the following basis:

A1: born before the Battle of Waterloo (1815)

A2: born after Waterloo but before the Great Reform Act (1832)

Unexpectedly there are equal numbers in each group.

In the interests of full disclosure, I should add that the list of plays so far converted to TEI format demonstrates a tiny and even more divergent sampling of these authors. The most frequent author so far converted is J. Maddison Morton with 6 titles, which corresponds well with the LAE ranking, but the next three in that ranking are all so far missing entirely. In fact, of the authors in the LAE top twenty, the following are all so far missing: Planché, Byron, Suter, Williams, Selby, Shakespeare, Wilks, Stirling, Wooler, Talfourd, and Halliday. Only five authors are so far represented by more than one title (Morton, Coyne, Courtney, Oxenford, and W.S. Gilbert).

As a comparison, I also took a look at the author counts for the 45 or so LAE titles selected for inclusion in the Chadwyck-Healey “English Drama” collections. Only 10 authors appear here more than once, all of them represented by no more than 2 titles, except Simpson, who clocks in with three. Only four of these authors also appear in the LAE Top Twenty (the inescapable John Maddison Morton, J.R. Planché, John Palgrave Simpson, and Thomas Egerton Wilks). Clearly these titles were selected on some other grounds than their frequency in the LAE.

Nobody talks like that: a stylometric exercise

Back in March 2022, I was asked if I’d like to be interviewed as part of a research project concerning editing in the 21st century. What the hell, I said: I have close to no real experience of digital editing (unless you count my lovely digital edition of “Through Beatnik Eyeballs”), though I have made a reasonably satisfactory career out of telling other people how they should do it. One thing they really do teach well at Oxford is the ability to sound as if you know what you’re talking about… Anyway, I signed up, and after some vicissitudes was duly interviewed via Skype, sitting in my birdsong-filled garden, some time in June. Some considerable time later (at the end of December to be precise) I was invited to revise the transcript someone had made of my interview, and did so. I removed some of the more egregious hesitations and a few garden path sentences, but left the bullshit intact, collection of that being after all the object of the exercise. And later still, I learned, my transcribed interview joined a group of fifty or so others on a website, in a proper digital archive no less. This was all very satisfactory, though I was a little disappointed to find that the edited transcriptions were being made available in PDF or in RTF only. Are those really now considered to be appropriate long-term preservation formats? I suppose in a world where Boris Johnson can become prime minister, nothing should be surprising. And what happened to the audio? The design goals of this project were firmly based in a part of the forest of the Digital Humanities somewhat removed from linguistic analysis, discourse semantics, problems of speech transcription, or the textual analysis of academic talk. The project was to deliver something readable by human beings, like a book. Quite enough for one grant.

But hoorah for the open-minded spirit of open access, which makes it possible for me (and anyone else so inclined) to play with the resulting resources and do at least some things not originally envisaged in the project design! Entirely unsurprisingly, I spent a happy few days last week downloading the RTF files and converting them to TEI (the scripts and the results are now available in a github repo). TEI because that’s what I do, but also because I wanted to be able to do textual analysis properly.

My resulting TEI corpus contains 46 small documents, one for each interview, each consisting of a sequence of TEI <sp> elements, with a @who attribute to indicate a unique code for the speaker. Each <sp> element contains one or more paragraphs of text, preceded by the speaker code as given in the transcription (there are a few differences). Like this:

<sp who="#LB">
<speaker>LB</speaker>
<p>I would have liked to have been better paid.</p></sp>
<sp who="#MK"><speaker>MK</speaker><p>Sure.</p></sp>

This was prefixed by a paragraph of background information about the interviewee, which I banished to the <front> of the document, as a source of metadata. I also created a rudimentary TEI Header for each document.

Importing my 46 documents into TXM, I found the corpus had a total of 237,271 words. I made a partition on the basis of the @who attribute, so that all the words for each distinct speaker were grouped together. Here’s the bar graph from TXM showing how many words were associated with each of the 51 distinct speakers. It shows that one speaker (JOS) talks a lot more than anyone else: but this is unsurprising, since he is one of the two interviewers, and I have simply aggregated his side of each discussion irrespective of participant. I did the same for the other interviewer, MK, who has fewer interviews (14 as opposed to 32 for JOS); at 6000 words, he actually talks less than the three most garrulous interviewees (JC, AG, and RR), all of whom manage more than 6500 words. At the other end of the scale, there are five interviewees who hover around 2000 words apiece. The bulk of respondents fall comfortably between these extremes.

What can I do with this data? Well, treating it just as data, it might be interesting to see whether the frequency with which words, or lemmas, or POS codes appear in each speaker’s chunk is much the same, or whether some stylometrical statistic can be used to group like-speaking speakers together. Does everyone talk in more or less the same way or (ex hypothesi) do professors and old lags like me talk differently from early career researchers? It’s not quite the typical stylometric use-case (which tries to establish probable authorship on the basis of similarities) but close. Fortunately for the mathematically challenged, there exists a fairly well-established range of tools designed to explore such matters. Unfortunately for the mathemagically challenged (amongst whom I unreservedly place myself), you do need to know what you’re doing with these really quite sharp-edged tools. So please forgive any idiocies in what follows…

I played with TXM and with Stylo, both of which claim to be usable by the non specialist, and both of which have (interestingly different, but that’s another story) user interfaces. TXM has the great advantage of accepting TEI XML as input and treating it sensibly. Stylo requires me to pre-process my XML text into dumbed down plain ascii using arbitrary naming conventions to provide metadata. Both produce fancy graphics.

Here, for example, is a dendrogram from Stylo, showing how my 50 locuteurs cluster together, if we look just at the highest frequency lemmas. If I interpret this correctly, it shows the interviewers (“allJOS” and “allMK”) grouped together and distinct from all other respondents, which seems reassuring.

This is even more evident in the Principal Component Analysis also produced with Stylo, which shows JOS and MK as complete outliers.

And TXM provides further confirmation of this from a lexical perspective:

I think what this is telling me is not only that JOS and MK are outliers, miles from all the other documents shown here only as a red splodge in the blue cloud, but also that the words they favour are characteristically to do with their role as interviewers (What, future, question, projects, maybe, etc.). Or so I believe. But clearly I am going to have to do a lot more background reading before I can say anything really interesting about this little dataset…

Consistency is a good thing…

Now that I have all the available files in a form which is at least valid according to a TEI P5 schema, I can start fussing about the consistency of the markup they contain.

Let’s start with an easy one. Attribute names may be marked up using the element <att>, or using the element <ident>, or using the element <name>, or just flagged in the running text, by a preceding @ (I didn’t find any cases of a following =, though I suspect there are some). I really don’t care a lot which is used, but I do care that there should be just one rule to bind them, not four. So how do things stand at present?

There are 298 <att> elements, 212 <ident> elements, and 98 <name> elements. <name> is used both for personal names and for names of attributes, classes etc. A first step therefore is to turn all name[@type] elements into idents, and to simplify the values for type. A second step might be to look at all occurrences of attribute names simply flagged with an @ sign. In wPressDox, the regex [[:space:]]@[[:alpha:]][[:alnum:]+] matches 608 times; in legacyTEI files, only 18. Some careful regex matching might enable me to turn the @thing cases into <att>s in most cases. But hold on a second: if consistency is the goal it would be much easier to turn all <att>thing</att>s into @thing than the reverse. The question really is: is faithfulness to the original tagging goals completely unimportant? We have 16 cases of ident[@type=’attr’] and 47 of ident[@type=’attrName’] and 21 cases of ident[@type=’attr’] as well as 32 name[@type=’attr’], and indeed some random occasions where <code> (with no @type value) is used to delimit an attribute name. Making all of these consistently <att> seems to preserve the original encoder’s goals. Turning them all into @xxx however seems somewhat different.
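A minimal sketch of that normalization, restricted to the @type values counted above and assuming the documents are in the TEI namespace, would be something like:

~~~
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="2.0"
  xpath-default-namespace="http://www.tei-c.org/ns/1.0"
  xmlns="http://www.tei-c.org/ns/1.0">
  <!-- identity transform -->
  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>
  <!-- the various ways attribute names have been tagged all become <att> -->
  <xsl:template match="ident[@type = ('attr', 'attrName')] | name[@type = 'attr']">
    <att><xsl:apply-templates/></att>
  </xsl:template>
</xsl:stylesheet>
~~~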

There are also cases where the original tagging has tried hard to make distinctions we would probably no longer bother with: for example <mentioned> and <soCalled>. I leave these well alone.

Now a more tricky one.

There are 1817 <div> elements, 705 of them being @typed. 619 of those are type=’h2’, but this does not necessarily reflect their hierarchic position, merely that they were originally indicated by an h2 level heading in the WordPress file. Of the 83 typed divs which are not “h2”, the most frequent are h3 (45), followed by h1 (8). However, of the 45 h3 divs, only 20 have an h2 parent; the remaining 25 are contained by an untyped div. Other very rare values include “agendaItem” (3), appendix (4), and glossary (3).

It seemed like a good idea to check that <div> elements contain something other than a nested div and remove the redundant layer if not. The xpath //div[count(*) eq 1][div] finds 52 items, though this may be an artefact of my retagging script. Somewhat more problematic are <div>s which contain just a <head>: in some cases, these are probably genuine: for example

<div><head> 12:30 – 13:30: Lunch</head></div>
<div type="h2"><head>Goodbye Peter and thank you! 😟❤️</head></div>

but absent supernatural powers, it’s in principle impossible to know how to interpret a sequence of headings in the wordpress files. For example, here’s a snippet from the 2020-10_30824 document:

The “Review…” lines are div/head elements containing a link to another part of the document (this kind of transclusion happens nine times in all). Are they siblings or parents of the “SUNDAY…” div/head elements? Is the “Proposal on ruby glosses” a child or a sibling of “SUNDAY, 25…”? You tell me. I have mostly left them all as siblings for the moment.

Then there are lists: 2456 <list>s in all, only 109 of them typed; the @type values are “unordered” (33), “simple” (21), “ordered” (25) and “gloss” (30).

There are 45 untyped lists which contain one or more items containing a label; three of these contain lists with labels as children of the list as well as labels as children of the list/item, mercifully all confined to one file (2009-12). The <label> tag is used ambiguously: sometimes it contains a person’s siglum, sometimes a subheading. On further investigation, the siglum usage occurs only in one file (tcm24), so I changed all these to <rs>. The 30 explicitly gloss lists are all in the TEI legacy files too.

Further down the slippery slope: there are plenty of minority interest tagging distinctions: I have turned a scattering of <quote>s into <q>s, and all <ptr> elements into <ref>s, but not yet checked that all the @targets actually go somewhere. I have retained 9 cases of <time> which might just as well be <date>s; and six cases of <foreign>; also 76 <lb/> elements standing in (mostly) for proper structural markup. I have retained and made valid the one document in which <sp> and <speaker> elements have been used, but not tried to add such markup to the couple of occasions where the minutes launch into dramatic mode.

But rather than continue to polish this pig’s head, I have spent today providing it with some infrastructure, and putting it all in an accessible repository at https://github.com/lb42/theCellar

I also spent far too much time providing a CETEICEAN front end to it, at https://lb42.github.io/theCellar/tcMins/index.html

Plenty more to do. Of course.

The usual problem

Now that I have all of the TEI Council minutes in XML which is more or less valid against TEI-All, I can start worrying about defining a sensible schema for them, oh bliss. One possibility might be just to accept and preserve every tagging decision taken during the long history of this archive, even the silly ones. Another might be to retro-convert everything to a single brutalist vision of how things Ought To Be. Or somewhere between the two extremes, perhaps.

Over the last 23 years, different editors of TC minutes have taken different views in all the places where you might expect them to. Even in the days when minutes were prepared in kosher TEI, mostly conforming to TEI Lite, there was still plenty of scope for different practice. Shall we distinguish soCalled and mentioned, term and emph? Are we consistent in using emph for linguistic emphasis rather than formatting? Do we distinguish q and quote, and if so, for why? If we have gi and att (or occasionally ident type=’att’), do we also need tag, code and ident?

In more recent times, when such ontological anxieties have become perhaps less feverish, the minutes use a comparatively restricted set of distinctions, mostly to do with whether a snippet of text is in italic or bold, or used as a heading or a link, or is a list item. Indeed, sometimes the tagging decisions we see in the XML file are purely an artefact of the formatting tweaks needed to present the minutes on the WordPress website and have little to do with document structure or meaning. And in many cases, sadly, if a semantically tagged version of such documents ever existed, it is now lost. Should we, in the interests of consistency, enforce the lowest common denominator across the whole set of documents?

Consistency at least in the way major components of each document are presented would surely be advantageous. To take a simple example, every set of minutes begins with a list of the persons participating in the meeting. Sometimes it is presented as a list of items; sometimes as a single paragraph; sometimes as a sequence of paragraphs. Almost always the names of individual attendees are associated with a siglum or set of initials, but the way in which this is all represented in the XML structure varies considerably. This sort of thing is easy, if time consuming, to make consistent. And probably something like the current conventions, in which each person’s name is given as a distinct <item> within a <list>, should be aimed for, since it is clear that the various ways these lists are currently presented are really only accidents of formatting, of much less interest than the ease of processing the list in various ways. Whether or not to deploy the full TEI paraphernalia linking occurrences of a person’s initials in the text to their appearance in the list of attendees is another question.

Of course, if we were starting this exercise from zero, we would follow the textbooks and first carry out a data analysis. What are the important entities in a set of minutes, and what are their properties? Each of these documents relates to a meeting which took place over one or more days, in a specific place, or in cyberspace, with a specified set of participants. The minutes indicate the topics discussed, to some extent formalised in terms of identified issues, or action points. We might also ask what sorts of research questions should our analysis facilitate: how often do particular individuals or kinds of individual intervene? How long does it take for an issue to be resolved? How many different issues are under consideration at a particular time? Where do issues come from? And so on.

But we are not starting this exercise from scratch. The documents already exist. Moreover, the conceptual entities they are concerned with, and therefore represent, change over time, reflecting the Council’s evolution both in terms of its practice and its sense of purpose. That purpose has always been to maintain and develop the technical content of the TEI Guidelines, of course; but with the availability of sophisticated issue-tracking and reporting software the way in which this is carried out has changed a great deal. Consequently the operational model – the modus operandi – of the Council has also changed a great deal. These changes are necessarily reflected in the organization and content of the minutes.

Writing a full history of the TEI Council’s evolution is not however the purpose of this document, tempting though it is. A few salient aspects of that history do however affect our document analysis. For example, it’s necessary to understand that when first set up, the Council worked very much in the same way as the original TEI project: its role was largely to initiate, supervise, and integrate work carried out in more or less autonomous working groups. This had worked well for some major expansions of the P5 Guidelines, such as the addition of manuscript description, or character encoding issues following the adoption of Unicode, where the TEI had been able to constitute a motivated and informed group of experts to produce concrete proposals; less well in areas where such a group proved harder to constitute or motivate. For the first five years of its existence, from 2002 to the publication of TEI P5 1.0.0 in 2007, however, the Council’s minutes are full of reports from specific working groups, and actions on someone to pursue them.

This was also a period during which the TEI enjoyed the luxury of two paid editors. The process by which the Council itself took over editorial responsibility probably started with the full-scale review of the first draft of P5, in which each chapter was assigned to a Council member for review, though actual implementation of changes to the Guidelines (which involved a version control system called Perforce) remained a specialised activity, not available to all. The minutes from this period necessarily therefore have many “action points” aimed specifically at the editors.

For releases 1.0.1 to 2.7.0 (2008 to 2014) the following formulation of the Council’s role appeared on the PDF title page:

TEI P5:
Guidelines for Electronic Text
Encoding and Interchange
by the TEI Consortium
Originally edited by C.M. Sperberg-McQueen and Lou
Burnard for the ACH-ALLC-ACL Text Encoding Initiative
Now entirely revised and expanded under the supervision
of the Technical Council of the TEI Consortium
edited by Lou Burnard and Syd Bauman

Only in September 2014, with the 2.7.0 release, did that last line disappear, establishing finally that the Council was now editorially responsible for the whole.

By this date the Council’s modus operandi had also changed considerably. Already, in 2009, we find the Council reviewing and acting on proposals for change to the Guidelines known as “feature requests”, originating from the wider TEI community rather than from the Council or the Board. A key step towards expanding this practice was the adoption of the open source issue tracker provided by sourceforge, which hosted the TEI Guidelines source from 2007 onwards, and remains a recognizable forerunner of the current github based system.

The move to such systems has several implications for the current archival project. Firstly it means that a substantial amount of the TEI’s intellectual history is now exhaustively documented, including all sorts of crazy ideas and false starts and frequent repetition, but all on a platform which the TEI itself does not own or control. Secondly, it means that the links into the documentary base provided by those external systems and the more diplomatic narrative constructions provided by the current minutes are really quite important if we wish to develop a proper historical understanding. And finally, of course, the availability of this detailed repository of issues and their resolution has changed dramatically the way the TEI Council does its work.

Defining the target

It’s easy to say rather glibly that TEI markup is a good archival format, and in many respects it is: experience shows that a TEI file can nearly always be read without too many assumptions about the platform or software needed to read it. Because a TEI document uses a very basic form of labelled bracketing, developing software to act upon the markup is a breeze; moreover because the semantics and syntax of the markup are well defined, the software can perform whatever tricks it likes on the basis of an explicit model of the document’s structure and semantics. The tricky part is deciding what exactly the components of that explicit model should be: what (to coin a phrase) is this text really?

On the screen I am currently typing at I see that the phrase “Defining the target” and the word “really” are both in an italic font. The first is a heading, and the second is a word I wish to emphasize. In neither case is it particularly helpful to state that there’s a font change here: if I lost that information for the first case you would still (probably) recognise the words as a heading on other grounds (it’s not a sentence; it’s a separate block; it’s in a place where a heading is conventionally appropriate etc.) but in the second, without the signal given by the font style change you have no easy way of noticing that this word is meant to be more salient than the others, still less of recognising the allusion it makes to a famous journal article. Is recognising (and preserving) this emphasis as essential a part of this document as distinguishing the heading from what follows, or noting the paragraph divisions ?

Although it’s tricky, it’s something I and others have been doing for decades, this business of deciding which are the “essential” components of a document, independently of its realisation on screen or paper. The claim is not just that this separation of the document from its realisation is meaningful, but that it’s also useful. Certainly it makes it much simpler to process masses of similar but different documents in a reasonably intelligent way if their structural components and semantically salient properties are explicitly and exhaustively flagged in the same way. Certainly there may be a price to pay for that simplicity: we may have to renounce the ability to visualise the document exactly as one or other of its many historical realisations did; just as we do for other cases where such a realisation depended on a specific software infrastructure. Good luck emulating a pixel-perfect WordPerfect 4.2 or WordStar view of your TEI document on the basis of its TEI archival form.

All this by way of prelude to the next stage in my attempts to recover/reconstruct a usable TEI archive of the deliberations of the TEI Council. Those deliberations currently exist (as previous blog entries have shown) in one or more of three different forms: as Google Docs, as WordPress HTML pages, or in one or other legacy TEI format. All of these formats are relatively simple to convert into XML without loss of such information as they already contain: the task is to define a minimal TEI markup scheme to which they can all be reduced, without losing anything essential. It is that classic TEI markup problem: what do you want to distinguish in your documents? With the added constraint that I’d rather not have to introduce distinctions not already explicit (one way or another) in the sources.

I started with the Word Press XML files, since these constitute the official published record, even though they have many shortcomings. I wrote another perl script to extract a list of all the different XML tags present in the files, and an XSLT stylesheet containing a default template for each of them, mapping format-oriented tags like <h1> and <b> to semantic ones like <head> and <hi>. I then spent a happy hour or three fiddling with that, before deciding that this approach was too labour intensive to be a general solution.

So I moved on to the Google Doc files. I exported them all as docx files, applied the default TEI docxtotei conversion, and started looking at my 100 or so allegedly TEI documents. The first step was to generate an ODD which described their actual tagging practice, for which I used the TEI oddByExample utility. This is a good way of starting the process, but it has some quirks (like specifying all the element classes you might use, even though you don’t actually use any of them, and explicitly deleting each attribute supplied by a class rather than deleting the class), and one major drawback. The drawback is actually perhaps a virtue: the schema you get from the ODD it generates is a strictly conformant TEI subset of TEI All. So if your data has features which are not valid in TEI All, shall we say @xml:id values which are of the wrong datatype, or empty <list> elements, or <list> or <table> elements appearing directly inside <front> instead of being decently wrapped inside a <div> … it won’t be valid.

(These examples were not chosen at random, by the way: they are all the consequence of a bug (issue https://github.com/TEIC/Stylesheets/issues/604, reported this morning) in the current docxtotei tool). Anyway, this means that either the ODD needs to be adapted to be more forgiving, or the data needs to be corrected to be less weird. Doing the former would also mean tweaking the data (to avoid polluting the TEI namespace with the weirdness), so maybe choosing the latter course of action is the wiser decision. Especially since it’s not so hard to correct the aberrations I have identified so far.

So my first XSLT stylesheet is simplify.xsl, which does just that. If it finds a list or a table directly inside a front it wraps them in a p and looks the other way. When it finds an anchor it sticks an extra letter in front of its identifier. After its ministrations, all 112 generated XML files (bar one, which had an empty <list> element) were valid against the generated ODD schema. Hosannah.
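For the curious, the ministrations in question amount to little more than the following rough sketch (not the actual simplify.xsl; note too that any pointers to the renamed anchors would need the same prefix added):

~~~
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="2.0"
  xpath-default-namespace="http://www.tei-c.org/ns/1.0"
  xmlns="http://www.tei-c.org/ns/1.0">
  <!-- identity transform -->
  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>
  <!-- a list or table directly inside front gets wrapped in a p -->
  <xsl:template match="front/list | front/table">
    <p><xsl:next-match/></p>
  </xsl:template>
  <!-- anchors get a letter prefixed to their identifier so it is a legal xml:id -->
  <xsl:template match="anchor/@xml:id">
    <xsl:attribute name="xml:id" select="concat('a', .)"/>
  </xsl:template>
</xsl:stylesheet>
~~~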

That leaves 51 items with no easy XML representation, or 12 items if we assume that the legacy TEI format also counts as potentially easy XML. Sadly all but 1 of those 12 are in the “ill formed” Word Press XML format, so some (more) manual tweaking will be required before I can safely apply the retagfromWP conversion to them. Then I will have to work out what to do with the legacy TEI files, some of which are still in P4. But I think I see a way forward…

Surveying the Remains

There have been 161 TEI council meetings up to February 2023. The minutes of each meeting (conference call or face to face) – except one – are available on the Council website, but only as Word Press pages.

I have tracked down a P4 or P5 source file for 40 of them, covering meetings up to October 2008. I think there must once have been more, because some of the WordPress pages show clear signs of having been adapted or converted from a TEI original. In several cases, some TEI tags are still present, notably <gi> (appears in 20 cases between 2009-04 and 2014-06) or publicationstmt (sic), which appears along with other remnants of a TEI header in 38 cases up to 2016-03. But there is no trace of the original source files anywhere on the current website.

From 2016 onwards, the website provides only Word Press format files, in which HTML tagging is used. However, this tagging is not entirely well-formed: there are many cases where hard line breaks in a table cell are marked by HTML p end-tags, for example. And at least one where the internal structure of a table row has been completely lost.

As a first step, I wrote a perl script which did its best to extract a single well-formed XML document from each set of Word Press pages. This failed consistently for the 36 pre-2016 pages which contain residual TEI tagging but worked reasonably well for the remainder, most of the time. Only 13 of the post-2016 files (out of 85) needed hand-editing to make them well formed, though the tagging still leaves much to be desired. In particular, I realised that some of the WordPress files made no attempt to preserve the often deeply-nested structure of the minutes, or distinguish marginal annotation from the text.

Since 2016 the minutes have been edited in Google Docs and drafts are therefore (currently) available in Word, ODT, or other formats from the Google Docs website, if you know where to find them. This part (finding them) became much easier when I asked former Council colleagues to share their secret stash of drafts with me. Converting from Google Docs to TEI is comparatively simple and much less error prone than working with the WordPress pages directly. It really ought to be the WordPress pages which constitute the document of record for these minutes, but …

It seemed like a good idea to do a bit of checking in any case. So here’s what I did:

  1. use curl to download all the word press pages to 161 separate files called yyyy-dd.html
  2. use a perl script `articulate.prl` to extract from each of them a (hopefully) well formed xml file containing just the ‘article’ recognised by wordpress; save the result in a file called yyyy-dd_dddd.xml (where dddd is the wordpress article number)
  3. check the well formedness of the resulting files with `xmlwf` and spend no more than a day or two fiddling with the ill-formed ones to improve them
  4. spend a lot of time downloading and renaming files from Google Docs. The downloading was needed for files not in the zip James sent me; the renaming was essential for my personal sanity.
  5. I then enriched the XML file I made in the previous blog entry with links to all the files collected together.

At the last count, there are 162 entries (this includes one which is mysteriously missing from the current TEI website). Of these,

  • 85 are available as well formed wpressxml files
  • 37 are ill formed wpressxml files
  • 41 are only available in a legacy TEI format
  • 115 are available as draft versions from Google Docs

Of the 37 ill-formed word press files, 11 are not also available in Google docs format.

The Google Docs collection lacks anything before 2012-04, and (for no apparent reason) three more recent items : minutes from 2014-01, 2015-10, and 2017-11.

So my next step will probably be to define a target TEI format (with an ODD of course) and set about writing snippets of XSLT.

Yesterday’s Information Tomorrow (maybe)

If you go to the TEI’s website at http://www.tei-c.org you will find, as you might hope, a respectable number of documents tracking the evolution of the Text Encoding Initiative over the last umpteen decades. Curiously, though, the record for the most ancient period (before 2008, shall we say) is a lot easier to find and manipulate than that for more recent times. This posting records my attempts to put together in archival format the full record of the meetings of the TEI Technical Council.

The Council, as any fule kno, met for the first time in 2002, and is still producing regular reports of its debates and its decisions. There is a page on the TEI website (https://tei-c.org/activities/council/Meetings/) which “lists TEI Technical Council meetings and teleconferences, with links to the meeting minutes.”

I downloaded that list (it’s a WordPress HTML file, of course), ran it through HTML Tidy, and processed the result to produce a nice simple TEI file of entries like this

<list>
<head>2022</head>
<item> conference call <date>8 December 2022</date><ref
target="https://tei-c.org/activities/council/meetings/tei-technical-council-teleconference-2022-12-08/"
> [on website]</ref></item>
<item> conference call <date>10 November 2022</date><ref
target="https://tei-c.org/activities/council/meetings/tei-technical-council-teleconference-2022-11-10/"
> [on website]</ref></item>

My quixotic goal is to enrich this data with links to TEI source files for each set of minutes, preferably in a consistent TEI format.

Now, twenty years ago this would have been quite a reasonable proposition since (as the current TEI Vault shows), the TEI once had an “eat your own dogfood” policy of producing all of its documents in TEI. Over the years, this policy has varied somewhat, largely as a consequence of changes in the tools available, and the culture that goes with them. These policy changes relate not just to the look and feel of the website itself but also to which versions of its contents are preserved and how. Today, I think it is not unreasonable to say that much of the TEI website exists only as WordPress pages: many of those pages were first created as Google Docs and then converted to WordPress, some of the older ones were created originally in hand-crafted TEI XML, and the very oldest were created in TEI P4 SGML, but the only versions that can reliably be downloaded from the current website are in WordPress HTML.

Much of the time, of course, this is unproblematic. Mostly, we just want to read the stuff, not analyse it. But occasionally, and especially for the older material, whoever or whatever was responsible for producing the WordPress files has really made a hash of it. Consider, for example, the working paper

https://tei-c.org/activities/council/working/tcw02-approaching-the-son-of-odd-source-markup-for-p5/

This working document is quite an important one in the history of ODD. But as currently presented on the TEI web site it is badly broken, to the extent that the text has become incomprehensible.  Consider this paragraph:

Comparison with earlier versions of the page (thank you, Wayback Machine) shows that this is a recent breakage: here, for example, is the same paragraph as it appeared back in 2005, when it entered the Internet Archive.

The Wayback machine, of course, can only archive what its crawlers find. They found this page a couple of times between 2005 and 2018, both of them looking fine, but thereafter only the WordPress version. This would not matter so much were it not for the fact that the original TEI P5 source has not apparently been archived anywhere, so the breakage cannot easily be fixed.

Such losses in translation occur occasionally in more recent documents too. Here’s a paragraph from the WordPress view of document tcm46  (Minutes of the TEI Council’s April 2011 meeting) for example:

Again, until or unless I track down the original version of this file, there’s no way of filling that particular gap.

Less annoying, but more pervasive is the fact that the WordPress files rarely try to preserve any structural or semantic information. The markup will mostly contain a long series of list items, some of which may pertain to the same topic, some of which may in fact be headings, some of which are an accident of formatting. In the text (apart from links) there’s no explicit indication of interesting things you might want to search for, such as names, places, or dates.

Very few WordPress files are well-formed HTML, though the wonderful W3C utility tidy does a good job of pushing them into a processable shape. Out of 120 WordPress files, 38 (nearly a third) failed to respond to this treatment, mostly because they contained an unhealthy mixture of HTML and TEI or TEI-like tags.

And finally it has to be said (I’ll be brief) that it seems really sad that the TEI is preserving its deliberations in a proprietary, tool-dependent, presentation-oriented format … the kind of format which the TEI was set up to preserve scholarship from. What kind of apostasy is that?

Hunting for Lacy traces in the digital world

Lacy’s Acting Edition was published in a series of 100 volumes, each containing up to 15 plays, between 1850 and 1874. (All dates approximate and unreliable). In addition to the collected volumes, Lacy sold individual play titles in cheap (6d) paper copies, many of which also found their way into private collections and public libraries. Consequently, copies of various components of the Lacy Acting Editions are now scattered across many research libraries. In some cases, they also exist in digital form, usually as scanned page images.

It is relatively easy to recover details of a library’s holdings from an online catalogue, for example by searching for the string “Lacy’s Acting Edition” or by specifying “Thomas Hailes Lacy” as publisher. It is less easy to restrict the search to generally available digital versions, as there is still no reliable joint catalogue of digitized texts in major public collections, combining the digital holdings of, say, the British Library, the Bodleian, and other UK libraries, in the same way as has been done for many US libraries by the Hathi Trust, or more generally by the Internet Archive. (A project at the National Library of Scotland did set up such a site, under the name opentexts.world, a few years back, but its current status is unclear and it appears to be unmaintained.)

The ease with which the results of such searches can be obtained in a machine-tractable form (rather than simply displayed on a web page) is also quite variable. One is usually forced to fall back on web-scraping techniques and quite a lot of manual post-editing. This note documents my fairly uneven progress towards a definitive collection of links to existing and freely available digital copies of the plays constituting the Acting Edition on various sites. The fairly good news is that, as of today, of the 1498 titles making up the 100-volume Acting Edition, I have identified 586 which are freely available in some digital form somewhere. Track progress by looking at my online catalogue.

Hathi Trust

A search for the string “Lacy’s Acting Edition” anywhere in the catalogue record at https://catalog.hathitrust.org/ produces 294 hits, of which 246 are available in “full view” (i.e. should be downloadable without formality). A search for the string “Thomas Hailes Lacy” as publisher somewhat counter-intuitively produces only 94 hits. The web page displaying results looks like this:

Results from a HT search: setting the page length to the maximum allowed (100) makes it feasible in this case to download all pages with minimal scrolling.

As usual, the easiest way to screen scrape is to save the HTML page as a file, use tidy to make it into well-formed XML, and then write XSLT to extract the useful information. In this case, the generated XML uses an undefined prefix “xlink:”, which I had to remove by hand, but apart from that everything needful was done by the XSLT stylesheet htScraper.xsl, resulting in a document (htListFull.xml) containing entries like this:

<bibl>
 <title>The first night; a comic drama in one
   act.</title>
 <pubDate>1800</pubDate>
 <author>Lacy, Thomas Hailes, 1809-73.</author>
</bibl>
<bibl>
 <title>After the party; a comedy in one act.</title>
 <pubDate>1870</pubDate>
 <author>Lacy,
   Thomas Hailes, 1809-1873.</author>
 <ref target="https://hdl.handle.net/2027/hvd.32044072039373">HT</ref>
</bibl>

No <ref> element is generated for entries which are not accessible in “full view” mode. Also note that the handle quoted above is for the Hathitrust index page; to download the whole text as a single PDF file you must visit that page, and wait while the PDF is constructed. Oh, and yes, you must also be logged in at a HathiTrust member institution. So much for “full view” access.
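
For the record, here is a rough sketch in Python of that save-tidy-transform routine. The file names are hypothetical, tidy must be on the path, and lxml's XSLT 1.0 engine stands in for whatever processor actually runs htScraper.xsl:

# scrape.py : tidy a saved results page into XML, then apply an XSLT extraction
# (a sketch; file names are hypothetical; requires the tidy command and lxml)
import subprocess
from lxml import etree

# turn the saved HTML into well-formed XML; tidy exits non-zero on mere warnings,
# so don't treat that as fatal
subprocess.run(["tidy", "-asxml", "-utf8", "-quiet",
                "-output", "htSearch.xml", "htSearch.html"], check=False)

transform = etree.XSLT(etree.parse("htScraper.xsl"))
result = transform(etree.parse("htSearch.xml"))
result.write_output("htListFull.xml")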

Open Texts

I blogged about this now sadly un-maintained site back in October 2020. The site was dark for a while, but seems to be back for the moment: this morning I visited and was able to download a list of 106 hits in CSV, XML, or JSON in one click, which was nice.

This is what I like to see at the foot of my first page of results

Individual results looking like this:

<doc>
 <str name="organisation">Bodleian Libraries</str>
 <str name="idLocal">016930688</str>
 <str name="title">King of the Merrows, or, The prince and the piper : a fairy extravaganza / written by F.C. Burnand, from an original plot constructed by J. Palgrave Simpson.</str>
 <str name="urlMain">http://purl.ox.ac.uk/uuid/42d1488e99284fef9421268c33e4730d</str>
 <int name="year">1862</int>
 <arr name="date">
  <str>1862</str>
 </arr>
 <arr name="publisher">
  <str>Thomas Hailes Lacy</str>
 </arr>
 <arr name="creator">
  <str>Burnand, F. C.</str>
 </arr>
 <arr name="description">
  <str>First performed at the Royal Olympic Theatre, 26th December, 1861.</str>
 </arr>
 <arr name="placeOfPublication">
  <str>London</str>
 </arr>
 <str name="catLink">http://solo.bodleian.ox.ac.uk/permalink/f/89vilt/oxfaleph016930688</str>
 <str name="language">English</str>
</doc>

are easily converted (e.g. by my stylesheet opentexts-conv.xsl) to produce

<bibl>
 <title>King of the Merrows, or, The prince and the piper : a fairy extravaganza / written by F.C.
   Burnand, from an original plot constructed by J. Palgrave Simpson.</title>
 <pubDate>1862</pubDate>
 <author>Burnand, F. C.</author>
 <note>First performed at the Royal Olympic Theatre, 26th December, 1861.</note>
 <ref target="http://purl.ox.ac.uk/uuid/42d1488e99284fef9421268c33e4730d"/>
</bibl>

which is easily merged into the main Lacy catalogue.
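
The field mapping is simple enough to sketch in Python too, just to make it explicit; the element names are those visible in the sample record above, the input file name is made up, and the real work is of course done by opentexts-conv.xsl:

# ot2bibl.py : map an Open Texts <doc> record to a TEI-ish <bibl>
# (illustrative only; element names follow the sample above, file name is made up)
from lxml import etree

def doc_to_bibl(doc):
    bibl = etree.Element("bibl")
    etree.SubElement(bibl, "title").text = doc.findtext("str[@name='title']")
    etree.SubElement(bibl, "pubDate").text = doc.findtext("int[@name='year']")
    etree.SubElement(bibl, "author").text = doc.findtext("arr[@name='creator']/str")
    note = doc.findtext("arr[@name='description']/str")
    if note:
        etree.SubElement(bibl, "note").text = note
    etree.SubElement(bibl, "ref").set("target", doc.findtext("str[@name='urlMain']"))
    return bibl

for doc in etree.parse("opentexts.xml").iter("doc"):
    print(etree.tostring(doc_to_bibl(doc), pretty_print=True).decode())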

Moreover, in this case (hoorah for the Bodleian), a visit to the publicly available URL actually downloads the whole of the PDF file without further ado.

Sadly, PURLs are available for only three of the items in the Open Texts list of 106; the vast majority (90) are handles from HathiTrust, and the rest (13) are links to archive.org. Moreover, the data has not apparently been updated since October 2020, which is presumably why it has nothing like the 316 handles I found in the Hathi Trust catalogue for myself. In fact, every one of the handles it supplies also appears in the htListFull.xml list.

Google Books

A nightmare. Google has digitized (almost certainly) all of the Lacy Acting Edition volumes, but it seems to be entirely arbitrary which ones you can access via Google Books. I have tried various approaches to searching (there is something called a `bibliogroup` for Lacy), and then reprocessing the resulting (very obscure) HTML, but cannot say I have succeeded in cracking this code. The file gbSearch.xml contains the screen-scraped-and-converted-to-XML output from a query for this; the stylesheet gbSearch.xsl extracts from it the 37 useful links to files you can actually download from Google Books (but you still have to go through a captcha check, of course).

Searching specifically for “Lacy Acting Edition” on Google Books will provide an exciting list of entries for each of the first 93 volumes in the LAE — but only two of them (volumes 77 and 93) actually have anything you can download. (I belatedly discovered that this annoying behaviour can be modified by selecting “Full View” from the drop down menu at top left of the query screen, which hides the titles you cannot have). On the other hand, there are also a few occasions where the text actually digitized for a specific title is the whole of the volume in which that title appears. Thus, searching Google Books for The Half Caste will provide you with a link for the whole of volume 97, in which that title appears. Likewise a search for In Three Volumes actually gives you a link to the whole of Volume 91. Anyway, once you have a reliable link to Google’s equivalent of the Internet Archive’s “details” page (at the moment, it looks like https://www.google.co.uk/books/edition/Oberon_An_opera_in_four_acts_in_prose_an/IoFaWP1TQgkC) you can pass that to Google, and get back a nice “New” Google Books page in the middle of which is a nice “Download PDF” button. Which works — once you have completed the annoying captcha test of course.

All very well if you have the time to spend cutting and pasting links: but why couldn’t Google have provided a simple download in a form I can script? I assume it’s for the same reason they want to control access to these resources — to stop unscrupulous entrepreneurs in the “Print On Demand” industry from making a swift buck. And we all know how effective that policy is, don’t we?

Bodley

Real librarians do it with Z39.50. But my results (bodleyTexts.xml) show only 9 titles available in digital form.

The Hall Collection

Every now and then, serendipitous searching pays off. The Hall Collection contains approximately 600 English plays, mostly from the late 18th and early 19th centuries, originally used as prompt books by a professional actress called Clara St. Casse. The Collection was donated to the University of Warwick Library by a Mrs G. F. Hall of Leamington Spa, together with a collection of other printed plays. Naturally it includes quite a few (102, to be exact) Lacy titles. Although the Warwick site (https://wdc.contentdm.oclc.org/digital/collection/hall) seems to provide only downloads and browsing of individual pages, someone, presumably from the Library, has also had the good sense and generosity to deposit the whole collection at archive.org, from which I was able to obtain an XML file (hallColl.xml) that can be readily processed to produce links to the 102 Lacy-published titles: see hallCollTitles.xml.

Internet Archive

This archive has an excellent search interface and will also deliver results in any tractable form you like, including JSON or XML. It cannot, however, perform magic to overcome variant cataloguing practices amongst the collections it has incorporated. So, for example, a search for “Lacy Acting Edition” throws up precisely one hit (“a copy graciously made available by Fordham University”). A more general search for “Thomas Hailes Lacy” gets me 125 hits, 102 of which come from the Hall Collection. A search for (thomas hailes lacy) AND -collection:(hallcollection) finds the 23 titles not included in the Hall Collection. On the other hand, a search for “T.H. Lacy” AND -collection:(hallcollection) finds 66 titles which are not in the Hall Collection, but which are not included in the foregoing results either.

On the bright side, the hits can be downloaded in a format which is more or less identical to that generated by the XML option quoted for the Open Texts server above, so mungeing the results lists together is a Simple Matter Of Programming, resulting in iaList.xml.
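
For anyone wanting to reproduce these searches programmatically, the Internet Archive’s advancedsearch endpoint will return the same result sets as JSON or XML; a minimal sketch (the query string echoes one of the searches above, and the requests package is assumed):

# iaSearch.py : query the Internet Archive advancedsearch API
# (a sketch; the query reproduces one of the searches described above)
import requests

params = {
    "q": '("thomas hailes lacy") AND -collection:(hallcollection)',
    "fl[]": ["identifier", "title", "creator", "year"],
    "rows": 200,
    "page": 1,
    "output": "json",
}
r = requests.get("https://archive.org/advancedsearch.php", params=params)
r.raise_for_status()
for doc in r.json()["response"]["docs"]:
    print(doc.get("year"), doc.get("identifier"), doc.get("title"))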

An experiment in CLS

Some time ago, I agreed to participate along with several others much smarter than me in COST Action Work Group 3. The goals of this work group were, amongst other things, to run a small experiment in counting verb frequencies on ELTeC texts enhanced with POS and lemma information. It took a surprisingly long time to find out exactly what contribution was required of me, and I make no claim to have got it right even now. But here’s what I thought I was doing.

First, I wrote an insultingly simple XSL stylesheet to produce a list, in descending frequency order, of verbal lemmas in each of the (now) 10 ELTeC level 2 corpora. For example, here’s the start of the file rom/verbFreq.xml:

<frequencies>
 <lemma form="face" freq="30919"/>
 <lemma form="avea" freq="29391"/>
 <lemma form="zice" freq="22673"/>
 <!-- ... and so on for several hundred more lines -->
</frequencies>

… which tells us that in our data Romanian’s favourite verb has the lemma face, and the next favourite is avea. The code for doing this is (like all the rest of the code described here) in the github repo COST-ELteC/ELTeC-data/Scripts if you care: it is imaginatively called verbFreqs.xsl.
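
The same count can be sketched in Python rather than XSLT, for those who prefer it; this assumes (as I understand the ELTeC level 2 encoding) that each token is a <w> element carrying @lemma and @pos attributes with UD-style values, and the directory name is hypothetical:

# verbFreq_sketch.py : count verbal lemmas across a directory of level 2 files
# (a sketch of what verbFreqs.xsl does; assumes <w lemma="..." pos="VERB"> tokens)
from collections import Counter
from pathlib import Path
from lxml import etree

TEI = "{http://www.tei-c.org/ns/1.0}"
counts = Counter()
for f in Path("level2").glob("*.xml"):    # directory name is hypothetical
    for w in etree.parse(str(f)).iter(TEI + "w"):
        if w.get("pos") == "VERB" and w.get("lemma"):
            counts[w.get("lemma").split("|")[0]] += 1

for lemma, freq in counts.most_common(20):
    print(f'<lemma form="{lemma}" freq="{freq}"/>')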

Next, I wrote another simple-minded script to extract from each novel a bag of words, with no markup or punctuation: just all the verbs, for example, or all the nouns, in their order of appearance in the text. So that celebrated work Hard Times, which begins in the original like this

<div type="group">
 <head>BOOK THE FIRST <hi>SOWING</hi></head>
 <div type="chapter">
  <head>CHAPTER I THE ONE THING NEEDFUL</head>
  <p>‘<hi>Now</hi>, what I want is, Facts. Teach these boys and girls nothing but Facts. Facts alone are wanted in life. Plant nothing else, and root out everything else. You can only form the minds of reasoning animals upon Facts: nothing else will ever be of any service to them.</p>
  <!-- ... -->
 </div>
 <!-- ... -->
</div>

generates a bag of words starting like this

want be teach be want|wanted plant root form be…    

if I ask for VERB lemmas, or like this

book sowing|sow chapter thing fact boy girl fact fact life mind reasoning|reason animal fact    service 

if I ask for NOUN lemmas. You may wish to complain about the behaviour of the lemmatizer here, but I am taking the path of least resistance and using whatever treetagger (in this case) produces without cavil. This deplorable laziness returns to bite me further below…

I wrote some Python to run the XSLT script filter.xsl which does this task: the script is called filter.py, and it uses a Python interface to the Saxon C processor, which I was very pleased with myself about when I got it working (less so later; see below). There’s more mundane detail of how to run it in the README in the Scripts folder.
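
For reference, the invocation looks roughly like this with the current saxonche package; the Python binding for Saxon C has changed name and API more than once, so treat this as a sketch rather than a transcript of filter.py, and note that the stylesheet parameter and the file names here are hypothetical:

# filter_sketch.py : apply filter.xsl to one novel via the SaxonC Python binding
# (a sketch assuming the current saxonche package; not the original filter.py)
from saxonche import PySaxonProcessor

with PySaxonProcessor(license=False) as proc:
    xslt = proc.new_xslt30_processor()
    exe = xslt.compile_stylesheet(stylesheet_file="filter.xsl")
    # hypothetical stylesheet parameter selecting which lemmas to keep
    exe.set_parameter("pos", proc.make_string_value("VERB"))
    exe.transform_to_file(source_file="FRA00101.xml",
                          output_file="FRA00101-verbs.txt")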

If still awake, you are probably wondering what the point of all this was. And here comes the scientific bit. The little workgroup I had signed up for wished to test a Hypothesis, which (if I understand it correctly) might be crudely summarized thusly:

  • The European novel undergoes some sort of seismic shift around the turn of the twentieth century, which is popularly known as The Rise of Modernism
  • Modernism has many stylistic correlatives, but they include notably a focus on the interior life of characters, on sensation and feeling, rather than on objective omniscient narrative
  • If this is true, we should expect to see a change in the frequency with which verbs associated with that ‘inner life’ appear over time.

I hope you can see where we are going with this, now. All we need is a reasonably plausible list of verbs which express aspects of ‘inner life’. And so, for the next few months, with zoom and email and similar modern contrivances, the group theorized how to actually produce such a list. I may have fallen asleep during the process and missed something critical, but eventually (I think) it was decided that we would explore two approaches to identifying our list. Firstly, we’d ask language experts to vote for their top ten “inner” verbs. Secondly, we’d use a statistical procedure (word vector embedding) to identify a list of candidate verbs automagically. Then we’d compare the results, declare victory, and move on.

What could possibly go wrong? Well, at least two things.

Firstly, the ask-an-expert approach turned out to be less successful than it might have been, largely for purely logistical reasons. If we had asked the experts simply to review the existing verb frequency lists for their language and identify in them those verbs which were indubitably and always betokeners of interiority, plus any others which were a bit thus inclined sometimes, then we might have got our results a bit faster. But we didn’t, and the experts, understandably a bit mystified by the whole process, gave us lists which varied widely in their format and scope. So I found myself having to tweak and readjust their contributions, to remove duplicates and ambiguity. As for the automagical procedure, it proved a little challenging for most participants to run, if only because it required access to a machine capable of running Google’s word2vec program which is not meant for your average laptop. In any case, you can see the resulting word lists in the file innerVerbs.xml which I hope is fairly self-explanatory.

Secondly, my simplistic notion of ‘lemma’ turned out to be problematic. As you noticed above, when unable to choose between two alternatives, treetagger obligingly gives you both of them, separated by a vertical bar. That’s no problem for me: I just discard the alternative. But other lemmatizers behave differently. For example, in our Portuguese data, the lemmas for reflexive verbs are suffixed by a # and an indication of person. In our Hungarian data, spelling variations of the same basic lemma are sometimes presented as different lemmas. In the first case, should I simply ignore the part of the lemma after the #? In the second, should I aggregate all the differently spelled variants and consider matches for any of them as equivalent? As usual in computational linguistics, it all depends what you think you’re counting…
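
Making the comparison explicit at least forces a decision; a normalisation step along the following lines is what I mean, though the rules here are only illustrative, based on the cases just mentioned (the Hungarian spelling variants would need an explicit mapping table, not a string rule):

# normalise.py : reduce a lemmatizer's output to a single comparable form
# (illustrative rules only; the example values are made up)
def normalise(lemma: str) -> str:
    lemma = lemma.split("|")[0]   # treetagger alternatives: keep the first
    lemma = lemma.split("#")[0]   # reflexive suffix (Portuguese): drop it
    return lemma.lower().strip()

assert normalise("want|wanted") == "want"
assert normalise("lembrar#3") == "lembrar"   # hypothetical Portuguese example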

Despite these metalinguistic anxieties, I wrote a (needlessly complicated) Python script called verbCount.py to count the frequencies of the inner verbs through time, comparing the things-called-lemmas in our various lists of inner verbs with the things-identified-as-lemmas in the level2-encoded files. Invoking various XSLT scripts and Saxon C as before, this script grudgingly churned out a file for each corpus under examination, with a row for each title and a column for each inner verb, like this:

extId    year verbs innerVerbs aimer connaître croire entendre regarder savoir sembler trouver voir vouloir
FRA00101 1860 3889  310         17    9        28     22       18       52     5       47      83   29
FRA00102 1883 5499  465        112   21        38     16       17       55    32       30      77   67
FRA00201 1910 7577  682         26   20        41     75       96       63    49       93     128   91

I say ‘grudgingly’ because the script was obliged to process the whole of every file in order to extract a year of publication from its TEI header, and consequently ran with noticeable slowness. If I’d thought to include the year of publication along with other metadata in the filename of the “bag of words” I could have used that instead, which would have been much quicker. Maybe if I get a better set of inner life verbs I’ll revise the scripts to do so.

Anyway, we now have a bunch of CSV files. And why? Because my colleague Diana has produced some R scripts which will plot this data set so everyone can understand it. Or at least look at it. Here’s what we get for some of the Portuguese data:

innerVerbs.png

I leave it to the statistically-informed to interpret this and other similar results. The closing conference of the COST Action, taking place next week, includes a paper (on which I am somewhat embarrassingly cited as co-author) presenting the results in more detail.

Reviving the VPP: a start

The Victorian theatre has not enjoyed documentation or digitization as systematically as has the Victorian novel, reflecting perhaps scholarly perception of their comparative artistic significance. Yet it is a truism that the influence of the Victorian popular theatre on the development of the novel during this period extended well beyond the efforts of dedicated amateur enthusiasts such as Dickens and Collins and their circle. In Emily Allen’s words, “Victorian theatre was the novel’s ally, inspiration, and competitor”. As an ongoing expression of popular culture, nineteenth-century theatre has deep roots and many branches; its lineage runs from the high gothic of romantic melodrama to the memes of cinema and modern-day television, embracing both the theatre of sensational spectacle and that of domestic realism. Yet for those wishing to see the phenomenon as a whole, to perform a kind of distant reading of its texts, there is nothing approximating to Bassett’s At the Circulating Library database of Victorian fiction (http://www.victorianresearch.org/atcl/search.php) in terms of completeness or coverage. Such attempts to document the Victorian theatre as do exist have generally done so in terms of the careers of individual actors, writers, or institutions. Although collections of the primary source materials exist in a few libraries, this is the consequence of individual collections or bequests, rather than of any attempt at systematic coverage.

One notable exception is Richard Pearson’s Victorian Plays Project (VPP), originally funded by the AHRC 2005-2007, and still hosted at the National University of Ireland in Galway. A key deliverable of this project was an online catalogue of the approximately 1500 titles making up Lacy’s Acting Edition of Plays, derived from the (apparently unique) surviving copies of that edition preserved in what was then the Birmingham Central Library.

Thomas Hailes Lacy began publishing contemporary plays at his Covent Garden printing house shortly after the Theatre Regulation Act of 1843, which removed the duopoly previously enjoyed by the Covent Garden and Drury Lane theatres. In a far-sighted move, Lacy acquired the rights to print plays from the theatrical managers, ostensibly to protect their copyrights, though he was not averse to a little piracy himself. These “Acting Editions” contained everything needful to produce a play: details of costumes, settings, blocking, accompanying business, etc., as well as cast lists and the text of the play itself. New titles appeared every year until the 1870s, when Lacy sold the whole collection to Samuel French, an American publisher with whom he had exchanged plays for publication for the previous two decades.

According to the existing VPP website (http://victorian.nuigalway.ie/modx/index.php?id=187), in addition to producing this on-line catalogue, the project aimed to “generate e-texts in .pdf format that replicate the original texts re-edited for electronic usage” and also to “create a database of plays marked up using TEI encoding in XML that will be searchable”. The website also states that “Transcription of the Lacy’s Catalogue, and editing and encoding of the texts was undertaken by the Victorian Plays Project using OxyGen TEI mark-up software and Acrobat Professional. ” (http://victorian.nuigalway.ie/modx/index.php?id=182).

As of today, the website does provide a list of all 1428 titles in the Acting Edition, including basic data about their authorship and performance history. It also makes available a set of 239 titles which have been transcribed and reformatted as PDF files preserving much of the typography of the originals. Other formats, if they exist, are not visible on the website, though a small number of titles have clearly been annotated and indexed at some point in the past with separate lists of named entities and striking phrases. (Some further information on this and a closely related sister project concerned with the records of the Lord Chamberlain’s Office is provided by Radcliffe, C. & Mattacks, K., (2009) “From Analogues to Digital: New Resources in Nineteenth-Century Theatre”, 19: Interdisciplinary Studies in the Long Nineteenth Century 8. doi: https://doi.org/10.16995/ntn.499 )

However, the VPP website does not seem to have been developed since 2015, and the untimely death of Professor Richard Pearson at the end of 2018 (https://bavs.ac.uk/uncategorized/obituary-richard-pearson/) casts its future development into serious doubt. As is all too often the case, preservation of a digital archive turns out to depend as much on individual personal support as on technological constraints.

I have therefore applied for funding to carry out an initial scoping study investigating the feasibility of reviving and bringing up to date the Victorian Plays Project. If accepted (and there’s no reason to suppose it will be) this would naturally begin by reviewing any additional digital materials which have been archived, and by interviewing personnel associated with the original project at Galway. The inventory resulting from this review would be extended with a survey of other digital versions of the Lacy Acting Edition now available online (for example, in transcribed form at Project Gutenberg and elsewhere and in digital facsimile via the Hathi Trust or the Internet Archive). Contacts at Galway and elsewhere (for example in the library and special collections community, and in the professional Victorian studies networks) would be approached for information about existing related endeavours, and to raise awareness of the project.

If sufficient suitable materials can be found, the next step will be to design, document, and implement procedures to convert them all to a single simple TEI encoding, consistent with (for example) that used by the DraCor project, or the ELTeC. Following these de facto community standards has many advantages, such as the ability to re-use existing software tools, or the ability to leverage existing community familiarity with the format. The resulting digital archive would be initially maintained as an open repository on GitHub, with all converted materials made available under a CC-BY licence.

It is probable that automatic conversion to this (or any other) target format will be much easier for texts already transcribed than for texts only available in digital image format. In a second phase of the project it is planned to explore and report on the applicability of “machine learning” techniques to enhance the performance of existing OCR platforms. By comparison with novels and other print material from this period, the Acting Edition texts are unusual in the complexity and variety of their typography. This complexity, derived from the need to clearly distinguish speaking parts, stage directions etc., is however regular and systematic and should thus be potentially beneficial in the task of automatic markup.

The availability of a consistently organized and encoded corpus of Victorian play texts will make possible the application of emerging distant reading methods and tools to a component of Victorian cultural history which has been curiously neglected, if not undervalued, hitherto.

In the meantime, I have been tracking down other existing online resources for the description of the 19th century theatre. But that, as they say, is another and a different blog posting perhaps.

EEBO TCP in P5 – the return

It seemed a necessary act of piety to respond positively to a request for help from my former colleagues at the Oxford Text Archive, when they finally got around to considering the conversion of the latest (and one must fear the last) tranche of EEBO texts from the Text Creation Partnership. The conversion into a TEI P5 compatible version of the vast majority of EEBO-TCP phase 1 and 2 texts, and their subsequent upload to a gazillion GitHub repositories, was accomplished by a team headed by Sebastian, back in the days when TEI Simple Print was new and we were all a bit more bushy-tailed and bright-eyed. Now that the OTA has received its last tranche of TCP phase 2 texts, it should not (surely) be too much of a sweat to crank them through the same conversion process and deposit the results in GitHub too. Though of course nothing is ever quite that simple.

The XSLT script which does the heavy lifting is called tcp2tei and (thank you, Sebastian) here it is, safe and sound in the TEI Stylesheets repository. And it still works. There is even a shell script for creating a new GitHub repo and uploading each file to it, from the same masterly hand; this one nearly works, since GitHub has become a little more fussy about authentication mechanisms in the last five years, but that’s not hard to fix. So I should just declare victory and move on.

On closer inspection however three issues have surfaced (so far).

Firstly, the catalogue numbers. In the current TCP P5 texts, each TEI header has a string of <idno> elements supplying its identifier in the Michigan DLPS database, the identifiers of one or more MARC records from OCLC or UMI catalogues, its Proquest number, identifiers in one or more standard bibliographies (ESTC, Wing, STC, Evans etc.), and the number of the image set which was scanned. For some reason I do not understand, not all the new texts supplied to the OTA have their full complement of these identifiers. For example, of the 6498 titles supplied, 3062 have OCLC MARC record identifiers (discounting an additional 187 duplicated OCLC records in which the record identifier is prefixed redundantly by “ocn”). None of the 6498 has an image set number. Only 2987 have a Proquest identifier, and it’s always the same as the MARC catalogue number. And 963 have no bibliographic identifier of any sort.
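
Auditing the headers for these gaps is easy enough to script; here is the kind of thing I mean, as a sketch only: it assumes TEI-namespaced files with <idno type="..."> elements in the header, and the directory name and the list of @type values to look for are both made up for the purpose of illustration:

# idnoAudit.py : report which catalogue identifiers each new TCP text lacks
# (a sketch; the directory name and the WANTED @type values are assumptions)
from pathlib import Path
from lxml import etree

TEI = "{http://www.tei-c.org/ns/1.0}"
WANTED = ["DLPS", "OCLC", "PROQUEST", "ESTC", "WING", "STC"]  # assumed @type values

for f in sorted(Path("newtexts").glob("*.xml")):
    idnos = {}
    for i in etree.parse(str(f)).iter(TEI + "idno"):
        if i.get("type"):
            # discount the redundant "ocn" prefix on some OCLC numbers
            idnos[i.get("type").upper()] = (i.text or "").removeprefix("ocn")
    missing = [t for t in WANTED if t not in idnos]
    if missing:
        print(f.name, "lacks:", ", ".join(missing))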

No matter: after my skirmishes with EEBO metadata last summer (reported at https://foxglove.hypotheses.org/date/2020/08), I am confident of being able to recover missing catalogue numbers from at least two different sources: one being Paul Shaffner (whom God preserve)’s eebodat1.xml and the other being Proquest (whom God has recently abandoned)’s title list. The stylesheet I am working on to do mundane things like change the availability statement in the header is duly expanded to supply the missing <idno>s. I decided to add the new Proquest numbers (the so-called GOID) even though these are not present in the existing files.

Secondly, the image links. One reason for caring about the image set numbers is that they are used as part of the address to which @facs attribute values scattered throughout the texts are mapped. Back in the day, it was possible to link directly to a page image in this way. This facility is however no more: Proquest (and presumably their successors) will only allow you to access individual page images by using their own interface, so far as I can tell. It is possible to access the same images via the JISC Historical Dataset sites by judiciously stringing together values from those <idno>s, but I have yet to find a reliable way of doing so for individual pages. For the present, therefore, the @facs values will remain a touching reminder of how things once were. Though I did add a link to the JISC site into the new headers, along with other useful documentation.

And thirdly, the real subject of this entry: what to do about @rend. Now, I have long believed that the TCP P5 texts are not only valid TEI P5 XML but also valid against a specific TEI schema, to wit the schema named (after much argument and in the teeth of some opposition) TEI Simple Print. I distinctly remember (or think I remember) Sebastian and Magdalena putting quite a bit of effort into enhancing that schema with lots of @rendition values to match EEBO-TCP requirements. So when I actually tried validating my nice new files against it, I was a bit puzzled to find that they didn’t validate.

Specifically, the attribute @rend is not available in the TEI Simple schema, and has not been since at least August 2016. In its place, I should be using @rendition to point to one or more of the predefined simple:rendition values. So I spent an hour or two tabulating all the @rend values used in the new files, and finding their simple: equivalents. This proved easy for most cases, but impossible for just a few (7), some of them esoteric (@rend=’upsideDown’ anyone?), but others (e.g. @rend=”margQuotes” and its friends margSglQuotes and margDblQuotes) quite frequent and clearly necessary. I also realised (belatedly) that allowing my script to make this change was going to make my new texts inconsistent with the existing ones. For the existing TCP P5 texts are not valid against TEI Simple either, I discovered, somewhat to my embarrassment: they use @rend with all sorts of exotic values all over the place. They do use @rendition, but in one way only: some <pb/> elements have @rendition=’simple:additional’. This was entirely mysterious to me for a while (until Paul remembered what it was for). In any case, I will worry about that when a new systematic revision of the whole collection is undertaken, should that day ever dawn. For the moment, I will grit my teeth, stick with @rend, and assure anyone who asks that all TCP P5 texts are valid against, err, TEI ALL. This is known as biting the bullet, I believe.
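
The tabulation itself is the easy part; something along these lines does the job, as a rough sketch (the directory name is hypothetical):

# rendCount.py : tabulate every @rend value used in a batch of TCP P5 files
# (a sketch; the directory name is hypothetical)
from collections import Counter
from pathlib import Path
from lxml import etree

counts = Counter()
for f in Path("newtexts").glob("*.xml"):
    for el in etree.parse(str(f)).iter():
        rend = el.get("rend")
        if rend is not None:
            counts[rend] += 1

for value, n in counts.most_common():
    print(f"{n:7d}  {value}")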

Update: 10 June 2021. I uploaded all 6498 new texts to new repositories in the GitHub textcreationpartnership collection over a period of 24 hours last week. And, at somewhat greater length, I have now updated my repository at eebo-bib to describe more precisely what I did to create a TEI-compatible TCP bibliography. Definitely time to declare victory and move on.
