How old is that play?

Nearly every play in the Lacy catalogue – 1468 of them to be exact – now has a date of first performance, either explicitly given in the front matter of the text, or (for about 100 other cases) diligently extracted by me from Nicoll’s “Handlist”. These dates supply a terminus ad quem for the play’s composition: it cannot have been written after its first performance. Similarly, although the individual volumes are not dated, we may reasonably assume that the volume itself cannot have been printed before the latest “first performance” date it contains. This is not an entirely satisfactory procedure if we want to track changes over time, since the number of volumes allocated to a particular year varies over the 38-year period, but it is the best I can do.

Nevertheless, I thought it might be interesting to plot for each volume how many of the plays it contains are recent, not so recent, or positively antediluvian. One hypothesis might be that the proportion of recently composed material declines over time, whether because less of it is available for Lacy to reprint, or because the bourgeois drawing room for which the later volumes are primarily intended prefers its drama antiquated. Another might be that the proportion of old warhorses in each volume is pretty much consistent over the whole lifetime of the Acting Edition.

Here’s my first attempt at visualising the data. It shows that there are a few volumes round about the start and end of the 1860s when the quantity of older material seems to shoot up, but that for the most part each volume contains a majority of material less than 10 years old. It also, however, suggests that the amount of new material in the 1870s starts to decline.
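Computing those age bands is trivial once each play has a first-performance date and each volume an approximate date. Here is a minimal sketch of the sort of calculation involved, assuming the catalogue has been flattened into a CSV file; the file and column names are purely illustrative, not those of my actual catalogue.

import csv
from collections import Counter, defaultdict

def age_band(age):
    if age < 10:
        return "recent (under 10 years)"
    if age < 25:
        return "not so recent (10 to 25 years)"
    return "positively antediluvian (25 years or more)"

# one tally of age bands per volume
bands = defaultdict(Counter)
with open("lacyCatalogue.csv", newline="") as f:
    for row in csv.DictReader(f):
        age = int(row["volumeDate"]) - int(row["firstPerformed"])
        bands[row["volume"]][age_band(age)] += 1

for volume in sorted(bands):
    print(volume, dict(bands[volume]))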

How balanced a sample is the VPP?

The full catalogue of Lacy’s Acting Edition comprises some 1500 titles, produced by just over 320 different authors. Over a third (583 to be exact) of all titles are produced by a small group of a dozen or so recidivists, each of them accounting for more than 25 titles. These include some predictable exceptions like “Anon” (65 titles), but also some extraordinarily prolific writers like John Maddison Morton (82 titles or 5% of the LAE), J.R. Planché (69 titles), and Henry James Byron (51 titles). In the second rank of creativity, there are 20 authors each of whom is responsible for producing between 10 and 25 titles, and who collectively account for 346, about a fifth of the whole. These include such familiar names as William Shakespeare (24 titles), just ahead of the less famous Thomas Egerton Wilks (23 titles) and some distance from George Colman (12 titles). At the other end of the scale, only a tenth of titles (171) are the product of an author otherwise unrepresented.

One of my first questions when looking at the Victorian Plays Project catalogue was the extent to which it might be considered a representative sample of the whole LAE. That of course depends on the basis on which you are sampling: as a first exercise, I consider here authorship. The VPP sample contains 343 titles, which are the product of 130 authors, only 8 of whom produce more than 10 titles, and nearly half of whom (74) produce only one title. This seems like a markedly different frequency distribution. Moreover, the ranking of authors within a “top twenty” list for the two corpora shows some surprising differences. Some authors who appear in the upper half of the LAE list, e.g. Williams and Selby, trail near the bottom of the VPP list. It is unsurprising to find that authors low down the VPP list are also low down the LAE list; what does surprise me is the disparity in ranking for the comparatively frequent authors. Tom Taylor, the highest ranking VPP author of all, is only the 10th most frequent author in LAE; and John Palgrave Simpson, who ranks 12th in LAE, only just scrapes into the 25th row of VPP. Some of these oddities may be attributed to editorial decisions by the VPP: for example to exclude entirely titles by one William Shakespeare, even though these are ranked 14th in LAE.

Anyway, here are the Lacy Acting Edition Top Twenty authors, ranked by the number of titles attributed to them.

LAE rank | VPP rank | Titles (LAE/VPP) | Author | DAS member | Dates | Age group
1 | 3 | 82/15 | Morton, John Maddison | * | 1811-1891 | A1
2 | 4 | 69/14 | Planché, J.R. | * | 1796-1880 | A1
3 | 6 | 65/13 | [Anon.] | | |
4 | 5 | 51/14 | Byron, Henry James | * | 1835-1884 | A2
5 | 7 | 41/12 | Suter, William E. | | 1811-1882 | A1
6 | 2 | 40/17 | Brough, William | * | 1826-1870 | A2
7 | 16 | 38/5 | Williams, Thomas J. | * | 1824-1874 | A2
8 | 19 | 37/5 | Selby, Charles | * | 1802-1863 | A1
9 | 11 | 36/7 | Burnand, Francis C. | * | 1836-1917 | A2
10= | 1 | 34/20 | Taylor, Tom | * | 1817-1880 | A2
10= | 8 | 34/12 | Coyne, Joseph Stirling | * | 1803-1868 | A1
12= | 25 | 28/4 | Simpson, John Palgrave | * | 1807-1887 | A1
12= | 20 | 28/5 | Oxenford, John | * | 1812-1877 | A1
14 | 0 | 24/0 | Shakespeare, William | | 1564-1616 | A1
15 | 24 | 23/4 | Wilks, Thomas Egerton | | 1812-1854 | A1
16 | 14 | 19/6 | Stirling, Edward | | 1809-1894 | A1
17= | 0 | 18/1 | Wooler, John Pratt | | 1824-1868 | A2
17= | 18 | 18/6 | Talfourd, Francis | * | 1828-1862 | A2
17= | 21 | 18/5 | Jerrold, Douglas | * | 1803-1857 | A1
17= | 11 | 18/7 | Halliday, Andrew | * | 1830-1877 | A2
LAE Top 20 Authors

 

There is of course much more one might wish to say about these authors. It is unsurprising to find that they are all males, and equally that they are mostly members of the Dramatic Authors Society, the agency which had been founded to ensure their copyrights were observed, and which also required payment of a fee for provincial representation. Their dates, with four exceptions, are taken from Wikipedia, where there is much else to be found. (The exceptions yet to be immortalized on Wikipedia are William Suter, Thomas J. Williams, Thomas Egerton Wilks, and John Pratt Wooler: their dates are taken from the Hathi Trust catalogue record.) Just for fun, I decided to categorize them into two age groups on the following basis:

A1: born before the Battle of Waterloo (1815)

A2: born after Waterloo but before the Great Reform Act (1832)

Unexpectedly there are equal numbers in each group.

In the interests of full disclosure, I should add that the list of plays so far converted to TEI format represents a tiny and even more divergent sampling of these authors. The most frequent author so far converted is J. Maddison Morton with 6 titles, which corresponds well with the LAE ranking, but the next three in that ranking are all so far missing entirely. In fact, of the authors in the LAE top twenty, the following are all so far missing: Planché, Byron, Suter, Williams, Selby, Shakespeare, Wilks, Stirling, Wooler, Talfourd, and Halliday. Only five authors are so far represented by more than one title (Morton, Coyne, Courtney, Oxenford, and W.S. Gilbert).

By way of comparison, I also took a look at the author counts for the 45 or so LAE titles selected for inclusion in the Chadwyck-Healey “English Drama” collections. Only 10 authors appear here more than once, all of them represented by no more than 2 titles, except Simpson, who clocks in at three. Only four of these authors also appear in the LAE Top Twenty (the inescapable John Maddison Morton, J.R. Planché, John Palgrave Simpson, and Thomas Egerton Wilks). Clearly these titles were selected on some other grounds than their frequency in the LAE.

Nobody talks like that: a stylometric exercise

Back in March 2022, I was asked if I’d like to be interviewed as part of a research project concerning editing in the 21st century. What the hell, I said, I have close to no real experience of digital editing (unless you count my lovely digital edition of “Through Beatnik Eyeballs”), though I have made a reasonably satisfactory career out of telling other people how they should do it. One thing they really do teach well at Oxford is the ability to sound as if you know what you’re talking about… Anyway, I signed up, and after some vicissitudes was duly interviewed via Skype, sitting in my birdsong-filled garden, some time in June. Some considerable time later (at the end of December to be precise) I was invited to revise the transcript someone had made of my interview, and did so. I removed some of the more egregious hesitations and a few garden path sentences, but left the bullshit intact, collection of that being after all the object of the exercise. And later still, I learned, my transcribed interview joined a group of fifty or so others on a website, in a proper digital archive no less. This was all very satisfactory, though I was a little disappointed to find that the edited transcriptions were being made available in PDF or in RTF only. Are those really now considered to be appropriate long term preservation formats? I suppose in a world where Boris Johnson can become prime minister, nothing should be surprising. And what happened to the audio? The design goals of this project were firmly based in a part of the forest of the Digital Humanities somewhat removed from linguistic analysis, discourse semantics, problems of speech transcription, or the textual analysis of academic talk. The project was to deliver something readable by human beings, like a book. Quite enough for one grant.

But hoorah for the open-minded spirit of open access, which makes it possible for me (and anyone else so inclined) to play with the resulting resources and do at least some things not originally envisaged in the project design! Entirely unsurprisingly, I spent a happy few days last week downloading the RTF files and converting them to TEI (the scripts and the results are now available in a GitHub repo). TEI because that’s what I do, but also because I wanted to be able to do textual analysis properly.

My resulting TEI corpus contains 46 small documents, one for each interview, each consisting of a sequence of TEI <sp> elements, with a who attribute to indicate a unique code for the speaker. Each element contains one or more paragraphs of text, preceded by the speaker code as given in the transcription (there are a few differences). Like this:

<sp who="#LB">
<speaker>LB</speaker>
<p>I would have liked to have been better paid.</p></sp>
<sp who="#MK"><speaker>MK</speaker><p>Sure.</p></sp>

This was prefixed by a paragraph of background information about the interviewee, which I banished to the <front> of the document, as a source of metadata. I also created a rudimentary TEI Header for each document.

Importing my 46 documents into TXM, I found the corpus had a total of 237,271 words. I made a partition on the basis of the @who attribute, so that all the words for each distinct speaker were grouped together. Here’s the bar graph from TXM showing how many words were associated with each of the 51 distinct speakers. It shows that one speaker (JOS) talks a lot more than anyone else: but this is unsurprising, since he is one of the two interviewers, and I have simply aggregated his side of each discussion irrespective of participant. I did the same for the other interviewer, MK, who has fewer interviews (14 as opposed to 32 for JOS); at 6000 words, he actually talks less than the three most garrulous interviewees (JC, AG, and RR), all of whom manage more than 6500 words. At the other end of the scale, there are five interviewees who hover around 2000 words apiece. The bulk of respondents fall comfortably between these extremes.

What can I do with this data? Well, treating it just as data, it might be interesting to see whether the frequency with which words, or lemmas, or POS codes appear in each speaker’s chunk is much the same, or whether some stylometric statistic can be used to group like-speaking speakers together. Does everyone talk in more or less the same way or (ex hypothesi) do professors and old lags like me talk differently from early career researchers? It’s not quite the typical stylometric use-case (which tries to establish probable authorship on the basis of similarities) but close. Fortunately for the mathematically challenged, there exists a fairly well established range of tools designed to explore such matters. Unfortunately for the mathemagically challenged (amongst whom I unreservedly place myself), you do need to know what you’re doing with these really quite sharp edged tools. So please forgive any idiocies in what follows…
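To give a flavour of the arithmetic involved, here is a minimal sketch, assuming one plain text bag of words per speaker in a directory called speakers/: it computes the relative frequency of the hundred most frequent words in each speaker’s chunk and clusters the speakers on that basis. This is only an illustration of the general idea, not what TXM or Stylo actually compute internally (Stylo, for example, offers Burrows’s Delta and several other distance measures).

from collections import Counter
from pathlib import Path
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
import matplotlib.pyplot as plt

# one bag of words per speaker, lower-cased
texts = {p.stem: p.read_text().lower().split() for p in sorted(Path("speakers").glob("*.txt"))}
counters = {s: Counter(ws) for s, ws in texts.items()}

# the 100 most frequent words across the whole corpus
overall = Counter(w for ws in texts.values() for w in ws)
mfw = [w for w, _ in overall.most_common(100)]

# relative frequency of each of those words in each speaker's chunk
matrix = np.array([[counters[s][w] / len(texts[s]) for w in mfw] for s in texts])

# hierarchical clustering, drawn as a dendrogram
dendrogram(linkage(matrix, method="ward"), labels=list(texts))
plt.show()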

I played with TXM and with Stylo, both of which claim to be usable by the non specialist, and both of which have (interestingly different, but that’s another story) user interfaces. TXM has the great advantage of accepting TEI XML as input and treating it sensibly. Stylo requires me to pre-process my XML text into dumbed down plain ascii using arbitrary naming conventions to provide metadata. Both produce fancy graphics.

Here, for example, is a dendrogram from Stylo, showing how my 50 locuteurs cluster together, if we look just at the highest frequency lemmas. If I interpret this correctly, it shows the interviewers (“allJOS” and “allMK”) grouped together and distinct from all other respondents, which seems reassuring.

This is even more evident in the Principal Component Analysis also produced with Stylo, which shows JOS and MK as complete outliers.

And TXM provides further confirmation of this from a lexical perspective:

I think what this is telling me is not only that JOS and MK are outliers, miles from all the other documents shown here only as a red splodge in the blue cloud, but also that the words they favour are characteristically to do with their role as interviewers (What, future, question, projects, maybe etc.). Or so I believe. But clearly I am going to have to do a lot more background reading before I can say anything really interesting about this little dataset…

Consistency is a good thing…

Now that I have all the available files in a form which is at least valid according to a TEI P5 schema, I can start fussing about the consistency of the markup they contain.

Let’s start with an easy one. Attribute names may be marked up using the element <att>, or using the element <ident>, or using the element <name>, or just flagged in the running text, by a preceding @ (I didn’t find any cases of a following =, though I suspect there are some). I really don’t care a lot which is used, but I do care that there should be just one rule to bind them, not four. So how do things stand at present?

There are 298 <att> elements, 212 <ident> elements, and 98 <name> elements. <name> is used both for personal names and for names of attributes, classes etc. A first step therefore is to turn all name[@type] elements into idents, and to simplify the values for type. A second step might be to look at all occurrences of attribute names simply flagged with an @ sign. In wPressDox, the regex [[:space:]]@[[:alpha:]][[:alnum:]+] matches 608 times; in legacyTEI files, only 18. Some careful regex matching might enable me to turn the @thing cases into <att>s in most cases. But hold on a second: if consistency is the goal it would be much easier to turn all <att>thing</att>s into @thing than the reverse. The question really is: is faithfulness to the original tagging goals completely unimportant? We have 16 cases of ident[@type=’attr’] and 47 of ident[@type=’attrName’] and 21 cases of ident[@type=’attr’] as well as 32 name[@type=’attr’], and indeed some random occasions where <code> (with no @type value) is used to delimit an attribute name. Making all of these consistently <att> seems to preserve the original encoder’s goals. Turning them all into @xxx however seems somewhat different.
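As a very rough sketch of that first step, something along the following lines would tally the competing encodings and rewrite name[@type] as ident, leaving the @type value in place for later simplification. The directory name is an assumption, and a real version would of course need to distinguish personal names from names of attributes and classes before touching anything.

from pathlib import Path
from lxml import etree

TEI = "http://www.tei-c.org/ns/1.0"
counts = {"att": 0, "ident": 0, "name": 0}

for f in Path("tcMins").glob("*.xml"):
    tree = etree.parse(str(f))
    for tag in counts:
        counts[tag] += len(tree.findall(f".//{{{TEI}}}{tag}"))
    # first pass: name[@type] becomes ident, keeping the @type value for review
    for el in tree.findall(f".//{{{TEI}}}name[@type]"):
        el.tag = f"{{{TEI}}}ident"
    tree.write(str(f), encoding="utf-8", xml_declaration=True)

print(counts)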

There are also cases where the original tagging has tried hard to make distinctions we would probably no longer bother with: for example <mentioned> and <soCalled>. I leave these well alone.

Now a more tricky one.

There are 1817 <div> elements, 705 of them being @typed. 619 of those are type=’h2’, but this does not necessarily reflect their hierarchic position, merely that they were originally indicated by an h2 level heading in the WordPress file. Of the 83 typed divs which are not “h2”, the most frequent are h3 (45), followed by h1 (8). However, of the 45 h3 divs, only 20 have an h2 parent; the remaining 25 are contained by an untyped div. Other very rare values include “agendaItem” (3), appendix (4), and glossary (3).

It seemed like a good idea to check that <div> elements contain something other than a nested div and remove the redundant layer if not. The xpath //div[count(*) eq 1][div] finds 52 items, though this may be an artefact of my retagging script. Somewhat more problematic are <div>s which contain just a <head>: in some cases, these are probably genuine: for example

<div><head> 12:30 – 13:30: Lunch</head></div>
<div type="h2"><head>Goodbye Peter and thank you! 😟❤️</head></div>

but absent supernatural powers, it’s in principle impossible to know how to interpret a sequence of headings in the WordPress files. For example, here’s a snippet from the 2020-10_30824 document:

The “Review…” lines are div/head elements containing a link to another part of the document (this kind of transclusion happens nine times in all). Are they siblings or parents of the “SUNDAY…” div/head elements? Is the “Proposal on ruby glosses” a child or a sibling of “SUNDAY, 25…”? You tell me. I have mostly left them all as siblings for the moment.

Then there are lists: there are 2456 <list>s, only 109 of them typed; the @type values are “unordered” (33), “simple” (21), “ordered” (25) and “gloss” (30).

There are 45 untyped lists which contain one or more items containing a label; three of these contain lists with labels as a child of the list as well as labels as child of the list/item; mercifully all confined to one file (2009-12). The label tag is used ambiguously: sometimes it contains a person siglum; sometimes a subheading. On further investigation, the siglum usage occurs only in one file (tcm24) so I changed all these to rs. The 30 explicitly gloss lists are all in the TEI legacy files too.

Further down the slippery slope: there are plenty of minority interest tagging distinctions: I have turned a scattering of <quote>s into <q>s, and all <ptr> elements into <ref>s, but not yet checked that all the @targets actually go somewhere. I have retained 9 cases of <time> which might just as well be <date>s; and six cases of <foreign>; also 76 <lb/> elements standing in (mostly) for proper structural markup. I have retained and made valid the one document in which <sp> and <speaker> elements have been used, but not tried to add such markup to the couple of occasions where the minutes launch into dramatic mode.

But rather than continue to polish this pig’s head, I have spent today providing it with some infrastructure, and putting it all in an accessible repository at https://github.com/lb42/theCellar

I also spent far too much time providing a CETEICEAN front end to it, at https://lb42.github.io/theCellar/tcMins/index.html

Plenty more to do. Of course.

The usual problem

Now that I have all of the TEI Council minutes in XML which is more or less valid against TEI-All, I can start worrying about defining a sensible schema for them, oh bliss. One possibility might be just to accept and preserve every tagging decision taken during the long history of this archive, even the silly ones. Another might be to retro-convert everything to a single brutalist vision of how things Ought To Be. Or somewhere between the two extremes, perhaps.

Over the last 23 years, different editors of TC minutes have taken different views in all the places where you might expect them to. Even in the days when minutes were prepared in kosher TEI, mostly conforming to TEI Lite, there was still plenty of scope for different practice. Shall we distinguish soCalled and mentioned, term and emph? Are we consistent in using emph for linguistic emphasis rather than formatting? Do we distinguish q and quote, and if so, why? If we have gi and att (or occasionally ident type=’att’) do we also need tag, and code, and ident?

In more recent times, when such ontological anxieties have become perhaps less feverish, the minutes use a comparatively restricted set of distinctions, mostly to do with whether a snippet of text is in italic or bold, or used as a heading or a link, or is a list item. Indeed, sometimes the tagging decisions we see in the XML file are purely an artefact of the formatting tweaks needed to present the minutes on the WordPress website and have little to do with document structure or meaning. And in sadly many cases, if a semantically tagged version of such documents ever existed, it is now lost. Should we, in the interests of consistency, enforce the lowest common denominator across the whole set of documents?

Consistency at least in the way major components of each document are presented would surely be advantageous. To take a simple example, every set of minutes begins with a list of the persons participating in the meeting. Sometimes it is presented as a list of items; sometimes as a single paragraph; sometimes as a sequence of paragraphs. Almost always the names of individual attendees are associated with a siglum or set of initials, but the way in which this is all represented in the XML structure varies considerably. This sort of thing is easy, if time consuming, to make consistent. And probably something like the current conventions, in which each person’s name is given as a distinct <item> within a <list>, should be aimed for, since it is clear that the various ways these lists are currently presented are really only an accident of formatting, of much less interest than the ease of processing the list in various ways. Whether or not to adopt the full TEI paraphernalia linking occurrences of a person’s initials in the text to their appearance in the list of attendees is another question.

Of course, if we were starting this exercise from zero, we would follow the textbooks and first carry out a data analysis. What are the important entities in a set of minutes, and what are their properties? Each of these documents relates to a meeting which took place over one or more days, in a specific place, or in cyberspace, with a specified set of participants. The minutes indicate the topics discussed, to some extent formalised in terms of identified issues, or action points. We might also ask what sorts of research questions our analysis should facilitate: how often do particular individuals or kinds of individual intervene? How long does it take for an issue to be resolved? How many different issues are under consideration at a particular time? Where do issues come from? And so on.

But we are not starting this exercise from scratch. The documents already exist. Moreover, the conceptual entities they are concerned with, and therefore represent, change over time, reflecting the Council’s evolution both in terms of its practice and its sense of purpose. That purpose has always been to maintain and develop the technical content of the TEI Guidelines, of course; but with the availability of sophisticated issue-tracking and reporting software the way in which this is carried out has changed a great deal. Consequently the operational model – the modus operandi – of the Council has also changed a great deal. These changes are necessarily reflected in the organization and content of the minutes.

Writing a full history of the TEI Council’s evolution is not however the purpose of this document, tempting though it is. A few salient aspects of that history do however affect our document analysis. For example, it’s necessary to understand that when first set up, the Council worked very much in the same way as the original TEI project: its role was largely to initiate, supervise, and integrate work carried out in more or less autonomous working groups. This had worked well for some major expansions of the P5 Guidelines, such as the addition of manuscript description, or character encoding issues following the adoption of Unicode, where the TEI had been able to constitute a motivated and informed group of experts to produce concrete proposals; less well in areas where such a group proved harder to constitute or motivate. For the first five years of its existence, from 2002 to the publication of TEI P5 1.0.0 in 2007, however, the Council’s minutes are full of reports from specific working groups, and actions on someone to pursue them.

This was also a period during which the TEI enjoyed the luxury of two paid editors. The process by which the Council itself took over editorial responsibility probably started with the full scale review of the first draft of P5, in which each chapter was assigned to a Council member for review, though actual implementation of changes to the Guidelines (which involved a version control system called Perforce) remained a specialised activity, not available to all. The minutes from this period necessarily therefore have many “action points” aimed specifically at the editors.

For releases 1.0.1 to 2.7.0 (2008 to 2014) the following formulation of the Council’s role appeared on the PDF title page:

TEI P5:
Guidelines for Electronic Text
Encoding and Interchange
by the TEI Consortium
Originally edited by C.M. Sperberg-McQueen and Lou
Burnard for the ACH-ALLC-ACL Text Encoding Initiative
Now entirely revised and expanded under the supervision
of the Technical Council of the TEI Consortium
edited by Lou Burnard and Syd Bauman

Only in September 2014, with the 2.7.0 release, did that last line disappear, establishing finally that the Council was now editorially responsible for the whole.

By this date the Council’s modus operandi had also changed considerably. Already, in 2009, we find the Council reviewing and acting on proposals for change to the Guidelines known as “feature requests”, originating from the wider TEI community rather than from the Council or the Board. A key step towards expanding this practice was the adoption of the open source issue tracker provided by SourceForge, which hosted the TEI Guidelines source from 2007 onwards, and remains a recognizable forerunner of the current GitHub-based system.

The move to such systems has several implications for the current archival project. Firstly it means that a substantial amount of the TEI’s intellectual history is now exhaustively documented, including all sorts of crazy ideas and false starts and frequent repetition, but all on a platform which the TEI itself does not own or control. Secondly, it means that the links into the documentary base provided by those external systems and the more diplomatic narrative constructions provided by the current minutes are really quite important if we wish to develop a proper historical understanding. And finally, of course, the availability of this detailed repository of issues and their resolution has changed dramatically the way the TEI Council does its work.

Defining the target


It’s easy to say rather glibly that TEI markup is a good archival format, and in many respects it is: experience shows that a TEI file can nearly always be read without too many assumptions about the platform or software needed to read it. Because a TEI document uses a very basic form of labelled bracketing, developing software to act upon the markup is a breeze; moreover, because the semantics and syntax of the markup are well defined, the software can perform whatever tricks it likes on the basis of an explicit model of the document’s structure and semantics. The tricky part is deciding what exactly the components of that explicit model should be: what (to coin a phrase) is this text really?

On the screen I am currently typing at I see that the phrase “Defining the target” and the word “really” are both in an italic font. The first is a heading, and the second is a word I wish to emphasize. In neither case is it particularly helpful to state that there’s a font change here: if I lost that information for the first case you would still (probably) recognise the words as a heading on other grounds (it’s not a sentence; it’s a separate block; it’s in a place where a heading is conventionally appropriate etc.) but in the second, without the signal given by the font style change you have no easy way of noticing that this word is meant to be more salient than the others, still less of recognising the allusion it makes to a famous journal article. Is recognising (and preserving) this emphasis as essential a part of this document as distinguishing the heading from what follows, or noting the paragraph divisions?

Although it’s tricky, it’s something I and others have been doing for decades, this business of deciding which are the “essential” components of a document, independently of its realisation on screen or paper. The claim is not just that this separation of the document from its realisation is meaningful, but that it’s also useful. Certainly it makes it much simpler to process masses of similar but different documents in a reasonably intelligent way if their structural components and semantically salient properties are explicitly and exhaustively flagged in the same way. Certainly there may be a price to pay for that simplicity: we may have to renounce the ability to visualise the document exactly as one or other of its many historical realisations did; just as we do for other cases where such a realisation depended on a specific software infrastructure. Good luck emulating a pixel-perfect WordPerfect 4.2 or WordStar view of your TEI document on the basis of its TEI archival form.

All this by way of prelude to the next stage in my attempts to recover/reconstruct a usable TEI archive of the deliberations of the TEI Council. Those deliberations currently exist (as previous blog entries have shown) in one or more of three different forms: as Google Docs, as WordPress HTML pages, or in one or other legacy TEI format. All of these formats are relatively simple to convert into XML without loss of such information as they already contain: the task is to define a minimal TEI markup scheme to which they can all be reduced, without losing anything essential. It is that classic TEI markup problem: what do you want to distinguish in your documents? With the added constraint that I’d rather not have to introduce distinctions not already explicit (one way or another) in the sources.

I started with the WordPress XML files, since these constitute the official published record, even though they have many shortcomings. I wrote another perl script to extract a list of all the different XML tags present in the files, and an XSLT stylesheet containing a default template for each of them, mapping format-oriented tags like <h1> and <b> to semantic ones like <head> and <hi>. I then spent a happy hour or three fiddling with that, before deciding that this approach was too labour intensive to be a general solution.
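The tag inventory itself is easy enough to reproduce; here, for the record, is a rough Python equivalent of what that perl script does, with the directory name as an assumption.

from collections import Counter
from pathlib import Path
from lxml import etree

tags = Counter()
for f in Path("wpressxml").glob("*.xml"):
    for el in etree.parse(str(f)).iter():
        if isinstance(el.tag, str):   # skip comments and processing instructions
            tags[etree.QName(el).localname] += 1

for tag, n in tags.most_common():
    print(tag, n)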

So I moved on to the Google Doc files. I exported them all as docx files, applied the default TEI docxtotei conversion, and started looking at my 100 or so allegedly TEI documents. The first step was to generate an ODD which described their actual tagging practice, for which I used the TEI oddByExample utility. This is a good way of starting the process, but it has some quirks (like specifying all the element classes you might use, even though you don’t actually use any of them, and explicitly deleting each attribute supplied by a class rather than deleting the class), and one major drawback. The drawback is actually perhaps a virtue: the schema you get from the ODD it generates is a strictly conformant TEI subset of TEI All. So if your data has features which are not valid in TEI All, shall we say @xml:id values which are of the wrong datatype, or empty <list> elements, or <list> or <table> elements appearing directly inside <front> instead of being decently wrapped inside a <div> … it won’t be valid.

(These examples were not chosen at random, by the way: they are all the consequence of a bug in the current docxtotei tool, reported this morning as issue https://github.com/TEIC/Stylesheets/issues/604.) Anyway, this means that either the ODD needs to be adapted to be more forgiving, or the data needs to be corrected to be less weird. Doing the former would also mean tweaking the data (to avoid polluting the TEI namespace with the weirdness), so maybe choosing the latter course of action is the wiser decision. Especially since it’s not so hard to correct the aberrations I have identified so far.

So my first XSLT stylesheet is simplify.xsl, which does just that. If it finds a list or a table directly inside a front it wraps them in a p and looks the other way. When it finds an anchor it sticks an extra letter in front of its identifier. After its ministrations, all 112 generated XML files (bar one, which had an empty <list> element) were valid against the generated ODD schema. Hosannah.

That leaves 51 items with no easy XML representation, or 12 items if we assume that the legacy TEI format also counts as potentially easy XML. Sadly all but one of those 12 are in the “ill formed” WordPress XML format, so some (more) manual tweaking will be required before I can safely apply the retagfromWP conversion to them. Then I will have to work out what to do with the legacy TEI files, some of which are still in P4. But I think I see a way forward…

Surveying the Remains

There have been 161 TEI Council meetings up to February 2023. The minutes of each meeting (conference call or face to face) – except one – are available on the Council website, but only as WordPress pages.

I have tracked down a P4 or P5 source file for 40 of them, covering meetings up to October 2008. I think there must once have been more, because some of the WordPress pages show clear signs of having been adapted or converted from a TEI original. In several cases, some TEI tags are still present, notably <gi> (appears in 20 cases between 2009-04 and 2014-06) or publicationstmt (sic), which appears along with other remnants of a TEI header in 38 cases up to 2016-03. But there is no trace of the original source files anywhere on the current website.

From 2016 onwards, the website provides only WordPress format files, in which HTML tagging is used. However, this tagging is not entirely well-formed: there are many cases where hard line breaks in a table cell are marked by HTML p end-tags, for example. And at least one where the internal structure of a table row has been completely lost.

As a first step, I wrote a perl script which did its best to extract a single well-formed XML document from each set of WordPress pages. This failed consistently for the 36 pre-2016 pages which contain residual TEI tagging, but worked reasonably well for the remainder, most of the time. Only 13 of the post-2016 files (out of 85) needed hand-editing to make them well formed, though the tagging still leaves much to be desired. In particular, I realised that some of the WordPress files made no attempt to preserve the often deeply-nested structure of the minutes, or to distinguish marginal annotation from the text.

Since 2016 the minutes have been edited in Google Docs and drafts are therefore (currently) available in Word, ODT, or other formats from the Google Docs website, if you know where to find them. This part (finding them) became much easier when I asked former Council colleagues to share their secret stash of drafts with me. Converting from Google Docs to TEI is comparatively simple and much less error prone than working with the WordPress pages directly. It really ought to be the WordPress pages which constitute the document of record for these minutes, but …

It seemed like a good idea to do a bit of checking in any case. So here’s what I did:

  1. use curl to download all the word press pages to 161 separate files called yyyy-dd.html
  2. use a perl script `articulate.prl` to extract from each of them a (hopefully) well formed xml file containing just the ‘article’ recognised by wordpress; save the result in a file called yyyy-dd_dddd.xml (where dddd is the wordpress article number)
  3. check the well formedness of the resulting files with `xmlwf` and spend no more than a day or two fiddling with the ill-formed ones to improve them (a rough sketch of steps 1 and 3, in Python rather than perl, follows this list)
  4. spend a lot of time downloading and renaming files from Google Docs. The downloading was needed for files not in the zip James sent me; the renaming was essential for my personal sanity.
  5. I then enriched the XML file I made in the previous blog entry with links to all the files collected together.
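Here, roughly, is what steps 1 and 3 amount to, using urllib and lxml in place of curl and xmlwf; the list of date and URL pairs is assumed to have been extracted from the meetings page already, and only one entry is shown.

from pathlib import Path
from urllib.request import urlopen
from lxml import etree

# step 1: fetch each minutes page and save it locally
pages = [
    ("2022-12", "https://tei-c.org/activities/council/meetings/tei-technical-council-teleconference-2022-12-08/"),
]
for date, url in pages:
    with urlopen(url) as response, open(date + ".html", "wb") as out:
        out.write(response.read())

# step 3: report which of the extracted article files are not well formed
for f in sorted(Path(".").glob("*_*.xml")):
    try:
        etree.parse(str(f))
    except etree.XMLSyntaxError as err:
        print(f.name, "is ill formed:", err)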

At the last count, there are 162 entries (this includes one which is mysteriously missing from the current TEI website). Of these,

  • 85 are available as well formed wpressxml files
  • 37 are ill formed wpressxml files
  • 41 are only available in a legacy TEI format
  • 115 are available as draft versions from Google Docs

Of the 37 ill-formed WordPress files, 11 are not also available in Google Docs format.

The Google Docs collection lacks anything before 2012-04, and (for no apparent reason) three more recent items : minutes from 2014-01, 2015-10, and 2017-11.

So my next step will probably be to define a target TEI format (with an ODD of course) and set about writing snippets of XSLT.

Yesterday’s Information Tomorrow (maybe)

If you go to the TEI’s website at http://www.tei-c.org you will find, as you might hope, a respectable number of documents tracking the evolution of the Text Encoding Initiative over the last umpteen decades. Curiously, though, the record for the most ancient period (before 2008, shall we say) is a lot easier to find and manipulate than that for more recent times. This posting records my attempts to put together in archival format the full record of the meetings of the TEI Technical Council.

The Council, as any fule kno, met for the first time in 2002, and is still producing regular reports of its debates and its decisions. There is a page on the TEI website (https://tei-c.org/activities/council/Meetings/) which “lists TEI Technical Council meetings and teleconferences, with links to the meeting minutes.”

I downloaded that list (it’s a WordPress HTML file, of course), ran it through HTML Tidy, and processed the result to produce a nice simple TEI file of entries like this

<list>
<head>2022</head>
<item> conference call <date>8 December 2022</date><ref
target="https://tei-c.org/activities/council/meetings/tei-technical-council-teleconference-2022-12-08/"
> [on website]</ref></item>
<item> conference call <date>10 November 2022</date><ref
target="https://tei-c.org/activities/council/meetings/tei-technical-council-teleconference-2022-11-10/"
> [on website]</ref></item>

My quixotic goal is to enrich this data with links to TEI source files for each set of minutes, preferably in a consistent TEI format.

Now, twenty years ago this would have been quite a reasonable proposition since (as the current TEI Vault shows), the TEI once had an “eat your own dogfood” policy of producing all of its documents in TEI. Over the years, this policy has varied somewhat, largely as a consequence of changes in the tools available, and the culture that goes with them. These policy changes relate not just to the look and feel of the website itself but also to which versions of its contents are preserved and how. Today, I think it is not unreasonable to say that much of the TEI website exists only as WordPress pages: many of those pages were first created as Google Docs and then converted to WordPress, some of the older ones were created originally in hand-crafted TEI XML, and the very oldest were created in TEI P4 SGML, but the only versions that can reliably be downloaded from the current website are in WordPress HTML.

Much of the time, of course, this is unproblematic. Mostly, we just want to read the stuff, not analyse it. But occasionally, and especially for the older material, whoever or whatever was responsible for producing the WordPress files has really made a hash of it. Consider, for example, the working paper

https://tei-c.org/activities/council/working/tcw02-approaching-the-son-of-odd-source-markup-for-p5/

This working document is quite an important one in the history of ODD. But as currently presented on the TEI web site it is badly broken, to the extent that the text has become incomprehensible.  Consider this paragraph:

Comparison with earlier versions of the page (thank you Wayback Machine) shows that this is a recent breakage: here for example is the same paragraph as it appeared back in 2005 when it entered the Internet Archive.

The Wayback Machine, of course, can only archive what its crawlers find. They found this page a couple of times between 2005 and 2018, both captures looking fine, but thereafter only the WordPress version. This would not matter so much were it not for the fact that the original TEI P5 source has not apparently been archived anywhere, so the breakage cannot easily be fixed.

Such losses in translation occur occasionally in more recent documents too. Here’s a paragraph from the WordPress view of document tcm46  (Minutes of the TEI Council’s April 2011 meeting) for example:

Again, until or unless I track down the original version of this file, there’s no way of filling that particular gap.

Less annoying, but more pervasive is the fact that the WordPress files rarely try to preserve any structural or semantic information. The markup will mostly contain a long series of list items, some of which may pertain to the same topic, some of which may in fact be headings, some of which are an accident of formatting. In the text (apart from links) there’s no explicit indication of interesting things you might want to search for, such as names, places, or dates.

Very few WordPress files are well formed HTML, though the wonderful W3C utility tidy does a good job of pushing them into a processable shape. Out of 120 WordPress files, 38 (nearly a third) failed to respond to this treatment, mostly because they contained an unhealthy mixture of HTML and TEI or TEI-like tags.
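The tidying pass itself is easily scripted; a minimal sketch, with the directory names as assumptions and tidy’s exit code (2 or more meaning errors) deciding what counts as a failure:

import subprocess
from pathlib import Path

Path("wpressxml").mkdir(exist_ok=True)
failed = []
for page in sorted(Path("wpress").glob("*.html")):
    out = Path("wpressxml") / (page.stem + ".xml")
    result = subprocess.run(
        ["tidy", "-asxml", "-numeric", "-utf8", "-quiet", "-o", str(out), str(page)],
        capture_output=True)
    if result.returncode > 1:   # 0 is clean, 1 is warnings only, 2 means errors
        failed.append(page.name)

print(len(failed), "files failed to respond to this treatment:", failed)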

And finally it has to be said (I’ll be brief) that it seems really sad that the TEI is preserving its deliberations in a proprietary, tool-dependent, presentation-oriented format … the kind of format which the TEI was set up to preserve scholarship from. What kind of apostasy is that?

Hunting for Lacy traces in the digital world


Lacy’s Acting Edition was published in a series of 100 volumes, each containing up to 15 plays, between 1850 and 1874. (All dates approximate and unreliable). In addition to the collected volumes, Lacy sold individual play titles in cheap (6d) paper copies, many of which also found their way into private collections and public libraries. Consequently, copies of various components of the Lacy Acting Editions are now scattered across many research libraries. In some cases, they also exist in digital form, usually as scanned page images.

It is relatively easy to recover details of a library’s holdings from an online catalogue, for example by searching for the string “Lacy’s Acting Edition” or by specifying “Thomas Hailes Lacy” as publisher. It is less easy to restrict the search to generally available digital versions, as there is still no reliable joint catalogue of digitized texts in major public collections, combining the digital holdings of say the British Library, the Bodleian, and other UK libraries, in the same way as has been done for many US libraries by the Hathi Trust, or more generally by the Internet Archive. (A project at the National Library of Scotland did set up such a site, under the name opentexts.world, a few years back, but its status is currently unclear and it appears to be unsupported.)

The ease with which the results of such searches can be obtained in a machine-tractable form (rather than simply displayed on a web page) is also quite variable. One is usually forced to fall back on web-scraping techniques and quite a lot of manual post-editing. This note documents my fairly uneven progress towards a definitive collection of links to existing and freely available digital copies of the plays constituting the Acting Edition on various sites. The fairly good news is that, as of today, of the 1498 titles making up the 100 volume Acting Edition, I have identified 586 which are freely available in some digital form somewhere. Track progress by looking at my online catalogue.

Hathi Trust

A search for the string “Lacy’s Acting Edition” anywhere in the catalogue record at https://catalog.hathitrust.org/ produces 294 hits, of which 246 are available in “full view” (i.e. should be downloadable without formality). A search for the string “Thomas Hailes Lacy” as publisher somewhat counter-intuitively produces only 94 hits. The web page displaying results looks like this:

[Figure: results from an HT search. Setting page length to the maximum allowed (100) makes it feasible in this case to download all pages with minimal scrolling.]

As usual, the easiest way to screen scrape is to save the HTML page as a file, use tidy to make it into well-formed XML, and then write XSLT to extract the useful information. In this case, the generated XML uses an undefined prefix “xlink:”, which I had to remove by hand, but apart from that everything needful was done by the XSLT stylesheet htScraper.xsl, resulting in a document (htListFull.xml) containing entries like this:

<bibl>
 <title>The first night; a comic drama in one
   act.</title>
 <pubDate>1800</pubDate>
 <author>Lacy, Thomas Hailes, 1809-73.</author>
</bibl>
<bibl>
 <title>After the party; a comedy in one act.</title>
 <pubDate>1870</pubDate>
 <author>Lacy,
   Thomas Hailes, 1809-1873.</author>
 <ref target="https://hdl.handle.net/2027/hvd.32044072039373">HT</ref>
</bibl>

No <ref> element is generated for entries which are not accessible in “full view” mode. Also note that the handle quoted above is for the HathiTrust index page; to download the whole text as a single PDF file you must visit that page, and wait while the PDF is constructed. Oh, and yes, you must also be logged in at a HathiTrust member institution. So much for “full view” access.
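Once the entries are in that form, extracting a worklist of the “full view” items is trivial; a sketch, assuming htListFull.xml wraps the <bibl> elements in a single root and uses no namespace, as in the excerpt above:

from lxml import etree

for bibl in etree.parse("htListFull.xml").findall(".//bibl"):
    ref = bibl.find("ref")
    if ref is None:
        continue                 # no ref means not available in full view
    title = (bibl.findtext("title") or "").strip()
    print(ref.get("target"), title)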

Open Texts

I blogged about this now sadly un-maintained site back in October 2020. The site was dark for a while, but seems to be back for the moment: this morning I visited and was able to download a list of 106 hits in CSV, XML, or JSON in one click, which was nice.

This is what I like to see at the foot of my first page of results

Individual results look like this:

<doc>
 <str name="organisation">Bodleian Libraries</str>
 <str name="idLocal">016930688</str>
 <str name="title">King of the Merrows, or, The prince and the piper : a fairy extravaganza / written by F.C. Burnand, from an original plot constructed by J. Palgrave Simpson.</str>
 <str name="urlMain">http://purl.ox.ac.uk/uuid/42d1488e99284fef9421268c33e4730d</str>
 <int name="year">1862</int>
 <arr name="date">
  <str>1862</str>
 </arr>
 <arr name="publisher">
  <str>Thomas Hailes Lacy</str>
 </arr>
 <arr name="creator">
  <str>Burnand, F. C.</str>
 </arr>
 <arr name="description">
  <str>First performed at the Royal Olympic Theatre, 26th December, 1861.</str>
 </arr>
 <arr name="placeOfPublication">
  <str>London</str>
 </arr>
 <str name="catLink">http://solo.bodleian.ox.ac.uk/permalink/f/89vilt/oxfaleph016930688</str>
 <str name="language">English</str>
</doc>

are easily converted (e.g. by my stylesheet opentexts-conv.xsl) to produce

<bibl>
 <title>King of the Merrows, or, The prince and the piper : a fairy extravaganza / written by F.C.
   Burnand, from an original plot constructed by J. Palgrave Simpson.</title>
 <pubDate>1862</pubDate>
 <author>Burnand, F. C.</author>
 <note>First performed at the Royal Olympic Theatre, 26th December, 1861.</note>
 <ref target="http://purl.ox.ac.uk/uuid/42d1488e99284fef9421268c33e4730d"/>
</bibl>

which is easily merged into the main Lacy catalogue.

Moreover, in this case (hoorah for the Bodleian), a visit to the publicly available URL actually downloads the whole of the PDF file without further ado.

Sadly, PURLs are available for only three of the items in the Open Texts list of 106; the vast majority (90) being handles from HathiTrust, and the rest (13) links to archive.org. Moreover, the data has not apparently been updated since October 2020, which is presumably why it does not have anything like the 316 handles I found in the Hathi Trust catalogue for myself. In fact, every one of the handles it supplies exists also in the htListFull.xml list.

Google Books

A cauchemar. Google has digitized (almost certainly) all of the Lacy Acting Edition volumes, but it seems to be entirely arbitrary which ones you can access via Google Books. I have tried various approaches to searching (there is something called a `bibliogroup` for Lacy), and then reprocessing the resulting (very obscure) HTML, but cannot say I have succeeded in cracking this code. The file gbSearch.xml contains the screen-scraped-and-converted-to-XML output from a query for this; the stylesheet gbSearch.xsl filters out from this the 37 useful links it provides to files you can actually download from Google Books (but you still have to go through a captcha check, of course).

Searching specifically for “Lacy Acting Edition” on Google Books will provide an exciting list of entries for each of the first 93 volumes in the LAE — but only two of them (volumes 77 and 93) actually have anything you can download. (I belatedly discovered that this annoying behaviour can be modified by selecting “Full View” from the drop down menu at top left of the query screen, which hides the titles you cannot have). On the other hand, there are also a few occasions where the text actually digitized for a specific title is the whole of the volume in which that title appears. Thus, searching Google Books for The Half Caste will provide you with a link for the whole of volume 97, in which that title appears. Likewise a search for In Three Volumes actually gives you a link to the whole of Volume 91. Anyway, once you have a reliable link to Google’s equivalent of the Internet Archive’s “details” page (at the moment, it looks like https://www.google.co.uk/books/edition/Oberon_An_opera_in_four_acts_in_prose_an/IoFaWP1TQgkC) you can pass that to Google, and get back a nice “New” Google Books page in the middle of which is a nice “Download PDF” button. Which works — once you have completed the annoying captcha test of course.

All very well if you have the time to spend cutting and pasting links: but why couldn’t Google have provided a simple download in a form I can script? I assume it’s for the same reason they want to control access to these resources — to stop unscrupulous entrepreneurs in the “Print On Demand” industry from making a swift buck. And we all know how effective that policy is, don’t we?

Bodley

Real librarians do it with Z39.50. But my results (bodleyTexts.xml) show only 9 titles available in digital form.

The Hall Collection

Every now and then, serendipitous searching pays off. The Hall Collection contains approximately 600 English plays mostly from the late 18th and early 19th centuries, originally used as prompt books by a professional actress called Clara St. Casse. The Collection was donated to the University of Warwick Library by a Mrs G. F. Hall of Leamington Spa, together with a collection of other printed plays. Naturally it includes quite a few (102 to be exact) Lacy titles. Although the Warwick site (https://wdc.contentdm.oclc.org/digital/collection/hall) seems to provide only downloads and browsing of individual pages, someone, presumably from the Library, has also had the good sense and generosity to deposit the whole collection at archive.org, from which I was able to obtain an XML file (hallColl.xml) which can be readily processed to produce links to the 102 Lacy published titles: see hallCollTitles.xml

Internet Archive

This archive has an excellent search interface and will also deliver results in any tractable form you like, including JSON or XML. It cannot however perform magic to overcome variant cataloguing practices amongst the collections it has incorporated. So, for example, a search for “Lacy Acting Edition” throws up precisely one hit (“a copy graciously made available by Fordham University”). A more general search for “Thomas Hailes Lacy” gets me 125 hits, 102 of which come from the Hall Collection. A search for “(thomas hailes lacy) AND -collection:(hallcollection)” finds me the 23 titles not included in the Hall Collection. On the other hand, a search for “T.H. Lacy AND -collection:(hallcollection)” finds 66 titles, not included in the Hall Collection, but not included in the foregoing either.

On the bright side, the hits can be downloaded in a format which is more or less identical to that generated by the XML option quoted for the Open Texts server above, so mungeing the results lists together is a Simple Matter Of Programming, resulting in iaList.xml.
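The Simple Matter Of Programming in question is little more than deduplication. Something like the following would do it, though the input file names are purely illustrative and the choice of urlMain as the deduplication key is an assumption:

from lxml import etree

merged = etree.Element("response")
seen = set()
for source in ["iaLacy.xml", "iaHailesLacy.xml", "iaTHLacy.xml"]:
    for doc in etree.parse(source).findall(".//doc"):
        url = doc.findtext("str[@name='urlMain']")
        if url and url not in seen:
            seen.add(url)
            merged.append(doc)

etree.ElementTree(merged).write("iaList.xml", encoding="utf-8", pretty_print=True)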

An experiment in CLS

Some time ago, I agreed to participate along with several others much smarter than me in COST Action Work Group 3. The goals of this work group were, amongst other things, to run a small experiment in counting verb frequencies on ELTeC texts enhanced with POS and lemma information. It took a surprisingly long time to find out exactly what contribution was required of me, and I make no claim to have got it right even now. But here’s what I thought I was doing.

First, I wrote an insultingly simple XSL stylesheet to produce a list, in descending frequency order, of verbal lemmas in each of the (now) 10 ELTeC level 2 corpora. For example, here’s the start of the file rom/verbFreq.xml:

<frequencies>
 <lemma form="face" freq="30919"/>
 <lemma form="avea" freq="29391"/>
 <lemma form="zice" freq="22673"/>
<!-- ... and so on for several hundred more lines -->
</frequencies>

… which tells us that in our data Romanian’s favourite verb has the lemma face, and the next favourite is avea. The code for doing this is (like all the rest of the code described here) in the GitHub repo COST-ELteC/ELTeC-data/Scripts if you care: it’s imaginatively called verbFreqs.xsl.
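If XSLT is not your thing, the same count can be sketched in a few lines of Python; the directory layout here is an assumption, as is the tokenization: I am taking it on trust that the level 2 files mark each token as a <w> carrying @lemma and @pos attributes, with “VERB” as the verbal tag.

from collections import Counter
from pathlib import Path
from lxml import etree

W = "{http://www.tei-c.org/ns/1.0}w"
freq = Counter()
for novel in Path("ELTeC-rom/level2").glob("*.xml"):
    for w in etree.parse(str(novel)).iter(W):
        if w.get("pos") == "VERB" and w.get("lemma"):
            freq[w.get("lemma")] += 1

for lemma, n in freq.most_common():
    print(f'<lemma form="{lemma}" freq="{n}"/>')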

Next, I wrote another simple-minded script to extract from each novel a bag of words, with no markup or punctuation: just all the verbs, for example, or all the nouns, in their order of appearance in the text. So that celebrated work Hard Times, which begins in the original like this

<div type="group">
 <head>BOOK THE FIRST <hi>SOWING</hi></head>
 <div type="chapter">
  <head>CHAPTER I THE ONE THING NEEDFUL</head>
  <p>‘<hi>Now</hi>, what I want is, Facts. Teach these boys and girls nothing but Facts. Facts alone are wanted in life. Plant nothing else, and root out everything else. You can only form the minds of reasoning animals upon Facts: nothing else will ever be of any service to them.</p>
<!-- ... -->
 </div>
<!-- ... -->
</div>

generates a bag of words starting like this

want be teach be want|wanted plant root form be…    

if I ask for VERB lemmas, or like this

book sowing|sow chapter thing fact boy girl fact fact life mind reasoning|reason animal fact    service 

if I ask for NOUN lemmas. You may wish to complain about the behaviour of the lemmatizer here, but I am taking the path of least resistance and using whatever treetagger (in this case) produces without cavil. This deplorable laziness returns to bite me further below…

I wrote some Python to run the XSLT script filter.xsl which does this task: the script is called filter.py and it uses a Python interface to the Saxon C processor, which I was very pleased with myself about when I got it working; less so later, see below. There’s more mundane detail of how to run it in the README in the Scripts folder.

If still awake, you are probably wondering what the point of all this was. And here comes the scientific bit. The little workgroup I had signed up for wished to test a Hypothesis, which (if I understand it correctly) might be crudely summarized thusly:

  • The European novel undergoes some sort of seismic shift around the turn of the 19th century, which is popularly known as The Rise of Modernism
  • Modernism has many stylistic correlatives, but they include notably a focus on the interior life of characters, on sensation and feeling, rather than on objective omniscient narrative
  • If this is true, we should expect to see a change in the frequency with which verbs associated with that ‘inner life’ appear over time.

I hope you can see where we are going with this, now. All we need is a reasonably plausible list of verbs which express aspects of ‘inner life’. And so, for the next few months, with zoom and email and similar modern contrivances, the group theorized how to actually produce such a list. I may have fallen asleep during the process and missed something critical, but eventually (I think) it was decided that we would explore two approaches to identifying our list. Firstly, we’d ask language experts to vote for their top ten “inner” verbs. Secondly, we’d use a statistical procedure (word vector embedding) to identify a list of candidate verbs automagically. Then we’d compare the results, declare victory, and move on.

What could possibly go wrong? Well, at least two things.

Firstly, the ask-an-expert approach turned out to be less successful than it might have been, largely for purely logistical reasons. If we had asked the experts simply to review the existing verb frequency lists for their language and identify in them those verbs which were indubitably and always betokeners of interiority, plus any others which were a bit thus inclined sometimes, then we might have got our results a bit faster. But we didn't, and the experts, understandably a bit mystified by the whole process, gave us lists which varied widely in their format and scope. So I found myself having to tweak and readjust their contributions, to remove duplicates and ambiguity. As for the automagical procedure, it proved a little challenging for most participants to run, if only because it required access to a machine capable of running Google's word2vec program, which is not meant for your average laptop. In any case, you can see the resulting word lists in the file innerVerbs.xml which I hope is fairly self-explanatory.
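For what it's worth, the automagical idea is easy enough to sketch in Python with the gensim library, though that is not what the group actually used (they used Google's own word2vec program); the filenames and seed verbs here are entirely invented:

from gensim.models import Word2Vec

# train embeddings on the lemmatized bags of words (one list of lemmas per text)
sentences = [open(f).read().split() for f in ["ENG001_verbs.txt", "ENG002_verbs.txt"]]
model = Word2Vec(sentences, vector_size=100, window=5, min_count=5, workers=4)

# then harvest the nearest neighbours of a few seed "inner life" verbs
for verb, score in model.wv.most_similar(positive=["think", "feel", "remember"], topn=20):
    print(verb, round(score, 3))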

Secondly, my simplistic notion of ‘lemma’ turned out to be problematic. As you noticed above, when unable to choose between two alternatives, treetagger obligingly gives you both of them, separated by a vertical bar. That’s no problem for me: I just discard the alternative. But other lemmatizers behave differently. For example, in our Portuguese data, the lemmas for reflexive verbs are suffixed by a # and an indication of person. In our Hungarian data, spelling variations of the same basic lemma are sometimes presented as different lemmas. In the first case, should I simply ignore the part of the lemma after the #? In the second, should I aggregate all the differently spelled variants and consider matches for any of them as equivalent? As usual in computational linguistics, it all depends what you think you’re counting…
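One possible compromise might be a normalization step along these lines (this is an illustration, not what the project decided; the variants table would have to be made by hand and is left empty here):

VARIANTS = {}   # hand-made mapping from attested spelling variants to a canonical lemma

def normalize(lemma):
    lemma = lemma.split("|")[0]    # keep only the first of treetagger's bar-separated alternatives
    lemma = lemma.split("#")[0]    # drop the "#" person suffix used for reflexives in the Portuguese data
    return VARIANTS.get(lemma, lemma)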

Despite these metalinguistic anxieties, I wrote a (needlessly complicated) python script called verbCount.py to count the frequencies of the inner verbs through time, comparing the things-called-lemmas in our various lists of inner verbs with the things-identified-as-lemmas in the level2-encoded files. Invoking various XSLT scripts and Saxon C as before, this script grudgingly churned out a file for each corpus under examination, with a row for each title and a column for each inner verb, like this:

extId    year verbs innerVerbs aimer connaître croire entendre regarder savoir sembler trouver voir vouloir
FRA00101 1860 3889  310          17      9      28      22       18      52      5      47      83     29
FRA00102 1883 5499  465         112     21      38      16       17      55     32      30      77     67
FRA00201 1910 7577  682          26     20      41      75       96      63     49      93     128     91

I say ‘grudgingly’ because the script was obliged to process the whole of every file in order to extract a year of publication from its TEI header, and consequently ran with noticeable slowness. If I’d thought to include the year of publication along with other metadata in the filename of the “bag of words” I could have used that instead, which would have been much quicker. Maybe if I get a better set of inner life verbs I’ll revise the scripts to do so.
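Something along these lines would do the trick; the filename pattern is purely hypothetical:

import re

# if the year were baked into the bag-of-words filename, e.g. FRA00101_1860_verbs.txt,
# recovering it would be a one-liner instead of a full parse of the TEI header
def year_from_filename(name):
    m = re.match(r"[A-Z]+\d+_(\d{4})_", name)
    return int(m.group(1)) if m else None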

Anyway, we now have a bunch of CSV files. And why? Because my colleague Diana has produced some R scripts which will plot this data set so everyone can understand it. Or at least look at it. Here’s what we get for some of the Portuguese data:

[Figure: innerVerbs.png, a plot of the inner verb data for some of the Portuguese texts]

I leave it to the statistically-informed to interpret this and other similar results. The closing conference of the COST Action, taking place next week, includes a paper (on which I am somewhat embarrassingly cited as co-author) presenting the results in more detail.
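For anyone wanting to eyeball the CSV files without R to hand, a rough matplotlib equivalent of the plotting step (not Diana's script; the filename is invented, and comma-separated columns are assumed, with the headings shown in the sample above) might be:

import csv
import matplotlib.pyplot as plt

years, ratios = [], []
with open("POR_innerVerbs.csv") as f:
    for row in csv.DictReader(f):
        years.append(int(row["year"]))
        # proportion of inner verbs among all verb tokens in the text
        ratios.append(int(row["innerVerbs"]) / int(row["verbs"]))

plt.scatter(years, ratios)
plt.xlabel("year of publication")
plt.ylabel("inner verbs as a proportion of all verbs")
plt.show()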

Reviving the VPP : a start

The Victorian theatre has not enjoyed such systematic documentation or digitization as the Victorian novel, reflecting perhaps scholarly perceptions of their comparative artistic significance. Yet it is a truism that the influence of the Victorian popular theatre on the development of the novel during this period was by no means limited to the efforts of dedicated amateur enthusiasts such as Dickens and Collins and their circle. In Emily Allen's words, “Victorian theatre was the novel's ally, inspiration, and competitor”. As an ongoing expression of popular culture, nineteenth-century theatre has deep roots and many branches; its lineage runs from the high gothic of romantic melodrama to the memes of cinema and modern-day television, embracing both the theatre of sensational spectacle and that of domestic realism. Yet for those wishing to see the phenomenon as a whole, to perform a kind of distant reading of its texts, there is nothing approximating to Bassett's At the Circulating Library database of Victorian fiction (http://www.victorianresearch.org/atcl/search.php) in terms of completeness or coverage. Such attempts to document the Victorian theatre as do exist have generally done so in terms of the careers of individual actors, writers, or institutions. Although collections of the primary source materials exist in a few libraries, these are the consequence of individual collections or bequests, rather than of any attempt at systematic coverage.

One notable exception is Richard Pearson’s Victorian Plays Project (VPP), originally funded by the AHRC 2005-2007, and still hosted at the National University of Ireland in Galway. A key deliverable of this project was an online catalogue of the approximately 1500 titles making up Lacy’s Acting Edition of Plays, derived from the (apparently unique) surviving copies of that edition preserved in what was then the Birmingham Central Library.

Thomas Hailes Lacy began publishing contemporary plays at his Covent Garden printing house shortly after the Theatre Regulation Act of 1843, which removed the duopoly previously enjoyed by the Covent Garden and Drury Lane theatres. In a far-sighted move, Lacy acquired the rights to print plays from the theatrical managers, ostensibly to protect their copyrights, though he was not averse to a little piracy himself. These “Acting Editions” contained everything needful to produce a play, including details of costumes, settings, blocking, accompanying business etc., as well as cast lists and the text of the play itself. New titles appeared every year until the 1870s, when Lacy sold the whole collection to Samuel French, an American publisher with whom he had exchanged plays for publication for the previous two decades.

According to the existing VPP website (http://victorian.nuigalway.ie/modx/index.php?id=187), in addition to producing this on-line catalogue, the project aimed to “generate e-texts in .pdf format that replicate the original texts re-edited for electronic usage” and also to “create a database of plays marked up using TEI encoding in XML that will be searchable”. The website also states that “Transcription of the Lacy’s Catalogue, and editing and encoding of the texts was undertaken by the Victorian Plays Project using OxyGen TEI mark-up software and Acrobat Professional. ” (http://victorian.nuigalway.ie/modx/index.php?id=182).

As of today, the website does provide a list of all 1428 titles in the Acting Edition, including basic data about their authorship and performance history. It also makes available a set of 239 titles which have been transcribed and reformatted as PDF files preserving much of the typography of the originals. Other formats, if they exist, are not visible on the website, though a small number of titles have clearly been annotated and indexed at some point in the past with separate lists of named entities and striking phrases. (Some further information on this and a closely related sister project concerned with the records of the Lord Chamberlain’s Office is provided by Radcliffe, C. & Mattacks, K., (2009) “From Analogues to Digital: New Resources in Nineteenth-Century Theatre”, 19: Interdisciplinary Studies in the Long Nineteenth Century 8. doi: https://doi.org/10.16995/ntn.499 )

However, the VPP website does not seem to have been developed since 2015, and the untimely death of Professor Richard Pearson at the end of 2018 (https://bavs.ac.uk/uncategorized/obituary-richard-pearson/) casts its future development into serious doubt. As is all too often the case, preservation of a digital archive turns out to depend as much on individual personal support as on technological constraints.

I have therefore applied for funding to carry out an initial scoping study investigating the feasibility of reviving and bringing up to date the Victorian Plays Project. If accepted (and there’s no reason to suppose it will be) this would naturally begin by reviewing any additional digital materials which have been archived, and by interviewing personnel associated with the original project at Galway. The inventory resulting from this review would be extended with a survey of other digital versions of the Lacy Acting Edition now available online (for example, in transcribed form at Project Gutenberg and elsewhere and in digital facsimile via the Hathi Trust or the Internet Archive). Contacts at Galway and elsewhere (for example in the library and special collections community, and in the professional Victorian studies networks) would be approached for information about existing related endeavours, and to raise awareness of the project.

If sufficient suitable materials can be found, the next step will be to design, document, and implement procedures to convert them all to a single simple TEI encoding, consistent with (for example) that used by the DraCor project, or the ELTeC. Following these de facto community standards has many advantages, such as the ability to re-use existing software tools, or the ability to leverage existing community familiarity with the format. The resulting digital archive would be initially maintained as an open repository on GitHub, with all converted materials made available under a CC-BY licence.

It is probable that automatic conversion to this (or any other) target format will be much easier for texts already transcribed than for texts only available in digital image format. In a second phase of the project it is planned to explore and report on the applicability of “machine learning” techniques to enhance the performance of existing OCR platforms. By comparison with novels and other print material from this period, the Acting Edition texts are unusual in the complexity and variety of their typography. This complexity, derived from the need to clearly distinguish speaking parts, stage directions etc., is however regular and systematic and should thus be potentially beneficial in the task of automatic markup.

The availability of a consistently organized and encoded corpus of Victorian play texts will make possible the application of emerging distant reading methods and tools to a component of Victorian cultural history which has been curiously neglected, if not undervalued, hitherto.

In the meantime, I have been tracking down other existing online resources for the description of the 19th century theatre. But that, as they say, is another and a different blog posting perhaps.

EEBO TCP in P5 – the return

It seemed a necessary act of piety to respond positively to a request for help from my former colleagues at the Oxford Text Archive, when they finally got around to considering the conversion of the latest (and one must fear the last) tranche of EEBO texts from the Text Creation Partnership. The conversion into a TEI P5 compatible version of the vast majority of EEBO-TCP phases 1 and 2 texts and their subsequent upload to a gazillion github repositories was accomplished by a team headed by Sebastian, back in the days when TEI Simple Print was new, and we were all a bit more bright-eyed and bushy-tailed. Now that the OTA has received its last tranche of TCP phase 2 texts, it should not (surely) be too much of a sweat to crank them through the same conversion process and deposit the results in Github too. Though of course nothing is ever quite that simple.

The XSLT script which does the heavy lifting is called tcp2tei and (thank you Sebastian) here it is, safe and sound in the TEI Stylesheets repository. And it still works. There is even a shell script for creating a new github repo and uploading each file to it from the same masterly hand; this one nearly works, as a consequence of github having got a little more fussy about authentication mechanisms in the last five years, but that’s not hard to fix. So I should just declare victory and move on.

On closer inspection however three issues have surfaced (so far).

Firstly, the catalogue numbers. In the current TCP P5 texts, each TEI header has a string of <idno> elements supplying its identifier in the Michigan DLPS database, the identifiers of one or more MARC records from OCLC or UMI catalogues, its Proquest number, identifiers in one or more standard bibliographies (ESTC, Wing, STC, Evans etc.) and the number of the image set which was scanned. For some reason I do not understand, not all the new texts supplied to the OTA have their full complement of these identifiers. For example, of the 6498 titles supplied, 3062 have OCLC MARC record identifiers (discounting an additional 187 duplicated OCLC records in which the record identifier is prefixed redundantly by “ocn”). None of the 6498 has an image set number. Only 2987 have a Proquest identifier, and it's always the same as the MARC catalogue number. And 963 have no bibliographic identifier of any sort.

No matter: after my skirmishes with EEBO metadata last summer (reported at https://foxglove.hypotheses.org/date/2020/08) , I am confident of being able to recover missing catalogue numbers from at least two different sources: one being Paul Shaffner (whom God preserve)’s eebodat1.xml and the other being Proquest (whom God has recently abandoned)’s title list. The stylesheet I am working on to do mundane things like change the availability statement in the header is duly expanded to supply the missing <idno>s. I decided to add the new Proquest numbers (the so called GOID) even though these are not present in the existing files.

Secondly the image links. One reason for caring about the Image Set numbers is that they are used as part of the address to which @facs attribute values scattered throughout the texts are mapped. Back in the day, it was possible to link directly to a page image in this way. This facility is however no more: Proquest (and presumably their successors) will only allow you to access individual page images by using their own interface, so far as I can tell. It is possible to access the same images via the JISC Historical Dataset sites by judiciously stringing together values from those <idno>s, but I have yet to find a reliable way of doing so for individual pages. For the present therefore the @facs values will remain a touching reminder of how things once were. Though I did add a link to the JISC site into the new headers, along with other useful documentation.

And thirdly the real subject of this entry: what to do about @rend. Now, I have long believed that the TCP P5 texts are not only valid TEI P5 XML but also valid against a specific TEI schema, to wit the schema named (after much argument and in the teeth of some opposition) TEI Simple Print. I distinctly remember (or think I remember) Sebastian and Magdalena putting quite a bit of effort into enhancing that schema with lots of @rendition values to match EEBO-TCP requirements. So when I actually tried validating my nice new files against it, I was a bit puzzled to find that they didn’t.

Specifically, the attribute @rend is not available in the TEI Simple schema, and has not been since at least August 2016. In its place, I should be using @rendition to point to one or more of the predefined simple:rendition values. So I spent an hour or two tabulating all the @rend values used in the new files, and finding their simple: equivalents. This proved easy for most cases, but impossible for just a few (7), some of them esoteric (@rend='upsideDown' anyone?), but others (e.g. @rend="margQuotes" and its friends margSglQuotes and margDblQuotes) quite frequent and clearly necessary. I also realised (belatedly) that allowing my script to make this change was going to make my new texts inconsistent with the existing ones. For the existing TCP P5 texts are not valid against TEI Simple either, I discovered, and somewhat to my embarrassment: they use @rend with all sorts of exotic values all over the place. They do use @rendition, but in one way only: some <pb/> elements have @rendition='simple:additional'. This was entirely mysterious to me for a while (until Paul remembered what it was for). In any case, I will worry about that when a new systematic revision of the whole collection is undertaken, should that day ever dawn. For the moment, I will grit my teeth, stick with @rend, and assure anyone who asks that all TCP P5 texts are valid against, err, TEI ALL. This is known as biting the bullet, I believe.
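The tabulation itself is the sort of thing a dozen lines of Python will do; roughly this, assuming (my assumption) that the new texts sit together in one folder:

from collections import Counter
from pathlib import Path
from lxml import etree

rends = Counter()
for f in Path("newTexts").glob("*.xml"):      # folder name invented
    for el in etree.parse(str(f)).iter():
        if el.get("rend") is not None:
            rends[el.get("rend")] += 1

# print every @rend value with its frequency, commonest first
for value, n in rends.most_common():
    print(n, value, sep="\t")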

Update: 10 June 2021. I uploaded all 6498 new texts to new repositories in the Github textcreationpartnership collection over a period of 24 hours last week. And, at somewhat greater length, I have now updated my repository at eebo-bib to describe more precisely what I did to create a TEI-compatible TCP bibliography. Definitely time to declare victory and move on.

Seven steps to Ossian

A TEI transcription of the 1773 edition of James Macpherson’s “translations” of the works of “Ossian”

Why would anyone want such a thing? I can’t imagine, but here’s how I made this one. It turned out to be a seven step process — so far. You can check out each stage from this github repo, if you’re really curious…

1. Decide which PDF to work from

You might think that one library’s digitized copy of “the 1773 edition of Ossian” would be much the same as another’s. But no. There are variations in the physical state of the originals, and the PDF format in which the digitization is made available may also vary. I downloaded three different digitized versions from the Internet Archive, but mainly I used the PDF version of the copy preserved at the National Library of Scotland. https://ia802302.us.archive.org/33/items/poemsofossiantra11macp/poemsofossiantra11macp.pdf I say “mainly” because that particular PDF file had a curious glitch in it which made some of the half-titles disappear when extracted as separate image files. I supplied the missing text from the PDF version of the New York Public Library’s copy.

2. Extract images from PDF

$ pdfimages [filename.pdf] [outputPrefix]

I am too lazy to install anything clever, so I use tried and tested ancient command line Unix tools, like pdfimages. Applying this to my chosen PDF file, I find that each page in the NLS PDF produces three files: two in PPM format which appear to be masks, and one in grayscale representing the page, in negative form. I extract the page image and save it in my img folder, ready for the next stage.

3. Do OCR using medieval rules

$ tesseract [inputfile] [outputfile] -l enm

As noted above, I have a preference for old-fashioned command-line Unix tools, and tesseract, once instructed to use an appropriate language model (enm, rather than eng), actually does a pretty good job of recognizing 18th century typography. It consistently fails on ligatured “ct” and a few other oddities, but is much better than I expected at distinguishing long-s from f. Most of its errors seem to be due to poor image quality. At the end of the process, I have about eight hundred text files, each corresponding with one page of the source, and most of them containing plausible text, which I save in the txt folder.
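Batch-processing the whole folder is a one-liner in the shell, or, to stay with Python, something like this (the .pgm extension is a guess at what pdfimages produced; adjust to taste):

import subprocess
from pathlib import Path

Path("txt").mkdir(exist_ok=True)
for img in sorted(Path("img").glob("*.pgm")):
    out = Path("txt") / img.stem              # tesseract adds the .txt extension itself
    subprocess.run(["tesseract", str(img), str(out), "-l", "enm"], check=True)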

4. Hand-check, page by page, introducing minimal non-xml markup

I then (and this is where the time goes) proofread each one, introducing some absolutely minimal markup, of my own invention. The cheatsheet reads as follows:

  • introduce a — line at start and end of text on page
  • introduce a == line at start and end of note-text on page
  • introduce a blank line between paras but otherwise retain linebreaks
  • introduce an extra hyphen following end-of-line hyphens which are to be retained
  • replace * or +1 sigla for notes and note references with @ and a sequence number
  • use entity references for long dash, accented letters etc
  • use the character “ for open double quotes
  • retain forme work on a single line
  • delimit smallcaps with { … }
  • delimit italicized phrases with {{ … }}
  • use % to mark the start of a dramatic style speech
  • add \ at end of verse lines
  • add $ at end of speech
  • add &end; at end of an argument or other chunk

I made the corrections using, need you ask, emacs aided and abetted with some perl one-liners for bulk corrections. Reading Ossian is an odd way to pass time during lockdown, but no worse than some of the other sanity-preserving expedients one reads about on Twitter. A good soundtrack seems to be almost any Sibelius symphony.

5. Transform and (slightly) reorganize the textfiles into proper XML, one per text

perl streamer.prl v1files.txt

This is not the most elegant or indeed sanitary code I have ever written; it also took quite a few iterations to get it working acceptably, which I defined as generating well-formed XML. It reads in a list of filenames, interspersed with flags to tell it when to start a new output file, and what its initial page number should be. Then it processes in succession each page of transcribed text, building up one string containing all the text chunks for a work, and another containing all the footnotes. Footnotes often span pages, of course. The resulting strings are then output as two separate XML <div> elements. Their contents also acquire some minimal XML tagging (<pb/>, <hi>, <p>, <sp> etc.) before they get flushed out. I gave up trying to overcome some inelegant results of a not particularly elegant process. The code is in the Scripts folder of this repo for the morbidly curious; the results are in the xml folder: at least they are well-formed XML.

6. Run XSLT scripts to convert this stuff to kosher TEI documents and validate same.

Since this version is going to be my contribution to the “Ossian Online” project, it should probably follow that project's usage and TEI practices. Alas, they do not have an ODD to tell me what that should be, and their files are apparently validated against TEI-All. But they do have a reasonable amount of documentation, and enough files already available online for me to be able to construct an ODD automagically (take a bow, oddbyexample.xsl, a well-kept secret inside the TEI P5 Utilities repository) and thus a schema I can use to validate my TEI files when I have finished licking them into shape.

As ever, the fun part of the project is seeing how much of the remaining data-mungeing can be scripted in XSLT. Quite a lot, it transpires, though it remains necessary to hand-craft the details of titlepages, tables of contents etc. Another complete sweep through the text checking for miscellaneous things like the following is also needed:

  • words broken by a pagebreak but not properly reassembled (happens occasionally)
  • quotations not marked as such
  • verse lines not marked as such
  • code-switching
  • residual OCR errors (there are always residual OCR errors)

Before launching into that campaign, I checked the <pb/> elements introduced at stage 5 against the page numbering of the original as preserved in the paratextual comments of my transcription. Somewhat to my surprise, the page-numbering corresponded exactly with the number of such elements, enabling me to construct both a reliable reference system and reliable page image links as values for the facs attribute on each <pb/>.

7. Decide on the macrostructure

The Ossian Online project uses <div> elements for every subdivision of the 1773 edition, at whatever level, all the way down from its two volumes to the arguments of individual poems. It takes the perfectly reasonable view that every text can be organized as an ordered hierarchy of uniformly nested objects. As a consequence, the @type attribute for <div> has to do quite a lot of heavy lifting. Odd_by_example enumerates its values as follows:

  • advertisement
  • argument
  • book
  • contents
  • dedication
  • dissertation
  • duan
  • fragment
  • maintext
  • poem
  • preface

This list combines types that have a structural function (fragment, duan, book) with others that are purely descriptive (advertisement, argument, poem). Nothing wrong with that, but I still find this “divs all the way down” approach somewhat problematic, and that for two reasons. Firstly, a <div> is supposedly something incomplete, which is true of (for example) the argument prefixed to each poem or book, but not of the book or poem itself. Secondly, the relation between the argument and the poem requires that the two be siblings within some larger entity, but the poem is not really an incomplete part of that entity in quite the same way as the argument. Furthermore, in the 1773 edition, we have some texts which are undivided (Carricthura for example) along with others which are divided variously into “duan”s (Cathlona) or “book”s (Fingal). Should each book of Fingal be treated as a single text? Should the whole of Fingal be treated as a single text? Can't we have both?

Values such as maintext and poem in the list above ring alarm bells indicating that these ontological issues are being evaded. Since the TEI in its wisdom already provides a mechanism for coping with exactly this (not at all unusual) kind of macrostructure, why not use it? I refer of course to the element <group>.

My version of the 1773 edition prefers to treat each distinct work as a <text>, rather than a <div type='maintext'>. Within it, there is a <front> containing a titlepage or half-title and the argument, followed by a <body>, if the work is not further subdivided, or by a <group> if it is. A <group> combines a number of lower-level <text> elements, each again with a <front> and a <body>. I also treat each of the two volumes of the 1773 edition as a <group>. The file driver.tei embeds each file in the structure using XInclude; it is commented to explain what's going on (a bit).
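Schematically, and simplifying a good deal, my reading of a subdivided work therefore looks something like this (a sketch of the intended structure, not a transcription of any actual file):

<text>                          <!-- one work, e.g. Fingal -->
 <front>
  <!-- half-title or titlepage, and the argument -->
 </front>
 <group>
  <text>                        <!-- one book or duan -->
   <front><!-- its argument --></front>
   <body><!-- the verse --></body>
  </text>
  <!-- ... further books or duans ... -->
 </group>
</text>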

It remains to be seen what my colleagues in Galway make of this radical re-organisation, to say nothing of my perverse desire to retain the long-s form. But at least changing to a format which matches more exactly that of the (excellent) work already done on the Ossian Online site will be a Simple Matter Of Programming.

ELTeCTiT : ELTeC Titles in Translation

(I haven’t posted here since last October. I expect it’s lockdown keeping me quiet. But this morning I did manage to dream up quite an interesting research proposal, which I post here for now.)

The ELTeC corpora were designed explicitly and deliberately [ref design criteria] to exclude translated works. [quote] Despite this principled design decision, it seems self-evident that the cross-cultural dispersal of the novel across Europe – the analysis of whose mechanisms and results is emphatically within the scope of the ELTeC project – depended largely if not entirely on the availability of works in translation. It seems probable that the spread of the novel as a popular form was largely determined by the success of particular works, or classes of work, in translation; in particular, we may surmise, of those works which responded to common social problems and common cultural trends. Novelists in the traditions from which those works sprang influenced novelists working in entirely different cultural milieux; just as writers raised in other traditions may be presumed to have influenced the development of what we now perceive as a unified European culture by providing easily assimilated versions of the exotic.

Although some multi-lingual expertise was undoubtedly prevalent during this period, the availability of translated versions of novels must have been essential to this diffusion, in both directions. But even basic data about the scale and scope of translations over the period covered by the ELTeC (1840 to 1920) is hard to find [refs needed] being largely diffused across national library catalogues which vary in the extent to which such works are associated with their originals, and rarely give any indication of the exact pedigree of any translation. It seems probable for example that translations into some target languages (say Romanian) would have started not from the version in the original language (say English) but from some other more accessible L2 (say French), but this is hard to determine without substantial research into individual titles and authors. Even harder to find or quantify is any information about the linguistic skills or preferences of a novel’s intended or actual readership. While it is highly probable that the languages of the great imperial powers (English, French, German) would be widely understood in those countries directly under the political or cultural influence of those powers, the extent to which they would be considered appropriate vehicles for reading for pleasure is less clear.

There are many theoretical and formal difficulties associated with any investigation of the relationship between a source and its translation, particularly (perhaps) for works of a literary nature. Translation, like speech itself, is one of the more inexplicable human behaviours. It ought not to be possible, and yet it is done, apparently more or less successfully, every day. [For an entertaining and accessible discussion, written from the perspective of a professional translator, see David Bellos “Is that a fish in your ear?” (2011)] We do not propose to address any such issues in this project, though we may provide some indicative data points to help others do so. Our goals are more modest. Each completed ELTeC corpus already provides us with a sample of novel production in a given language within a given time frame, hopefully more or less well balanced with respect to date, size, authorship, and impact. We propose to enrich this list of titles with bibliographic data about all translated versions published within a short period (say 15 years) of their first appearance, recording for example the target language, the translated title, date and other details of publication, the translator's name, and (where this can be determined) the source of the translation. This data will of course be provided in an open format compatible with existing ELTeC deliverables.

Amongst other research questions which availability of this data should address, we identify at least the following:

  • To assess “impact” or “persistence” of titles, the ELTeC corpora rely on a simple reprint count. Do translation counts complement or contradict this classification?

  • Are translation counts statistically correlated with any of the ELTeC classification criteria? That is, where a given collection shows an imbalance for a given criterion, is this also reflected in the translation count?

  • What patterns are discernible in the L1/L2 pairings manifested by our data: for example, which languages are most frequently translated into for each source language?

  • Is there any correlation between stylistic properties of a given group of sources and the languages into which they are translated? Crudely speaking, are romances more often translated into romance languages?

     

Where translated texts are available in digital form, it would be easy also to provide an ELTeC encoded version, using existing production pipelines. At this stage in the project, it is impossible to say whether this will be feasible on a sufficiently large scale to constitute true parallel ELTeC corpora: it would in any case require significant investment of time and effort from the existing ELTeC partners, whereas the collection of metadata can be done more simply.

Lou Burnard

February 2021

A tale of precision and recall

Back in the day when “text retrieval” was a thing, I remember learning the difference between precision and recall, and the need for a philosophical attitude to the fact that an optimal search has to maximize both these fairly incompatible factors. I now realise how much this whole ATCL exercise has been about that fact. My earlier efforts to identify ATCL titles in the catalogues of existing digital archives involved comparison on the basis of a manufactured key, algorithmically derived from each resource by the same process, which seemed a good compromise. This method also seemed necessary because of the limited facilities some of those resources offered for querying and manipulating the results of queries. With the availability of the wonderful “opentexts.world” service neither of these constraints applies – but the difficulties of balancing precision and recall have not gone away.

Here are the steps I am jumping through:

1. Generate a list of queries, one for each title in ATCL which doesn’t yet have any digital copy

2. Using CURL, send the queries off to the opentexts.world server and get back an XML representation of the results, including catalogue information and a link to the digital version

3. Process the results to check that this is actually the title we are looking for, and then extract the link to add to my atcl-links database

As of today, my query list has 7791 items. The NLS server doesn't seem to mind dealing with several thousand CURL requests in rapid succession: it takes about ten minutes to run and dutifully sends me back a fat file containing a fairly straightforward XML representation of the data.

This is fortunate since I am finding it difficult to decide how exactly to construct my query with maximal precision (to avoid false positives) and maximal recall (to avoid missing any). Most titles contain lots of words, most of which are preserved in most catalogues, so an exact word match for the full title is a good start. There are however still a few problems: punctuation and articles sometimes disappear; some titles appear more than once; some titles are very short, and thus generate many false positives. Quite a few titles have the BTAO problem – that tendency of Victorian publishers to improve the title of a new work by adding to it the formula “By The Author Of [insert previously successful titles by this author]” which results in multiple titles containing the same (irrelevant) string. What's a good filter to cut down the noise from such things? My first thought was to require that the author's name should be included; my second was to use the date of publication.

The problem with using the author's name of course is that it isn't necessarily present on the title page, and therefore not necessarily present in the title field of the catalogue record. Many novels are anonymous; many authors published under a pseudonym. The ATCL has done a great job of rounding up and normalising authors, grouping under a single entry all variations of an author's names. Using this it would be possible to find all the works of “Isabella Harwood” whether published under her name or the more usual pseudonym of “Ross Neil”, by increasing the recall of my “creator” search to allow for either name, but I haven't yet done that. Instead, for my first experiment, I just use the main ATCL surname of the author, and resign myself to less recall, but more precision.

Running my 7791 queries like this

curl "https://design.opentexts.world/search/export?advanced=true&format=xml&title=Abbot%27s%20Cleve%3A%20or%20Can%20It%20be%20Proved%3F%20A%20Novel&creator=Harwood"

gets me a total of 6503 results saying “nothing doing”, and 1288 for which there is one or more matching record. I anticipate multiple hits for each title, since there are multiple editions, and of course most of these catalogues list works by volume rather than by work. A very large number of hits usually indicates a problem: for example, there is a novel with the title “Arthur” by Christiana Jane Douglas. Searching just for “title: arthur AND creator:douglas” gets many many titles containing the word “arthur”, some of them editions of the Morte D’Arthur, edited by James Douglas, and others being numerous editions of Crimean War memoirs by one Douglas Arthur Reid. But 1288 hits is not too big a list to refine further.
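For the record, the Python equivalent of each of those curl calls is trivial; this sketch uses the same endpoint and parameters as the example above, and nothing else:

import requests

BASE = "https://design.opentexts.world/search/export"

def query_opentexts(title, creator):
    # one request per ATCL title, exactly as in the curl example
    params = {"advanced": "true", "format": "xml", "title": title, "creator": creator}
    r = requests.get(BASE, params=params, timeout=60)
    r.raise_for_status()
    return r.text   # an XML result set, to be checked against the ATCL record

# e.g. query_opentexts("Abbot's Cleve: or Can It be Proved? A Novel", "Harwood")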

My second experiment searches for the full title as above, but filters by date of publication. This produces slightly different numbers: there are now 6143 “nothing doing” responses, and 1648 with at least one hit. More interesting perhaps is that I can now compare the two result sets and see which titles are not found by either query – by hypothesis these are genuinely not available, because they don’t exist in the OpenTexts database – and which are found by one but not the other. There are 659 records not found by the search-with-author queries but found by the search-with-date option, whereas there are only 299 records not picked up by the search-with-date query but found with the search-with-author option. Looking down that list very quickly, I see that in most cases the disparity in dates is because the digitized copy is of a later edition of the same work, and this starts me wondering how much later an edition has to be before I decide it’s not satisfactory. The ideal might be to include only digitizations of the first edition, but an edition produced a year or two later is probably fine. Some of these texts have a long and complicated publishing history in which distinguishing the edition is quite critical; others were reprinted once or twice and then disappeared forever.

I am now leaning to the view that the way forward is to maximize recall, simply by combining the 299 records missed by the search-with-date strategy with the rest, and then to pass those results through another filter to improve its precision. This filter would check, for example, whether the publication details for each candidate match, or are within an acceptable range. But it’s very pleasing to note that I have now identified at least one digital version for 13,769 of the 19,912 titles in ATCL, i.e. 69%. Now, if I could only persuade the British Library to be a bit less secretive…
