
Yesterday’s Information Tomorrow (maybe)

If you go to the TEI’s website at http://www.tei-c.org you will find, as you might hope, a respectable number of documents tracking the evolution of the Text Encoding Initiative over the last umpteen decades. Curiously, though, the record for the most ancient period (before 2008, shall we say) is a lot easier to find and manipulate than that for more recent times. This posting records my attempts to put together in archival format the full record of the meetings of the TEI Technical Council.

The Council, as any fule kno, met for the first time in 2002, and is still producing regular reports of its debates and its decisions. There is a page on the TEI website (https://tei-c.org/activities/council/Meetings/) which “lists TEI Technical Council meetings and teleconferences, with links to the meeting minutes.”

I downloaded that list (it’s a WordPress HTML file, of course), ran it through HTML Tidy, and processed the result to produce a nice simple TEI file of entries like this

<list>
 <head>2022</head>
 <item> conference call <date>8 December 2022</date>
  <ref target="https://tei-c.org/activities/council/meetings/tei-technical-council-teleconference-2022-12-08/"> [on website]</ref></item>
 <item> conference call <date>10 November 2022</date>
  <ref target="https://tei-c.org/activities/council/meetings/tei-technical-council-teleconference-2022-11-10/"> [on website]</ref></item>
 <!-- ... -->
</list>

My quixotic goal is to enrich this data with links to TEI source files for each set of minutes, preferably in a consistent TEI format.

Now, twenty years ago this would have been quite a reasonable proposition since (as the current TEI Vault shows), the TEI once had an “eat your own dogfood” policy of producing all of its documents in TEI. Over the years, this policy has varied somewhat, largely as a consequence of changes in the tools available, and the culture that goes with them. These policy changes relate not just to the look and feel of the website itself but also to which versions of its contents are preserved and how. Today, I think it is not unreasonable to say that much of the TEI website exists only as WordPress pages: many of those pages were first created as Google Docs and then converted to WordPress, some of the older ones were created originally in hand-crafted TEI XML, and the very oldest were created in TEI P4 SGML, but the only versions that can reliably be downloaded from the current website are in WordPress HTML.

Much of the time, of course, this is unproblematic. Mostly, we just want to read the stuff, not analyse it. But occasionally, and especially for the older material, whoever or whatever was responsible for producing the WordPress files has really made a hash of it. Consider, for example, the working paper

https://tei-c.org/activities/council/working/tcw02-approaching-the-son-of-odd-source-markup-for-p5/

This working document is quite an important one in the history of ODD. But as currently presented on the TEI web site it is badly broken, to the extent that the text has become incomprehensible.  Consider this paragraph:

Comparison with earlier versions of the page (thank you Wayback Machine) shows that this is a recent breakage: here for example is the same paragraph as it appeared back in 2005 when it entered the Internet Archive.

The Wayback Machine, of course, can only archive what its crawlers find. They found this page a couple of times between 2005 and 2018, both captures looking fine, but thereafter only the WordPress version. This would not matter so much were it not for the fact that the original TEI P5 source has apparently not been archived anywhere, so the breakage cannot easily be fixed.

Such losses in translation occur occasionally in more recent documents too. Here’s a paragraph from the WordPress view of document tcm46  (Minutes of the TEI Council’s April 2011 meeting) for example:

Again, until or unless I track down the original version of this file, there’s no way of filling that particular gap.

Less annoying, but more pervasive is the fact that the WordPress files rarely try to preserve any structural or semantic information. The markup will mostly contain a long series of list items, some of which may pertain to the same topic, some of which may in fact be headings, some of which are an accident of formatting. In the text (apart from links) there’s no explicit indication of interesting things you might want to search for, such as names, places, or dates.

Very few WordPress files are well-formed HTML, though the wonderful W3C utility tidy does a good job of pushing them into a processable shape. Out of 120 WordPress files, 38 (nearly a third) failed to respond to this treatment, mostly because they contained an unhealthy mixture of HTML and TEI or TEI-like tags.
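For the record, that treatment is easily scripted. Here is a minimal sketch of the sort of thing I mean, assuming the tidy command-line tool and the Python lxml package are installed; the file names are hypothetical.

import subprocess
from lxml import etree

def tidy_to_xml(html_file, xml_file):
    # -asxml forces XHTML output; -numeric avoids named entities a parser won't know
    subprocess.run(["tidy", "-asxml", "-numeric", "-quiet", "-utf8",
                    "-o", xml_file, html_file], capture_output=True, text=True)
    try:
        etree.parse(xml_file)   # raises XMLSyntaxError if still not well formed
        return True
    except etree.XMLSyntaxError as err:
        print(f"{html_file}: still not processable ({err})")
        return False

tidy_to_xml("council-meetings.html", "council-meetings.xml")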

And finally it has to be said (I’ll be brief) that it seems really sad that the TEI is preserving its deliberations in a proprietary, tool-dependent, presentation-oriented format … the kind of format which the TEI was set up to preserve scholarship from. What kind of apostasy is that?

Hunting for Lacy traces in the digital world


Lacy’s Acting Edition was published in a series of 100 volumes, each containing up to 15 plays, between 1850 and 1874. (All dates approximate and unreliable). In addition to the collected volumes, Lacy sold individual play titles in cheap (6d) paper copies, many of which also found their way into private collections and public libraries. Consequently, copies of various components of the Lacy Acting Editions are now scattered across many research libraries. In some cases, they also exist in digital form, usually as scanned page images.

It is relatively easy to recover details of a library’s holdings from an online catalogue, for example by searching for the string “Lacy’s Acting Edition” or by specifying “Thomas Hailes Lacy” as publisher. It is less easy to restrict the search to generally available digital versions, as there is still no reliable joint catalogue of digitized texts in major public collections, combining the digital holdings of say the British Library, the Bodleian, and other UK libraries, in the same way as has been done for many US libraries by the Hathi Trust, or more generally by the Internet Archive. (A project at the National Library of Scotland did set up such a site, under the name opentexts.world, a few years back, but its status is currently unclear and it appears to be unsupported.)

The ease with which the results of such searches can be obtained in a machine-tractable form (rather than simply displayed on a web page) is also quite variable. One is usually forced to fall back on web-scraping techniques and quite a lot of manual post-editing. This note documents my fairly uneven progress towards a definitive collection of links to existing and freely available digital copies of the plays constituting the Acting Edition on various sites. The fairly good news is that, as of today, of the 1498 titles making up the 100 volume Acting Edition, I have identified 586 which are freely available in some digital form somewhere. Track progress by looking at my online catalogue.

Hathi Trust

A search for the string “Lacy’s Acting Edition” anywhere in the catalogue record at https://catalog.hathitrust.org/ produces 294 hits, of which 246 are available in “full view” (i.e. should be downloadable without formality). A search for the string “Thomas Hailes Lacy” as publisher somewhat counter-intuitively produces only 94 hits. The web page displaying results looks like this:

  1. Results from a HT search. Setting page length to the maximum allowed (100) makes it feasible in this case to download all pages with minimal scrolling.

As usual, the easiest way to screen scrape is to save the HTML page as a file, use tidy to make it into well-formed XML, and then write XSLT to extract the useful information. In this case, the generated XML uses an undefined prefix “xlink:”, which I had to remove by hand, but apart from that everything needful was done by the XSLT stylesheet htScraper.xsl, resulting in a document (htListFull.xml) containing entries like this:

<bibl>
 <title>The first night; a comic drama in one
   act.</title>
 <pubDate>1800</pubDate>
 <author>Lacy, Thomas Hailes, 1809-73.</author>
</bibl>
<bibl>
 <title>After the party; a comedy in one act.</title>
 <pubDate>1870</pubDate>
 <author>Lacy,
   Thomas Hailes, 1809-1873.</author>
 <ref target="https://hdl.handle.net/2027/hvd.32044072039373">HT</ref>
</bibl>

No <ref> element is generated for entries which are not accessible in “full view” mode. Also note that the handle quoted above is for the HathiTrust index page; to download the whole text as a single PDF file you must visit that page, and wait while the PDF is constructed. Oh, and yes, you must also be logged in at a HathiTrust member institution. So much for “full view” access.
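Incidentally, the xlink: problem mentioned above need not be fixed entirely by hand. Here is a minimal sketch of one way to do it in Python, assuming (as in my case) that xlink: is the only undeclared prefix; the file names are hypothetical.

# Declare the missing xlink namespace on the root element of the tidied
# HathiTrust results page so that it parses as XML.
from lxml import etree

with open("htResults.xml", encoding="utf-8") as f:
    text = f.read()

if "xmlns:xlink" not in text:
    # crude but effective: patch the declaration into the root element
    text = text.replace("<html", '<html xmlns:xlink="http://www.w3.org/1999/xlink"', 1)

doc = etree.fromstring(text.encode("utf-8"))
with open("htResults-fixed.xml", "wb") as out:
    out.write(etree.tostring(doc, xml_declaration=True, encoding="utf-8"))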

Open Texts

I blogged about this now sadly un-maintained site back in October 2020. The site was dark for a while, but seems to be back for the moment: this morning I visited and was able to download a list of 106 hits in CSV, XML, or JSON in one click, which was nice.

This is what I like to see at the foot of my first page of results

Individual results looking like this:

<doc>
 <str name="organisation">Bodleian Libraries</str>
 <str name="idLocal">016930688</str>
 <str name="title">King of the Merrows, or, The prince and the piper : a fairy extravaganza / written by F.C. Burnand, from an original plot constructed by J. Palgrave Simpson.</str>
 <str name="urlMain">http://purl.ox.ac.uk/uuid/42d1488e99284fef9421268c33e4730d</str>
 <int name="year">1862</int>
 <arr name="date">
  <str>1862</str>
 </arr>
 <arr name="publisher">
  <str>Thomas Hailes Lacy</str>
 </arr>
 <arr name="creator">
  <str>Burnand, F. C.</str>
 </arr>
 <arr name="description">
  <str>First performed at the Royal Olympic Theatre, 26th December, 1861.</str>
 </arr>
 <arr name="placeOfPublication">
  <str>London</str>
 </arr>
 <str name="catLink">http://solo.bodleian.ox.ac.uk/permalink/f/89vilt/oxfaleph016930688</str>
 <str name="language">English</str>
</doc>

are easily converted (e.g. by my stylesheet opentexts-conv.xsl) to produce

<bibl>
 <title>King of the Merrows, or, The prince and the piper : a fairy extravaganza / written by F.C.
   Burnand, from an original plot constructed by J. Palgrave Simpson.</title>
 <pubDate>1862</pubDate>
 <author>Burnand, F. C.</author>
 <note>First performed at the Royal Olympic Theatre, 26th December, 1861.</note>
 <ref target="http://purl.ox.ac.uk/uuid/42d1488e99284fef9421268c33e4730d"/>
</bibl>

which is easily merged into the main Lacy catalogue.
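For anyone allergic to XSLT, here is a rough Python equivalent of what opentexts-conv.xsl does, offered only as a sketch: the mapping is driven by the field names visible in the record above (title, year, creator, description, urlMain), ignores the edge cases the real stylesheet has to handle, and uses a hypothetical input file name.

# Convert Open Texts <doc> records into simple TEI <bibl> entries.
from lxml import etree

def field(doc, name):
    hits = doc.xpath(f'*[@name="{name}"]')
    if not hits:
        return None
    hit = hits[0]
    if hit.tag == "arr":          # multi-valued fields wrap their values in <str>
        hit = hit.find("str")
    return hit.text if hit is not None else None

def doc_to_bibl(doc):
    bibl = etree.Element("bibl")
    for src, dest in [("title", "title"), ("year", "pubDate"),
                      ("creator", "author"), ("description", "note")]:
        value = field(doc, src)
        if value:
            etree.SubElement(bibl, dest).text = value
    url = field(doc, "urlMain")
    if url:
        etree.SubElement(bibl, "ref").set("target", url)
    return bibl

results = etree.parse("opentexts-results.xml")      # hypothetical file name
for doc in results.iter("doc"):
    print(etree.tostring(doc_to_bibl(doc), pretty_print=True).decode())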

Moreover, in this case (hoorah for the Bodleian), a visit to the publicly available URL actually downloads the whole of the PDF file without further ado.

Sadly, PURLs are available for only three of the items in the Open Texts list of 106; the vast majority (90) being handles from HathiTrust, and the rest (13) links to archive.org. Moreover, the data has not apparently been updated since October 2020, which is presumably why it does not have anything like the 316 handles I found in the Hathi Trust catalogue for myself. In fact, every one of the handles it supplies exists also in the htListFull.xml list.

Google Books

A cauchemar. Google has digitized (almost certainly) all of the Lacy Acting Edition volumes, but it seems to be entirely arbitrary which ones you can access via Google Books. I have tried various approaches to searching (there is something called a `bibliogroup` for Lacy), and then reprocessing the resulting (very obscure) HTML, but cannot say I have succeeded in cracking this code. The file gbSearch.xml contains the screen-scraped-and-converted-to-XML output from a query for this; the stylesheet gbSearch.xsl filters out from this the 37 useful links it provides to files you can actually download from Google Books (but you still have to go through a captcha check, of course).

Searching specifically for “Lacy Acting Edition” on Google Books will provide an exciting list of entries for each of the first 93 volumes in the LAE — but only two of them (volumes 77 and 93) actually have anything you can download. (I belatedly discovered that this annoying behaviour can be modified by selecting “Full View” from the drop down menu at top left of the query screen, which hides the titles you cannot have). On the other hand, there are also a few occasions where the text actually digitized for a specific title is the whole of the volume in which that title appears. Thus, searching Google Books for The Half Caste will provide you with a link for the whole of volume 97, in which that title appears. Likewise a search for In Three Volumes actually gives you a link to the whole of Volume 91. Anyway, once you have a reliable link to Google’s equivalent of the Internet Archive’s “details” page (at the moment, it looks like https://www.google.co.uk/books/edition/Oberon_An_opera_in_four_acts_in_prose_an/IoFaWP1TQgkC) you can pass that to Google, and get back a nice “New” Google Books page in the middle of which is a nice “Download PDF” button. Which works — once you have completed the annoying captcha test of course.

All very well if you have the time to spend cutting and pasting links: but why couldn’t Google have provided a simple download in a form I can script? I assume it’s for the same reason they want to control access to these resources — to stop unscrupulous entrepreneurs in the “Print On Demand” industry from making a swift buck. And we all know how effective that policy is, don’t we?

Bodley

Real librarians do it with Z39.50. But my results (bodleyTexts.xml) show only 9 titles available in digital form.

The Hall Collection

Every now and then, serendipitous searching pays off. The Hall Collection contains approximately 600 English plays, mostly from the late 18th and early 19th centuries, originally used as prompt books by a professional actress called Clara St. Casse. The Collection was donated to the University of Warwick Library by a Mrs G. F. Hall of Leamington Spa, together with a collection of other printed plays. Naturally it includes quite a few (102 to be exact) Lacy titles. Although the Warwick site (https://wdc.contentdm.oclc.org/digital/collection/hall) seems to provide only downloads and browsing of individual pages, someone, presumably from the Library, has also had the good sense and generosity to deposit the whole collection at archive.org, from which I was able to obtain an XML file (hallColl.xml) which can be readily processed to produce links to the 102 Lacy published titles: see hallCollTitles.xml.

Internet Archive

This archive has an excellent search interface and will also deliver results in any tractable form you like, including JSON or XML. It cannot however perform magic to overcome variant cataloguing practices amongst the collections it has incorporated. So, for example, a search for “Lacy Acting Edition” throws up precisely one hit (“a copy graciously made available by Fordham University”). A more general search for “Thomas Hailes Lacy” gets me 125 hits, 102 of which come from the Hall Collection. A search for “(thomas hailes lacy) AND -collection:(hallcollection)” finds me the 23 titles not included in the Hall Collection. On the other hand, a search for “T.H. Lacy AND -collection:(hallcollection)” finds 66 further titles, not included in the Hall Collection, but not picked up by the foregoing query either.

On the bright side, the hits can be downloaded in a format which is more or less identical to that generated by the XML option quoted for the Open Texts server above, so mungeing the results lists together is a Simple Matter Of Programming, resulting in iaList.xml.
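For completeness, here is a sketch of how the same query can be scripted against the Internet Archive’s public advancedsearch endpoint, which returns JSON directly; the field list and row count are merely illustrative.

# Query the Internet Archive's advancedsearch API and list basic details of each hit.
import requests

params = {
    "q": "(thomas hailes lacy) AND -collection:(hallcollection)",
    "fl[]": ["identifier", "title", "year", "creator"],
    "rows": 200,
    "output": "json",
}
r = requests.get("https://archive.org/advancedsearch.php", params=params)
r.raise_for_status()
for doc in r.json()["response"]["docs"]:
    print(doc.get("year"), doc.get("title"),
          "https://archive.org/details/" + doc["identifier"])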

An experiment in CLS

Some time ago, I agreed to participate along with several others much smarter than me in COST Action Work Group 3. The goals of this work group were, amongst other things, to run a small experiment in counting verb frequencies on ELTeC texts enhanced with POS and lemma information. It took a surprisingly long time to find out exactly what contribution was required of me, and I make no claim to have got it right even now. But here’s what I thought I was doing.

First, I wrote an insultingly simple XSL stylesheet to produce a list, in descending frequency order, of verbal lemmas in each of the (now) 10 ELTeC level 2 corpora. For example, here’s the start of the file rom/verbFreq.xml:

<frequencies>
 <lemma form="face" freq="30919"/>
 <lemma form="avea" freq="29391"/>
 <lemma form="zice" freq="22673"/>
 <!-- ... and so on for several hundred more lines -->
</frequencies>

… which tells us that in our data Romanian’s favourite verb has the lemma face, and the next favourite is avea. The code for doing this is (like all the rest of the code described here) in the github repo COST-ELteC/ELTeC-data/Scripts if you care: it’s called, imaginatively, verbFreqs.xsl.
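For the XSLT-averse, the same counting job can be sketched in a few lines of Python, assuming the ELTeC level 2 convention whereby each token is a <w> element carrying lemma and pos attributes (the attribute names, the UD-style VERB tag, and the folder name are my assumptions here; verbFreqs.xsl remains the code of record):

# Count verbal lemmas across a folder of ELTeC level 2 files and print a
# <frequencies> list in descending frequency order.
from collections import Counter
from pathlib import Path
from lxml import etree

TEI = "{http://www.tei-c.org/ns/1.0}"
counts = Counter()

for f in Path("rom/level2").glob("*.xml"):
    tree = etree.parse(str(f))
    for w in tree.iter(TEI + "w"):
        if w.get("pos") == "VERB" and w.get("lemma"):
            counts[w.get("lemma")] += 1

print("<frequencies>")
for lemma, freq in counts.most_common():
    print(f' <lemma form="{lemma}" freq="{freq}"/>')
print("</frequencies>")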

Next, I wrote another simple-minded script to extract from each novel a bag of words, with no markup or punctuation: just all the verbs, for example, or all the nouns, in their order of appearance in the text. So that celebrated work Hard Times, which begins in the original like this

<div type="group">
 <head>BOOK THE FIRST <hi>SOWING</hi> </head>
 <div type="chapter">
  <head>CHAPTER I THE ONE THING NEEDFUL</head>
  <p>‘<hi>Now</hi>, what I want is, Facts. Teach these boys and girls nothing but Facts. Facts alone are wanted in life. Plant nothing else, and root out everything else. You can only form the minds of reasoning animals upon Facts: nothing else will ever be of any service to them.</p>
  <!-- ... -->
 </div>
 <!-- ... -->
</div>

generates a bag of words starting like this

want be teach be want|wanted plant root form be…    

if I ask for VERB lemmas, or like this

book sowing|sow chapter thing fact boy girl fact fact life mind reasoning|reason animal fact    service 

if I ask for NOUN lemmas. You may wish to complain about the behaviour of the lemmatizer here, but I am taking the path of least resistance and using whatever treetagger (in this case) produces without cavil. This deplorable laziness returns to bite me further below…

I wrote some Python to run the XSLT script filter.xsl which does this task: the script is called filter.py and it uses a Python interface to the Saxon C processor, which I was very pleased with myself about when I got it working; less so later, see below. There’s more mundane detail of how to run it in the README in the Scripts folder.
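For what it’s worth, the same harness is easy to reproduce today with the saxonche package (the current incarnation of the Saxon C Python interface). This is only a sketch: the stylesheet parameter name and folder layout are invented for illustration, and filter.py remains the code of record.

# Run filter.xsl over each level 2 file, producing one bag-of-words file per novel.
from pathlib import Path
from saxonche import PySaxonProcessor

with PySaxonProcessor(license=False) as proc:
    xslt = proc.new_xslt30_processor()
    executable = xslt.compile_stylesheet(stylesheet_file="filter.xsl")
    for novel in Path("ENG/level2").glob("*.xml"):          # hypothetical folder
        executable.set_parameter("pos", proc.make_string_value("VERB"))
        executable.transform_to_file(source_file=str(novel),
                                     output_file=f"bags/{novel.stem}-verbs.txt")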

If still awake, you are probably wondering what the point of all this was. And here comes the scientific bit. The little workgroup I had signed up for wished to test a Hypothesis, which (if I understand it correctly) might be crudely summarized thusly:

  • The European novel undergoes some sort of seismic shift around the turn of the 19th century, which is popularly known as The Rise of Modernism
  • Modernism has many stylistic correlatives, but they include notably a focus on the interior life of characters, on sensation and feeling, rather than on objective omniscient narrative
  • If this is true, we should expect to see a change in the frequency with which verbs associated with that ‘inner life’ appear over time.

I hope you can see where we are going with this, now. All we need is a reasonably plausible list of verbs which express aspects of ‘inner life’. And so, for the next few months, with zoom and email and similar modern contrivances, the group theorized how to actually produce such a list. I may have fallen asleep during the process and missed something critical, but eventually (I think) it was decided that we would explore two approaches to identifying our list. Firstly, we’d ask language experts to vote for their top ten “inner” verbs. Secondly, we’d use a statistical procedure (word vector embedding) to identify a list of candidate verbs automagically. Then we’d compare the results, declare victory, and move on.

What could possibly go wrong? Well, at least two things.

Firstly, the ask-an-expert approach turned out to be less successful than it might have been, largely for purely logistical reasons. If we had asked the experts simply to review the existing verb frequency lists for their language and identify in them those verbs which were indubitably and always betokeners of interiority, plus any others which were a bit thus inclined sometimes, then we might have got our results a bit faster. But we didn’t, and the experts, understandably a bit mystified by the whole process, gave us lists which varied widely in their format and scope. So I found myself having to tweak and readjust their contributions, to remove duplicates and ambiguity. As for the automagical procedure, it proved a little challenging for most participants to run, if only because it required access to a machine capable of running Google’s word2vec program which is not meant for your average laptop. In any case, you can see the resulting word lists in the file innerVerbs.xml which I hope is fairly self-explanatory.

Secondly, my simplistic notion of ‘lemma’ turned out to be problematic. As you noticed above, when unable to choose between two alternatives, treetagger obligingly gives you both of them, separated by a vertical bar. That’s no problem for me: I just discard the alternative. But other lemmatizers behave differently. For example, in our Portuguese data, the lemmas for reflexive verbs are suffixed by a # and an indication of person. In our Hungarian data, spelling variations of the same basic lemma are sometimes presented as different lemmas. In the first case, should I simply ignore the part of the lemma after the #? In the second, should I aggregate all the differently spelled variants and consider matches for any of them as equivalent? As usual in computational linguistics, it all depends what you think you’re counting…

Despite these metalinguistic anxieties, I wrote a (needlessly complicated) Python script called verbCount.py to count the frequencies of the inner verbs through time, comparing the things-called-lemmas in our various lists of inner verbs with the things-identified-as-lemmas in the level2-encoded files. Invoking various XSLT scripts and Saxon C as before, this script grudgingly churned out a file for each corpus under examination, with a row for each title and a column for each inner verb, like this:

extId    year verbs innerVerbs aimer connaître croire entendre regarder savoir sembler trouver voir vouloir
FRA00101 1860 3889  310        17    9         28     22       18       52     5       47      83   29
FRA00102 1883 5499  465        112   21        38     16       17       55     32      30      77   67
FRA00201 1910 7577  682        26    20        41     75       96       63     49      93      128  91

I say ‘grudgingly’ because the script was obliged to process the whole of every file in order to extract a year of publication from its TEI header, and consequently ran with noticeable slowness. If I’d thought to include the year of publication along with other metadata in the filename of the “bag of words” I could have used that instead, which would have been much quicker. Maybe if I get a better set of inner life verbs I’ll revise the scripts to do so.
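The counting step itself is not the complicated part. Here is a minimal sketch for a single French bag-of-words file, discarding treetagger’s second alternative as described above; the file name is hypothetical and the year column (which comes from the TEI header) is omitted.

# Tally inner-verb lemmas in one bag-of-words file, keeping only the first
# alternative when the lemmatizer offers two ("want|wanted").
from collections import Counter

inner_verbs = ["aimer", "connaître", "croire", "entendre", "regarder",
               "savoir", "sembler", "trouver", "voir", "vouloir"]

with open("bags/FRA00101-verbs.txt", encoding="utf-8") as f:
    lemmas = [token.split("|")[0] for token in f.read().split()]

counts = Counter(lemmas)
row = ["FRA00101", str(len(lemmas)), str(sum(counts[v] for v in inner_verbs))]
row += [str(counts[v]) for v in inner_verbs]
print(" ".join(row))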

Anyway, we now have a bunch of CSV files. And why? Because my colleague Diana has produced some R scripts which will plot this data set so everyone can understand it. Or at least look at it. Here’s what we get for some of the Portuguese data:

[Figure: innerVerbs.png (inner-verb frequencies over time in the Portuguese data)]

I leave it to the statistically-informed to interpret this and other similar results. The closing conference of the COST Action, taking place next week, includes a paper (on which I am somewhat embarrassingly cited as co-author) presenting the results in more detail.

Reviving the VPP : a start

The Victorian theatre has not enjoyed documentation or digitization as systematically as has the Victorian novel, reflecting perhaps scholarly perception of their comparative artistic significance. Yet it is a truism that the influence of the Victorian popular theatre on the development of the novel during this period was by no means limited to the efforts of dedicated amateur enthusiasts such as Dickens and Collins and their circle. In Emily Allen’s words “Victorian theatre was the novel’s ally, inspiration, and competitor”. As an ongoing expression of popular culture, nineteenth century theatre has deep roots and many branches; its lineage runs from the high gothic of romantic melodrama to the memes of cinema and modern day television, embracing both the theatre of sensational spectacle and that of domestic realism. Yet for those wishing to see the phenomenon as a whole, to perform a kind of distant reading of its texts, there is nothing approximating to Bassett’s At the Circulating Library database of Victorian fiction (http://www.victorianresearch.org/atcl/search.php) in terms of completeness or coverage. Such attempts to document the Victorian theatre as do exist have generally done so in terms of the careers of individual actors, writers, or institutions. Although collections of the primary source materials exist in a few libraries, this is as a consequence of individual collections or bequests, rather than of any attempt at systematic coverage.

One notable exception is Richard Pearson’s Victorian Plays Project (VPP), originally funded by the AHRC 2005-2007, and still hosted at the National University of Ireland in Galway. A key deliverable of this project was an online catalogue of the approximately 1500 titles making up Lacy’s Acting Edition of Plays, derived from the (apparently unique) surviving copies of that edition preserved in what was then the Birmingham Central Library.

Thomas Hailes Lacy began publishing contemporary plays at his Covent Garden printing house shortly after the Theatre Regulation Act of 1843, which removed the duopoly previously enjoyed by the Covent Garden and Drury Lane theatres. In a far-sighted move, Lacy acquired the rights to print plays from the theatrical managers, ostensibly to protect their copyrights, though he was not averse to a little piracy himself. These “Acting Editions” contained everything needful to produce a play, including details of costumes, settings, blocking, accompanying business etc., as well as cast lists and the text of the play itself. New titles appeared every year until the 1870s, when Lacy sold the whole collection to Samuel French, an American publisher with whom he had exchanged plays for publication for the previous two decades.

According to the existing VPP website (http://victorian.nuigalway.ie/modx/index.php?id=187), in addition to producing this on-line catalogue, the project aimed to “generate e-texts in .pdf format that replicate the original texts re-edited for electronic usage” and also to “create a database of plays marked up using TEI encoding in XML that will be searchable”. The website also states that “Transcription of the Lacy’s Catalogue, and editing and encoding of the texts was undertaken by the Victorian Plays Project using OxyGen TEI mark-up software and Acrobat Professional. ” (http://victorian.nuigalway.ie/modx/index.php?id=182).

As of today, the website does provide a list of all 1428 titles in the Acting Edition, including basic data about their authorship and performance history. It also makes available a set of 239 titles which have been transcribed and reformatted as PDF files preserving much of the typography of the originals. Other formats, if they exist, are not visible on the website, though a small number of titles have clearly been annotated and indexed at some point in the past with separate lists of named entities and striking phrases. (Some further information on this and a closely related sister project concerned with the records of the Lord Chamberlain’s Office is provided by Radcliffe, C. & Mattacks, K., (2009) “From Analogues to Digital: New Resources in Nineteenth-Century Theatre”, 19: Interdisciplinary Studies in the Long Nineteenth Century 8. doi: https://doi.org/10.16995/ntn.499 )

However, the VPP website does not seem to have been developed since 2015, and the untimely death of Professor Richard Pearson at the end of 2018 (https://bavs.ac.uk/uncategorized/obituary-richard-pearson/) casts its future development into serious doubt. As is all too often the case, preservation of a digital archive turns out to depend as much on individual personal support as on technological constraints.

I have therefore applied for funding to carry out an initial scoping study investigating the feasibility of reviving and bringing up to date the Victorian Plays Project. If accepted (and there’s no reason to suppose it will be) this would naturally begin by reviewing any additional digital materials which have been archived, and by interviewing personnel associated with the original project at Galway. The inventory resulting from this review would be extended with a survey of other digital versions of the Lacy Acting Edition now available online (for example, in transcribed form at Project Gutenberg and elsewhere and in digital facsimile via the Hathi Trust or the Internet Archive). Contacts at Galway and elsewhere (for example in the library and special collections community, and in the professional Victorian studies networks) would be approached for information about existing related endeavours, and to raise awareness of the project.

If sufficient suitable materials can be found, the next step will be to design, document, and implement procedures to convert them all to a single simple TEI encoding, consistent with (for example) that used by the DraCor project, or the ELTeC. Following these de facto community standards has many advantages, such as the ability to re-use existing software tools, or the ability to leverage existing community familiarity with the format. The resulting digital archive would be initially maintained as an open repository on GitHub, with all converted materials made available under a CC-BY licence.

It is probable that automatic conversion to this (or any other) target format will be much easier for texts already transcribed than for texts only available in digital image format. In a second phase of the project it is planned to explore and report on the applicability of “machine learning” techniques to enhance the performance of existing OCR platforms. By comparison with novels and other print material from this period, the Acting Edition texts are unusual in the complexity and variety of their typography. This complexity, derived from the need to clearly distinguish speaking parts, stage directions etc., is however regular and systematic and should thus be potentially beneficial in the task of automatic markup.

The availability of a consistently organized and encoded corpus of Victorian play texts will make possible the application of emerging distant reading methods and tools to a component of Victorian cultural history which has been curiously neglected, if not undervalued, hitherto.

In the meantime, I have been tracking down other existing online resources for the description of the 19th century theatre. But that, as they say, is another and a different blog posting perhaps.

EEBO TCP in P5 – the return

It seemed a necessary act of piety to respond positively to a request for help from my former colleagues at the Oxford Text Archive, when they finally got around to considering the conversion of the latest (and one must fear the last) tranche of EEBO texts from the Text Creation Partnership. The conversion into a TEI P5 compatible version of the vast majority of EEBO-TCP phases 1 and 2 texts and their subsequent upload to a gazillion github repositories was accomplished by a team headed by Sebastian, back in the days when TEI Simple Print was new, and we were all a bit more bushy-tailed and bright eyed. Now the OTA has received their last tranche of TCP phase 2 texts, it should not (surely) be too much of a sweat to crank them through the same conversion process and deposit the results in Github too. Though of course nothing is ever quite that simple.

The XSLT script which does the heavy lifting is called tcp2tei and (thank you Sebastian) here it is, safe and sound in the TEI Stylesheets repository. And it still works. There is even a shell script for creating a new github repo and uploading each file to it from the same masterly hand; this one nearly works, as a consequence of github having got a little more fussy about authentication mechanisms in the last five years, but that’s not hard to fix. So I should just declare victory and move on.

On closer inspection however three issues have surfaced (so far).

Firstly, the catalogue numbers. In the current TCP P5 texts, each TEI header has a string of <idno> elements supplying its identifier in the Michigan DLPS database, the identifiers of one or more MARC records from OCLC or UMI catalogues, its Proquest number, identifiers in one or more standard bibliographies (ESTC, Wing, STC, Evans etc.) and the number of the image set which was scanned. For some reason I do not understand, not all the new texts supplied to the OTA have their full complement of these identifiers. For example, of the 6498 titles supplied, 3062 have OCLC Marc record identifiers, (discounting an additional 187 duplicated OCLC records in which the record identifier is prefixed redundantly by “ocn”). None of the 6498 has an image set number. Only 2987 have a Proquest identifier, and it’s always the same as the MARC catalogue number. And 963 have no bibliographic identifier of any sort.

No matter: after my skirmishes with EEBO metadata last summer (reported at https://foxglove.hypotheses.org/date/2020/08), I am confident of being able to recover missing catalogue numbers from at least two different sources: one being Paul Shaffner (whom God preserve)’s eebodat1.xml and the other being Proquest (whom God has recently abandoned)’s title list. The stylesheet I am working on to do mundane things like change the availability statement in the header is duly expanded to supply the missing <idno>s. I decided to add the new Proquest numbers (the so called GOID) even though these are not present in the existing files.

Secondly the image links. One reason for caring about the Image Set numbers is that they are used as part of the address to which @facs attribute values scattered throughout the texts are mapped. Back in the day, it was possible to link directly to a page image in this way. This facility is however no more: Proquest (and presumably their successors) will only allow you to access individual page images by using their own interface, so far as I can tell. It is possible to access the same images via the JISC Historical Dataset sites by judiciously stringing together values from those <idno>s, but I have yet to find a reliable way of doing so for individual pages. For the present therefore the @facs values will remain a touching reminder of how things once were. Though I did add a link to the JISC site into the new headers, along with other useful documentation.

And thirdly the real subject of this entry: what to do about @rend. Now, I have long believed that the TCP P5 texts are not only valid TEI P5 XML but also valid against a specific TEI schema, to wit the schema named (after much argument and in the teeth of some opposition) TEI Simple Print. I distinctly remember (or think I remember) Sebastian and Magdalena putting quite a bit of effort into enhancing that schema with lots of @rendition values to match EEBO-TCP requirements. So when I actually tried validating my nice new files against it, I was a bit puzzled to find that they didn’t.

Specifically, the attribute @rend is not available in the TEI Simple schema, and has not been since at least August 2016. In its place, I should be using @rendition to point to one or more of the predefined simple:rendition values. So I spent an hour or two tabulating all the @rend values used in the new files, and finding their simple: equivalents. This proved easy for most cases, but impossible for just a few (7), some of them esoteric (@rend=’upsideDown’ anyone?), but others (e.g. @rend=”margQuotes” and its friends margSglQuotes and margDblQuotes) quite frequent and clearly necessary. I also realised (belatedly) that allowing my script to make this change was going to make my new texts inconsistent with the existing ones. For the existing TCP P5 texts are not valid against TEI Simple either, I discovered, and somewhat to my embarrassment: they use @rend with all sorts of exotic values all over the place. They do use @rendition, but in one way only: some <pb/> elements have @rendition=’simple:additional’. This was entirely mysterious to me for a while (until Paul remembered what it was for). In any case, I will worry about that when a new systematic revision of the whole collection is undertaken, should that day ever dawn. For the moment, I will grit my teeth, stick with @rend, and assure anyone who asks that all TCP P5 texts are valid against, err, TEI ALL. This is known as biting the bullet, I believe.
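The tabulation itself is the easy bit, something along these lines (a sketch only; the folder name is hypothetical):

# Tabulate all the @rend values used across a folder of TCP P5 files, as a
# first step towards mapping them onto @rendition values.
from collections import Counter
from pathlib import Path
from lxml import etree

rend_values = Counter()
for f in Path("newtexts").glob("*.xml"):
    tree = etree.parse(str(f))
    for value in tree.xpath("//@rend"):
        rend_values.update(value.split())    # @rend may carry several tokens
for value, n in rend_values.most_common():
    print(f"{n}\t{value}")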

Update: 10 June 2021. I uploaded all 6498 new texts to new repositories in the Github textcreationpartnership collection over a period of 24 hours last week. And, at somewhat greater length, I have now updated my repository at eebo-bib to describe more precisely what I did to create a TEI-compatible TCP bibliography. Definitely time to declare victory and move on.

Seven steps to Ossian

A TEI transcription of the 1773 edition of James Macpherson’s “translations” of the works of “Ossian”

Why would anyone want such a thing? I can’t imagine, but here’s how I made this one. It turned out to be a seven step process — so far. You can check out each stage from this github repo, if you’re really curious…

1. Decide which PDF to work from

You might think that one library’s digitized copy of “the 1773 edition of Ossian” would be much the same as another’s. But no. There are variations in the physical state of the originals, and the PDF format in which the digitization is made available may also vary. I downloaded three different digitized versions from the Internet Archive, but mainly I used the PDF version of the copy preserved at the National Library of Scotland. https://ia802302.us.archive.org/33/items/poemsofossiantra11macp/poemsofossiantra11macp.pdf I say “mainly” because that particular PDF file had a curious glitch in it which made some of the half-titles disappear when extracted as separate image files. I supplied the missing text from the PDF version of the New York Public Library’s copy.

2. Extract images from PDF

$ pdfimages [filename.pdf] [outputPrefix]

I am too lazy to install anything clever, so I use tried and tested ancient command line Unix tools, like pdfimages. Applying this to my chosen PDF file, I find that each page in the NLS PDF produces three files: two in PPM format which appear to be masks, and one in grayscale representing the page, in negative form. I extract the page image and save it in my img folder, ready for the next stage.

3. Do OCR using medieval rules

$ tesseract [inputfile] [outputfile] -l enm

As noted above, I have a preference for old-fashioned command-line Unix tools, and tesseract, once instructed to use an appropriate language model (enm, rather than eng), actually does a pretty good job of recognizing 18th century typography. It consistently fails on ligatured “ct” and a few other oddities, but is much better than I expected at distinguishing long-s from f. Most of its errors seem to be due to poor image quality. At the end of the process, I have about eight hundred text files, each corresponding with one page of the source, and most of them containing plausible text, which I save in the txt folder.
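Put together, the extract-and-OCR loop amounts to something like the following sketch. It drives the same two command-line tools, but with invented file and folder names, and it skips the step of picking out the grayscale page image from the three files pdfimages produces for each page.

# Extract page images from the PDF, then OCR each one with the "enm" language model.
import subprocess
from pathlib import Path

Path("img").mkdir(exist_ok=True)
Path("txt").mkdir(exist_ok=True)

# images come out as img/page-NNN.ppm (and friends)
subprocess.run(["pdfimages", "poemsofossiantra11macp.pdf", "img/page"], check=True)

for image in sorted(Path("img").glob("page-*")):
    out = Path("txt") / image.stem        # tesseract appends .txt itself
    subprocess.run(["tesseract", str(image), str(out), "-l", "enm"], check=True)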

4. Hand-check, page by page, introducing minimal non-xml markup

I then (and this is where the time goes) proofread each one, introducing some absolutely minimal markup, of my own invention. The cheatsheet reads as follows:

  • introduce a -- line at start and end of text on page
  • introduce a == line at start and end of note-text on page
  • introduce a blank line between paras but otherwise retain linebreaks
  • introduce an extra hyphen following end-of-line hyphens which are to be retained
  • replace * or +1 sigla for notes and note references with @ and a sequence number
  • use entity references for long dash, accented letters etc
  • use "“" for open double quotes
  • retain forme work on a single line
  • delimit smallcaps with { … }
  • delimit italicized phrases with {{ … }}
  • use % to mark the start of a dramatic style speech
  • add \ at end of verse lines
  • add $ at end of speech
  • add &end; at end of an argument or other chunk

I made the corrections using, need you ask, emacs aided and abetted with some perl one-liners for bulk corrections. Reading Ossian is an odd way to pass time during lockdown, but no worse than some of the other sanity-preserving expedients one reads about on Twitter. A good soundtrack seems to be almost any Sibelius symphony.

5. Transform and (slightly) reorganize the textfiles into proper XML, one per text

perl streamer.prl v1files.txt

This is not the most elegant or indeed sanitary code I have ever written; it also took quite a few iterations to get it working acceptably, which I defined as generating well-formed XML. It reads in a list of filenames, interspersed with flags to tell it when to start a new output file, and what its initial page number should be. Then it processes in succession each page of transcribed text, building up one string containing all the text chunks for a work, and another containing all the footnotes. Footnotes often span pages, of course. The resulting strings are then output as two separate XML <div> elements. Their contents also acquire some minimal XML tagging (<pb/>, <hi>, <p>, <sp> etc.) before they get flushed out. I gave up trying to overcome some inelegant results of a not particularly elegant process. The code is in the Scripts folder of this repo for the morbidly curious; the results are in the xml folder: at least they are well-formed XML.

6. Run XSLT scripts to convert this stuff to kosher TEI documents and validate same.

Since this version is going to be my contribution to the “Ossian Online” project, it should probably follow that project’s usage and TEI practices. Alas, they do not have an ODD to tell me what that should be, and their files are apparently validated against TEI-All. But they do have a reasonable amount of documentation, and enough files already available online for me to be able to construct an ODD automagically (take a bow, oddbyexample.xsl, a well-kept secret inside the TEI P5 Utilities repository) and thus a schema I can use to validate my TEI files when I have finished licking them into shape.

As ever, the fun part of the project is seeing how much of the remaining data-mungeing can be scripted in XSLT. Quite a lot, it transpires, though it remains necessary to hand-craft the details of titlepages, tables of contents etc. Another complete sweep through the text checking for miscellaneous things like the following is also needed:

  • words broken by a pagebreak but not properly reassembled (happens occasionally)
  • quotations not marked as such
  • verse lines not marked as such
  • code-switching
  • residual OCR errors (there are always residual OCR errors)

Before launching into that campaign, I checked the <pb/> elements introduced at stage 5 against the page numbering of the original as preserved in the paratextual comments of my transcription. Somewhat to my surprise, the page-numbering corresponded exactly with the number of such elements, enabling me to construct both a reliable reference system and reliable page image links as values for the facs attribute on each <pb/>.

7. Decide on the macrostructure

The Ossian Online project uses <div> elements for every subdivision of the 1773 edition, at whatever level, all the way down from its two volumes to the arguments of individual poems. It takes the perfectly reasonable view that every text can be organized as an ordered hierarchy of uniformly nested objects. As a consequence, the @type attribute for <div> has to do quite a lot of heavy lifting. oddbyexample enumerates its values as follows:

  • advertisement
  • argument
  • book
  • contents
  • dedication
  • dissertation
  • duan
  • fragment
  • maintext
  • poem
  • preface

This list combines types that have a structural function (fragment, duan, book) with others that are purely descriptive (advertisement, argument, poem). Nothing wrong with that, but I still find this “divs all the way down” approach somewhat problematic, and that for two reasons. Firstly, a <div> is supposedly something incomplete, which is true of (for example) the argument prefixed to each poem or book, but not of the book or poem itself. Secondly, the relation between the argument and the poem requires that the two be siblings within some larger entity, but the poem is not really an incomplete part of that entity in quite the same way as the argument. Furthermore, in the 1773 edition, we have some texts which are undivided (Carricthura for example) along with others which are divided variously into “duan”s (Cathlona) or “book”s (Fingal). Should each book of Fingal be treated as a single text? Should the whole of Fingal be treated as a single text? Can’t we have both?

Values such as maintext and poem in the list above ring alarm bells indicating that these ontological issues are being evaded. Since the TEI in its wisdom already provides a mechanism for coping with exactly this (not at all unusual) kind of macrostructure, why not use it? I refer of course to the element <group>.

My version of the 1773 edition prefers to treat each distinct work as a <text>, rather than a <div type='maintext'>. Within it, there is a <front> containing a titlepage or half title and the argument, followed by a <body>, if the work is not further subdivided, or by a <group> if it is. A <group> combines a number of lower-level <text> elements, each again with a <front> and a <body>. I also treat each of the two volumes of the 1773 edition as a <group>. The file driver.tei embeds each file in the structure using XInclude; it is commented to explain what’s going on (a bit).

It remains to be seen what my colleagues in Galway make of this radical re-organisation, to say nothing of my perverse desire to retain the long-s form. But at least changing to a format which matches more exactly that of the (excellent) work already done on the Ossian Online site will be a Simple Matter Of Programming.

ELTeCTiT : ELTeC Titles in Translation

(I haven’t posted here since last October. I expect it’s lockdown keeping me quiet. But this morning I did manage to dream up quite an interesting research proposal, which I post here for now.)

The ELTeC corpora were designed explicitly and deliberately [ref design criteria] to exclude translated works. [quote] Despite this principled design decision, it seems self-evident that analysis of the mechanisms and results of the cross-cultural dispersal of the novel across Europe – which emphatically is within the scope of the ELTeC project – depended largely if not entirely on the availability of works in translation. It seems probable that the spread of the novel as a popular form was largely determined by the success of particular works, or classes of work, in translation; in particular, we may surmise, those works which responded to common social problems and common cultural trends. Novelists in the traditions from which those works sprang influenced novelists working in entirely different cultural milieux; just as writers raised in other traditions may be presumed to have influenced the development of what we now perceive as a unified European culture by providing easily assimilated versions of the exotic.

Although some multi-lingual expertise was undoubtedly prevalent during this period, the availability of translated versions of novels must have been essential to this diffusion, in both directions. But even basic data about the scale and scope of translations over the period covered by the ELTeC (1840 to 1920) is hard to find [refs needed] being largely diffused across national library catalogues which vary in the extent to which such works are associated with their originals, and rarely give any indication of the exact pedigree of any translation. It seems probable for example that translations into some target languages (say Romanian) would have started not from the version in the original language (say English) but from some other more accessible L2 (say French), but this is hard to determine without substantial research into individual titles and authors. Even harder to find or quantify is any information about the linguistic skills or preferences of a novel’s intended or actual readership. While it is highly probable that the languages of the great imperial powers (English, French, German) would be widely understood in those countries directly under the political or cultural influence of those powers, the extent to which they would be considered appropriate vehicles for reading for pleasure is less clear.

There are many theoretical and formal difficulties associated with any investigation of the relationship between a source and its translation, particularly (perhaps) for works of a literary nature. Translation, like speech itself, is one of the more inexplicable human behaviours. It ought not to be possible, and yet it is done, apparently more or less successfully, every day. [For an entertaining and accessible discussion, written from the perspective of a professional translator, see David Bellos “Is that a fish in your ear?” (2011)] We do not propose to address any such issues in this project, though we may provide some indicative data points to help others do so. Our goals are more modest. Each completed ELTeC corpus already provides us with a sample of novel production in a given language within a given time frame, hopefully more or less well balanced with respect to date, size, authorship, and impact. We propose to enrich this list of titles with bibliographic data about all translated versions published within a short period (say 15 years) of their first appearance, recording for example the target language, the translated title, date and other details of publication, the translator’s name, and (where this can be determined) the source of the translation. This data will of course be provided in an open format compatible with existing ELTeC deliverables.

Amongst other research questions which availability of this data should address, we identify at least the following:

  • To assess “impact” or “persistence” of titles, the ELTeC corpora rely on a simple reprint count. Do translation counts complement or contradict this classification?

  • Are translation counts statistically correlated with any of the ELTeC classification criteria? That is, where a given collection shows an imbalance for a given criterion, is this also reflected in the translation count?

  • What patterns are discernible in the L1/L2 pairings manifested by our data: for example, which languages are most frequently translated into for each source language?

  • Is there any correlation between stylistic properties of a given group of sources and the languages into which they are translated? Crudely speaking, are romances more often translated into romance languages?

     

Where translated texts are available in digital form, it would be easy also to provide an ELTeC encoded version, using existing production pipelines. At this stage in the project, it is impossible to say whether this will be feasible on a sufficiently large scale to constitute true parallel ELTeC corpora: it would in any case require significant investment of time and effort from the existing ELTeC partners, whereas the collection of metadata can be done more simply.

Lou Burnard

February 2021

A tale of precision and recall

Back in the day when “text retrieval” was a thing, I remember learning the difference between precision and recall, and the need for a philosophical attitude to the fact that an optimal search has to maximize both these fairly incompatible factors. I now realise how much this whole ATCL exercise has been about that fact. My earlier efforts to identify ATCL titles in the catalogues of existing digital archives involved comparison on the basis of a manufactured key, algorithmically derived from each resource by the same process, which seemed a good compromise. This method also seemed necessary because of the limited facilities some of those resources offered for querying and manipulating the results of queries. With the availability of the wonderful “opentexts.world” service neither of these constraints applies – but the difficulties of balancing precision and recall have not gone away.

Here are the steps I am jumping through:

1. Generate a list of queries, one for each title in ATCL which doesn’t yet have any digital copy

2. Using CURL, send the queries off to the opentexts.world server and get back an XML representation of the results, including catalogue information and a link to the digital version

3. Process the results to check that this is actually the title we are looking for, and then extract the link to add to my atcl-links database

As of today, my query list has 7791 items. The NLS server doesn’t seem to mind dealing with several thousand CURL requests in rapid succession: it takes about ten minutes to run and dutifully sends me back a fat file containing a fairly straightforward XML representation of the data.
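The CURL step could equally well be scripted directly in Python; here is a minimal sketch of a single request, using the same export endpoint and parameters as the example query shown further down.

# Send one title query to the opentexts.world export endpoint and return the XML.
import requests

def query_opentexts(title, creator):
    params = {"advanced": "true", "format": "xml",
              "title": title, "creator": creator}
    r = requests.get("https://design.opentexts.world/search/export", params=params)
    r.raise_for_status()
    return r.text

xml = query_opentexts("Abbot's Cleve: or Can It be Proved? A Novel", "Harwood")
print(xml[:500])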

That tolerance is fortunate, since I am finding it difficult to decide how exactly to construct my query with maximal precision (to avoid false positives) and maximal recall (to avoid missing any). Most titles contain lots of words, most of which are preserved in most catalogues, so an exact word match for the full title is a good start. There are however still a few problems: punctuation and articles sometimes disappear; some titles appear more than once; some titles are very short, and thus generate many false positives. Quite a few titles have the BTAO problem – that tendency of Victorian publishers to improve the title of a new work by adding to it the formula “By The Author Of [insert previously successful titles by this author]” – which results in multiple titles containing the same (irrelevant) string. What’s a good filter to cut down the noise from such things? My first thought was to require that the author’s name should be included; my second was to use the date of publication.

The problem with using the author’s name of course is that it isn’t necessarily present on the title page, and therefore not necessarily present in the title field of the catalogue record. Many novels are anonymous; many authors published under a pseudonym. The ATCL has done a great job of rounding up and normalising authors, grouping under a single entry all variations of an author’s names. Using this it would be possible to find all the works of “Isabella Harwood” whether published under her name or the more usual pseudonym of “Ross Neil”, by increasing the recall of my “creator” search to allow for either name, but I haven’t yet done that. Instead, for my first experiment, I just use the main ATCL surname of the author, and resign myself to less recall, but more precision.

Running my 7791 queries like this

curl "https://design.opentexts.world/search/export?advanced=true&format=xml&title=Abbot%27s%20Cleve%3A%20or%20Can%20It%20be%20Proved%3F%20A%20Novel&creator=Harwood"

gets me a total of 6503 results saying “nothing doing”, and 1288 for which there is one or more matching record. I anticipate multiple hits for each title, since there are multiple editions, and of course most of these catalogues list works by volume rather than by work. A very large number of hits usually indicates a problem: for example, there is a novel with the title “Arthur” by Christiana Jane Douglas. Searching just for “title: arthur AND creator:douglas” gets many many titles containing the word “arthur”, some of them editions of the Morte D’Arthur, edited by James Douglas, and others being numerous editions of Crimean War memoirs by one Douglas Arthur Reid. But 1288 hits is not too big a list to refine further.

My second experiment searches for the full title as above, but filters by date of publication. This produces slightly different numbers: there are now 6143 “nothing doing” responses, and 1648 with at least one hit. More interesting perhaps is that I can now compare the two result sets and see which titles are not found by either query – by hypothesis these are genuinely not available, because they don’t exist in the OpenTexts database – and which are found by one but not the other. There are 659 records not found by the search-with-author queries but found by the search-with-date option, whereas there are only 299 records not picked up by the search-with-date query but found with the search-with-author option. Looking down that list very quickly, I see that in most cases the disparity in dates is because the digitized copy is of a later edition of the same work, and this starts me wondering how much later an edition has to be before I decide it’s not satisfactory. The ideal might be to include only digitizations of the first edition, but an edition produced a year or two later is probably fine. Some of these texts have a long and complicated publishing history in which distinguishing the edition is quite critical; others were reprinted once or twice and then disappeared forever.

I am now leaning to the view that the way forward is to maximize recall, simply by combining the 299 records missed by the search-with-date strategy with the rest, and then to pass those results through another filter to improve their precision. This filter would check, for example, whether the publication details for each candidate match, or are within an acceptable range. But it’s very pleasing to note that I have now identified at least one digital version for 13,769 of the 19,912 titles in ATCL, i.e. 69%. Now, if I could only persuade the British Library to be a bit less secretive…
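By way of illustration, the sort of filter I have in mind might look like this minimal sketch: the toy dictionaries, the field names, and the two-year tolerance are all invented for the example, not part of any code I have actually written.

# toy stand-ins for the two result sets, keyed on ATCL identifier
hits_by_author = {"atcl:123": {"atcl_year": 1864, "imprint_year": 1865}}
hits_by_date   = {"atcl:456": {"atcl_year": 1870, "imprint_year": 1893}}

def acceptable(atcl_year, imprint_year, tolerance=2):
    """Keep a candidate digitization only if its imprint date is no earlier than,
    and not too much later than, the ATCL publication date."""
    try:
        return 0 <= int(imprint_year) - int(atcl_year) <= tolerance
    except (TypeError, ValueError):
        return False        # no usable date: better to flag for manual checking

# merge the two result sets, then keep only the plausible candidates
merged = {**hits_by_author, **hits_by_date}
plausible = {atcl_id: rec for atcl_id, rec in merged.items()
             if acceptable(rec["atcl_year"], rec["imprint_year"])}
print(plausible)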

Counting the books: yes, there’s more

My efforts to find links to digitized versions of all the titles in ATCL made one huge methodological leap forward last week, and are now poised on the brink of another.

Going through the titles I had managed to extract from a rather uncooperative Google Books interface last week, I noticed that rather a lot of them were marked as “not available” for some reason: more precisely, although my 11,104 searches, each corresponding to an entry in ATCL for which I had not yet found a digitized version, had succeeded in identifying 2186 previously unseen titles, they had also thrown up 3885 titles which Google considered inaccessible, presumably for copyright reasons, and 5033 of which it flatly denied any knowledge. Yet when I looked up a few of these same titles (whether allegedly “inaccessible” or “non-existent”) in SOLO – the Bodleian’s wizard student-friendly query interface to its catalogue – there they were, page images downloadable in PDF, no sweat.

Now, amongst other delights, SOLO allows quite rich faceted searching, so it is easy to formulate a query like “find me all titles classed as fiction published in London or Scotland between 1830 and 1900, which have also been digitized by Google”, which made me think for a few moments that my work was now done. But as with many other classy library interfaces, SOLO stops short of allowing a mere automaton to carry out any searching: you have to sit at a keyboard and type, though it will grudgingly allow you to save and download the results of your query … provided it contains no more than 50 (FIFTY!) hits. Which (as I politely pointed out to the harassed librarian on online-chat duty last week) is almost entirely useless for my purposes.

Then I remembered that Real Librarians Do It With Z39.50 and dusted off my YAZ skills. The Bodleian, like all real libraries, has a perfectly good Z39.50 interface, which is not only entirely unbothered by a succession of several hundred queries but also happy to send back directly as many full catalogue entries for the hits as you can (err) handle. The only catch is that the queries have to be expressed in some antique syntax called PQN (Prefix Query Notation) and the results come back in MARC 21. I cut my programming teeth on Fortran IV, so these ancient tongues scare me a lot less than, say, JSON. I turned my list of queries unsatisfied by Google Books into PQN, fired them at library.ox.ac.uk:210/ALEPH and put the kettle on for a nice cup of tea. PQN is not very discriminating, or not in my hands at any rate, and my queries massively overgenerated. But once my 11297 results had passed through a couple of utilities (yaz-marcdump to produce marcxml, and my very own `marctotei` to identify and fillet the relevant records) I had a set of 780 CPF format records to add to the ATCL database list, and the tea wasn’t even cold (774 once I’d weeded out some duplicates and mismatches).
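For anyone wanting to do the filleting without yaz-marcdump and an XSLT step, the same job can be sketched directly over the raw usmarc output using the pymarc library. This is not what my marctotei script does; it is just an illustration of the idea, with a hypothetical file name, and it keys on field 856 (plus 245 and 260/264 for title and date) because that is the field I rely on for links to digital copies.

from pymarc import MARCReader   # pip install pymarc

def fillet(path):
    """Yield (title, dates, urls) for records carrying an online location (MARC 856)."""
    with open(path, "rb") as fh:
        for record in MARCReader(fh):
            if record is None:                     # skip records pymarc could not parse
                continue
            urls = [u for f in record.get_fields("856") for u in f.get_subfields("u")]
            if not urls:
                continue
            f245 = record.get_fields("245")
            title = " ".join(f245[0].get_subfields("a", "b")) if f245 else ""
            dates = [c for f in record.get_fields("260", "264") for c in f.get_subfields("c")]
            yield title.strip(" /"), dates, urls

for title, dates, urls in fillet("results.usmarc"):   # hypothetical output file from the yaz session
    print(title, dates, urls[0])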

A natural question is: can we do the same trick with the British Library? Or any other library offering a Z39.50 interface? In principle, yes. But of course the Bodleian’s use of MARC fields may not be entirely the same as everyone else’s, and so the script I wrote to fillet the results of a query may need fine tuning. For example, the BL does not seem to use MARC field 856 (which I rely on) at all: its digital texts are stored in something called the Digital Store, and their identifiers there don’t seem to map directly to anything like a URL. And while I was thinking about that, something unexpected happened.

A tweet arrived, alerting me to the existence online of the “OpenTexts.world” search engine: a search interface to a much more ambitious and much more comprehensive view of the world’s digital resources, namely the Global Digitised Dataset Network (GDD Network), originally a research project into the feasibility of creating a global catalogue of digitised texts. At the end of this project’s first funding year it has made available not only a nice search interface but also (applause) the underlying complete dataset. The latter looks a bit like the HT snapshot dumps I have processed before, though it is missing quite a few useful fields, such as type of text, place of publication, etc. And the nice search interface so far has only limited functionality: nice if you are exploring the data, and really quite annoying if you know exactly what you want to find. On the bright side, it allows you to download the results of the query as a CSV file and even has a sort of API, apparently supporting Lucene-style queries to be passed in via a URL to a SOLR-indexed version of the data. This could well be the answer…

Counting the Books contd.

A couple of days ago I reported here on some imbalance in the representation of male and female novelists in current digital archives, written while I was still trying to persuade the Google Books server to do tricks for me. I can now report further progress. After 21 iterations, I did finally manage to confect a complete list of all the ATCL titles freely available from Google Books. Putting this together with data already stored in ATCL for Google, I have now identified 2823 ATCL titles in Google Books, which brings the total number of known digitizations up to 11,510: 58% of all available titles. This seemed a good enough pretext to revisit the summary table I produced last time, so here it is again in a new and hopefully slightly more comprehensible form:

Revised counts

As might have been predicted, with more data the situation becomes more nuanced. Note first that the percentages of available titles which get digitized (column “%dig”) apparently decrease as the actual number of texts available for digitization (column “All”) increases, suggesting that the more titles you have available the less likely you are to deal with any one of them. Only tentative conclusions are warranted, since we are lacking so much data for the later part of the century. That said, comparing the columns M-dig and F-dig suggests that throughout the century, digitizers are consistently and disproportionately more likely to go for a male-authored text. Even in titles from the 1850s, where there are substantially more female-authored titles available than male (778 as opposed to 595), the proportion of them which get digitized is still lower than the proportion of male-authored titles (79% as opposed to 84%). In the 1880s, 55% of titles are explicitly female-authored, as opposed to 41% male (the remainder being unspecified); yet the male authors are still sampled for digitization at a far higher rate (59% as opposed to 37%).

My previous accusations of sexism amongst the digitizers en masse thus vindicated, I next considered the practice of individual archives. The following table shows the numbers of ATCL titles I found in each of five major archives, and the proportions attributed to male and female authors in each.

Archive           A-dig   M-dig   F-dig   U-dig   %Male   %Fem
All               11510   6207    5050    253     54%     44%
Hathi Trust       5655    3568    2022    65      63%     36%
Internet Archive  1665    889     748     28      53%     45%
Google Books      2823    1138    1580    105     40%     56%
British Library   5104    2742    2252    110     54%     44%
Gutenberg         2275    1682    590     3       74%     26%
Digitization choices by archive

Overall, the balance is comparable with that shown in the previous table: a small preference for male as opposed to female authored titles (54% to 44%). But this is perhaps concealing a marked variation in practice amongst the archives. At one extreme, Project Gutenberg has nearly three times as many male authored titles as female, while at the other Google Books actually has significantly more female authors than male (56% as opposed to 40%). In between is the British Library Microsoft collection, which matches exactly the proportions for all the archives combined.

Irrespective of gender, how much variation is there in the holdings of these archives? Here’s a frequency distribution showing how many ATCL titles are available from 1, 2, or more archives (note that I cannot distinguish how many of these are actually copies of the same digital version).

Archives  Titles  %
1         7247    63%
2         2780    24%
3         1201    10%
4         267     2.3%
5         16      0.7%
Archive overlap: how many titles are available from how many archives?

Encouragingly, this suggests that there is little overlap amongst the holdings of the main digital archives: 63% of all 11,511 digitized titles listed occur in only one archive, 87% in one or two.

Which titles get digitized most frequently? This is hard to tell, for several reasons. Some archives list multi-volume titles as multiple copies; some archives list items simply copied from other archives. For my Google Books listing I excluded titles which were already listed by another archive. But for what it’s worth, here, in no particular order, are the fifteen titles listed as available from all five archives I looked at:

  • Caine, Hall (1853-1931). The Deemster: A Romance (1887)
  • Caine, Hall (1853-1931). A Son of Hagar: A Romance of Our Time (1887)
  • Hamerton, Philip Gilbert (1834-1894). Wenderholme: A Story of Lancashire and Yorkshire. Edinburgh: Blackwood 1869
  • Collins, Wilkie (1824-1889). Antonina: or, The Fall of Rome. A Romance of the Fifth Century. London: Bentley 1850
  • Collins, Wilkie (1824-1889). The Woman in White. London: Sampson Low 1860
  • Eliot, George (pseud.) (1819-1880). The Mill on the Floss. Edinburgh: Blackwood 1860
  • Dickens, Charles (1812-1870). Oliver Twist: or, The Parish Boy’s Progress. London: Bentley 1838
  • Eliot, George (pseud.) (1819-1880). Middlemarch: A Study of Provincial Life. Edinburgh: Blackwood 1872
  • Gaskell, Elizabeth Cleghorn (1810-1865). Mary Barton: A Tale of Manchester Life. London: Chapman and Hall 1848
  • Grant, James (1822-1887). The Romance of War: or, The Highlanders in Spain. London: Henry Colburn 1847
  • Dickens, Charles (1812-1870). Barnaby Rudge: A Tale of the Riots of ’Eighty. London: Chapman and Hall 1841
  • Dickens, Charles (1812-1870). Bleak House. London: Bradbury and Evans 1853
  • Wood, Mrs. (-). It May be True: A Novel. London: T. C. Newby 1865
  • Oliphant, Margaret (1828-1897). Harry Jocelyn. London: Hurst and Blackett 1881
  • Ouida, (pseud.) (1839-1908). Folle-Farine. London: Chapman and Hall 1871

No, it makes no sense to me either. I expected to see Charles Dickens and George Eliot and Mrs Gaskell on the list, but Hall Caine and Philip Hamerton? Clearly one needs to be very careful in interpreting this data.

See previous bloggage for details of how the numbers were obtained. Supporting data and scripts have been updated in my github repo.

An experiment in counting the books

A couple of years ago I spent some time trying to determine which of the titles in the wonderful “At the Circulating Library” (ATCL) database were freely available online in digital form. This was for largely pragmatic reasons to do with building the ELTeC English language collection: other blog entries describe the method I used and some preliminary results. It’s not as easy as you might suppose to download reliable catalogue information from most digital libraries, nor is it always readily tractable when you do. After some experimentation, I hit on the idea of creating a magic key, a kind of fingerprint, derived from the title and author name as specified, which could then be matched against keys in the same format derived from ATCL entries.

More recently, it occurred to me that this data might also provide some interesting numbers to contribute to current debates about digitization priorities. Exactly why some titles make it to Project Gutenberg, or the HathiTrust, or the Internet Archive and others don’t is not a question to which simple un-nuanced answers are likely or even (maybe) possible, but we should still ask it. Those responsible for the digitization efforts of major libraries are, for some reason, a little coy about the principles on which books are chosen for digitization, or even whether they actually have explicit selection policies. I assume that there is a difficult tightrope walk between, on the one hand, practical but purely adventitious matters (such as the relative locations of volume and scanner, the size and state of the volume, the time of day, the temperament of the scanner operator, etc.) and, on the other, principled criteria aiming to ensure a balance of, say, titles by female and male authors, or high and low brow, date of production, longevity of readership, and so on. It would be surprising if the choices were completely unrelated to characteristics of the population being sampled, or totally failed to reflect the cultural priorities of the scanning operation; the same uncertainties apply, of course, to the collection being sampled for digitization itself.

Anyway, I recently read an interesting article by Allen Riddell and Troy Bassett (“What library digitization leaves out”; preprint available from https://arxiv.org/abs/2009.00513)  which reports that in the data they looked at – the comparatively small sample of surviving English novels published in 1836 and 1838 – shorter books, and books with male authors are disproportionately more likely to be digitized. I naturally wondered whether this applies equally well across the whole of the 19th century.  Which is what led me to revisit my efforts of two years ago. But first, here are the results.

There are 19,912 titles in the current ATCL database. Of these, 9152 (46%) have authors identified in the database as male, 9809 (49%) are identified as female, and 951 (4%) are identified as unknown. These relative proportions are rather different if we look at titles with at least one digital surrogate, of which there are in total 9099 (45%). Of these 9099 digitized texts, we find 5221 (57%) are of male authorship, 3718 (41%) of female authorship, and 160 (2%) are unsexed.

Look at that again. Although there are actually more titles available for digitization from female authors than from male, the number that actually gets digitized is significantly smaller (if, like me, you think a gap of 16 percentage points is pretty significant). Hmmm. These counts of course derive from the whole period covered by ATCL, from 1800 to 1900, so I also calculated them for each decade, only to find that the proportions and their imbalance remain fairly consistent across the century. And this despite huge changes in the numbers: for the last decade of the century ATCL lists nearly 6000 titles, a six-fold increase on (for example) the fourth decade. What percentage of those titles were digitized? In both decades, over 51%. And what proportion of those digitized titles were male-authored? In both decades, 62%. There is some variability across the decades, but the basic picture remains the same.

One possible explanation might be that titles with unknown or unsexable authorship (e.g. the ubiquitous “Anonymous”) are more likely to have been female, and that hence we are not seeing all the truly female authors. But even were this the case (after all, why should we not equally well hypothesize that male authors might be bashful or crave secrecy?), the digitization rate for books ostensibly male-authored remains stubbornly higher than that for books ostensibly not male-authored (i.e. those classed as either F or U by ATCL). And indeed, the same mutatis mutandis is true for the ostensibly-female to ostensibly-not-female ratio.

Here’s a table showing the raw counts:

Decade All “Male” “Female” “U” A-dig M-dig F-dig U-dig
All 19912 9152 9809 951 9099 5221 3718 160
1830s 482 256 174 52 250 164 85 1
1840s 1037 543 422 72 538 334 202 2
1850s 1483 595 778 110 718 347 358 13
1860s 2341 1019 1093 229 1015 540 456 19
1870s 2866 1189 1514 163 1300 642 633 25
1880s 4126 1693 2287 146 1765 945 782 38
1890s 5979 2995 2863 121 3092 1929 1103 60


And here’s another showing the percentages:

Decade Ad% M% Md% F% Fd% U% Ud%
All 45.70% 45.96% 57.38% 49.26% 40.86% 4.78% 1.76%
1830s 51.87% 53.11% 65.60% 36.10% 34.00% 10.79% 0.40%
1840s 51.88% 52.36% 62.08% 40.69% 37.55% 6.94% 0.37%
1850s 48.42% 40.12% 48.33% 52.46% 49.86% 7.42% 1.81%
1860s 43.36% 43.53% 53.20% 46.69% 44.93% 9.78% 1.87%
1870s 45.36% 41.49% 49.38% 52.83% 48.69% 5.69% 1.92%
1880s 42.78% 41.03% 53.54% 55.43% 44.31% 3.54% 2.15%
1890s 51.71% 50.09% 62.39% 47.88% 35.67% 2.02% 1.94%


In an ideal world, you’d expect the percentages for titles with male authors (M%)  and for digitized titles with male authors (Md%)  to be roughly the same, right?  Think on… And feel free to download the csv file behind these tables for your own experimentation.

One should always suspect the data, so I make no excuse for the following detailed blow by blow account of how I got these numbers. Full gruesome details, including the scripts mentioned below, are available from https://github.com/lb42/bookLists

The basic method was to download a complete catalogue of relevant titles available from each target digital library, and then try to match them with records in the ATCL. For Google Books, which does not seemingly provide a complete catalogue online, I tried a different method, discussed further below.

I started by downloading the latest (June 2020) dump of the ATCL database, and converting it to a basic TEI XML format. I then did much the same for the holdings of five digital libraries with good holdings of 19th century novels: the Hathi Trust, the British Library, the Internet Archive, Project Gutenberg, and Google Books. As a control, and for testing purposes, I also looked at a few smaller collections, notably the Victorian Women Writers Project at Indiana University and the (now defunct) University of Adelaide “ebooks” repository. I wanted to provide something similar to John Mark Ockerbloom’s lovely Online Books Pages at https://onlinebooks.library.upenn.edu/ but more precisely tied in to ATCL.

Hathi Trust makes available a monthly dump of their entire collection as a huge tab-delimited file. Working with the most recent dump, dated September 1 2020, I used a simple-minded Perl script `hathiProcess.prl` to parse this file and select from it only freely-available English-language books published in Great Britain between 1800 and 1920; an XSLT stylesheet `htConv.xsl` then converted the results to the common project format (CPF).
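The filtering itself is nothing clever; something along the lines of the following Python sketch would do. The column positions are placeholders to be checked against the current Hathifiles field list, not values I vouch for, and restricting the place of publication to the MARC code for England would need widening to cover the rest of Great Britain.

import gzip

# Column positions in the tab-delimited Hathifiles dump: placeholders, not gospel.
COL = {"access": 1, "rights_date_used": 16, "pub_place": 17, "lang": 18}

def wanted(cells):
    return (len(cells) > max(COL.values())
            and cells[COL["access"]] == "allow"                # freely viewable
            and cells[COL["lang"]] == "eng"
            and cells[COL["pub_place"]].startswith("enk")      # MARC country code for England; add stk, wlk, etc.
            and "1800" <= cells[COL["rights_date_used"]][:4] <= "1920")

with gzip.open("hathi_full.txt.gz", "rt", encoding="utf-8") as fh:
    keep = [line for line in fh if wanted(line.rstrip("\n").split("\t"))]
print(len(keep), "candidate records")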

The British Library website makes available an Excel spreadsheet providing metadata for the titles from their collection which were digitized some time ago by the Microsoft Books project. I downloaded this, converted it to TEI with `csvtotei`, and converted the result to CPF (selecting just the 19th-century titles) with `blConv.xsl`.

Project Gutenberg makes available several versions of its catalogue data. I worked with the most recently updated one, which is a vast archive of unbelievably verbose RDF files. Despite its complexity, this data doesn’t include any publication data for the source texts concerned (unsurprising really), though it does provide birth and death dates for the authors. To cut down the numbers a little, I rejected titles whose authors were not born during the 19th century, and also those which specified a MARC relator field “edt” (to cut out non-original editions). Once I had remembered how on earth to handle a gazillion tiny files of RDF (I did this back in 2018), I used the `gutConvRDF.xsl` script to process them all to CPF, and concatenated the results into a single file.

The Internet Archive, so far as I can see, doesn’t have any generally available or downloadable catalogue, though it does have a really good query interface. The method I used for attacking Google Books would presumably work equally well (or equally badly; see below) in this case, but I haven’t tried it. Instead I just used a predefined collection called `19thcennov` which someone at the University of Illinois at Urbana-Champaign thoughtfully created back in December 2008. This gave me 7828 XML records which were easily converted to CPF using `iaConv.xsl`.

The common project format files all consist of TEI <bibl> elements with either an @xml:id attribute or an <idno> specifying the identifying code for this item in the relevant repository, e.g. `ia:foreignersnovel03pric` identifies the Internet Archive’s digitization of volume three of Eleanor Price’s novel “The Foreigners”. Each <bibl> also has an @n attribute supplying the magic key for the title, which is confected as follows:

  • remove the full stop following Mr or Mrs in any title containing one
  • take the substring of the title up to the first occurrence of one of the punctuation marks . , : ; or /
  • concatenate this with the author’s last name
  • convert to lower case and remove all punctuation characters and spaces

So, for example ATCL lists a work with the title “The Foreigners: A Novel” attributed to author “Eleanor C. Price”. The same work appears in the Internet Archive list, but with the author “Price, Eleanor C. (Eleanor Catherine)” and the title “The foreigners : a novel”. Despite the differing strings, both will get the same magic key “theforeigners|price”. This method is far from bullet proof, but it’s serviceable.
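Here, for concreteness, is a minimal Python rendering of that recipe (extracting the surname from the variously formatted author fields is assumed to happen elsewhere):

import re

def magic_key(title, surname):
    """Confect the matching key described above: truncated title + '|' + surname,
    lower-cased, with punctuation and spaces removed."""
    t = re.sub(r"\b(Mr|Mrs)\.", r"\1", title)            # 'Mr.'/'Mrs.' must not truncate the title
    t = re.split(r"[.,:;/]", t, maxsplit=1)[0]           # keep only the main title
    clean = lambda s: re.sub(r"[^a-z0-9]", "", s.lower())
    return clean(t) + "|" + clean(surname)

assert magic_key("The Foreigners: A Novel", "Price") == "theforeigners|price"
assert magic_key("The foreigners : a novel", "Price") == "theforeigners|price"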

For Google Books, as noted above, there is no readily downloadable catalogue. But there is an API, which in a moment of madness I thought it might be cool to learn how to use. A day of poking around led me to a neat Python script some helpful person had written to look up ISBN numbers (hat tip to AO8’s treasury), which I mercilessly hacked to my own purposes. My version reads a file of URL-encoded search requests like this “inTitle:the+inTitle:foreigners+inAuthor:Price”, fires them at the Google API, and processes the returns into a rudimentary bibl or a comment lamenting the absence or unavailability of the item in question. The file of search requests is rather long (one for each title in ATCL for which I have not yet found any digital version – a total of 11,203) so I make the program sleep for a while after firing off about 40 consecutive requests, to help the Google server catch up. Despite this considerate behaviour on my part, it did not take Mr Google long to decide that my program (or my IP address) was a threat, and then to start returning unco-operative HTTP messages like 503 (“Service Unavailable”) and 429 (“Too many requests”). The API Help pages confirm that Google considers “using an app, program or script to perform a large number of searches in a short time” prima facie justification for temporarily blocking the IP address in question; though it’s not clear what exactly is meant by “large” (more than 100?) and “short” (less than a minute?) in that phrase. Furthermore, when I search using my specially-minted API key, there seems to be a hard limit of 1000 queries per day in any case: so this job is not going to be finished very quickly. Still, I do now have an extra 1517 records to show for two days’ work.
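My hacked-about script is not worth reproducing here, but the general shape of the loop is easy enough to sketch. This version assumes the public Google Books “volumes” endpoint and reads the API key from an environment variable; the sample query list and the pause length are likewise only illustrative.

import os
import time

import requests

API = "https://www.googleapis.com/books/v1/volumes"
KEY = os.environ.get("GOOGLE_BOOKS_KEY")          # my specially-minted API key

def lookup(title_words, surname):
    """Fire one intitle/inauthor query and return the matching items (or None if throttled)."""
    q = " ".join(f"intitle:{w}" for w in title_words) + f" inauthor:{surname}"
    r = requests.get(API, params={"q": q, "key": KEY}, timeout=30)
    if r.status_code in (429, 503):               # told off: back away quietly
        return None
    r.raise_for_status()
    return r.json().get("items", [])

queries = [(["the", "foreigners"], "Price")]      # illustrative; in reality some 11,000 of these
for i, (words, surname) in enumerate(queries):
    items = lookup(words, surname)
    print(words, surname, "->", "throttled" if items is None else len(items), "hits")
    if i % 40 == 39:                              # nap now and then so the Google server can catch up
        time.sleep(60)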

Once I’ve created all these lists, I run the merger.xsl script to add <ref> elements to the ATCL-TEI file I created in the first step. This makes for some redundancy, for two reasons: firstly, for most of the archives a three volume novel is likely to get a separate entry for each volume; secondly, for many titles, there exist multiple digitizations – which may (or may not) derive from the same source. The following table shows for each archive the number of records selected for processing, the number of references to ATCL titles found, and the number of titles affected. Note that I haven’t yet done any de-duplication to remove overlaps.

Archive            Records selected   ATCL references   ATCL titles
British Library    62015              9920              5104
Hathi Trust        460070             18891             5655
Internet Archive   7829               4691              1655
Project Gutenberg  38338              2880              2275
Google Books       ?                  1517              1517

I haven’t made available the CPF files for each archive, nor the final merged TEI version of the ATCL dump, since this is not really my data to share. But I have made available a file called atcl-links.csv, which is a spreadsheet with a row for each ATCL title digitized in one or more publicly available digital collection, mapping its ATCL identifier to its identifier in each repo. I’ll  update these as and when the data improves.

Can we trust the ATCL database?

I’ve been enthusiastic about the database behind the At the Circulating Library website ever since I discovered it nigh on three years ago. Troy Bassett, its creator, deserves much respect and lots of credit both for the work put into creating it and maintaining it, and for his generous open minded policy of making the data itself freely available in snapshot form for nerds like me to play with. I think of the ATCL as the nearest thing we are ever likely to get to an authoritative catalogue of the 19th century novel, and use it as such in other work which I’ll report on later. But (you guessed there was going to be a but) just how reliable is its coverage? Does it cover each decade of the 19th century equally well, or are there gaps?

When asked, Troy assures me that he’s fully aware that there are gaps. He estimates that the true size of the database should be well over 25,000 titles, rather than its current 18,000. He and his team are slowly and meticulously filling the gaps, by hoovering up and checking data from resources such as published catalogues and bibliographies, or online collections like Proquest’s NSTC. It’s tedious and fiddly work, ideally suited to a small farm of graduate students, should you have such a resource at your disposal.

My interest in ATCL is to use it as a reference point when assessing the coverage of other resources, in particular the catalogues of digital libraries. Take, for example, the current Hathi Trust catalogue. This lists 17,433,331 titled volumes in all; 235,035 of them being volumes published in the UK between 1800 and 1900. Removing obvious duplicates (Hathi catalogues each volume of a three-decker separately, usually though not always) brings this down to an only slightly more manageable 129,817 titles. How good is its coverage of 19th century novels? The difficulty of course is to winnow out just the entries which are novels. As I suggested on my blog back in June 2018, the word “novel” in a title turns out to be a very good indicator (other good words include “or” and “tale”, “history” etc), so I extracted from the HT records just those titles containing the word “novel” for further investigation: there are 953 of them.

In my earlier work, I’d just assumed that if a title in HT didn’t appear in ATCL then it wasn’t a novel. But how complete is ATCL? A good way of finding out might be to work the other way round: first get a list of things which are definitely (probably) novels — and then check to see whether  they also appear in ATCL.

On my first pass, 284 of my 953 “novels” (i.e. titles containing that word) did not appear in ATCL, which surprised me. But this was mostly a result of my matching procedure being insufficiently robust to cope with the amount of variation in cataloguing practice within the HT records and their occasional divergence from ATCL practice. I spent a happy day or two going through the list of delinquents by hand, taking the opportunity to discard 19th c. reprints of earlier works, translations from other languages (mostly scandalous Zola), and a handful which were clearly not novels at all (e.g. “Photographic amusements : including a description of a number of novel effects obtainable with the camera / by Walter E. Woodbury.”) – all of which brought the number down to 191. I then fixed the “magic keys” used to identify each title in ATCL as necessary, which brought me to a manageable 40 titles apparently missing from ATCL, with which to torment Troy Bassett and his team. 40 titles missing from a sample of 953 is pretty good; even 191 missing is not so bad in this always very approximate work. I remain persuaded that the ATCL is a reliable approximation to a representative sampling of the 19th century novel.

A brief report on my attempts to understand the make-up and cataloguing of EEBO-TCP

Q. What exactly is EEBO-TCP made of?

A. (digital) copies of (microfilm) copies of works.

Q. “Works”?

A. Well, no. Specific copies of works, held in libraries. What I think FRBR calls “instances”.

Q. FRBR makes me think of unhappy cats. Can you give an example?

A. Consider Jeremy Taylor’s 1668 page-turner “XXV sermons preached at Golden-Grove being for the winter half-year beginning on Advent-Sunday until Whit-Sunday”. Two copies of this book from the British Library’s collection, as printed by one E. Tyler for book-seller Richard Royston, have been microfilmed, one as image set 199641 and one as image set 45789. Both copies are of the same book and therefore have the same identifier (T410) in Wing. Check ’em out at https://search.proquest.com/eebo/docview/2248511138 and https://search.proquest.com/eebo/docview/248511188 and see if you can tell them apart (I can’t, in this case). And yes, EEBO also has images of two different copies of the 1655 edition of this book: same author, title, and publisher, but a completely different bibliographic entity, with a different Wing identifier (T409). And probably many others. He was big in the 17th century, that Jeremy Taylor.

Q. So?

A. Perhaps the primary key to the EEBO catalogue should be the image sets, since each catalogue entry concerns one set of images. However, given the choice, no-one would prefer a title like “microfilm no 42” to one like “amusing title of work”. Consequently, Proquest provides a unique identifier for each bibliographic work, which is suffixed with an image set identifier if there are multiple copies of the same work (which happens a lot), or if the work exists in multiple volumes which have been scanned separately (fortunately, less frequent). And, not to be outdone, TCP also provides a unique identifier, but in their case it identifies a specific copy of a specific work. The URLs I quoted above behave similarly: they uniquely identify a specific copy of a specific work.

Q. Wait, how many identifiers are there now?

A. Sticking with the 1668 third edition of Taylor’s XXV Sermons, we have two image sets (identifiers 45789 and 199641). Since these correspond to a single bibliographic entity, also known as Wing T410, Proquest gives them the same catalogue number (10772247). (You may wonder why they don’t just use the Wing number, unless you are a librarian.) But we can’t have two entries in our catalogue with the same identifier, so Proquest appends the image set identifier to the catalogue number for one of them (I don’t know how they decide which, nor why they don’t do it for both). TCP, as noted above, just gives each distinct item a distinct identifier: in this case the transcription of image set 45789 is in TCP text A64140, and the other is in TCP text B30404. All clear now?
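To summarize the tangle for the Taylor example as a little data structure (all the values are taken from the discussion above, and the key names are of course my own invention):

# One bibliographic entity (Wing T410, Proquest catalogue number 10772247),
# two microfilmed copies, each with its own image set and TCP transcription.
xxv_sermons_1668 = {
    "wing": "T410",
    "proquest_catalogue": "10772247",
    "copies": [
        {"image_set": "45789",  "tcp": "A64140"},
        {"image_set": "199641", "tcp": "B30404"},
    ],
}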

Q. There must be an easier way of doing this

A. Yes. Amongst a raft (or maybe a coracle) of other improvements this year when Proquest moved the online access to EEBO to a new platform, they also introduced a new identifier called a “GOID”, which is essentially a unique number for every entry in the catalogue. That’s what I used in the URLs quoted above. Hoorah! We can now access anything in the EEBO catalogue using a simple numeric code, just like we can in the TCP subset of it.

Q. What about the TCP identifiers though?

A. Alas, these are currently not included in the dataset which underlies the Proquest online catalogue. As noted here earlier, I am working on a super TEI-compliant version of said dataset, and that will assuredly include the TCP identifiers. More on that anon.

My humble thanks to Paul Schaffner for patiently explaining all this to me. Any residual errors are mine, not his.

EEBO Bib

There you are idly scrolling through Twitter when someone announces a tempting resource that’s just crying out for TEI-ification. And there goes the afternoon and most of the evening.

Anyway, creating my EEBO Bibliography in TEI was insultingly easy. I just grabbed the Excel spreadsheet (34 Mb) thoughtfully created by those lovely people at ProQuest, and even more thoughtfully publicized by the even more lovely Heather Froelich on twitter this morning. I opened it up in Open Office. I exported it as a CSV file (85 Mb) and munged same through the standard TEI csvtotei stylesheet to generate 175 Mb of TEI-compatible data. Then I sat down to consider how to make it actually TEI-conformant, i.e. how to make the title of a bibliographic entry appear not as the content of <cell n="5"> but as the content of a <title>. As you might suppose, defining the right mapping was easy for some things, but less so for others of the 17 cells in each of the 146,323 rows of the spreadsheet. There’s a table showing the mapping I decided on at the end of this blog, for those unwilling to read my pellucid XSLT code which actually uses it.

The resulting TEI file isn’t quite complete, because it lacks a TEI header (needed, among other things, to define the prefixes I use to save space in the URLs), and at 120 Mb it’s too big for GitHub. It is instead available at https://app.box.com/s/r8sxc68239g6pen09blzmul93tqs8rbv for your xpathing pleasure.

Here’s the table. The whole spreadsheet is a <listBibl> and each row becomes a <bibl>. I like simple solutions. I’m not proud of the <note type=”foo”>s, but that’s the best I could think of without getting far too complicated.

Cell  Spreadsheet field      TEI mapping
1     MARC identifier        @xml:id, prefixed by eebo:
2     Image set identifier   @facs, prefixed by eeboIs:
3     Publication type       @type (always either Book or Issue)
4     Collection             <series>
5     Title                  <title>
6     Author                 <author>
7     Publication Date       <pubDate>
8     Publisher              <publisher>
9     Country name           <pubPlace>
10    Publication language   @xml:lang gives the ISO code equivalent; the text goes in a <note type="langNote">
11    Accession number       <idno>
12    Source Library         <note type="sourceLibrary">
13    Full text image        if "Y", <note type="transcriptType"> contains "image"
14    Full text              if "Y", <note type="transcriptType"> contains "text"
15    USTC Classification    <note type="keywords">
16    Release date           Too boring to include
17    URL                    @ref with prefix proquest:
Mapping EEBO spreadsheet fields to bits of a TEI <bibl>
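For anyone who would rather not read XSLT, here is the gist of the conversion as a Python sketch using lxml. It handles only the columns that map straight onto a single child element of <bibl> (4–9 and 11), leaves out the identifier prefixing and the various attribute and <note> mappings, and assumes rows arrive as TEI <row> elements with numbered <cell> children, as csvtotei produces; it is an illustration of the mapping, not the stylesheet I actually ran.

from lxml import etree   # pip install lxml

TEI = "http://www.tei-c.org/ns/1.0"
NS = {"t": TEI}

# Columns that map straight onto a single child element of <bibl>
SIMPLE = {4: "series", 5: "title", 6: "author", 7: "pubDate",
          8: "publisher", 9: "pubPlace", 11: "idno"}

def row_to_bibl(row):
    cells = {int(c.get("n")): (c.text or "").strip()
             for c in row.findall("t:cell", NS) if c.get("n")}
    bibl = etree.Element(f"{{{TEI}}}bibl")
    for n, name in SIMPLE.items():
        if cells.get(n):
            etree.SubElement(bibl, f"{{{TEI}}}{name}").text = cells[n]
    return bibl

row = etree.fromstring(
    '<row xmlns="http://www.tei-c.org/ns/1.0">'
    '<cell n="5">XXV sermons preached at Golden-Grove</cell>'
    '<cell n="6">Taylor, Jeremy</cell></row>')
print(etree.tostring(row_to_bibl(row), pretty_print=True).decode())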

Counting the books

As a follow up to my previous rather excited posting I have finally got round to actually trying to count how many copies of each title selected for the ELTeC English collection various prestigious national libraries hold. Here’s how I have operationalised the need for some kind of metric approximating to the persistence or canonicity of a given title.

First I run a little XSLT script against the corpus to create a file full of lines like the following:

f @and @attr 1=1003 sinclair @attr 1=4 "modern flirtations"
set marcdump ENG18410.usmarc
show all

This means:

      • find records in which the author field contains “sinclair” and the title contains the words “modern” and “flirtations”.
      • send the output to a file called ENG18410.usmarc
      • display all the results from that query

Creating this query automagically is not without problems. Including words like “the” or punctuation like the question mark is ill advised. Some records include subtitles in their “titles” but most don’t. When the records do contain subtitles they may result in false hits: see further below.
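A sketch of the sort of query-building involved is given below; the stop-word list is an illustrative guess at the kind of thing that needs filtering out, not my actual list.

import re

STOP = {"the", "a", "an", "and", "or"}     # illustrative stop-word list

def pqn(surname, title):
    """Build a yaz-client find command like the one above:
    surname in Use attribute 1003, main-title words in attribute 4."""
    main = re.split(r"[:;.?!]", title, maxsplit=1)[0]      # drop any subtitle
    words = [w for w in re.findall(r"[A-Za-z']+", main.lower()) if w not in STOP]
    return f'f @and @attr 1=1003 {surname.lower()} @attr 1=4 "{" ".join(words)}"'

print(pqn("Sinclair", "Modern Flirtations: or, A Month at Harrowgate"))
# f @and @attr 1=1003 sinclair @attr 1=4 "modern flirtations"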

Next I throw this at a z3950 server and go make myself a cup of tea while it chunters away. As noted in my previous posting, getting Z3950 access to a library in question is mostly just a matter of knowing the address of the server and its port, the name of a database, and sometimes (as with the British Library) also wheedling a login and password. The reason I use the recondite syntax above for my query input, and the reason that I accept the results in usmarc 21 format is … that’s what every z3950 server I have looked at so far promises to provide. Some have other exotic options for query or for output, but nothing else is universally guaranteed to work.

Returning with my cup of tea, I now have a bunch of inscrutable marc21 records tidily filed away. I wasted the best part of an evening yesterday trying but failing to find a simple online tool which would convert them into marcxml or indeed anything readable, but the best I could come up with was a perl utility called marcdump. Here’s the start of the output it gives me for ENG18410.usmarc

LDR 00535nam a2200181uu 4500
001 006812208
005 20100212180700.0
008 040420s1841 xx || 000 ||eng
019 u _aG11034382
040 _aUk
_cUk
082 04 _a823
100 1 _aSinclair, Catherine,
_d1800-1864.
245 10 _aModern flirtations :
_bor, A month at Harrowgate /
_cCatherine Sinclair. Vol. 1.
260 _a[S.l.] :
_b[s.n.],
_c1841.
336 _atext
_2rdacontent
337 _aunmediated
_2rdamedia
338 _avolume
_2rdacarrier
852 41 _aBritish Library
_bDSC
_jW5/2649

Exciting stuff, eh. The useful bit here is the publication date, which appears as subfield _c of field 260 here (sadly, there are other possibilities), and even more useful the following, which appears at the end of the output file:

Recs Errs Filename
----- ----- --------
4 0 ENG18410.usmarc

Tis but a matter of moments to grep through these files and extract a list of record counts for each title, together with a list of publication dates.
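For what it’s worth, the grepping amounts to something like the following sketch; the file-naming convention for the saved marcdump output is hypothetical, and the regular expressions are keyed to the output format shown above.

import re
from pathlib import Path

counts, dates = {}, {}
for dump in Path(".").glob("ENG*.txt"):        # marcdump output, one file per title (hypothetical naming)
    text = dump.read_text(encoding="utf-8", errors="replace")
    # the record count sits in the summary block at the end of the file
    m = re.search(r"^\s*(\d+)\s+\d+\s+\S+\.usmarc\s*$", text, re.MULTILINE)
    counts[dump.stem] = int(m.group(1)) if m else 0
    # publication dates mostly turn up as subfield _c of field 260
    dates[dump.stem] = sorted(set(re.findall(r"^\s*_c.*?(\d{4})", text, re.MULTILINE)))

for key, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
    print(n, key, dates[key])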

Furthermore, and much to my relief, the counts do seem to reflect my initial expectations as to which titles would be highly rated and which not. The top ten titles in my 90 are (drumroll)…

94 ENG18860 Hardy: The Mayor of Casterbridge
106 ENG18531 Yonge: The Heir of Redclyffe
135 ENG18621 Braddon : Lady Audley’s Secret
143 ENG18481 Dickens: Dombey and son
148 ENG18610 Eliot: Silas Marner
152 ENG18530 Dickens: Bleak House
157 ENG18540 Dickens : Hard Times
168 ENG18480 Thackeray: Vanity Fair
298 ENG18471 Bronte: Wuthering Heights
664 ENG18652 Carroll: Alice in Wonderland

Nearly all of these would figure on any list of long-lasting 19th c English novels. An eyebrow might be raised by some in the English department about the appearance of Yonge and Braddon, but the explanation is simple: both ladies (or their publishers) were very fond of including the phrase “by the author of ‘Most Famous Title’” on the title of their less famous works, and I have not yet worked out how to remove such imposters as “Work you’ve never heard of (by the author of Most Famous Title)” from the results of a search for “Most Famous Title”.

Another eyebrow might be raised at the frequency distribution of the scores found: there is a very long tail, with nearly two-thirds of my 90 titles scoring 20 or less, while the top scorers, as shown above, score very much more. To some extent, this is explained by the crudity of my search technique, which will include musical adaptations, commentaries, versions for the use of slow readers, study notes, etc. etc., provided that “Most Famous Title” appears in the title somewhere. This worries me less, since the existence of such things is surely also testimony to the salience of the title in question. This factor does however have an inflationary effect on the scores, so that titles which don’t benefit from it appear lower than might be expected. “Middlemarch”, for example – widely regarded as amongst the greatest English novels of the period, but not subject to this inflationary effect – scores only 77, ahead of the early Sherlock Holmes novel “The Sign of Four” (72) but behind “Mary Barton” (82) by Mrs Gaskell, George Eliot’s closest rival for the depiction of provincial life.

But these scores should not be subjected to such close scrutiny. If we are looking for a proxy metric for the “impact factor” of these works, it’s not implausible to be guided by the numbers of different editions of them that have accumulated in our great national libraries. If we say that a score of less than (say) 20 suggests a low impact, and anything above (say) 50 a high one, we should not go too far wrong.

So far I have tested this procedure only on the British Library’s collection. An obvious next step is to try a different English-language library (COPAC springs to mind) to check that the ranking is not too widely different. And then to try out a different language: the BnF also has a z3950 server, so I plan to subject the French collection to the same treatment.