My efforts to find links to digitized versions of all the titles in ATCL took one huge methodological leap forward last week, and are now poised on the brink of another.
Going through the titles I had managed to extract from a rather uncooperative Google Books interface last week, I noticed that rather a lot of them were marked as “not available” for some reason. More precisely, although my 11,104 searches, each corresponding to an entry in ATCL for which I had not yet found a digitized version, had succeeded in identifying 2,186 previously unseen titles, they had also thrown up 3,885 titles which Google considered inaccessible, presumably for copyright reasons, and 5,033 of which it flatly denied all knowledge. Yet when I looked up a few of these same titles (whether allegedly “inaccessible” or “non-existent”) in SOLO – the Bodleian’s wizard student-friendly query interface to its catalogue – there they were, page images downloadable in PDF, no sweat.
Now, amongst other delights, SOLO allows quite rich facetted searching, so it is easy to formulate a query like “find me all titles classed as fiction published in London or Scotland between 1830 and 1900, which have also been digitized by Google”, which made me think for a few moments that my work was now done. But as with many other classy library interfaces, SOLO stops short of allowing a mere automaton to carry out any searching: you have to sit at a keyboard and type, though it will grudgingly allow you to save and download the results of your query … provided it contains no more than 50 (FIFTY!) hits. Which (as I politely pointed out to the harassed librarian on online-chat duty last week) is almost entirely useless for my purposes.
Then I remembered that Real Librarians Do It With Z39.50 and dusted off my YAZ skills. The Bodleian, like all real libraries, has a perfectly good Z39.50 interface, which is not only entirely unbothered by a succession of several hundred queries but also happy to send back directly as many full catalogue entries for the hits as you can (err) handle. The only catch is that the queries have to be expressed in some antique syntax called PQN (Prefix Query Notation) and the results come back in MARC 21. I cut my programming teeth on Fortran IV, so these ancient tongues scare me a lot less than, say, JSON. I turned my list of queries unsatisfied by Google Books into PQN, fired them at library.ox.ac.uk:210/ALEPH and put the kettle on for a nice cup of tea. PQN is not very discriminating, or not in my hands at any rate, and my queries massively overgenerated. But once my 11,297 results had passed through a couple of utilities (yaz-marcdump to produce MARCXML, and my very own `marctotei` to identify and fillet the relevant records) I had a set of 780 CPF format records to add to the ATCL database list, and the tea wasn’t even cold (774 once I’d weeded out some duplicates and mismatches).
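For anyone curious what that looks like in practice, here is a minimal sketch of how such a batch of PQN queries might be generated and fed to yaz-client. It is an illustration only, not the script actually used: the CSV file name, its column headings and the output file names are invented for the purpose, and only the Z39.50 target address comes from the account above.

```python
"""
Sketch: turn a CSV of unmatched ATCL titles into a yaz-client command file
of PQN queries. File names and column headings are assumptions made for
illustration; the Z39.50 target is the one named in the text.
"""
import csv

Z_TARGET = "library.ox.ac.uk:210/ALEPH"   # Bodleian Z39.50 target

with open("unmatched_titles.csv", newline="", encoding="utf-8") as src, \
     open("queries.txt", "w", encoding="utf-8") as out:
    out.write(f"open {Z_TARGET}\n")
    out.write("format usmarc\n")              # ask for MARC records back
    out.write("set_marcdump results.marc\n")  # dump every retrieved record to a file
    for row in csv.DictReader(src):
        # strip embedded double quotes so they don't break the PQN quoting
        title = row["title"].replace('"', "")
        author = row["author"].replace('"', "")
        # PQN: Bib-1 use attribute 4 = title, 1003 = author
        out.write(f'find @and @attr 1=4 "{title}" @attr 1=1003 "{author}"\n')
        out.write("show 1+50\n")              # fetch up to 50 hits per query
    out.write("quit\n")
```

The resulting command file can then be piped into yaz-client (`yaz-client < queries.txt`), which appends every record it retrieves to results.marc, ready for yaz-marcdump to turn into MARCXML.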
A natural question is: can we do the same trick with the British Library? Or any other library offering a Z39.50 interface? In principle, yes. But of course the Bodleian’s use of MARC fields may not be entirely the same as everyone else’s, and so the script I wrote to fillet the results of a query may need fine-tuning. For example, the BL does not seem to use MARC field 856 (which I rely on) at all: its digital texts are stored in something called the Digital Store, and their identifiers there don’t seem to map directly to anything like a URL. And while I was thinking about that, something unexpected happened.
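To make the reliance on field 856 a little more concrete, here is a rough sketch of that kind of filleting, done here with the pymarc library rather than with the `marctotei` script mentioned above: it walks a MARCXML file and prints whatever URL each record carries in 856 subfield $u. The input file name is merely an example.

```python
"""
Sketch of 856-based filleting with pymarc (not the marctotei script itself):
read a MARCXML file and print each record's title alongside any URL found
in field 856 subfield $u.
"""
from pymarc import parse_xml_to_array

records = parse_xml_to_array("results.xml")    # e.g. the output of yaz-marcdump
for record in records:
    f245 = record.get_fields("245")            # title statement
    title = f245[0].format_field() if f245 else "(no title)"
    for field in record.get_fields("856"):     # electronic location and access
        for url in field.get_subfields("u"):   # subfield $u holds the URL
            print(f"{title}\t{url}")
```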
A tweet arrived, alerting me to the existence online of the “OpenTexts.world” search engine: a search interface to a much more ambitious and much more comprehensive view of the world’s digital resources, namely the Global Digitised Dataset Network (GDD Network), originally a research project into the feasibility of creating a global catalogue of digitised texts. At the end of this project’s first funding year it has made available not only a nice search interface but also (applause) the underlying complete dataset. The latter looks a bit like the HT snapshot dumps I have processed before, though it is missing quite a few useful fields, such as type of text, place of publication, etc. And the nice search interface so far has only limited functionality: nice if you are exploring the data, and really quite annoying if you know exactly what you want to find. On the bright side, it allows you to download the results of the query as a CSV file and even has a sort of API, apparently supporting Lucene-style queries to be passed in via a URL to a SOLR-indexed version of the data. This could well be the answer…
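If the SOLR route does turn out to be the answer, the queries would presumably follow the standard SOLR select syntax. The sketch below is purely speculative: the endpoint URL and field names are guesses for illustration, since all that is known so far is that Lucene-style queries can be passed in via a URL.

```python
"""
Speculative sketch of querying a SOLR index of the GDD dataset. The base
URL and field names are invented for illustration; only the /select
parameters shown are standard SOLR ones.
"""
import requests

BASE = "https://example.org/solr/opentexts/select"   # hypothetical endpoint
params = {
    "q": 'title:"The Mill on the Floss" AND creator:Eliot',  # Lucene-style query
    "wt": "json",   # request a JSON response
    "rows": 50,     # maximum number of hits to return
}
response = requests.get(BASE, params=params, timeout=30)
response.raise_for_status()
for doc in response.json()["response"]["docs"]:
    print(doc.get("title"), doc.get("creator"))
```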