HDDA2012 (“Historical Documents in the Digital Age”) at the University of Rouen turned out to be unusual (for me at least) in a number of respects. Firstly, it was organised as part of a project (“DocExplore”) funded under the EU’s Interreg framework, and hence attended by people from both sides of the Channel rather than being exclusively French. As a consequence the presentations were in both English and French, with apparently quite successful simultaneous translation, though I did not test this for more than a few minutes. Secondly, I didn’t have to explain to anyone what the TEI was, or why it might be interesting; everyone seemed to know all about that already, even the informaticiens. And thirdly, there was no-one else from Adonis present, so it fell to me to ask the man from the Archives Nationales why they did not provide an OAI feed into Isidore as well as into Europeana (they’re planning to).
There were about eighty attendees, most of whom survived the full day and a half of invited presentations and round tables. There was a bit of audience interaction, but not much, and perhaps surprisingly only a couple of desultory tweeters, one of whom doesn’t count since it was me. There was, however, plenty of time for old-fashioned face-to-face discussion over lengthy pauses for sustenance in Rouen’s rather nice Maison de l’Université. As far as I could tell there were roughly equal numbers of archivistes and informaticiens, but they did not mix a great deal.
Proceedings were kicked off with two very good “state of the art” summaries of what’s going on in the way of cultural heritage digitization in France, by J F Moufflet from the Archives de France and Matthieu Bonicel from the BNF. I particularly liked the latter for his optimism about using the technology to break down the walls between the silos of digital artefacts being created everywhere, pointing to evidence from maybe half a dozen great projects previously unknown to me. Both of these speakers pushed all the right buttons about open public access and accountability, transparency and integration of resources, respect for standards, etc., thus making quite a contrast with the following speaker, from the archive of Canterbury Cathedral, who found herself having to explain why they’d made a deal with Satan in the form of findmypast.co.uk to get their parish records database online, perhaps revealing the very different business models within which archivists operate on either side of the Channel.
The second session was given over to tools for transcribing and indexing all those lovely digital images. Stephane Nicolas from LITIS, the Rouen team responsible for software development, laid out clearly the challenges and advantages of integrating transcription and images. Two rather more technical presentations followed: one from Franck Lebourgeois, which felt a bit like a graduate seminar on the mathematical basis of OCR, and another from Marcal Rusinol, from a Spanish lab, about vision-processing techniques for word recognition or (as it seems to be called in the trade) “word spotting”.
The last session of the day was billed as being about digital palaeography proper, and was divided appropriately between two contributions from palaeographers (Elizabeth Lalou from Rouen and Marc Smith from the Ecole Nationale des Chartes) and two from computer engineers (Veronique Eglin from LIRIS and Richard Guest from Kent). The former group clearly understood the potential the technology offers to address some long-standing difficulties in the treatment of, for example, allographic variation, or the use of frequency statistics in the definition of “writing style”; the latter group perhaps had a harder job in making explicit just what the state of those particular arts currently is.
On the second day I arrived a bit late, for some rather odd discussions, again revealing extraordinary differences in attitude on either side of the Channel, about the “ludique” (playful) use of IT in cultural heritage applications, i.e. how to make cool exhibits in museums. It began with a moderately dreadful intervention from a professional French developer of such things, but was rescued by a man from the British Library called Clive Izard, who gave a historical survey of the BL’s flirtations with technology, from the days of the Information Access programme (which, I may say, was one of the funders of the BNC) up to the present, third, generation of the “Turning the Pages” application. He was followed by another excellent (and splendidly named) speaker, Clotilde Vaissaire-Agard from Le Havre, who reminded us of the need to place the scholar at the centre of the picture (I was reminded of a former OUCS colleague’s plaintive cries of “What about the users?”). She also endeared herself to me forever by citing the Manuscriptorium project (remember ENRICH?) as an outstanding example of what the technology facilitates by making it possible to share metadata and digital resources across institutional boundaries for the benefit of manuscript scholarship.
The final session, though labelled as concerning that old war horse “Is there such a thing as Digital Humanities?”, actually contained three very good and complementary talks intimately concerned with the themes of the conference. Alison Wiggins from Glasgow’s Bess of Hardwick project gave a convincing account of their attempt to ground the project in practical, user-focussed concerns (she cited Claire Warwick et al.’s LAIRAH as one of their inspirations); Dominique Stutzmann from IRHT raged, with ample evidence, against the lack of decent interfaces in transcription software; and finally Alixe Bovey from Kent gave a well-illustrated overview of the strengths and limitations of various interfaces developed for interacting with the physicality of medieval sources. She concluded by lamenting, in the way people do, the absence of smell associated with digital images, and the mismatch between the haptics of the touchscreen and those of the codex. I was more impressed by Alison’s comment that it was more useful to know what kind of paper Bess of Hardwick wrote to the Queen on than to be able to reproduce it.
Dear Foxglove,
Just a clarification:
“vision processing techniques for word recognition or (as it seems it is called in the trade) “word spotting”.”
In fact, both exist but they do not describe the same techniques.
“Word recognition” would be in the same family as “Pattern Recognition” or “Optical Character Recognition”. These techniques imply knowledge of what a written document is. They are based on segmenting the document image into written and unwritten parts, then segmenting the lines, then the words, then the characters or perhaps graphemes, and so on.
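Purely as an illustrative sketch of that segmentation idea (not taken from any of the systems presented, and assuming OpenCV, NumPy and a hypothetical page image file page.png), a horizontal projection profile is one simple way to cut a cleanish page into text lines before words and characters are attacked:

```python
import cv2
import numpy as np

# Load a page image (hypothetical filename) and binarise it with Otsu's
# threshold so that ink pixels become 1 and background pixels become 0.
page = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)
_, binary = cv2.threshold(page, 0, 1, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Horizontal projection profile: amount of ink in each image row.
ink_per_row = binary.sum(axis=1)

# Rows with (almost) no ink separate the text lines; contiguous runs of
# inked rows are taken to be individual lines of writing.
is_text_row = ink_per_row > 0.01 * binary.shape[1]
lines, start = [], None
for y, inked in enumerate(is_text_row):
    if inked and start is None:
        start = y                      # a new text line begins
    elif not inked and start is not None:
        lines.append((start, y))       # the current text line ends
        start = None
if start is not None:
    lines.append((start, len(is_text_row)))

print(f"Found {len(lines)} candidate text lines")
# Each (top, bottom) strip could then be segmented further into words
# (a vertical projection within the strip), then characters or graphemes.
```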
“Word spotting” would fall into the family of “object spotting” methods. These do not imply a segmentation of the image or any knowledge of what is being searched for; the method just tries to spot a shape in the image, here the shape of a word. This is what your digital camera or smartphone does when it draws a lit rectangle around people’s faces in the viewfinder/screen.
Recognition is the older field. It is perfectly suited to cleanish document images, where cutting the image into little pieces (“segmenting”) will be effective. Spotting is relatively more recent and will be more effective on noisy or degraded images.
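And, again only as an illustrative sketch of the segmentation-free idea rather than any of the systems discussed above: assuming OpenCV, a hypothetical page image page.png and a hypothetical cropped exemplar word.png of the word being sought, even naive template matching “spots” occurrences of a shape without ever segmenting the page; real word-spotting systems replace the raw pixel comparison with far more robust shape descriptors.

```python
import cv2
import numpy as np

# Hypothetical inputs: a page image and a small cropped image of the word
# whose shape we want to spot (both greyscale).
page = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)
word = cv2.imread("word.png", cv2.IMREAD_GRAYSCALE)

# Slide the word image over the page and score the match at every position.
# No line/word/character segmentation is performed at any point.
scores = cv2.matchTemplate(page, word, cv2.TM_CCOEFF_NORMED)

# Keep the positions whose similarity exceeds a (hand-picked) threshold and
# draw a rectangle around each, much as a camera boxes detected faces.
threshold = 0.6
h, w = word.shape
result = cv2.cvtColor(page, cv2.COLOR_GRAY2BGR)
for y, x in zip(*np.where(scores >= threshold)):
    cv2.rectangle(result, (int(x), int(y)), (int(x + w), int(y + h)), (0, 0, 255), 2)

cv2.imwrite("spotted.png", result)
```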
Thanks for this clarification!