At the beginning of February, I had the pleasure of co-organising (with Sebastian Rahtz, Camille Bloomfield, and Hélène Campaignolle-Catel) a workshop on data capture, the second event in the Algorithm Seminar Series which forms part of an interesting ANR-funded project called DifdePo. The project is a collaboration between the BnF and Ecritures de modernité, a research unit based at Paris III, and its objectives include the creation of a TEI-based digital archive of the papers of the OuLiPo, which are currently stashed away in boxes at the Bibliothèque Nationale’s Arsenal depository. The papers include letters, photos, press cuttings, postcards, drafts, and notes of all sorts, but for the purposes of this exercise we decided to focus on the records of the OuLiPo’s regular meetings, which began back in the early 1960s. The archive has already been catalogued, and work is in hand to produce digital images of a sizeable proportion of it. The object of our workshop was to explore ways of transcribing these documents, given that the project has very little funding and will therefore have to rely on the goodwill of volunteer transcribers, enthused by things OuLiPien but maybe a little deficient in TEI knowledge.
About a dozen people participated, most of them surviving to the end of the day. We began by asking them to transcribe a page from a small collection of pre-selected digital page images, using Word. (I freely admit to a degree of smugness on discovering at the last minute that the teaching room was initially equipped only with old-style doc-producing Word, which had to be upgraded to a more modern docx-producing version at rather short notice by the unflappable Joël.) This exercise demonstrated, as we had hoped, quite a bit of variation in what exactly should be transferred from the image to the text, and on what editorial principles, thus motivating a useful initial discussion of the principles and praxis of text encoding. One of the participants proposed (unprompted) the principle of “fidelity” to the source, while another argued repeatedly for “capturing the meaning”.
Once lulled into a false sense of security by this exercise, participants were exposed to the weirdness of an XML-editing environment using everyone’s favourite XML editor, oXygen, and my usual tutorial: create a document, learn how to tag parts of it, learn how to manipulate the structure, and so on. We then offered them a more demanding workflow: first capturing a document in Word using a Word template which defined styles to highlight a number of significant features (headings, list items, and so forth, but also personal names); secondly converting this to TEI, using OxGarage and a specialised profile; thirdly looking at (and possibly modifying) the result in oXygen; and finally converting it back to Word to confirm the feasibility of round-tripping. Sebastian Rahtz of Oxford (whom God preserve) invested quite a bit of pre-workshop effort in setting up the necessary infrastructure for this, and in making sure that it all worked correctly on the day. He also made it possible for us to inflict on the encoders a third approach, based on an experimental installation of Ben Brumfield’s FromThePage crowd-sourcing prototype software. I had expected this to be everyone’s favourite, but (maybe because we had by then sensitized them to the delights of structural markup) our encoders seemed to find that the simplicity of its interface made it hard to take seriously. We had prepared tutorial scripts for each of the three approaches (TEI source code available from my tei-fr repository, if you’re interested), so I was able to spend some of the time wandering about taking photos of hard-working encoders.
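To give a flavour of what the style-based route produces, here is a sketch of the kind of TEI fragment that might emerge from the conversion when a Word page uses a heading style for its title, a bulleted list for the attendance, and a character style for names. The style names, the mapping, and the content are all invented for illustration; the actual DifdePo profile may well do things differently.

```xml
<!-- Illustrative only: a possible TEI rendering of a Word page whose title
     is styled as a heading, whose attendance list is a bulleted list, and
     whose names carry a "persName" character style. Content is invented. -->
<div>
  <head>Réunion de l’OuLiPo</head>
  <list rend="bulleted">
    <item>Présents : <persName>Raymond Queneau</persName>,
          <persName>François Le Lionnais</persName></item>
    <item>Lecture des textes proposés à la réunion précédente</item>
  </list>
</div>
```

The attraction of this route, of course, is that transcribers never leave Word: the conversion quietly turns their named styles into TEI elements behind the scenes.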
By the end of the day, everyone had tried all three approaches, and everyone had produced a couple of TEI XML files conforming to a simple transcription schema I had prepared earlier. We collected them all up, and Sebastian showed how our pretend archive could be displayed on a web page, complete with corresponding page images, vocabulary lists, and a personography. This was (of course) all done with a straightforward customization of the standard TEI-to-HTML stylesheets, now available in the Stylesheets package as part of the DifdePo profile.
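For the curious, a transcription conforming to that sort of simple schema might look something like the following; this is my own minimal sketch, not the schema or the files actually produced at the workshop, and the file names and content are invented.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- A minimal sketch of a workshop-style transcription: element choices,
     file names, and content are illustrative, not the project's own. -->
<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <teiHeader>
    <fileDesc>
      <titleStmt>
        <title>Transcription d’un dossier de réunion (extrait)</title>
      </titleStmt>
      <publicationStmt><p>Workshop exercise</p></publicationStmt>
      <sourceDesc><p>Digital page image supplied by the DifdePo project</p></sourceDesc>
    </fileDesc>
  </teiHeader>
  <text>
    <body>
      <pb facs="page-01.jpg"/>
      <div type="seance">
        <head>Réunion de l’OuLiPo</head>
        <p>Présents : <persName ref="#queneau">Queneau</persName>,
           <persName ref="#lelionnais">Le Lionnais</persName></p>
      </div>
    </body>
  </text>
</TEI>
```

The ref attributes here would point at entries in a shared personography file, which is roughly what makes a generated personography page of the kind Sebastian demonstrated possible.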
Conclusions? We still don’t really know whether our TEI XML transcriptions are aiming for “fidelity” or for “meaning”, but we have at least demonstrated the possibility of either (or both). And we do know that the participants all seemed more enthusiastic about the customised Word-template approach than about either raw oXygen or the (possibly over-cooked) FromThePage. We didn’t explore the idea of a pre-customised oXygen author-mode interface, which might well repay the necessary investment of effort if, for example, there is a lot of metadata to be entered.
I take the liberty of listing the names of the registered participants, for their greater glory:
- Camille Bloomfield
- Hélène Campaignolle-Catel
- Paula Klein (Projet DifdePo)
- Chris Clarke (Projet DifdePo)
- Jeanne Devautour
- Julie Bernard (Poitiers)
- Marie Bonnot
- Marianne di Benedetto (ENS Lyon)
- Guillermo Hector
- Pradeep Claassen
- Louise Kari-Merau
- Leïla Berlot
- Barbara Servant (Univ Rennes II)
- Clara de Reigniac
- Gabrielle Bruzzone (Poitiers)
- Claire Leroy
All affiliated with Paris III, unless otherwise indicated.