
A day in Lower Normandy

And so to Caen, whose University campus boasts magnificent if vaguely fascist architecture, at the top of a hill, commanding splendid views over the urban sprawl to the countryside beyond, and liberally decked with graffiti to bewilder future epigraphers.

OK Epidoc, encode this.

The University Press of Caen having joined forces with two other departments to offer him a visiting fellowship, my distinguished and white-haired Danish colleague Matthew Driscoll is organising a series of seminars over the next few months, and I am here for the kick-off session “TEI et encodage des sources”. About a dozen or so TEI fans are gathered in the Belvedere Room, which is vast and very cold but still affords delightful prospects (as they say).

First up is Julia Rogers, a local doctorante, describing the online edition of Descartes on which she is working under the watchful tutelage of Pierre-Yves Buard inter alia. No manuscript survives of Descartes’ works, and modern editors have played fairly fast and loose with them as a consequence: this impeccable electronic edition returns to the first printed editions as its basis, but uses all the possibilities of digital editing. Text is captured and maintained collaboratively by up to 15 scholarly editors, using a customisation of XML Mind to enforce a simple P5-conformant protocol designed by Pierre-Yves (and built with Roma), allowing for such niceties as the addition of editorial notes, citations, tracking of quotations, mathematical formulae (currently done in TeX though this will change) etc. Elsewhere in the University a fairly sophisticated morphologically-aware search engine is being developed, so that the original text can be queried in Modern French. The online edition will also integrate high quality page images supplied by the BNF, compensating for the decision not to encode all features of the layout. Impeccable, as I said.

I was also impressed (as usual) by Sourcencyme, presented by Isabelle Draelants from Nancy and Catherine Jacquemard from Caen. This ongoing project will combine a textual corpus of medieval encyclopaedias (about seven so far) with a sophisticated indexing system tracing the chains of reference and citation amongst them, extending in some cases beyond into the 19th century. As a real hand-built hypertext, it is thus increasingly becoming the thing it represents: a complete encyclopaedia of medieval learning, endowed with tools for collaborative editing and annotation, and also with a specialist journal-like addition published by the ubiquitous revues.org. However, unless I misunderstand, a significant number of the texts it treats are owned by Brepols, which may pose access problems.

Next before lunch, we were entertained by Vincent Jolivet and Frederick Glorieux from the Ecole Nationale des Chartes, whose home-grown RelaxNG tools continue to advance in the general direction of TEI conformance. They have been working on a direct conversion from ODT to TEI, using the same principles as Sebastian Rahtz’ stylesheets but aiming at a more specific homegrown RelaxNG schema, now expressed (I think) using an ODD. This was all very satisfactory, as is the fact that the tools in their workshop continue to be readily accessible.

Lunch (a three-course affair involving some rather good salmon, and a chocolate mousse) was also highly satisfactory, and we reconvened much restored for an afternoon combining three short project presentations with set pieces from Matthew and from myself. Subhasree Pasupathy, from Caen, first of the three, described her use of the TEI mechanisms for representing textual variation in her thesis on the projects of the Abbe de St Pierre. Thomas Lebarbe introduced us to the pleasingly heterodox digital Stendhal project at Grenoble, during which I wondered, not for the first time, how hard it could be to write an ODD corresponding to their home-grown DTD. Finally, Jorge Fins from Tours showed us how the Bibliotheque Virtuelle des Humanistes at Tours is now using both XTF and Philologic to search its corpus.

And so to the grand old man of TEI-based editing: not me, but Matthew Driscoll. He spoke in English but (as someone said to me afterwards) with such limpidity of discourse as to pose no problem (which sounds even better in French). Citing W.S. Greg’s distinction between “substantive” and “accidental” variation, he showed how TEI markup enables one to capture both, but display either, by the judicious tweaking of rather cunning stylesheets developed by Eric Haswell. He also talked about gaiji, news of the existence and facilities of which does not seem to have penetrated everyone’s consciousness to the extent that it probably should have by now. And finally, a good half of the material I had prepared for my own talk having been presented by previous speakers, I was able to close the day in a suitably forward-looking way by focussing mainly on the new concepts proposed for handling l’edition genetique (sourceDoc, mod, change etc.) in TEI P5, which all seemed to go down quite well.

Deplacements Septembre – 4 et finale

I wake in yet another extraordinarily overpriced (but architecturally impressive) hotel, this time in London, to find that the sun is already shining brightly and I have

Breakfast in Russell Square

ample time to look for a real breakfast — my first for many days — which I enjoy alfresco, in the much to be recommended cafe in Russell Square. For today really is the last stop on this little tour of duty, and I am down to give a closing address at University College London, where Julianne Nyhan and Anne Welsh have jointly organised a symposium on “the Hidden Histories of Digital Humanities”. Sitting in the leaf-dappled sunshine, I mentally resolve to try to take seriously the notion of Digital Humanities as something having an identifiable history, and (with slightly less difficulty) the notion that I myself have contributed to it. I also resolve not to get too much ego on my tie, and not to use that joke to open my talk, since I am scheduled to appear at the end of a long day. My talk is a revamped version of one I have now given twice, both times in French, so doing it in English will be entertaining (for me, at any rate).

I arrive at UCL in good time to pay my respects to Jeremy Bentham still sitting in his glass case, and to be issued with my little folder of publicity for UCL’s new Centre for Digital Humanities, a free pen, a completely empty USB key, and some fairly nasty coffee. Ah yes, English coffee. And sugary English biccies in plastic bags. Never mind, we pile into the massive auditorium, and listen to Professor Willard McCarty, for it is he, reading wise words, and dropping names in an appropriately professorial manner. His title was something about how to write a history of Digital Humanities. Amongst topics mentioned I seem to have noted only the following: a Wurlitzer book vending machine; Margaret Masterman; I A Richards; N Mccullock; Jasia Reichardt’s Cybernetic Serendipity and of course Homer’s Odyssey, xix, lines 149-50. Willard somehow manages to appear both encyclopaedic and parochial. While I share his desire that the usual suspects when pontificating on the “discipline” should meet the highest of academic standards, it also seems to me that writing history is largely about taking revenge. But before it all gets too personal, the charming Claudine Moulin from Trier’s DH Zentrum takes to the floor and delivers a lengthy meditation on the concept of “knowledge spaces”, in particular (I think) the way that such spaces can be regarded as “the spatial concretion of knowledge stocks” (I understood that when I wrote it down, but now I am not so sure).

After a pause for more brown liquid and sugary snacks, during which I traded scurrilous gossip with Lorna Hughes, Edward Vanhoutte, attended by not one but two tall and glamorous Flemish ladies, gave what was probably the best researched presentation of the day, concerning how humanities computing mutated into digital humanities, and what the difference between them might be. Like Fred Gibbs and Patrick Svenson (whose definitional forays he cites), Vanhoutte seems to think that these terms label ontologically distinct fields of activity, or “disciplines”, which can be located independently of any social, historical, or political context, and that there is an inherent circularity in defining either of these terms by means of lists or surveys or (one might say) communities of practice. But then he also seems to think that terms exist independently of their application, so he is at least consistent. I learn with interest that the currently foundational text in the field (Schreibman et al) was titled a Digital Humanities Reader at the suggestion of John Unsworth, because the marketing department of their publisher did not like the originally proposed “Humanities Computing” label; I immediately tweet the proposition that said publisher (Blackwell) was keen to put deep blue water between this product and the immediately preceding foundational texts of the domain, OUP’s “Humanities Computing Yearbook” and “Research in Humanities Computing” series. That’s how historical truth is created, Edward.

Next up is the totally charismatic Melissa Terras, back from maternity leave and showing her dedication to the cause by making this seminar her first return gig, and on a Sunday to boot. She does us the compliment of actually addressing the ostensible topic of the seminar by talking (ostensibly) about the implications of “Crowd sourcing: beyond the traditional boundaries of academic history”. I think the idea was to make connexions between the role of the internet beyond the academy, the need to pay attention to ephemera when constructing history, and how crowd sourcing might help us explicate them. She used as a real-life example an amusing document from the TEI archives, but Mel has the rare ability to make any document sound fascinating.

We then resumed for more serious matters. Last before lunch, James Cronin from UC Cork spoke fairly impenetrably about what is presumably a cause celebre in Irish architectural history: the elision from history of Lehmann James Oppenheimer, a key figure in the Irish Arts and Crafts movement at the end of the 19th century (I learn), but no longer featuring in any discussion of the churches whose interior he designed, presumably on account of his name (I hypothesize).

After lunch, about which I report only that it was not French, Andrew Flinn from UCL spoke entirely over my head about the information theory behind oral history methodologies (I plead total ignorance of this topic). Also from UCL, Vanda Broughton was much more accessible on the social history of what we now know as facetted searching, which she made a plausible case for having been invented by a real-life group of information scientists inspired by Ranganathan long before its reinvention or rediscovery in shopping systems and super cool web search engines (like Isidore). In the absence of the sadly-missed Claire Warwick (she was off sick), the next item on the programme was a “virtual presentation” by Ray Siemens, i.e. (I think) a previously recorded video of Dr Siemens gazing at us and chatting away very much like a late night radio broadcaster. I say that not just because of Ray’s mellifluously smooth mid-Atlantic phrasing, but also because he sounded unsure that anyone was actually listening to him. He had a number of interesting anecdotes about “pioneers of DH” (the Stanwood brothers, for example), some of which were new to me, but I found it difficult to engage with discourse so carefully inoffensive, and so clearly addressed to no-one in particular. And so, after a brief tea break, to my own presentation, which was (need I say) entirely wonderful, if a bit chaotic. More to the point, I managed to get through it within the time allocated, leaving a bit of time to answer questions before the six o’clock deadline for everyone to decamp to the room where several bottles of a really quite drinkable chardonnay awaited us. I got pleasantly drunk, staggered out into the night, hailed a taxi to Paddington, and thus eventually back to Oxford for a well-earned rest.

Willard and me, then and now

Deplacements, sept 2011 – 3

16 septembre. Wherefore to Exelmans? Because it is within walking distance of the CNRS “campus” at Michel-Ange of course, where I am due to attend a day-long event hosted by the TGE Adonis. Rather like a JISC town meeting, or an EU pre-proposal briefing, the main purpose was partly to inform people about the latest ANR Call for proposals and also of course to present the services offered by the TGE Adonis for the use of such projects. Not surprisingly it was a well-subscribed event: between 150 and 200 people turned up, handed in their identity cards (security is strict at the CNRS campus) and made for the coffee and croissants. I met lots of people I know and recognised, and also some I knew but failed to recognise, embarrassingly enough. There were lots of the kind of intense conversations you tend to get in academic contexts when there’s money in the air.

Patrice Bourdelais assured us that the overall budget of the Institut des SHS would not be too much affected by current financial anxieties, and then apologised for having to rush off to a crisis budget meeting to discuss precisely how much it would be. Jean-Claude Rabier, from the ANR, gave a much longer talk about the thinking behind this “ANR Corpus” call, the first since 2007, what they were looking for, how to get funded, what to do and not to do, how much money was on the table, etc. This was what the punters had come for, and he was listened to very attentively. Laurent Doucet talked about the TGIR Corpus, which I found interesting since I have been wondering what it is for for some time. Apparently, it will set up lots of subject-focussed consortia, with small amounts of money for meetings, sharing of expertise, training etc. One or two already exist, and more are planned. Are they like the “centres de ressources numeriques” of the TGE Adonis? You might think so, but they’re not, even though some of the same people and institutions are involved in both. Jean-Luc Pinol, Richard Walter, and Laurence Rageot then each spoke about the services offered by the TGE-ADONIS (grille de service, Isidore, archivage perenne etc) and its guides de bonnes pratiques, with a quick plug also for next month’s ANGD summer school. Asked to be “ponctuelle” in their questions before lunch, some of the audience (notably Serge Woliakowsky) nevertheless threw some googlies, such as “how does this relate to other European initiatives?” or “is there any procedure for automatically enhancing data capture to meet the recommended norms?” or even “what about the Centres de Ressources Numeriques then?”

And so to an excellent buffet lunch and a reprise of intense conversations as above (I mostly learned about Marie-Luce Demonet’s new proposed consortium called Cahier, focussing on literary texts), followed in the afternoon by a briefing on some technical aspects of the Grille de Service, followed by more questions and discussions from the floor. I noted the following comments: yes, multidisciplinary (i.e. not just SHS) projects would be considered; no, there is no charge for the TGE’s services, unless you want more than we are funded to provide, and no, the ANR does not fund anything except “research”; yes, there will be another call; at least one-third of the members of the selection panel are “international” (i.e. non-French) experts; 4% of the ANR overall budget is allocated to infrastructural support (I think this is how one might calculate the level of support that the TGE is “funded to provide”), which is why management costs for individual projects should not be claimed. Finally, Patrice Bourdelais reappeared from his own budgetary meetings, reassured us that the CNRS had plenty of lawyers to defend its interests and said “pour l’instant y a pas de catastrophe”, which is probably as much as one might hope for in these days of ubiquitous financial deficiency. So the meeting broke up very agreeably, and I reclaimed my passport and headed for the Eurostar.

Deplacements, sept 2011 – 2

13 September. The AFLS conference has another half day to go, but I have to move on. I have foolishly agreed to do both opening and closing plenaries at a week-long CNRS-funded Ecole Thematique on linguistic annotation to be held in (I am told) a chateau in Biarritz: who could resist? So I catch the midday train from Nancy into Paris, traverse the city of light by metro ligne 4 once more, emerging at Montparnasse to find that it is infeasibly hot outside as well as beneath. How well I remember leaving rainy Oxford with a suitcase full of autumnal long-sleeved shirts and a raincoat; what folly not to have anticipated the need for cooler clothing. Well, there is time and opportunity to buy a short-sleeved shirt from the C&A store underneath the Tour Montparnasse, and doing so makes me feel cooler already. I even have time to eat something fairly horrible and cheesy at a Flams’ fast food joint, before catching the TGV to Biarritz. This splendid train is sans arret for 3.5 hours to Bordeaux, at which point the climate and the scenery suddenly change, and the train starts pottering through the wonderful Camargue for another hour or two, before finally stopping at Biarritz, where I descend and join the other stragglers hopefully waiting at the taxi rank.

Good lord, I realise, as I descend from the taxi, they were not kidding about the chateau. The Domaine de Francon was built for a wealthy English milord in the 1880s, and is (it says here) “une vaste demeure de style anglo-normand ou Old English, au luxueux décor intérieur très éclectique et très raffiné”. Though now owned by a holiday rental outfit, it still retains most of the original decor (huge wooden staircase, stained glass windows, painted ceilings, Second Empire Japanese-style decorations and marble fittings passim) as well as a very fine tree-filled park, and verandahs from which you can see huge Atlantic breakers banging away on the beach half a mile away. See further my photos. It has the atmosphere and the style of an Agatha Christie country house, as well as an excellent cuisine, though by the time I arrive, there’s not much left to enjoy except the cheese board. Never mind: I retire up the vast creaky staircase, past the glorious stained glass window, into my huge room and sleep soundly.

Next morning, slightly disappointed to find that some French version of Jeeves has not discreetly laid out a morning suit for me, I slip into one anyway, grab a hasty breakfast, and proceed to the salle de conferences, to receive another little conference pack (wooden USB key, another badge, another bag) and listen dutifully to the organisers explain the modalities of this event, which has been in the planning for almost exactly a year. Coffee and buns on the verandah, and then it’s time to wheel out my talk on Linguistic Annotation for a (hopefully slightly improved) second performance. There is some discussion, mostly positive, and afterwards Anne Dister kindly takes me to one side and corrects numerous typos in the slides, which I perversely read also as a positive response. The rest of the day is devoted to brief presentations about oral, written, and video data, which nicely reinforced my comment that the Tower of Babel is still very much with us, and also demonstrated, to my relief, that I had at least chosen relevant topics (variation in transcription practice, markup of named entities, co-referencing, etc.). I was also struck by the fact that formats tied to particular software (Praat, Childes, Elan, Transcriber etc.) are seen as de facto standards.

The next day we meet in a custom-built bunker under the lawn where there is a swish conference suite (though the wifi is a bit rubbish). I enjoy listening to two grandes dames d’annotation linguistique — Anne Lacheret and Marie-Paule Pery-Woodley — give their take on the complexities of the field, but it becomes clear that these formal presentations, though each in their way impressive, are not the real business of this event. Instead it is in the extensive and occasionally heated discussions taking place over coffee and during meals and, yes, during early evening excursions to the beach that the intellectual action is taking place. As it should be, at an Ecole, of course. The twenty or so participants are an interesting mix of people at different stages in their careers, including recent PhDs, established scholars, and a scattering of engineers, and they also come from several different parts of the applied linguistics forest despite a shared interest in annotation (for example, video capture, sign language, psycholinguistics…). I struggle to keep up and feel obscurely honoured to be involved, though I am also somewhat preoccupied by more mundane matters like getting the next few talks ready, or getting my washing done.

In retrospect, taking a day out in the middle of the Ecole to go back to Paris was somewhat eccentric, if not barking mad, but I did it all the same. On Monday evening, I found my way to Biarritz airport in time to catch the evening Easyjet flight to Roissy: a fairly nasty experience, not improved by the excessively long walk from the arrival gate at Charles de Gaulle airport all the way to its railway station. I spent the night at a slightly shabby hotel near the Gare du Nord, and on Tuesday morning, I met Celine who guided me by RER out to the wilds of Villetaneuse, and the Universite de Paris XIII. Here, as guest of the UMR “Lexiques, Dictionnaires, Informatique”, I gave a completely different talk about the TEI (the third one of this trip) to a vast and heterogeneous crowd of students. Then I got a lift on the back of Fabrice’s scooter back to the station, took the first train back to Paris, and across to Montparnasse in good time to make the 1540 train back to Biarritz, arriving at the chateau somewhat out of breath, but just in time for dinner.

All of this meant that I missed entirely the chance to learn more from Antoine Widlocher and Yves Mathet about the standoff XML annotation tools developed at Caen, which is a shame, judging from the copy of their presentation on the Ecole’s wiki. But I did get to see Alexei Lavrentev demonstrating txm in action (and successfully broke it yet again). I continued to worry about the two talks I still hadn’t written, but also went for a good walk around the grounds of the Domaine. I started to feel at home. Next day, we were assured, the weather would improve enough to warrant a picnic, which would be provided at the end of the morning, as indeed it was.

And so to Thursday, which began with a good talk from Sylvain Loiseau, almost but not quite saying “use the TEI”, followed by my final wrap-up talk on why standards might be considered to be a good thing, even in this fragmented field. As requested, I managed to finish this early enough for everyone to go off to the beach with their packed lunches. I however was condemned to sit around finishing my next (and final) talk, before setting off back to the airport. This time I took the Air France flight, which was better in that AF still hand out free drinkies, and don’t quite treat the passengers like refractory parcels, but worse in that, on arrival, the Orly bus, which allegedly comes every 15 mins, did not, on this occasion, come for an hour, which led to much overcrowding and grumpiness on the short hop across to Denfert-Rochereau. I then sleep-walked my way via metro to Exelmans, checked into another overpriced hotel, and collapsed.

Deplacements, sept 2011 – 1

It’s September: the rentrée — when everyone goes back to work, including me, even though I am retreated — looms. I had a very nice summer, thank you, sitting in my garden for possibly the last time, enjoying being visited by daughters and grandchildren, and not having to go anywhere, except occasionally down the shops to buy a newspaper or some more mushrooms. And in particular not having to get up in the morning. It couldn’t last. The leaves on my conker tree are brown and as I write the conkers are already starting to fall. Time for some quick blog entries rapidly surveying the first series of displacements I undertook this month: five talks at four different venues in 12 days.

September 8th. I am sitting once more on the Eurostar, waiting for the closure of the doors, when my phone rings. The expensive hotel in the Marais where I am booked in for the night wants to cancel my reservation. I am delighted to discover that lack of usage has not impaired my ability to remonstrate in French, so I remonstrate. How very dare they. Then I spend the journey feverishly correcting the first of the four different talks I have to give on this trip (the others will come in due course) and not thinking about hotels any more. Which seems to work, since when I arrive at the Gare du Nord and switch on my French phone, the hotel meekly apologises for deranging me and assures me that everything has been regulated. Good. For 200 euro a night I expect a bit of servility. Luxury would be nice, but is not (it becomes apparent when I actually get to La Turenne du Marais) on the menu today. Never mind: all I need now is some dinner, which (I am pleased to report) was available from a quite acceptable Italian trattoria just across the street. I wolf down a good tricolour salad, some creamy pasta, some wine, and (a mistake this last) an allegedly Sicilian cannoli before retiring to my absurdly over-crowded hotel room.

Les Archives Nationales

Next morning, after a typical hotel breakfast, it’s a five minute stroll to Les Archives Nationales, which are apparently on strike despite the sunshine. The lovely Anais Wion appears and kindly carries my suitcase up hundreds of stairs, along dozens of corridors, and through all sorts of winding twisty passages which eventually lead me, puffing along behind, to the attic in which Denise Ogilvie hangs out. It boasts splendid views over the rooftops, and its own bathroom. The three of us then spend an agreeable morning debating how to mark up postcards in TEI and gossiping about the other ANGD trainers who haven’t apparently made much progress either. I realise that I still haven’t done the work I should have done on revising the workplan for the “structurer” session; in particular I haven’t written to tell the nice lady from INIST that I think there is too much Dublin Core in her proposed scenario.

No time for lunch. A not very quick taxi ride across town through the traffic to rue Lhomond, where I pop into the office to say hello, pop out again to get a sandwich, pop in again for a meeting with the neighbours at ITEM to discuss the possibility of a TEI-compliant version of their legendary Optima program. I form the doubtless incorrect impression that someone has told P-M de Biasi that he won’t get another round of funding for this software unless it can export TEI. He impresses on me how different it is to everything else I may have seen of the kind, and I assure him it would be utterly delightful to collaborate on such a project, and Daniel Ferrer, who organized the meeting, blinks a bit in a mildly donnish way. Anyway, PM dashes off to his next meeting, and I dash off to mine, which is actually just a brief teabreak with Florence outside her office in the place de l’université. We exchange gossip, and drink what passes for tea in France. And then it’s back to the sweaty metro ligne 4 and off to the Gare de l’Est, for the 1809 train to Nancy, on which I continue to work on that wretched talk. Except for the bit where the train achieves its maximum vitesse, appearing determined to shake itself (and me) into pieces in the process.

Bertrand meets me off the train, walks me to my hotel, buys me a beer, and makes sure we get to the right restaurant for dinner not too late. He’s a pal. The hotel, the Akena, is a French take on the American motel — lacking in frills, but actually offering slightly more space than the one in the Marais, with functioning free wifi, and at a third of the price. Dinner is upstairs at the Grand Cafe Foy in the Place Stan’, of course. I go for the obligatory quiche lorraine, followed by entrecote and chips, and a truly delicious tarte à la rhubarbe. Service is very slow, but the food is worth waiting for: so maybe not nickel (can this word be applied to food?) but definitely correct. I slowly warm up to academic discourse in French again: it’s been a while.

Next morning, infeasibly early, I accompany my fellow plenary speaker, a distinguished lady called Catherine Kerbrat-Orecchioni, down to Nancy’s remarkably ugly Palais des Congres (it looks like a carpark) in good time to observe my hosts of last night doing conference organiser panicky things for a while, get some coffee, get my very own conference bag, check my email, and give my intervention on linguistic annotation (Is it A Good Thing?) yet another final polish. And so eventually to this talk’s first outing, as opening keynote for the annual conference of the Association of French Language Studies, which seems to go down better than it has any right to. I can spend the rest of the day graciously accepting compliments on the clarity of my French and re-acquainting myself with applied corpus linguistics, a field which (had I forgotten?) seems characterised as much by some really very nice people as by any methodology. After lunch, I listen with interest to various presentations about dozens of small oral corpora, and wonder if the time is really ripe for the TEI to be adopted as an interchange format for them. The closing plenary is from a grand homme de corpus linguistique, Bernard Combettes, who extemporizes with no visible visual aids or notes in the impressive way that only French grands hommes can. At the end of the day, we form a disputatious crocodile and trek across town down to the University Library’s vast salle d’honneur in the cours Léopold, which I distinctly recognise from the time back in 2008 when the TEI annual meeting was hosted at ATILF. As on that occasion, there are speeches (largely content-free), lots of champagne, and lots of amuse-gueules. I realise after a while that I am very tired, as well as pleasantly drunk, and stagger back to le motel, missing out the son et lumiere.

Next day, being Monday, I feel I can justifiably skip some sessions in order to deal with some of the minor crises which have popped up in my email, notably a panic about a deliverable for the Agora project. I have a long and interesting discussion with Carol Etienne about the CLAPI project, which seems to have accumulated and (more usefully) makes available a lot of information about the various oral transcription formats currently in use. And I listen to another plenary, this time in English, from a Canadian called Tom Cobb, largely advertising his wonderful teaching software: he apparently runs a company called Linguasoft which (in BNC days) I had occasion to speak to rather sternly.

After lunch, I attend a session largely devoted to people talking about something called “la langue des jeunes”, which is seemingly the politically correct term for the language of the banlieues, aka banlieusard, many of the presenters taking a curiously anthropological approach to the task of collecting and reporting linguistic evidence. Afterwards, Jenny Cheshire’s plenary reported some results from her ongoing English (well, Hackney) based project on innovation in “Multicultural London English”: an impressive talk for its methodological rigour and thought-provoking analysis: see further this earlier report, though that one doesn’t talk about pronominal “man”.

The evening was rounded off by a celebratory dinner in the downstairs dining room at Nancy’s gloriously art nouveau restaurant Flo, during which I spent a fair amount of time chatting with the ladies who run AFLS, partly because they seemed to like it, partly in the hope they might invite me back some time.

« Exploiter les données structurées en XML »

Here’s a nice way of spending a day in the heart of the Marais. Get together a bunch of people who do actually use the TEI (or some other kind of structured XML markup) to do cool things and ask them to talk for a maximum of 10 minutes each about the software they use and what they do with it. I claim no credit at all for this idea: the event was masterminded by Anais Wion, Fabrice Melka, and Denise Ogilvie, who just coincidentally have to prepare a workshop on the verb “exploiter” in Aussois later this year. Whatever its origins, this turned out to be a really worthwhile day, and not just because of the venue (the alabaster hall of the Archives Nationales) or the lunch (yum, Lebanese buffet).

A proper account of the proceedings has been promised for a couple of weeks hence, so this note is just the consequence of me jotting down some immediate impressions on the train home. There is already a useful page of links to stuff mentioned at the workshop at http://www.delicious.com/workshopexploiter, which I should probably update with this report.

I kicked off by explaining why the TEI really didn’t ought to have much to do with software production, except for its own nefarious purposes. I conceded, however, that those purposes led ineluctably to the production of Sebastian’s Excellent Stylesheets and hence to a generic software tool of some importance in the community. Marjorie Burghart then talked about the XML database eXist, showing it in action on her sermones.net site, and also her paleographic exercise site; the main problem with it, for her, was that its installation and maintenance on a local server require a little more technical expertise (for example, fine tuning a java environment, recovering tomcat when it falls over, etc.) than is available for the typical humanities department. This need for infrastructural computing support turned out to be a major theme of the day. Next up was Lauranne Bertrand from the CESR team at Tours, who showed how they currently use XTF to display various versions of their richly encoded texts. Maud Ingaro then introduced us to a new XML database from the University of Konstanz called BaseX which seems worth a second look, if only for its very sparkly visualisation features, though its main claim to fame is probably its ability to handle REALLY BIG (multi-gigabyte) databases, which (if true) should give several current pontificators pause for thought. Jorge Fins, also from CESR, then talked about Philologic, which provides traditional text searching (full text indexing, concordancing, etc.) capabilities, running on a distinct (and distinctly dumbed down) copy of the Bibliotheques Virtuelles des Humanistes exported to Chicago.

After a brief pause for coffee, Alexei Lavrentev, standing in for Serge Heiden (reportedly recently immobilised by a close encounter with a crampon), showed us the current state of txm, the open source text analysis system developed by the textometrie project at Lyon. Severine Gedzelman, also from Lyon, then described Hypermachiavel, an application for handling multiple aligned corpora (or, to be more exact, one specific set of multiple aligned corpora). I found the difference in software design between these two projects interesting: txm was developed very consciously as a generic text processing framework, incorporating and rationalising features from many other systems; whereas Hypermachiavel was developed (almost from zero) very much to meet the specific needs of a particular research project, but without any particular generic intention.

Does the world need another generic tool for doing textual annotation in XML? Certainly many linguists and computer scientists seem to think so. Cue Antoine Widlocher from the University of Caen, and Glozz, a new platform for distributed linguistic annotation of text segments, overlapping or otherwise, relationships, graphs, etc. etc. Very nice visualisations, as per other Java applications; nice features such as annotation histories; no evidence that any researchers from the humanities had been involved in its design or application up to now. Florence Clavaud, from the Ecole Nationale des Chartes, then spoke very briefly (no, really) about Pleade and her plans to enhance this mainstream EAD-muncher to include TEI capabilities. Pleade is one of the tools of choice in the French archival community, so enhancing it to handle TEI as well as it currently manages EAD and sets of digital images would be very cool. Also from ENC, Vincent Jolivet and Frederick Glorieux showed us diple, a nice simple package written in PHP to transform complex TEI markup into static web pages, with a complementary suite of stylesheets to render them, and something called xrem, a very glamorous tool for visualisation and construction of RELAX NG schemas. Fred likes to work directly in RELAX NG rather than via ODD, but the results almost justify such heresy. Nicole Dufournaud, aided and abetted by Denise Ogilvie, told the (possibly) instructive history of how Millefeuille (a nice customized TEI editing and indexing application based on work Nicole pioneered back in the nineties) is now in a suspended state of animation. Following one unsuccessful attempt at reanimation, it appears that another one is proposed as part of a European project. Finally before lunch, Maud Ingaro showed us some CamStudio videos about dinah: this “philological platform for the construction of multi-structured documents” is currently being developed at Lyon in a project studying the manuscripts of Jean-Toussaint Desanti, and seems worth a second look, even though it’s a long way from being stable yet.

After the afore-mentioned very nice lunch, there was a wide-ranging free-form discussion, from which I took away chiefly the following points (as aforesaid, there will be a more complete and correct report later):

  • a general feeling that IT infrastructural support was lacking: in particular, people wanted
    • some kind of sandpit environment in which they could experiment with different tools
    • some easily accessible web-publishing service for e.g. doctoral students to showcase their work
  • a general feeling that development and implementation of XML-based projects was hard work requiring input from specialists, consequently a need for more training
  • a desire to share experience of these and other tools; the existence of TEI-FR, and the TEI Tools SIG were agreed to be appropriate channels.

Some pointed requests were made for the TGE to do more to provide some of these services, which proposal I agreed to go away and investigate.

Tweaking the Agora Stylesheets – 1

The AGORA project (this one, not to be confused with this other one nor even this other one again) has defined a very simple TEI XML schema for scholarly publishing. In this series of blog entries, I report my attempts to process a set of documents which conform to that schema into PDF and other formats, using the TEI stylesheet library. My environment is a laptop running Ubuntu 10.04, on which I have installed the 5.1.4 release of the tei-xsl package and most of the texlive Ubuntu packages (versions dating from July 2009 according to dpkg).

On the train to London this morning, I wrote a Makefile which validates each file and, if valid, then processes it using the teitolatex and xelatex commands (a minimal sketch of the idea follows the list below). This produced something not entirely discouraging, with the following obvious things to fix:

  • some of my files had numbered headings and others didn’t. By default the stylesheets added numbers willy-nilly. I need to switch this behaviour off.
  • some of my files used <byline> in the header to indicate the affiliation for an author, like this:
    <byline><docAuthor>Fred Flintstone </docAuthor>
        Euphoria State University, Kansas</byline>.

    By default, the stylesheets clearly have no idea what to do with the text fragment following the <docAuthor>, and therefore spit it out on a page of its own.
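
For the record, here is a minimal sketch of the kind of Makefile I mean. The validator (jing) and the schema filename (agora.rng) are my own assumptions for the sake of illustration, not necessarily what the project actually uses:

# Hypothetical sketch: validate each source file against an assumed Agora
# RELAX NG schema, then convert the valid ones to PDF via LaTeX.
# (Recipe lines must be tab-indented, as usual with Make.)
SOURCES := $(wildcard *.xml)
PDFS    := $(SOURCES:.xml=.pdf)

all: $(PDFS)

%.pdf: %.xml
	jing agora.rng $<    # validation step (assumed schema file)
	teitolatex $<        # TEI XML to LaTeX, default profile for now
	xelatex $*.tex       # LaTeX to PDF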

I learned at the excellent MUTEC workshop last week that the recommended way of modifying these stylesheets is to set up a new “profile”, so I duly visited the directory /usr/share/xml/tei/stylesheet/profiles and created a new folder there called /usr/share/xml/tei/stylesheet/profiles/agora (somewhat to my surprise this did not require root access). I then copied the existing default specifications for each of the target transformations I thought I might use in my Agora work into this folder. Like this:

$cd /usr/share/xml/tei/stylesheet/profiles
$mkdir agora
$cp -r default/latex agora
$cp -r default/docx agora
$cp -r default/oo agora

The directory names (latex, docx, etc.) are not particularly well publicized: I worked out by inspection that “oo” must be the one invoked by the command “teitoodt”… presumably at some point it will be renamed Liboff vel sim.

Anyway, this setup should mean that if I now do e.g.

$teitolatex --profile=agora foo.xml

I should get the same result as I would if I left out the --profile … and so indeed I do. Good. Time to start messing about.

I take a peek into the contents of my agora/latex folder. It contains just one file, called to.xsl — which presumably controls the conversion from tei to latex. One day maybe some clever person will add a file called from.xsl which does the opposite. Or not.

The file is rather dull: all it does is remind me that the file is copyright TEI Consortium 2008, and that the library it invokes is “distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY”. Fair enough. It also loads the stylesheet at
../../../latex2/tei.xsl but all it does to modify that is set some mysterious parameter called reencode to false. So clearly I am at liberty to add further modifications in this file… or will be once I have changed permissions on the file.
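
Schematically, then, the profile’s to.xsl amounts to little more than the following (the exact header, and the choice of xsl:import versus xsl:include, are whatever the distributed file actually uses; this is just my reconstruction of its shape):

<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="2.0">
  <!-- pull in the generic TEI-to-LaTeX library -->
  <xsl:import href="../../../latex2/tei.xsl"/>
  <!-- the one local override in the default profile -->
  <xsl:param name="reencode">false</xsl:param>
  <!-- Agora-specific parameters and templates can be added here -->
</xsl:stylesheet>
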
../../../latex2 (i.e. /usr/share/xml/tei/stylesheet/latex2, sibling of the profiles directory) is the directory with the real biz. It contains files named for most TEI modules, as well as promising looking files like tei-param.xsl. A little sniffing around, and I have discovered the XSL template for processing the TEI <head> element inside the file core.xsl, which contains the following magic:

<xsl:choose>
<xsl:when test="ancestor::tei:floatingText">Star</xsl:when>
<xsl:when test="parent::tei:div/@rend='nonumber'">Star</xsl:when>
<xsl:when test="ancestor::tei:back and $numberBackHeadings='false'">Star</xsl:when>
<xsl:when test="$numberHeadings='false' and      ancestor::tei:body">Star</xsl:when>
<xsl:when test="ancestor::tei:front and $numberFrontHeadings='false'">Star</xsl:when>
</xsl:choose>

That looks to me suspiciously like there should be a parameter called numberHeadings which I should set to false in order to suppress those pesky generated section numbers. (Of course, I’d have found that out immediately if I’d bothered to read the documentation, but …)

Back in my file profiles/agora/latex/to.xsl, I add the following line

<xsl:param name="numberHeadings">false</xsl:param>

and then regenerate the PDF, using the tweaked stylesheet in my agora profile:

teitolatex --profile=agora aaberge_2007.xml
xelatex aaberge_2007.tex

Bingo! No numbering. This could maybe be easier than it looks…

My second problem is trickier. The challenge and the delight of the TEI is precisely its open-endedness, and so it often happens that something which looks plausible in TEI has no obvious translation in some other markup system, such as LaTeX. In my case, how *should* the <byline> element be processed? A grep through the LaTeX directory shows me that at present there is no template at all for it, so my hands are comparatively untied. My first thought is just to add a template like the following to my file:

<xsl:template match="tei:byline/text()">
\author{<xsl:value-of select="."/>}
</xsl:template>

on the assumption that the bit of text inside the <byline> element might as well be treated in the same way as an author name as anything else. But LaTeX is not so liberal: when it finds that I have generated

\title{The Semantic Web in a philosophical perspective}\author{Terje Aaberge}
\author{,
Sogndal, Norway}

it simply ignores the first \author. This suggests that I cannot solve this without learning more about LaTeX than I really want to.
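
For reference, what I presumably want LaTeX to see is a single \author command with the affiliation folded into it, something like:

\title{The Semantic Web in a philosophical perspective}
\author{Terje Aaberge \\ Sogndal, Norway}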

Maybe I can modify the existing template for <docAuthor> to deal with this special case. In the file header.xsl there is a template like this

<xsl:template match="tei:docAuthor">
<xsl:if test="not(preceding-sibling::tei:docAuthor)">
<xsl:text>\author{</xsl:text>
</xsl:if>
<xsl:apply-templates/>
<xsl:choose>
<xsl:when test="count(following-sibling::tei:docAuthor)=1"> and </xsl:when>
<xsl:when test="following-sibling::tei:docAuthor">, </xsl:when>
</xsl:choose>
<xsl:if test="not(following-sibling::tei:docAuthor)">
<xsl:text>}</xsl:text>
</xsl:if>
</xsl:template>

It’s a horrible kludge, but if I insert the following before the final <xsl:if> element, it should make sure I output any following sibling text fragment before outputting the }

<xsl:if test="parent::tei:byline and (following-sibling::text())">
<xsl:value-of select="following-sibling::text()"/>
</xsl:if>

I therefore copy the whole of the <xsl:template> for docAuthor into my to.xsl file, add the above clause, and blow me down, it (nearly) works. I had, of course, forgotten to suppress a second appearance of those pesky text fragments caused by the default processing for <byline>. One more template:

<xsl:template match="tei:byline/text()"/>

fixes that.

Of course, the more I look at this, the less I like it. A much better solution would be to tag the affiliation data as such in the XML source, using an element such as <affiliation> perhaps, and then process it correctly into whatever LaTeX provides for the treatment of such things. But that would, as aforesaid, require some research into what LaTeX can do, as well as changing the Agora schema.
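
Just to record the idea before I forget it: the source might then look something like the following, with a correspondingly trivial template mapping the new element onto LaTeX. Note that neither <affiliation> in this position nor the \affiliation command is part of the current Agora setup or of standard LaTeX; both are purely hypothetical here.

<byline>
  <docAuthor>Fred Flintstone</docAuthor>
  <affiliation>Euphoria State University, Kansas</affiliation>
</byline>

<!-- hypothetical: assumes a LaTeX class or preamble providing \affiliation -->
<xsl:template match="tei:affiliation">
  \affiliation{<xsl:value-of select="normalize-space(.)"/>}
</xsl:template>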

Not a bad way to pass the train journey to Paris, especially when surrounded by kids returning home after the half term hols.

Quel avenir pour l’édition génétique sans "digital forensics"?

Ce texte représente une intervention au séminaire général de l’ITEM qui a eu lieu à Paris le 31 janvier 2011. Remerciements à ma collègue Nadine Dardenne qui l’a relu pour en corriger les fautes d’orthographe et de syntaxe répandues dans la version originelle; je revendique cependant toute faute intellectuelle résiduelle.

Je souhaiterais vous proposer une brève présentation d’un champ d’études émergent qui se nomme “digital forensics”. Ce terme reprend un ensemble de techniques et théories propres aux procédures juridiques, mais probablement également d’une importance incontournable pour l’archivage et l’étude des objets nativement numériques considérés du point de vue patrimonial. Le besoin de mettre en évidence, d’une manière crédible et certaine, les traces de mots enregistrés sur disque dur ou floppy, même supprimées, et d’associer ces traces avec un écrivain, est un enjeu qui afflige l’éditeur critique autant que l’agent de police, ou les services secrets. À chaque fois on a besoin d’une connaissance des affordances des systèmes de stockage numérique, de ce qu’ils rendent possible, et de ce qu’ils cachent. À chaque fois, il est question de balancer des probabilités, de proposer une vérité vraisemblable basée sur des évidences. On pourrait rester aveugle devant ces possibilités, bien sûr. On pourrait dire que l’histoire d’un texte est réduite à l’histoire de ses incarnations multiples, sur ces feuilles de papier que nous aimons si bien. On pourrait renoncer à l’investigation de la manière par laquelle ces incarnations ont été réalisées. Mais dans ce cas il faudrait également renoncer à la majorité du discours artistique actuel, qui est né numérique, vit et évolue dans le numérique, et meurt dans les archives numérisées de M Google. Car les objets d’étude des humanités et sciences sociales sont de plus en plus conçus et stockés sous forme numérique; il est donc indispensable de revoir et de transformer l’outillage avec lequel on espère les archiver et les analyser. L’ordinateur de l’auteur, ses disques, son téléphone portable, ses espaces virtuels sur le réseau internet, remplacent ses cahiers, ses brouillons, et ses manuscrits. Il faut ré-équiper le chercheur avec une compréhension des principes d’enregistrement numérique, pour compléter sa compréhension des principes de l’écriture analogique. Le choix est simple: ou bien il faut redéfinir la diplomatique pour le numérique, ou bien il faut renoncer à l’étude de la genèse textuelle des oeuvres modernes.

Comment constituer cette redéfinition? Je propose un réajustement à deux niveaux: intellectuel, et substantif.

Au niveau intellectuel d’abord, il faut affecter une bonne compréhension de l’informatique aux disciplines des SHS. En dépit de deux décennies (au moins) de “humanities computing”, à présent relabellisé comme “digital humanities”, il reste une étonnante ignorance autour de l’ordinateur et de ses capacités à faire (ou à ne pas faire). En partie, c’est une des conséquences de l’émergence de l’informatique grand public, comme phénomène de marché de masse. Des impératifs commerciaux restreignent l’usage de l’ordinateur à des plateformes spécifiques, et transforment ce moteur universel en un jouet uni-fonctionnel. Ce n’est guère surprenant alors d’entendre les gens affirmer que cette technologie réductive pervertit l’intelligence humaine en la transformant en une disposition de bits. Ou, à l’extrême opposé, d’y voir l’éternel attrait du divin se manifestant cette fois dans la tendance à vouloir attribuer une intelligence consciente aux effets d’échelle (par exemple, le crowd sourcing, les réseaux neuronaux, le data mining…). Peut-être y en a-t-il parmi nous qui ont besoin de recalibrer le cadre de leur esprit pour supporter l’ère de l’information, tout comme nos ancêtres ont dû s’ajuster à l’ère de la vapeur… mais un tel ajustement consisterait en une extension de nos perceptions, en aucun cas en une transformation. Dans la langue française, un ordinateur a pour objectif de mettre de l’ordre dans les choses; le mot “ordinateur” porte même des nuances religieuses en rappelant par exemple l’ordination des prêtres. Dans les langues anglo-saxonnes par contre, un “computer” n’est qu’une machine pour calculer. Mais les objets auxquels l’ordinateur apporte un ordre ne sont pas que les chiffres: il est la machine par excellence pour organiser n’importe quelle espèce de signe, pour le ré-encodage des systèmes sémiotiques de toute sorte. Voilà pourquoi j’ai toujours insisté pour que l’informatique soit considérée comme une branche des sciences humaines, plutôt que de l’ingénierie ou des mathématiques.

Au niveau matériel, je propose un élargissement des connaissances attendues de ceux qui veulent faire des études philologiques. On attend de tels gens une compréhension assez intime des technologies typographiques ou paléographiques. Maintenant, il est urgent d’élargir ces compétences vers le numérique.

Je termine avec quelques mots sur quelques éléments de ce qu’il faut faire apprendre aux futurs généticiens. Quand j’écris un document sur mon ordinateur, le texte apparaît et disparaît sur l’écran, sous le contrôle d’un logiciel avec lequel j’interagis à travers mon clavier. Les traces propres à mon texte sont de deux sortes: lettres, et ce que l’on pourrait nommer “meta-lettres”: c’est-à-dire des codes qui déterminent la façon d’afficher ou de traiter les lettres. (Un autre terme possible serait “markup” ou “balisage”). Ma conscience de ces meta-lettres est variable: quelques-unes (la ponctuation par exemple) me semblent être un composant de ce système sémiotique que l’on appelle la langue naturelle; d’autres (les retours de chariot, les indications de rature, etc) me semblent moins visibles, et j’attends que la machine s’en occupe seule. De la même façon, les codes insérés par le logiciel de traitement de texte pour générer des effets spéciaux tels que les changements de police ou de couleur appartiennent, de mon point de vue, à un niveau sémiotique tout à fait différent. Cependant, mon texte est composé de signes appartenant à ces trois niveaux. Le texte numérisé que j’ai ainsi composé commence son existence physique comme des changements d’état dans la partie dynamique de la mémoire de mon ordinateur; très rapidement ces changements sont transférés et enregistrés dans un format plus permanent quelque part sur mon disque dur, ou dans une autre mémoire. D’habitude ceci s’effectue automatiquement par l’infrastructure informatique, l’OS: à noter que c’est fait sans aucune intervention de ma part. Même au moment où je me décide consciemment d’enregistrer l’état courant de mon texte, bien que je pense savoir où je le mets (dans un fichier nommé, sur un médium spécifique), la manière dont sont organisés à cet emplacement les composants de mon texte — par exemple, les adresses des secteurs concernés, leurs tailles, la disposition des caractères et autres signes dans ces secteurs — est entièrement hors de mon contrôle et de ma connaissance.

Quand j’écris un document sur papier, le texte apparaît, mais ne disparaît que rarement. Je dois utiliser un ensemble assez complexe de “meta-markup” pour indiquer que tel ou tel signe n’existe plus dans mon texte, qu’il a été remplacé par un autre etc. Le système sémiotique auquel appartient ce markup sera entièrement le mien (exception faite des signes de correction imposés par une maison d’édition). Plus significativement, chacun de mes bouts d’écriture a sa propre existence physique, qu’il m’est impossible d’ignorer, surtout si j’ai un petit bureau, ou un bureau déjà bien rempli… Par conséquent, il me faut trouver rapidement des stratégies de stockage (ou de recyclage), qui vont déterminer les possibilités de récupérer à l’avenir mes procédures d’écriture. Ces stratégies seront déterminées, bien naturellement, par ce qui me paraît utile, ou ce qui semble approprié dans le contexte institutionnel dans lequel mon écriture prend place. Elles représentent des jugements de valeur considérés justes dans ces contextes, et c’est pour cela qu’on dit que l’histoire est toujours écrite par les gagnants, et que les archives de n’importe quelle société ont tendance à ne contenir que ce qui est valorisé par cette société. Avec l’arrivée des médias numériques, pourtant, les affordances de nos systèmes de stockage se sont transformées d’une manière fondamentale. En dépit des efforts des artistes modernistes, on ne peut lire un bout de papier que d’une seule manière. Mais l’organisation des fragments d’écriture sur un médium numérique de stockage est indépendante de son écriture; elle peut être lue de plusieurs façons. Les séquences de bits constitutives de ce document peuvent être lues (comme je le suppose assez naïvement) à travers le système de gestion des fichiers sur mon laptop. Mais ce dernier n’est qu’une espèce d’index, comprenant un ensemble de pointeurs sur des segments de stockage éparpillés sur mon disque dur. Ou bien, dans le cas où on récupère mon texte à travers un logiciel plus complexe comme un blog sur le réseau, les traces de mon texte sont hébergées dans une base de données en Californie sur une machine que j’ignore totalement. Mais il demeure possible de récupérer ces mêmes séquences de bits en adressant n’importe quel système de stockage d’une autre manière, tout à fait différemment du système d’accès prévu, que cela soit le système de fichiers sur mon laptop ou le blog, qui (je croyais) représenterait la seule structuration correcte de mon texte. Au contraire. Pour le texte numérique, la structuration est contingente, protéenne.

Ces morceaux écrits, comme je l’ai déjà souligné, pouvaient ne contenir que des matériaux raturés, ou des signes qui ne servent qu’à indiquer la manière dont d’autres signes devraient ou pourraient être affichés ou intégrés dans un texte visible. D’où des problèmes pour l’archiviste, et un défi supplémentaire pour la critique textuelle. En acceptant une boîte de papiers comme dépôt, l’archiviste peut raisonnablement supposer que les parties savent exactement ce qu’elles sont en train d’offrir. Mais, quand l’archiviste accepte en dépôt un disque dur, peut-on envisager que les déposants sachent quelles traces d’activités sur l’internet ou quels fichiers supprimés restent encore à découvrir à l’intérieur, au-delà des matériaux proposés et visibles? Un récent rapport américain du Council on Library and Information Resources s’est interrogé sur ce problème, justement perçu comme un vrai défi pour l’éthique professionnelle, qui nécessite une mise à jour des standards de contrats de dépôt. Mais je demande aux critiques textuels ici présents — si vous pouviez accéder à l’historique de navigation sur internet de, disons, Joyce ou Flaubert, hésiteriez-vous à y aller, par crainte de la violation de la loi sur la vie privée? Peut-être moins chimériquement, si vous pouviez récupérer chaque étape de l’écriture d’une oeuvre de l’importance des Satanic Verses de Rushdie (ce qui sera en effet le cas) — chaque rature, chaque ajout, chaque déplacement de mot — de quels outils auriez-vous besoin pour gérer une telle richesse? Les outils et les méthodes élaborés jusqu’à présent sont tous à la mesure de ce que nous pouvons comprendre: c’est l’abondance de ces informations dans le monde numérique qui nécessite de repenser ces outils et ces méthodes.

I end by stressing once more that the digital text is a construction, not only in the sense that it is composed of multiple fragmentary sequences of bits, but also in the sense that those sequences carry information at several levels. The words alone are not enough: digital documents inevitably contain markup, much of which is (in the term of the English philosopher J. L. Austin, taken up notably by Allen Renear) performative: it determines the nature of the text. Hence the importance, for the digital textual critic, of understanding markup and the technologies associated with it. But you probably expected me to tell you that…

Does genetic criticism have a future without digital forensics?

This is the text of a presentation I gave at the ITEM’s general symposium on the future of genetic editing, held in Paris on 31 January 2011. I started writing it in French, switched to English for speed, translated it all into French (with the invaluable assistance of my colleague Nadine Dardenne), and then re-Englished it for this version.

I’d like to introduce you to an emerging field called “digital forensics”. This term covers a set of techniques and theories originating in the domain of criminal justice, but also of major importance for the archiving and study of born-digital objects considered from a cultural heritage perspective. The need to plausibly identify traces of words recorded on hard or floppy disk, and to reliably associate them with a specific writer, even after their deletion, is a goal which torments the textual critic as much as the police officer or secret service agent. In both cases, a knowledge of the affordances of digital storage systems is needed, to know what they make possible and what they conceal. In both cases, there is a need to balance probabilities when seeking to establish plausible evidence-based conclusions.

Ignoring these possibilities is also an option, of course. We could consider the history of a text to be no more than the history of its various embodiments on those sheets of paper we like so well. We could abandon any attempt to investigate the processes by which those embodiments have been achieved. But in that case, we have to give up on the majority of current artistic discourse, which is born digital, lives and evolves digital, and dies in the digital archives of Mr Google. The objects studied in the human and social sciences are increasingly conceived and stored only in digital form; that is why it is essential to rethink and transform the toolkit we use to archive and analyse them. The author's computer and its disks, their portable telephone, and the virtual spaces they use on the Internet are taking over from their notebooks, their drafts and their manuscripts. We must re-equip the researcher with an understanding of the principles of digital storage to complement an understanding of analog writing. The choice is simple: either redefine diplomatic studies to include the digital world, or abandon any attempt to study the textual genesis of modern works.

What are the components of this redefinition? I propose a readjustment at two levels: the intellectual, and the material. At the intellectual level first, we need to re-appropriate a proper understanding of information studies within the humanities disciplines. Despite more than two decades of “humanities computing”, now rebranded as “digital humanities”, there is still an astonishing amount of ignorance about what the computer can and cannot do. This is partly a result of the emergence of computing as a mass-market phenomenon. Commercial imperatives restrict usage of the infinitely plastic computer to certain platforms, transforming a universal engine into a mono-functional toy. Unsurprisingly, therefore, we still hear people assert that this reductive technology perverts human intelligence by reducing it to transient patterns of bits. Or, at the other extreme, we still see evidence of the eternal desire for the divine, now appearing as a tendency to attribute conscious intelligence to effects of scale (for example crowdsourcing, neural nets, data mining…). Maybe some of us need to adjust our mental framework to deal with the information age, just as our ancestors adjusted theirs to deal with the steam age, but such an adjustment is a matter of expanding our perceptions, not transforming them. In the French language, a computer is something which puts things in order: the word ordinateur even has religious overtones, suggesting “ordination” and consecration. In the English and German languages, it is just a machine that “computes”.
But the things that a computer puts in order are not just numbers: it is above all a machine for organizing any kind of sign, for re-encoding semiotic systems of all kinds. This is why I have always maintained that computer science is more a branch of the humanities than of engineering or mathematics. At the material level, I propose an extension of the knowledge expected from those undertaking philological study. Such people are already expected to acquire a detailed understanding of typographic or paleographic technologies; there is an urgent need to expand those skills to embrace the digital medium.

I conclude with a brief discussion of a few components of the understanding that future genetic editors will need to acquire. When I write a text on my laptop, the text appears and disappears on the screen under the control of some piece of software with which I am interacting via a keyboard. The traces which constitute my text are of two kinds: letters, and what we may call meta-letters, codes which determine how the text should be displayed or processed in some way. (Another word we might use is markup.) I may or may not be aware of all of these. Some, the punctuation for example, are almost a part of the semiotic system I call “natural language”, so I am very aware of them; others, the carriage returns, deletion characters and so on, seem less salient: I expect the machine to deal with them. In the same way, the codes my word processor inserts to produce special effects such as changes of font or colour seem to belong to some other semiotic level entirely. But signs at all three of these levels are what constitute my text.

The digital text I create starts its physical existence as detectable changes of state in the dynamic part of my computer's memory, but is very rapidly transferred to a more permanent form, somewhere on my hard disk, or on some other store. Usually this will be done automatically by the software environment: critically, it will happen without any knowledge or intervention on my part. Even when I do deliberately request that the state of my text be stored away in its current form, although I may think I know where I am putting it (in a file with such-and-such a name, on a specified physical medium), the way in which the components of my text are organized at that location, the order and number of blocks of characters and other signs represented, is entirely beyond my control or knowledge.

When I write a text on a piece of paper, signs appear, but rarely disappear. I have to deploy quite a complex range of meta-markup to indicate that some sign is no longer significant or has been superseded by another, but the semiotic system to which that meta-markup belongs is entirely my own (unless forced on me by a publisher in the shape of proof-reading marks, of course). More significantly, each of my scraps of writing has a physical existence which forces itself on my attention, especially if my desk is small, or my office already crowded. Consequently, I will rapidly adopt recycling or storage strategies, which effectively determine the future re-traceability of my writing processes. Those strategies are naturally determined by what is useful or perceived as appropriate by myself or by the institutional context in which my writing takes place. They represent value judgments deemed appropriate within that context, and that is why (as they say) history is written by the victors, and why the archives of every society represent and maintain what that society values.
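As a concrete aside (my own example, not part of the talk as given): a word-processor file makes this mixture of letters and meta-letters visible as soon as you look inside it. An OpenDocument (.odt) file, for instance, is a zip archive whose main text lives in content.xml, where the writer's words are interleaved with the markup elements that govern their appearance. A minimal Python sketch, assuming a hypothetical file draft.odt, might separate the two kinds of sign like this:

```python
# Minimal sketch: separate the visible "letters" from the "meta-letters"
# (markup) inside a word-processor file. 'draft.odt' is a hypothetical
# OpenDocument file; an .odt is a zip archive whose text is in content.xml.
import zipfile
import xml.etree.ElementTree as ET

with zipfile.ZipFile("draft.odt") as odt:
    root = ET.fromstring(odt.read("content.xml"))

letters, meta = [], []
for elem in root.iter():
    meta.append(elem.tag)                  # element names: signs about signs
    if elem.text and elem.text.strip():
        letters.append(elem.text.strip())  # what the writer thinks of as "the text"

print(f"{len(letters)} visible text fragments, {len(meta)} markup elements")
```

Even so crude a count gives a rough sense of how much of what is actually stored consists of signs about signs rather than prose.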
With the advent of the digital medium, however, the affordances of our storage systems change fundamentally. Despite the best efforts of modernist artists, you can only read a written scrap of paper in one way. But the organization of written fragments on a digital storage medium is independent of the way they were written, and so they can be read in many ways. The blocks of storage constituting this text may be read, as I naively think they should be, via the file system on my laptop, which contains a number of pointers indicating more or less contiguous segments of storage scattered across my hard disk. They might instead be recovered via a more complex piece of software, such as a networked blog, which stores my text as records on some database system in California. But it is also possible to recover the same written fragments by addressing those storage systems in an entirely different way, by-passing the intermediate access systems (the file system, the blog) which represent the “organization” of my text. In the digital text, organization is contingent and protean.
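To make that “entirely different way” of reading concrete, here is a deliberately naive sketch, again my own invention rather than anything from the talk: it scans a raw disk image sequentially and pulls out runs of readable characters, paying no attention whatever to the file system that claims to organize them. The image name disk.img is hypothetical, and real forensic tools are far more careful (about character encodings, chunk boundaries and much else):

```python
# Naive "carving" sketch: read a raw disk image and collect runs of
# printable characters in storage order, ignoring the file system entirely.
# 'disk.img' is a hypothetical raw image of the storage medium.
import re

pattern = re.compile(rb"[ -~]{20,}")       # runs of 20+ printable ASCII bytes
fragments = []

with open("disk.img", "rb") as img:
    while chunk := img.read(1 << 20):      # one megabyte at a time
        fragments.extend(m.decode("ascii") for m in pattern.findall(chunk))

# Whatever turns up appears in the order the medium stores it, potentially
# including text from files the file system no longer lists.
print(len(fragments), "recovered fragments")
```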
Those written fragments, as noted above, may actually contain nothing but material that has been deleted, or signs that serve only to indicate how other signs should be, or might be, displayed or integrated into a visible text. The first case poses problems for the archivist, as well as a challenge for the textual critic. When accepting a box of papers for deposit, the archivist can reasonably assume that both parties know exactly what is being handed over. But when the archivist accepts a hard disk for deposit, is it equally likely that the depositor will know what traces of internet activity or deleted files may remain to be recovered from it, in addition to the intended and apparent materials? A recent American report from the Council on Library and Information Resources agonizes considerably over this problem, which it rightly perceives as a challenge to the maintenance of professional ethics, necessitating a reappraisal of such deposit agreements. But I ask the textual critics here present: if you could have access to (say) Joyce's or Flaubert's web browsing history, would you hesitate to examine it on the grounds of a breach of confidence? Less fancifully, if you could (as you will soon be able to) recover every stage of the writing of a great work such as Rushdie's Satanic Verses, every deletion, every insertion, the movement of every word, what tools would you need to make sense of that richness? The tools and methods elaborated so far reflect only what we already know how to handle; it is the very abundance of information now available to the textual critic that necessitates a rethinking of those tools and methods.

I close by underlining again that the digitized text is a construction, not only in the sense that it is composed of fragmentary byte sequences, but also in the sense that those byte sequences contain information at many levels. The words alone are not enough: digital documents inevitably contain markup, much of which is (in a term Allen Renear borrows from the English philosopher J. L. Austin) performative: it determines what the text is. Hence the importance of a proper understanding of markup, and of markup technologies, to the digital textual critic. But you probably expected me to say that.
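And since you expected it anyway, here is a toy illustration of what “performative” means in practice, using a fragment of my own invention: the same stored characters yield two different texts depending on how the markup is read. The TEI elements del and add really do record deletions and additions; the little rendering function is merely a sketch:

```python
# Toy example of performative markup: one stored sequence, two texts.
# The TEI elements <del> and <add> mark a deletion and its replacement;
# which text "is" the text depends on how the markup is interpreted.
import xml.etree.ElementTree as ET

fragment = "<p>The <del>final</del><add>first</add> draft of my text.</p>"
root = ET.fromstring(fragment)

def render(elem, keep):
    """Flatten the fragment, keeping only the chosen revision layer."""
    out = elem.text or ""
    for child in elem:
        if child.tag == keep:
            out += child.text or ""
        out += child.tail or ""
    return out

print(render(root, "del"))  # earlier state: "The final draft of my text."
print(render(root, "add"))  # revised state: "The first draft of my text."
```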