“Textkritik som analysemetod” (textual criticism as a method of analysis) was the title of this year’s conference of the Nordic Network for Edition Philology (NNE), held in beautiful Gothenburg in the first week of October. The NNE biennially gathers editors, edition philologists, book historians and literary scholars from all the Nordic countries to discuss developments in recent research and editorial method, and to present scholarly editions.

This year’s conference was the 14th in a series of successful gatherings in the North and marked the 20th anniversary of the NNE – with 60 participants (and an amazing 50/50 gender distribution!) and 12 talks in three languages (Swedish, Norwegian & Danish) on various subjects more or less closely tied to this year’s topic. The talks will be published in the NNE book series and made available digitally (XML-TEI P5 encoded!) afterwards.

What became obvious in the discussions and debates, not only at the NNE meeting but in edition philology generally, is that the scholarly editions we editors prepare in a very sophisticated manner and with a special eye for detail are not really suited for computer-aided corpus analysis like topic modeling, text mining, stylistics, etc. The issue is not the under-complexity of (digital) scholarly editions, but rather their complexity and depth of encoding and enrichment. In a corpus of 100,000 books, a single textual error is statistically insignificant – no need to make the effort of emendation or to provide an explanation and possible rectification. – I think it has to ‘sink in’ that quantitative (digital) literary or text studies in particular ask very different questions from those commonly anticipated by edition philologists (that is: those of traditional literary studies). And since editions are not an end in themselves but user-oriented, what do we have to change in order to meet the needs (also) of those literary scholars who are interested in quantitative, corpus-based analyses & distant reading?
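
To make the ‘statistically insignificant’ point concrete, here is a back-of-the-envelope sketch in Python – the corpus size is from above, while the average book length is purely my assumption:

```python
# Back-of-the-envelope sketch: how much does one corrupted token weigh
# in a corpus of 100,000 books? (80,000 tokens per book is an assumed,
# hypothetical average.)
corpus_size_books = 100_000
avg_tokens_per_book = 80_000
total_tokens = corpus_size_books * avg_tokens_per_book

single_error_share = 1 / total_tokens
print(f"One corrupted token among {total_tokens:,} tokens "
      f"= {single_error_share:.2e} of the corpus")
# -> 1.25e-10: far below anything frequency-based methods will register
```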

Since summer is almost over and the new semester in Norway has already started, let’s make a plan for some blog posts I have been drafting over the last couple of weeks. Most likely, I will blog more than what I put on this “to-do list” here – at least one post per month, in addition to the semi-regular blogging on the Greflinger archive-edition website.

  • An overview of DH & pedagogy books, articles, and (web) resources – focussing on modern language, literature and cultural studies (Sep 2015)
  • A micro-study on gender distribution in journals (Aug 2015)
  • Textual criticism & (computational) textual analysis – a report of the Nordic Network for Edition Philology-conference in Gothenburg, Sweden (Oct 2015)
  • Die Rückkehr des Werkes (Return of the Work) – report of the symposium at Herrenhausen Palace, Hannover, Germany (Oct 2015)
  • Editing early modern (German) prints in and for a digital environment: conceptual draft and teaser for an upcoming article (Nov 2015)
  • Querying the archive: DH, European enlightenment newspapers, and the Nordischer Mercurius (Dec 2015)
03. June 2015 · Software Carpentry – Or: What You Can Learn About Learning & Teaching DH · Categories: Conference Report, Digital Humanities

A few days ago I had the pleasure of taking part in my first Software Carpentry (SWC) hands-on workshop at the Realfagsbibliotek at the University of Oslo on June 2–3, 2015. It was a last-minute decision – a colleague from computer science suggested the event to me since I wanted to learn some Python (and SWC’s workshop was offering that, among other things…).

Basically, the course was meant to provide an introduction to and hands-on work with the Unix shell (i.e. using the command line to interact with your computer without a graphical interface), Git/GitHub for version control, and Python, including the iPython notebook and TextWrangler.
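
To give a flavour of the level the Python part aimed at, here is my own toy example in the spirit of a novice lesson – not the actual course material – showing the kind of small, complete task such a workshop builds up to step by step:

```python
# A toy exercise in the spirit of a Software Carpentry novice lesson
# (my own sketch, not the actual workshop material): count the most
# frequent words in a plain-text file.
from collections import Counter

def count_words(path):
    """Return a Counter of lower-cased words in a text file."""
    with open(path, encoding="utf-8") as f:
        return Counter(f.read().lower().split())

if __name__ == "__main__":
    counts = count_words("example.txt")  # hypothetical input file
    for word, n in counts.most_common(10):
        print(f"{word}\t{n}")
```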

I’ve participated in my fair share of technology and programming workshops over the past years and I have to say: I was awestruck! I was the only humanities person there (well: the only one who ‘outed’ themselves), without much prior knowledge (of either the Unix shell, GitHub or Python). And I didn’t really know what to expect – but it was fantastic. The instructors were wonderful, the ‘mode of teaching’ (especially using sticky notes for troubleshooting and for keeping track of where people were and where they got stuck) worked refreshingly well with quite a heterogeneous group of learners, and the overall atmosphere was friendly, helpful, encouraging and explorative.

As I learned, SWC runs an instructor training (they’re always looking for people who want to become teachers) and pays special attention to the pedagogy of teaching ‘scary computer stuff’ and programming skills to researchers from all kinds of disciplinary backgrounds. – Apart from learning some Python and getting comfortable enough to keep teaching myself (which was my personal goal), I also took the workshop to observe and evaluate it from a digital humanities point of view. I asked myself: Would the SWC format be of use in a DH context, and at the University of Oslo? Who would be the intended audience from SWC’s point of view, and who at the Faculty of Humanities would think they could use this workshop? Would their needs and wants be met? (And what would those be?) Would a ‘standard’ SWC course meet those needs, or be too far from what a humanities researcher’s day-to-day work looks like?

After the workshop I talked to one of the teachers, Lex Nederbragt, about SWC, its outreach, the humanities, and UiO. He, too, was much interested in the matter and suggested investigating a little further. I’m not going to provide the results of an extensive web search; however, I will link to some posts I found that specifically make a connection between SWC and digital humanities.

What I found out was:

  • Most of the workshops targeted at DH folk (there were only a few) had been held in the US (as far as I could see), often within a bigger workshop event, a THATCamp, or a HASTAC gathering. Such training events are quite common and well received in DH and thus a good point of entry.
  • The overall experience of the learners was positive, with a few suggestions on how to tailor the SWC workshop programme to meet the specific needs of DHers even better. As a first step, however, those needs have to be spelled out (by the DHers themselves)!
  • SWC itself went out to gather suggestions for workshops specifically targeted at DHers, wanting to know what to expect from humanities folk who want (or ought) to take one of their workshops.
  • What they learned was: you first have to know the DHers’ tech background – their familiarity with the command line, with their computer’s file system, with using a database, with programming, etc. – and then perhaps start with a very basic workshop that teaches “getting used to using your computer”, as suggested, for example, by Fiona Tweedie when asked by SWC.
  • However, this is by no means to suggest that humanities researchers are less computer-savvy than people from the natural and social sciences (who are often not that experienced and fluent in tech and informatics either), but that their exposure to technology is discipline- and data-specific and thus often quite different from that of “the sciences”, who make up the usual participants of an SWC workshop. (Meaning: where they will ‘get lost’ in a workshop setting might be unexpected for the instructors, and some of the questions and issues might be surprising.)
  • It was suggested that it might be useful for SWC to have amongst their teaching staff either humanists or digital humanities people who know the needs, wants, and requirements of (digital) humanities researchers, their ‘data’ and research methods as well as their habitual attitudes towards technology, computer science, and programming.

I particularly liked what I found on Audrey Watters’ Tumblr about SWC and teaching programming and basic computer skills to non-tech and non-natural-science people:

“I focus more on some of these questions surrounding how do we create learning environments for non-programmers to learn programming […] by helping train scholars in new tools (and, as such, in new methodologies); learning to work with technologists; coming to terms with the ways in which storage, processing, interactivity, data, and so on might enhance teaching, research, and their dissemination”

Perhaps SWC’s local UiO instructors and the Digital Humanities Network in Oslo could put their heads together and see if they could come up with some suggestions for a basic, introductory hands-on workshop especially tailored to (digital) humanities researchers!? I would very much appreciate this and would consider taking the instructor training with SWC for some of the technologies commonly used in a DH context: XML and the other X’s (XSLT, XPath, XQuery, the eXist database), HTML, Python, and (My)SQL.
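
As a taste of how two of those technologies meet in practice, here is a minimal sketch – my own illustration, using Python’s lxml library and an invented TEI P5 fragment – of running an XPath query over TEI-encoded XML:

```python
# A minimal, hypothetical example of querying TEI P5 XML with XPath
# from Python, using the lxml library: extract all <persName> elements
# from a tiny invented fragment.
from lxml import etree

TEI_NS = {"tei": "http://www.tei-c.org/ns/1.0"}

fragment = b"""
<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <text><body><p>
    <persName>Georg Greflinger</persName> edited the
    <title>Nordischer Mercurius</title>.
  </p></body></text>
</TEI>
"""

root = etree.fromstring(fragment)
for name in root.xpath("//tei:persName", namespaces=TEI_NS):
    print(name.text)  # -> Georg Greflinger
```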

02. March 2015 · What Do You Do With 375.000 Digitised Norwegian Books? · Categories: Conference Report, Digital Humanities

On 24 and 25 February, the Digital Humanities Forum at the University of Oslo hosted two half-day seminars focussing especially on digital textual studies. The first was a joint seminar with the newly established Digital Humanities Center at Gothenburg University and the Digital Humanities Lab Denmark. Gathered under the topic “litteraturforskning og digitale verktøy” (literary studies and digital tools), Jon Haarberg (University of Oslo) and Jenny Bergenmar, Mats Malm and Sverker Lundin (Gothenburg University) shared their experiences with digitisation, digital editing, electronic literature and textual analysis. Among the presented projects were the digital edition of Petter Dass’ catechism songs, Språkbanken and Litteraturbanken (Swedish), the Women Writers Network, and poeter.se, the largest Swedish online platform and archive for modern poetry and writing. Bergenmar and Malm also presented the new DH center at Gothenburg University and their plans for a future master’s programme in DH. The Swedes started a seminar series on DH in the fall semester of 2014 that will continue in 2015.

The second half-day seminar, on 25 February, was dedicated to textual analysis, especially topic modeling: “Kulturens tekster som big data. Om å analysere tekster digitalt” (Cultural textual heritage as big data. On analysing texts digitally). It started with a presentation by Peter Leonard (Yale University Library & Digital Humanities Lab) titled “Topic Modeling & the Canon. Using curated collections to understand the ‘Great Unread’”, which served as a thorough introduction to topic modeling and closed with some great case studies (e.g. Robots Reading Vogue). After lunch, Jon Arild Olsen from the Norwegian National Library presented the library’s long-term digitisation project, started in 2006, in which its complete holdings will be digitised (imaging, text recognition & text encoding) and made available to the public. This will include ca. 375,000 books (from as early as 1790), 3.2 million newspapers (i.e. single issues), and 42,000 periodicals (amounting to 2 million single volumes). The project will be finished in 2018. Arne Martinus Lindstad (Norwegian National Library) talked about the library’s n-gram project, while Lars Johnsen presented topic modeling with the National Library’s text corpus.
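
For readers new to the technique: topic modeling infers recurring ‘topics’ (weighted word clusters) from word co-occurrence across documents. A minimal sketch with gensim’s LDA implementation – one common tool, not necessarily what the presenters used, run on an invented toy corpus – looks like this:

```python
# A minimal topic-modeling sketch with gensim's LDA implementation
# (one common choice -- not necessarily the presenters' tool), on a
# tiny invented "corpus" of tokenised documents. Real runs use
# thousands of documents and careful preprocessing.
from gensim import corpora, models

texts = [
    ["ship", "harbour", "trade", "merchant"],
    ["poem", "verse", "hymn", "psalm"],
    ["trade", "merchant", "harbour", "cargo"],
    ["hymn", "psalm", "verse", "catechism"],
]

dictionary = corpora.Dictionary(texts)              # word <-> id mapping
bow_corpus = [dictionary.doc2bow(t) for t in texts]  # bag-of-words vectors

lda = models.LdaModel(bow_corpus, num_topics=2, id2word=dictionary,
                      random_state=42, passes=10)
for topic_id, words in lda.print_topics():
    print(topic_id, words)  # e.g. one 'maritime trade' and one 'hymn' topic
```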

After a lively discussion with the audience, this session’s DH Forum host Anne Birgitte Rønning and I proposed a hands-on topic modeling workshop to be held at the University of Oslo in the near future, and the current vice dean for research, Ellen Rees, announced the revival of the interdisciplinary research group “tekstutgivelse” (text editing & publishing), which will serve as a link between the National Library’s digital corpus and the corpus-based research and teaching at the Department of Linguistics and Scandinavian Studies, and which hopes to stimulate digital textual analysis endeavours.

I also did some live-tweeting during the seminars: #DHOslo