Digital Textual Analysis (Voyant, etc.)

Andrew Longman’s Session

Opening: Explanation of Basic Interest

Question to all: Why are we all here? Why do we want to know about these tools?

Some reasons:

A little experience with the tools, but clarity needed on how to interpret the results.

Questions about mechanics: how to load texts into Voyant--large and small corpora

Initial test text for session was Moby Dick. Used to demonstrate basic features of the interface.
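Voyant's Cirrus word cloud is, at its core, a stopword-filtered word-frequency count over the corpus. A minimal sketch of that computation (the sample text and tiny stopword list are stand-ins, not Voyant's actual implementation):

```python
from collections import Counter
import re

def word_frequencies(text, stopwords=frozenset({"the", "and", "of", "a"})):
    """Tokenize, lowercase, and count words, skipping stopwords --
    roughly what Voyant does before drawing its Cirrus word cloud."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(t for t in tokens if t not in stopwords)

# Opening words of Moby Dick as a toy corpus
sample = "Call me Ishmael. Some years ago - never mind how long precisely -"
print(word_frequencies(sample).most_common(3))
```

Running this on the full Gutenberg text of Moby Dick instead of the snippet gives the familiar "whale / ahab / sea" top terms that Voyant's demo shows.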

Previous experiences:

Using smaller sections of a work for comparison

Distant reading assignments (and what exactly is distant reading?)

Interesting questions about copyright and how that affects potential for use.
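One way to make "distant reading" concrete: instead of reading passages closely, compute aggregate statistics over sections and compare them. A minimal sketch using type-token ratio (lexical variety) per chapter; the section texts here are placeholders:

```python
def type_token_ratio(text):
    """Lexical variety: distinct words / total words."""
    tokens = text.lower().split()
    return len(set(tokens)) / len(tokens) if tokens else 0.0

# Placeholder sections -- in practice, split a real text into chapters
sections = {
    "ch1": "call me ishmael some years ago never mind how long",
    "ch2": "the whale the whale the whale rose from the deep",
}
for name, text in sections.items():
    print(name, round(type_token_ratio(text), 2))
```

Comparing such numbers across dozens of chapters (or dozens of novels) is the kind of question distant reading asks that close reading cannot.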

Other tools mentioned:

Sporkforge - a collection of free web-based text-manipulation utilities

Many Eyes - IBM's (now discontinued) web-based data visualization service

TAPoR - a portal for discovering text analysis and retrieval tools

Google N-Gram Viewer - charts word and phrase frequencies over time in the Google Books corpus

Juxta - a collation tool for comparing multiple versions of a text

(session proposals included a more detailed look at Juxta)

TAPAS: (TEI publishing)

TAPAS is the TEI Archiving, Publishing, and Access Service: a place for scholars and other creators of TEI data to publish their materials in different forms and ensure they remain accessible over time. TAPAS is also for anyone interested in reading and exploring TEI data, and in communicating with others who share that interest.


SEASR is designed to enable digital humanities developers to rapidly design, build, and share software applications that support research and collaboration.

AntConc: a free, cross-platform desktop toolkit for concordancing and text analysis--keyword-in-context (KWIC) views, word and keyword frequency lists, collocates, and n-grams.
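A concordance lists every occurrence of a keyword with a window of surrounding context (KWIC). A rough sketch of that core operation--not AntConc's actual code, just the idea:

```python
def kwic(text, keyword, window=3):
    """Return keyword-in-context lines: `window` words of context
    on each side of every occurrence of `keyword`."""
    tokens = text.split()
    lines = []
    for i, tok in enumerate(tokens):
        if tok.lower().strip(".,;:!?") == keyword.lower():
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            lines.append(" ".join(p for p in (left, f"[{tokens[i]}]", right) if p))
    return lines

for line in kwic("The whale surfaced and the whale dove again", "whale"):
    print(line)
```

Tools like AntConc add sorting by left/right context, frequency counts, and collocate statistics on top of this basic keyword-in-context listing.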

What’s so interesting about these digital tools for students?

Pedagogy - how do you approach this in class?


Sorry for the listy nature of this--it was the best I could do!