Tools

Digital tools and applications offer those engaged in the Digital Humanities methods such as data capture, data structuring, and data analysis for exploring the humanities disciplines in new and powerful ways. Use these digital tools to showcase your scholarship in creative, dynamic, and interactive ways.

Lists/Directories of Tools

arts-humanities.net Tools

A catalogue of software tools used at different stages of the research lifecycle in the arts and humanities. The tools were selected from those used in research projects included in the arts-humanities.net database.

Hybrid Pedagogy’s Concordance of Digital Tools

A list of useful online tools compiled by Hybrid Pedagogy, a Digital Journal of Teaching and Technology. The Concordance lists several online tools that may be useful for extending conversation and activities outside the boundaries of the traditional online or face-to-face classroom.

Project Bamboo

Bamboo DiRT is a tool, service, and collection registry of digital research tools for scholarly use. Developed by Project Bamboo, Bamboo DiRT makes it easy for digital humanists and others conducting digital research to find and compare resources ranging from content management systems to music OCR, statistical analysis packages to mindmapping software.

Select Online Tools/Applications

Corpus of Contemporary American English (COCA)

The Corpus of Contemporary American English (COCA) is the largest freely available corpus of English, and the only large and balanced corpus of American English. The corpus was created by Mark Davies of Brigham Young University, and it is used by tens of thousands of users every month (linguists, teachers, translators, and other researchers).

The corpus contains more than 450 million words of text, equally divided among spoken, fiction, popular magazines, newspapers, and academic texts. It includes 20 million words for each year from 1990 to 2012 and is updated regularly. Because of its design, it is perhaps the only corpus of English that is suitable for looking at current, ongoing changes in the language.

Corpus of Historical American English (COHA)

The Corpus of Historical American English (COHA) is the largest structured corpus of historical English. The corpus was created by Mark Davies of Brigham Young University, with generous funding from the U.S. National Endowment for the Humanities.

COHA allows you to quickly and easily search more than 400 million words of text of American English from 1810 to 2009. You can see how words, phrases and grammatical constructions have increased or decreased in frequency, how words have changed meaning over time, and how stylistic changes have taken place in the language.

GeoNames

GeoNames is a global geographical database that may be used to identify, tag, and disambiguate references to locations. The database is available for download free of charge under a Creative Commons Attribution license. It contains over 10 million geographical names for more than 8 million unique features; 2.8 million of these features are populated places, and 5.5 million of the names are alternate names. Every feature is categorized into one of nine feature classes and further subcategorized into one of 645 feature codes.
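For those who download the database, the main table in the GeoNames dumps is plain tab-delimited text. A minimal Python sketch of reading it follows; the field layout is taken from the readme that ships with the dumps, and the Boston record below is an illustrative sample, not an authoritative excerpt from the database.

```python
import csv
from io import StringIO

# Field layout of the main "geoname" table, per the readme
# distributed with the GeoNames dump files.
FIELDS = [
    "geonameid", "name", "asciiname", "alternatenames",
    "latitude", "longitude", "feature_class", "feature_code",
    "country_code", "cc2", "admin1", "admin2", "admin3", "admin4",
    "population", "elevation", "dem", "timezone", "modification_date",
]

def parse_geonames(tsv_text):
    """Parse tab-delimited GeoNames records into dictionaries."""
    reader = csv.reader(StringIO(tsv_text), delimiter="\t",
                        quoting=csv.QUOTE_NONE)
    return [dict(zip(FIELDS, row)) for row in reader]

# An illustrative record for Boston, Massachusetts (values invented
# for the sketch; empty strings stand for unused fields).
sample = ("4930956\tBoston\tBoston\tBeantown\t42.35843\t-71.05977\t"
          "P\tPPL\tUS\t\tMA\t025\t\t\t617594\t\t38\t"
          "America/New_York\t2012-01-01")
record = parse_geonames(sample)[0]
print(record["name"], record["feature_class"], record["feature_code"])
```

The feature class "P" and feature code "PPL" mark the record as a populated place, which is how the nine classes and 645 codes mentioned above appear in the data.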

Google Books Ngram Viewer

The Google Ngram Viewer is a phrase-usage graphing tool that charts the yearly count of selected n-grams (contiguous sequences of words), as found in over 8 million of the 20 million books digitized by Google. Words and phrases are matched by case-sensitive spelling and are plotted on the graph if they appear in 40 or more books. The Ngram Viewer allows users to investigate the usage of words and phrases across time.

The Ngram tool was first released in mid-December 2010.
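The counting that underlies the viewer can be illustrated with a toy Python sketch. This is a simplified, in-memory version: the real viewer tallies matches per publication year across the scanned corpus, while this sketch just counts case-sensitive word n-grams in a single string.

```python
from collections import Counter

def ngram_counts(text, n):
    """Count case-sensitive word n-grams in a text: a toy version of
    the per-year counting that the Ngram Viewer plots."""
    words = text.split()
    grams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    return Counter(grams)

corpus = "the Whale and the whale and the Whale"
counts = ngram_counts(corpus, 2)
# Case-sensitive matching: "the Whale" and "the whale" are distinct bigrams.
print(counts["the Whale"], counts["the whale"])  # prints "2 1"
```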

Small Demons

Small Demons collects, catalogs and connects the details within books, allowing users to trace references to music, plays, characters, movies, objects and more. Small Demons uses metadata from within the texts of the stories to facilitate discovery not just of other books, but movies, music and other types of pop culture woven into the books.

Text2Mindmap

Launched in 2008, Text2MindMap is a web-based mind-mapping tool that turns a tab-indented text outline into a mind map. Mind maps can be saved, downloaded as PDF or PNG images, and shared.

Timetoast

Timetoast allows users to create interactive timelines, which they can share anywhere on the web.

Wordle

Wordle is a toy for generating “word clouds” from text that you provide. The clouds give greater prominence to words that appear more frequently in the source text.

Select Open-Source Tools

Drupal

Drupal is an open source content management platform powering millions of websites and applications. It’s built, used, and supported by an active and diverse community of people around the world.

Omeka

Omeka is a free, flexible, and open source web-publishing platform for the display of library, museum, archives, and scholarly collections and exhibitions. Its “five-minute setup” makes launching an online exhibition as easy as launching a blog. Omeka is a project of the Roy Rosenzweig Center for History and New Media, George Mason University.

SIMILE Widgets

Free, open-source data visualization web widgets, including:

  • Timeline – visualize temporal information on an interactive draggable timeline
  • Timeplot – plot time series and overlay temporal events over them
  • Runway – display images in a Coverflow-like visualization

SIMILE Widgets is an open-source spin-off from the SIMILE Project at MIT.

TEXTUS

TEXTUS is an open source platform developed by the Open Knowledge Foundation for working with collections of text. It enables students, researchers and teachers to share and collaborate around texts using a simple and intuitive interface.

Textus enables users to:

  • Collaboratively annotate texts and view the annotations of others
  • Reliably cite electronic versions of texts
  • Create bibliographies with stable URLs to online versions of those texts.

TAPoR (Text Analysis Portal for Research)

TAPoR is a gateway to tools for sophisticated analysis and retrieval, along with representative texts for experimentation. TAPoR allows users to:

  • Manage electronic texts
  • Experiment with online text tools
  • Learn about digital textuality

TAPoR is a research infrastructure project funded by the Canada Foundation for Innovation (CFI, http://www.innovation.ca/en).

Visual Understanding Environment (VUE)

The Visual Understanding Environment (VUE) is a concept- and content-mapping application developed to support teaching, learning, and research, and for anyone who needs to organize, contextualize, and access digital information. Using a simple set of tools and a basic visual grammar of nodes and links, faculty and students can map relationships between concepts, ideas, and digital content. The open-source project is based at Tufts University.

Standards

Dublin Core

Dublin Core is a basic metadata standard supported by international standards bodies. Dublin Core metadata can be used for multiple purposes, from simple resource description for discovery, to combining vocabularies from different metadata standards, to providing interoperability for metadata vocabularies in the Linked Data cloud and Semantic Web applications.
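As a sketch of what simple Dublin Core looks like in practice, the following Python snippet builds a small XML record using the fifteen-element Dublin Core namespace. The `<record>` wrapper element is an arbitrary choice for this illustration; real applications embed the dc: elements in whatever container format they use (OAI-PMH, RDF, etc.).

```python
import xml.etree.ElementTree as ET

# The fifteen-element Dublin Core vocabulary lives in this namespace.
DC = "http://purl.org/dc/elements/1.1/"
ET.register_namespace("dc", DC)

def dc_record(**elements):
    """Build a minimal XML record from simple Dublin Core elements.
    The <record> wrapper is a hypothetical container for the sketch."""
    root = ET.Element("record")
    for name, value in elements.items():
        child = ET.SubElement(root, f"{{{DC}}}{name}")
        child.text = value
    return root

rec = dc_record(title="Moby-Dick", creator="Melville, Herman", date="1851")
xml = ET.tostring(rec, encoding="unicode")
print(xml)
```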

EAD (Encoded Archival Description)

The EAD Document Type Definition (DTD) is a standard for encoding archival finding aids using Extensible Markup Language (XML). The standard is maintained in the Network Development and MARC Standards Office of the Library of Congress (LC) in partnership with the Society of American Archivists.

METS: Metadata Encoding and Transmission Standard

The METS schema is a standard for encoding descriptive, administrative, and structural metadata regarding objects within a digital library, expressed using the XML schema language of the World Wide Web Consortium. The standard is maintained in the Network Development and MARC Standards Office of the Library of Congress, and is being developed as an initiative of the Digital Library Federation.

MODS (Metadata Object Description Schema)

Metadata Object Description Schema (MODS) is a schema for a bibliographic element set that may be used for a variety of purposes, and particularly for library applications. The standard is maintained by the Network Development and MARC Standards Office of the Library of Congress with input from users.

TEI: Text Encoding Initiative

The Text Encoding Initiative (TEI) is a consortium which collectively develops and maintains a standard for the representation of texts in digital form. Its chief deliverable is a set of Guidelines which specify encoding methods for machine-readable texts, chiefly in the humanities, social sciences and linguistics. Since 1994, the TEI Guidelines have been widely used by libraries, museums, publishers, and individual scholars to present texts for online research, teaching, and preservation. In addition to the Guidelines themselves, the Consortium provides a variety of supporting resources, including resources and training events for learning TEI, information on projects using the TEI, TEI-related publications, and software developed for or adapted to the TEI.
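To give a flavor of TEI encoding, here is a minimal TEI document and a Python sketch of pulling data out of it. The header fields shown (fileDesc with titleStmt, publicationStmt, and sourceDesc) are a small but typical TEI header; the sample text itself is invented for the illustration.

```python
import xml.etree.ElementTree as ET

TEI_NS = {"tei": "http://www.tei-c.org/ns/1.0"}

# A minimal TEI document: a header describing the file, and a body
# holding the encoded text. <persName> marks up a personal name.
doc = """<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <teiHeader>
    <fileDesc>
      <titleStmt><title>A Sample Text</title></titleStmt>
      <publicationStmt><p>Unpublished sketch.</p></publicationStmt>
      <sourceDesc><p>Born digital.</p></sourceDesc>
    </fileDesc>
  </teiHeader>
  <text>
    <body>
      <p>Call me <persName>Ishmael</persName>.</p>
    </body>
  </text>
</TEI>"""

root = ET.fromstring(doc)
title = root.find(".//tei:titleStmt/tei:title", TEI_NS).text
names = [el.text for el in root.iter("{http://www.tei-c.org/ns/1.0}persName")]
print(title, names)
```

Because the markup is explicit, a script can extract every tagged name, date, or place from a TEI corpus, which is what makes TEI-encoded texts useful for the kinds of analysis tools listed above.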

This Project was designed by Ellen Dubinsky
for INST 523—Information Access and the Internet
Fall 2012 - Dr. Thanh Nguyen
Bridgewater State University

Thanks to Dr. Nguyen and my INST 523 classmates for their feedback and support.

© All Rights Reserved.