|Name||URL||type||online||collaborative||open-source||free||multi-media||integration||active||activity/state||listed in||notes||Self-statement (from website)||developed by|
|0xdb||http://0xdb.org/about||y||y||y||y||y||Harvard||cf. pan.do/ra||Movie database with full text search within movies, and instant previews of search results.|
|AAV - Annotating Academic Video Tool||http://entwinemedia.com/2013/annotations-tool/||educational, extension for Matterhorn||y||y||y||n||(y)||y||Harvard||entwine + SWITCH; based on and for the open source media capture platform Matterhorn, player adapter API|
on GitHub: https://github.com/entwinemedia/annotations/
|Since January 2012, Entwine has collaborated with SWITCH on the Annotating Academic Video (AAV) project with the goal of creating a standardized, open and flexible tool/framework to enable Swiss University faculty, staff and students to annotate video across a mix of platforms including players, video management and learning management systems.||Entwine; funded by SWITCH, the Swiss national research and education network organisation|
|Advene (Annotate Digital Video, Exchange on the NEt)||http://liris.cnrs.fr/advene/||linguistic annotation, HyperVideo||y||y||y||y||?||(y)||last activity (forum): 2012||Harvard; Bamboo||HyperVideo, templates for custom views|
on GitHub: https://github.com/oaubert/advene
|It aims at providing a model and a format to share annotations about digital video documents (movies, courses, conferences...), as well as tools to edit and visualize the hypervideos generated from both the annotations and the audiovisual documents.|
Cross-platform. Advene supports comments and analyses of video documents: definition of time-aligned annotations and their mobilisation into automatically generated or user-written comment views (HTML documents), plus virtual montage, captioning and navigation capabilities. Users can exchange their comments/analyses in the form of Advene packages, independently from the video itself.
|LIRIS laboratory, University Claude Bernard Lyon 1|
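Nearly every tool in this list shares the underlying model that Advene makes explicit: typed annotations anchored to a time interval, which can be queried by timestamp and rendered into generated views. A minimal sketch of that model (the names are illustrative, not Advene's actual API):

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    # Time-aligned annotation: a typed payload anchored to a video interval.
    begin: float   # seconds
    end: float
    type: str
    content: str

def active_at(annotations, t):
    """Return all annotations whose interval covers timestamp t."""
    return [a for a in annotations if a.begin <= t < a.end]

def comment_view(annotations):
    """Render a minimal, Advene-style 'comment view' as HTML."""
    items = "".join(
        f"<li>[{a.begin:.0f}-{a.end:.0f}s] {a.type}: {a.content}</li>"
        for a in sorted(annotations, key=lambda a: a.begin)
    )
    return f"<ul>{items}</ul>"

notes = [
    Annotation(0, 12, "shot", "establishing shot"),
    Annotation(5, 30, "speech", "opening narration"),
]
print([a.type for a in active_at(notes, 8)])  # both intervals cover t=8
```

The point of the separation is the one Advene's self-statement makes: the annotations live independently of the video file, and any number of views can be generated from the same annotation set.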
|AKIRA||http://www.phil.uni-mannheim.de/romanistik/romanistik3/akira3/||film studies||n||n||n||n||?||out of date; single desktop user only||Jost, et al 2013||qualitative social science||approach analogous to a film editor's desk, "score" of an audiovisual "text"||Universität Mannheim, Lehrstuhl Romanistik III|
|AKTive Media||http://www.aktors.org/technologies/aktivemedia/index.html||y||y||y||y||y||n||out of date; last update on SF: 03/2009||cross-media knowledge acquisition; dev RDF, Java, SPARQL||AKTive Media is a user-centric, ontology-based multimedia annotation system. The goal is to automate the process of annotation by means of knowledge sharing and reuse.||Advanced Knowledge Technologies - AKT; funding 2003-2007|
|Annotator's Workbench||http://www.eviada.org/element.cfm?mc=6&ctID=31&eID=1||annotation||y||y||(y)||(y)||n||y||Bamboo||client/server annotation||create a collection from a set of existing video files, segment that collection, create annotations and assign vocabulary terms to the segments, and control access.||part of the Ethnographic Video for Instruction and Analysis (EVIA) Digital Archive Project at Indiana University and the University of Michigan|
|ANVIL||http://www.anvil-software.org/||linguistic||y||y||y||n||y||y||Harvard, Bamboo, LinkedT||originally developed for Gesture research||ANVIL is a free video annotation tool. It offers multi-layered annotation based on a user-defined coding scheme. During coding the user can see color-coded elements on multiple tracks in time-alignment. Some special features are cross-level links, non-temporal objects, timepoint tracks, coding agreement analysis and a project tool for managing whole corpora of annotation files. Originally developed for gesture research in 2000, ANVIL is now being used in many research areas including human-computer interaction, linguistics, ethology, anthropology, psychotherapy, embodied agents, computer animation and oceanography.||Michael Kipp 2000-2012|
|AV Portal||http://www.tib-hannover.de/de/dienstleistungen/kompetenzzentrum-fuer-nicht-textuelle-materialien-knm/av-portal/||Wissenschaftliches Filmportal||y||?||?||?||?||y||still unpublished||TIB Hannover, Leibniz||The future AV portal will optimise access to and the use of scientific films from the fields of engineering and science (e.g. computer animations, and recordings of lectures and conferences).|
The portal integrates new methods for searching, enabled by an automated video analysis with scene, speech, text and image recognition. The search results are connected to new knowledge by linking the data semantically.
|Competence Centre for Non-Textual Materials - German National Library of Science and Technology (TIB Hannover) and Hasso Plattner Institute|
|Catool - Collaborative Annotation Tool||https://github.com/Harvard-ATG/Catool/||annotation||y||y||y||?||alpha||on GitHub, PHP based||academic application that gives faculty and students the ability to collaboratively annotate text, images, audio, and video. Students and faculty highlight points of interest and discussions with text or media annotations. Other users may create additional annotations, or reply to previous annotations, thus creating discussions around common points of interests.||Harvard Academic Technology Group|
|dotSUB - Online Crowdsourced Video Translation||http://dotsub.com/||subtitling, translating||y||y||n||(n)||n||(embed)||y||user tagging; free/commercial||Harvard||browser-based, one-stop, self-contained system for creating and viewing subtitles for videos in multiple languages across all platforms; developed its own player for embedding|
Watch videos with subtitles in any language • upload your videos • create your own subtitles. Free basic version; enterprise solution also available.
|ELAN - EUDICO Linguistic Annotator||http://tla.mpi.nl/tools/tla-tools/elan/||linguistic||n||n||y||y||y||y||Bamboo, LinkedT||create, edit, visualize and search annotations for video and audio data. specifically designed for the analysis of language, sign language, and gesture||Max Planck Institute for Psycholinguistics, Nijmegen|
|EXMARaLDA - Extensible Markup Language for Discourse Annotation||http://www.exmaralda.org||linguistic||n||n||y||y||(n)||y||LinkedTV||Java, MIT license||a system of concepts, data formats and tools for the computer-assisted transcription and annotation of spoken language, and for the creation and analysis of spoken-language corpora|
Main users are students and researchers in discourse or conversation analysis as well as language acquisition studies. A video panel is provided as well, though EXMARaLDA's main use is the annotation of multi-lingual spoken corpora.
|Collaborative Research Centre "Multilingualism" (SFB 538) at the University of Hamburg; Hamburg Centre for Language Corpora (Hamburger Zentrum für Sprachkorpora), since November 2011 in cooperation with the Archive for Spoken German (Archiv für Gesprochenes Deutsch) at the IDS Mannheim|
|HyperRESEARCH||http://www.researchware.com/products/hyperresearch.html||n||(y - asynchronous)||n||n||y||y||Bamboo||qualitative social science, limited free version||Cross-Platform Qualitative Analysis Software||ResearchWare|
|InqScribe||http://www.inqscribe.com/||transcriptions||(n)||n||n||n||n||y||commercial||Bamboo||desktop client, export (srt, scc, …)||transcription and subtitling. You may view and transcribe audio or video side-by-side. You may insert blocks of text, time codes, as well as convert your transcript into a subtitled movie.||InqScribe|
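InqScribe's export formats (srt, scc, ...) are plain-text subtitle layouts. The SubRip (SRT) layout in particular is simple: a running index, a timing line of the form `HH:MM:SS,mmm --> HH:MM:SS,mmm`, the subtitle text, and a blank line. A minimal sketch of such an export, independent of InqScribe's own implementation:

```python
def srt_time(seconds):
    """Format seconds as an SRT timestamp HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def to_srt(segments):
    """Build an SRT document from (start_s, end_s, text) tuples."""
    blocks = []
    for i, (start, end, text) in enumerate(segments, 1):
        blocks.append(f"{i}\n{srt_time(start)} --> {srt_time(end)}\n{text}\n")
    return "\n".join(blocks)

print(to_srt([(0.0, 2.5, "Hello."), (2.5, 5.0, "Welcome to the lecture.")]))
```

Because the format is this lightweight, a timed transcript can move between most of the tools in this list with a few lines of glue code.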
|interact||http://www.mangold-international.com/en/software/interact/||observational studies||n||n||n||n||y||commercial||Jost, et al 2013||qualitative social science||Video Coding, Live Observation and Analysis Software for Observational Studies||Mangold|
|Kaltura||http://corp.kaltura.com/||video platform||y||n||n||n||y||commercial||Our video platform is designed to help you create value with video.||Kaltura|
|Kaltura||http://www.kaltura.org/||video platform||y||y||y||y||Harvard||community project of kaltura.com|
on GitHub: https://github.com/we4tech/acts_as_kaltura
|Full featured open source video platform running on your own servers or cloud.|
An extension for supporting acts_as_kaltura_video and acts_as_kaltura_annotation (which automatically maintains kaltura video and cuepoint)
Kaltura is a SaaS-based video streaming platform; official Kaltura Ruby API: http://corp.kaltura.com/Products/Kaltura-API
Based on the velir kaltura Ruby library, its developers built this gem to simplify the Kaltura video and cuepoint synchronisation process.
Statement 2008: http://osvideo.constantvzw.org/kaltura-10/
|Kat||http://west.uni-koblenz.de/koblenz/fb4/AGStaab/Research/koblenz/fb4/institute/IFI/AGStaab/Research/systeme/kat||annotation||n||y||y||y||last changes 2010-10-22||LinkedTV||part of K-Space;|
on Launchpad: https://launchpad.net/kat
|open source framework for semi-automatic annotation of multimedia content|
Formal model based on the Core Ontology on Multimedia (COMM)
|K-Space Network of Excellence; Uni Koblenz-Landau|
|Klynt||http://www.klynt.net/||interactive storytelling||y||?||(player)||(player)||y||n||y||presentation platform||Editor not free, with pro edition|
Player is on GitHub: https://github.com/Klynt/Klynt-Player
|Visual Storyboard, Mixed Media Editing, WYSIWYG and Timeline Editing with Immediate Preview|
Klynt is an editing & publishing application dedicated to interactive storytellers. It was originally designed for Honkytonk Films' in-house productions as an affordable and easy-to-use solution to explore new narrative formats on the Internet.
|Honkytonk Films, Paris|
|LabelMe||http://labelme.csail.mit.edu/Release3.0/||image labeling tool||y||y||y||y||n||n||image annotation||LinkedTV||developed for use in computer vision; JS- and Perl-based online image labeling|
on GitHub: https://github.com/CSAILVision/LabelMeAnnotationTool
|The goal of LabelMe is to provide an online annotation tool to build image databases for computer vision research. You can contribute to the database by visiting the annotation tool.||MIT, Computer Science and Artificial Intelligence Laboratory|
|LinkedTV||http://www.linkedtv.eu/||broadcast specific||y||?||?||n||y||n||y||commercial?||FP7 initiative, some reports on state-of-the-art etc||Networked Media, Digital TV market|
Our vision of future Television Linked To The Web (LinkedTV) is of a ubiquitously online cloud of Networked Audio-Visual Content decoupled from place, device or source. Accessing audio-visual programming will be “TV” regardless whether it is seen on a TV set, smartphone, tablet or personal computing device, regardless of whether it is coming from a traditional or new media broadcaster, a Web video portal or a user-sourced media platform.
Television content and Web content should be seamlessly connected. This requires systems able to provide networked audio-video information usable in the same way as text-based information is used today in the original Web: interlinked with each other at different granularities, with any other kind of information, searchable, and accessible everywhere and at any time. Ultimately, this means creating hypermedia at the level of the Web.
Television Linked To The Web (LinkedTV) provides a novel practical approach to Future Networked Media. It is based on four phases: annotation, interlinking, search, and usage (including personalization, filtering, etc.).
The result will make Networked Media more useful and valuable, and it will open completely new areas of application for Multimedia information on the Web.
|MediaGlobe – the digital archive||www.projekt-mediaglobe.de||broadcast specific||y||n||n||exclusive||semantic search, entities from DBpedia, video-part annotation, interface, text recognition, audio mining, auto segmentation|
Project ended May 2012
software NOT accessible for re-use/adaptation as open source
|develops solutions that allow media and broadcast archives to optimally digitise their audiovisual material, index it comprehensively, manage it efficiently and make it accessible online|
Preservation of the audiovisual heritage; German contemporary history
|Project within Theseus|
Partner: Transfer Media, Defa Spektrum, Hasso Plattner Institute, FlowWorks
|Mediathread||http://mediathread.ccnmtl.columbia.edu/accounts/login/?next=/||multimedia analysis platform||y||y||y||y||y||y||y||visual platform for web-based content; successor of VITAL |
on GitHub: https://github.com/ccnmtl/mediathread
|Mediathread is an open-source platform for exploration, analysis, and organization of web-based multimedia content. Mediathread connects to a variety of image and video collections (such as YouTube, Flickr, library databases, and course libraries), enabling users to lift items out of these collections and into an analysis environment. In Mediathread, items can then be clipped, annotated, organized, and embedded into essays and other written analysis.||Columbia Center for New Media Teaching and Learning http://ccnmtl.columbia.edu/|
|M-OntoMat 2.0||http://mklab.iti.gr/m-onto2||image annotation||n||n||discontinued||LinkedTV||part of K-Space;|
successor: Video Image Annotation Tool (via-tool)
|Linking Ontologies and Multimedia High-Level Features for Multimedia Analysis, Reasoning and Retrieval; part of the Visual Annotation Framework (VAF) tool||Information Technologies Institute (CERTH-ITI)|
|Mozilla popcorn||https://popcorn.webmaker.org/||y||y||y||y||y||y||JS library||Mozilla initiative, uses popcorn.js, part of Mozilla Webmaker||Mozilla|
|Mozilla Webmaker||https://webmaker.org||interactive storytelling||y||y||y||y||n||y||web literacy||Mozilla initiative: create something amazing on the web||Webmaker – a site where users can learn about web technologies by submitting and mashing up their content and allowing others to mash up their content using web technologies.|
community dedicated to teaching digital skills and web literacy
|Multimedia Annotator||http://engage.wisc.edu/accomplishments/mma/index.html||n||out of date||Harvard, Chan||Univ. of Wisconsin||The University of Wisconsin-Madison has created an instructional application called the Multimedia Annotator (MmA). The Multimedia Annotator is an easy-to-use tool designed by and for language teachers to provide students with various types of assistance as they view a video clip.||Univ. of Wisconsin|
|MyStoryPlayer||http://www.eclap.eu/portal/?q=en-US/node/3748||annotation and relation||y||y||n||n||y||y||standalone version http://www.mystoryplayer.org/||audiovisual annotation tools integrated in ECLAP||Paolo Nesi; ECLAP - European Collected Library of Performing Arts|
|OACVideoAnnotator||https://github.com/umd-mith/OACVideoAnnotator||OAC demonstration||y||y||y||y||y||OAC for streaming video; requires: jQuery, Raphaël.js, MITHgrid; |
on GitHub: https://github.com/umd-mith/OACVideoAnnotator
|Video Annotation facet of MITHGrid for the OAC - Alexander Street Press project. |
The Video Annotator developer library is the result of an experiment run as part of the Open Annotation Collaboration to test the OA data model for use in exchanging annotations of streaming video. A module has been written for Drupal 7.x that incorporates the OAC Annotation Tool as a demonstration of how to incorporate the tool into an application.
|Maryland Institute for Technology in the Humanities|
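In the OA data model, an annotation targets a segment of the video rather than the whole file; for streaming video this is commonly expressed as a W3C Media Fragments temporal URI (`#t=start,end`). A sketch of that addressing scheme using plain dicts — illustrative only, not the OACVideoAnnotator data structures:

```python
import re

def parse_temporal_fragment(uri):
    """Extract (start, end) seconds from a W3C Media Fragments
    temporal URI such as video.mp4#t=10,20 or video.mp4#t=10."""
    m = re.search(r"#t=(?:npt:)?([\d.]+)?(?:,([\d.]+))?$", uri)
    if not m or (m.group(1) is None and m.group(2) is None):
        return None
    start = float(m.group(1)) if m.group(1) else 0.0
    end = float(m.group(2)) if m.group(2) else None
    return (start, end)

def annotation(body_text, video_uri, start, end):
    """Minimal OA-style annotation as a plain dict: a textual body
    attached to a temporal segment of a video target."""
    return {
        "@type": "oa:Annotation",
        "body": {"@type": "cnt:ContentAsText", "chars": body_text},
        "target": f"{video_uri}#t={start},{end}",
    }

a = annotation("A note on this scene", "http://example.org/v.mp4", 10, 20)
print(parse_temporal_fragment(a["target"]))  # (10.0, 20.0)
```

Because the segment is addressed in the URI itself, such annotations can be exchanged between players and platforms without agreeing on a shared internal cue-point format.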
|Open Video Annotation Project||http://www.openvideoannotation.org/||educational||y||y||?||y||Harvard||uses OAC, expands annotator.js (OKFN Project), on top of HTML5 player (video.js)|
on GitHub: https://github.com/okfn/annotator
|Media-rich Video Annotation for the Web, To support teaching, learning and research with web video.||Philip Desenne and Daniel Cebrián Robles|
Center for Hellenic Studies, at Harvard University; Becas Talentia program from the Junta de Andalucia, Spain
|Opencast Matterhorn||http://opencast.org/matterhorn/||lecture capture & video management||y||y||y||y||incl. auto captioning and text analysis|
|Matterhorn is a free, open-source platform to support the management of educational audio and video content. Institutions will use Matterhorn to produce lecture recordings, manage existing video, serve designated distribution channels, and provide user interfaces to engage students with educational videos.|
Services include video encoding, metadata generation, scene detection, preview image generation, trimming and captioning and text analysis.
|Pad.ma||http://pad.ma/||y||y||y||y||y||y||y||Harvard||Short for Public Access Digital Media Archive, it is an online archive of densely text-annotated video material, primarily footage and not finished films. The entire collection is searchable and viewable online, and is free to download for non-commercial use.||0x2620|
|Pan.do/ra||http://pan.do/ra||y||y||y||y||y||y||y||successor of pad.ma||pan.do/ra is a free, open source media archive platform. It allows you to manage large, decentralized collections of video, to collaboratively create metadata and time-based annotations, and to serve your archive as a desktop-class web application.||0x2620|
|Project Pad||http://projectpad.northwestern.edu/ppad2/||educational, multimedia annotation||y||?||y||y||y||y||n||tightly integrated, out of date||Harvard, Bamboo, Chan||The Video and Audio Tools lets you attach comments to time segments of Flash FLV video and MP3 audio streams. The tools can be used by instructors and/or student teams to critique student-produced video and audio or to provide a way for students to analyze scientific, historic or artistic recordings.|
web-based system for media annotation and collaboration for teaching and learning and scholarly applications. Project Pad provides tools for browsing and working with audio, video, and images from digital repositories. The user may organize and annotate excerpts within their own "online notebook."
|Replay||https://www1.ethz.ch/replay/||lecture recordings||y||y||y||y||n||discontinued||successor: Opencast Matterhorn|
|REPLAY is an open source solution developed in java to manage the workflow of audiovisual lecture recordings from production in the classroom to distribution on various channels in an automated manner. In this, it also provides comprehensive functionalities for existing audiovisual archives, repositories or collections.|
REPLAY is a solution not only for academia, but also for institutions and companies producing, hosting, managing and allocating audiovisual content.
|Semex||http://www.hpi.uni-potsdam.de/meinel/knowledge_tech/semex.html||webservice, analysis||(y)||n||n||n||commercial(?)||cloud-based services, automatic semantic annotation||automated processes for semantically analyzing audiovisual content|
automatic scene segmentation, intelligent character recognition, and the ability to recognize genres and faces in videos.
|Hasso Plattner Institute|
|Simple Video Annotation tool||http://videoannotation.codeplex.com/||youtube||y||n||out of date; last code updates 2009||Harvard||This simple tool allows you to add tags and annotations to video, similar to YouTube video annotation. Written in C#, this is a working application, with many extra features still to be added.|
|Transana||http://www.transana.org/||annotation||n||(y - asynchronous)||n||n||y||y||commercial||Bamboo||qualitative social science||transcribe and analyze large collections of video and audio data, qualitative analysis of video, audio, and still image data.||Transana; Chris Fassnacht. later David K. Woods at the Wisconsin Center for Education Research, University of Wisconsin-Madison|
|TranscriberAG||http://transag.sourceforge.net/||speech signals||n||y||y||n||current version 2011-07-04||LinkedTV||successor of transcriber|
on Sourceforge: http://sourceforge.net/projects/transag/files/
|a tool for segmenting, labeling and transcribing speech. developed mainly for linguistic research on speech signals. It supports multiple hierarchical layers of segmentation, named entity annotation, speaker lists, topic lists, and overlapping speakers. Unicode encoding is provided, and the main architecture is in TCL/TK.|
|trAVis - Musikzentriertes Transkriptionsprogramm für audiovisuelle Medienprodukte||http://www.travis-analysis.org||focus on music||y||(y - asynchronous)||n||y||(y)||n||y||last changes 2012? publication Springer 2013||Musikzentriertes Transkriptionsprogramm für audiovisuelle Medienprodukte||Universität Basel, Medienwissenschaften|
|VARS (Video Annotation and Reference System)||http://vars.sourceforge.net/||annotation||(y)||y||y||y||"old"; last update on SF 04/2013||Harvard, Chan||on SourceForge: http://sourceforge.net/projects/vars/||The Video Annotation and Reference System (VARS) is a software interface and database system that provides tools for describing, cataloging, retrieving, and viewing the visual, descriptive, and quantitative data associated with video. Developed by the Monterey Bay Aquarium Research Institute (MBARI) for annotating deep-sea video data, VARS is currently being used to describe over 3000 dives performed by remotely operated vehicles (ROV). VARS has allowed MBARI scientists to produce numerous quantitative and qualitative scientific publications based on video data.||Monterey Bay Aquarium Research Institute (MBARI)|
|VAST: Academic Video Online||http://alexanderstreet.com/products/vast-academic-video-online||commercial video streaming platform with annotations||y||n||n||n||commercial||developed by Alexander Street Press||VAST: Academic Video Online is Alexander Street’s flagship video subscription, delivering key video that touches on the undergraduate curriculum needs of virtually every department. VAST is growing constantly and currently offers well over 20,000 full videos and 9,000 hours of content.|
VAST is designed to bring the highest-quality video content to the broadest range of subject areas.
|Alexander Street Press|
|VAT - Video Annotation Tool||http://www.boemie.org/vat||n||n||out of date||Harvard||FP6 funded; successor is Video Image Annotation Tool|
|VATIC - Video Annotation Tool from Irvine, California||http://web.mit.edu/vondrick/vatic/||crowdsourced annotation||y||y||y||y||y||tracking, large scale annotations||developed for use in computer vision; links to Mechanical Turk|
on GitHub: https://github.com/cvondrick/vatic
|VCode & VData: Video Annotation Tools||http://social.cs.uiuc.edu/projects/vcode.html||n||y||y||n||out of date||Harvard||Mac only|
on GoogleCode: http://code.google.com/p/vcode/
|VCode and VData are a suite of "open source" applications which create a set of effective interfaces supporting the video annotation workflow. The system has three main components: VCode (annotation), VCode Admin Window (configuration) and VData (examination of data, coder agreement and training). The Design of VCode and VData was grounded in existing literature, interviews with experienced coders, and ongoing discussions with researchers in multiple disciplines.||Joshua Hailpern & Joey Hagedorn, University of Illinois at Urbana Champaign|
|Vertov||http://digitalhistory.concordia.ca/vertov/||annotation||y||y||y||n||out of date||Harvard, Bamboo, Chan||plugin for Zotero||Vertov is a free media annotating plugin for Zotero, an innovative, easy-to-use, and infinitely extendable research tool. Both are Firefox extensions. Vertov allows you to cut video and audio files into clips, annotate the clips, and integrate your annotations with other research sources and notes stored in Zotero.||Concordia Digital History Lab, Concordia University, Montreal|
|VidArch||http://www.ils.unc.edu/vidarch/index.php||preservation||n||n||n||discontinued||preserving a video work's context and highlighting its essence|
This project focused on developing a preservation framework for digital video context by applying it to two important digital video collections: the complete series of NASA broadcast educational videos and the complete set of juried ACM SIGCHI videos presented at annual conferences from 1983 to the present.
|School of Information and Library Science, The University of North Carolina at Chapel Hill|
|Viddler||http://www.viddler.com/||professional content branding||y||n||n||y||commercial||Harvard||interactive enriched video content||Allows adding tags, text, links, and video comments on the video timeline •allows others to add timed comments •threaded conversations||Viddler|
|Videana||http://www.uni-marburg.de/fb12/verteilte_systeme/forschung/videana||webservice, analysis||(y)||n||n||(y)||y||commercial(?)||cloud-based services, automatic semantic annotation||semantic/content-based video search. Users can search for particular objects, events, locations, or persons in a video archive or database. For this purpose, the audiovisual content is automatically analyzed in a pre-processing step and all relevant information is saved: what can be seen and what can be heard in a scene. This information serves as a basis for a high-quality search.||University of Marburg|
|Video Image Annotation Tool||http://via-tool.sourceforge.net/||dev in C++||n||n||y||y||y||?||last update on SF 02/2013||Harvard||successor of VAT; Windows only|
on SourceForge: http://sourceforge.net/projects/via-tool/
|Video Image Annotation Tool is a Windows application to manually annotate video and images. It provides a user friendly interface for the accurate and undemanding live and "frame by frame" annotation of video and region-based annotation of still images. ... The development of this tool has been supported by the Bootstrapping Ontology Evolution with Multimedia Information Extraction (BOEMIE)||Multimedia Knowledge and Social Media Analytics Laboratory - Information Technologies Institute (CERTH-ITI)|
|VideoAnnEx||http://www.research.ibm.com/VideoAnnEx/||annotation||n||n||n||y||n||n||out of date||Win95||IBM|
|videoAnnotation||http://web.mit.edu/changc/www/videoAnnotation/videoAnnotation.html||user annotation prototype demo||y||y||y||y||?||prototype||MIT development of videoAnnotation.js, for OpenCourseware|
on gitHub: https://github.com/cmchang/videoAnnotation
|prototype of annotation interface for lectures||MIT OpenCourseware|
|VideoAnt - Video Annotation Tool||http://ant.umn.edu/||annotation||y||y||y||Harvard, Bamboo, Chan||web video||Synchronizes web-based video with timeline-based text annotations.|
|ViPER: The Video Performance Evaluation Resource||http://viper-toolkit.sourceforge.net/||command-line scripts, Java Swing||-||n||y||y||n||n||out of date; last update on SF: 02/2006||developed for use in computer vision; extensive methodological explanations|
on SourceForge: http://sourceforge.net/projects/viper-toolkit/
|The Video Processing Evaluation Resource: A toolkit for evaluating computer vision algorithms on video, and a corresponding tool for annotating video streams with spatial metadata.||Language and Media Processing Laboratory, Univ. of Maryland|
|VITAL - Video Interactions for Teaching and Learning||http://ccnmtl.columbia.edu/our_services/tools/vital/||combine text and video, write "multimedia essays"||y||y||y||n||discontinued||successor: mediathread, http://ccnmtl.columbia.edu/vital/nsf/ http://ccnmtl.columbia.edu/our_services/tools/vital/|
on gitHub: https://github.com/ccnmtl/vital
|Columbia Center for New Media Teaching and Learning http://ccnmtl.columbia.edu/|
|Waisda||https://www.prestocentre.org/library/tools/waisda||video labeling game||y||y||y||y||y||?||y||on gitHub: https://github.com/beeldengeluid/waisda||launched 2009 |
Waisda? is a video labelling game. Players watch short video fragments (typically between 1 and 10 minutes long) and add relevant tags to the video at specific moments in time. If a tag matches another player’s tag (perhaps from a previous session for the same video), both players are awarded points. By showing score lists on the home page and giving out various prizes, players are encouraged to collect more points and enter more tags.
In this way, video fragments are tagged, which makes them indexable and searchable. Whether a tag is trustworthy can be measured using heuristics, the most important one being how many matching tags exist. The game has been online in various versions and has been used to tag video archives of several public broadcasting associations.
|Netherlands Institute for Sound and Vision, Q42, VU University Amsterdam|
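The matching and trust heuristics Waisda? uses can be sketched as follows; the 10-second window and the pair-counting score here are illustrative assumptions, not Waisda's published tuning:

```python
from collections import defaultdict

WINDOW = 10.0  # seconds; illustrative assumption, not Waisda's actual setting

def matches(tags):
    """tags: list of (player, time_s, tag). Two entries match when
    different players entered the same tag within WINDOW seconds."""
    pairs = []
    for i, (p1, t1, w1) in enumerate(tags):
        for p2, t2, w2 in tags[i + 1:]:
            if p1 != p2 and w1.lower() == w2.lower() and abs(t1 - t2) <= WINDOW:
                pairs.append((w1.lower(), p1, p2))
    return pairs

def trust(tags):
    """Heuristic trust score per tag: count of matching pairs,
    i.e. independent confirmations by other players."""
    score = defaultdict(int)
    for word, _, _ in matches(tags):
        score[word] += 1
    return dict(score)

session = [
    ("alice", 12.0, "windmill"),
    ("bob",   14.5, "Windmill"),
    ("bob",   40.0, "canal"),
]
print(trust(session))  # {'windmill': 1}
```

The unconfirmed tag ("canal") gets no score, which is exactly the filtering effect the game mechanic is designed to produce: only tags independently entered by several players are treated as reliable index terms.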
|YouTube Video Annotations||http://www.youtube.com/t/annotations_about||annotation||y||y||(n)||y||n||n||y||youtube only||UCLA-KB; Harvard, Chan||Player on GitHub: https://github.com/ttsiodras/Youtube-Video-Annotations-Player||It allows you to play back YouTube videos with their annotations, offline (via MPlayer).|
Web application; free to use; only annotates videos on YouTube. You can only annotate videos you have uploaded, while others can see the annotations; text annotation ("text bubbles" or notes), highlight part of the screen; all annotations are editable.
|Yovisto||www.yovisto.com||educational, lecture recordings||y||y||(y - replay)||(n)||n||y||user tagging (subscribers)||based on replay: open source solution to manage the complete life cycle of audiovisual recordings in an automated way||video search engine specialized on educational video content, platform to upload, share, search, tag, and discuss academic videos. automated generation of a full-text index, users can write or edit wiki-pages to enrich the video content with further information, such as images, hyperlinks, and text.||yovisto|