|Start Date||End Date||Headline||Text||Media|
Towards a theoretical information science: information science and the concept of a paradigm by Andrew Douglas Brown
The Santa Fe Convention of the Open Archives Initiative by Herbert Van de Sompel, Carl Lagoze
The Semantic Web: a new form of Web content that is meaningful to computers will unleash a revolution of new possibilities by Tim Berners-Lee, James Hendler and Ora Lassila
A new form of Web content that is meaningful to computers will unleash a revolution of new possibilities. <a href="http://www.cs.umd.edu/~golbeck/LBSC690/SemanticWeb.html" target="_blank">Find out more...</a>
Open Directory RDF Dump by Netscape Communications Corporation
The Open Directory Project (ODP) is the most comprehensive human edited directory of the Web, compiled by a vast global community of volunteer editors. The ODP is also known as DMOZ, an acronym for Directory Mozilla. This name reflects its loose association with Netscape's Mozilla project, an Open Source browser initiative. <a href="http://www.dmoz.org/rdf.html" target="_blank">Find out more...</a>
RDF Primer by World Wide Web Consortium
The Resource Description Framework (RDF) is a language for representing information about resources in the World Wide Web. This Primer is designed to provide the reader with the basic knowledge required to effectively use RDF. It introduces the basic concepts of RDF and describes its XML syntax. It describes how to define RDF vocabularies using the RDF Vocabulary Description Language, and gives an overview of some deployed RDF applications. It also describes the content and purpose of other RDF specification documents. <a href="http://www.w3.org/TR/rdf-primer/" target="_blank">Find out more...</a>
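The triple model the Primer introduces can be sketched in a few lines. A hedged illustration in Python: the example.org URIs and the two statements are made up for this sketch, and only the Dublin Core terms namespace is real.

```python
DC = "http://purl.org/dc/terms/"  # real Dublin Core terms namespace

# A graph is just a set of (subject, predicate, object) triples.
graph = {
    ("http://example.org/book/1", DC + "title", "RDF Primer"),
    ("http://example.org/book/1", DC + "creator", "http://example.org/person/w3c"),
}

# Everything said about one resource is a simple filter over the set.
about_book = [(p, o) for (s, p, o) in graph
              if s == "http://example.org/book/1"]
print(len(about_book))
```

Real RDF toolkits add datatypes, language tags, and blank nodes on top of this, but the set-of-triples core is exactly what the Primer describes.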
Towards Semantically-Interlinked Online Communities by John H. Breslin, Andreas Harth, Uldis Bojars, Stefan Decker
Business Rules in the Semantic Web, are There Any or Are They Different? by Silvie Spreeuwenberg, Rik Gerrits
The semantic web community and the business rules community have common roots. This article explores the differences and similarities between the two fields in order to encourage collaboration between the communities with respect to standardization efforts and research topics.
Encoding Library of Congress Subject Headings in SKOS: Authority Control for the Semantic Web by Corey Harper
This paper will explore using XSLT stylesheets to translate LCSH Authority Records from MARC/XML or MADS XML formats into RDF documents according to the SKOS project's Quick Guide to Publishing a Thesaurus on the Semantic Web. Creating an RDF Data Store that represents the content of LCSH will have tremendous long-term benefits in allowing a greater breadth of applications to make full use of the relationships between concepts provided by LC Subject Headings.
MODS 2 RDF stylesheet by Stefano Mazzocchi, SIMILE project, MIT
XSLT stylesheet to convert MODS records to RDFXML <a href="http://simile.mit.edu/repository/RDFizers/marcmods2rdf/stylesheets/mods2rdf.xslt" target="_blank">Find out more...</a>
Some Functions are More Equal than Others: The Development of a Macroappraisal Strategy for the National Archives of Australia by Adrian Cunningham, Robyn Oswald
Metadata Interoperability and Standardization - A Study of Methodology Part I by Lois Mai Chan, Marcia Lei Zeng
Metadata Interoperability and Standardization - A Study of Methodology Part II by Marcia Lei Zeng, Lois Mai Chan
Linked Data - Design Issues by Tim Berners-Lee
Seminal definition of Linked Data concepts and purposes (including the "five stars"). <a href="http://www.w3.org/DesignIssues/LinkedData" target="_blank">Find out more...</a>
Primer - Getting into the semantic web and RDF using N3 by Tim Berners-Lee
The world of the semantic web, as based on RDF, is really simple at the base. This article shows you how to get started. It uses a simplified teaching language -- Notation 3 or N3 -- which is basically equivalent to RDF in its XML syntax, but easier to scribble when getting started. <a href="http://www.w3.org/2000/10/swap/Primer" target="_blank">Find out more...</a>
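The statement shape the primer teaches, "subject predicate object .", is simple enough to sketch. The toy parser below handles only that simplest N3 form and ignores prefixes, literals, and everything else the language offers; the `#pat`/`#knows` terms follow the primer's style of example but are illustrative only.

```python
def parse_simple_n3(text):
    # Split on " ." statement terminators, then on whitespace.
    # Handles only bare <...> terms: no prefixes, literals, or lists.
    triples = []
    for stmt in text.split(" ."):
        parts = stmt.split()
        if len(parts) == 3:
            triples.append(tuple(t.strip("<>") for t in parts))
    return triples

doc = "<#pat> <#knows> <#jo> . <#jo> <#knows> <#al> ."
triples = parse_simple_n3(doc)
print(triples)
```

A real N3 parser is of course far richer; the point is only that the basic statement form is three terms and a period.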
Event Ontology by Yves Raimond, Samer Abdallah
This document describes the Event ontology developed at the Centre for Digital Music, Queen Mary, University of London. The first draft of the ontology was written in October 2004. For further details about the Event ontology, related ontologies, and the technologies on which this ontology is founded, see the reference section. <a href="http://motools.sourceforge.net/event/event.html" target="_blank">Find out more...</a>
IsaViz Overview by Emmanuel Pietriga
IsaViz is a visual environment for browsing and authoring RDF models represented as graphs. It features:
a 2.5D user interface allowing smooth zooming and navigation in the graph
creation and editing of graphs by drawing ellipses, boxes and arcs
RDF/XML, Notation 3 and N-Triple import
RDF/XML, Notation 3 and N-Triple export, but also SVG and PNG export
Since version 2.0, IsaViz can render RDF graphs using GSS (Graph Stylesheets), a stylesheet language derived from CSS and SVG for styling RDF models represented as node-link diagrams. <a href="http://www.w3.org/2001/11/IsaViz/" target="_blank">Find out more...</a>
Visual Velcro: Hooking the Visitor by Peter Samis
|1/1/2008||RDFizers - SIMILE|
Wiki page has links to downloadable tools written at MIT and elsewhere (MIT's run from a command line and require a Java VM and Apache Maven) for converting data in other formats to RDF (including MARC to MODS to RDF) <a href="http://simile.mit.edu/wiki/RDFizers" target="_blank">Find out more...</a>
|1/1/2008||Linked Data on the Web 2008|
Conference proceedings volume <a href="http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-369/" target="_blank">Find out more...</a>
Linked Movie Data Base by Mariano P. Consens, Adrian M. Hassanzadeh, Adrian M. Teisanu
Project awarded first prize at the LOD Triplification Challenge. It takes data from existing sources and documents and uses a tool, ODDLinker, and similarity-join techniques to find links between different sources. <a href="http://www.linkedmdb.org/" target="_blank">Find out more...</a>
Dublin Core SPARQLer by Dublin Core Metadata Initiative
Interface for using SELECT and CONSTRUCT commands to query the Dublin Core Metadata Registry for terms and other metadata using SPARQL. <a href="http://dcmi.kc.tsukuba.ac.jp/sparql/" target="_blank">Find out more...</a>
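The difference between the two query forms the registry exposes, SELECT returning variable bindings and CONSTRUCT returning new triples, can be illustrated without a SPARQL engine. In this sketch the data triples and the ex:name predicate are hypothetical; only the rdfs:label predicate URI is real.

```python
RDFS_LABEL = "http://www.w3.org/2000/01/rdf-schema#label"  # real predicate

triples = {
    ("http://purl.org/dc/terms/title", RDFS_LABEL, "Title"),
    ("http://purl.org/dc/terms/creator", RDFS_LABEL, "Creator"),
}

# SELECT-style: the answer is a table of variable bindings.
bindings = [{"term": s, "label": o}
            for (s, p, o) in triples if p == RDFS_LABEL]

# CONSTRUCT-style: the answer is a new graph built from the matches.
EX_NAME = "http://example.org/name"  # hypothetical predicate
constructed = {(b["term"], EX_NAME, b["label"]) for b in bindings}

print(len(bindings), len(constructed))
```

Both forms match the same graph pattern; they differ only in whether the result is tabular (SELECT) or itself a graph (CONSTRUCT).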
Expressing Dublin Core metadata using the Resource Description Framework (RDF) by Dublin Core Metadata Initiative, Andy Powell, Mikael Nilsson, Pete Johnston, Ambjörn Naeve
This document provides recommendations for expressing DC metadata using RDF, the Resource Description Framework. It does this by describing how the features of the DCMI Abstract Model [ABSTRACT-MODEL] are represented using the RDF model (or abstract syntax), as defined by the RDF Concepts and Abstract Syntax specification [RDF-CONCEPTS]. It does not rely on any specific RDF syntax encoding, though examples using the RDF/XML Syntax Specification [RDF-SYNTAX-GRAMMAR] are provided in Appendix A. This allows Dublin Core metadata to be encoded using this specification in any RDF encoding syntax or other RDF representation system, such as RDF databases. <a href="http://dublincore.org/documents/dc-rdf/" target="_blank">Find out more...</a>
Semantic MARC, MARC21 and the Semantic Web by Rob Styles, Danny Ayers, Nadeem Shabir
The MARC standard for exchanging bibliographic data has been in use for several decades and is used by major libraries worldwide. This paper discusses the possibilities of representing the most prevalent form of MARC, MARC21, as RDF for the Semantic Web, and aims to understand the tradeoffs, if any, resulting from transforming the data. Critically our approach goes beyond a simple transliteration of the MARC21 record syntax to develop rich semantic descriptions of the varied things which may be described using bibliographic records. We present an algorithmic approach for consistently generating URIs from textual data, discuss the algorithmic matching of author names and suggest how RDF generated from MARC records may be linked to other data sources on the Web. <a href="http://sunsite.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-369/paper02.pdf" target="_blank">Find out more...</a>
Open Archives Initiative Protocol - Object Reuse and Exchange by Open Archives Initiative, Herbert Van de Sompel, Carl Lagoze
Open Archives Initiative Object Reuse and Exchange (OAI-ORE) defines standards for the description and exchange of aggregations of Web resources. These aggregations, sometimes called compound digital objects, may combine distributed resources with multiple media types including text, images, data, and video. The goal of these standards is to expose the rich content in these aggregations to applications that support authoring, deposit, exchange, visualization, reuse, and preservation. Although a motivating use case for the work is the changing nature of scholarship and scholarly communication, and the need for cyberinfrastructure to support that scholarship, the intent of the effort is to develop standards that generalize across all web-based information including the increasingly popular social networks of "web 2.0". <a href="http://www.openarchives.org/ore/" target="_blank">Find out more...</a>
Ontological Engineering: SPARQL Railroad Diagram from Hell by Nick Main
Tutorial on Semantic Digital Libraries at ICSD'09 by Sebastian R. Kruk, knowledgehives.com
Tutorial given at International Conference for Digital Libraries and the Semantic Web (ICSD) 2009 <a href="http://www.slideshare.net/knowledgehives/tutorial-on-semantic-digital-libraries-at-icsd09" target="_blank">Find out more...</a>
Linked Data - The Story So Far by Christian Bizer, Tom Heath, Tim Berners-Lee
The term Linked Data refers to a set of best practices for publishing and connecting structured data on the Web. These best practices have been adopted by an increasing number of data providers over the last three years, leading to the creation of a global data space containing billions of assertions - the Web of Data. In this article we present the concept and technical principles of Linked Data, and situate these within the broader context of related technological developments. We describe progress to date in publishing Linked Data on the Web, review applications that have been developed to exploit the Web of Data, and map out a research agenda for the Linked Data community as it moves forward. <a href="http://tomheath.com/papers/bizer-heath-berners-lee-ijswis-linked-data.pdf" target="_blank">Find out more...</a>
|1/1/2009||FactForge by Ontotext A.D.|
FactForge provides several kinds of web-based search interfaces including keyword search and SPARQL query, over a combined store of linked data harvested from various sources including DBpedia, Freebase, Geonames, UMBEL, WordNet, CIA World Factbook, Lingvoj, MusicBrainz (RDF from Zitgist). <a href="http://factforge.net/" target="_blank">Find out more...</a>
Adventures in Semantic Publishing: Exemplar Semantic Enhancements of a Research Article by David Shotton, Katie Portwin, Graham Klyne, Alistair Miles
Scientific innovation depends on finding, integrating, and re-using the products of previous research. Here we explore how recent developments in Web technology, particularly those related to the publication of data and metadata, might assist that process by providing semantic enhancements to journal articles within the mainstream process of scholarly journal publishing. We exemplify this by describing semantic enhancements we have made to a recent biomedical research article taken from PLoS Neglected Tropical Diseases, providing enrichment to its content and increased access to datasets within it. These semantic enhancements include provision of live DOIs and hyperlinks; semantic markup of textual terms, with links to relevant third-party information resources; interactive figures; a re-orderable reference list; a document summary containing a study summary, a tag cloud, and a citation analysis; and two novel types of semantic enrichment: the first, a Supporting Claims Tooltip to permit "Citations in Context", and the second, Tag Trees that bring together semantically related terms. In addition, we have published downloadable spreadsheets containing data from within tables and figures, have enriched these with provenance information, and have demonstrated various types of data fusion (mashups) with results from other research articles and with Google Maps. We have also published machine-readable RDF metadata both about the article and about the references it cites, for which we developed a Citation Typing Ontology, CiTO (http://purl.org/net/cito/). The enhanced article, which is available at http://dx.doi.org/10.1371/journal.pntd.0000228.x001, presents a compelling existence proof of the possibilities of semantic publication.
We hope the showcase of examples and ideas it contains, described in this paper, will excite the imaginations of researchers and publishers, stimulating them to explore the possibilities of semantic publishing for their own research articles, and thereby break down present barriers to the discovery and re-use of information within traditional modes of scholarly communication. <a href="http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.1000361#s7" target="_blank">Find out more...</a>
SP²Bench Generator and DBLP RDF Data
Describes a tool developed as part of the SP²Bench project to generate arbitrarily large sets of triples, derived from the DBLP database of bibliographic information about Computer Science (currently available in XML), and documents the mappings and structure used. <a href="http://dbis.informatik.uni-freiburg.de/index.php?project=SP2B/data.php" target="_blank">Find out more...</a>
About that 1952 Stedelijk Museum audio guide, and a certain Willem Sandberg by Loic Tallon
Fiona Bradley takes a tour of Linked Data endeavors and explains how they can help us make library data easier for everyone to use by Fiona Bradley
Examples of what was being done in 2009 and use cases for future linked data and library data applications <a href="http://www.libraryjournal.com/lj/ljinprintnetconnect/888240-335/fiona_bradley_takes_a_tour.html.csp" target="_blank">Find out more...</a>
RDFa for HTML Authors by World Wide Web Consortium
RDFa is a thin layer of markup you can add to your web pages that makes them understandable for machines as well as people. You could describe it as a CSS for meaning. By adding it, browsers, search engines, and other software can understand more about the pages, and in so doing offer more services or better results for the user. For instance, if a browser knows that a page is about an event such as a conference, it can offer to add it to your calendar, show it on a map, locate hotels or flights, or any number of other things.
This document introduces RDFa and gives examples of its use. <a href="http://www.w3.org/MarkUp/2009/rdfa-for-html-authors" target="_blank">Find out more...</a>
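At its simplest, the markup layer described above is just extra attributes on HTML elements. Below is a rough Python sketch of what an RDFa-aware tool gets out of them; real RDFa processing handles prefixes, @about, and attribute chaining, all of which this deliberately ignores, and the dc:title example is made up.

```python
from html.parser import HTMLParser

class RDFaSniffer(HTMLParser):
    """Collect (property, content) attribute pairs from start tags."""
    def __init__(self):
        super().__init__()
        self.pairs = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if "property" in a and "content" in a:
            self.pairs.append((a["property"], a["content"]))

sniffer = RDFaSniffer()
sniffer.feed('<meta property="dc:title" content="RDFa for HTML Authors">')
print(sniffer.pairs)
```

The attribute pairs recovered this way are what lets a browser or search engine "understand more about the pages" without changing what a human reader sees.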
rdfquery - RDF processing in your browser - Google Project Hosting
SKOS Simple Knowledge Organization System Primer by World Wide Web Consortium
SKOS (Simple Knowledge Organization System) provides a model for expressing the basic structure and content of concept schemes such as thesauri, classification schemes, subject heading lists, taxonomies, folksonomies, and other similar types of controlled vocabulary. As an application of the Resource Description Framework (RDF), SKOS allows concepts to be composed and published on the World Wide Web, linked with data on the Web and integrated into other concept schemes.
This document is a user guide for those who would like to represent their concept scheme using SKOS.
In basic SKOS, conceptual resources (concepts) are identified with URIs, labeled with strings in one or more natural languages, documented with various types of note, semantically related to each other in informal hierarchies and association networks, and aggregated into concept schemes.
In advanced SKOS, conceptual resources can be mapped across concept schemes and grouped into labeled or ordered collections. Relationships can be specified between concept labels. Finally, the SKOS vocabulary itself can be extended to suit the needs of particular communities of practice or combined with other modeling vocabularies.
This document is a companion to the SKOS Reference, which provides the normative reference on SKOS. <a href="http://www.w3.org/TR/skos-primer/" target="_blank">Find out more...</a>
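The "informal hierarchies" of basic SKOS can be pictured as skos:broader links followed transitively. A small sketch with hypothetical concept identifiers; real SKOS allows a concept to have several broader concepts, which is simplified away here.

```python
# skos:broader, pointing from each concept to one more general concept.
broader = {
    "ex:cats": "ex:mammals",
    "ex:mammals": "ex:animals",
}

def ancestors(concept):
    """Follow broader links transitively to collect all ancestors."""
    result = []
    while concept in broader:
        concept = broader[concept]
        result.append(concept)
    return result

print(ancestors("ex:cats"))
```

This is the kind of structure a thesaurus or subject heading list contributes once encoded in SKOS: walking broader links gives progressively more general concepts.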
Music and the Web of Linked Data | - ISMIR 2009 Tutorial by Kurt Jacobson, George Fazekas, Yves Raimond, Michael Smethurst
This site is part of a tutorial for ISMIR 2009 in Kobe Japan. All materials related to the tutorial will appear here including slides, links, software, examples, etc. Whether or not you were able to attend the tutorial, we hope that this site will be a valuable resource for those interested in applying Semantic Web technologies to music informatics. <a href="http://ismir2009.dbtune.org/" target="_blank">Find out more...</a>
Querying Linked Data with SPARQL by Olaf Hartig
A brief introduction to SPARQL (includes many example queries) <a href="http://www.slideshare.net/olafhartig/querying-linked-data-with-sparql?fb_action_ids=4140690048070&fb_action_types=slideshare%3Aview&fb_source=aggregation&fb_aggregation_id=10150872971651587&code=AQClh7BBsBflRX0HQ8GfFKYINeHeZ6p9FNUD1IXgNnufntFOjDhpk6qAFN0_HoQCsDgMYVe1XsjdQ_U-EWKiU2jxSeg_cOQ6kl_IKDNCNShvR60pn4idIU-mECYWUUAd65SMy5TbILX-OPYRz8gwmxhCYWEp8NK-CZ9ELFuoqSTk39wz_IZ248mwoZuYQdNkSAA2lKkw9ikxPZtYMjfSKBXL#_=_" target="_blank">Find out more...</a>
OWL 2 Web Ontology Language Primer by World Wide Web Consortium
The OWL 2 Web Ontology Language, informally OWL 2, is an ontology language for the Semantic Web with formally defined meaning. OWL 2 ontologies provide classes, properties, individuals, and data values and are stored as Semantic Web documents. OWL 2 ontologies can be used along with information written in RDF, and OWL 2 ontologies themselves are primarily exchanged as RDF documents. The OWL 2 Document Overview describes the overall state of OWL 2, and should be read before other OWL 2 documents.
This primer provides an approachable introduction to OWL 2, including orientation for those coming from other disciplines, a running example showing how OWL 2 can be used to represent first simple information and then more complex information, how OWL 2 manages ontologies, and finally the distinctions between the various sublanguages of OWL 2. <a href="http://www.w3.org/TR/2009/REC-owl2-primer-20091027/" target="_blank">Find out more...</a>
Lightweight Image Ontology (LIO) by Margaret Warren, Pat Hayes, Metadata Authoring Systems, LLC
A Lightweight Ontology for Describing Images <a href="http://www.imagesnippets.com/lio/lio.owl" target="_blank">Find out more...</a>
When owl:sameAs isn't the Same: An Analysis of Identity Links on the Semantic Web by Harry Halpin, Patrick J. Hayes
In Linked Data, the use of owl:sameAs is ubiquitous in 'inter-linking' data-sets. However, there is a lurking suspicion within the Linked Data community that this use of owl:sameAs may be somehow incorrect, in particular with regards to its interactions with inference. In fact, owl:sameAs can be considered just one type of 'identity link,' a link that declares two items to be identical in some fashion. After reviewing the definitions and history of the problem of identity in philosophy and knowledge representation, we outline four alternative readings of owl:sameAs, showing with examples how it is being (ab)used on the Web of data. Then we present possible solutions to this problem by introducing alternative identity links that rely on named graphs. <a href="http://events.linkeddata.org/ldow2010/papers/ldow2010_paper09.pdf" target="_blank">Find out more...</a>
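The paper's worry is easy to demonstrate: under OWL semantics every chain of owl:sameAs links collapses into a single identity, so one sloppy link conflates unrelated resources for every downstream consumer. A sketch with hypothetical identifiers, where a union-find structure stands in for the reasoner.

```python
parent = {}  # union-find forest over resource identifiers

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        x = parent[x]
    return x

def same_as(a, b):
    # Asserting "a owl:sameAs b" merges the two identity classes.
    parent[find(a)] = find(b)

same_as("dbpedia:Berlin", "geonames:Berlin")      # a reasonable link
same_as("geonames:Berlin", "ex:BerlinFilm2009")   # a sloppy link

# After inference, the city and the film share one identity.
print(find("dbpedia:Berlin") == find("ex:BerlinFilm2009"))
```

This transitivity is exactly why the authors distinguish weaker identity links from strict owl:sameAs.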
Linking Enterprise Data | 3 Round Stones
Linking Enterprise Data is the application of Semantic Web architecture principles to real-world information management issues faced by commercial, not-for-profit and government enterprises. This book aims to provide practical approaches to addressing common information management issues by the application of Semantic Web and Linked Data research to production environments. <a href="http://3roundstones.com/linking-enterprise-data/" target="_blank">Find out more...</a>
Reliable and Persistent Identification of Linked Data Elements: Linking Enterprise Data by David Wood
Linked Data techniques rely upon common terminology in a manner similar to a relational database's reliance on a schema. Linked Data terminology anchors metadata descriptions and facilitates navigation of information. Common vocabularies ease the human, social tasks of understanding datasets sufficiently to construct queries and help to relate otherwise disparate datasets. Vocabulary terms must, when using the Resource Description Framework, be grounded in URIs. A current best practice on the World Wide Web is to serve vocabulary terms as Uniform Resource Locators (URLs) and present both human-readable and machine-readable representations to the public. Linked Data terminology published to the World Wide Web may be used by others without reference or notification to the publishing party. That presents a problem: vocabulary publishers take on an implicit responsibility to maintain and publish their terms via the URLs originally assigned, regardless of the inconvenience such a responsibility may cause. Over the course of years, people change jobs, publishing organizations change Internet domain names, computers change IP addresses, and systems administrators publish old material in new ways. Clearly, a mechanism is required to manage Web-based vocabularies over a long term. This chapter places Linked Data vocabularies in context with the wider concepts of metadata in general and specifically metadata on the Web. Persistent identifier mechanisms are reviewed, with a particular emphasis on Persistent URLs, or PURLs. PURLs and PURL services are discussed in the context of Linked Data. Finally, historic weaknesses of PURLs are resolved by the introduction of a federation of PURL services to address needs specific to Linked Data. <a href="http://3roundstones.com/led_book/led-wood.html" target="_blank">Find out more...</a>
The LUCERO Project >> About by JISC Information Environment 2011 Programme, Open University
LUCERO is a project funded by the JISC Information Environment 2011 Programme under the call Deposit of research outputs and Exposing digital content for education and research.
Working with groups of learners, researchers and practitioners based at the Open University, LUCERO will scope, prototype, pilot and evaluate reusable, cost-effective solutions relying on the linked data principles and technologies for exposing and connecting educational and research content. <a href="http://lucero-project.info/lb/about/" target="_blank">Find out more...</a>
|1/1/2010||LOCAH Project by Mimas|
LOCAH was a JISC-funded project working to make data from the Archives Hub (aggregation of archival collection descriptions in the UK) and COPAC (union catalog of libraries in UK and Ireland) available as Linked Data. <a href="http://blogs.ukoln.ac.uk/locah/" target="_blank">Find out more...</a>
|1/1/2010||Assessing Jena and Sesame | SPQR|
Detailed comparison of Jena and Sesame including command line interface... <a href="http://spqr.cerch.kcl.ac.uk/?page_id=130" target="_blank">Find out more...</a>
A survey of techniques for achieving metadata interoperability by Bernhard Haslhofer, Wolfgang Klas
Achieving uniform access to media objects in heterogeneous media repositories requires dealing with the problem of metadata interoperability. Currently there exist many interoperability techniques, with quite varying potential for resolving the structural and semantic heterogeneities that can exist between metadata stored in distinct repositories. Besides giving a general overview of the field of metadata interoperability, we provide a categorization of existing interoperability techniques, describe their characteristics, and compare their quality by analyzing their potential for resolving various types of heterogeneities. Based on our work, domain experts and technicians get an overview and categorization of existing metadata interoperability techniques and can select the appropriate approach for their specific metadata integration scenarios. Our analysis explicitly shows that metadata mapping is the appropriate technique in integration scenarios where an agreement on a certain metadata standard is not possible. <a href="http://doi.acm.org/10.1145/1667062.1667064" target="_blank">Find out more...</a>
Practical Semantic Web and Linked Data Applications: Java, JRuby, Scala, and Clojure Edition by Mark Watson
This book is intended to be a practical guide for using RDF data in information processing, linked data, and semantic web applications using both the AllegroGraph product and the Sesame open source project. RDF data represents a graph. You probably are familiar to at least some extent with graph theory from computer science. Graphs are a natural way to represent things and the relationships between them. <a href="http://www.markwatson.com/opencontent_data/book_java.pdf" target="_blank">Find out more...</a>
sioc-project.org|Semantically-Interlinked Online Communities
The SIOC initiative (Semantically-Interlinked Online Communities) aims to enable the integration of online community information. SIOC provides a Semantic Web ontology for representing rich data from the Social Web in RDF. It has recently achieved significant adoption through its usage in a variety of commercial and open-source software applications, and is commonly used in conjunction with the FOAF vocabulary for expressing personal profile and social networking information. By becoming a standard way for expressing user-generated content from such sites, SIOC enables new kinds of usage scenarios for online community site data, and allows innovative semantic applications to be built on top of the existing Social Web. The SIOC ontology was recently published as a W3C Member Submission, which was submitted by 16 organisations. <a href="http://sioc-project.org/" target="_blank">Find out more...</a>
|5/1/2010||Web 3.0 by Kate Ray|
A story about the Semantic Web. Transcript, interview bios, and other info (including a Spanish transcript) on kateray.net. Downloadable version on drop.io/web3point0. Interviews with: Tim Berners-Lee, Clay Shirky, Chris Dixon, David Weinberger, Nova Spivack, Jason Shellen, Lee Feigenbaum, John Hebeler, Alon Halevy, David Karger, Abraham Bernstein <a href="http://vimeo.com/11529540" target="_blank">Find out more...</a>
Linking biodiversity data by Roderic Page
Example of linking sequences to a specimen, a taxon, a publication, and a journal, using linked data. <a href="http://vimeo.com/11739104" target="_blank">Find out more...</a>
Linked Data Workshop Participants Survey Results
The results of the survey (carried out using Survey Monkey) indicate that the presentations were well received and the event overall was seen as very positive. People enjoyed the networking opportunities and discussing an emerging technical area. 26 out of the 50 participants responded. <a href="http://www.clir.org/globaldigitallibraries/BL_LinkedData/LDSurveySummary_06082010.pdf" target="_blank">Find out more...</a>
ResearchSpace by The British Museum; Andrew Mellon Foundation
ResearchSpace (RS) is an Andrew W. Mellon Foundation funded project aimed at supporting collaborative internet research, information sharing and web applications for the cultural heritage scholarly community. The ResearchSpace environment intends to provide the following integrated elements:
Data and digital analysis tools.
Semantic RDF data sources.
Data and digital management tools.
Internet design and authoring tools.
One of the first datasets to be made available in RS for annotation and other research activity will be the British Museum's collection data (currently 2 million objects). <a href="http://www.researchspace.org/" target="_blank">Find out more...</a>
Digital Library Brown Bag: RDF for Librarians by Jenn Riley
Learning the lingo is important; however, for us it's probably better to start with understanding how some of the basic concepts differ from what we're used to. That's what we're going to do today. <a href="http://breeze.iu.edu/p48776227/?launcher=false&fcsContent=true&pbMode=normal" target="_blank">Find out more...</a>
Learning SPARQL: querying and updating with SPARQL 1.1 by Bob DuCharme
Archival description in OAI-ORE by Deborah Kaplan, Anne Sauer, Eliot Wilczek
This paper proposes using OAI-ORE as the basis for a new method to represent and manage the description of archival collections. This strategy adapts traditional archival description methods for the contemporary reality of digital collections and takes advantage of the power of OAI-ORE to allow for a multitude of non-linear relationships, providing richer and more powerful access and description. A schema for representing finding aids in OAI-ORE would facilitate more sophisticated methods for modeling archival collection descriptions. <a href="http://journals.tdl.org/jodi/article/view/1814/1769" target="_blank">Find out more...</a>
Linked Data: Evolving the Web into a Global Data Space by Tom Heath, Christian Bizer
This book gives an overview of the principles of Linked Data as well as the Web of Data that has emerged through the application of these principles. The book discusses patterns for publishing Linked Data, describes deployed Linked Data applications and examines their architecture. <a href="http://linkeddatabook.com/editions/1.0/" target="_blank">Find out more...</a>
Graphs in Libraries: A Primer by James E. Powell, Daniel Alcazar, Matthew Hopkins, Robert Olendorf, Tamara M. McMahon, Amber Wu, Lin Collins
Whenever librarians use Semantic Web services and standards for representing data, they
also generate graphs, whether they intend to or not. Graphs are a new data model for
libraries and librarians, and present new opportunities for library services. In this paper,
we introduce graph theory and explore its real and potential applications in the context of
digital libraries. Part I describes basic concepts in graph theory and how graph theory has
been applied by information retrieval systems such as Google. Part II discusses practical
applications of graph theory in digital library environments. Some of the applications
have been prototyped at the Los Alamos National Laboratory Research Library, others
have been described in peer-reviewed journals, and still others are speculative in nature.
Overall, the paper is intended to serve as a high-level tutorial to graphs in libraries <a href="http://ejournals.bc.edu/ojs/index.php/ital/article/view/1867/1705" target="_blank">Find out more...</a>
|1/1/2011||NYC Open Data|
This catalog supplies hundreds of sets of public data produced by City agencies and other City organizations. The data sets are now available as APIs and in a variety of machine-readable formats, making it easier than ever to consume City data and better serve New York City's residents, visitors, developer community and all! <a href="http://nycopendata.socrata.com/" target="_blank">Find out more...</a>
SPARQL Tutorial - data.lib.cam.ac.uk (beta)
This tutorial provides a very basic introduction to SPARQL, the query language for RDF data. It is based around the datasets loaded into data.lib.cam.ac.uk. It is also a work in progress. <a href="http://data.lib.cam.ac.uk/sparql.php" target="_blank">Find out more...</a>
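For readers new to SPARQL, the core idea the tutorial builds on can be sketched in plain Python: a graph is a set of triples, and a SELECT query matches triple patterns against it, binding variables. The data and the prefixes below (ex:, dc:, foaf:) are invented for illustration; a real query would be sent to an endpoint such as data.lib.cam.ac.uk.

```python
# Toy illustration of SPARQL SELECT semantics: match a conjunction of
# triple patterns against a set of (subject, predicate, object) triples.
# All resource names here are hypothetical.
graph = {
    ("ex:book1", "dc:title", "Moby Dick"),
    ("ex:book1", "dc:creator", "ex:melville"),
    ("ex:melville", "foaf:name", "Herman Melville"),
}

def match(graph, patterns):
    """Return variable bindings for a conjunction of triple patterns.
    Terms starting with '?' are variables, as in SPARQL."""
    solutions = [{}]
    for pattern in patterns:
        next_solutions = []
        for binding in solutions:
            for triple in graph:
                new = dict(binding)
                ok = True
                for term, value in zip(pattern, triple):
                    if term.startswith("?"):
                        # A variable must bind consistently across patterns.
                        if new.setdefault(term, value) != value:
                            ok = False
                    elif term != value:
                        ok = False
                if ok:
                    next_solutions.append(new)
        solutions = next_solutions
    return solutions

# Rough equivalent of: SELECT ?title ?name WHERE {
#   ?b dc:title ?title . ?b dc:creator ?a . ?a foaf:name ?name }
rows = match(graph, [
    ("?b", "dc:title", "?title"),
    ("?b", "dc:creator", "?a"),
    ("?a", "foaf:name", "?name"),
])
print(rows[0]["?title"], "-", rows[0]["?name"])  # Moby Dick - Herman Melville
```

A real SPARQL engine adds much more (OPTIONAL, FILTER, named graphs), but the pattern-matching core is the same.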
PoolParty Products by The Semantic Web Company
Suite of commercial linked data tools, including:
PoolParty Thesaurus Manager, PoolParty Extractor, and PoolParty Smart Content (link management and automatic content recommendations) <a href="http://poolparty.biz/products/" target="_blank">Find out more...</a>
SNAC: The Social Networks and Archival Context Project by IATH, University of Virginia, UC Berkeley School of Information, California Digital Library
The Social Networks and Archival Context Project (SNAC) will address the ongoing challenge of transforming description of and improving access to primary humanities resources through the use of advanced technologies. The project will test the feasibility of using existing archival descriptions in new ways, in order to enhance access and understanding of cultural resources in archives, libraries, and museums. <a href="http://socialarchive.iath.virginia.edu/" target="_blank">Find out more...</a>
SPIN - SPARQL Inferencing Notation by Holger Knublauch, James A. Hendler, Kingsley Idehen
SPIN is a W3C Member Submission that has become the de-facto industry standard for representing SPARQL rules and constraints on Semantic Web models. SPIN also provides meta-modeling capabilities that allow users to define their own SPARQL functions and query templates. Finally, SPIN includes a ready-to-use library of common functions.
What You Can Do with SPIN
SPIN is a way to represent a wide range of business rules.
You will not need to learn another proprietary rules language to do so. With SPIN, rules are expressed in SPARQL. In fact, SPIN is also referred to as SPARQL Rules. SPARQL is a well-established W3C standard implemented by many industrial-strength RDF APIs and databases. This means that rules can run directly on RDF data without a need for "materialization". SPIN provides a framework that helps users leverage the fast performance and rich expressivity of SPARQL for various application purposes.
SPIN can be used to:
Calculate the value of a property based on other properties - for example, area of a geometric figure as a product of its height and width, age of a person as a difference between today's date and person's birthday, a display name as a concatenation of the first and last names
Isolate a set of rules to be executed under certain conditions - for example, to support incremental reasoning, to initialize certain values when a resource is first created, or to drive interactive applications
These rules are implemented using SPARQL CONSTRUCT or SPARQL UPDATE requests (INSERT and DELETE). SPIN Templates also make it possible to define such rules in higher-level domain-specific languages so that rule designers do not need to work with SPARQL directly.
Another common need in applications is to check the validity of data. For example, you may want to require that a field is filled in, or that the string entered follows your format requirements.
SPIN offers a way to do constraint checking with closed world semantics and automatically raise inconsistency flags when currently available information does not fit the specified integrity constraints. Constraints are specified using SPARQL ASK or CONSTRUCT queries, or corresponding SPIN Templates.
SPIN combines concepts from object oriented languages, query languages, and rule-based systems to describe object behavior on the web of data. One of the key ideas of SPIN is to link class definitions with SPARQL queries to capture rules and constraints that formalize the expected behavior of those classes. To do so, SPIN defines a light-weight collection of RDF properties.
Finally, SPIN also supports the definition of new SPARQL functions with a transparent and web-friendly framework.
<a href="http://www.spinrdf.org/" target="_blank">Find out more...</a>
Dublin Core Metadata Registry by Dublin Core Metadata Initiative
"The Dublin Core Metadata Registry is designed to promote the discovery and reuse of properties, classes, and other types of metadata terms. It provides an up-to-date source of authoritative information about DCMI metadata terms and related vocabularies. The registry aids in the discovery of terms and their definitions and shows relationships between terms."
The web page for each term has a link to the RDF/XML, N-Triple and N3 linked data notations.
This application, developed by the OCLC Office of Research in cooperation with the Dublin Core Metadata Initiative Registry Community, is currently being hosted by the Resource Center for Knowledge Communities at the University of Tsukuba as a collaborative service for the DCMI community. The registry was developed and continues to be available from DCMI as an open-source project, built entirely with open-source and open-standards software. <a href="http://dcmi.kc.tsukuba.ac.jp/dcregistry/" target="_blank">Find out more...</a>
CNI: Linked Open Data: The Promises and the Pitfalls... - YouTube by Coalition for Networked Information, Dean Krafft, Martin Kalfatovic, MacKenzie Smith, Kris Carpenter Negulescu
Presentations on Vivo, Smithsonian Information, Civil War Data 150 at CNI fall 2010 membership meeting <a href="http://www.youtube.com/watch?v=uSmG1-hoZfE" target="_blank">Find out more...</a>
SPARQL by Example - Cambridge Semantics by Lee Feigenbaum, Eric Prud'hommeaux
Nexus 3D RDF Visualization as an OpenSimulator Region Module Displaying Researcher Interest VIVO Data | eBremer by Erich Bremer
Describes the author's visualization tool (Nexus 3D RDF Visualizer in Opensimulator) and its use with DNA RDF and VIVO researcher interests. Scalability issues and use of PubMed RDF representation and MESH terms are mentioned. <a href="http://www.ebremer.com/nexus/2011-02-19/Nexus-OpenSimulator-Region-Module" target="_blank">Find out more...</a>
The Way to Linked Library Data (part 1) by Karen Coyle
Slides from a webinar sponsored by ASIS&T. High level linked data concepts, particularly geared to libraries. <a href="http://kcoyle.net/presentations/asisti.pdf" target="_blank">Find out more...</a>
The Way to Linked Library Data, part 2: Tools and Techniques by Karen Coyle
Slides and notes from the second webinar in the ASIS&T series. High level concepts, geared for libraries. This session gives some more information about RDF, vocabularies and ontologies, and examples. <a href="http://kcoyle.net/presentations/asistii.html" target="_blank">Find out more...</a>
|3/21/2011||Okkam, enabling the web of entities|
A large-scale integrating project co-funded by the European Commission that ran from January 2008 to June 2010. <a href="http://project.okkam.org/" target="_blank">Find out more...</a>
DoCO, the Document Components Ontology by David Shotton, Silvio Peroni
DoCO, the Document Components Ontology, is an ontology for describing the component parts of a bibliographic document. It forms part of SPAR, a suite of Semantic Publishing and Referencing Ontologies. Other SPAR ontologies are described at http://purl.org/spar/. It provides a structured vocabulary written in OWL 2 DL of document components, both structural (e.g. block, inline, paragraph, section, chapter) and rhetorical (e.g. introduction, discussion, acknowledgements, reference list, figure, appendix), enabling these components, and documents composed of them, to be described in RDF. It imports the Discourse Elements Ontology (http://purl.org/spar/deo) and the Document Structural Patterns Ontology (http://www.essepuntato.it/2008/12/pattern), and uses seven rhetorical block elements (background, conclusion, contribution, discussion, evaluation, motivation and scenario) abstracted from the SALT Rhetorical Ontology (http://salt.semanticauthoring.org/ontologies/sro.rdfs). <a href="http://www.essepuntato.it/lode/http://purl.org/spar/doco" target="_blank">Find out more...</a>
Intro to Linked Data: Part 1 of 5: Context by David Hyland-Wood
Intro to Linked Data: Part 3 of 5: Data Modeling by David Hyland-Wood
Covers some of the key aspects of modeling data as linked data <a href="http://www.slideshare.net/prototypo/intro-to-linked-data-data-modeling" target="_blank">Find out more...</a>
Authority SPARQL Examples - Library Linked Data
Wiki page with some example SPARQL queries, based on VIAF's inclusion of DBpedia links <a href="http://www.w3.org/2005/Incubator/lld/wiki/Authority_SPARQL_Examples" target="_blank">Find out more...</a>
Semantic Web for the Working Ontologist: Effective Modeling in RDFS and OWL by Dean Allemang, Jim Hendler
Practical information for programmers and subject domain experts engaged in modeling data to fit the requirements of the Semantic Web. <a href="http://www.workingontologist.com/" target="_blank">Find out more...</a>
British Library Data Model: Overview by Tim Hodson
This is the first in a series of posts to delve deeper into specific aspects of the model and explain some of the thoughts behind the modelling. We start by exploring the background to the modelling process. <a href="http://consulting.talis.com/2011/07/british-library-data-model-overview/" target="_blank">Find out more...</a>
British Library Data Model, version 1.1 by Tim Hodson, Corine Deliot, Alan Danskin, Heather Rosie, Jan Ashton
Diagram of the RDF data model used for publishing the British National Bibliography as linked data. <a href="http://consulting.talis.com/wp-content/uploads/2011/07/British-Library-Data-Model-v1.01.pdf" target="_blank">Find out more...</a>
Turning the CIA's data into pretty pictures on your site using Views by Lin Clark
This screencast shows how you can access the CIA World Factbook (or data in any SPARQL endpoint) and reuse the content on your site. <a href="http://lin-clark.com/blog/turning-cias-data-pretty-pictures-your-site-using-views" target="_blank">Find out more...</a>
More fun with CIA data: SPARQL Views with relationships and contextual filters by Lin Clark
Yesterday, I posted about reusing the CIA's data on your site. I demonstrated how you can use Views to access the CIA World Factbook data and turn it into charts. You can enhance these views further by using relationships to get information about related things, such as bordering countries, and you can use contextual filters to tailor the SPARQL results to your node content. <a href="http://lin-clark.com/blog/more-fun-cia-data-sparql-views-relationships-and-contextual-filters" target="_blank">Find out more...</a>
Linked Data Patterns: A pattern catalogue for modelling, publishing, and consuming Linked Data by Leigh Dodds, Ian Davis
This book attempts to add to the steadily growing canon of reference documentation relating to Linked Data. Linked Data is a means of publishing "web-native" data using standards like HTTP, URIs and RDF. The book adopts a tried and tested means of communicating knowledge and experience in software development: the design pattern. The book is organized as a pattern catalogue that covers a range of different areas from the design of web scale identifiers through to application development patterns. The intent is to create a ready reference that will be useful for beginner and experienced practitioner alike. It's also intended to grow and mature in line with the practitioner community. <a href="http://patterns.dataincubator.org/book/index.html" target="_blank">Find out more...</a>
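One of the catalogue's identifier patterns, Patterned URIs, can be illustrated with a short sketch: mint predictable, human-readable URIs by combining a stable base, a collection name, and a natural key. The base URI and slug rules below are assumptions made for the example, not a scheme prescribed by the book.

```python
# Minimal sketch of the "Patterned URIs" identifier pattern.
# BASE and the slugging rules are illustrative assumptions.
import re

BASE = "http://example.org/id"  # hypothetical namespace

def patterned_uri(collection, natural_key):
    """Build e.g. http://example.org/id/books/moby-dick from a
    collection name and a human-readable key."""
    # Lowercase, replace runs of non-alphanumerics with a hyphen.
    slug = re.sub(r"[^a-z0-9]+", "-", natural_key.lower()).strip("-")
    return f"{BASE}/{collection}/{slug}"

print(patterned_uri("books", "Moby Dick"))
# http://example.org/id/books/moby-dick
```

Predictable URIs like these are easy to link to, easy to debug, and survive system migrations better than database-generated identifiers.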
Linked Open Data: Opportunities & Barriers for Archives by Adrian Stevenson
Slides giving a brief overview of linked data technology, some information about the LOCAH project and what it's doing, exposing Archives Hub (archives) and COPAC (library) data together, the model used, key benefits, and some challenges and issues. <a href="http://www.slideshare.net/adrianstevenson/saa-chicago2011" target="_blank">Find out more...</a>
SPARQLing UniProt RDF: Using RDF based technologies to aid biological curation efforts by Jerven Bolleman
The NBDC/DBCLS BioHackathon 2011 was held in Kyoto, Japan. The main focus of the BioHackathon is to develop technologies for handling Linked Data in life science. The participants discussed, explored and developed SPARQL endpoints, semantic web services, triple stores, ontologies, natural language processing, visualization and Open Bio* tools for utilizing RDF data.
On the first day of the BioHackathon (Aug. 21), a public symposium was held at Campus Plaza Kyoto. In this talk, Jerven Bolleman makes a presentation entitled "SPARQLing UniProt RDF: Using RDF based technologies to aid biological curation efforts." <a href="http://www.youtube.com/watch?v=AczWuWc4ua0" target="_blank">Find out more...</a>
Every Story Has a Beginning: Entering the Web of Data by Tim Sherratt
A keynote presentation by Tim Sherratt, delivered to the annual conference of the Australia and New Zealand Society of Indexers, September 2011. <a href="http://wraggelabs.com/shed/presentations/anzsi/" target="_blank">Find out more...</a>
Constructing the Open Data Landscape by Nicola Hughes
[M]uch of the public sector data published so far has been pretty much useless. Governments, finally, are beginning to realize that data has little value unless people understand its context and provenance...The key to a sustainable Open Data landscape lies not in the organisational heads of government bodies but in the provenance of the data they release and the ways in which it is released. The goal should be to gain the 5 stars of open linked data. For this to be achieved the data needs to be pared down to its raw ingredients. <a href="http://blog.scraperwiki.com/2011/09/07/constructing-the-open-data-landscape/" target="_blank">Find out more...</a>
Customer Relationship Management and the Social and Semantic Web: Enabling Cliens Conexus by Ricardo Colomo-Palacios, João Varajão, Pedro Soto-Acosta
The ever-growing influence of the Internet has caused a paradigm shift in relationships between customers and companies. New types of interaction introduced by Web 1.0 have undergone a dramatic change in quantity and quality with the advent of Web 2.0. Web 3.0, better known as the Semantic Web, will also significantly impact how companies understand Customer Relationship Management (CRM). Customer Relationship Management and the Social and Semantic Web: Enabling Cliens Conexus provides an overview of the field of the Semantic Web, social Web, and CRM by uniting various research studies from different subfields. Providing a forum for the exchange of research ideas and practices, this book is a reference convergence point for professionals, managers, and researchers in the CRM field together with IT professionals. It also aims to explore the opportunities and challenges confronting organizations in the light of customers in Web 2.0 by using new technologies, including semantic technologies (Web 3.0). <a href="http://www.igi-global.com/book/customer-relationship-management-social-semantic/51928" target="_blank">Find out more...</a>
An Introduction to Linked Open Data in Libraries, Archives and Museums by Jon Voss
Video and slides from an introductory talk Jon Voss gave at the Smithsonian Institution, 2011-09-16, as part of the "LODLAM Washington DC" gathering. Includes linked data concepts and open linked data potential, and covers recent activity to bring together libraries, archives and museums who are working in this area. <a href="http://lod-lam.net/summit/2011/09/15/intro-to-lodlam-talk-live-from-the-smithsonian/" target="_blank">Find out more...</a>
Semantic Web Tech: Linking Data Open Cloud # overview by Francesco Ferzini
Linked data is about connecting data. It is about using the Web to link data that was not previously linked, and lowering the barriers to linking data currently linked using other methods. <a href="http://actualizink.typepad.com/blog/2011/09/semantic-web-tech-linking-data-open-cloud-overview.html" target="_blank">Find out more...</a>
Proceedings of the 1st International Workshop on Semantic Digital Archives by Livia Predoiu, Steffen Hennicke, Andreas Nurnberger, Seamus Ross
Proceedings of the 1st International Workshop on Semantic Digital Archives, held 29 September 2011 in Berlin, Germany, co-located with the 1st International Conference on Theory and Practice of Digital Libraries (TPDL 2011), formerly known as the European Conference on Digital Libraries (ECDL). <a href="http://ceur-ws.org/Vol-801/" target="_blank">Find out more...</a>
Is it Time for Law Libraries to Collaborate on Description for Their Own Institutions' Legal Scholarship? (Linked Data and the Law » VoxPopuLII) by Michelle Pearse
Article (or blog post?) at the Legal Information Institute site, Cornell University, making the case for developing an ontology specific to legal journal articles and for the role of librarians in such development. Touches on the value to the Semantic Web. <a href="http://blog.law.cornell.edu/voxpop/category/linked-data-and-law/" target="_blank">Find out more...</a>
Using an RDF Data Pipeline to Implement Cross-Collection Search | museumsandtheweb.com by David Henry, Eric Brown
This paper presents an approach to transforming data from many diverse sources in support of a semantic cross-collection search application. It describes the vision and goals for a semantic cross-collection search and examines the challenges of supporting search of that kind using very diverse data sources. The paper makes the case for supporting semantic cross-collection search using semantic web technologies and standards including the Resource Description Framework (RDF), the SPARQL Protocol and RDF Query Language (SPARQL), and an XML mapping language. The Missouri History Museum has developed a prototype method for transforming diverse data sources into a data repository and search index that can support a semantic cross-collection search. The method presented in this paper is a data pipeline that transforms diverse data into localized RDF; then transforms the localized RDF into more generalized RDF graphs using common vocabularies; and ultimately transforms generalized RDF graphs into a Solr search index to support a semantic cross-collection search. Limitations and challenges of this approach are detailed in the paper.
<a href="http://www.museumsandtheweb.com/mw2012/papers/using_an_rdf_data_pipeline_to_implement_cross_" target="_blank">Find out more...</a>
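The three-stage pipeline the paper describes (diverse sources to localized RDF, localized RDF to generalized RDF in common vocabularies, generalized RDF to a search index) can be sketched in miniature. The field names, predicates, and mapping below are invented for illustration; the Museum's actual transformations are far richer.

```python
# Toy version of the three-stage RDF data pipeline described above.
# All names (local:, the mapping, the record fields) are hypothetical.

def localize(record):
    """Stage 1: express a raw source record as source-specific triples."""
    s = "local:" + record["id"]
    return [(s, "local:" + k, v) for k, v in record.items() if k != "id"]

# Stage 2 mapping from source-specific predicates to a common
# vocabulary (Dublin Core terms used here as the shared target).
MAPPING = {"local:objname": "dc:title", "local:maker": "dc:creator"}

def generalize(triples):
    """Stage 2: rewrite source-specific predicates into the shared vocabulary."""
    return [(s, MAPPING.get(p, p), o) for (s, p, o) in triples]

def index_doc(triples):
    """Stage 3: flatten one subject's triples into a search-index document
    (the shape a Solr document might take)."""
    doc = {}
    for (s, p, o) in triples:
        doc.setdefault("id", s)
        doc.setdefault(p, []).append(o)
    return doc

record = {"id": "1887.42", "objname": "Steamboat whistle", "maker": "Unknown"}
doc = index_doc(generalize(localize(record)))
print(doc["dc:title"])  # ['Steamboat whistle']
```

Keeping the localized and generalized graphs as separate stages, as the paper advocates, means each source mapping can be debugged on its own before the shared vocabulary is applied.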
Every Story Has a Beginning: Entering the Web of Data by Tim Sherratt
Keynote delivered at the annual conference of the Australia and New Zealand Society of Indexers, 14 September 2011. <a href="http://discontents.com.au/shoebox/every-story-has-a-beginning" target="_blank">Find out more...</a>
Library Linked Data Incubator Group Final Report by W3C Library Linked Data Incubator Group
The mission of the W3C Library Linked Data Incubator Group, chartered from May 2010 through August 2011, has been "to help increase global interoperability of library data on the Web, by bringing together people involved in Semantic Web activities, focusing on Linked Data, in the library community and beyond, building on existing initiatives, and identifying collaboration tracks for the future." In Linked Data [LINKEDDATA], data is expressed using standards such as Resource Description Framework (RDF) [RDF], which specifies relationships between things, and Uniform Resource Identifiers (URIs, or "Web addresses") [URI]. This final report of the Incubator Group examines how Semantic Web standards and Linked Data principles can be used to make the valuable information assets that libraries create and curate, resources such as bibliographic data, authorities, and concept schemes, more visible and re-usable outside of their original library context on the wider Web. <a href="http://www.w3.org/2005/Incubator/lld/XGR-lld-20111025/" target="_blank">Find out more...</a>
Linked Data and the OpenART project by Julie Allinson, Richard Stephens, Steve Bayliss, Martin Dow, University of York
Describes advantages of linked data and experience working with it in the OpenART project, including developing ontologies <a href="http://www.slideshare.net/j.allinson/linked-data-and-the-openart-project" target="_blank">Find out more...</a>
Library Linked Data Incubator Group: Datasets, Value Vocabularies, and Metadata Element Sets by W3C Library Linked Data Incubator Group
This report on datasets, value vocabularies and metadata element sets is a complement to the main report of the group. Based on the data gathered in the use cases and with additions from the expert group, this document provides a summary of the current state of Linked Data building blocks, in particular those most related to library Linked Data efforts. <a href="http://www.w3.org/2005/Incubator/lld/XGR-lld-vocabdataset-20111025/" target="_blank">Find out more...</a>
Library Linked Data Incubator Group: Use Cases by W3C Library Linked Data Incubator Group
This final report of the Incubator Group examines how Semantic Web standards and Linked Data principles can be used to make the valuable information assets that libraries create and curate, resources such as bibliographic data, authorities, and concept schemes, more visible and re-usable outside of their original library context on the wider Web.
The Incubator Group began by eliciting reports on relevant activities from parties ranging from small, independent projects to national library initiatives. These use cases provided the starting point for the work summarized in the main report: an analysis of the benefits of library Linked Data; a discussion of current issues with regard to traditional library data, existing library Linked Data initiatives, and legal rights over library data; and recommendations for next steps. <a href="http://www.w3.org/2005/Incubator/lld/XGR-lld-usecase-20111025/" target="_blank">Find out more...</a>
FAST Linked Data by OCLC Research
FAST (Faceted Application of Subject Terminology)
FAST (Faceted Application of Subject Terminology) is an enumerative, faceted subject heading schema derived from the Library of Congress Subject Headings (LCSH). The purpose of adapting the LCSH with a simplified syntax to create FAST is to retain the very rich vocabulary of LCSH while making the schema easier to understand, control, apply, and use. The schema maintains upward compatibility with LCSH, and any valid set of LC subject headings can be converted to FAST headings.
FAST Linked Data
Linked Data is one of the underpinnings of the Semantic Web, the effort to make the meaning of information on the Web more understandable to computers. These Linked Data authorities are formatted using SKOS (Simple Knowledge Organization System). A search API is available to help identify and select headings for use.
The Linked Data version of the FAST authorities also incorporates links to corresponding LCSH authorities. In addition, many of the geographic headings have links to the GeoNames geographic database.
The use of the FAST authorities is open and FAST is made available under an Open Data Commons Attribution (ODC-By) License. OCLC will update FAST periodically, at least twice yearly. <a href="http://fast.oclc.org/WebZ/Authorize?sessionid=0&next=startscreen&bad=error/authofail.html&autho=WebZUser&password=WebZUser" target="_blank">Find out more...</a>
The Future of Research: A ResearchSpace Perspective by Dominic Oldman
Presentation on the ResearchSpace project. <a href="http://docs.google.com/a/researchspace.org/viewer?a=v&pid=sites&srcid=cmVzZWFyY2hzcGFjZS5vcmd8cmVzZWFyY2hzcGFjZXxneDo2NDU5NjY2OWNhMmQ0OWY4" target="_blank">Find out more...</a>
Semantic Web in Bibliotheken [Semantic Web in Libraries] by the German National Library of Economics, Leibniz Information Centre for Economics, and the North Rhine-Westphalian Library Service Centre
Conference devoted to issues related to linked data/semantic web technology from a library perspective. This conference focused on infrastructure needs and on changing contexts of scholarly communication and publishing, and included workshops. Presentation materials are now linked to for many items on the program, and for workshops. The primary language of the site and materials is German (with some English versions); many presentations are in English. <a href="http://swib.org/swib11/" target="_blank">Find out more...</a>
Highs and Lows of Library Linked Data by Adrian Stevenson
British Library Terms RDF schema by Tim Hodson, Corine Deliot
RDF schema (OWL ontology) for the British Library's British National Bibliography published as linked data. <a href="http://www.bl.uk/schemas/bibliographic/blterms#" target="_blank">Find out more...</a>
google-refine: Google Refine, a power tool for working with messy data (formerly Freebase Gridworks)
Google Refine is a power tool for working with messy data, cleaning it up, transforming it from one format into another, extending it with web services, and linking it to databases like Freebase <a href="http://code.google.com/p/google-refine/" target="_blank">Find out more...</a>
Sweet Tools » AI3:::Adaptive Information by Michael K. Bergman
A database of over 10,000 Linked Data/Semantic Web tools. Part of the AI3 site which is primarily devoted to linked data technology. The Sweet Tools page provides a search interface; the Sweet Tools Simple List provides a list of all the tools, sorted by type and in alphabetical order within the type categories. <a href="http://www.mkbergman.com/sweet-tools/" target="_blank">Find out more...</a>
Stanford Linked Data Workshop Technology Plan by Stanford University Libraries & Academic Information Resources, Jerry Persons, Philip Schreur, Michael A. Keller
Published plan to implement a comprehensive data service focused on academic information of many kinds including library material, using linked data technology, at Stanford. Follows draft reports arising from workshops held 2011-06-27 through 2011-07-01 with invited participants and support from the Mellon Foundation and CLIR. <a href="http://www.clir.org/pubs/reports/pub152/LDWTechDraft_ver1.0final_111230.pdf" target="_blank">Find out more...</a>
Linked Data in Libraries, Archives and Museums (ISQ v.24 no. 2-3, 2011)
Entire issue of ISQ devoted to linked data topics. <a href="http://www.niso.org/publications/isq/2012/" target="_blank">Find out more...</a>
ARC2 is a PHP 5.3 library for working with RDF. It also provides a MySQL-based triplestore with SPARQL support.
Feature-wise, ARC2 is now in a stable state with no further feature additions planned. Issues are still being fixed and Pull Requests are welcome, though.
It runs in most web server environments (it's PHP 5.3 E_STRICT-compliant).
Features include: ConNeg-capable Web reader; support for proxies, redirects, and content negotiation;
various parsers: RDF/XML, N-Triples, Turtle, SPARQL + SPOG, legacy XML, HTML tag soup, RSS 2.0, Google Social Graph API JSON;
serializers: N-Triples, RDF/JSON, RDF/XML, Turtle, SPOG dumps;
two internal structures: resource-centric and statement-centric processing;
RDF storage (using MySQL); SPARQL SELECT, ASK, DESCRIBE, CONSTRUCT, plus aggregates, LOAD, INSERT, and DELETE;
SPARQL endpoint class: set up a compliant SPARQL endpoint with 3 lines of code;
SemHTML RDF extractors: DC, eRDF, microformats, OpenID, RDFa;
RemoteStore class: query remote SPARQL endpoints as if they were local stores (results are returned as native PHP arrays);
Turtle templating: generate dynamic graphs;
plugins: extend ARC with your own custom extensions;
triggers: register event handlers for selected SPARQL query types;
SPARQLScript: SPARQL-based scripting and output templating. <a href="https://github.com/semsol/arc2" target="_blank">Find out more...</a>
Supporting multilingual bibliographic resource discovery with Functional Requirements for Bibliographic Records - Semantic Web - Volume 3, Number 1 / 2012 - IOS Press by Hugo Manguinhas, Nuno Freire, Jorge Machado, Jorge Borbinha
id.loc.gov - LC Linked Data Service (Library of Congress)
The Linked Data Service provides access to commonly found standards and vocabularies promulgated by the Library of Congress. This includes data values and the controlled vocabularies that house them. The following are currently offered as part of this service:
LC Subject Headings
LC Name Authority File
LC Children's Subject Headings
LC Genre/Form Terms
Thesaurus for Graphic Materials
Preservation Level Role
Cryptographic Hash Functions
MARC Geographic Areas
Extended Date/Time Format <a href="http://id.loc.gov/" target="_blank">Find out more...</a>
Publishing and Using Cultural Heritage Linked Data on the Semantic Web by Eero Hyvönen
|1/1/2012||Muninn Project WW1|
The Muninn Project is a multidisciplinary, multinational, academic research project investigating millions of records pertaining to the First World War in archives around the world. We currently have agreements with Library and Archives Canada, the British National Archives and the National Archives of Australia.
Our aim is to take archives of digitized documents, extract the written data using massive amounts of computing power, and turn the resulting information into structured databases. These databases will then support further research in a number of different areas. <a href="http://www.muninn-project.org/" target="_blank">Find out more...</a>
|1/1/2012||EasyRdf by Nicholas J. Humfrey|
EasyRdf is a PHP library designed to make it easy to consume and produce RDF. It was designed for use in mixed teams of experienced and inexperienced RDF developers. It is written in Object Oriented PHP and has been tested extensively using PHPUnit.
After parsing, EasyRdf builds up a graph of PHP objects that can then be walked to get the data to be placed on the page. Dump methods are available to inspect what data is available during development.
Data is typically loaded into an EasyRdf_Graph object from source RDF documents, loaded from the web via HTTP. The EasyRdf_GraphStore class simplifies loading and saving data to a SPARQL 1.1 Graph Store.
SPARQL queries can be made over HTTP to a Triplestore using the EasyRdf_Sparql_Client class. SELECT and ASK queries will return an EasyRdf_Sparql_Result object and CONSTRUCT and DESCRIBE queries will return an EasyRdf_Graph object. <a href="http://www.aelius.com/njh/easyrdf/" target="_blank">Find out more...</a>
Raptor RDF Parser Toolkit - Raptor RDF parser utility by Dave Beckett
Raptor is a free software / Open Source C library that provides a set of parsers and serializers that generate Resource Description Framework (RDF) triples by parsing syntaxes or serialize the triples into a syntax. The supported parsing syntaxes are RDF/XML, N-Triples, TriG, Turtle, RSS tag soup including all versions of RSS, Atom 1.0 and 0.3, GRDDL and microformats for HTML, XHTML and XML. The serializing syntaxes are RDF/XML (regular, and abbreviated), N-Triples, RSS 1.0, Atom 1.0, XMP, Turtle, GraphViz DOT and JSON. <a href="http://librdf.org/raptor/rapper.html" target="_blank">Find out more...</a>
ARC2: Easy RDF and SPARQL for LAMP systems
ARC is a flexible RDF system for semantic web and PHP practitioners. It's free, open-source, easy to use, and runs in most web server environments (it's PHP 5.3 E_STRICT-compliant)...ARC started in 2004 as a lightweight RDF system for parsing and serializing RDF/XML files. It later evolved into a more complete framework with storage and query functionality. By 2011, ARC2 had become one of the most-installed RDF libraries. Nevertheless, active code development had to be discontinued due to lack of funds and the inability to efficiently implement the ever-growing stack of RDF specifications. The sources continue to be available to the community through GitHub. <a href="https://github.com/semsol/arc2/wiki" target="_blank">Find out more...</a>
AllegroGraph is a modern, high-performance, persistent graph database. AllegroGraph uses efficient memory utilization in combination with disk-based storage, enabling it to scale to billions of quads while maintaining superior performance. AllegroGraph supports SPARQL, RDFS++, and Prolog reasoning from numerous client applications.
AllegroGraph New V4.8 Features
MongoDB Integration - Presentation: MongoDB meets the Semantic Web, and a recent webcast on MongoGraph
SOLR Interface for free text indexes, integrated with the SPARQL 1.1 query engine. View the webcast: Making Solr Search Smarter using RDF
Many SPARQL 1.1 enhancements and performance improvements
SPIN support (SPARQL Inferencing Notation). The SPIN API allows you to define a function in terms of a SPARQL query and then call that function in other SPARQL queries. These SPIN functions can appear in FILTERs and can also be used to compute values in assignment and select expressions.
New Transactional Duplicate triple/quad deletion and suppression
New Support for Client Authentication via x.509 certificates
Improved efficiency of Warm Standby and Replication
Numerous internal improvements, particularly to index updating and query processing, resulting in improved performance.
The primary emphasis of AllegroGraph version 4.8 development has been additional enterprise functionality, efficiency and overall scalability. There are many new features as well. Please refer to the release notes for a complete list of enhancements and improvements.
AllegroGraph is 100 percent ACID, supporting Transactions: Commit, Rollback, and Checkpointing.
Full and Fast Recoverability
100% Read Concurrency, Near Full Write Concurrency
Online Backups, Point-in-Time Recovery, Replication, Warm Standby
Dynamic and Automatic Indexing – All committed triples are always indexed (7 indices)
Advanced Text Indexing – Text indexing per predicate
All Clients based on REST Protocol – Java Sesame, Java Jena, Python, C#, Clojure, Perl, Ruby, Scala, and Lisp clients
Completely multi-processing based (SMP) – Automatic Resource Management for all processors and disks, and optimized memory use. See the performance tuning guide here, and server configuration guide here
Column-based compression of indices – reduced paging, better performance
Triple Level Security with Security Filters
Cloud-Hosted AllegroGraph - Amazon EC2
CLIF++, Common Logic Rule Language
Soundex support - Allows Free text indexing based on phonetic pronunciation
User-defined Indices - fully controllable by system administrator
Client-Server GRUFF with Graphical Query Builder
Plug-in Interface for Text Indexers (use SOLR/Lucene, Native AG Full Text Indexer, Japanese Tokenizer)
Dedicated and Public Sessions – In dedicated sessions users can work with their own rule sets against the same database
Visit our Learning Center
Mark Watson's new book: Practical Semantic Web and Linked Data Applications, Java, Clojure, Scala, and JRuby Edition
AllegroGraph is designed for maximum loading and query speed. Loading of quads, through its highly optimized RDF/XML and N-Quads parsers, is best-of-breed, particularly with large files. The AllegroGraph product line has always pushed the performance envelope, starting with version 1.0 in 2004, the first product to claim 1 billion triples loaded and indexed using standard x86 64-bit hardware. AllegroGraph, a purpose-built (not a modified RDBMS) NoSQL graph database, continued to drive innovation in the marketplace with the 2008 SemTech conference example of 10 billion quads loaded on Amazon's EC2 service. The version 4 series continues to bring performance to the forefront of Franz's Semantic Technologies as the industry's first OLTP semantic web database. AllegroGraph's ability to automatically manage all available hardware resources to maximize loading, indexing and query capabilities once again raises the bar for RDF storage performance. The following table displays examples of AllegroGraph's performance in loading and indexing. Benchmark Results.
[Benchmark table: load times of 36 min 49 sec; 12 hrs 18 min 16 sec; 78 hrs 9 min 23 sec; and 338 hrs 5 min, on the hardware configurations noted below. The triple counts and load rates (T/sec) columns are not preserved here.]
*32 core Intel E5520, 2.0 GHz, with 1 TB RAM, RedHat v6.1.
**64 core Intel x7560, 2.27 GHz, 2TB RAM, 22TB Disk, Redhat v6.1. LUBM-like data.
***240 core Intel x5650, 2.66GHz, 1.28TB RAM, 88TB Disk, Redhat v6.1. LUBM-like data.
AllegroGraph provides a REST protocol architecture, essentially a superset of the Sesame HTTP Client. Franz's staff directly supports adapters for various languages: Sesame Java, Sesame Jena, Python (using the Sesame signatures), and Lisp. Open Source adapters are available through community projects for C#, Ruby, Clojure, Scala, and Perl. Links to download here.
Powerful and Expressive Reasoning and Querying
AllegroGraph provides the broadest array of mechanisms to query and access knowledge in an RDF datastore:
RDFS++ Reasoning - Dynamic Materialization
Description logics or OWL-DL reasoners are good at handling complex ontologies. They tend to be complete (give all the possible answers to a query) but can be totally unpredictable with respect to execution time when the number of triples increases beyond millions. AllegroGraph offers a very fast and practical RDFS++ reasoner.
We support all the RDF and RDFS predicates and some from full OWL. The supported predicates are rdf:type, rdfs:subClassOf, rdfs:range, rdfs:domain, rdfs:subPropertyOf, owl:sameAs, owl:inverseOf, owl:TransitiveProperty, owl:hasValue, owl:someValuesFrom, owl:allValuesFrom, owl:oneOf, owl:equivalentClass, owl:Restriction, owl:onProperty, and owl:intersectionOf.
AllegroGraph's RDFS++ engine dynamically maintains the ontological entailments required for reasoning: it has no explicit materialization phase. Materialization is the pre-computation and storage of inferred triples so that future queries run more efficiently. The central problem with materialization is its maintenance: changes to the triple-store's ontology or facts usually change the set of inferred triples. In static materialization, any change in the store requires complete re-processing before new queries can run. AllegroGraph's Dynamic Materialization simplifies store maintenance and reduces the time required between data changes and querying.
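The difference is easy to see in miniature. The sketch below (illustrative only, not AllegroGraph's implementation) answers a type query by walking rdfs:subClassOf chains at query time, so no inferred triples are ever stored and nothing needs re-materializing when the facts or ontology change:

```python
# Asserted facts only; class and instance names are made up.
subclass_of = {          # rdfs:subClassOf assertions
    "Dog": "Mammal",
    "Mammal": "Animal",
}
types = {"rex": "Dog"}   # rdf:type assertions

def entailed_types(resource):
    """Walk the subclass chain dynamically instead of materializing it."""
    result = []
    t = types.get(resource)
    while t is not None:
        result.append(t)
        t = subclass_of.get(t)
    return result
```

A query for everything of type Animal would find `rex` through this walk; deleting the `Dog -> Mammal` assertion changes the answer immediately, with no re-processing phase, which is the maintenance advantage the paragraph above describes.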
SPARQL Queries on Named Graphs
SPARQL, the W3C standard RDF query language, returns RDF, XML and other formats in responses to queries. AllegroGraph's SPARQL, one of the W3C's "interoperable implementations", includes a query optimizer, and has full support for named graphs. It can be used with the RDFS++ reasoning turned on (i.e., query over real and inferred triples). SPARQL can be used with every available AllegroGraph interface mentioned in the previous section.
AllegroGraph's RDF Prolog provides concise, powerful, industry-standard, domain-specific reasoning to build high-level concepts (that require complex rules or numerical processing) on top of RDF data. AllegroGraph Prolog is an option because many use cases are difficult (or very cumbersome) to model with only RDF/RDFS and OWL. Prolog can also be used on top of the RDFS++ reasoner as a rule based system.
Low-level APIs allow fast, 'close-to-the-metal' access to triples by subject, predicate, and object.
GeoTemporal Reasoning, Social Network Analysis, and Additional Features
Other essential Triple-Store features:
Geospatial and Temporal Reasoning
AllegroGraph stores geospatial and temporal data types as native data structures. Combined with its indexing and range query mechanisms, AllegroGraph lets you perform geospatial and temporal reasoning efficiently.
Social Networking Analysis
AllegroGraph includes an SNA library that treats a triple-store as a graph of relations, with functions for measuring importance and centrality as well as several families of search functions. Example algorithms are nodal-degree, nodal-neighbors, ego-group, graph-density, actor-degree-centrality, group-degree-centrality, actor-closeness-centrality, group-closeness-centrality, actor betweenness-centrality, group-betweenness-centrality, page-rank-centrality, and cliques. Geospatial and temporal primitives combined with SNA functions form an Activity Recognition framework for flexibly analyzing networks and events in large volumes of structured and unstructured data.
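As a flavor of what such SNA functions compute, here is a hedged sketch of actor-degree-centrality over a toy triple graph (the names and the `knows` predicate are invented for illustration; this is not the AllegroGraph SNA API):

```python
from collections import defaultdict

# Toy social network stored as triples; actor-degree-centrality is the
# number of distinct neighbors divided by the maximum possible (n - 1).
triples = [
    ("alice", "knows", "bob"),
    ("alice", "knows", "carol"),
    ("bob", "knows", "carol"),
    ("dave", "knows", "alice"),
]

neighbors = defaultdict(set)
for s, p, o in triples:
    if p == "knows":             # treat the relation as undirected
        neighbors[s].add(o)
        neighbors[o].add(s)

def degree_centrality(actor):
    n = len(neighbors)           # number of actors in the network
    return len(neighbors[actor]) / (n - 1)
```

Here `alice` is connected to everyone (centrality 1.0) while `dave` has a single tie; the other measures listed above (closeness, betweenness, cliques) are built on the same triple-as-edge view of the store.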
Native Data Types and Efficient Range Queries
AllegroGraph stores a wide range of data types directly in its low level triple representation. This allows for very efficient range queries and significant reduction in triple-store data size. With other triple-stores that only store strings, the only way to do a range query is to go through all the values for a particular predicate. This works well if everything fits in memory; but if the predicate has millions of triples, it will need costly machines with huge amounts of RAM. AllegroGraph supports most XML Schema types (native numeric types, dates, times, longitudes, latitudes, durations and telephone numbers).
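The point can be sketched in a few lines: with natively typed values kept in a sorted index, a range query becomes two binary searches rather than a scan over every string value of the predicate (illustrative Python with a hypothetical `:age` predicate, not AllegroGraph internals):

```python
import bisect

# (age, subject) pairs for a hypothetical :age predicate, kept sorted
# by numeric value, as a native-typed index would be.
entries = sorted([
    (17, "alice"), (23, "bob"), (42, "carol"), (68, "dave"),
])

def age_range(low, high):
    """Return subjects whose age lies in [low, high] via two
    binary searches on the sorted index, not a full scan."""
    lo = bisect.bisect_left(entries, (low, ""))
    hi = bisect.bisect_right(entries, (high, "\uffff"))
    return [subj for _, subj in entries[lo:hi]]
```

With string-only storage the same query would have to parse and compare every object of `:age`, which is the memory-bound behavior the paragraph above warns about.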
AllegroGraph supports free-text indexing on the objects of triples whose predicates have been registered for indexing. Once indexed, triples can be found using a simple but robust query language. Free-text indexing support includes functions to register predicates and see which predicates are registered. Support for Solr was added in AllegroGraph version 4.5.
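A minimal sketch of per-predicate free-text indexing (the predicate names and documents are hypothetical, and this is not AllegroGraph's query language):

```python
from collections import defaultdict

registered = {"dc:title"}            # predicates registered for indexing
text_index = defaultdict(set)        # word -> set of subjects

def add_triple(s, p, o):
    """Index the object's words only when the predicate is registered."""
    if p in registered:
        for word in o.lower().split():
            text_index[word].add(s)

add_triple("doc1", "dc:title", "Semantic Web Primer")
add_triple("doc2", "dc:title", "Linked Data Basics")
add_triple("doc3", "dc:creator", "Web Foundation")   # not registered, skipped

def search(word):
    return sorted(text_index[word.lower()])
```

A search for "web" finds only `doc1`: `doc3` also contains the word, but its predicate was never registered, which is exactly the per-predicate selectivity described above.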
Named Graphs for Weights, Trust Factors, Provenance
AllegroGraph actually stores quints. A triple in AllegroGraph contains 5 slots, the first three being subject (s), predicate (p), and object (o). The remaining two are a named-graph slot (g) and a unique id assigned by AllegroGraph. The id slot is used for internal administrative purposes, but can also be referred to by other triples directly.
The W3C proposal is to use the 'named-graph' slot for clustering triples. So for example, you load a file with triples into AllegroGraph and you use the filename as the named-graph. This way, if there are changes to the triple file, you just update those triples in the named graph that came from the original file. However, with AllegroGraph, you can also put other attributes such as weights, trust factors, times, latitudes, longitudes, etc, into the named graph slot.
AllegroGraph allows triple-ids to be the subject or object of another triple. This is beyond the scope of pure RDF. The advantage of this approach is that you can reduce the total number of triples in the store to a more manageable size, and, even more importantly, dramatically reduce query time because a single query can retrieve more data.
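A toy illustration of the five-slot model (not AllegroGraph's storage format; the URIs are made up): each assertion receives an id, and that id can then stand as the subject of another assertion, e.g. for provenance, without the four extra triples standard RDF reification would require:

```python
store = []   # each entry: (subject, predicate, object, graph, id)

def add(s, p, o, g):
    """Append a quint; the 5th slot is an id assigned by the store."""
    store.append((s, p, o, g, len(store)))
    return len(store) - 1

tid = add(":alice", ":worksFor", ":acme", ":hr-file")
add(tid, ":reportedBy", ":bob", ":provenance")   # the id used as a subject
```

One lookup on the id retrieves both the original statement and everything asserted about it, which is the query-time saving the paragraph above points to.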
Automatic Resource Management
The AllegroGraph architecture is designed to maximize hardware resources for all data management procedures (Loading, Indexing, Query, etc.). The hardware utilization can be managed through the AllegroGraph configuration file as necessary.
Dynamic and Automatic Indexing
Triple-indices are user configurable, or index management can be taken care of entirely by AllegroGraph. By default, all committed triples are always indexed (default: 7 indices). AllegroGraph now supports any index combination of S, P, O, G. The default indices are:
S, P, O, G, I - Subject, Predicate, Object, Named Graph, ID
P, O, S, G, I
O, S, P, G, I
G, S, P, O, I
G, P, O, S, I
G, O, S, P, I
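The idea behind these orderings can be sketched with a toy store that keeps the same quads keyed several ways, so that a pattern with, say, a bound predicate is a direct lookup in the POSG index rather than a scan (illustrative Python only; a real index sorts full permutations rather than keying on one slot):

```python
from collections import defaultdict

# A subset of the default orderings; each index keys the quads by the
# first slot of its permutation.
ORDERS = ["spog", "posg", "ospg", "gspo"]
indices = {order: defaultdict(list) for order in ORDERS}

def add(s, p, o, g):
    quad = {"s": s, "p": p, "o": o, "g": g}
    for order in ORDERS:
        indices[order][quad[order[0]]].append((s, p, o, g))

def by_predicate(p):
    """A (?s, p, ?o, ?g) pattern is a prefix lookup in the posg index."""
    return indices["posg"][p]

add(":alice", ":knows", ":bob", ":g1")
add(":alice", ":age", "42", ":g1")
```

The G-first orderings play the same role for queries restricted to one named graph, which is why AllegroGraph keeps several graph-leading permutations among its defaults.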
AllegroGraph supports queries over distributed databases. You can group multiple triple-stores, both local and remote, into a single virtual store. It allows thread-safe opening of multiple triple-databases from one application (for the read-only parts of the database). Queries over multiple databases are easy with direct data access from applications. It also supports physical merging of databases.
Production AllegroGraph databases can now be paired with transactionally consistent Warm Standby databases, co-located in the same data center or across the globe. Whether for planned maintenance or a hardware failure, your enterprise application never needs to be down.
Point in Time Recovery
Provides a user the option to advance the state of a restored database forward to any later commit that was made to the original (and perhaps still running) database. This functionality performs as if the user performed a backup after every commit thus providing complete data integrity.
Allows multiple AllegroGraph databases to be kept synchronized and transactionally consistent in real-time with the master. These replicates can provide scalability and load balancing for applications by offering numerous clients the ability to read data that reflects content in the master database. Replication occurs across the network so any set of AllegroGraph databases connected by a network can participate in replication.
Make the most of your use of semantic technologies by utilizing our consulting services. We provide:
Vision Building - How to Apply Semantic Technologies
Rapid Prototyping and Proof-of-Concept Development
Complete Enterprise Technology Solution Stack
Best Practices to Maximize Value from Semantic Technologies
New Organizational Skills Required – Custom Training
More Details - www.franz.com/ps
Compatible Semantic Technologies
TopBraid Composer, developed by TopQuadrant, Inc., is an enterprise-class modeling and application development environment. It provides comprehensive support for modeling ontologies and data, connecting data sources, designing queries, rules and semantic data processing chains, and developing Semantic Web applications. For details see TopBraid Composer
The Semantic Web reasoning system developed by Racer Systems GmbH, RacerPro, has been integrated with AllegroGraph, exposing RDF data in AllegroGraph to Racer's highly optimized Description Logic (DL) reasoner. It is most suitable for ontology-driven applications or theorem proving. RacerPro's interfaces also include DIG over HTTP and support for rules (SWRL). For details see RacerPro
AGWebview, developed by Franz, Inc., is an interface for exploring, querying, and managing AllegroGraph triple stores through a web browser. For details see AGWebview
Gruff is an RDF browser that displays visual graphs and has an interface to build SPARQL or Prolog queries as visual graphs. Gruff can also display tables of all properties of selected resources or generate tables with SPARQL queries, and resources in the tables can be added to the visual graph. For details see Gruff
Data mining has increasingly played a key role in the enterprise decision process because of today's competitive necessity to respond to changing market conditions quickly and correctly, leveraging the enormous operating data now available for such processes. PEPITo, developed by PEPITe S.A., brings unique capabilities to meet today's data mining needs. For details see Pepito
The COGITO platform by Expert System S.p.A. is conceived to bring intelligence to the search, extraction and classification of unstructured information for internal management purposes and for monitoring and analyzing external sources, such as the Internet. For details see Cogito
The Sentient Suite, developed by IO Informatics Inc., integrates heterogeneous data to solve knowledge and project management problems for the Life Sciences industry. For details see Sentient Suite <a href="http://www.franz.com/agraph/allegrograph/" target="_blank">Find out more...</a>
DM2E: Digitized Manuscripts to Europeana
DM2E will develop the tools needed to convert content from diverse metadata sources into the Europeana Data Model. DM2E will also catalyse an active community of cultural heritage institutions wanting to openly license their metadata and submit it to Europeana through a series of hands-on workshops, scholarships and educational documentation provided through the Open GLAM initiative.
DM2E will develop new tools for use within the Digital Humanities community. Italian SME Net7 will lead this work guided by a high-profile Advisory Board.
Work will focus on the development of two cutting-edge tools drawn from Net7's well-known Muruca stack:
Korbo – an aggregation platform for Europeana Linked Data allowing users to create baskets of cultural content
Pundit – a lightweight and easy-to-use semantic annotation tool <a href="http://dm2e.eu/" target="_blank">Find out more...</a>
Exhibit 3.0 by Zepheira, Library of Congress, MIT Libraries, MIT CSAIL, Simile Project
Exhibit 3.0 is a publishing framework for large-scale data-rich interactive Web pages.
Exhibit lets you easily create Web pages with advanced text search and filtering functionalities, with interactive maps, timelines, and other visualizations. The Exhibit 3.0 software has two separate modes: Scripted for building smaller in-browser Exhibits, and Staged for bigger server-based Exhibits. <a href="http://www.simile-widgets.org/exhibit3/" target="_blank">Find out more...</a>
|1/1/2012||Semantic Web Dog Food|
Welcome to the Semantic Web Conference Corpus - a.k.a. the Semantic Web Dog Food Corpus! Here you can browse and search information on papers that were presented, people who attended, and other things that have to do with the main conferences and workshops in the area of Semantic Web research. <a href="http://data.semanticweb.org/" target="_blank">Find out more...</a>
Descarga de ficheros - Biblioteca Nacional de España by Biblioteca Nacional de España
"Direct access to the data dumps of the files in RDF format." Downloadable datasets from the BNE in RDF format include bibliographic and authority data, with "sameAs" relations to VIAF and "sameAs" relations to Libris, Sudoc, DBPedia... <a href="http://www.bne.es/es/Catalogos/DatosEnlazados/DescargaFicheros/" target="_blank">Find out more...</a>
Data links at the BNE by Biblioteca Nacional de España
datos.bne.es is a joint project involving the Ontology Engineering Group (OEG) and the Biblioteca Nacional de España aimed at enriching the Semantic Web with bibliographic data from its catalogue. This initiative has kicked off with the publication, under Linked Data principles, of information from the bibliographic and authorities catalogues, making them available as RDF (Resource Description Framework) knowledge bases. Furthermore, these bases are interrelated with other knowledge bases existing within the Linking Open Data initiative. With this initiative, Spain has joined the ranks of other institutions such as the British Library and the Deutsche Nationalbibliothek that have recently launched similar projects. <a href="http://www.bne.es/en/Catalogos/DatosEnlazados/index.html" target="_blank">Find out more...</a>
Free Your Metadata by Multimedia Lab (ELIS – Ghent University / IBBT), MasTIC (Université Libre de Bruxelles)
Polish and publish your metadata using Google Refine. CLEAN UP: Before linking, metadata always need to be cleaned up. Take those hands out of your pockets and discover how to handle those embarrassing errors. RECONCILE: Match your metadata with controlled vocabularies connected to the Linked Data cloud and join the place which everyone knows but has never seen.
JOIN IN: The Free Your Metadata team is coming to a city near you.
Join and learn how to free your own metadata! <a href="http://freeyourmetadata.org/" target="_blank">Find out more...</a>
Protégé Ontology Editor and Knowledge Acquisition System by Community; Stanford University
Protégé is a free, open source ontology editor and knowledge-base framework.
The Protégé platform supports two main ways of modeling ontologies via the Protégé-Frames and Protégé-OWL editors. Protégé ontologies can be exported into a variety of formats including RDF(S), OWL, and XML Schema.
Protégé is based on Java, is extensible, and provides a plug-and-play environment that makes it a flexible base for rapid prototyping and application development.
The Protégé-OWL editor enables users to build ontologies for the Semantic Web, in particular in the W3C's Web Ontology Language (OWL). "An OWL ontology may include descriptions of classes, properties and their instances. Given such an ontology, the OWL formal semantics specifies how to derive its logical consequences, i.e. facts not literally present in the ontology, but entailed by the semantics. These entailments may be based on a single document or multiple distributed documents that have been combined using defined OWL mechanisms" (see the OWL Web Ontology Language Guide). <a href="http://protege.stanford.edu" target="_blank">Find out more...</a>
Rådata nå! – Norwegian personal name authorities as linked data by BIBSYS, NTNU University Library
The Rådata nå! data set is a collection of around 9 million triples representing around 1.5 million personal names. The data was created by a joint project between BIBSYS and NTNU University Library with funding from ABM-utvikling (the Norwegian Archive, Library and Museums authority). The data is structured as linked data and is available via a public SPARQL endpoint and as a bulk download. <a href="http://www.bibsys.no/files/out/linked_data/autreg/index.html" target="_blank">Find out more...</a>
Semantic Multimedia Wiki by Information Systems and Semantic Web (IsWeb) of the University of Koblenz
This wiki currently serves as a documentation platform for the K-Space Annotation Tool (KAT) and the Multimedia Metadata Ontology (M3O). The purpose of this wiki is not only to provide the documentation, but also to exchange ideas about semantic multimedia, and the hosted tools. <a href="http://semantic-multimedia.org/index.php/Main_Page" target="_blank">Find out more...</a>
|1/1/2012||The Muninn Project|
Blog for the Muninn Project,
"a multidisciplinary, multinational, academic research project investigating millions of records pertaining to the First World War in archives around the world. We currently have agreements with Library and Archives Canada, the British National Archives and the National Archives of Australia.
Our aim is to take archives of digitized documents, extract the written data using massive amount of computing power and turn the resulting information into structured databases. These databases will then support further research in a number of different areas." <a href="http://blog.muninn-project.org/" target="_blank">Find out more...</a>
Europeana: moving to Linked Open Data by Antoine Isaac, Robina Clayphan, Bernhard Haslhofer
Linked data vocabulary management: infrastructure support, data integration, and interoperability by Corey Harper, Gordon Dunsire, Diane Hillman, Jon Phipps
Reviews some of the activities, needs, and issues around vocabulary management in the linked data sphere, the activities of the Dublin Core Metadata Initiative in this regard, and the need for guiding principles, best practices, and supporting infrastructure.
Linking Lives: Creating an End-User Interface Using Linked Data by Jane Stevenson
As one of the earliest digital learning providers The Open University (OU) has a rich heritage of archived learning materials. Whilst there is an increasing interest in the reuse of legacy learning materials there has been no systematic research into understanding their value. The STELLAR project proposes to undertake a survey to ascertain the extent to which academic staff and stakeholders at the OU value these materials, whether that value is based on pedagogic or other reasons, and what they see as the barriers to realising that value. In phase two it is proposed to investigate whether collecting this material into a digital library that makes use of linked data will enhance the value of these materials. This phase of the project will focus on whether these technologies can improve their discoverability, visibility and re-usability. <a href="http://www.open.ac.uk/blogs/stellar/" target="_blank">Find out more...</a>
Report on the Linked Ancient World Data Institute by Thomas Elliott, Sebastian Heath, John Muccigrosso
The Liberty of Invention: Alchemical Discourse and Information Technology Standardization by John A Walsh, Wallace Edd Hooper
The Chymistry of Isaac Newton project, an online scholarly edition of Newton's alchemical manuscripts, has engaged in a process to include a number of core alchemical symbols into the Unicode standard, a standard for digital representation of characters and symbols from the world's languages, scripts, and writing systems. Our article explores the relationship between information technology standardization and humanities research. We discuss Newton's engagement with alchemy and explore the graphic dimensions of alchemical discourse. We illustrate this discussion with examples of Newton's use of alchemical symbols. We examine Unicode itself, particularly a core Unicode principle distinguishing between the abstract character and the image or glyph of the character, and we discuss the tensions between this core principle and the representation of graphic, symbolic, and pictorial discourse. We describe our experience with the Unicode proposal process and illustrate again‚Äîthis time with an organizational scheme for the symbols‚Äîhow the technical standardization process forced a reexamination of our historical materials. Our conclusions reemphasize the potential for mutually beneficial relationships between certain types of information technology standardization and humanities research and suggest that study of the graphic qualities of alchemical discourse, especially in light of competing theories of text represented by standards like Unicode, may contribute to our understanding of the increasingly graphic, iconic, and pictorial nature of information and communication. <a href="http://llc.oxfordjournals.org/content/27/1/55" target="_blank">Find out more...</a>
Semantic Web Special Interest Group by IFLA (International Federation of Library Associations)
"About" page for the IFLA Semantic Web Special Interest Group, with a link to news about activities <a href="http://www.ifla.org/en/about-swsig" target="_blank">Find out more...</a>
LRMI Specification version 0.7 by Learning Resource Metadata Initiative Technical Working Group
Final draft, for comment, of a set of "lightweight" properties developed to support discovery of educational resources, particularly as incorporated into online published content and products.
This metadata extension builds on the work of Schema.org.
The LRMI is co-sponsored by The Association of Educational Publishers and Creative Commons <a href="http://wiki.creativecommons.org/LRMI/Properties/Version_0.7" target="_blank">Find out more...</a>
RDF Extension for Google Refine by Fadi Maali, Richard Cyganiak
RDF Extension for Google Refine enables exporting interlinked RDF data from Google Refine projects. Reconcile against SPARQL endpoints, RDF dumps; Search the Web for related RDF datasets; GUI for defining the shape of the RDF graph; Use your own vocabulary or import existing ones; Autocomplete for property and class names... <a href="http://lab.linkeddata.deri.ie/2010/grefine-rdf-extension/" target="_blank">Find out more...</a>
Tools - semanticweb.org by AIFB at the Karlsruhe Institute of Technology
A community wiki site (using Semantic MediaWiki) with information about semantic web/linked data tools (as of this posting, 174 tools are listed). Information includes version, release date, responsible party, and status (stable, beta, etc.). <a href="http://semanticweb.org/wiki/Tools" target="_blank">Find out more...</a>
DAMS object rdf graph by Fleming, Declan
Visualization of an instance of the RDF data model used by the Digital Asset Management System developed at UC San Diego for metadata stores. Incorporates properties derived from MODS and PREMIS schemas and locally defined properties. <a href="http://dl.dropbox.com/u/6923768/Work/DAMS%20object%20rdf%20graph.png" target="_blank">Find out more...</a>
SCoRO, the Scholarly Contributions and Roles Ontology by David Shotton
SCoRO, the Scholarly Contributions and Roles Ontology, is an ontology for describing the contributions that may be made and the roles that may be held by a person with respect to a research project, a research investigation, or a research output such as a journal article or a dataset, and for describing the effort contributed by each person in achieving particular research goals or creating particular research outputs.
SCoRO also permits the recording of an author's position in the authorship list of an article, and of the credit assigned to that author for his/her overall contribution to the journal article and the research underlying it, expressed as a percentage of the overall credits assigned to all the authors for that article, which sum to 100%. <a href="http://www.essepuntato.it/lode/http://purl.org/spar/scoro" target="_blank">Find out more...</a>
|2/14/2012||Linked Open Data by Europeana|
Simple animation to explain what Linked Open Data is and why it's a good thing, both for users and for data providers. To find more information about Europeana's linked data pilot, visit http://data.europeana.eu. If you'd like to read more on our open data policy, find it at pro.europeana.eu/support-for-open-data <a href="http://vimeo.com/36752317" target="_blank">Find out more...</a>
The Linked Data Service of the German National Library by Deutsche Nationalbibliothek
Describes the purposes and approach of the linked data services provided by and planned for the DNB, the data modeling for bibliographic and authority data, methods of access. <a href="http://files.d-nb.de/pdf/linked_data_e.pdf" target="_blank">Find out more...</a>
Dokumentation des Linked Data Services der DNB - linked-data-service - Deutsche Nationalbibliothek - Wiki by Deutsche Nationalbibliothek
Main documentation site for DNB linked data <a href="https://wiki.d-nb.de/display/LDS/Dokumentation+des+Linked+Data+Services+der+DNB" target="_blank">Find out more...</a>
The Linked Data Service of the German National Library, Version 4.1 by Deutsche Nationalbibliothek
Documents the project to provide linked data services from the DNB metadata, and the data modeling. <a href="http://files.d-nb.de/pdf/linked_data_e.pdf" target="_blank">Find out more...</a>
FaBiO, the FRBR-aligned Bibliographic Ontology by David Shotton, Silvio Peroni
FaBiO, the FRBR-aligned Bibliographic Ontology, is an ontology for recording and publishing on the Semantic Web descriptions of entities that are published or potentially publishable, and that contain or are referred to by bibliographic references, or entities used to define such bibliographic references. FaBiO entities are primarily textual publications such as books, magazines, newspapers and journals, and items of their content such as poems and journal articles. However, they also include datasets, computer algorithms, experimental protocols, formal specifications and vocabularies, legal records, governmental papers, technical and commercial reports and similar publications, and also bibliographies, reference lists, library catalogues and similar collections.
FaBiO classes are structured according to the FRBR schema of Works, Expressions, Manifestations and Items. Additional properties have been added to extend the FRBR data model by linking Works and Manifestations (fabio:hasManifestation and fabio:isManifestationOf), Works and Items (fabio:hasPortrayal and fabio:isPortrayedBy), and Expressions and Items (fabio:hasRepresentation and fabio:isRepresentedBy). <a href="http://www.essepuntato.it/lode/http://purl.org/spar/fabio" target="_blank">Find out more...</a>
OpenART Project - York Digital Library Wiki
OpenART, a partnership between the University of York, the Tate and technical partners, Acuity Unlimited, will design and expose linked open data for an important research dataset entitled "The London Art World 1660-1735", created as part of the AHRC funded Court, Country, City: British Art 1660–1735 project. Drawing on metadata about artists, places and sales from a defined period of art history scholarship, the dataset offers a complete picture of the London art world during the late 17th and early 18th centuries. Furthermore, links drawn to the Tate collection and the incorporation of collection metadata will allow exploration of works in their contemporary locations. A history of working together, domain expertise, and an existing technological platform will help this short project run smoothly. OpenART will re-use existing authorities and vocabularies, ontologies, metadata application profiles and other services and data sources identified in the course of the project to normalise and structure the metadata and will complete the process of modelling and exposing a defined dataset as open metadata. The process will be designed to be scalable to much richer and more varied datasets, both at York, Tate and beyond. OpenART's output will be threefold: 1) a significant scholarly dataset exposed as open metadata; 2) enhanced resource discovery for cultural metadata; and 3) re-usable lessons and processes for the exposure of open metadata by cultural institutions. <a href="https://dlibwiki.york.ac.uk/confluence/display/openart/Home" target="_blank">Find out more...</a>
Web Data Commons by Freie Universitat Berlin, Karlsruhe Institute of Technology
The Web Data Commons project extracts structured data describing products, people, organizations, places, and events from several billion web pages and provides the extracted data for download. Web Data Commons thus enables you to use the data without needing to crawl the Web yourself.
The extracted RDFa and microformat data are available for download as RDF quads. <a href="http://webdatacommons.org/" target="_blank">Find out more...</a>
CNI: Linked Data for Libraries: Why Should We Care? Where Should We Start? - YouTube by Jennifer Bowen, Phillip Schreur
IO10 unveils the beta of WordLift 2.0 in Saarbrücken | WordLift by WordLift Team
Blog post about developments in the existing WordPress plugin (2.0) to more fully support schema.org vocabularies. <a href="http://wordlift.insideout.io/wordlift-2-0-beta/" target="_blank">Find out more...</a>
SemanticWebImport - Gephi:Wiki by Inria
SemanticWebImport is a plugin for the Gephi graph visualization software.
The SemanticWebImport plugin is intended to allow the import of semantic data into Gephi. The imported data are obtained by running a SPARQL query against the semantic data, which can be accessed in three ways:
by accessing local RDF, RDFS, or rule (rul) files and using the embedded Corese engine to apply the SPARQL query;
by accessing a remote REST SPARQL endpoint. In that case, the SPARQL request is applied remotely and the graph is built locally by analyzing the result sent by the REST endpoint;
by accessing a remote SOAP SPARQL endpoint. As for the REST endpoint, the resulting graph is built from the result returned by the endpoint. <a href="http://wiki.gephi.org/index.php/SemanticWebImport" target="_blank">Find out more...</a>
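Whichever access method is used, the plugin's input is an ordinary SPARQL query. A hypothetical CONSTRUCT query like the following (FOAF terms are real; the data is assumed) would return a subgraph in which each result triple becomes an edge for Gephi to lay out:

```sparql
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

# Build an acquaintance graph: each constructed triple is one edge
CONSTRUCT { ?person foaf:knows ?friend }
WHERE {
  ?person a foaf:Person ;
          foaf:knows ?friend .
}
LIMIT 1000
```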
Sharing cultural heritage the linked open data way: why you should ... by Johan Oomen, Lotte Belice Baltusen, Marieke Van Erp
Cultural heritage institutions are beginning to explore the added value of sharing data. We report on Dutch initiatives that have started opening up their data through far-reaching open licenses, as well as initiatives that are using the Linked Open Data cloud to integrate and enrich heritage collection metadata. <a href="http://www.slideshare.net/PaulaUdondek/sharing-cultural-heritage-the-linked-open-data-way-why-you-should-sign-up#LODLAM" target="_blank">Find out more...</a>
Callimachus Project by 3 Round Stones Inc.
Callimachus (kəlĭm'əkəs) is a framework for data-driven applications based on Linked Data principles. Callimachus allows Web authors to quickly and easily create semantically-enabled Web applications.
Callimachus builds on Sesame, Mulgara or OWLIM for RDF storage, AliBaba (a RESTful object-RDF library) and uses a revolutionary template-by-example technique for viewing and editing resources. One of the interesting aspects of Callimachus is its use of RDFa as a query language; templates are parsed to build SPARQL from RDFa markup and then filled with query results.
Callimachus is a stand-alone server and data store bundled together. Java 6 is the only external dependency needed to run the server. Callimachus includes its own IDE that can be accessed and used within a Web browser. <a href="http://callimachusproject.org/index.xhtml?view" target="_blank">Find out more...</a>
Wikidata - Meta by Wikimedia Germany
Wikidata aims to create a free knowledge base about the world that can be read and edited by humans and machines alike. It will provide data in all the languages of the Wikimedia projects, and allow central access to data in a similar vein to what Wikimedia Commons does for multimedia files. Wikidata is proposed as a new Wikimedia hosted and maintained project.
This page is intended to provide the entry point for information and discussions about the Wikidata project proposal and the status of the project. The initial development of the project is funded with a generous donation by the Allen Institute for Artificial Intelligence [ai]2, the Gordon and Betty Moore Foundation, and Google, Inc. <a href="http://meta.wikimedia.org/wiki/Wikidata" target="_blank">Find out more...</a>
Asset Description Metadata Schema (ADMS) by Makx Dekkers, João Rodrigues Frade, European Union
Download, description, documentation site for the ADMS schema.
From the email announcement by Frade: "ADMS v1.00 is expressed in UML, RDF and XSD and it helps projects and repositories to better document what their semantic assets are about, their status, theme, version, etc. and where they can be found on the Web (URL). Once the ADMS description is created it can be published on ISA's collaborative platform, Joinup, while the asset itself remains on the website of its publisher. This new functionality is known as the federation of semantic assets repositories and will be available in the summer of 2012. As the semantic asset becomes more visible and discoverable, more people are likely to reuse it. This brings benefits to other projects (e.g. the project can be delivered faster and with more interoperability) and to its publisher (larger user base)." <a href="https://joinup.ec.europa.eu/asset/adms/release/100" target="_blank">Find out more...</a>
Wikidata: a new open data repository for the world | Open Knowledge Foundation Blog by Pintscher, Lydia
This month Wikidata, a new project of Wikimedia Germany, finally started. The ambitious goal of the project is to create an open data repository for the world's knowledge that can be accessed and edited by everyone, humans and machines alike. Wikidata will be a place where Wikipedia's editors and others will be able to collect statements about the world we live in, and references for them. Wikidata will become an enormous open collection of knowledge. <a href="http://blog.okfn.org/2012/04/19/wikidata-a-new-open-data-repository-for-the-world/" target="_blank">Find out more...</a>
Linked Heritage is a 30-month EU project, started on 1 April 2011. <a href="http://www.linkedheritage.eu/" target="_blank">Find out more...</a>
Make your HTML pages smarter with RDFa 1.1 Lite by Uche Ogbuji
Resource Description Framework (RDF) has evolved into increasingly pragmatic formats over time. RDF annotation (RDFa) has been particularly successful as a system for annotating HTML documents inline on the web. It is supported by Google and other search engines in the form of Rich Snippets. The emergence of microdata and the Schema.org initiative applied pressure to simplify RDFa even further. The W3C took action and produced a radically simplified version: RDFa 1.1 Lite. In this article, learn about RDFa Lite, and get a head start on producing and processing the shape of Rich Snippets to come. <a href="http://www.ibm.com/developerworks/library/wa-rdfalite/index.html" target="_blank">Find out more...</a>
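RDFa 1.1 Lite reduces RDFa to a handful of attributes (vocab, typeof, property, resource and prefix). A minimal schema.org snippet of the kind Rich Snippets consume might look like this (the person and organization are invented for illustration):

```html
<!-- RDFa 1.1 Lite: vocab sets the default vocabulary, typeof the class,
     property the predicates; each annotated value becomes an RDF triple. -->
<div vocab="http://schema.org/" typeof="Person">
  <span property="name">Alice Example</span> works as a
  <span property="jobTitle">metadata librarian</span> at
  <a property="affiliation" href="http://example.org/library">Example Library</a>.
</div>
```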
Linked Open Data for Libraries, Archives, and Museums: An Aggregator's View by Richard Urban
Open Annotation Collaboration by Mellon Foundation
Website for the Open Annotation Collaboration, including announcements, "about", project plan, links to the vocabulary... "The overarching goals of the Open Annotation Collaboration (OAC) are to facilitate the emergence of a Web and resource-centric interoperable annotation environment that allows leveraging annotations across the boundaries of annotation clients, annotation servers, and content collections, to demonstrate the utility of this environment, and to see widespread adoption of this environment." <a href="http://www.openannotation.org/" target="_blank">Find out more...</a>
Global Interoperability and Linked Data in Libraries
URIs, identity, aliases & "consolidation" by Petej
Blog post about issues encountered in the Linking Lives project: mapping EAD data to RDF and creating identifiers for persons. <a href="http://archiveshub.ac.uk/linkinglives/p=154" target="_blank">Find out more...</a>
XQuery/SPARQL Tutorial - Wikibooks, open books for an open world
This page is part of a book that is a collaborative project to furnish examples of XQuery. The page focuses on SPARQL: not just its use from XQuery, but examples of many of its functions, including SPARQL 1.1 features. <a href="http://en.wikibooks.org/wiki/XQuery/SPARQL_Tutorial#Compute_employees_with_the_same_salary" target="_blank">Find out more...</a>
Muruca: Semantic Digital Library Framework
Muruca is a collection of Open Source applications to create, manage and run Semantic Digital Libraries.
Muruca tools are natively built for dealing with Linked Open Data, Open Access content and Semantic Web technologies. <a href="http://www.muruca.org/" target="_blank">Find out more...</a>
LOD-LAM Zotero Group Library sponsored by the Digital Library Federation by Nicole Colovos, ALA LITA Linked Library Data Interest Group, Digital Library Federation
Information about the LOD-LAM Zotero Group webliography on linked data (this site) - including tips on installing and using Zotero client/plugin, contributing items, etc. Developed by an ad-hoc group contributing to this site: <a href="http://connect.ala.org/node/177340" target="_blank">Find out more...</a>
Extended Semantic Web Conference 2012 by STI International
The 9th conference takes place May 27-31, 2012, in Heraklion, Crete, Greece.
The Extended Semantic Web Conference (ESWC) is a major venue for discussing the latest scientific results and technology innovations around semantic technologies. Building on its past success, ESWC is seeking to broaden its focus to span other relevant research areas in which Web semantics plays an important role.
The goal of the Semantic Web is to create a Web of knowledge and services in which the semantics of content is made explicit and content is linked both to other content and to services, enabling novel applications that combine content from heterogeneous sites in unforeseen ways and support enhanced matching between users' needs and content. This network of knowledge-based functionality will weave together a large network of human knowledge, and make this knowledge machine-processable to support intelligent behaviour by machines. Creating such an interlinked Web of knowledge, which spans unstructured content, RDF, and multimedia content and services, requires the collaboration of many disciplines, including but not limited to: Artificial Intelligence, Natural Language Processing, Database and Information Systems, Information Retrieval, Machine Learning, Multimedia, Distributed Systems, Social Networks, Web Engineering, and Web Science. These complementarities are reflected in the outline of the technical program of ESWC 2012; in addition to the research and in-use tracks, we will feature two special tracks putting particular emphasis on inter-disciplinary research topics and areas that show the potential of exciting synergies for the future: eGovernment and Digital Libraries. ESWC 2012 will present the latest results in research, technologies and applications in its field. Besides the technical program organized over twelve tracks, the conference will feature a workshop and tutorial program, system descriptions and demos, a posters exhibition, a doctoral symposium, as well as the ESWC summer school, which will be held prior to the conference. <a href="http://2012.eswc-conferences.org/" target="_blank">Find out more...</a>
Representing knowledge – metadata, data and linked data by Neil Jefferies
Although we can describe relationships between objects using RDF, we are limited to making assertions of the form <subject><predicate/relationship><object> (the RDF "triple"). In practice, relatively few statements of this form can be considered universally and absolutely true. For example: a person may live at a particular address, but only for a certain period of time; the copyright on a book may last for 50 years, but only in a particular country. Essentially, what is needed is a mechanism to define the circumstances under which a relationship can be considered valid. A number of possible mechanisms could do this: replacing RDF triples with "quads" that include a context object, or annotating relationships using OAC.
These examples are really just special cases of a more general requirement that is of great interest to scholars. This is the ability to qualify a relationship or assertion to capture an element of provenance. Specifically, we need to know who made an assertion, when, on the basis of what evidence, and under which circumstances it holds. <a href="http://en.m.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2012-07-02/Op-ed#section_1" target="_blank">Find out more...</a>
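The "quad" approach can be sketched in TriG, where a fourth element names a graph and further statements about that graph carry the context and provenance (all identifiers below are invented; only the dcterms: and xsd: terms are standard):

```trig
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix xsd:     <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:      <http://example.org/> .

# The assertion itself, placed in a named graph...
ex:g1 {
  ex:alice ex:livesAt ex:some-address .
}

# ...and statements about that graph, scoping and sourcing the assertion
ex:g1 dcterms:source ex:census-1901 ;
      ex:validFrom  "1899-01-01"^^xsd:date ;
      ex:validUntil "1902-06-30"^^xsd:date .
```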
Semantic Tech & Business Conference by Webmediabrands, Semanticweb.com
June 3-7, 2012; San Francisco, CA.
The San Francisco Semantic Tech & Business Conference (SemTechBiz) brings together today's industry thought leaders and practitioners to explore the challenges and opportunities jointly impacting both corporate business leaders and technologists. Five comprehensive days of fresh insight and immersive learning from global experts in technology, financial services, insurance, healthcare, publishing, government, automotive and enterprise data. Tutorials on Monday, 6/4/2012, include Introduction to RDF, RDFS & OWL and other technical basics. Early bird rates until 3/29/2012. <a href="http://semtechbizsf2012.semanticweb.com/" target="_blank">Find out more...</a>
Linked Data: A Personal View from Jerry Persons by Jerry Persons
This piece inaugurates an occasional series by or about linked data practitioners that will be published here on LODLAM.net and cross-posted on the Digital Library Federation blog. The first post in the series is a personal reflection on the linked data landscape written by Jerry Persons, technology analyst at Knowledge Motifs, Chief Information Architect emeritus at Stanford, and author of the CLIR-commissioned Literature survey in support of Stanford Linked Data Workshop. <a href="http://lodlam.net/2012/06/18/linked-open-data-a-personal-view-from-jerry-persons/" target="_blank">Find out more...</a>
Two Huge Linked Data Announcements (OCLC Research blog) by Roy Tennant
Discusses the announcements of the full 23rd edition of the Dewey Decimal Classification as linked data and of the schema.org markup added to WorldCat. <a href="http://hangingtogether.org/?p=1903" target="_blank">Find out more...</a>
Organization Ontology Specification - 0.3 by The Muninn Project, Robert Warren
The Muninn Organization Ontology is meant to deal with organizations, people and the relationships that bind them all.
The initial design objectives for the ontology were:
Support structures for future reasoning about military history problems.
Ensuring that the names of persons and organizations are easily linkable through record linkage methodologies.
Support for the recording of instances of organizations and countries that no longer exist.
Support for ontological structures which record changes in the state of the data.
Support for imprecise, inaccurate and contradictory historical data. <a href="http://rdf.muninn-project.org/ontologies/organization.html" target="_blank">Find out more...</a>
Military Ontology Specification - 0.2 by The Muninn Project, Robert Warren
The Muninn Military Ontology marks up information about military people, organizations and events. <a href="http://rdf.muninn-project.org/ontologies/military.html" target="_blank">Find out more...</a>
Linked Data, Libraries and Building on Cooperative Relationships by Eric Miller
Eric Miller gives general linked data information and talks about the role libraries and library data can play in building the semantic web. This was shortly after his firm, Zepheira, was hired as a consultant to the LC Bibliographic Framework initiative. <a href="https://docs.google.com/file/d/0B6fsJZ8pZx88OVlITGpKSU9MWXM/edit" target="_blank">Find out more...</a>
OCLC WorldCat Linked Data Release – Significant In Many Ways | Data Liberate by Richard Wallis
Discusses aspects of the WorldCat.org release of embedded RDFa linked data from the WorldCat database, under an ODC-BY license, based on use of schema.org vocabularies. <a href="http://dataliberate.com/2012/06/oclc-worldcat-linked-data-release-significant-in-many-ways/" target="_blank">Find out more...</a>
Beginner's Guide to RDF - TDWG RDF/OWL Task Force by Steve Baskauf, tdwg-rdf TDWG RDF/OWL Task Group
This document provides concise information about topics related to RDF and OWL in the context of the biodiversity informatics community. It is intended as an introduction for persons who are not already familiar with RDF and OWL and as a reference for persons who are familiar but would like organized access to additional reference material. <a href="http://code.google.com/p/tdwg-rdf/wiki/Beginners" target="_blank">Find out more...</a>
Asset Description Metadata Schema (ADMS) by Makx Dekkers, João Rodrigues Frade, European Union
ADMS is intended as a model that facilitates federation and co-operation. It is not the primary intention that repository owners redesign or convert their current systems and data to conform to ADMS, but rather that ADMS can act as a common layer among repositories that want to exchange data.
Asset Description Metadata Schema (ADMS) was developed under the European Commission's ISA Programme. This is the namespace document, generated from the associated RDF schema. Full documentation is provided in the ADMS Specification document itself. This includes background information, use cases, the conceptual model and full definitions for all terms used. <a href="http://www.w3.org/ns/adms" target="_blank">Find out more...</a>
Bricolage: A Linked Data Project, funded by JISC by University of Bristol
The University of Bristol Collections as Linked Open Data (BRICOLAGE) project will publish catalogue metadata as Linked Open Data for two of its most significant collections: the Penguin Archive, a comprehensive collection of the publisher's papers and books; and the Geology Museum, a 100,000 specimen collection housing many unique and irreplaceable resources. The metadata will be licensed for ease of reuse according to JISC guidelines. The project will re-apply the best practice processes and tools produced by relevant preceding projects to create persistent identifiers, identify and create links to authoritative datasets and vocabularies, and work with the two collections' infrastructure platforms: CALM and Drupal. The Linked Data production workflows will be embedded in the collections' teams to ensure future sustainability. The project will also produce two simple demonstrators to illustrate the potential of data linking and reuse, and will encode resource microdata into the Geology Museum's forthcoming online catalogue with the aim of improving collection visibility via the major search engines. <a href="http://bricolage.ilrt.bris.ac.uk/category/general/" target="_blank">Find out more...</a>
Linked Data: A Way Out of the Information Chaos and toward the Semantic Web by Michael A. Keller
Article from the University Librarian, Director of Academic Information Resources, and Publisher of HighWire Press at Stanford - putting forth linked data as a solution to many discovery/metadata challenges facing libraries today. These include "too many silos", lack of precision and recall in library discovery mechanisms, metadata distant from the Web, and competition from Google and its competitors. <a href="http://www.educause.edu/EDUCAUSE+Review/EDUCAUSEReviewMagazineVolume46/LinkedDataAWayOutoftheInformat/231827" target="_blank">Find out more...</a>
Vocabulary of a Friend (VOAF) (7/3/2012)
VOAF is a vocabulary specification providing elements allowing the description of vocabularies (RDFS vocabularies or OWL ontologies) used in the Linked Data Cloud. In particular it provides properties expressing the different ways such vocabularies can rely on, extend, specify, annotate or otherwise link to each other. It itself relies on Dublin Core and voiD. The name of the vocabulary makes an explicit reference to FOAF because VOAF can be used to define networks of vocabularies in a way similar to the one FOAF is used to define networks of people. <a href="http://lov.okfn.org/vocab/voaf/v2.0/index.html" target="_blank">Find out more...</a>
Turtle: Terse RDF Triple Language by David Beckett, Tim Berners-Lee, Eric Prud'hommeaux, Gavin Carothers
The Resource Description Framework (RDF) is a general-purpose language for representing information in the Web.
This document defines a textual syntax for RDF called Turtle that allows an RDF graph to be completely written in a compact and natural text form, with abbreviations for common usage patterns and datatypes. Turtle provides levels of compatibility with the existing N-Triples format as well as the triple pattern syntax of the SPARQL W3C Recommendation. <a href="http://www.w3.org/TR/turtle/" target="_blank">Find out more...</a>
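The abbreviations the document mentions (namespace prefixes, "a" for rdf:type, ";" to repeat a subject and "," to repeat a subject-predicate pair) look like this in practice; the resources are invented, the foaf: terms are real:

```turtle
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix ex:   <http://example.org/> .

# 'a' abbreviates rdf:type; ';' repeats the subject; ',' repeats
# the subject and predicate. This block encodes five triples, which
# would need five full lines in N-Triples.
ex:green-goblin
    a foaf:Person ;
    foaf:name "Green Goblin" ;
    foaf:knows ex:spiderman , ex:doc-ock ;
    foaf:age 42 .
```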
Out of the Trenches : A Linked Open Data Project | Canadiana by William Wueppelmann
Partners of the Pan-Canadian Documentary Heritage Network (PCDHN) have developed a "proof-of-concept" to showcase a sampling of the network's wealth of digital resources using "linked open data" and principles of the semantic web. The underlying premise is to expose the metadata for these resources using RDF/XML and existing/published ontologies (element sets) and vocabularies, maximizing discovery by a broad user community.
The partners selected the First World War as the topic for the digital resources to be contributed to the proof-of-concept. The metadata for these digital resources was provided by five partner institutions. <a href="http://www.canadiana.ca/en/pcdhn-lod" target="_blank">Find out more...</a>
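RDF/XML, the serialization the partners chose, encodes the same triple model in XML. A hypothetical record for a digitized First World War photograph might look like this (the identifier and values are invented; the dcterms: properties are standard Dublin Core terms):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dcterms="http://purl.org/dc/terms/">
  <!-- Illustrative only: each child element is one triple about the photo -->
  <rdf:Description rdf:about="http://example.org/pcdhn/photo-123">
    <dcterms:title xml:lang="en">Soldiers in a trench, 1916</dcterms:title>
    <dcterms:subject rdf:resource="http://dbpedia.org/resource/World_War_I"/>
    <dcterms:type rdf:resource="http://purl.org/dc/dcmitype/StillImage"/>
  </rdf:Description>
</rdf:RDF>
```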
Integrated multilingual access to diverse Japanese humanities digital archives by dynamically linking data | Digital Humanities 2012 by Tameko Kuyama, Biligsaikkhan Batjargal, Fuminori Kimura, Akira Maeda
This poster provides a summary of our ongoing project for providing integrated access to Japanese multiple digital libraries, archives, and museums. The main goal to construct a federated access system for Japanese humanities databases, which searches multiple databases in parallel and provides on-the-fly integration of the results, has required the system to deal with heterogeneous metadata schemas in various formats. Aggregation and integration of the retrieved results in English and Japanese are complicated if a search needs to be performed from multilingual sources. Ukiyo-e, Japanese traditional woodblock printing, is known worldwide as one of the fine arts of the Edo period (1603-1868). Many museums and organizations in Japan as well as in western countries hold numerous Ukiyo-e prints in their collections. As a result of worldwide digitization over the last decade, many cultural institutions including libraries, archives, and museums started to expose digitized images of Ukiyo-e prints on the Internet. How to find the necessary information effectively from multiple databases is becoming an essential issue for users. In other words, users need an efficient way of searching multiple databases, especially when it is getting more difficult to know which museum has a particular Ukiyo-e print. Thus, federated search of multiple Ukiyo-e databases scattered around the world is a feature expected by humanities researchers of Japanese culture. This poster proposes a method of integrated multilingual access to heterogeneous Ukiyo-e databases for improving the search efficiency. <a href="http://www.dh2012.uni-hamburg.de/conference/programme/abstracts/integrated-multilingual-access-to-diverse-japanese-humanities-digital-archives-by-dynamically-linking-data/" target="_blank">Find out more...</a>
Modeling people and organizations for legislative information by Bruce Thomas
Talks about data modeling decisions, particularly in dealing with people and organizations when describing legislative relationships. <a href="http://blog.law.cornell.edu/metasausage/2012/07/17/modeling-people-and-organizations-for-legislative-information/" target="_blank">Find out more...</a>
Libraries and Linked Data: Looking to the Future
Want to know more about linked data and how it will transform libraries? In this new workshop, appropriate for anyone with an introductory knowledge of linked data, library data guru and consultant Karen Coyle will give you an overview of the tools currently available for the creation of linked data.
Archives & Linked Open Data: are our tools ready to "complete the picture"? by Anila Angjeli
Some observations from a metadata and standards specialist, a librarian collaborating with archivists to interconnect our data. SAA12, session 401. <a href="http://files.archivists.org/conference/sandiego2012/401-Angjeli.pdf" target="_blank">Find out more...</a>
Third Annual VIVO Conference: August 22-24, 2012 | VIVO
This three-day conference runs August 22-24, 2012, at the InterContinental in Miami, FL.
This year's VIVO conference creates a unique opportunity for people from across the country and around the world to come together in the spirit of promoting scholarly collaboration and research discovery. <a href="http://vivoweb.org/conference2012" target="_blank">Find out more...</a>
LOD2 – Creating Knowledge out of Interlinked Data
LOD2 is a large-scale integrating project co-funded by the European Commission within the FP7 Information and Communication Technologies Work Programme (Grant Agreement No. 257943). Commencing in September 2010, this 4-year project comprises leading Linked Open Data technology researchers, companies, and service providers (15 partners) from across 11 European countries (and one associated partner from Korea) and is coordinated by the AKSW research group at the University of Leipzig.
It is developing:
enterprise-ready tools and methodologies for exposing and managing very large amounts of structured information on the Data Web,
a testbed and bootstrap network of high-quality multi-domain, multi-lingual ontologies from sources such as Wikipedia and OpenStreetMap,
algorithms based on machine learning for automatically interlinking and fusing data from the Web,
standards and methods for reliably tracking provenance, ensuring privacy and data security as well as for assessing the quality of information, and
adaptive tools for searching, browsing, and authoring of Linked Data. <a href="http://lod2.eu/Welcome.html" target="_blank">Find out more...</a>
Linked Data Services for Theses and Dissertations by Thomas Johnson, Michael Boock
Linked Data presents new opportunities to expand services surrounding theses and dissertations. By creating open datasets, ETD systems can be built to interoperate with other institutional data as well as with outside metadata sources. However, much foundational work must be done before these advantages can be fully realized.
This paper details work at Oregon State University to create a Linked Dataset covering the University's theses and dissertations. Using data from existing MARC and Qualified Dublin Core records, we have established a process and model for crosswalking data from existing records into a variety of Semantic Web vocabularies. Our approach is to create basic services on a dedicated thesis and dissertation interface, incrementally extending those available through our institutional repository. We describe services implemented, those in progress and plans for continued work. We also address the limitations of our existing metadata and resulting challenges in crosswalking and interoperability.
While Linked Data has great promise, implementation must target specific services that can be implemented today. We plan continued work to improve our data models and to utilize new data from other linked data sources as they emerge. <a href="http://scholarsarchive.library.oregonstate.edu/xmlui/handle/1957/32977" target="_blank">Find out more...</a>
List of applications for browsing, querying and working with DBpedia data. <a href="http://wiki.dbpedia.org/Applications" target="_blank">Find out more...</a>
Linked Open Data Conference by Cataloguing and Indexing Group Scotland
"Programme will be available shortly, and will feature keynote speaker Richard Wallis (LOD and semantic web expert and Technology Evangelist at OCLC) and speakers from across Europe, including EDINA, the Open Knowledge Foundation, the National Library of Scotland, Bibliothèque nationale de France, the Polytechnic University of Madrid, and the DBC (Denmark)." Fees: GBP 50.00 + VAT (GBP 40.00 + VAT for CIG/CIGS members). <a href="http://www.slainte.org.uk/events/EvntShow.cfm?uEventID=2999" target="_blank">Find out more...</a>
LiAM: Linked Archival Metadata by Tufts University Digital Collections and Archives
LiAM is focused on planning for the application of linked data approaches to archival description. Our goal is to better understand the benefits that linked data could bring to the management, discovery, and use of archival collections while also investigating the efforts required to implement these approaches. Central to this effort is identifying graduated approaches that will enable archives to build on existing description as well as mapping out a more ambitious vision for linked data in archives. <a href="http://sites.tufts.edu/liam/" target="_blank">Find out more...</a>
EUCLID Module 1: Linked Data by Barry Norton
This module aims to provide a general overview of the main topics related to using Linked Data. It is only an introduction and some of the topics are only mentioned and then discussed in greater detail in one of the following modules. The main goal of this module is to describe the overall motivating scenario and to teach the fundamental Linked Data principles, while briefly describing the context of the technologies and possible application solutions. <a href="http://stadium.open.ac.uk/stadia/preview.php?whichevent=2056&s=29&option=&record=0" target="_blank">Find out more...</a>
SWAIE 2012 | Semantic Web and Information Extraction Workshop
Workshop dates Oct 8-9 2012 in Galway, Ireland.
The goal of this workshop is to bring researchers from the fields of Information Extraction and the Semantic Web together to foster inter-domain collaboration. There is a vast wealth of information available in textual format that the Semantic Web cannot yet tap into: 80% of data on the Web and on internal corporate intranets is unstructured, hence analysing and structuring the data – social analytics and next generation analytics – is a large and growing endeavour. Here, the Information Extraction community could help as they specialise in mining the nuggets of information from text. Information Extraction techniques could be enhanced by annotated data or domain-specific resources. The Semantic Web community has taken great strides in making these resources available through the Linked Open Data cloud, which are now ready for uptake by the Information Extraction community. The workshop invites contributions around three particular topics: 1) Semantic Web-driven Information Extraction, 2) Information Extraction for the Semantic Web, and 3) applications and architectures on the intersection of Semantic Web and Information Extraction. <a href="http://semanticweb.cs.vu.nl/swaie2012/" target="_blank">Find out more...</a>
KMi/Projects/Semantic Web and Knowledge Services by Knowledge Media Institute
This is a search on KMI projects in the category Semantic Web and Knowledge Services. Some are educational, some are creative. <a href="http://kmi.open.ac.uk/projects/theme/semantic-web-and-knowledge-services" target="_blank">Find out more...</a>
ISWC2012 | The 11th International Semantic Web Conference by SWSA - Semantic Web Science Association, University of Zurich
ISWC 2012 will occur 2012-11-11 through 2012-11-15 in Boston, Mass., USA. ISWC is the premier international forum for the Semantic Web / Linked Data community. Here, scientists, industry specialists, and practitioners meet to discuss the future of practical, scalable, user-friendly, and game-changing solutions. Includes tutorials and workshops, various tracks (see Extra section) and also the Semantic Web Challenge (showcase for applications; Billion Triples Challenge and Open Track). <a href="http://iswc2012.semanticweb.org/" target="_blank">Find out more...</a>
DeRiVE 2012 | Detection, Representation and Exploitation of Events in the Semantic Web
Workshop in conjunction with ISWC, in Boston, Mass.
The goal of DeRiVE 2012 is to strengthen the participation of the semantic web community in the recent surge of research on the use of events as a key concept for representing knowledge and organising and structuring media on the web. The workshop invites contributions to three central questions, and its goal is to formulate answers to these questions that advance and reflect the current state of understanding. Each submission will be expected to address at least two questions explicitly, if possible including a system demonstration. This year, we specifically invite contributions that address both event and conversation semantics in multimedia and social media. The most substantial contributions to the workshop will be presented orally (and if possible with a demo) in sessions organised according to the questions addressed, with time allocated for deep discussion. The workshop will also include a lightning talk session for late-breaking work. <a href="http://semanticweb.cs.vu.nl/derive2012/" target="_blank">Find out more...</a>
|11/12/2012||Semantic Web Standards (wiki)|
The goal of this wiki is to provide a "first stop" for more information on Semantic Web technologies, in particular on Semantic Web Standards published by the W3C. It does not aim to give a complete set of information on Semantic Web related events, conferences, ontologies or community efforts; there are already a number of sites maintained by the community that users can refer to (see some below). <a href="http://www.w3.org/2001/sw/wiki/Main_Page" target="_blank">Find out more...</a>
Third International Workshop on Consuming Linked Data (COLD2012)
The quantity of published Linked Data is increasing dramatically. However, applications that consume Linked Data are not yet widespread. Current approaches lack methods for seamless integration of Linked Data from multiple sources, dynamic discovery of available data and data sources, provenance and information quality assessment, application development environments, and appropriate end user interfaces. Addressing these issues requires well-founded research, including the development and investigation of concepts that can be applied in systems which consume Linked Data from the Web. Following the success of the previous editions of this workshop, we organize this third edition in order to provide a platform for discussion and work on these open research problems. The main objective is to provide a venue for scientific discourse, including systematic analysis and rigorous evaluation, of concepts, algorithms and approaches for consuming Linked Data. <a href="http://km.aifb.kit.edu/ws/cold2012/" target="_blank">Find out more...</a>
Linked Data at the Open University: From Technical Challenges to Or... by Mathieu d'Aquin, Stuart Brown
Describes the implementation of linked data to connect and present data from library and university operations. <a href="http://www.slideshare.net/mdaquin/linked-data-at-the-open-university-from-technical-challenges-to-organizational-innovation" target="_blank">Find out more...</a>
LODLAM Summit 2013 - by LODLAM - Linked Open Data in Libraries Archives and Museums
Please note that this is not an informational conference, but a meeting focused on forwarding the adoption of Linked Open Data in libraries, archives, and museums worldwide. Ideal candidates will be actively involved in or planning Linked Open Data projects. Throughout the year, we will hold meetings and seminars at various locations around the world that are open to more participants. All summit proceedings will be open and published in real time via the summit2013.lodlam.net blog, Twitter, and potentially other media. <a href="http://summit2013.lodlam.net/" target="_blank">Find out more...</a>
The LUCERO Project by Open University
Information about the Lucero project at Open University, Linking University Content for Education and Research Online... (blog and site) <a href="http://lucero-project.info/lb/" target="_blank">Find out more...</a>
Bibliographic Framework as a Web of Data: Linked Data Model and Supporting Services by Library of Congress, Eric Miller, Zepheira
"The Library of Congress officially launched its Bibliographic Framework Initiative in May 2011. The Initiative aims to re-envision and, in the long run, implement a new bibliographic environment for libraries that makes "the network" central and makes interconnectedness commonplace. Prompted in no small part by the desire to embrace new cataloging norms, it is essential that the library community redevelop its bibliographic data models as part of this Initiative. Toward that objective, this document presents a high-level model for the library community for evaluation and discussion, but it is also important to consider this document within a much broader context, and one that looks well beyond the library community." <a href="http://www.loc.gov/marc/transition/pdf/marcld-report-11-21-2012.pdf" target="_blank">Find out more...</a>
VIAFbot Edits 250,000 Wikipedia Articles to Reciprocate All Links from VIAF into Wikipedia
Press release, describing conclusion of the "VIAFbot" phase of a project to automate provision of reciprocal links between VIAF (Virtual International Authority File) and Wikipedia (and by extension, DBpedia). Future work will incorporate this into Wikidata. <a href="http://www.oclc.org/research/news/2012/12-07a.html" target="_blank">Find out more...</a>
VISO: A Shared, Formal Knowledge Base as a Foundation for Semi-automatic InfoVis Systems by Jan Polowinski, Martin Voigt
Interactive visual analytic systems can help to solve the problem of identifying relevant information in the growing amount of data. For guiding the user through visualization tasks, these semi-automatic systems need to store and use knowledge of this interdisciplinary domain. Unfortunately, visualisation knowledge stored in one system cannot easily be reused in another due to a lack of shared formal models. In order to approach this problem, we introduce a visualization ontology (VISO) that formally models visualization-specific concepts and facts. Furthermore, we give first examples of the ontology's use within two systems and highlight how the community can get involved in extending and improving it. <a href="http://www-st.inf.tu-dresden.de/semvis/papers/authors_version_polowinski_viso_chi2013_wip.pdf" target="_blank">Find out more...</a>
Towards an Editable, Versionized LOD Service for Library Data | Ostrowski | LIBER Quarterly by Felix Ostrowski, Adrian Pohl
The Northrhine-Westphalian Library Service Center (hbz) launched its LOD service lobid.org in August 2010 and has since then continuously been improving the underlying conversion processes, data models and software. The present paper first explains the background and motivation for developing lobid.org. It then describes the underlying software framework Phresnel which is written in PHP and which provides presentation and editing capabilities of RDF data based on the Fresnel Display Vocabulary for RDF. The paper gives an overview of the current state of the Phresnel development and discusses the technical challenges encountered. Finally, possible prospects for further developing Phresnel are outlined. <a href="http://liber.library.uu.nl/index.php/lq/article/view/URN%3ANBN%3ANL%3AUI%3A10-1-114290/8669" target="_blank">Find out more...</a>
ACM Computing Classification System - Association for Computing Machinery by Association for Computing Machinery
The 2012 ACM Computing Classification System has been developed as a poly-hierarchical ontology that can be utilized in semantic web applications. It replaces the traditional 1998 version of the ACM Computing Classification System (CCS), which has served as the de facto standard classification system for the computing field. It is being integrated into the search capabilities and visual topic displays of the ACM Digital Library. It relies on a semantic vocabulary as the single source of categories and concepts that reflect the state of the art of the computing discipline and is receptive to structural change as it evolves in the future. ACM will provide tools to facilitate the application of 2012 CCS categories to forthcoming papers, and a process to ensure that the CCS stays current and relevant. The new classification system will play a key role in the development of a people search interface in the ACM Digital Library to supplement its current traditional bibliographic search.
The full CCS classification tree is freely available for educational and research purposes in these downloadable formats: SKOS (xml), Word, and HTML. In the ACM Digital Library, the CCS is presented in a visual display format that facilitates navigation and feedback. The full CCS classification tree is also viewable as a flat file in the Digital Library. <a href="http://www.acm.org/about/class/2012" target="_blank">Find out more...</a>
Trends in Linked Data Adoption by Erik Mitchell
Discusses a model for evaluating metadata systems; a high-point comparative analysis of BIBFRAME, DPLA and Europeana; and methods for metadata exploration and analysis. <a href="http://connect.ala.org/files/ALCTS_MIG_ALA2013_Mitchell_20130630.pdf" target="_blank">Find out more...</a>
GI2MO Project Homepage » Drupal RDFme Plugin by GI2MO Project
RDFme is a Drupal extension that allows publishing RDF metadata attached to regular Drupal HTML pages.
In the GI2MO project we developed this plugin to provide a testing ground for evaluating our Semantic Web solutions with Idea Management Systems based on Drupal (e.g. the commercial Atos Origin PGI 2.0 or the open source IdeaTorrent). <a href="http://www.gi2mo.org/apps/drupal-rdfme-plugin/" target="_blank">Find out more...</a>
|1/1/2013||LODLAM Patterns by Richard Urban|
This site explores the benefits of design patterns (optimized solutions to common problems) for the development of cultural heritage Linked Data.
Participants in this work can suggest new patterns, review and comment on published patterns, and engage in a discussion about the value of representation patterns for cultural heritage resources.
Cheers, Richard J. Urban, School of Library and Information Studies, Florida State University <a href="http://lodlampatterns.org/" target="_blank">Find out more...</a>
Heritage Data: Linked Data for Cultural Change | Vocabularies
Links to vocabularies used by a number of British cultural heritage organizations, now published as linked data (SKOS).
National cultural heritage thesauri and vocabularies have acted as standards for use by both national organizations and local authority Historic Environment Records but until now have lacked the persistent Linked Open Data (LOD) URIs that would allow them to act as vocabulary hubs for the Web of Data. The AHRC funded SENESCHAL project aims to make such vocabularies available online as Semantic Web resources. SENESCHAL will start with major vocabularies as exemplars and project partners will continue to make other vocabularies available. Other organizations are welcome to make use of the data and services which will be open licensed.
RESTful web services will be developed for the project to make the vocabulary resources programmatically accessible and searchable. These will include the provision to 'feed back' new terms (concepts) suggested by users. A series of case studies will explore use of these web services, in collaboration with the project partners. <a href="http://www.heritagedata.org/blog/vocabularies-provided/" target="_blank">Find out more...</a>
EUCLID Learning Materials by Educational Curriculum for the Usage of Linked Data
EUCLID is a European project facilitating professional training for data practitioners who aim to use Linked Data in their daily work. EUCLID delivers a curriculum implemented as a combination of living learning materials and activities (eBook series, webinars, face-to-face training), validated by the user community through continuous feedback. This page will aggregate learning materials such as eBook chapters, slides, videos and courses. <a href="http://www.euclid-project.eu/resources/learning-materials" target="_blank">Find out more...</a>
Oracle Spatial and Graph by Oracle Corporation
Site for the Spatial and Graph feature set. The RDF Semantic Graph features include:
graph relationships represented as triples in compressed, partitioned tables
indexing, querying, and ontology management
RDFS, OWL and user-defined inferencing (parallel, batch and incremental) <a href="http://www.oracle.com/technetwork/database/options/semantic-tech/whatsnew/index.html" target="_blank">Find out more...</a>
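To illustrate the data model such triple stores manage: each graph relationship is a single subject-predicate-object statement. A minimal invented example in Turtle notation (the `ex:` names are made up for illustration):

```turtle
@prefix ex: <http://example.org/> .

# subject        predicate    object
ex:Employee101   ex:worksFor  ex:ExampleCorp .
```

A store holds large sets of such statements and answers queries over them, optionally enriched by RDFS/OWL inferencing as listed above.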
|2/1/2013||EconStor LOD by ZBW Labs|
The Linked Open Data movement has gained momentum in the library world, resulting in a set of LOD publications of bibliographic metadata. By publishing the metadata from our repository for Working Papers in Economics and Business Studies (econstor.eu), we provide more than 40,000 bibliographic records as RDF triples. The dataset contains links to well-established external datasets for thesauri in Economics, such as our own STW and the JEL classification. The triplification is based on D2R-Server and can both be viewed as HTML pages and queried and processed via SPARQL. <a href="http://zbw.eu/labs/en/project/econstor-lod" target="_blank">Find out more...</a>
Linked Open Communism: Better discovery through data dis- and re- aggregation by Corey Harper
Current library search interfaces focus on books, journals and articles but offer little access to related entities, such as people, places, and events. These entities are generally only represented as attributes of other metadata records. Linked data can power interfaces that surface these entities as first-class resources, integrating them into results alongside library materials.
This presentation will describe research into such an interface for exploring a particular subject area: the history of the Communist Party & labor movements in the US. A triple store was seeded by 1,600 EAD records from NYU's Tamiment Library and Wagner Labor Archives. Based on access points in the finding aids, the store was further populated with data from various sources, including MARC, id.loc, VIAF, and dbpedia. Identifiers are being assigned for a wide array of typed entities, and triples can then be re-assembled into new entity "records". These new records will be loaded into a discovery interface that will allow typical keyword searching across all contained entities, show links between entities, and include faceting on entity types.
It is hoped that this prototype will be a model for a new kind of interface to library, archive & museum metadata targeted to particular subject domains, and could inform the development of a similar dis- and re- aggregation approach for entire library collections. <a href="http://code4lib.org/conference/2013/harper" target="_blank">Find out more...</a>
Linked Data and OCLC by Richard Wallis
Presentation to the OCLC EMEARC meeting in Strasbourg 27th February 2013, about OCLC's linked data activities, touching on Bibframe and W3C activity as well. <a href="http://www.slideshare.net/rjw/linked-data-and-oclc?from=ss_embed" target="_blank">Find out more...</a>
Data Catalog Vocabulary (DCAT) by Fadi Maali, John Erickson, Phil Archer
DCAT is an RDF vocabulary designed to facilitate interoperability between data catalogs published on the Web. This document defines the schema and provides examples for its use.
By using DCAT to describe datasets in data catalogs, publishers increase discoverability and enable applications easily to consume metadata from multiple catalogs. It further enables decentralized publishing of catalogs and facilitates federated dataset search across sites. Aggregated DCAT metadata can serve as a manifest file to facilitate digital preservation. <a href="http://www.w3.org/TR/vocab-dcat/" target="_blank">Find out more...</a>
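As a sketch of what such a description looks like (all URIs and titles below are invented for illustration; the vocabulary terms are DCAT's own), a catalog with one dataset and one distribution can be expressed in Turtle:

```turtle
@prefix dcat: <http://www.w3.org/ns/dcat#> .
@prefix dct:  <http://purl.org/dc/terms/> .

# A catalog pointing to the datasets it lists
<http://example.org/catalog> a dcat:Catalog ;
    dct:title "Example Data Catalog" ;
    dcat:dataset <http://example.org/dataset/1> .

# A dataset with one downloadable distribution
<http://example.org/dataset/1> a dcat:Dataset ;
    dct:title "Example Dataset" ;
    dcat:keyword "example" ;
    dcat:distribution [
        a dcat:Distribution ;
        dcat:downloadURL <http://example.org/data/1.csv> ;
        dcat:mediaType "text/csv"
    ] .
```

An aggregator can harvest such descriptions from many catalogs and offer federated search over them without knowing anything about each publisher's internal systems.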
Schema 201: Real World Markup For Success, from a Search Engine Perspective by Barbara Starr
Session description: You know semantic markup is important, but are you using it in the most effective way possible? This session's speakers will share how they fine-tuned their approach to microdata and other ways to express authority to search engines. They will also offer case-study examples of how paying attention to detail has had a significantly positive impact on online visibility. <a href="http://www.slideshare.net/BarbaraStarr2009/smx-west-barbarastarrfinalmac-17185261" target="_blank">Find out more...</a>
SPARQL Syntax Diagrams by Vladimir Alexiev
Syntax (railroad) diagrams of the SPARQL 1.1 Query Language. I use this quite often while writing SPARQL.
sparql11-grammar.xhtml: Cross-linked diagrams, one per production (173 total). A bit hard to understand: use this for reference, but not for learning SPARQL. <a href="http://vladimiralexiev.github.io/#sparql-syntax-diagrams" target="_blank">Find out more...</a>
2nd UK Ontology Network Workshop, 2013, Edinburgh, Scotland
The theme of this meeting is to understand how ontology development and application are being used to address problems in the UK. Amongst other areas of interest, there will be a particular focus on creating and using open data. The program and audience is intentionally very diverse; the aim is to cover areas from many disciplines. We are particularly interested in bringing together those creating and developing the technology with those using the technology in industry, government and public organisations.
The day is split into three sessions: a morning of 5-minute 'headline talks' from a wide range of areas, each followed by a short time for discussion; early-afternoon 10-minute software and technology demos; and late-afternoon 'birds of a feather' networking. <a href="http://dream.inf.ed.ac.uk/events/ukont-13/2013_workshop_program.html" target="_blank">Find out more...</a>
Intelligent Exploration of Semantic Data (IESD) 2013
International Workshop at Hypertext 2013, Paris, France, May 1, 2013.
IESD'13 will continue to provide a forum to discuss approaches for exploring semantic data, following the stimulating IESD 2012 workshop at EKAW 2012. Semantic data is widely available, and semantic data exploration is becoming a key activity in a range of application domains, such as government organisations, education, life science, cultural heritage, and media. Several novel interfaces and interaction means for the exploration of semantic data are being proposed, for example semantic data browsers, ontology/content visualisation environments and semantic wikis. Although on the rise, the current solutions are still maturing: they need to take into account human factors to make exploration intuitive, and to employ the computational models necessary to improve the effectiveness of exploration tasks. Lessons can also be learned from the commonalities and differences in exploration requirements between different domains. Hence, greater benefits can be achieved by bringing together expertise from different communities, including HCI, Semantic Web, and personalisation, with the demands of potential application domains.
The time is ripe to bring together the different disciplines related to semantic data exploration (semantic technologies, intelligent user interfaces, adaptation and personalisation, visualisation) and form an international community to identify the major challenges and research directions. The workshop is intended to make the first step in shaping such a community and providing a forum that focuses on semantic data exploration and enables:
sharing techniques and experience
identifying potential domains and application areas
designing and reflecting on evaluation studies
identifying future research directions
IESD brings an exciting format including keynote speaker(s) and interactive sessions. For more details use the menu on the left. <a href="http://imash.leeds.ac.uk/event/2013/iesd.html" target="_blank">Find out more...</a>
Metadata as the Cornerstone of Digital Archiving « EUscreen
Describes an upcoming two-day seminar on metadata and its significance for digital audiovisual archiving, at the Netherlands Institute for Sound and Vision in Hilversum, May 16 and 17, 2013. Among the program's four tracks, one is Linked (meta)data, with a keynote by Seth van Hooland. This is the FIAT/IFTA Media Management Seminar for 2013, part IV of the MMC Seminar series "Changing sceneries, changing roles". <a href="http://blog.euscreen.eu/?p=3839&goback=%2Egde_126125_member_216281506" target="_blank">Find out more...</a>
The Enduring Myth of the SPARQL Endpoint | Dave's Blog by Dave Rogers
Argues that SPARQL endpoints are not the solution to opening your data. Gives examples of the sorry state of public endpoints. Refers to SPARQLES (SPARQL endpoint status). <a href="http://daverog.wordpress.com/2013/06/04/the-enduring-myth-of-the-sparql-endpoint/" target="_blank">Find out more...</a>
The Organization Ontology by World Wide Web Consortium
This document describes a core ontology for organizational structures, aimed at supporting linked data publishing of organizational information across a number of domains. It is designed to allow domain-specific extensions to add classification of organizations and roles, as well as extensions to support neighbouring information such as organizational activities.
This ontology was originally developed and published outside of W3C, but has been extended and further developed within the Government Linked Data Working Group. <a href="http://www.w3.org/TR/2013/CR-vocab-org-20130625/#class-organization" target="_blank">Find out more...</a>
Open Annotation (in Biomedicine): Annotation, Semantic Annotation and keeping the right crowd in the loop by Paolo Ciccarese
Presentation on open annotation and projects in the biomedical information field <a href="http://www.slideshare.net/paolociccarese/paolo-ciccarese-dils-2013-keynote" target="_blank">Find out more...</a>
Opportunities Abound: ETDs as Harbingers of Institutional Change (poster) by Devin Higgins, Aaron Collie, Lukas Mak, Shawn Nicholson
Poster presented at United States Electronic Theses and Dissertation Association 2013 meeting, showing how a linked data representation of ETD metadata can lead to new knowledge (and visualizations) of interdisciplinarity, departmental collaborations, scholarly trends, and other aspects of the program <a href="http://staff.lib.msu.edu/nicho147/Research/USETD_Linked_Data_Poster_2013.pdf" target="_blank">Find out more...</a>
Linked data, open data: Towards a semantic web of Anglo-Saxon England : Visionary Cross by Daniel O'Donnell
International Society of Anglo-Saxonists (ISAS), July 29th-August 2, 2013, Dublin
The Visionary Cross would like to propose a roundtable or three paper panel on linked and open data in Anglo-Saxon studies for ISAS 2013. The goal of this panel would be to assess the current state of practice in the development of linked and open datasets and to explore future directions. This topic should be of interest to textual editors, cultural heritage curators, art historians, researchers working on dictionaries and other reference works.
If you would like to explore this topic, please contact Daniel O'Donnell (firstname.lastname@example.org) as soon as possible to discuss approaches. Session and paper proposals are due at the ISAS programme committee by September 13. <a href="http://visionarycross.org/linked-data-open-data-towards-a-semantic-web-of-anglo-saxon-england/" target="_blank">Find out more...</a>
SPARQL (pronounced "sparkle", a recursive acronym for SPARQL Protocol and RDF Query Language) is an RDF query language, that is, a query language for databases, able to retrieve and manipulate data stored in Resource Description Framework format. It was made a standard by the RDF Data Access Working Group (DAWG) of the World Wide Web Consortium, and is recognized as one of the key technologies of the semantic web. On 15 January 2008, SPARQL 1.0 became an official W3C Recommendation, followed by SPARQL 1.1 in March 2013. <a href="http://en.wikipedia.org/w/index.php?title=SPARQL&oldid=566718788" target="_blank">Find out more...</a>
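As a brief illustration (the dataset queried here is hypothetical, but the FOAF vocabulary terms are real), a SPARQL SELECT query retrieving the names of people described in an RDF dataset might look like this:

```sparql
# Names of up to ten foaf:Person resources in the queried dataset
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

SELECT ?name
WHERE {
  ?person a foaf:Person ;
          foaf:name ?name .
}
LIMIT 10
```

The `a` keyword abbreviates `rdf:type`, and the semicolon repeats the subject `?person` for the second predicate, mirroring Turtle syntax.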
UILLD 2013 | "User interaction built on library linked data" Pre-conference to the 79th World Library and Information Conference Jurong East Regional Library, Singapore
The quantity of Linked Data published by libraries is increasing dramatically: following the lead of the National Library of Sweden (2008), several libraries and library networks have begun to publish authority files and bibliographic information as linked (open) data. However, applications that consume this data are not yet widespread. In particular, there is a lack of methods for the integration of Linked Data from multiple sources and its presentation in appropriate end user interfaces. Existing services tend to build on one or two well integrated datasets, often from the same data supplier, and do not actively use the links provided to other datasets within or outside of the library or cultural heritage sector to provide a better user experience. <a href="http://uilld2013.linkeddata.es/" target="_blank">Find out more...</a>
DC-2013 - Linking to the Future - Lisbon by Dublin Core Metadata Initiative
DC-2013 will explore questions regarding the persistence, maintenance, and preservation of metadata and descriptive vocabularies. The need for stable representations and descriptions spans all sectors including cultural heritage and scientific data, eGovernment, finance and commerce. Thus, the maintenance and management of metadata is essential to address the long term availability of information of legal, cultural and economic value.
On the web, data, and especially descriptive vocabularies, can change or vanish from one moment to the next. Nonetheless, the web increasingly forms the ecosystem for our vocabularies and our data. DC-2013 will bring together in Lisbon the community of metadata scholars and practitioners to engage in the exchange of knowledge and best practices in developing a sustainable metadata ecosystem.
DC-2013 (Sept. 2-6, 2013) will be collocated and run simultaneously with iPRES 2013, providing a rich environment for synergistic exploration of issues common to both communities.
Twitter hashtag: #dcmi13 <a href="http://dcevents.dublincore.org/IntConf/dc-2013/index" target="_blank">Find out more...</a>
RDFBean by Sampa Saarela, Timo Westkämper, Marko Lavikainen
RDFBean was born out of the need for a flexible Object/RDF mapping tool. With the rise of ORM tools such as Hibernate and TopLink, JavaBean-based domain model usage became popular in Java application development. JavaBean tool support is excellent for property binding and visualization in popular template frameworks for Web applications.
RDFBean aims to provide a transparent mapping between JavaBean domain models and RDF-based schemas and ontologies, to make Java-based RDF usage as easy as in ORM tools, or even easier.
In addition to the type-safe object-oriented layer, RDFBean also provides a Repository API as an SPI for persistence integration. The Repository API is easy to use and can also be used directly in cases where the type-safe API is impractical. <a href="https://github.com/mysema/rdfbean" target="_blank">Find out more...</a>
3rd International Workshop on Semantic Digital Archives
3rd International Workshop on Semantic Digital Archives (SDA),
in conjunction with the 17th Int. Conference on Theory and Practice of Digital Libraries (TPDL),
26th September 2013 in Valletta, Malta. ...Fosters innovative discussion of knowledge representation and knowledge management solutions specifically designed for improving Archival Information Systems (AIS) and Archival Information Infrastructures (AII). Novel applications of Semantic Web technologies and Linked Data offer possibilities to advance approaches to digital curation and preservation. <a href="http://mt.inf.tu-dresden.de/sda2013/call-for-papers.html" target="_blank">Find out more...</a>
BioPortal by National Center for Biomedical Ontology
Site for access/management of multiple vocabularies supporting medical and biomedical research, including SNOMED Clinical Terms, the National Drug File, the International Classification of Diseases, and the NCI Thesaurus. <a href="http://bioportal.bioontology.org/" target="_blank">Find out more...</a>
Authority Control and Linked Data for Digital Library Metadata by Jeremy Myntti, Nathan Cochran
|11/25/2013||WordLift by InSideOUt10|
WordLift is a WordPress plug-in developed by InSideOut10 to help you organise your posts and pages using "concepts", like you do in the real world.
The web is changing fast, and search engines update their algorithms to find quality content. For editors this means marking up their pages in the right way.
WordLift comes to the rescue with new In-Depth features that automatically add the correct semantic tagging for you; just download, install and activate WordLift.
Since these tags are highly coupled with your WordPress theme, we list here the themes we tested so far:
datePublished (Twenty Thirteen)
DW Focus 1.0.3
Please let us know if your theme is working or not, and we'll try to add support for it.
We now feature the WordLift Bar, with the list of entities and links to the entity page, right within your blog. The WordLift Bar is experimental; if you encounter any issue you can disable it from the plugin options, and report any trouble or suggestion to us.
You can view some examples of the WordLift Bar:
English: demo 1
Russian: demo 2
Warning: WordLift is still under heavy testing, therefore some features might not work as expected or not work at all. ... <a href="http://wordlift.it/#!" target="_blank">Find out more...</a>
Soylent Semantic Web Is People! (with notes) by Dorothea Salo
Teaching "linked data" to librarians presents too many challenges, because development so far has not produced tools that enable librarians to do work. This needs to change soon, or we will miss opportunities. <a href="http://www.slideshare.net/cavlec/soylent-semantic-web-is-people-with-notes?utm_source=slideshow&utm_medium=ssemail&utm_campaign=upload_digest" target="_blank">Find out more...</a>
Current Practice in Linked Open Data for the Ancient World
Reports on current work relevant to the role of Linked Open Data (LOD) in the study of the ancient world. As a term, LOD encompasses approaches to the publication of digital resources that emphasize stability, relatively fine-grained access to intellectual content via public URIs, and re-usability as defined both by publication of machine readable data and by publication under licenses that permit further copying of available materials. This collection presents a series of reports from participants in the 2012 and 2013 sessions of the NEH-funded Linked Ancient World Data Institute. The contributors come from a wide range of academic disciplines and professional backgrounds. The projects they represent reflect this range and also illustrate many stages of the process of moving from concept to implementation, with a focus on results achieved in the mid-2013 to early 2014 timeframe. <a href="http://dlib.nyu.edu/awdl/isaw/isaw-papers/7/" target="_blank">Find out more...</a>
MODS RDF Ontology by Library of Congress
This initiative is a work in progress.
MODS RDF is an RDF ontology for MODS. As MODS is an XML schema for a bibliographic element set, MODS RDF is an expression of that element set in RDF.
MODS/RDF is modeled as an OWL ontology. It is available at:
A MODS/RDF namespace document, which provides a human-accessible list of MODS/RDF classes and properties, is accessible at: http://www.loc.gov/mods/modsrdf/v1
For more detailed information see The MODS RDF Ontology Primer.
MODS XML to RDF
MODS RDF may be used to create born-RDF MODS, or it may be used to create an RDF description corresponding to an existing MODS XML record. The latter is discussed in MODS RDF Primer - Part 2: MODS XML to RDF.
See Examples of MODS XML records and their corresponding RDF descriptions.
A stylesheet is available which converts existing MODS XML to MODS RDF (/XML). <a href="http://www.loc.gov/standards/mods/modsrdf/" target="_blank">Find out more...</a>
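To make the idea of expressing a bibliographic element set as RDF concrete, here is a minimal sketch in plain Python: a record held as subject-predicate-object triples and serialized as N-Triples-style lines. The namespace and property names below are illustrative placeholders, not the actual terms of the MODS RDF ontology (see the ontology itself for those).

```python
# Hypothetical sketch: one bibliographic record as a list of
# (subject, predicate, object) triples, serialized N-Triples-style.
# The namespace and property names are invented for illustration.

MODSRDF = "http://www.loc.gov/mods/rdf/v1#"  # illustrative namespace

triples = [
    ("http://example.org/record/1", MODSRDF + "titlePrincipal", '"Moby Dick"'),
    ("http://example.org/record/1", MODSRDF + "name", "http://example.org/agent/melville"),
    ("http://example.org/agent/melville", MODSRDF + "namePrincipal", '"Melville, Herman"'),
]

def to_ntriples(triples):
    """Serialize triples: URIs in angle brackets, literals kept as quoted strings."""
    lines = []
    for s, p, o in triples:
        obj = o if o.startswith('"') else "<%s>" % o
        lines.append("<%s> <%s> %s ." % (s, p, obj))
    return "\n".join(lines)

print(to_ntriples(triples))
```

A stylesheet-based conversion like the one above produces statements of exactly this shape, one per MODS element, so the XML record and its RDF description carry the same information.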
WordNet RDF
From an email 4/16/2014 on W3C list: Princeton University, in collaboration with the Cognitive Interaction Technology Excellence Center of Bielefeld University, is proud to announce the first RDF version of WordNet 3.1, now available at:
This version, based on the current development of the WordNet project, intends to be a nucleus for the Linguistic Linked Open Data cloud and the global WordNet projects. The data are accessible in five formats (HTML+RDFa, RDF/XML, Turtle, N-Triples and JSON-LD) as well as by querying a SPARQL endpoint. The model is based on the lemon model and follows the guidelines of the W3C OntoLex Community Group.
We have incorporated direct links to the previous W3C WordNets, UBY, Lexvo.org and VerbNet, as well as translations collected by the Open Multilingual WordNet Project. Furthermore, we include links within the resource to previous versions of WordNet to further enable linking. We are interested in incorporating any resources that are linked to WordNet and would greatly appreciate suggestions.
John P. McCrae, Christiane Fellbaum & Philipp Cimiano <a href="http://wordnet-rdf.princeton.edu/" target="_blank">Find out more...</a>
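Since the announcement mentions a SPARQL endpoint, here is a hedged sketch of how a client might compose a GET request URL for it using only the Python standard library. The endpoint path and the `format` parameter are assumptions for illustration; check the site for the actual endpoint address and supported parameters.

```python
# Sketch: composing a SPARQL GET request URL with the standard library.
# ENDPOINT and the "format" parameter are assumptions, not documented values.
from urllib.parse import urlencode

ENDPOINT = "http://wordnet-rdf.princeton.edu/sparql/"  # assumed path

query = """
SELECT ?s ?p ?o
WHERE { ?s ?p ?o }
LIMIT 5
"""

def sparql_url(endpoint, query):
    """Return a GET URL carrying the query, asking for JSON results."""
    params = urlencode({"query": query, "format": "json"})
    return endpoint + "?" + params

print(sparql_url(ENDPOINT, query))
```

The same URL could then be fetched with `urllib.request.urlopen` or any HTTP client; the query itself is percent-encoded by `urlencode`, so whitespace and `?`-variables travel safely.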
New catalog, new format - the pioneering work with Libris XL
Open HPI: Knowledge Engineering with Semantic Web Technologies by Harald Sack
Archived MOOC course in Semantic Web technologies (and linked data). "The knowledge contained in the World Wide Web is available in interlinked documents written in natural language. To make use of this knowledge, technologies such as natural language processing, information retrieval, data and knowledge mining must be applied. Semantic Web technologies follow an alternative approach by complementing web documents with explicit semantics based on formal knowledge representations, such as ontologies. In this course, you will learn the fundamentals of Semantic Web technologies and how they are applied for knowledge representation in the World Wide Web. You will learn how to represent knowledge with ontologies and how to access and benefit from semantic data on the Web. Furthermore, you will also learn how to make use of Linked Data and the Web of Data, currently the most popular applications based on Semantic Web technologies." <a href="https://open.hpi.de/courses/2d1ede48-4cc6-4a36-bcc4-6cb02e36b3ea" target="_blank">Find out more...</a>
RDA Registry by Metadata Management Associates
The RDA Registry contains linked data and Semantic Web representations of the elements and relationship designators approved by the Joint Steering Committee for Development of RDA (JSC).
The RDA Registry is based on the Open Metadata Registry. It is maintained by the JSC and Metadata Management Associates in association with ALA Digital Reference. <a href="http://www.rdaregistry.info/" target="_blank">Find out more...</a>
Preparing for the future: supporting the transition to Linked Data in Libraries by Jeff Mixter
Update on OCLC Research activities related to linked data including redevelopment of MODS data model as RDF and conversion of the Getty vocabularies. <a href="http://hangingtogether.org/?p=4096" target="_blank">Find out more...</a>
Conference report: IFLA 2014 Satellite Meeting Linked Data in Libraries: Let's make it happen! by Rurik Greenall
Linked Datasets as of April 2014 by Christian Bizer
This is the latest .png image of the "linked data cloud" based on updates to datasets in the DataHub catalog. <a href="http://data.dws.informatik.uni-mannheim.de/lodcloud/2014/ISWC-RDB/extendedLODCloud/extendedCloud.png" target="_blank">Find out more...</a>
Free data management platform from the Open Knowledge Foundation, based on the CKAN data management system.
CKAN is a tool for managing and publishing collections of data. It is used by national and local governments, research institutions, and other organisations which collect a lot of data. With its powerful search and faceting, users can browse and find the data they need, and preview it using maps, graphs and tables - whether they are developers, journalists, researchers, NGOs, citizens or your own colleagues.
"Publish data for free".
CKAN is free, open-source software, which has been developed by the Open Knowledge Foundation since 2006 and used by government and organisations around the world. Version 2.0 was released in May 2013. <a href="http://datahub.io/" target="_blank">Find out more...</a>
RDFa Markup, Schema.org Vocabularies, and DBpedia Topics for Digital Collections by Jason Clark
Describes a pilot project at Montana State University Library, optimizing digital collections for the web by applying selected external vocabularies. <a href="http://www.lib.montana.edu/~jason/talks/dlf2014-rdfa-schema-dbpedia.pdf" target="_blank">Find out more...</a>
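The approach the talk describes can be sketched with a small markup fragment: RDFa Lite attributes (`vocab`, `typeof`, `property`, `resource`) carrying schema.org terms on an item page, with a DBpedia URI as the topic. The item, its values, and the URIs below are invented examples, not taken from the Montana State collections.

```html
<!-- Illustrative only: RDFa Lite with schema.org terms and a DBpedia
     topic on a hypothetical digital-collection item page. -->
<div vocab="http://schema.org/" typeof="Photograph">
  <h1 property="name">Homesteaders near Bozeman, ca. 1910</h1>
  <span property="creator" typeof="Person">
    <span property="name">Unknown photographer</span>
  </span>
  <a property="url" href="http://example.org/collection/item42">Permanent link</a>
  <span property="about" resource="http://dbpedia.org/resource/Montana">Montana</span>
</div>
```

Search engines and RDFa parsers read the same attributes, so one layer of markup serves both web optimization and linked-data extraction.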
MARC, Linked Data, and Human-Computer Asymmetry | Peer to Peer Review by Dorothea Salo
Rationales given to students for why linked data, and data structures that aim for machine manipulation generally, are different, important to know, and likely part of libraries' futures.
Atomicity, consistency, and reliable, unchanging identifiers are among the principles that linked data models support better than older models.
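Those principles can be made concrete with a short sketch: because each triple is an atomic statement about a stable identifier, data from two sources merges by simple set union, with no record-level reconciliation step. The author URI and statements below are invented for illustration.

```python
# Sketch: atomic triples keyed on a stable URI merge by set union;
# duplicate statements collapse automatically. All data is invented.

AUTHOR = "http://example.org/authority/woolf"  # a stable, shared identifier

library_a = {
    (AUTHOR, "name", "Woolf, Virginia"),
    (AUTHOR, "wrote", "http://example.org/work/mrs-dalloway"),
}
library_b = {
    (AUTHOR, "name", "Woolf, Virginia"),  # duplicate of library_a's statement
    (AUTHOR, "born", "1882"),             # new statement
}

merged = library_a | library_b  # set union: no merge logic needed

# Everything either library asserted about the author, in one place:
facts = sorted(p for s, p, o in merged if s == AUTHOR)
print(facts)  # ['born', 'name', 'wrote']
```

A MARC-style record merge would instead have to compare and reconcile two whole records field by field; the triple model pushes that work down to statement identity.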