Requirements

| Category | Requirement | Class (Mandatory/Optional) | Description | Status | Priority (Jan. 15, 2018) | Entered by |
| --- | --- | --- | --- | --- | --- | --- |
| Data / Content Management, Versioning and Rights | Unpublish/delete contribution | M | Ability to unpublish (with the ability to re-publish) or permanently delete a contribution and its associated resources from NHDS. | not started | high | LAC |
| Data / Content Management, Versioning and Rights | CMS capabilities | O | Ability to easily add, modify, and remove content on the public-facing NHDS user interface. | partially completed | high | LAC |
| Data / Content Management, Versioning and Rights | Content analysis/reporting | O | Ability to create and run reports and analysis jobs on the entire contents of NHDS. | not started | high | LAC |
| Data Harvesting and Import | Pre-ingest analysis | M | Ability to obtain and analyze samples from contributors to perform an initial assessment of the quality/consistency of data and inform the creation of necessary validation/mapping rules and scripts. | not started | high | LAC |
| Data Harvesting and Import | Import/harvest integrity verification | M | Support verification of harvested or imported data to ensure that the integrity of data is not compromised during exchange. This can be accomplished by performing checksum verifications and providing status results for harvest jobs (see the checksum sketch after the table). | not started | high | LAC |
| Data Harvesting and Import | Error reporting | M | Any errors during import/harvest jobs should be detected, and the error details should be captured to allow investigation. This includes errors affecting the job as a whole, as well as partial errors (e.g. where some individual records in a harvest failed but others proceeded successfully). | not started | high | LAC |
| Data Normalization and Enrichments | Date normalization | M | Detect and convert various date formats to a single standard (e.g. convert “March 15, 2016” and “3/15/2016” to “2016-03-15”); see the date-normalization sketch after the table. | partially completed | high | LAC |
| Data Normalization and Enrichments | Text normalization | M | Normalize text based on patterns (e.g. “FirstName LastName” to “LastName, FirstName” or vice versa); see the name-normalization sketch after the table. | partially completed | high | LAC |
| Data Normalization and Enrichments | Error reporting | M | Any errors during enrichment jobs should be detected, and the error details should be captured to allow investigation. This should include the individual records and fields where errors were detected. | not started | high | LAC |
| Data Validation | Error reporting | M | Any errors during validation jobs should be detected, and the error details should be captured to allow investigation. This should include the individual records and fields that failed validation. | not started | high | LAC |
| End-user Access and Search | Accessibility - WCAG 2.0 level AA | M | The public-facing user interface must comply with WCAG 2.0 level AA. | not started | high | LAC |
| End-user Access and Search | Data visualizations | O | Ability to search for and visualize resources using various methods such as maps, timelines, charts, etc. | not started | high | LAC |
| End-user Access and Search | Detect broken links | O | Ability to detect broken links in the data automatically (e.g. when the link to a contributor digital object is broken). | not started | high | LAC |
| End-user Access and Search | Data export | O | Ability to export NHDS data in bulk, by specific criteria (e.g. contributor or collection), or as individual resources. | not started | high | LAC |
| End-user Access and Search | | | Specific export formats supported (RDF, JSON-LD, XML, etc.) to be determined at a later date. | phase 2 | high | LAC |
17
NHDS RDM and contributor informationSupport the NHDS RDMMAs the core of NHDS access and discovery, the platform must support the RDM at all levels.
partially completed. phase2
highLAC
18
NHDS RDM and contributor informationUse metadata elements from well-established modelsOUsing well-established models (e.g. Dublin Core) to describe entities (identifiers, entity types, fields, relationships) will improve interoperability and overall quality of NHDS data.
partially completed
highLAC
19
NHDS RDM and contributor informationContributor contactsOCapture points of contact for each contributor for addressing technical and data quality issues.
partially completed
highLAC
20
Storage and IndexingAbility to update or remove dataMIt should be possible to update data in or remove data from the storage mechanism and search indexes.not startedhighLAC
| Contribution Processing Pipeline | Pipeline configuration | M | Operators should be able to determine how far a contribution should progress in the processing pipeline (e.g. should a validation/mapping start immediately after harvesting, or should it halt until it is started manually?). | not started | low | LAC |
| Contribution Processing Pipeline | Reporting | M | Allow viewing of all current, previous, and scheduled jobs and their associated details, including details on any failures or issues. | not started | low | LAC |
| Contribution Processing Pipeline | Job management | M | Ability to stop, pause/resume, and restart jobs, and to restart a processing pipeline for a contribution from a specific step. | not started | low | LAC |
| Contribution Processing Pipeline | Provenance tracking | M | Ability to capture, retain, and track changes to data through the pipeline. | not started | low | LAC |
| Data / Content Management, Versioning and Rights | Rights-based management | O | Ability to configure how contributions are handled based on specific rights models/statements. | not started | low | LAC |
| Data / Content Management, Versioning and Rights | Versioning | O | Ability to manage different versions of the NHDS model, data mapped to each model version, and any APIs that are based on that version of the model. | not started | low | LAC |
| Data Harvesting and Import | Multiple ingest channel types | M | Support multiple ways of collecting data for ingest. | not started | low | LAC |
| Data Harvesting and Import | Multiple ingest channels per contributor | M | It should be possible to configure and associate multiple different ingest channels and configurations with each contributor. | not started | low | LAC |
| Data Harvesting and Import | Data harvesting | M | Support automatic data harvesting from online sources. Required support for specific harvest methods (e.g. OAI, SPARQL endpoint, third-party API) will be determined at a later time. | not started | low | LAC |
| Data Harvesting and Import | Direct data import | M | Support direct import of (bulk) data by operators for cases where data is provided as a data dump file (or a package). Required support for specific formats (containers such as BagIt and METS, as well as specific metadata formats such as MARC, MODS, linked open data/RDF, CSV, etc.) will be determined at a later time. | not started | low | LAC |
| Data Harvesting and Import | Provenance tracking | M | All relevant information about each successful harvest or import should be captured and retained in machine-readable form. At a minimum, this information should include contributor details, dates and times (both start and end), ingest channel details, and details about the harvested/imported data (format, size, validation results, etc.). | not started | low | LAC |
| Data Harvesting and Import | Extension possibility | M | It should be possible to extend/enhance the harvest/import capabilities of the platform. For example, it must be possible to implement/add new harvesting methods in the future. | not started | low | LAC |
| Data Harvesting and Import | Scheduled/periodic harvests | O | Support scheduling periodic harvests, or harvests at specific times, for data sets that are updated on a regular basis. | not started | low | LAC |
| Data Harvesting and Import | Re-use harvest/import configurations | O | The platform should allow reusing harvest/import configurations (e.g. by copying or using base templates) for cases where multiple contributors require similar settings. | not started | low | LAC |
| Data Mapping | Create and view mapping documentation | M | For each contributor/ingest channel, it must be possible to easily add and access the mapping documentation from the contributor (this document identifies how contributor data maps to the NHDS core data model). | not started | low | LAC |
| Data Mapping | Define mapping rules | M | For each contributor ingest channel, it must be possible to create customized mapping rules or scripts that map source data to the NHDS data model. | not started | low | LAC |
| Data Mapping | Test mapping rules | M | Ability to test mapping rules to ensure that they are working as expected. This includes the ability to run the rules on simulated data or on a portion/sample of ingested data. | not started | low | LAC |
| Data Mapping | Re-use mapping rules | O | It should be possible to reuse mapping rules between contributors and ingest channels (e.g. by copying or using base templates). | not started | low | LAC |
| Data Normalization and Enrichments | Provenance tracking | M | All relevant information about each successful enrichment job should be captured and retained in machine-readable form. At a minimum, this information should include dates and times (both start and end), source (ingest) information, and enrichment rules/scripts used. | not started | low | LAC |
| Data Normalization and Enrichments | Distinguish between original and enriched values/entities | O | Ability to differentiate fields and entities that contain original values provided by the contributor from those added by NHDS as part of normalization or enrichment. | not started | low | LAC |
| Data Validation | Data format validation | M | Each contribution must be validated based on the format/type of the contribution (e.g. are XML files valid XML, are RDF files valid RDF); see the format-validation sketch after the table. | not started | low | LAC |
| Data Validation | Define validation rules | M | For each contribution, it must be possible to define customized validation rules or scripts that ensure the data conforms to expectations (these rules are generally based on the contribution format and the mapping document). | not started | low | LAC |
| Data Validation | Test validation rules | M | Ability to test created validation rules to ensure that they are working as expected. This includes the ability to run the rules on simulated data or on a portion/sample of ingested data. Validation requirements include, but are not limited to: mandatory/allowed fields based on resource type; checking values against controlled vocabularies, lists, and authorities; checking relationships and constraints; and validating formats for field contents such as dates and string patterns. | not started | low | LAC |
| Data Validation | Provenance tracking | M | All relevant information about each successful validation job should be captured and retained in machine-readable form. At a minimum, this information should include dates and times (both start and end), source (ingest) information, and validation rules/scripts used (including versions). | not started | low | LAC |
| Data Validation | Re-use validation rules | O | It should be possible to reuse validation rules between contributors and ingest channels (e.g. by copying or using templates). | not started | low | LAC |
| Data Validation | Versioning of validation rules | O | Allow maintaining different versions of validation rules and scripts, with descriptions of the changes in each version. | not started | low | LAC |
| End-user Access and Search | Personal collections | O | End users can create and share their own custom collections (by manually adding individual resources or search results to their collections). | not started | low | LAC |
| Storage and Indexing | Provenance tracking | M | The storage mechanism should capture and retain the provenance information created for each record during the ingest process. | not started | low | LAC |
| Storage and Indexing | Spatial/geospatial indexing | O | Ability to index based on location and/or proximity. | not started | low | LAC |
| Data / Content Management, Versioning and Rights | Create custom collections | O | Allow operators to easily create collections of resources (e.g. exhibitions) that can be accessed by end users. | not started | medium | LAC |
| Data Normalization and Enrichments | Define controlled values | O | Ability to create and maintain lists and hierarchies of controlled values such as authorities, vocabularies, taxonomies, and type lists. | not started | medium | LAC |
| Data Normalization and Enrichments | Linked open data controlled values | O | Ability to support linked open data authorities, vocabularies, and taxonomies. | not started | medium | LAC |
| Data Normalization and Enrichments | Matching fields to controlled values | O | Ability to match values to controlled values such as authorities, vocabularies, taxonomies, or lists of valid values. This could include matching textual values, cross-walking between different authorities/vocabularies, performing external lookups (such as geo/location services), and providing the ability for operators to manually resolve matching failures or ambiguities (exception cases). | not started | medium | LAC |
| Data Normalization and Enrichments | Augment entities with new information | O | Ability to augment existing entities by adding new field values and new relationships (to other new or existing entities). | not started | medium | LAC |
| End-user Access and Search | Search and browse collections | M | Includes contributor collections, collection resources (where an ingested resource is itself a collection), as well as exhibitions and collections created by NHDS. | not started | medium | LAC |
| End-user Access and Search | Facet-based search | M | Ability to refine searches using facets, including hierarchical and relational facets. | completed; new facets may be added | medium | LAC |
| End-user Access and Search | Advanced search | O | Ability to construct complex searches based on the values of specific fields (including value comparisons, logical operations, negation, wildcards, etc.). | not started | medium | LAC |
| End-user Access and Search | | | Specific output formats supported by APIs (RDF, JSON-LD, XML, etc.) to be determined at a later date. | phase 2 | medium | LAC |
| NHDS RDM and contributor information | Customizable data model | M | Ability to refine, customize, and change the RDM when needed, including the creation of new fields, entity types, and relationships in the future. | completed; may need refinement | medium | LAC |
| NHDS RDM and contributor information | Use linked open data ontologies and namespaces | O | Using linked open data ontologies and namespaces for metadata elements will facilitate releasing the NHDS data as linked open data and promote re-use by third parties. | not started | medium | LAC |
| Non-Functional Requirements (*preliminary) | Handle large files | M | Support large contribution files and packages (2 GB+). | not started | medium | LAC |
| Non-Functional Requirements (*preliminary) | Handle large ingest packages | M | Support ingest packages containing a large number of resources (actual number TBD). | not started | medium | LAC |
| Non-Functional Requirements (*preliminary) | Handle millions of records | M | Support ingesting and providing access to millions of records. | not started | medium | |
| Storage and Indexing | Pre-production staging | M | Ability to index and store new contributions or updates in a staging environment, to allow performing QA tasks and to ensure that the new contribution and processes are working as expected. | not started | medium | LAC |
| Storage and Indexing | Field-level indexing | M | Ability to index individual field values to enable complex searches on fields and combinations of fields (comparisons, logical operators, negation, wildcards, etc.). | not started | medium | LAC |
| Storage and Indexing | Relational facets | M | Ability to index facets based on relationships (e.g. an “Author” facet where the facet value is the identifier of a Person entity, as opposed to the textual name of the author). | not started | medium | LAC |
| Storage and Indexing | Retain original records | M | Original contribution data must be kept without any modification (note that while these records should be permanently stored and accessible for processing jobs, they do not need to be included in the operational data store). | not started | medium | LAC |
| Contribution Processing Pipeline | Handle all processing jobs | M | The processing pipeline should be able to handle all processing jobs related to contributions, from the beginning of ingest to production release. These include harvesting/import jobs, validations, mapping jobs, indexing jobs, storage tasks, staging, and production release. | partially completed | some high, some low | LAC |
| End-user Access and Search | Search and browse | M | End users should be able to search for and browse the resources in the public-facing user interface. | completed | | LAC |
| End-user Access and Search | Full text search | M | Search based on textual terms. | completed | | LAC |
| End-user Access and Search | Internationalization | M | Support multiple languages (English/French at a minimum) and make a best effort to display content to users in their chosen language (when available). | completed | | LAC |
| End-user Access and Search | APIs | O | Allow third-party data consumers to access the NHDS data directly through an API. This includes API-based searches as well as access to individual resources or collections. | completed | | LAC |
| NHDS RDM and contributor information | Allow entities and relationships | M | The RDM contains different types of entities and relationships between entities. Relationships should be typed, directional, and use identifiers to identify the source and target of each relationship. | completed | | LAC |
| NHDS RDM and contributor information | Persistent identifiers | M | All entities should have persistent identifiers. Persistent identifiers should be captured in a way that survives platform changes (e.g. replacing storage mechanisms) and should be easily convertible to URIs. | completed | | LAC |
| NHDS RDM and contributor information | Support contributor profiles | M | Ability to create and manage contributor profiles. Each profile should, at a minimum, include a contributor identifier, the contributor name in both official languages, and links to institutional websites. | completed | | LAC |
| Non-Functional Requirements (*preliminary) | Handle many contributors/contributions | M | Support thousands of contributions. | completed | | LAC |
| Storage and Indexing | Retain fidelity of data | M | The storage mechanism used must be able to retain all the details associated with the entities, including types, relationships, identifiers, and language information for fields. | completed | | LAC |
| Storage and Indexing | Full text indexing | M | Ability to index entities based on textual content. | to confirm | | LAC |
| Storage and Indexing | Facet-based indexing | M | Ability to index entities based on facets. | to confirm | | LAC |
| Storage and Indexing | Temporal indexing | M | Ability to index based on temporal/time values. | to confirm | | LAC |
| Storage and Indexing | Hierarchical facets | M | Ability to index hierarchical facets to enable drill-down searches (e.g. location hierarchies, taxonomies, categories, etc.). It should be possible to calculate the facet value hierarchy for each field/entity at index time, based on entity relationships or internal/external vocabularies/taxonomies or services (see the hierarchical-facet sketch after the table). | to confirm | | LAC |
| Storage and Indexing | Customize indexing configuration | M | Allow modifying the indexing configuration to add/modify/remove fields from the index as needed. | completed | | LAC |
| Storage and Indexing | Index relationships | O | Ability to index relationships between entities. | completed (to confirm) | | LAC |
| | | M | Provide a citation link or other solution to ensure content hosts are cited as the authoritative record, not the NHDS; e.g. if an institution applies a DOI, it would be sensible to promote that DOI for use in scholarship and bibliometrics (from Steering Committee). | not started | high | LAC |
| | | M | Install Google Analytics. | not started | high | LAC |
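
The integrity-verification requirement above mentions checksum verification during harvest/import. Below is a minimal Python sketch of that idea; the manifest shape and the function names (`sha256_of`, `verify_harvest`) are illustrative assumptions, not part of the requirement.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 8192) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_harvest(files_with_expected: dict[str, str]) -> dict[str, bool]:
    """Compare each harvested file against the checksum supplied by the
    contributor (e.g. in a hypothetical manifest); returns the per-file
    pass/fail status a harvest job could report."""
    return {
        name: sha256_of(Path(name)) == expected.lower()
        for name, expected in files_with_expected.items()
    }
```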
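
For the date-normalization requirement, here is a minimal sketch of format detection and conversion to ISO 8601, assuming a curated list of known input formats; `DATE_FORMATS` and `normalize_date` are hypothetical names.

```python
from datetime import datetime

# Candidate input formats; a real pipeline would grow this list as
# pre-ingest analysis reveals new contributor conventions.
DATE_FORMATS = ["%B %d, %Y", "%m/%d/%Y", "%Y-%m-%d", "%d %B %Y"]

def normalize_date(raw: str) -> str | None:
    """Try each known format and emit ISO 8601 (YYYY-MM-DD); return None
    when no format matches, so the record can be flagged for review."""
    for fmt in DATE_FORMATS:
        try:
            return datetime.strptime(raw.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    return None

assert normalize_date("March 15, 2016") == "2016-03-15"
assert normalize_date("3/15/2016") == "2016-03-15"
```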
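
For the text-normalization requirement, a sketch of the “FirstName LastName” / “LastName, FirstName” pattern transforms, assuming simple two-part names; anything more complex (multi-part or corporate names) falls through unchanged as an exception case for operator review.

```python
import re

def to_sort_name(name: str) -> str:
    """Convert 'FirstName LastName' to 'LastName, FirstName'."""
    match = re.fullmatch(r"(\S+)\s+(\S+)", name.strip())
    if match:
        first, last = match.groups()
        return f"{last}, {first}"
    return name  # not a simple two-part name; leave unchanged

def to_display_name(sort_name: str) -> str:
    """Inverse transform: 'LastName, FirstName' back to 'FirstName LastName'."""
    match = re.fullmatch(r"([^,]+),\s*(.+)", sort_name.strip())
    if match:
        last, first = match.groups()
        return f"{first} {last}"
    return sort_name
```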
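
For the data-format-validation requirement, a sketch of the simplest such check — XML well-formedness — using Python's standard library; validating RDF or applying schema-level rules (e.g. against an XSD) would need additional, contribution-specific tooling layered on top.

```python
import xml.etree.ElementTree as ET

def is_well_formed_xml(path: str) -> tuple[bool, str | None]:
    """Check that a contribution file parses as XML; returns (ok, error)."""
    try:
        ET.parse(path)
        return True, None
    except ET.ParseError as err:
        # Capture the error detail so the validation job can report the
        # failing file, per the error-reporting requirements above.
        return False, str(err)
```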
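
For the hierarchical-facets requirement, a sketch of computing cumulative facet paths at index time from a broadest-to-narrowest hierarchy; the path-string representation is one common indexing convention, not a mandated format, and deriving the hierarchy itself from entity relationships or vocabularies is out of scope here.

```python
def facet_paths(hierarchy: list[str], sep: str = "/") -> list[str]:
    """Expand one hierarchy (broadest to narrowest) into the cumulative
    path values a drill-down facet typically indexes."""
    return [sep.join(hierarchy[: i + 1]) for i in range(len(hierarchy))]

# Indexing all three values lets a search UI drill down one level at a time:
# ['Canada', 'Canada/Ontario', 'Canada/Ontario/Ottawa']
print(facet_paths(["Canada", "Ontario", "Ottawa"]))
```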