ARCHIVE: Data Commons Milestone Master Spreadsheet and Gantt Chart (01-24-18)
Legend: Helium | Xenon | Argon | Calcium

Timeline: Month 1 | Month 2 | Month 3 | Month 4 | Month 5 | Month 6

KC1. Development and Implementation Plan for Community-Supported FAIR Guidelines and Metrics
  Engage FAIR communities
  Create FAIR-TLC measurable metrics
  Create API specs and demos to promote digital object server FAIRness, as well as self-reported FAIRness to FAIR-scoring platforms
  Collaboratively evaluate FAIR-TLCness of pilot datasets (MODs, GTEx)

KC2. Global Unique Identifiers (GUIDs) for FAIR Biomedical Digital Objects
  Implement GUID best practices
  Training for data providers to iteratively improve identifier provisioning and maintenance
  Implement identifier documentation and tooling for indexing and search by KC2
  GUID/Metadata Definition Report
  Proxy Object Demonstration v1
  Proxy Object Demonstration v2
  Enable FAIRness as a continuous process
  Increase FAIR access to Data Commons research products by operating the Globus Publish service
  Integrate Minids and Globus Publish with other DCPPC components
  Establish GUID best practices for the DCPPC
  Integrate DOS and our implementations with GUID services/standards
  Integrate TRS and Dockstore.org with GUID services/standards

KC3. Open Standards API
  Platform-neutral API for dataset management
  Create Commons API registry
  Specify API best practices
  Analyze API landscape
  Develop API broker: standards
  Demonstrate v1 open API standards for Data Commons
  Demonstrate interoperability of pilot datasets
  Document access permission model concept
  Demonstrate Rabix improvements in response to FAIR Council & user feedback
  Demonstrate interoperability of Rabix with multiple community-defined standards
  Develop GA4GH API standards for containers and workflows
  API and implementations for data object access across clouds
  APIs and implementations of read, variant, and other data access across clouds

KC4. Cloud-Agnostic Architecture and Frameworks
  Demonstrate interoperable data commons supporting multiple clouds
  Demonstrate cross-cloud data integration management
  Demonstrate optimal placement of containerized analysis tools
  Demonstrate pilot access for cloud storage of 3 pilot datasets across multiple cloud environments
  Demonstrate ability to create GUIDs and run tasks via API
  Resilience reports and improvements in response to monitoring
  Make it easy for Data Commons users to access, transfer, synchronize, and share data securely, reliably, and rapidly across different computing platforms
  Ensure maintenance of data identifiers and data integrity
  Enable cost-effective cloud usage by organizing data transfers and computations to minimize the cost of processing user requests
  Framework Services (CFS) API Standards
  FS Design and Development
  FS Deployment Environments

KC5: Workspaces for Computation
  Stand up CommonsShare instance
  Enable CommonsShare to be interoperable with key components and standards provided by other full stacks, and demonstrate interoperability
  Distributed CWL Execution
  Data Science Workspaces
  Data Platform
  Demonstrate customized Data Commons Workspace environment within dedicated VPC
  Demonstrate support for workflow languages within IDE/Composer
  Demonstrate workspace lockdown and publishing
  User engagement reports and summaries
  Create a Workspace Manager
  Integrate with DERIVA & Globus
  Enable the creation of Globus Genomics/Galaxy workspaces
  Enable high-performance and FAIR execution of standards-based bioinformatics workflows
  Develop & deploy remote desktop environment to enable cloud-based interactive analysis of DCPPC datasets; support environments, data source connections & access
  Deploy and operate a multi-user Jupyter notebook environment using JupyterHub to enable scalable interactive analysis; enable publication and sharing
  Workspaces (Data Use-aware search, Publishing GTEx workspaces, Publishing 1 or more TOPMed workspaces)
  Analytics (Jupyter notebooks, Analytical templates)
  Integration with Dockstore; KC #2 and KC #4; KC #7

KC6: Research Ethics, Privacy, and Security
  Create and manage Data Governance Council
  Establish Disclosure Review Board
  Draft plan with roles, responsibilities, and a schedule to support the Full Stack team’s development of a System Security Plan
  Finalize NIH Data Commons Data Governance Policy & Procedure
  Initial implementation of minimum viable security infrastructure
  Initial implementation of data governance management pipeline
  Initial implementation of auditing and regulatory reporting
  Provide Data Commons with a strong, standards-compliant foundation for authentication, authorization, and authorization delegation
  Provide secure, standards-based authorization delegation
  Machine-readable data use restrictions
  Software to support DACs

KC7: Indexing and Searching
  Full Text Search
  Semantic Search
  Provenance
  Demonstrate MVP data indexing and search infrastructure
  Demonstrate semantic search across datasets with different ontologies
  Demonstrate guided search for clinical and experimental scientists
  Demonstrate synthetic cohort construction
  Demonstrate improved FAIRness score and reporting methods
  Enable accessible data; improve data interoperability/re-use
  Ensure that FAIRness of all data occurs rapidly throughout the data lifecycle
  Provide simple, user-friendly web clients for formulating sophisticated queries
  Extend findability across all catalogs and data storage endpoints through "deep indexing"
  Develop indexing and query services
  Develop basic data models for driver projects
  Import driver projects into Commons

KC8: Scientific Use Case
  Evaluate role of SABV in phenotype-genotype associations
  Provide system to understand sex differences in human disease
  Design data structures and annotation pipelines
  Integration of human and model organism data
  Genotype-phenotype associations from TOPMed
  Jointly analyze GTEx, MODs, and TOPMed datasets to identify the genes and mechanisms of action
  Democratize access to analytical results from TOPMed, GTEx & MODs datasets through web portals
  Nucleate an ecosystem of scientific communication through partnerships