What do you need in a graph library?
We recently launched an open source Python library that provides an abstraction layer for building knowledge graphs. This survey gathers the needs, features, and use cases you would like us to prioritize on our development roadmap. Your responses will be kept private, will not be shared with any third party, and will not be used for any form of targeted marketing.
What kinds of graph support do you need most?
Building graphs in RDF, i.e., managing vocabularies, namespaces, literals, etc.
Serialization methods (see detailed question below)
Network-based inference (see detailed question below)
Graph database integrations (see detailed question below)
Validation (see detailed question below)
Graph embedding (e.g., deep learning integration via PyTorch)
Graph algorithms (e.g., SSSP, BFS, centrality)
Parallel processing that scales across a cluster
Traversals of triples, triangles, other relatively simple structures
Multi-scale backbone filter
Homology, TDA, and other advanced analytics approaches
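For context on the "graph algorithms" and "traversals of simple structures" options above, here is a minimal, self-contained sketch of the kind of functionality being asked about: a breadth-first traversal and a naive triangle count over a toy adjacency structure. This is illustrative pseudocode in plain Python, not the library's actual API; all names and the sample graph are hypothetical.

```python
from collections import deque

# Toy undirected graph as an adjacency-set dict (hypothetical sample data).
graph = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b"},
    "d": {"b"},
}

def bfs_order(adj, start):
    """Return vertices in breadth-first order from `start`."""
    seen, order, queue = {start}, [], deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in sorted(adj[v]):  # sorted for deterministic output
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return order

def count_triangles(adj):
    """Count triangles by checking each unordered vertex triple once."""
    count = 0
    verts = sorted(adj)
    for i, u in enumerate(verts):
        for v in verts[i + 1:]:
            if v not in adj[u]:
                continue
            for w in verts:
                if w > v and w in adj[u] and w in adj[v]:
                    count += 1
    return count

print(bfs_order(graph, "a"))   # ['a', 'b', 'c', 'd']
print(count_triangles(graph))  # 1 (the triangle a-b-c)
```

A production library would typically delegate such operations to an optimized backend (e.g., NetworkX or a parallel framework) rather than pure-Python loops; the survey options above ask which of these capabilities matter most to you.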
Which serialization methods do you need most?
RDF files in N3 or TTL ("Turtle") format
Graph database connectors
Which forms of network inference do you need most?
Probabilistic Soft Logic (PSL)
Bayesian Networks (Statistical or Causal)
Markov Logic Networks (MLN)
Which forms of validation support do you need most?
Validating based on SHACL
Validating based on a W3C standards grammar
Validating based on axioms (e.g., SKOS)
Which graph database integrations do you need most?
What kinds of graph algorithms do you need to use the most?
Any other suggestions about what you'd like to see in a KG library?