Name | Organization | Definition | Major challenges
2 · 4/1/2016 17:25:06 · Maryann Martone · Hypothes.is
Definition: An annotation applied to a document is transferred when the document is viewed in different contexts, without requiring re-installation of any software.
3 · 4/1/2016 18:36:33 · Jon Udell · Hypothesis
Definition: I can view a web page using any standard browser, and that page may be served by any standard web server. The same holds for annotation: I can annotate a page using any standard annotation client, and my annotations can live in any standard annotation server.
Challenges: Until/unless annotation is baked into browsers, the mechanisms by which annotation is bolted on are complex, fragile, and difficult to standardize.
4 · 4/3/2016 21:35:07 · Chris Hunter · GigaScience database
Definition: Annotations that are linked and available on all instances and platforms displaying the data/information being annotated.
Challenges: Keeping all those different platforms mirroring correctly, especially when annotations are updated or new ones are added.
5 · 4/5/2016 5:53:29 · Simeon Warner · Cornell University
Definition: I can share annotations created in one system with a particular colleague (not openly) who has created annotations in another system (perhaps we are using annotation servers supported by our institutions or by separate third-party organizations). We can both see each other's annotations nicely intermixed.
Challenges: Client/app UX for handling multiple annotation sources.
6 · 4/5/2016 11:22:36 · Sebastian Karcher · Qualitative Data Repository
Definition: Interoperable annotations can be created with different tools, transferred easily between tools, and viewed on any website regardless of browser or other system requirements.
Challenges: Not sure yet.
7 · 4/5/2016 13:03:49 · Rob Sanderson · Stanford / Getty
Definition: A representation that can be transferred between systems, where both systems understand the semantic properties of the relationship(s) expressed by the user.
Challenges: Coming to appropriate levels of clarity and specificity, such that the relationships are generally useful to many institutions, rather than being so broad as to not be useful or so narrow as to be useful to only a few.
8 · 4/6/2016 14:12:20 · William Gunn · Mendeley
Definition: Annotations that contain, among other optional things, a mandatory minimal set of information, formatted according to a standard. Nice-to-haves include discoverability by a third party visiting the resource and machine-readable open licensing. An example would be an annotation on a manuscript created by one set of tools (for example, Mendeley) that could be discovered by a non-Mendeley user when they open the annotated PDF, visit the Mendeley catalog page, or query the Mendeley catalog's API, and then displayed in context on the resource by the tools employed by that user.
Challenges:
* Getting coalition members to agree on a standard and implement it in their tools.
* Negotiating privacy levels of annotations. For example, how can an annotation restricted to a certain set of users on a platform remain restricted to those users when it leaves the platform?
* Negotiating access rights. For example, if a copyrighted PDF contains an annotation, should that annotation be discoverable when a researcher who doesn't have access to the full text visits the abstract? How should it be displayed?
10 · 4/11/2016 7:38:33 · Mark Patterson · eLife
Definition: We see interoperable annotation as a way to enhance research communication by facilitating multiple forms of public and private annotation and discussion around content, supporting efforts such as post-publication review and journal clubs as well as ad-hoc discussions. An interoperable annotation system will also allow those discussions to be associated with the content regardless of where they occur.
Challenges: Convincing scientists that this is worth the effort; providing compelling use cases that support the work of scientists and make their lives easier.
11 · 4/12/2016 4:57:30 · Scott Edmunds · GigaScience
Definition: Cross-platform, i.e. working across different databases, datatypes, research objects, domains, etc.
Challenges: Silos (paywalls, people worried about transparency, IP, etc.). Mistranslation (language, domain-specific terminology, etc.).
12 · 4/13/2016 15:25:22 · Brooks Hanson · AGU
Definition: Interoperable annotation would provide threaded/linked discussions with enough metadata and connections to allow filtering, discoverability, linking, and other functionality around online content.
Challenges: To me the major challenge is simplifying annotation while collecting/assigning the metadata needed to drive higher functionality.
13 · 4/13/2016 17:44:06 · David Arjanik · NYU Digital Library Technology Services (DLTS)
Definition: Annotation that works across different systems and formats, taking into account that the scholarly works being annotated might be living things that change over time.
Challenges: Maintaining the integrity of the annotations when the content being annotated has changed. Examples:

* The text that was highlighted or annotated has changed or disappeared.
* The larger work that contained the text that was highlighted or annotated has changed or disappeared.
* The URL of the "page" has disappeared.
* The text that was highlighted or annotated has not changed or disappeared but has moved to a different location on the page loaded by the URL.
* The text that was highlighted or annotated has not changed in content but has changed in presentation. Or, the only content changes were whitespace changes.

All of the above could also apply to annotations made on other annotations. What happens when the original annotations change?
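One common mitigation for several of the cases above is to anchor an annotation to an exact quote plus its surrounding context, rather than to a fixed character offset, so a highlight can be relocated after the page changes. The sketch below is a simplified illustration in that spirit (akin to re-anchoring a W3C TextQuoteSelector); the `anchor` function and its scoring are hypothetical, not any particular system's implementation:

```python
def anchor(text, quote, prefix="", suffix=""):
    """Locate `quote` in `text`, preferring the occurrence whose
    surrounding context best matches `prefix` and `suffix`.
    Returns (start, end) character offsets, or None when the quote
    has disappeared (the annotation is orphaned)."""
    candidates = []
    start = text.find(quote)
    while start != -1:
        end = start + len(quote)
        # Score each occurrence by how much of its context survives.
        score = 0
        if prefix and text[:start].endswith(prefix):
            score += 1
        if suffix and text[end:].startswith(suffix):
            score += 1
        candidates.append((score, start, end))
        start = text.find(quote, start + 1)
    if not candidates:
        return None
    # Highest context score wins; ties go to the earliest occurrence.
    score, start, end = max(candidates, key=lambda c: (c[0], -c[1]))
    return (start, end)
```

A real implementation would add fuzzy matching so that small edits near the quote degrade the match gracefully instead of orphaning the annotation outright.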
14 · 4/14/2016 19:29:15 · David Millman · NYU
Definition: That networks of annotations can be repurposed and edited to become the next-generation artifacts of scholarly conversations.
Challenges: Annotation meets versioning. How to track the repurposing and editing, the overlapping graphs of provenance and annotation?
15 · 4/15/2016 13:42:59 · Ron Snyder · ITHAKA / JSTOR
Definition: Much content, such as that on JSTOR, can be rendered in multiple ways, including PDF, page images, and HTML. A user should be able to create an annotation in any version and have it displayed in all others.
Challenges: Correlating annotations made to a text-based rendition with a specific image/region on a page scan.
16 · 4/15/2016 18:53:16 · Patrick Johnston · John Wiley

> Scholarly annotations are annotations that conform to an agreed set of community standards and protocols that encourage openness and interoperability without stifling innovation or circumventing the right to privacy. They are portable, consistently identifiable, scoped consistently, curatable, archivable, and classifiable in a way that benefits the global scholarly community.

### Notes

We should distinguish annotation in the context of scholarship (let's call this **scholarly annotation**) from purely social annotation. The mechanisms for both may, to all intents and purposes, be identical, but the purpose is different: scholarly annotations are there to bolster, substantiate, augment, or counter what they annotate. There are therefore greater obligations in terms of persistence and the ability to draw new conclusions from the annotations.

Some of the following are already properties of open annotations, but the emphasis is a little different, and there are nuances. For example, we should consider that not all annotations may be publicly available, at least not immediately, and that annotations may be anonymized (e.g. double-blind review).

* Conforming
  * Annotations conform to a set of community standards designed to maximize interoperability, for example the [Web Annotation Data Model](https://www.w3.org/TR/annotation-model/#aims-of-the-model). These take precedence over proprietary innovations in annotating agents.
* Portable
  * Annotations can be easily moved from one hosting service to another.
  * They can be easily federated across multiple hosting services through a common annotating agent.
* Consistently identifiable
  * Both the annotations and the thing annotated come with an immutable notion of identity that is independent of where/how they are hosted, or at least a consistent means to establish provenance to something that has such an identity.
  * This implies, at least for the case of scholarship, that annotated content has some kind of permanent home address (URI) (see challenges), though not necessarily a permanent home (URL).
  * Some content may not exist in online form. We need a consensus approach to identifying content for the purpose of annotation that decouples it from the hosting service. Consider, for example, historical research against different editions of Shakespeare, the pages of which may only be rendered graphically, with or without transcript text.
  * Any identity strategy must, over time, tend towards convergence rather than divergence. In other words, any two hosting systems should be able to disambiguate a shared identity and predictably settle on a common underlying identity. We should consider the role of identity name services (INS, like DNS) such as https://doi.org, both for the thing annotated and for the annotations themselves.
  * Annotations should be referenceable whether or not they sit behind a paywall; in other words, the identity of the annotation is public even if the annotation body itself isn't. Some communities may sit behind an access control layer for legitimate reasons of privacy, particularly in traditional review models.
* Consistently reproducible annotation scope
  * The rules of scoping algorithms are publicly available and testable by implementers.
  * Proprietary/closed-source scoping algorithms are by definition *not* interoperable.
  * Scoping algorithms may be optimized based on the underlying structure of content, for example to provide anchors resilient to content changes, but these must always degrade gracefully when the structure changes (i.e. a different serialization of the content).
* Curatable
  * A consistent vocabulary for flagging annotations as inappropriate or irrelevant that is respected across communities.
  * Consistent conflict resolution. There will inevitably be disagreements, but each community must at the very least declare how it resolves conflicts in a way that avoids discrimination of any kind.
* Archivable
  * Annotations used for a specific purpose, such as peer review or copyediting, specifically where the annotations lead to decisions and/or a consensus opinion, must be licensed in such a way that they can be archived alongside the original content. This does not prevent private communities from operating, just that those communities are by definition excluded from such purposes, i.e. they are considered more like social annotations. Nor does it prevent anonymization or delayed publication (for example during peer review).
  * Scholarly annotations on content behind a paywall should not be more restricted than the content they annotate.
  * Annotation services must make their annotations available for archiving purposes. For example, if a reviewer is given the option of using the hosting service of their choice, and the review is used to make a decision that affects the legitimacy of the source content, then the annotations should be archivable with the content in a permanent repository.
* Classifiable
  * A consistent underlying vocabulary for ranking the usefulness of annotations based on crowd consensus and/or community moderation.
  * A consistent normalizing algorithm for numerical ranking. Ideally this would be combinable across hosting services that implement the same ranking model.
  * Marking offensive or irrelevant annotations.
  * Expert endorsement through community.
  * Certification that ranking conforms to specific norms (patterns of abuse, appropriate anonymity).
* Accessible
  * Scholarly annotations should be accessible to people of all abilities. This may include obligations on the part of annotation agents, hosting services, community members participating in discussions, and community curators. For example, providing context around images.
### Challenges

Curation and noise: how do we differentiate annotations that are not useful or relevant from those that promote discussion? The role of communities, and interoperable community standards, should help. For example, it should be possible for a 4chan-like, no-holds-barred community to co-exist with a more formally curated journal community: it is the consumer's choice whether they want these to coexist in the same user experience, but they should be able to make that choice in an informed manner, and in a way that does not lead to false accusations against one or the other community.

Notions of ownership and copyright. To a large extent these are the same as for any web content; however, the conversational nature of annotation makes it more susceptible to inadvertent transgressions and general bad blood. This is where clear obligations around archiving are crucial.

What happens if a hosting service moves annotations to another hosting service?

Open annotations made on content behind a paywall, and how these might be exposed. For published content, the abstract may be sufficient (the default scope being the entire article), though some form of snippeting that does not circumvent the paywall could be possible. Of course, if a person viewing an annotation also has access to the underlying content, we must not prevent a seamless user experience. Everything is much simpler if the content is always open, but interoperability should not depend on this being the case.

The immutability of annotated content. Content that is published through formal channels comes with some sense of obligation on the part of its publisher. However, even publishers are subject to mergers and acquisitions, and re-hosting is a reality. Similarly, much of the Web is volatile, and content that is merely 'put out there' will not necessarily be there tomorrow.
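To make the "conforming" and "portable" properties above concrete, here is a minimal annotation following the W3C Web Annotation Data Model, built as a Python dictionary and serialized to JSON-LD. The target URL, quoted text, and comment body are illustrative placeholders, not drawn from any real corpus:

```python
import json

# A minimal Web Annotation (W3C Web Annotation Data Model) as JSON-LD.
# The target source and quoted text are illustrative placeholders.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "type": "Annotation",
    "body": {
        "type": "TextualBody",
        "value": "This claim needs a citation.",
        "format": "text/plain",
    },
    "target": {
        "source": "https://example.org/article/123",
        "selector": {
            # Anchors to an exact quote plus surrounding context,
            # independent of how the page is hosted or rendered.
            "type": "TextQuoteSelector",
            "exact": "annotations are portable",
            "prefix": "Interoperable ",
            "suffix": " across services.",
        },
    },
}

print(json.dumps(annotation, indent=2))
```

Because the payload is plain JSON-LD with a shared `@context`, any conforming hosting service or annotating agent can exchange it without proprietary translation, which is the essence of the portability requirement.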
17 · 4/15/2016 21:58:23 · Anna Pollock-Nelson · MIT Press
Definition: Annotations can be saved, shared, and versioned over time and across various systems; annotations would carry with them metadata to allow them to be filtered, searched on, hidden, and repurposed; annotations might appear differently across systems, but would have the same basic functionality and purpose.
Challenges: Defining the standards for metadata, sharing, display, etc.
18 · 4/16/2016 14:43:35 · Dan Whaley · Hypothes.is
Definition: An interoperable annotation layer over knowledge would allow publishers to publish and control authoritative annotations for their content, while users could collaborate with their existing communities and colleagues in a consistent manner regardless of the underlying content or platform, using the service provider(s) of their choice.
Challenges: Getting a majority of publishers and platforms to begin working together.
19 · Problems: Kindle not open; ReadCube not open; interdisciplinary and proprietary formats.
20 · User stories: Python notebooks lack collaboration; maybe Hypothesis could help. Using NLP/AI on articles and annotations could automate systems (tag arguments, comments). MIT Press: publishers might use layers to incentivize engagement for marketing purposes. Cameron (Annual Reviews): leverage the expertise of authors. How to promote? Getting people to start communicating.
21 · bioRxiv sees more engagement in commenting; we don't know why. Build use cases on existing behavior. A safe environment? Let people test it to feel comfortable. Private annotation. Looking for answers to comments? Make it public! Journal-club usage is very valuable. Author-invited annotation on a particular manuscript: public? Public peer review? Discovery of experts? Credit: aggregate your body of scholarly contributions (maybe). Social aspects: lessons learned. Include in altmetrics.
22 · Two discussions. Concern about high-quality annotation. Factual annotation vs. comments. Use cases: reporting requirements for grant applications (faster to annotate); author-enhanced pre-publication and post-publication; peer review; linking artifacts; pedagogy; guides for students; qualitative text coding; repositories resolving hard-to-store formats.
24 · Comments: email client standards; annotation granularity; ORCID; think more conceptually.
25 · Open conversation; public/private; peer review; need one use case to start. Culture/credit. Personal / small group / public.
26 · What is the day-to-day activity?
27 · Floor comments: semantic interoperability via a controlled vocabulary? Don't go crazy; it self-sorts within a domain. See the DataONE project and its provenance work.