ACM Multimedia (ACMMM) has not yet opened its review data to the public. However, there is noticeable interest in the community in seeing how scores are distributed. In light of this, we are exploring the possibility of anonymously gathering ratings and confidence levels to create a visualization of the score distribution.
We acknowledge that this method **might introduce bias**. The collected entries will be updated on the Paper Copilot site without modification, and no personal data will be collected.

Please fill in the ratings in the same order, separated by commas ',', semicolons ';', or spaces ' ', e.g.:
Ratings: 5,5,3,1
Confidence: 2,2,3,4
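For illustration only, here is a minimal sketch of how such a free-form score string could be normalized. The helper name `parse_scores` is hypothetical and is not part of the actual Paper Copilot form or back-end:

```python
import re

def parse_scores(raw: str) -> list:
    """Split a score string on commas, semicolons, or whitespace and return ints.

    Hypothetical example for illustration; the real form may handle input differently.
    """
    tokens = [t for t in re.split(r"[,;\s]+", raw.strip()) if t]
    return [int(t) for t in tokens]

# Example usage with the sample values above:
print(parse_scores("5,5,3,1"))    # [5, 5, 3, 1]
print(parse_scores("2; 2 3,4"))   # [2, 2, 3, 4]
```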
Notice:
- 1: Trivial or Wrong; 2: Strong Rejection; 3: Clear Rejection; 4: Rejection; 5: Marginally below acceptance threshold; 6: Marginally above acceptance threshold; 7: Accept; 8: Clear Accept; 9: Strong Accept; 10: Seminal paper.
- You can update your submission at any time by revisiting this link; only the latest submission for each paper ID will be considered. (The paper ID is used solely to improve the accuracy of the score curve; a false paper ID will introduce noise into the collected data.)
- This Google Form will be removed if there are no responses or once the reviews become publicly available. We will also post updates on Twitter during the discussion period; follow me there if needed.