To read more on this tool: https://tattle.co.in/products/ogbv/
List of suggested features
- The plug-in itself should be available in different languages, with more visual components and screen-reader accessibility.
- Build a bot: add a list of offensive words, and the bot would tweet articles to offending users explaining why these words shouldn't be used (a rough sketch of this word-to-article idea follows the list).
- Tweets/posts that carry automated counter speech or respond to an offensive post could come from a separate, third-party account rather than the user’s own account. It is important to make sure that the account linked with the tool doesn't get blocked.
- There should be a lightweight mobile version of the tool.
- One type of oGBV is surveillance without the knowledge of the person being watched. Can this tool flag such surveillance?
- If easy filtering is offered, can we limit the filters to ones that can’t be discriminatory?
- Sometimes we also want to reduce the possibility of the conversation continuing; an automated response might keep the conversation going and could escalate it. Trolls will be trolls and are not open to learning, so it is better to hide than to respond.
- Rather than hiding it, flagging the content as misogynistic could be more useful.
- Legal terms and reporting guidelines should be made more digestible.
- In places with less digital access, a browser plug-in can help create group vigilantism. Groups of adolescents can be built and trained further to spread awareness.
- Think like a new user and build very basic solutions.
- I would expect the extension to check with me before doing anything. I should be told when a tweet is flaring up; a pop-up could ask me whether this is safe virality or not.
- The proposed non-ML features are more useful; I wouldn't worry so much about moderation and specific detection.
- Keep the filtering simple and bring in archiving tools.
- Invoke networks for virality: tag other people to get help against the troll army. The good thing is that they carry on a conversation and completely ignore the hater. When you tag people in, they are having fun, there's humour and joking, and the hater has to watch that.
- There should be a way for other users of the plug-in to say that they are available as listeners. The tool could suggest some of these people.
- Periodic reminder that this information is not being read by anyone.
- Usually people don’t know that they are being cyberbullied. The tool could help them identify what’s happening to them and suggest available help. Can this tool validate that experience?
- To maintain anonymity on Twitter, we often use multiple accounts from which we post different aspects of our lives. The tool should notify you which account you are posting from.
- Why do I need ML if I can just have simple filtering? (See the simple-filtering sketch after this list.)
- Action should be on the violence perpetrated rather than the content.
- Instead of removing offensive content, make it public to point out that it is wrong.
- It’s better if users can restrict it for some time; sometimes it’s good to live in a bubble. Silence hashtags.
- An automated message that says “this language is objectionable” could be sent to people to encourage more sensitive language, but direct action should be avoided.
- Journalists can’t keep track of how their images are circulated. To tackle this non-consensual use of images of public figures, users could add their commonly used images and videos and give the tool access so that it can track those visuals (see the image-matching sketch after this list).
- There should be a mechanism for exporting chat content in a form that is useful for forensics, so that the victim doesn’t have to hand over the phone: an independent, third-party chat export tool that can be trusted.
- Voice notes and nudes in private messages: if the victim downloads this content as evidence, it can amount to accepting the conversation and lead to victim-blaming. Third-party download should be compatible with the tool.
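
A rough sketch of the word-to-article bot idea suggested above, written in TypeScript. The word list, the article URLs, and the `postReply` helper are hypothetical placeholders, not an existing Tattle or Twitter API; actual posting would go through the Twitter API from a dedicated third-party account, which is not shown here.

```ts
// Hypothetical mapping of offensive words to explainer articles.
interface OffensiveTerm {
  word: string;
  articleUrl: string; // explainer on why the word is harmful
}

// Illustrative entries only; a real list would be community-maintained.
const TERMS: OffensiveTerm[] = [
  { word: "slur1", articleUrl: "https://example.org/why-slur1-is-harmful" },
];

// Find the first listed term that appears in the tweet text.
function findTerm(tweetText: string): OffensiveTerm | undefined {
  const lowered = tweetText.toLowerCase();
  return TERMS.find((t) => lowered.includes(t.word.toLowerCase()));
}

// Placeholder: real posting would use the Twitter API from a separate
// bot account so that the user's own account is never at risk.
async function postReply(inReplyToTweetId: string, text: string): Promise<void> {
  console.log(`Would reply to ${inReplyToTweetId}: ${text}`);
}

// Given an offensive tweet, reply once with the matching explainer link.
async function respondToTweet(tweetId: string, tweetText: string): Promise<void> {
  const term = findTerm(tweetText);
  if (!term) return;
  await postReply(
    tweetId,
    `This word can cause real harm. Here is why: ${term.articleUrl}`
  );
}
```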
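A minimal sketch of the non-ML, keyword-based filtering several comments ask for, including the choice between hiding a post and flagging it as abusive. The word list, the `[data-testid="tweet"]` selector, and the label text are assumptions for illustration; Twitter's markup changes often, so a real extension would keep selectors and word lists configurable.

```ts
// Hypothetical user-maintained list of offensive terms.
const OFFENSIVE_WORDS: string[] = ["slur1", "slur2"];

type FilterAction = "hide" | "flag";

// Naive substring match; a real tool would do word-boundary matching
// and handle multiple languages and scripts.
function matchesWordList(text: string, words: string[]): boolean {
  const lowered = text.toLowerCase();
  return words.some((w) => lowered.includes(w.toLowerCase()));
}

// Either hide the post outright or keep it visible with a visible flag.
function applyAction(post: HTMLElement, action: FilterAction): void {
  if (action === "hide") {
    post.style.display = "none";
  } else {
    const label = document.createElement("div");
    label.textContent = "⚠ Flagged as abusive by your word list";
    label.style.fontWeight = "bold";
    post.prepend(label);
  }
}

// Scan posts currently on the page and apply the chosen action.
function filterPosts(action: FilterAction): void {
  document
    .querySelectorAll<HTMLElement>('[data-testid="tweet"]')
    .forEach((post) => {
      if (matchesWordList(post.innerText, OFFENSIVE_WORDS)) {
        applyAction(post, action);
      }
    });
}

// Re-run as new posts load (infinite scroll).
new MutationObserver(() => filterPosts("flag")).observe(document.body, {
  childList: true,
  subtree: true,
});
```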
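One possible mechanism for the image-tracking suggestion: compute a small perceptual hash of each image the journalist registers, then compare images seen on the page against those hashes. This is a generic average-hash technique, not Tattle's implementation; the 8x8 size and the match threshold are illustrative choices, and cross-origin images need CORS headers before their pixels can be read in the browser.

```ts
// Compute an 8x8 average hash of an image as a 64-character bit string.
async function averageHash(url: string): Promise<string> {
  const img = new Image();
  img.crossOrigin = "anonymous"; // pixel access requires CORS-enabled images
  await new Promise<void>((resolve, reject) => {
    img.onload = () => resolve();
    img.onerror = () => reject(new Error(`Could not load ${url}`));
    img.src = url;
  });

  const size = 8;
  const canvas = document.createElement("canvas");
  canvas.width = size;
  canvas.height = size;
  const ctx = canvas.getContext("2d")!;
  ctx.drawImage(img, 0, 0, size, size);

  const { data } = ctx.getImageData(0, 0, size, size);
  // Convert each pixel to grayscale.
  const gray: number[] = [];
  for (let i = 0; i < data.length; i += 4) {
    gray.push((data[i] + data[i + 1] + data[i + 2]) / 3);
  }
  const mean = gray.reduce((a, b) => a + b, 0) / gray.length;
  // Each bit: 1 if the pixel is brighter than the mean, else 0.
  return gray.map((g) => (g > mean ? "1" : "0")).join("");
}

// Number of differing bits between two equal-length bit strings.
function hammingDistance(a: string, b: string): number {
  let d = 0;
  for (let i = 0; i < a.length; i++) if (a[i] !== b[i]) d++;
  return d;
}

// Two images are "probably the same" if their hashes differ in few bits.
// The threshold of 8 bits out of 64 is an arbitrary illustrative choice.
function probablySameImage(hashA: string, hashB: string): boolean {
  return hammingDistance(hashA, hashB) <= 8;
}
```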