THE PROBLEM
Historically, filter bubbles were caused by geographical and cultural confinement, such as the information control exercised by the church in Europe. Printed media such as newspapers allowed information to spread more easily, creating greater exposure and dialogue.
Today, information spreads digitally through media such as social networks and search engines at a far larger scale. More controversial and divisive information circulates across the internet, especially within certain “bubbles,” reinforcing existing biased views. These effects can have large-scale consequences: societal division, less effective governance, lost time, and vulnerability to both domestic and foreign malicious agendas.
How might we increase exposure and dialogue among people with controversial and differing views who are confined to their own filter bubbles?
OUR PLAN
Our proposed solution is to deploy bots that mitigate the influence of disruptive sources and foster productive discussion. The bots will be trained to identify and target the most common sources and types of disruptive information.
Using this information, we can optimize each bot to further promote productive discourse and better train the botnet to respond to a wider range of digital conflict. We will evaluate the bots' performance through built-in feedback mechanisms and observation of user interactions.
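The plan above can be illustrated with a minimal sketch of a single moderation cycle: flag disruptive posts, tally which source produces them most often, and record feedback counts for later retraining. All names here (`Post`, `classify_post`, the keyword list) are hypothetical illustrations, not the project's actual implementation, and the keyword check stands in for whatever trained classifier the bots would use.

```python
# Hypothetical sketch of one bot moderation cycle. Every identifier here
# is illustrative; the real system would use a trained classifier, not a
# keyword list.
from dataclasses import dataclass
from collections import Counter

@dataclass
class Post:
    author: str
    text: str

# Placeholder stand-in for a learned model of disruptive content.
DISRUPTIVE_MARKERS = {"troll", "hoax", "smear"}

def classify_post(post: Post) -> bool:
    """Flag a post as disruptive if it contains any marker keyword."""
    words = set(post.text.lower().split())
    return bool(words & DISRUPTIVE_MARKERS)

def run_moderation_cycle(posts: list[Post]) -> dict:
    """One cycle: flag posts, find the most frequent disruptive source,
    and record feedback counts for later retraining."""
    flagged = [p for p in posts if classify_post(p)]
    source_counts = Counter(p.author for p in flagged)
    return {
        "flagged": len(flagged),
        # The dominant source of disruptive content this cycle:
        "top_source": source_counts.most_common(1)[0][0] if flagged else None,
        "feedback": {"total_seen": len(posts), "total_flagged": len(flagged)},
    }
```

The returned `feedback` dictionary models the built-in evaluation mechanism: each cycle's counts could be accumulated over time to retrain the classifier and measure whether flagged content is declining.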
Sean Fish, Divya Pinnaka, Maya Rajan, TJ Crawford, Tanmoy Panigrahi
To whom does it matter?
This problem matters to everyone: the entire globe is affected by the choices the internet enables. As social media continues to evolve, and as deepfakes spread, tracking disingenuous behavior will only grow in importance.
Why does it matter?
Deploying bots to mitigate the influence of disruptive sources will foster productive discussion and help defuse attempts to derail civil conversation.
THE IMPACT
This system will improve automatic moderation of online discussion forums, encouraging more civil discourse and mitigating the impact of chaotic actors, both independent and state-sponsored.
How will we know we are making progress?
We will know we are making progress when the following benchmarks are met:
ZeroDay
GRAND CHALLENGES
Flowchart describing the basic algorithm used by the bot.