1 of 12

SenseCraft Game Design

A game platform to build citizens' capacity to engage with complex evidence

Jack Park <jackpark@topicquests.org>

Marc-Antoine Parent <maparent@conversence.com>

https://creativecommons.org/licenses/by-nc/2.0/

© 2023 TopicQuests Foundation and Conversence

2 of 12

Collective intelligence and citizen participation

  • Every public decision should be justified
    • Goals, evidence, provenance, context, limitations…
    • vs opaque decision processes: bureaucracy, hidden algorithms, LLMs
  • Small diverse groups make better decisions on complex issues
    • Self-selection bias, sampling bias
  • Group dynamics do not scale
    • Size scaling: Time to hear everyone out
    • Complexity scaling: Simplify issues (vote) or take time to master background knowledge
    • Dividing into workgroups divides understanding
  • Language is sequential, maps are fractal
    • Compositionality of units allows more contributions and contributors

3 of 12

How to engage citizens with complexity

  • Map literacy is a skill
    • Good map writers help in medium-sized assemblies
    • Asynchronous crowdsourcing leads to coherence issues
  • How to motivate citizens to learn to engage with complex maps?
  • People spend time to master games
    • Even with very complex rules!
    • Games provide a context with a common goal

4 of 12

Early gamification: Foresight Engine, MMOWGLI

  • Individual play
  • Conversation tree structure
  • Points for provoking responses
  • Moderator bonus
  • Participant bonus

Note: Current state of our prototype uses a similar conversation structure

MMOWGLI (Jensen & Tester, 2012)

5 of 12

Early gamification: induced gaming dynamics

  • Dynamics induced by “playing the rules”
    • Reactive play (fast thinking)
    • Competitive engagement can dominate inquisitiveness
    • Moves to establish personality
      • Contributes to community, not to structure. Does not scale well!
  • Lower signal-to-noise ratio
  • Not worth the (high) cost of game preparation and moderation
  • Tree structure does not encourage convergence

Jack Park was asked (2010): “How can we have civil conversations online about politics?”

6 of 12

World of Warcraft meets global sensemaking

“I would rather hire a high-level World of Warcraft player than an MBA from Harvard” –John Seely Brown (2012)

  • Social side of the interaction will mostly happen within-team
  • Team will want to keep disruptive behaviour in check to protect reputation
  • Roles lead to specialization, reduce context switching, and enforce diversity
  • Roles and aliases allow disengaging from one’s point of view
  • Mentorship dynamics within the team

7 of 12

Design for deep listening and nuance vs reactivity

  • Turn-based vs Real-time
    • Each team presents its positions as text
    • Each team maps the other teams’ scenarios
    • Map unification and clarification
  • Reward exhaustive mapping
    • Reward teams that contributed an element (reward divided among those teams: rewards originality)
    • Reward teams that mapped that element: rewards attention to detail
      • The map element has to be anchored in the other team’s presentation
  • Encourage clarification questions
    • vs False consensus, Dunning-Kruger
    • Provide alternative plausible interpretations
    • Verify plausibility of alternative with the other teams
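The reward split above can be sketched in code: the originality reward for an element is divided among the teams that contributed it (rarer contributions are worth more per team), while every team that anchored the element in another team's presentation earns a full mapping reward. This is a minimal illustration only; the function name and point values are assumptions, not the actual SenseCraft scoring rules.

```python
from collections import defaultdict

def score_round(elements, originality_points=12, mapping_points=3):
    """Score one mapping round.

    `elements` is a list of dicts, one per map element:
      - "contributors": teams that originated the element; the
        originality reward is split among them, so an element
        only one team found is worth the most to that team
      - "mappers": teams that anchored the element in another
        team's presentation; each earns the full mapping reward,
        encouraging exhaustive, attentive mapping
    """
    scores = defaultdict(float)
    for element in elements:
        share = originality_points / len(element["contributors"])
        for team in element["contributors"]:
            scores[team] += share
        for team in element["mappers"]:
            scores[team] += mapping_points
    return dict(scores)
```

With this split, a team that uniquely contributes an element keeps the whole originality reward, while an element found by every team yields little beyond the mapping reward, which is the intended pressure toward originality and attention to detail.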

8 of 12

Design for coopetition vs adversarial dynamics

  • Mutual scoring
    • Team must spend tokens to identify quality contributions of other teams
  • Reward syntheses that bridge elements of previous proposals
    • The contributions must be recognized
    • The integrated elements will also count towards the original contributor’s total
    • Track adoption of proposals and syntheses by teams; favour change (Change My Mind)
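The coopetition mechanics above can be sketched as follows: each team has a limited token budget to endorse other teams' contributions (so endorsements are costly and must be prioritized), and a synthesis passes part of its credit back to the teams whose elements it integrates. Class and method names, the budget, and the credit-splitting rule are assumptions for illustration, not the implemented game rules.

```python
from collections import defaultdict

class MutualScoring:
    """Token-based mutual scoring with synthesis credit."""

    def __init__(self, teams, budget=10):
        # Each team starts with a fixed token budget for endorsements.
        self.tokens = {team: budget for team in teams}
        self.scores = defaultdict(int)

    def endorse(self, scorer, target, cost=1):
        # Spending scarce tokens forces teams to identify the
        # genuinely high-quality contributions of other teams.
        if scorer == target or self.tokens[scorer] < cost:
            return False
        self.tokens[scorer] -= cost
        self.scores[target] += cost
        return True

    def credit_synthesis(self, author, integrated_from, points=6):
        # A synthesis scores for its author, and the integrated
        # elements also count toward the original contributors'
        # totals, rewarding bridge-building over rivalry.
        self.scores[author] += points
        share = points // (len(integrated_from) + 1)
        for team in integrated_from:
            self.scores[team] += share
```

Because endorsing a rival's work costs tokens while synthesis credit flows back to original contributors, the incentives favour recognizing and integrating other teams' proposals rather than purely adversarial play.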

9 of 12

Design for epistemic curiosity vs playing fixed rules

  • External evidence as minimal justification
    • Demonstrate understanding of the evidence (ask for methodological limitations?)
    • Can be prone to adversarial dynamics
    • Contested status for contradictory evidence
  • Resolving contested evidence with meta-reflection
    • Identify shared heuristics (criteria) that would allow deciding the contested truth (value)
      • Argumentation schemes
      • Look for blind spots, bias
    • Encourage asking the why of the why
      • Deep democracy

10 of 12

Co-design as a meta-goal

  • Engelbart’s vision: Coevolution of humans, tools and processes
    • Expand to ecosystem
  • Players define Quality
    • Mutual scoring as first step
      • Risk of collusion?
    • Identification of cognitive patterns
      • Player-enriched badge system
    • Building a library of meta-heuristics
  • Eventually: co-designed game dynamics?
    • Players design their own team dynamics (through roles)
    • Defining what pro-social behaviours the game should encourage
    • Construct a grammar and representation of meta-reasoning

Engelbart’s Vision

11 of 12

How would we include AI in the process?

  • AI as research assistants
    • Players will use AI. Accept it, but red-flag hallucinated evidence.
  • AI as game narrator
    • Appropriate for the big picture, game play highlights
  • AI as sparring partner
    • Learn to guess the AI-simulated team!
    • Learn to identify bias or unverified assumptions in AI-generated positions
  • AI as reference point of consensus reality
    • When a team makes a claim that AI did not suggest…
    • It could be extraordinarily original!
    • But it probably requires extraordinary evidence
  • AI as design partner

12 of 12

Co-design with the Collective Intelligence Community

  • Many Collective Intelligence sub-communities are working on their own
  • We need to exchange tools, data, methodology
  • Co-evolving a tool ecosystem
  • We need a shared protocol for CI interoperability
    • My previous contribution: Catalyst Interchange Format (2014)
    • Co-design the requirements!
    • Anything can be re-interpreted.
      • Deep recursivity
    • Towards HyperKnowledge
  • https://sensecraft.garden
  • https://hyperknowledge.org