Developing an App for Online Debates

By akohad, Apr 26, 2023


DECENTRALIZATION / SOCIAL MEDIA

Some thoughts on the practicalities of building a fair system

The existing mechanics of discussion on the internet tend to segregate people into self-reinforcing ideological groups, making respectful debate very difficult.

In this article we first examine the problem: why this separation occurs, why respectful debate is hard online, and what we can learn from offline debate. We then consider how an online solution could work given the constraints of no central authority and unverified identities.

Why do we naturally form ideological groups?

Human nature tends to promote segregation into different groups. For example, consider these three classic results from sociology:

  • Confirmation bias – we’re more likely to accept information that confirms beliefs we already hold.
  • Social proof – we’re more likely to trust people who we think are like us.
  • Consistency principle – we’re more likely to trust people who don’t change what they say.

These tendencies make us prefer groups with people who are most like us and hold the same beliefs as us. We’re biased to reject criticism to avoid the pain of re-thinking our beliefs — especially if it comes from someone outside our group.

Why does internet discussion kill respectful debate?

We can easily imagine that these instincts for group attachment would be beneficial in the kind of survival situation that drove our evolution, but unchecked, they promote division in internet discussion.

Discussion happens in many places on the internet, but in most cases it takes the form of a free-for-all sequence of messages, e.g.

  • Twitter, Instagram, TikTok etc.
  • Blog or news article comment section
  • Facebook or WhatsApp group

The free-for-all discussion format rewards quantity over quality – whoever can sling the most mud wins. In such a system it’s better to spend your effort furiously typing than thinking or researching.

Different sites vary in their level of inclusivity. Social media sites like Twitter are more inclusive than a news or blog site with a particular political leaning. A Facebook group on a specific topic probably has a very narrow audience.

With increasing inclusivity comes increasing disagreement over issues and more frantic mudslinging. What do social media apps do to address this? Instead of adjusting their format to foster respectful debate, they:

  • Reduce inclusivity – ban people that say the ‘wrong’ thing
  • Exercise authority – persuade most people of the ‘right’ thing to believe using fact checks and make the ‘wrong’ content harder to find by tweaking the algos that decide what people see

These measures appease those users who agree with the ‘right’ thing but push others to alternative sites – as evidenced by the rise of Gab as a home for Twitter exiles.

Essentially, Twitter becomes more like a big news site with a political leaning rather than an all-inclusive discussion platform.

What about less inclusive sites like blogs or newspaper sites? The political slant of the editorial and/or the personality of the writer(s) will naturally select the pool of contributors to the comments. For blogs in particular, limited resources and search engine optimization pressure further lead editors to narrow down their audience.

This selection results in low inclusivity, with opposing groups separated from each other by the technical barriers of conversing across separate sites; the only communication between them comes through voluntarily placed hyperlinks.

Having a narrow audience, these sites are able to foster a stronger group identity that is defined by the editorial content and further policed by the regular participants.

Why is Wikipedia not a good model for debate?

Wikipedia is very interesting because it has produced a huge amount of quality content by open collaboration. That would not have been achievable without a structured approach.

Wikipedia uses an authority structure where those who have demonstrated the greatest commitment to the project are granted the highest access levels.

This is a great system for documenting established facts but perhaps less so for discussing contentious issues. A Wikipedia article is typically written by an individual and then later edited by others. That could leave it open to accusations of bias from the outset.

It’s also difficult to show fair representation in such a system because control sits with the editors who happened to make it into the highest positions of authority, and nothing is known about their political leanings or other biases.

What can we learn from offline debate?

Let’s take a look at the Oxford Languages definition of debate:

a formal discussion on a particular matter in a public meeting or legislative assembly, in which opposing arguments are put forward and which usually ends with a vote.

The key parts of this are:

  • Formal discussion – it’s not just a free-for-all – there is structure that gives equal opportunity for both sides to present their arguments and respond to the other side
  • Opposing arguments are put forward – both opposing sides are represented, and each side uses arguments to support their side or to refute the other side’s arguments
  • Ends with a vote – it is time limited and comes to a conclusion about which side has the best arguments

This structure ensures an orderly discussion that produces a result.

After a debate concludes, participants are free to disagree with the result, but by choosing to enter the debate they have committed to respecting it. If they feel their side is not fairly represented, they should refuse to take part.

What do we mean by fairly represented? Essentially, all the relevant arguments must receive equal consideration in determining the end result. The structured debate format must remove any advantages conferred by force by ensuring:

  • Equal size of opposing forces
  • Equal time and space to present arguments
  • Equal opportunity for arguments to be heard

However, effort must also be made to ensure appropriate selection of the teams and judges.

For example, if the judges are biased towards one side and ignore the arguments presented by the other side then the debate is pointless.

On the other hand, the teams themselves might fail to present all the relevant arguments for consideration. That could be due to:

  • Not having done enough research
  • Not having enough ideological diversity
  • Lacking the necessary presentation skills

Finally, the question being debated needs to be unbiased. If a significant proportion of people can’t agree with either side, then perhaps a different question should be asked.

To understand how debate could work online we’ll need to consider the following questions:

  • How should research be collected?
  • How should the teams be chosen?
  • How should arguments be presented?
  • How should presentations be judged?

In considering these, we need to take into account some general requirements:

  • We need to avoid requiring any central authority, which could be accused of unfairly controlling the result
  • We want to maximise the opportunity for people to participate
  • We want the process to be fun and engaging

Let’s start with the hardest question…

How should presentations be judged?

This is a tricky problem because we want to ensure fair representation. If we simply do a straight poll of all users then the result would almost certainly be biased — except in the unlikely event that the participants happened to be perfectly representative of the wider community.

A more accurate approach is to stratify people into groups based on various characteristics and weight the votes from the groups like polling organisations do. This leads to some more questions:

  • How do we agree what the groups should be?
  • How do we agree what the weights should be?
  • How do we stop one person voting in multiple groups?

We don’t want to rely on a central authority to decide on the groups and their weightings. Instead, these can be specified by the person proposing the debate. Anyone who disagrees with the groups would simply not join that debate, so it is in the proposer’s interest to make them acceptable to the widest possible range of people, allowing the result to be seen as broadly agreed.

Each identity must be limited to one vote per debate, although people who identify with multiple groups could split that one vote between them.

A harder problem is how to stop someone creating multiple identities and using them to vote more than once. This is impossible to prevent completely without a central authority verifying individuals by their birth certificates. However, it may be possible to impose a cost high enough to deter the use of multiple identities, for example by requiring a minimum level of positive feedback from other users as a condition of eligibility to vote in a debate.
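Putting these pieces together, a tally might look something like the following TypeScript sketch. Everything here (the type names, the `tallyDebate` function, the shape of a ballot) is an illustrative assumption rather than a finished design: each eligible identity gets exactly one vote, may split it across the groups it identifies with, and the groups carry the weights chosen by the debate’s proposer.

```typescript
// Hypothetical types and tally logic; names are illustrative only.

interface VotingGroup {
  id: string;
  weight: number; // chosen by the debate proposer
}

interface Ballot {
  voterId: string;
  side: "A" | "B";
  // How this identity splits its single vote across the groups it
  // identifies with, e.g. { under30: 0.5, rural: 0.5 }; shares sum to 1.
  groupShares: Record<string, number>;
}

function tallyDebate(
  groups: VotingGroup[],
  ballots: Ballot[],
  reputation: Map<string, number>, // earned from past participation
  minReputation: number
): { a: number; b: number } {
  const weightById = new Map(groups.map((g) => [g.id, g.weight] as const));
  const counted = new Set<string>();
  const totals = { a: 0, b: 0 };

  for (const ballot of ballots) {
    // One vote per identity per debate.
    if (counted.has(ballot.voterId)) continue;
    // Deter throwaway identities: only sufficiently reputable voters count.
    if ((reputation.get(ballot.voterId) ?? 0) < minReputation) continue;
    counted.add(ballot.voterId);

    // Scale each share of the vote by the weight of its group.
    for (const [groupId, share] of Object.entries(ballot.groupShares)) {
      const contribution = share * (weightById.get(groupId) ?? 0);
      if (ballot.side === "A") totals.a += contribution;
      else totals.b += contribution;
    }
  }
  return totals;
}
```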

How should arguments be presented?

In a traditional debate, arguments are laid out in a speech given face-to-face. The closest online equivalent would be a video presentation. This has the advantage over face-to-face delivery that arguments can be presented without the possibility of interruption.

Arguments could also be presented in essay form as some people will prefer to read rather than watch and some presenters will be more comfortable writing than making a video. This form can be consumed faster than watching a video and is effective at communicating logical arguments but is less able to communicate emotion and is typically less exciting/engaging.

Who should produce the presentation? We could elect a presenter or presenters, but this is likely to lead to dissatisfaction due to the difficulties already described with fair voting in a decentralized system. A better approach would be to allow anyone to present and use voting to rank the presentations.

When are presentations made? A typical format for debate is to follow the sequence:

  • Team A present
  • Team B rebut
  • Team B present
  • Team A rebut

In an online scenario where videos or essays are pre-prepared, this could be made concurrent:

  • Team A & B release presentations
  • Team A & B release rebuttals

This prevents the unfairness of team B seeing team A’s presentation prior to giving their own but not vice-versa.
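One simple way to get this behaviour in software is to hide every submission from everyone until the relevant exchange deadline passes, so the order in which teams upload never matters. The sketch below is a hypothetical simplification (in a genuinely peer-to-peer setting the content itself would also need to be withheld or encrypted until the deadline, not just hidden in the interface).

```typescript
// Hypothetical timing model; field and function names are assumptions.

interface DebateTimings {
  start: Date;
  presentationDeadline: Date; // both teams' presentations unlock here
  rebuttalDeadline: Date;     // both teams' rebuttals unlock here
  end: Date;                  // voting closes here
}

interface Submission {
  team: "A" | "B";
  kind: "presentation" | "rebuttal";
  submittedAt: Date;
}

// A submission counts only if it arrived before its deadline, and it becomes
// visible to everyone (including the other team) only after that deadline.
function isVisible(sub: Submission, t: DebateTimings, now: Date): boolean {
  const deadline =
    sub.kind === "presentation" ? t.presentationDeadline : t.rebuttalDeadline;
  return sub.submittedAt <= deadline && now >= deadline;
}
```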

How should the teams be chosen?

In a traditional debate, teams are created with equal size so that they have the same resources available. This is important due to the limited resources available with a small number of participants. If one side had twice the number of researchers, they could potentially prepare more arguments in the allotted time, giving them an unfair advantage.

Additional people provide an advantage up to a point but once there are enough people to dig up all the relevant arguments in the available time, recruiting more bodies produces diminishing returns – maybe even negative returns due to greater noise. Therefore, in an online scenario where it is much easier to bring together large numbers of people on each side, there is little benefit to restricting team size.

Restricting team size also brings a lot of additional complexity because there needs to be a mechanism for electing teams in a representative and fair way and for replacing team members that are elected but then don’t contribute positively (to avoid malicious placement of dud members by the opposing side).

Considering all the above, it makes sense to drop the restriction on team sizes and allow anyone to participate in a team.

How should research be collected?

It is the role of team members to find evidence and suggest arguments and it is the role of the presenters to assemble these into a coherent case. However, if the teams are large then there could be a lot of information for presenters to sift through. In this case it is useful to have a way to filter out the noise.

In its simplest form, this could be the ability to like a post, combined with options to sort posts by number of likes, emphasise posts above a minimum number of likes, or hide posts below it.

It may also be desirable to allow presenters to pick which specific users’ likes they’re interested in. In this way we see a new role developing, similar to a forum moderator, who has the job of cleaning the stream of incoming messages. In this case though, the moderator can prune quite aggressively because they are only providing their view and not ‘the’ view and the output of multiple moderators can be combined.
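As a rough sketch of what that filtering could look like (all names here are hypothetical), a presenter might rank posts by like count and optionally count only the likes of users whose judgement they trust:

```typescript
// Hypothetical research-room post and filtering helper.

interface Post {
  id: string;
  author: string;
  body: string;
  likedBy: string[]; // ids of users who liked the post
}

// Keep posts with at least `minLikes`, optionally counting only likes from a
// chosen set of users (the presenter's personal "moderators"), then rank the
// survivors with the most-liked first.
function filterPosts(
  posts: Post[],
  minLikes: number,
  trustedLikers?: Set<string>
): Post[] {
  return posts
    .map((post) => ({
      post,
      likes: trustedLikers
        ? post.likedBy.filter((u) => trustedLikers.has(u)).length
        : post.likedBy.length,
    }))
    .filter((entry) => entry.likes >= minLikes)
    .sort((a, b) => b.likes - a.likes)
    .map((entry) => entry.post);
}
```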

Having discussed each part in detail, let’s conclude by summarizing how the whole thing might work:

  • Anyone can propose a debate, consisting of: a question, voting groups (and their weights), timings (i.e. when it starts, when it ends, when presentations/rebuttals are exchanged) and a minimum required reputation level for eligible voters (a rough sketch of this data follows the list)
  • Once the debate start-time passes, anyone can post in the chat room for either side
  • Anyone can like posts
  • Anyone can filter based on likes from specific users
  • Anyone can join either side of the debate as a team participant
  • A team participant can submit a presentation for their side
  • Presentations become visible to all when the exchange deadline passes and are sorted by likes
  • Anyone can like presentations
  • The process repeats for rebuttals
  • Anyone with the minimum required reputation can vote on the debate when the end deadline passes
  • Reputation points are gained by participation in any previous debates
  • When the voting deadline passes the result is settled and the debate is closed
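As promised above, here is a rough sketch of the data that might back a single debate. It is only an illustration of the summary, with every field name an assumption rather than a finalized schema:

```typescript
// Hypothetical shape of a debate; field names are illustrative assumptions.

interface DebateProposal {
  question: string;                               // should read as unbiased to both sides
  votingGroups: { id: string; weight: number }[]; // groups and weights set by the proposer
  minVoterReputation: number;                     // reputation earned in previous debates
  startsAt: Date;
  presentationDeadline: Date;                     // presentations unlock for everyone here
  rebuttalDeadline: Date;                         // rebuttals unlock for everyone here
  endsAt: Date;                                   // voting closes and the result is settled
}

interface TeamRoom {
  side: "A" | "B";
  posts: { author: string; body: string; likedBy: string[] }[];        // research chat
  presentations: { author: string; url: string; likedBy: string[] }[]; // ranked by likes
  rebuttals: { author: string; url: string; likedBy: string[] }[];
}
```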

In this article we’ve discussed the motivation for a debating app and how it could work from a user’s point of view. Next time we’ll think about how this might be implemented as a peer-to-peer web app.
