How algorithms can improve digital discourse
Climate crisis, wars, pandemics: The world is facing major challenges. Under such circumstances, our top priority should be to develop solutions together. Instead we are seeing a reality of increasing polarization and fragmentation – especially in public dialogue.
This dynamic can be seen very clearly on digital communication and information platforms. People's desire for certainty and clear solutions often gives rise to conflicting and irreconcilable points of view. Online platforms play a key role here.
The question of whether this reflects a fundamental link between online platforms and polarization remains controversial. However, more and more studies are showing that online platforms have demonstrably contributed in numerous instances to the spread of misinformation, radicalization and incitements to violence.
In many cases, the platforms have reinforced polarization and fragmentation. In theory, they could instead do more to help improve the discourse. The fact that this has not yet happened has to do with the way they function.
The problem lies in particular with the major online platforms' recommendation systems. These algorithmic systems sort through the content on the platforms and select what is displayed to users.
But there is another way. Platform recommendation systems do not have to be designed solely to maximize interaction with content; they could just as well give preference to other kinds of content. Instead of favoring particularly provocative or sensational content, the system would then have a balancing effect that promotes constructive debate.
To achieve this, the algorithms would have to be designed to take additional criteria into account when recommending content, for example the probability that different groups agree with the content. So-called bridging algorithms work according to this principle, promoting mutual understanding and productive debate.
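To make the principle concrete, here is a minimal sketch in Python of how such an additional criterion could be combined with a conventional engagement signal. The data structure, the weighting and the function names are invented for illustration and do not reproduce any real platform's scoring function.

```python
# Minimal sketch (an assumption, not any platform's real scoring function):
# blend a conventional engagement signal with a "bridging" criterion, namely
# how strongly every opinion group agrees with the content.

from dataclasses import dataclass

@dataclass
class ItemStats:
    engagement: float        # normalized engagement signal, e.g. clicks/replies, in [0, 1]
    group_agreement: dict    # opinion group -> share of that group agreeing, in [0, 1]

def bridging_score(stats: ItemStats) -> float:
    """Reward content that all opinion groups tend to agree with.

    Taking the minimum agreement across groups means an item only scores
    high if every group approves, not just one large group.
    """
    return min(stats.group_agreement.values()) if stats.group_agreement else 0.0

def ranking_score(stats: ItemStats, bridging_weight: float = 0.7) -> float:
    """Combine engagement with the bridging criterion.

    bridging_weight = 0.0 reproduces a purely engagement-driven ranking;
    higher values favor content that bridges opinion groups.
    """
    return (1 - bridging_weight) * stats.engagement + bridging_weight * bridging_score(stats)

# A provocative post vs. a post that both groups can broadly agree with.
provocative = ItemStats(engagement=0.9, group_agreement={"group_a": 0.8, "group_b": 0.1})
consensual  = ItemStats(engagement=0.4, group_agreement={"group_a": 0.7, "group_b": 0.6})

print(ranking_score(provocative))  # lower score, despite high engagement
print(ranking_score(consensual))   # higher score, because both groups agree with it
```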
One of the earliest approaches, one that has inspired further applications, is being pursued by “Polis”, a project of “The Computational Democracy Project”, a U.S.-based non-governmental organization. This initiative focuses on visualizing opinion groups in discussions to better identify major areas of agreement and disagreement across different groups.
Here’s how it works in practice:
A so-called conversation is started on a specific topic, with selected people allowed to participate. One example might be the following topic:
Cars and other motor vehicles are everywhere in many large cities in Germany. Given the rising use of personal transport, this leads to traffic jams, accidents and overcrowded city centers.
Moderators and/or participants write various statements on this topic, each expressing a point of view. Participants can then vote on these statements.
“Polis” then identifies the statements that are oriented toward building agreement between groups with otherwise differing opinions. This means the software gives the highest ranking to posts that draw the most agreement from the greatest number of contributors who disagree with one another on other posts. This is also called bridging-based ranking.
With algorithmic recommendation systems geared toward maximizing interaction – as is currently the case with many online platforms – the result of the same conversation would look different: the statements that provoke the most reactions would rise to the top, regardless of whether they find agreement across the opinion groups.
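How the two orderings diverge can be illustrated with a small, invented example. Real systems such as “Polis” derive the opinion groups by clustering actual voting behavior; in the sketch below the votes and groups are simply hard-coded, so it shows only the contrast in principle, not the real algorithm.

```python
# Toy illustration of the two orderings. Votes and opinion groups are invented;
# Polis derives its groups by clustering real voting behavior, which is omitted
# here for brevity.

# votes[participant][statement]: +1 = agree, -1 = disagree, 0 = pass
votes = {
    "p1": {"s1": +1, "s2": +1, "s3": -1},
    "p2": {"s1": +1, "s2":  0, "s3": -1},
    "p3": {"s1": +1, "s2": -1, "s3": +1},
    "p4": {"s1":  0, "s2": -1, "s3": +1},
}
groups = {"group_a": ["p1", "p2"], "group_b": ["p3", "p4"]}
statements = ["s1", "s2", "s3"]

def agreement_rate(statement: str, members: list) -> float:
    """Share of a group's members who agree with a statement."""
    return sum(votes[p][statement] == +1 for p in members) / len(members)

def bridging_key(statement: str) -> float:
    """Cross-group consensus: the lowest agreement rate among all groups."""
    return min(agreement_rate(statement, members) for members in groups.values())

def engagement_key(statement: str) -> int:
    """Interaction proxy: how many non-neutral reactions a statement drew."""
    return sum(votes[p][statement] != 0 for p in votes)

print(sorted(statements, key=bridging_key, reverse=True))
# ['s1', 's2', 's3']: s1 ranks first because both groups largely agree with it
print(sorted(statements, key=engagement_key, reverse=True))
# ['s3', 's1', 's2']: s3 ranks first because it drew the most reactions,
# even though it splits the two groups completely
```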
However, bridging algorithms are not equally effective in all situations.
They work particularly well with open-ended questions, and those which allow for a variety of responses.
They are less suitable for questions intended to evoke simple yes or no answers, or that require a ranking or prioritization of responses.
It is important that the questions or topics posed offer space for constructive conflict. In other words, participants must genuinely engage with and be interested in the perspectives of the opposing side. Topics that feature already strongly entrenched positions are less suitable.
Building mutual understanding and trust across divides by using bridging algorithms works in practice as well as in theory. Bridging-based ranking systems have already been used in the following examples.
vTaiwan
In 2015, as part of a broader democratic process, the vTaiwan platform used the “Polis” system as a means of surveying the public’s opinion on the market launch of the UberX ride-hailing service.
Twitter/X
With its “Community Notes” feature, X (formerly Twitter) has introduced a function that allows users to collaboratively add context to potentially misleading posts.
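The bridging idea behind “Community Notes” can be sketched in strongly simplified form: a note is only displayed if raters who usually disagree with one another nevertheless rate it as helpful. The ranking algorithm that X has published as open source works differently in detail (it is based on matrix factorization over the rating data), so the following toy example with invented raters and ratings is purely illustrative.

```python
# Strongly simplified sketch of the bridging idea behind Community Notes
# (invented raters and ratings; the open-source production algorithm instead
# relies on matrix factorization over the full rating matrix).

# Raters split into two rough "perspectives" based on their past rating behavior.
rater_perspective = {"r1": "A", "r2": "A", "r3": "B", "r4": "B"}

# ratings[note] = list of (rater, found_helpful)
ratings = {
    "note_1": [("r1", True), ("r2", True), ("r3", True),  ("r4", False)],
    "note_2": [("r1", True), ("r2", True), ("r3", False), ("r4", False)],
}

def is_shown(note: str, threshold: float = 0.5) -> bool:
    """Show a note only if every perspective's helpful-rate clears the threshold."""
    helpful_by_perspective: dict = {}
    for rater, helpful in ratings[note]:
        helpful_by_perspective.setdefault(rater_perspective[rater], []).append(helpful)
    return all(sum(v) / len(v) >= threshold for v in helpful_by_perspective.values())

print(is_shown("note_1"))  # True: raters from both perspectives lean "helpful"
print(is_shown("note_2"))  # False: only one perspective finds the note helpful
```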
Polarization and fragmentation within our digital discourse are serious problems.
Large online platforms are required to take more targeted action against criminal content, given the increase in political pressure and legal requirements such as the EU's Digital Services Act. However, it is also crucial to shift the focus toward the platforms' design and business models, so as to address not only the symptoms but also the root of the problem.
Recommendation systems sort platform content with the aim of increasing the time users spend on the sites, and thus increasing advertising revenue. But platforms could also design their algorithms differently, to do more to improve digital discourse. One possibility is the use of bridging algorithms, which have a balancing effect and promote constructive discussion. The two examples of “Polis” and X's “Community Notes” demonstrate that bridging algorithms can work in practice, not just in theory.
The research on bridging algorithms has also revealed the following insight:
In the “Facebook Papers” published by whistleblower Frances Haugen, there are references to various experiments that the company carried out with bridging algorithms. The result: Recommendation systems that place greater weight on agreement between people from different groups significantly improve the quality of discourse.
Despite these findings, Meta has not yet followed up on its initial experiments on Facebook. The financial disadvantage of bridging algorithms is too obvious: their use reduces the time spent on the platforms, and thus lowers advertising revenue.
For this situation to change, and for bridging algorithms to find wider application, three main things need to happen:
More evidence is needed showing which specific bridging criteria affect the quality of discourse, and how. The emergence of new platforms such as BlueSky, which allow users to freely choose which recommendation algorithms are used, offers a useful window of opportunity to test bridging algorithms in practice.
Platforms need to live up to their social responsibility more fully, and facilitate better digital discourse with the help of their recommendation systems. This would not require a complete redesign, but merely the addition of bridging criteria to existing recommendation systems – such as the probability that two different opinion groups agree with the same content or statement.
In parallel, public and political pressure on online platforms must be increased. If providers do not fulfill their social responsibility on their own, regulation will be needed to bring about change.
These are not easy tasks. But this discussion is necessary. Recommendation algorithms represent a key point of technological leverage, and modifying them as discussed could allow the problem of polarization in the digital space to be addressed more effectively.
A different digital discourse is possible – and essential in order to strengthen constructive debate worldwide.