Disinformation and AI: The Differences Between Wikipedia and Social Media


As researchers at the Wikimedia Foundation, part of our work is to develop and apply machine learning techniques to assist Wikipedia editors in their tasks. Within the Knowledge Integrity Program, one of our focuses is to help Wikipedians spot violations of the core content policies. While for-profit companies are investing substantial resources in developing AI systems to prevent the spread of disinformation on their platforms, and academic researchers are also working in that direction, the case of Wikipedia is unique. In this post I would like to explain the limitations of using AI to improve knowledge integrity on Wikipedia, why we cannot directly apply the technologies and approaches developed for social media platforms, and how we are dealing with these problems.

Let’s start by pointing out some key differences. While most dynamics on social media are about sharing opinions and gaining popularity, Wikipedia is about sharing the sum of all knowledge. Wikipedia’s unique goal and process present both challenges to applying the machine learning techniques used by large social media platforms to identify disinformation and opportunities for new, more human-centered approaches. Consider how the two paradigms compare. First, your social network activity reflects your own thoughts and interests; a Wikipedia article, on the other hand, is collectively created, or in other words is a commons, without a single owner. Second, the lifecycle of social network content is very short, while Wikipedia is about perennial knowledge. And last, but not least, on social networks you can do almost whatever you want as long as you respect a very general set of “terms and conditions” designed by the platform owners, while on Wikipedia there are procedures and policies for writing articles, created by the community, including key points like keeping a neutral point of view and using reliable and verifiable sources.

Although social networks became popular more than a decade ago, concerns about the trustworthiness of their content are much more recent. Scandals around the use of social networks during the Brexit referendum and the 2016 US elections put these platforms under scrutiny. Other important cases of political manipulation during elections, less covered by media based in the Global North, have been seen in Brazil and India. However, the problems of the most popular social networks are related to their business model: they need to allow people to say whatever they want and, probably most importantly, to show people content that increases their engagement, usually reinforcing their beliefs. Content trustworthiness is not the aim of those companies; they only need to control extreme cases. This filtering process is known as content moderation. And given the huge amount of content they need to moderate, big tech companies are putting a lot of effort and hope into developing tools based on Machine Learning (a.k.a. Artificial Intelligence) to help with, and even take the lead on, detecting and removing that extreme content.

Conversely, Wikipedia has dedicated its 20 years of existence to sharing knowledge, constantly striving to create trustworthy content. Misleading information can appear on Wikipedia when edits do not comply with content policies. This can be deliberate, with the intention of deception (disinformation), or accidental (misinformation). Whatever the motivation, the community regulates itself. When an editor detects unreliable content, they fix it, following a process of continuous improvement. When the community suspects that one or a group of its members are adding bad or misleading content intentionally, they start a deliberation process that can end in banning that member or group. Although bad-quality content can be found on Wikipedia, the majority of Wikipedia editors try to follow the content policies and also try to detect content policy violations. While there is a set of AI-based tools to support those processes, the bulk of the work is done manually.

Beyond tools, the processes for content moderation and improvement are very different across platforms. To prevent disinformation, content moderation in social media comes from a central authority: the company behind each platform. Also, as mentioned above, to do content moderation at a large scale, big tech companies hire employees and create AI systems. Those AI systems need a “ground truth”, i.e., a unique source of truth. By design, Wikipedia cannot have a unique ground truth, because its aim is to be the sum of all human knowledge, so there is no single point of reference; moreover, all significant points of view, supported by reliable sources, need to be represented. Wikipedia “moderation” is not about the truth; it is about the verifiability of content through reliable sources. And the rules don’t come from a single central authority: they are designed, reviewed, and applied by a community of editors through a well-established deliberation process.

All these essential differences in the processes of content creation and moderation have a direct impact on the design of machine learning-powered tools. For social networks, a typical ML task in content moderation is to look at content that spreads fast and check whether that piece of content is supported by the “ground truth”, which in many cases is a Wikipedia article. This process of contrasting a claim against a trusted document is known as Natural Language Inference and, as mentioned before, requires a ground truth.
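To make that setup concrete, here is a minimal sketch of what such a claim-versus-reference check could look like, using a publicly available MNLI checkpoint from the Hugging Face transformers library. The model name, example sentences, and label order are assumptions for illustration only; this is not part of any Wikimedia or social media system.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Sketch: score whether a claim is entailed by, neutral to, or contradicted
# by a single trusted reference passage (the "ground truth").
# Checkpoint and label order follow the public "roberta-large-mnli" model card.
tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

premise = "The Eiffel Tower was completed in 1889 and stands in Paris."  # trusted reference
claim = "The Eiffel Tower was completed in 1920."                        # content to verify

# Encode the (premise, hypothesis) pair and run the classifier.
inputs = tokenizer(premise, claim, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

labels = ["contradiction", "neutral", "entailment"]  # label order for this checkpoint
probs = torch.softmax(logits, dim=-1)[0]
print({label: round(prob.item(), 3) for label, prob in zip(labels, probs)})
```

In this single-reference setting, a high contradiction score is enough to flag the claim, precisely because one document is treated as authoritative.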

But in the case of Wikipedia, the challenges for machines are different. Without a single ground truth, information contained in Wikipedia articles needs to be checked against multiple sources, and simply applying Natural Language Inference algorithms to Wikipedia articles is not enough. For example, determining what counts as a reliable source is an additional challenge. And the problems go beyond fact checking: Wikipedia needs algorithms that help create content written from a neutral point of view, without cultural bias. From the machine learning perspective, we are dealing with a combination of several problems that includes the aforementioned Natural Language Inference, as well as other complex Natural Language Understanding problems and Information Retrieval tasks. And perhaps most importantly, we need to follow a Human-Centered AI approach, creating explainable algorithms that empower, and never try to replace, our community of editors, that respect different cultural contexts and backgrounds, and that can support the over 300 languages currently existing on Wikipedia.
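As a rough illustration of why this setting is harder, the following hypothetical sketch aggregates entailment verdicts across several candidate sources instead of relying on one ground truth. The function names, aggregation rule, and thresholds are invented for this example and do not describe an actual Wikimedia algorithm.

```python
from typing import Callable, List

# Hypothetical sketch: verify a claim against multiple sources, since Wikipedia
# has no single ground truth. `nli_verdict` stands in for any entailment scorer
# (for instance, the MNLI model sketched above) that maps a (source passage,
# claim) pair to one of "entailment", "neutral", or "contradiction".
def check_against_sources(
    claim: str,
    sources: List[str],
    nli_verdict: Callable[[str, str], str],
) -> str:
    verdicts = [nli_verdict(source, claim) for source in sources]
    supported = verdicts.count("entailment")
    contradicted = verdicts.count("contradiction")

    # Only act when the evidence clearly leans one way; anything ambiguous,
    # including disagreement between sources, is left to human editors.
    if supported > 0 and contradicted == 0:
        return "looks verifiable"
    if contradicted > supported:
        return "flag for editor review"
    return "inconclusive: needs editor judgement"
```

Even this toy aggregation hides the harder questions raised above: which sources count as reliable, how to weigh them, and how to do this across hundreds of languages and cultural contexts.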

In summary, the challenges of fighting disinformation on Wikipedia require a dedicated effort that goes beyond addressing the traditional “fact-checking” problem. Unlike social networks, where algorithms are expected to do the work that no one else is doing, on Wikipedia we need algorithms able to support the current editors’ workflows, which means our baseline is much more challenging. Our algorithms are expected to interact with experienced editors, who need to understand the recommendations they are receiving. Generating this synergy between algorithms and editors, to create an unbiased, high-quality, and inclusive Wikipedia, is our main goal.

If you want to learn more about our vision and projects in this area, we invite you to visit our Knowledge Integrity Program and read our white paper. If you are a researcher and want to collaborate with us, please check our Formal collaboration program.
