Global Trends 2025

Each year, as the Wikimedia Foundation kicks off our annual planning for the year ahead, we develop a list of trends that we believe are likely to significantly impact the context in which the Wikimedia movement and projects operate. We identify specific online trends that are most relevant to our mission, such as changes in how and where people search for and contribute information online, the rise of misinformation and disinformation in online spaces, and evolving regulation of online information providers. This analysis allows us to begin our planning with the guiding question, “What does the world need from Wikimedia now?”

This question is a driving force of conversations with and across the movement. As in past years, the trends below demonstrate how our current technological, geopolitical, and social environment looks very different from the founding days of Wikipedia, and how we must continue to adapt and evolve. Each will shape our annual plan as well as strategies that affect our future—from better protecting Wikimedians with strong technology tools and trust and safety measures to experiments that bring Wikimedia content to audiences in new ways.  

Changes in how and where people receive and contribute information

Trust in information online is declining, and shared consensus around what information is true and trusted is fragmenting. Last year, we noted that consumers are inundated with information online and increasingly want it aggregated by trusted people. With the launch of Google AI Overviews and other AI search products, many people searching for information on the web are now being helped by AI. Even so, AI-assisted search has not yet overtaken other ways that people get information (e.g., via traditional web search engines or on social platforms). However, the trend we noted last year of relying on trusted people has grown stronger: people are increasingly skeptical of traditional knowledge authorities, such as government institutions and the media, and instead turn in growing numbers to online personalities, who are having a bigger impact on what people believe and trust. Online personalities (e.g., podcasters, vloggers) on social platforms now factor more heavily in important events like political elections globally. By seeking out personalities who share their ideology and demographics, people increasingly end up in isolated filter bubbles that fragment shared consensus around facts.

People participate eagerly in online spaces that provide rewarding connection. Because the Wikimedia projects rely on the contributions and time of hundreds of thousands of Wikimedians, we closely follow trends in where and how people contribute online. Last year, we highlighted that people now have many rewarding, potent ways to share knowledge online. This year, we observe that people globally are eagerly joining smaller interest-based groups (on platforms like Facebook, WhatsApp, Reddit, and Discord) and sharing their knowledge and expertise there. These spaces are increasingly popular around the world and make people feel more comfortable participating than broad, general social channels do. A dedicated core of volunteers maintains these communities, performing vital activities like moderation and newcomer mentoring.

For young people especially, gaming has become a participatory space that rivals social media. Gaming communities have formed on platforms like Discord and Twitch, where people do not just play but actively co-create and participate – organizing events or moderating user content and behavior. Platforms are also capitalizing on games to drive user engagement with unrelated products, as in the successful and growing games section of The New York Times.

People have a finite amount of time to spend on online activities, and we suspect that one cause of the decline in the number of new people registering as editors on the Wikimedia projects – a decline that started in 2020–2021 and continues to the present – may be the growing popularity and appeal of participating in some of these other rewarding online spaces.

Changes in how online information is distributed and regulated 

Digital information that is created and verified by humans is the most valuable asset in the AI tech platform wars. Last year, we predicted that AI would be weaponized to create and spread online disinformation. This year, we are seeing low-quality AI content churned out not just to spread false information but also as a get-rich-quick scheme, and it is overwhelming the internet. High-quality information that is reliably human-produced has become a dwindling and precious commodity that technology platforms are racing to scrape from the web and distribute through new search experiences (both AI and traditional search) on their platforms. Publishers of human-created online content across multiple industries (for example, many of the major news and media companies globally) are responding by negotiating content licensing deals with AI companies and instituting paywalls to protect themselves from abusive reuse. These restrictions further decrease the availability of free, high-quality information to the general public.

Struggles over neutral and verifiable information threaten access to knowledge projects and their contributors. Last year, we highlighted that regulation globally poses challenges and opportunities to online information-sharing projects that vary by jurisdiction. This year, challenges to sharing verified, neutral information online have increased significantly. Public consensus around the meaning of concepts like “facts” and “neutrality” is increasingly fragmented and politicized. Special interest groups, influencers, and some governments are undermining the credibility of online sources that they disagree with. Others also try to silence sources of information through vexatious litigation.

Globally, a growing number of laws that seek to regulate online technology platforms do not make room for nonprofit platforms that exist in the public interest, such as open science initiatives, crowdsourced knowledge and cultural heritage repositories, and online archives. One-size-fits-all online regulation can threaten contributor and audience privacy on these platforms, and imperil community content moderation practices. For example, laws that would force platforms to verify the identity of and track the actions of visitors or contributors can endanger people’s privacy and safety to access or share information. Regulations that require platforms to immediately remove content labeled as misinformation run counter to built-in safeguards to address misinformation on platforms that operate through community consensus, and that prioritize accuracy instead of profit. 

What’s next and how you can join the conversation

As with our past updates to the community about trends, this is not a comprehensive list of threats and opportunities facing our movement, but rather a way to begin discussing and aligning on how to meet what the world needs from us now as we begin to plan for the next fiscal year. Earlier this year, Chief Product & Technology Officer Selena Deckelmann invited our global community to share what trends and changes are most important to them – we encourage you to continue the discussion on this talk page. In the coming months, the Wikimedia Foundation will publish its draft annual plan to lay out our proposed work for the coming year in response to these trends. Some work is already underway; for example, to address the decline in new editors, we are adding new kinds of “edit checks,” intelligent workflows that make constructive mobile editing easy for newcomers and increase their likelihood of continuing to contribute. We look forward to more community conversations about how we can protect and grow our free knowledge projects in a changing socio-technical landscape.
