“Don’t Blink”: Protecting the Wikimedia model, its people, and its values in November 2025

Image collage for the November 2025 issue of “Don’t Blink.” Image by the Wikimedia Foundation, CC BY-SA 4.0, via Wikimedia Commons.

Welcome to “Don’t Blink”! Every month we share developments from around the world that shape people’s ability to participate in the free knowledge movement. In case you blinked last month, here are the most important public policy advocacy topics that have kept the Wikimedia Foundation busy.

The Global Advocacy team works to advocate for laws and government policies that protect the volunteer community-led Wikimedia model, Wikimedia’s people, and the Wikimedia movement’s core values. To learn more about us and the work we do with the rest of the Foundation: visit our Meta-Wiki webpage; follow us on LinkedIn, X (formerly Twitter), and Bluesky; and sign up for our quarterly newsletter or the Wikimedia public policy mailing list.

________

THANKS FOR TAKING OUR SURVEY TO HELP US IMPROVE “DON’T BLINK”! IF YOU HAVEN’T DONE SO YET, WE WOULD APPRECIATE IT IF YOU DID!

Do you have a minute to spare? We still want to hear from you, our readers! Help us share the news that is most relevant to you by taking this short multiple-choice survey about our monthly advocacy recap. We want to inform you about our work to protect and promote free and open knowledge in the format that best meets your needs.

If you want more information about how we will use your feedback, please read our survey privacy statement.

________

Publishing public policy primers with talking points to help Wikimedians do free knowledge advocacy!
[Explore the public policy primers, written especially for Wikimedians]

Have you ever wanted to learn more about the Foundation’s key policy positions on internet and digital issues? Global Advocacy’s newly published policy position primers are a resource for the Wikimedia community to learn about some of our core priorities so that they feel better equipped to analyze and advocate around internet regulations in their local context. The primers cover intermediary liability, privacy, information integrity, and human rights. Each one explains why the issue is relevant for the Wikimedia movement, states the Foundation’s general position on the topic, offers main talking points, and suggests further reading material to better understand the issue. We have also provided a template so Wikimedians can create their own policy positions on the internet governance and digital topics that are most important to them.

Explore the public policy primers and more at our Meta-Wiki Resource Center.

Wikimedians bring their perspectives to crucial discussions at regional Internet Governance Forums
[Read this blog post about what Wikimedians did and learned at local IGFs around the world]

2025 has been an especially important year for Wikimedians to engage with internet governance discussions. This December, the United Nations (UN) will conclude its review of 20 years of the World Summit on the Information Society (WSIS) recommendations, and make an important decision: Will internet governance remain a multistakeholder matter—in other words, with governments, the private sector, civil society, and technical actors jointly developing the norms and rules that shape the use of the Internet—or will governments get to make those crucial decisions mostly alone?

Wikimedians have been engaging throughout the year at regional meetings of the Internet Governance Forum—the main forum for discussing WSIS outcomes and, hence, open to stakeholders across civil society, the technical community, government, and the international community—to share their perspectives on the Wikimedia movement’s priorities.

In Brazil, Chile, Ghana, Germany, Italy, Nigeria, and Switzerland, local Wikimedians made connections at all levels of internet governance to discuss issues related to multilingual access, open licensing, community-governed digital public goods, and human rights online. As the world nears a critical decision about the future of internet governance, it is important that we continue showing up in policy spaces, sharing the wisdom built over 25 years of collective action so that the entire world can benefit from an internet shaped by and for everyone.

Read this blog post about what Wikimedians did and learned at local IGFs around the world.

Building an inclusive digital future at MozFest
[Learn about Wikimedia Foundation and volunteer sessions at the Mozilla Festival in our blog post]

The Mozilla Festival (also known as MozFest) is a yearly global gathering bringing together technologists, policymakers, activists, artists, and educators to shape a healthier internet and a more just digital future. This year, several Wikimedians joined the Global Advocacy team in Barcelona, Spain, to host sessions on topics related to Wikimedia’s values: openness, accessibility, and community governance.

Participants in one session highlighted the challenges of digital safety in authoritarian regimes and worked together across regions to share strategies to increase security. Other sessions explored how to build climate-resilient communities, and how to make linguistic and disability inclusion the default for web design. Franziska Putz (Senior Movement Advocacy Manager) led an interactive session about the Wikipedia Test along with Pablo Aragón (Research Scientist) and Albert Cañigueral (Barcelona Supercomputing Center), asking the audience to imagine they were regulators trying to solve the issue of artificial intelligence (AI) crawlers without accidentally hurting the public interest web.

Learn about Wikimedia Foundation and volunteer sessions at MozFest in our blog post.

Wikimedia community project wins UNESCO fund for climate change information integrity
[Read the announcement and find out more about the project to provide information around extreme weather and climate-related events]

Wikimedia Brazil and the Working Group on Climate Justice and Wikimedia Projects (an effort maintained by Wikimedia affiliates from Argentina, Bolivia, Chile, Colombia, Peru, and Uruguay) recently collaborated on a proposal to present to the Global Initiative for Information Integrity on Climate Change Fund. This global initiative, established in 2024 by the Government of Brazil, the UN Secretariat, and the United Nations Educational, Scientific and Cultural Organization (UNESCO), aims to promote information integrity on climate change topics.

Out of 447 submissions from nearly 100 countries, the Wikimedians’ project, titled “Advancing Information Integrity on Extreme Weather and Climate-related events in Wikipedia and Wikidata,” was one of the top 10 proposals selected. This is not only a major triumph for these Wikimedia affiliates from South America, but also for the Wikimedia movement, since it affirms volunteers’ ability to address critical global topics, such as information integrity on climate change, and the crucial role in doing so of digital public goods like Wikipedia.

Read the announcement and find out more about the project to provide information around extreme weather and climate-related events.

Sharing across the world how AI can affect open knowledge and human rights
[Read our blog post about the Foundation’s AI and Machine Learning Human Rights Impact Assessment report]

During November, the Global Advocacy team presented the findings of the Foundation’s AI and machine learning (ML) Human Rights Impact Assessment (HRIA) report and AI Strategy at various conferences and forums across the world. The AI/ML HRIA report was commissioned by the Foundation to examine the risks and opportunities that emerging technologies like AI and large language models (LLMs) present for the Wikimedia projects and the broader Wikimedia movement. In the HRIA report, external researchers identified potential risks related to social and human rights issues such as bias, knowledge equity, and discrimination, and also provided concrete recommendations for how to reduce these risks. The researchers identified opportunities for AI to potentially benefit the work done by the Wikimedia community as well, including through the Foundation’s own development of AI tools to support human-led editing work.

The world is at a critical inflection point: AI and ML development is changing how people search for information online, and policymakers and regulators are seeking to limit the more harmful impacts of these technologies. It is important to bring the Wikimedia movement’s perspective to these conversations, both as an important source of content used to train generative AI models and as a nonprofit, public interest platform that could inadvertently be harmed even by well-intentioned laws and regulations.

We discussed these matters at the following conferences and forums:

Wikiconferencia Colombia and Wikiherramientas
[Watch a video (in Spanish) of our presentation on the HRIA report]

At Wikiconferencia Colombia, Amalia Toledo (Lead Policy Specialist for Latin America and the Caribbean) spoke on a panel about AI and the digital commons. Amalia discussed how projects such as Wikimedia Commons, which is one of the largest repositories of freely and openly licensed media online, are affected by AI data scraping, and shared how local Wikimedians can get involved in public policy development related to AI. She called for regulations that require better attribution to AI source data and for stakeholders across government, the technology industry, and civil society to find mechanisms for reinvesting in the digital commons—which is also crucial to training and maintaining up-to-date AI models.

Amalia also led a session for Wikiherramientas, a series of workshops organized by Wikimedistas de Uruguay to exchange knowledge, tools, and recommendations to improve how the Wikimedia volunteer communities work on the Wikimedia projects. She presented the findings of the AI/ML HRIA report, along with the Foundation’s AI Strategy, and gave examples of how the Foundation and community can adapt to take advantage of the opportunities these new technologies present while ensuring that human rights are upheld both on the projects and offline.

Watch a video (in Spanish) of our presentation on the AI/ML HRIA report.

Asia Democracy Assembly 2025
[Learn more about the event]

Held in November, the Asia Democracy Assembly 2025 was the largest conference for civil society working on democracy and human rights in Asia, bringing together over a thousand participants to discuss democracy, technology, and civic space. Attendees included government officials and regulators, thought leaders, activists, and representatives from the UN.

Rachel Judhistari (Lead Public Policy Specialist for Asia) joined the conference to network with attendees and further the Wikimedia Foundation’s global advocacy priorities, explaining how the Wikimedia projects and model provide civil society across the continent with digital public infrastructure that enables access to reliable information. Rachel spoke on a panel about public interest AI, where she shared the threats that some AI models pose to the sustainability of Wikimedia’s community-led and consensus-based public interest model, emphasizing that practices like attribution to sources are crucial to this model’s continued survival.

Learn more about the event.

Sofia Information Integrity Forum
[Read more about the forum]

The Sofia Information Integrity Forum (SIIF) gathers experts from around the world to discuss questions and seek solutions related to maintaining information integrity, democracy, and resilience online. This year, several speakers from the Foundation and Wikimedia movement emphasized the important role that Wikimedians and the Wikimedia projects play in ensuring information integrity online.

Costanza Sciubba Caniglia (Anti-Disinformation Strategy Lead) opened with a keynote speech on how safeguarding Wikipedia and its volunteer contributors is essential not only to information integrity on the Wikimedia projects, but also to the global public. Costanza described how the deployment of certain AI models increasingly threatens to destabilize the traditional information sources from which these same tools draw data to train and generate synthetic content. She emphasized that both governments and the private sector must find ways to continue supporting the important human-led work of knowledge production and fact-checking online.

Following the keynote, Nataliia Tymkiv (Chair of the Executive Committee of the Wikimedia Foundation Board of Trustees) joined members of local Wikimedia affiliates in a roundtable, where she shared how Wikimedia’s volunteer contributors work every day to promote information integrity on the Wikimedia projects.

Community Conversation Hours

During November, the Global Advocacy team hosted two community conversation hours to present the findings of the Foundation’s AI/ML HRIA report to the Wikimedia community, inviting feedback and questions on the report’s findings and recommendations. This report, and the critical conversations it generates, is part of the Foundation’s ongoing efforts to meet our commitments to protect and uphold the human rights of all those who interact with the Wikimedia projects, readers and volunteers alike. We look forward to future opportunities to engage with the Wikimedia communities about these important topics.

________

Follow us on LinkedIn, X (formerly Twitter), and Bluesky; visit our Meta-Wiki webpage; sign up for our quarterly newsletter to receive updates; and join our Wikimedia public policy mailing list. We hope to see you there!

Can you help us translate this article?

In order for this article to reach as many people as possible, we would like your help. Can you translate this article to get the message out?