From August 20–22, 2025, Nairobi became the stage for one of the most important conversations shaping our digital future: the Gendering AI Conference. For three days, activists, feminists, policymakers, technologists, and community leaders gathered to raise a question that was urgent and thought-provoking: Can artificial intelligence be gendered?

When we talk about gendering AI, we don’t mean giving it a woman’s voice. We mean asking whether these systems are built on values like care, fairness, and equality. This is not a women’s issue; it is a question of justice and design that touches all of us. Wikipedia helps us see why: for a long time, only one in five biographies were about women. That gap did not come from a lack of effort, but from how knowledge has been collected and recorded, on Wikipedia and beyond, and how that record has shaped what the whole world sees as “truth.” The problem isn’t about women or men, but about the design of knowledge systems.
The conference was structured in three tracks. The first day focused on governance, ethics, and justice: big questions of power and accountability. The second day turned to feminist AI for social change and technical innovation. The third day looked at community and wellbeing futures, with special attention to rural, queer, and youth perspectives. Moving through these sessions, one could feel the weight of urgency but also the energy of imagination. At every stage, I found myself reflecting on how these conversations intersect with the work of Wikimedia, where I work to advance gender equity in open knowledge. The overlaps are striking: questions of data, safety, power, representation, imagination, and accountability are not abstract; they are alive in both spaces.
Data sovereignty: who gets to decide?
One of the strongest themes was data sovereignty. The first keynote challenged us to think about how power is encoded into AI from the very beginning: who asks the questions, who decides what data is collected, and who decides how it is used? Without African languages, histories, and cultures represented in training datasets, AI becomes a new form of digital colonialism, extracting from the Global South while excluding its realities. This was echoed in a panel on AI-driven violence in war and conflict zones, where examples from the DRC and Ethiopia showed how technology designed elsewhere can be tested on communities with little power to consent.
Listening to this, I was reminded of the structural challenges Wikimedia has grappled with for years. When African women’s biographies are underrepresented on Wikipedia, or when oral traditions remain undocumented, these are not just gaps but reflections of broader systemic silences in how knowledge has historically been collected and valued. The result is that African realities are less visible in the world’s largest open encyclopedia, and by extension, in the datasets that many AI systems draw from. The Nairobi discussions, therefore, underscored an opportunity: Wikimedia can be a powerful ally in advancing data sovereignty. Each time a local language is strengthened on Wikipedia, each time an oral history is digitized, each time an African woman leader’s story is published, we are not just enriching an encyclopedia, we are ensuring that the Global South is present in the digital future.

KICTANET Presentation during Gendering AI Conference 2025
Expertise and silos: who do we consider an expert?
Another striking thread was the question of expertise and silos. Throughout workshops like How Power Shapes AI and the co-designing of gender-inclusive AI toolkits, speakers asked: who do we consider an expert? Too often, AI is left to engineers and data scientists, pushing the narrative that technical skill alone defines expertise. But the conference showed a richer truth. Grassroots feminists spoke about the ways AI intersects with care economies. Rural women imagined what an agri-tech app would look like if designed through their daily struggles. Young women in AI talked about building what they had needed as girls. Content moderators, many of them African, spoke openly about the trauma of cleaning up toxic online content. Each of these voices offered forms of expertise that are rarely recognized, yet deeply necessary.
The lesson here is that working in silos impoverishes technology. AI cannot be gendered if it is built only by those who code. It must be shaped by those who live at the margins, those who carry memory, culture, and contradiction. Again, this resonates with Wikimedia. Knowledge equity requires breaking silos too. If only a small, technically proficient group defines what counts as knowledge, then Wikimedia risks reproducing exclusion. But if we widen the definition of expertise to include oral historians, feminist researchers, young contributors, and storytellers, then Wikimedia can become a model for how to value all forms of knowledge in shaping digital futures.
AI: a tool for harm, but also a tool for safety
The second day of the conference deepened this conversation by looking at technology-facilitated gender-based violence (TFGBV). The keynote speaker for the day painted a sobering picture of how disinformation campaigns, misogynistic codes, and deepfakes target women leaders, journalists, and activists. She shared the story of women politicians falsely branded as prostitutes, their images manipulated to humiliate them, and their reputations attacked through coordinated campaigns. UNFPA’s session on safe and ethical AI echoed these concerns, stressing that violence online is not separate from violence offline – it is part of the same continuum.
Yet what struck me most was the insistence that AI is just a tool. It can be used to harm, but it can also be reclaimed. The same technologies that spread lies can be used to track disinformation networks. The same algorithms that amplify misogyny can, if reoriented, amplify solidarity and truth. This duality is also present in Wikimedia’s world. Online harassment remains one of the biggest barriers for women and marginalized genders who want to contribute. The Nairobi conversations reminded us that safety cannot be an afterthought – it must be part of the design. For Wikimedia, this is an opportunity to embed care into community governance, to build safer spaces where contributors can write and edit without fear. By doing so, Wikimedia can model what it means to humanize technology.

UNFPA Presentation during Gendering AI Conference
Wellbeing and futures: AI can be a site of care, healing, and liberation
By the third day, the focus shifted toward wellbeing and futures. The main plenary highlighted the mental health toll on African content moderators who are tasked with cleaning toxic material for global platforms. Another session on queer healing and rural feminist technologies reminded us that AI should not be reduced to profit or efficiency; it should be about care, healing, and liberation. The workshop Reworlding Tech was especially powerful, inviting participants to imagine technology differently: not as something we escape from, but as something we redesign through feminist values.
Youth: dreaming towards a future of possibility
What stayed with me most, however, was the youth voice. In Inheritance: Young Women in AI, speakers said they were building what they had needed as girls. That, to me, was the heart of feminist imagination: technology rooted in the past but dreaming toward the future, shaped by memory and struggle but insisting on possibility. It reminded us that imagination is political. If rural women designed agri-tech, if queer communities reimagined healing technologies, if young women defined AI’s future, the results could look radically different.
This is also where Wikimedia’s role shines. Wikimedia is not only a platform for documenting the world as it is; it is a space for imagining the world as it could be. By amplifying diverse forms of knowledge, by strengthening content in African languages, by nurturing young contributors and feminist leaders, Wikimedia can align with the spirit of these conversations. The opportunity is to ensure that open knowledge is not just a record of the past, but a seedbed for more humane technologies.
Accountability: how do we repair harm?
Across the three days, a difficult but necessary question echoed: How do we repair the harm already done? AI has already amplified biases, normalized stereotypes, and scaled violence. Repair, participants agreed, requires more than technical solutions. It demands accountability, redistribution of power, and centering those harmed in the process of building new futures.
This too has meaning for Wikimedia. Accountability in our context can mean listening more closely to underrepresented contributors, measuring success through equity in addition to scale, and building stronger partnerships with feminist and digital rights groups who are already leading in these spaces. It is not about pointing fingers but about recognizing that Wikimedia, as one of the world’s largest open knowledge ecosystems, has a unique opportunity to align with the values of repair and equity.
Wikipedia as ally
By the time the conference closed, I felt both urgency and hope. Urgency, because AI is already reshaping societies in ways that deepen inequality. Hope, because communities are actively imagining technologies rooted in justice, solidarity, and care. Perhaps gendering AI is possible, but only if we resist silos, broaden who we call an expert, humanize technology through empathy, and ensure that marginalized people’s realities and feminist voices are centered in the knowledge that trains it.

This is exactly where my work at the Wikimedia Foundation comes in. As Gender Lead, my role is to make sure that our movement doesn’t just close knowledge gaps, but also shows up in global debates on AI and digital futures. Wikimedia’s projects already amplify marginalized voices and support safer participation, and now the opportunity is to connect this work even more explicitly to the Wikimedia AI Strategy. As the strategy reminds us, “digital information that is created and verified by humans is the most valuable asset in the AI tech platform wars.” If AI will tell the story of the world, then Wikimedia can help ensure that story is told with equity, dignity, and humanity at its heart.
Through our FY25–26 Gender & Inclusion OKRs, we are turning this vision into action:
- Building resilient communities by co-developing a Community Care Toolkit and strengthening harassment response in collaboration with Trust and Safety and the WikiWomen* Taskforce.
- Advocating for gender-inclusive AI and information integrity, with storytelling events like this one, publishing resources, and building partnerships with feminist allies so Wikimedia is represented in global AI governance.
- Enabling diverse future audiences, ensuring more women, youth, and Global South contributors step into leadership roles.
- Providing cross-team equity support so that gender and inclusion are embedded across campaigns, grants, and advocacy.
For anyone who would like to continue this conversation, you can reach me at bridgitk-ctr@wikimedia.org.