Survey shows interest in evaluation in Wikimedia movement, with room to grow

The Wikimedia Foundation’s Program Evaluation & Design team recently completed a survey about the evaluation of organized activities within the Wikimedia community. Program evaluation lets the Wikimedia community see whether the programs and projects it runs, often to inspire and engage people to participate in the Wikimedia movement, actually work. It’s important to find out whether the programs we devote so much time and energy to, and may invest money in, can become more efficient, more effective, and more impactful. Program evaluation allows us to do that, and the Program Evaluation & Design team is here to support the community in discovering ways to do just that.
The survey, which was sent to over 100 program leaders around the world, was completed in August. Its goal was to get a high-level view of how program leaders within the Wikimedia movement have been evaluating programs such as edit-a-thons, workshops, the Wikipedia Education Program, on-wiki contests, Wiki Loves Monuments and other “Wiki Loves” events, WikiExpeditions, and GLAM programs. We wanted to know what type of data was being gathered by those planning and executing such programs across the movement. The results show that people who run programs track a variety of different data points, which is good. We know how busy volunteers and chapter/affiliate staff are, so it’s wonderful to see them incorporate evaluation into their often already overwhelming workflows. We’re excited to share some results with you, and to explain our next steps.

Evaluation Capacity Survey

We had a great response rate: 69 of the 114 invited program leaders completed the survey! Respondents represented 32 Wikimedia chapters, three affiliated clubs and organizations, and eight individual community members. Thank you to everyone who responded! Some highlights from the survey include:

  • Half of the respondents reported having received grants from the Wikimedia Foundation.
  • Edit-a-thons and workshops, photo upload competitions, and the Wikipedia Education Program were the programs most frequently organized in the movement in the past year.

[Figure: Programs implemented in the last 12 months]

  • The kinds of data most commonly gathered by program leaders are:
      • Dates when their programs take place (recorded by 67–86%, depending on program)
      • User names (64–91%, depending on program)
      • Number of articles edited as a result of their program (46–73%, depending on program)
      • Number of new articles/media uploads (58–100%, depending on program)
  • Program leaders are also tracking the following data, with variance depending on the program; shared learning can help increase the percentage of program leaders recording this information in the future:
      • Budget/monetary costs (from 21% for individually run writing contests to 60% for GLAM content donations)
      • Volunteer hours (from 0% for individually run writing contests to 73% for Wikimedian in Residence programs)
      • Staff hours (from 14% for individually run writing contests to 55% for Wikimedian in Residence programs)

This is exciting to see! It shows that program leaders throughout the Wikimedia world are already gathering measurable outcomes tied to core program goals. We also learned that program leaders have room to grow when it comes to data collection and setting measurable objectives.

[Figure: Identification of Goals and Objectives, by Program]

This survey confirms what the Program Evaluation & Design team learned from participants at our first workshop in Budapest: that room to grow matters, and that the movement is already making great strides in evaluation and wants to stride even further. It’s our team’s mission to help. So what happens next?

First round of data collection to launch next week

One of the first steps in working with the community to improve data collection and help program leaders create measurable objectives is for us, and the community, to have a solid understanding of the specific types of data being gathered, and of how they are being gathered. Beginning next week, we will pilot our first round of data collection. Respondents to the capacity survey who said they were gathering data will be invited to a follow-up data collection survey. For example, the capacity survey showed us that as many as 54% of respondents conduct exit surveys for their programs! That is an impressive number, and we want to learn more. So we are asking those respondents to share their surveys and survey processes with us, so we can learn more about the type of data they’re gathering. This will allow us to work with the community to develop shared tools and data-gathering techniques, which can make the process of evaluation easier, more impactful, and more collaborative.

[Figure: Evaluation Strategies in Operation, by Program]

In the capacity survey, 12% of respondents reported that they currently had staff support for evaluation (either working on evaluation as staff members themselves, or having staff to support their evaluation), 30% had staff support planned for the future, 26% were solely volunteer-supported, and 33% did not yet have a plan for how they would support their evaluation efforts. These results show us that program leaders are primarily volunteers, who might lack the time and energy to evaluate their programs and to participate in the processes we are developing to make evaluation easier and more impactful. We understand that, and we hope that this first round of data gathering will allow our team to ease the burden with better tools and processes. Thank you to everyone who has worked with us, and continues to do so, on this pilot!
In the coming weeks, we will share a detailed report about the capacity survey. Later in the fall, we will publish a comprehensive report about our data collection pilot, which will drive next steps and processes. We look forward to a continued, strong relationship around evaluation that will positively impact the Wikimedia movement.
Dr. Jaime Anstee, Program Evaluation Specialist
Sarah Stierch, Program Evaluation & Design Community Coordinator, Wikimedia Foundation
2013-10-22: Edited to change title
