… few hundred megabits at a time

We’re sitting here in the Wikimania Underground, where the Buenos Aires “hacking days” are taking place. Seeing each other face to face lets us discuss much faster how we should approach some possible changes that could make the site faster for everyone.
We eliminated quite a few web response headers (which cannot be compressed, due to how HTTP works), especially some of the large ones we use inside the cluster for better caching or for debugging information – causing a few hundred megabit savings (it is difficult to know the exact numbers, due to the nature of caching).
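To get a feel for why stripping headers adds up, here is a back-of-the-envelope sketch. The figures below are invented for illustration only – the post deliberately doesn’t give exact numbers:

```python
# Hypothetical figures, NOT from the post: assume we stripped ~500 bytes of
# internal caching/debug headers per response, at a cluster-wide rate of
# 50,000 responses per second.
avg_header_bytes_removed = 500       # bytes saved per response (assumption)
responses_per_second = 50_000        # cluster-wide response rate (assumption)

# Headers travel uncompressed, so every byte removed is a byte off the wire.
saved_bits_per_second = avg_header_bytes_removed * responses_per_second * 8
print(f"{saved_bits_per_second / 1e6:.0f} Mbit/s")  # → 200 Mbit/s
```

With numbers in that ballpark, a few hundred bytes per response really does turn into a few hundred megabits of sustained bandwidth.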
Also, we’re experimenting with trade-offs in content compression – by choosing more expensive compression methods, we decrease the size of transmitted pages by up to 15%, though this doubles the compression cost on our side. We may still end up using different compression levels for different types of content (content that caches efficiently for anonymous users yields much higher relative wins).
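The trade-off is easy to see with Python’s standard `zlib` module (the post doesn’t say which compressor the cluster uses, so this is just an illustration of the general size-versus-CPU curve on some repetitive fake HTML):

```python
import time
import zlib

# A stand-in for a wiki page: highly repetitive HTML (assumption, not real content).
page = b"<div class='mw-body'><p>Lorem ipsum dolor sit amet.</p></div>\n" * 2000

sizes = {}
for level in (1, 6, 9):  # cheap, default, and most expensive zlib levels
    start = time.perf_counter()
    sizes[level] = len(zlib.compress(page, level))
    elapsed = time.perf_counter() - start
    print(f"level {level}: {sizes[level]:6d} bytes "
          f"({sizes[level] / len(page):.1%} of original, {elapsed * 1000:.2f} ms)")
```

The higher levels spend noticeably more CPU for a smaller payload – which is exactly why per-content-type tuning makes sense: pay the expensive level only where the result is cached and served many times.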
Of course, this will reduce our bandwidth bills, probably by more than the additional hardware needed for the change would cost us.

Archive notice: This is an archived post from blog.wikimedia.org, which operated under different editorial and content guidelines than Diff.

3 Comments

“causing few hundred megabit savings” > is it per hour, per day, per month ?

” is it per hour, per day, per month ?” …
… per second, I guess 😉

That’d blow my mind 😮