robots.txt

[Image: Robot icon, courtesy of Wikimedia Commons]

This is not a very exciting title for a post, granted, but this little file carries quite a bit of power, especially on the Wikimedia websites. The directives in this file tell search engines like Google and Yahoo! which pages they should not include when they spider Wikimedia content.
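To make that concrete, here is a minimal sketch of the format (an illustration, not the actual Wikimedia file): a User-agent line names the crawlers that a block of rules applies to, and each Disallow line gives a URL path prefix those crawlers are asked not to fetch.

    # Rules for every crawler that honors robots.txt
    User-agent: *
    # Do not fetch pages whose URL path begins with this prefix
    Disallow: /some/path/

Compliance is voluntary, of course: robots.txt is a request that well-behaved crawlers honor, not an access control.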
Many of the entries in robots.txt are there for technical reasons. For example, we do not want search engines to index dynamically generated pages, such as the Search page, because doing so would put too much load on our servers.
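On a MediaWiki site, most dynamically generated pages live under the script path or the Special: namespace, so the exclusions look roughly like this (a sketch; the exact entries in the live file vary by project):

    User-agent: *
    # index.php and the other scripts build a page on every request
    Disallow: /w/
    # search results are computed anew for each query
    Disallow: /wiki/Special:Search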
However, we have also included some discussion pages in robots.txt. The issue here is not so much article content as all the bickering, flame wars, and name-calling that we often find on discussion pages.
Consider just one aspect: employers constantly use search engines to hunt for information about prospective employees. Imagine a candidate being rejected because of an unanswered late entry in a year-and-a-half-old conversation telling Joe Q. Lastnamehere that he is a liar and a con man and that his authority is fraudulent. You may believe that an employer would be legally wrong to base a hiring decision on such a flimsy source, but people make these sorts of decisions all the time by using search engines.
Robots.txt already keeps search engines from spidering several types of discussion, including page deletion discussions on several wikis. By excluding those pages from search engines, we can keep the discussion on-wiki without broadcasting “non-notable” or “spammer” on every search. This has dramatically reduced the number of complaints our OTRS volunteers have received about these discussions.
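On the English Wikipedia, for example, the entries for deletion debates take roughly this shape, added under the User-agent: * group shown earlier (illustrative; the live file lists many more, including localized equivalents on other wikis):

    # Keep deletion debates out of external search results
    Disallow: /wiki/Wikipedia:Articles_for_deletion/
    Disallow: /wiki/Wikipedia:Miscellany_for_deletion/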
As some of our users have discovered, though, search engines pose another hazard: user discussion pages. These pages often contain users’ real names, and they often call those people “vandals” or “plagiarists” or “biased”. They can be as bad as deletion discussions, if not worse.
All projects should be aware of the potential hazards of leaving these pages open to spidering. It may be time to work out which namespaces in your language should be excluded, so that non-mainspace discussions about people cannot cause this kind of harm. You can request that the developers add entries to the robots.txt file by filing a bug at http://bugzilla.wikimedia.org.
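Because robots.txt matches literal URL prefixes, each language’s namespace and page names must be listed separately, with non-ASCII characters percent-encoded. Hypothetical entries to show the pattern (not quoted from the live file), each going under the User-agent: * group of the relevant wiki:

    # German Wikipedia deletion candidates ("Löschkandidaten")
    Disallow: /wiki/Wikipedia:L%C3%B6schkandidaten/
    # French Wikipedia deletion discussions ("Pages à supprimer")
    Disallow: /wiki/Wikip%C3%A9dia:Pages_%C3%A0_supprimer/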
Very truly yours,
Cary Bass, Volunteer Coordinator

Archive notice: This is an archived post from blog.wikimedia.org, which operated under different editorial and content guidelines than Diff.

4 Comments

Very informative. Thank you.

Perhaps the Foundation could concentrate some developer time and funding on improving our own search system. This would reduce reliance on external sites, and allow broader exclusion of “working” pages from Google etc.

@thewub: If you’ve been following wikitech-l lately, you’ll have seen there’s a lot of active, ongoing work on the internal search – it’s noticeably better of late.

There have been recent proposals to noindex all talk pages on the English Wikipedia. I’m opposed to this on the general principle that it’s best to let search engines index as much as possible and sort out relevance on their own (determining the relevance of web pages is what search engines do, after all). As long as talk pages aren’t misleadingly portrayed as sources of factual information, rather than as the opinions of individuals, I don’t think there’s a problem – and a forum-style interface for talk pages will go a long way towards cementing that impression.