Investing in our shared future, supported by AI: Announcing the Scoring Platform team


Illustration by Mun May Tee-Galloway, CC BY-SA 4.0.

On 12 January 2015, an editor by the name of Blank123456789 added the claim that “LLAMAS GROW ON TREES” to the article about Dog intelligence.  Within a second, the edit was flagged by an algorithm as potentially problematic.
Another Wikipedia editor, IronGargoyle, saw this flagged edit in an advanced curation tool called Huggle.  With a glance, he was able to identify the edit as vandalism and revert it. The whole interaction took a matter of seconds: a vandal vandalizes, and a patroller supported by advanced vandalism-detection artificial intelligences (AIs) sees the problem and corrects it.
Out of the 160,000 edits that the English Wikipedia receives every day, about 4,000 (2.5%) are vandalism, which has a very specific meaning on Wikipedia: editing (or other behavior) deliberately intended to obstruct or defeat the project’s purpose.  Reviewing the hundreds of thousands of edits that take place every day would be a monumental task, but the volunteer community of Wikipedia editors has managed quite well, thanks in large part to the AIs that have been developed to support them.
AIs make the work of maintaining massive encyclopedias, dictionaries, databases, and more much easier by making large-scale tasks (like counter-vandalism and article quality assessment) far quicker to spot and handle.  Historically, the AIs that have helped Wikipedians were built and maintained by volunteers. While these systems filled a critical infrastructural role, they were generally only available for the English Wikipedia and did not scale well.
Over the past few years, I have been working alongside a large group of volunteers on a core technology that makes basic AI support for wiki work much more accessible to developers who are not AI specialists.  Named “ORES,” it is an artificial intelligence service that makes predictions about which edits are vandalism, which new page creations are problematic, and which articles are ready to be nominated for Featured status. (See our past posts about how it works and about measuring content gaps with ORES.)
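To give a sense of how a tool developer consumes these predictions, here is a minimal sketch in Python of a request against ORES’ public scoring service. It assumes the v3 REST layout (https://ores.wikimedia.org/v3/scores/) and the “damaging” and “goodfaith” edit quality models; the revision ID in the usage example is just a placeholder, not a real edit.

# A minimal sketch of calling the ORES scoring service from Python.
# Assumes the public v3 REST layout (https://ores.wikimedia.org/v3/scores/)
# and the "damaging"/"goodfaith" edit quality models; the revision ID in the
# usage example below is a placeholder, not a real edit.
import requests

ORES_URL = "https://ores.wikimedia.org/v3/scores"

def score_revision(context, rev_id, models=("damaging", "goodfaith")):
    """Ask ORES for predictions about one revision on a given wiki."""
    response = requests.get(
        f"{ORES_URL}/{context}/",
        params={"models": "|".join(models), "revids": rev_id},
        timeout=10,
    )
    response.raise_for_status()
    data = response.json()
    # Results are nested by wiki, then revision ID, then model name.
    return data[context]["scores"][str(rev_id)]

if __name__ == "__main__":
    scores = score_revision("enwiki", 123456789)  # placeholder revision ID
    damaging = scores["damaging"]["score"]
    print("Predicted damaging:", damaging["prediction"])
    print("P(damaging):", damaging["probability"]["true"])

Tools like Huggle use scores of this kind to sort incoming edits so that the most likely vandalism rises to the top of a patroller’s queue.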
Without a doubt, the project has been a breakaway success. The service is actively running in production, its beta feature has 26,000 active users, over 20 third-party tools build on it, and it has received positive write-ups in Wired, MIT Technology Review, and the BBC. As a result, we’ve become a leader in conversations around detecting and mitigating biases, and have built collaborations with researchers at UC Berkeley, UMN, CMU, Télécom Bretagne, and Northwestern.
Developing and maintaining ORES requires a lot of consistent effort and vision, and we recently requested resources from the Wikimedia Foundation to formally support the project. With a budget and broader mandate in place, we can now focus on bringing new ORES models to production, improving performance, and extending accountability.

Meet the Scoring Platform Team

Photo by Myleen Hollero/Wikimedia Foundation, CC BY-SA 3.0.

The new Scoring Platform team is led by Aaron Halfaker, a principal research scientist who authored a series of studies into Wikipedia’s newcomer decline and designed Snuggle, a newcomer socialization support tool.  ORES is the next item on Dr. Halfaker’s research agenda.  He hypothesizes that by enabling a broader set of people to build powerful, AI-driven wiki tools, some of Wikipedia’s fundamental socio-technical problems may become much easier to solve.
Photo by Mardetanha, CC BY-SA 4.0.

Amir Sarabadani will be continuing his work as a quasi-volunteer and contractor for our peer organization, Wikimedia Germany.  Amir has developed several bots and bot-building utilities that are used to maintain content on Wikipedia and Wikidata.  He has been a core contributor since the early days of the volunteer-driven “Revision Scoring as a Service” project, and is the primary author of our insanely popular beta feature, the ORES Review Tool.
Photo by Adam Wight, CC BY-SA 3.0.

As of this month, the team is welcoming its first full-time, budgeted engineer, Adam Wight.  He has worked with the Wikimedia Foundation’s fundraising team since 2012 and has volunteered for ORES and the Education Program. Outside of computers, he’s done a few eclectic things like helping to start “The Local” food co-op and People’s University, an open-air school on subjects ranging from philosophy to the history of adventure playgrounds and practical blacksmithing.  Adam is currently working out the details of an auditing system that will allow humans to more effectively critique ORES’ predictions.

Where we plan to go next

In the next year, the Scoring Platform team plans to work in three new directions:

  • Democratizing access to AI. We’ll increase the availability of advanced AIs to more wiki communities.  Small but growing communities need AI support the most, so we’ll target these emerging communities to make sure they are well supported.
  • Developing new types of AI predictions.  The team is currently experimenting with new kinds of AIs to support different aspects of Wikipedians’ work.  We’re collaborating with external researchers to develop these prediction models.
  • Pushing the state of the art in the ethical practice of AI development.  AIs can be scary in all sorts of ways.  They can perpetuate biases in hidden ways, silence the voices of those who don’t conform, and simply operate at speeds and scales far exceeding mere humans.  We’re building a human-driven auditing system for ORES’ predictions so that human contributors will have a new and powerful way to keep ORES in check.

Until now, ORES has been primarily a volunteer-driven project.  With minimal financial support, a ragtag team was able to build a production-level service that supports 29 languages and 35 different Wikimedia project wikis.  The ORES Review Tool (a simple tool for helping with counter-vandalism work) has been a breakaway success, with over 26,000 editors installing the beta feature before it was enabled by default.

How to learn more and get involved

The Scoring Platform team welcomes collaborators and volunteers.  See the team’s page and our technical blog for more information about how to get involved.  See ORES’ documentation for more information about using the service or getting support for your wiki.  Or join the larger community of people interested in applying AI to make wikis work better via our mailing list and IRC channel (#wikimedia-ai on freenode).
Aaron Halfaker, Principal Research Scientist, Scoring Platform team
Wikimedia Foundation

Archive notice: This is an archived post from blog.wikimedia.org, which operated under different editorial and content guidelines than Diff.
