How Smart is the SMART Copyright Act?

Copyright spelled out on a keyboard. Image by Dennis Skley, CC BY-ND 2.0, via Flickr

In March 2022, United States Senators Patrick Leahy and Thom Tillis introduced the Strengthening Measures to Advance Rights Technologies Copyright Act of 2022 (SMART Copyright Act). The bill is deceptively simple: it would require the Librarian of Congress to mandate that online platforms use certain “technical measures” (i.e., automated systems) to identify infringing content. That simplicity masks its dangers, however. For that reason, though the Wikimedia Foundation agrees that technical measures to identify potentially infringing works can be useful in some circumstances, we sent a letter (reproduced below) on 19th April 2022 to the bill’s sponsors letting them know that we oppose it. 

Under the SMART Copyright Act, the Foundation and Wikimedia communities could be forced to accommodate and implement technical tools to identify and manage copyrighted content that may not be right for Wikimedia projects. This requirement could force the Foundation to change its existing copyright review process, even though the current process is working very well. 

Currently, content contributed to Wikimedia projects must be available through a free knowledge license, in the public domain, or subject to some other limitation on copyright protection. The Foundation and our communities mostly rely on Wikimedia editors to figure out whether particular content complies with the rules. These editors do use certain automated technical measures to help them, but the decisions about which measures are appropriate and what content requires action are theirs. In addition, the Foundation accepts requests to remove content under the Digital Millennium Copyright Act (DMCA). Because the user policies and review systems are extremely effective, the number of DMCA takedown notices the Foundation receives is very small, and many are not granted. For example, our last transparency report shows we received only 21 total DMCA notices between July and December 2020 (as compared to the nearly 150,000 received by Facebook). We granted only 2 of them, which indicates that the other 19 were inappropriate or defective in some manner.  

If the SMART Copyright Act forces Wikimedia projects to adopt inappropriate tools, or to substitute them for our existing copyright enforcement process, we are concerned that it will make our copyright enforcement worse. The SMART Copyright Act, like other proposals before it, puts too much faith in artificial intelligence and automated tools as the only solution to infringement. While we fully agree that tools can be a helpful aid in identifying infringement, they should not be treated as a fix for all enforcement problems. There are two main reasons for this:

  1. Technical tools are not good at determining when a work constitutes fair use or when a work has entered the public domain. This flaw leads to inappropriate censorship. Even YouTube’s Content ID identifies numerous false positives for infringement, and fails to catch a significant amount of problematic content. We worry that such tools would perform far worse than the Wikipedia non-free content policy enforced by users.
  2. Technical tools are often developed and owned by a single company, and are not open source or freely available. If specific tools are mandated by the Librarian of Congress, smaller companies and nonprofits will find it difficult to use them without becoming overly reliant on those companies.

The SMART Copyright Act tries to address these concerns by requiring the Librarian of Congress to implement a process to take input from a broad range of stakeholders. The problem with this approach is that large rights holders and large platforms are very likely to dominate the process, since these organizations can devote more time and staff to the proceedings. Lost in or absent from the debate will be small platforms, nonprofit platforms, and, most concerningly, the public and the creative community that relies on free knowledge protections to flourish. This process will likely produce designated technical measures that fail to account for the diversity of information, formats, forums, and platforms online, and for the impacts that deploying these measures could have on each of them.

Online platforms should be free to use the processes and technical measures that are most appropriate for their individual formats and communities. Appointing a government agency to dictate which technical measures platforms must use will likely lead to censorship of legal content. It could also make Wikimedia projects’ copyright enforcement less efficient. For those reasons, we hope that senators will reconsider the SMART Copyright Act. 


*     *     *

BE HEARD on the SMART Copyright Act! In addition to the letter the Foundation sent, you can let Congress know that you oppose mandatory censorship filters. Fight for the Future is leading a petition opposing the harmful impacts of the legislation that will be delivered to Congress on 25th April, 2022. You can sign the petition at www.nocensorshipfilter.com and make sure Congress knows just how many people are concerned about the impacts this bill will have on free speech. 


*     *     *

Senator Patrick Leahy   

Chair 

Senate Judiciary Committee Subcommittee on Intellectual Property

437 Russell Senate Office Building

Washington, DC 20510

Senator Thom Tillis

Ranking Member

Senate Judiciary Committee Subcommittee on Intellectual Property

113 Dirksen Senate Office Building

Washington, DC 20510

Dear Chair Leahy and Ranking Member Tillis:

The Wikimedia Foundation opposes the Strengthening Measures to Advance Rights Technologies Copyright Act of 2022 (SMART Copyright Act) due to our strong concerns about the negative impacts it could have on free knowledge projects, including Wikipedia. The bill, as currently drafted, would require the Librarian of Congress to institute a process that would mandate that nearly all online platforms use certain technical measures to identify and remove potentially infringing content from their services. We are concerned that requiring one-size-fits-all measures could upset the delicate balance between encouraging free expression and allowing for vigorous enforcement of intellectual property rights that has emerged since the passage of the Digital Millennium Copyright Act. In particular, we are concerned that the imposition of these measures could force the Wikimedia Foundation and other hosts of community-driven platforms to make changes to our public interest projects that could harm Wikipedia’s volunteer contributors’ commitment to the exchange of free knowledge and disrupt our already well-functioning copyright enforcement system.

The Wikimedia Foundation hosts several free knowledge projects, the most famous of which is Wikipedia. Within these projects, hundreds of thousands of users around the world create free, collective knowledge, and the projects use a number of long-established, community-led systems to ensure copyright compliance. One of the requirements for knowledge to be freely available is that it is hosted under a free culture copyright license (our projects primarily use Creative Commons licenses) or in the public domain. The Wikimedia projects also make exceptions to this free culture requirement on a case-by-case basis. For example, English-language Wikipedia allows fair use images to illustrate articles where no free image is available, such as for older musical groups and movies. This is reflected in a policy written and voted on by the users themselves.

The Wikimedia projects use a multi-layered system of human review, aided by tools that assist volunteer reviewers, to ensure the accuracy of copyright and licensing information on the projects. As an initial step, many Wikimedia projects have an upload wizard (the Wikimedia Commons wizard, for example, is the most commonly used for photographs) that prompts the user to provide licensing information or, if it is their own work, to license it under a Creative Commons license. The Wikimedia Foundation’s Terms of Use also contain a more formal content licensing agreement.

Once a work is uploaded, it is typically monitored by other users with the assistance of a variety of tools. The Foundation hosts some tools, developed by an open source developer community, that detect potentially infringing material on the Wikimedia projects. Other tools are hosted on community-created pages that help users address copyright issues. Volunteer editors use these tools to review changes to the Wikimedia projects and identify edits that may infringe copyright or violate the projects’ free knowledge licensing requirements. 

The Foundation also accepts DMCA requests sent to it directly. Because the user policies and review systems are extremely effective, the number of DMCA notices the Foundation receives is vastly smaller than the millions received by most hosting providers, and many of those we do receive are made in bad faith. For example, our last transparency report shows that we received only 21 total DMCA notices and granted only 2 of them, indicating that the other 19 were inappropriate or defective in some manner. 

Because our overall architecture focuses on hosting content that is freely available under copyright law, these measures broadly ensure that Wikimedia-hosted content is legally available. At the same time, limitations and exceptions to copyright are also an important part of protecting free expression. Some of the inaccurate DMCA notices we have received in the past were generated by technical tools, other than those hosted by the Foundation or commonly used by our community, that found works on our sites and either falsely asserted ownership of them or, even more concerningly, failed to adequately assess fair use even in clear cases of non-commercial educational use. Takedown demands generated by such tools can be disruptive and confusing for our user communities, and they require resources for a legal response that may take away from other work advancing the Foundation’s non-profit mission.

Under the SMART Copyright Act, the Foundation and our user communities could be forced to accommodate and implement technical tools to identify copyrighted content, regardless of whether those tools are appropriate for our projects. Forcing our projects to use inappropriate tools or to substitute these tools for our existing copyright enforcement process runs the risk of eliminating the nuanced review the Foundation and community are able to engage in when reviewing allegations of copyright infringement, including analysis of whether the content represents fair use of a copyrighted work. That will inevitably lead to over-enforcement and over-removal of legal content and harm to free expression and the free knowledge movement. 

The SMART Copyright Act, like other proposals before it, simply puts too much faith in artificial intelligence and automated tools to enforce copyright laws. While we strongly agree that tools can be a helpful aid in identifying infringement, they should not be considered a fix for all enforcement problems, nor should they supersede the work of a volunteer community; the risks that mandating reliance on them creates for free expression and the free exchange of legal content far outweigh any benefit to rightsholders. Even YouTube’s Content ID tool, which is highly accurate and efficient for its purposes, nonetheless identifies numerous false positives for infringement and fails to catch a significant amount of problematic content.

We appreciate that the bill does require the Librarian of Congress to consider many of these concerns when determining whether to make a technical measure a designated technical measure and on which platforms particular measures would have to be deployed. However, the process is very likely to be dominated by rightsholders and by large platforms that have the time and capacity to devote to the proceedings. Lost in or absent from the debate will be small platforms, non-profit platforms, and, most concerningly, the public and the creative community that relies on free-knowledge protections to flourish. In addition, any designated technical measure will present implementation problems. As noted above, such tools often do a poor job of analyzing whether content qualifies for one of the exceptions or limitations on copyright, including whether the content is a fair use. Finally, any approved technical measure will almost certainly be proprietary rather than free and open source, further limiting the ability of small platforms to integrate it and raising the likelihood that those who benefit will be the standards creators: the industry power players and the big tech companies. This will likely produce designated technical measures that are over-broad and implementable only by the largest platforms, or that fail to take into account the diversity of information, formats, forums, and platforms and the impacts that deploying these measures could have on them.

Thank you for taking the time to consider our views and our concerns. If you have any additional questions, please do not hesitate to reach out to Kate Ruane, Lead Public Policy Specialist for the United States, kruane@wikimedia.org.