This post is not about actions that platforms take themselves (for example, Google in the Google Books case, whatever you think of how it came out, was just directly scanning books itself), but about their responsibilities with respect to user activities (for example, videos uploaded by users to Facebook or YouTube). Some copyright people might say that this is a distinction without a difference, since a file host might be a direct infringer. But we're not talking about direct vs. secondary infringement, volitional action, or any of those fun doctrines; we're talking about what I hope is a fairly commonsense distinction between potential infringements that result from a platform acting on its own and those that directly result from user activity, while acknowledging that there may be grey areas where a platform takes affirmative steps to encourage third-party infringing use. (In those cases, under existing law, the platform may be liable at least as a secondary infringer, even if its behavior does not constitute direct infringement.)
This already hints at a pretty fundamental issue: to what extent should a platform that enables a user to infringe also be held to infringe itself? Which conduct should actually constitute “infringement”? Does an internet platform only have an obligation to avoid infringing in and of itself, or does it have an obligation to prevent third-party infringements? If it fails to police third-party infringements, should it become an infringer itself, or is there some other penalty? (The current DMCA framework, which creates a "safe harbor" for conduct that sometimes involves no underlying infringing act at all, does not contribute to clarity in this regard.)
More questions: Should an ISP be held to infringe if infringing material is transmitted over its wires? Should a file host be held to infringe if a user uploads infringing content to a private account? To a public one? Does it make a difference if the content is encrypted such that the file host or ISP can’t see it? Should services have an affirmative obligation to scan private content, and should this affect liability?
These questions are all about what the actual scope of copyright should be. It is very hard to disentangle views on enforcement mechanisms from views on the underlying scope of the law. I am personally more sympathetic, for example, to attempts to enforce rights that I also believe should actually exist. But the law does not distinguish between enforcement efforts relating to a 70-year-old out-of-print book that should be in the public domain by now, and a newly released movie. So let’s fix that.
Next, now that we’ve all agreed on what the scope of the law should be, what’s the best way to enforce the underlying rights? Who should bear the cost? The easy policy answer is that the “least cost avoider” should. But it’s not clear who that is, and anyway it might vary from case to case.
In general, I think that the cost of enforcement of many private rights ought to be borne by the rightsholder, because that’s the cleanest way to ensure that the cost of enforcement doesn’t exceed the benefit of the right. From a utilitarian perspective, creating a strict liability regime, with crippling statutory damages for platforms, could create a situation where people are spending $500 to enforce a $5 right. Not good! (This is basically one of the differences between private civil law and criminal law. If something is a crime, you might want the state to prosecute it, costs be damned. But if someone commits a tort against you, it’s on you to try to recover damages if you think it’s worthwhile. Most torts are just not worth suing over.)
But there are some pretty clear equity problems with taking that approach too far. I might want platforms to have to spend a disproportionate amount of money to protect the rights of individuals, if not those of Disney. At the same time, some of the enforcement mechanisms long advocated by the content industry are easily affordable by, say, Facebook and Google—who have already voluntarily implemented some of them anyway—but not by smaller platforms. For example, “staydown” obligations require fairly advanced filtering and scanning infrastructure, and people still find ways around YouTube’s Content ID pretty regularly.
Still, it seems that platforms are best situated to see what happens using their facilities. What if it would cost a platform $3 to protect a $5 right that would cost the rightsholder $4 to protect? In that case, don’t we want the platform to do it?
But there’s still another question: Which platform? If we’re talking about YouTube, it seems like a pretty easy question. But what about a third-party CDN, an internet transit provider, the user’s ISP, the operating system vendor, the browser maker, the end-user device manufacturer? Any of them could turn out to be the least cost avoider. Remember that there was a pretty strong feeling among content types that Apple was either encouraging or at least profiting from piracy when it released the iPod, and that some countries charge a copyright levy on blank media. So there’s no “layers” principle that everyone already agrees with.
It seems that if the goal is simply to determine empirically who can enforce a right in the most cost-effective way, everything should be on the table. People often say that you wouldn’t want to hold the manufacturer of the getaway car liable for a bank robbery, but that’s not really true; if holding the manufacturer liable were the best way to actually prevent robberies (note: it isn’t), we probably would. All that stuff about “proximate cause” you learn in torts class would go right out the window. This is in part why payment processors are responsible for enforcing all kinds of junk.
These are all ridiculously hard questions. There are answers to many of them within the current framework of the law, but we’re not talking about that right now.
If I were inclined to rework the basic legal framework in this area I’d probably:
- Distinguish between the obligations of large/dominant/etc. platforms and those of other platforms. I think it’s fine to put extra burdens and duties of care on large platforms, though the mechanism for accomplishing this should probably not be, “If you mess up, you stand in the shoes of the uploader of the content.” There are other ways to create obligations besides tinkering with substantive copyright law.

- Not treat all “intermediaries” or platforms or whatever the same. Pure transmission companies should not be liable for what passes over their wires, for instance. Current law does not make these distinctions, at least not very clearly.

- Some other stuff.
I’ll end with a note that is somewhat pro-content. I will admit to being surprised that The Pirate Bay is still up and running and accessible, after so many years. I mean, come on. I am not a fan of DNS/ISP-level site-blocking of the sort proposed in SOPA, but it sure seems like we need a better way to actually take action against such a blatantly unlawful site. Unfortunately, on the internet we quickly run into some really thorny jurisdictional issues that show no signs of being resolved.