

PRINTED FROM the OXFORD RESEARCH ENCYCLOPEDIA, COMMUNICATION (© Oxford University Press USA, 2016. All Rights Reserved). Personal use only; commercial use is strictly prohibited (for details see Privacy Policy and Legal Notice).

date: 25 September 2017

Policies for Online Search

Summary and Keywords

The volume of information on the Internet is incomprehensibly large and growing exponentially. With such a vast ocean of information available, search engines have become an indispensable tool for virtually all users. Yet much of what is available online is potentially objectionable, controversial, or harmful. This leaves search engines in a potentially precarious position, simultaneously wanting to maximize the usefulness of results for end users while also minimizing political, regulatory, civil, and even criminal difficulties in the jurisdictions where they operate. Conversely, the substantial logistical and legal obstacles to regulating Internet content also leave policymakers in an unenviable position, and content that the public or policymakers may well want regulated—even that which is patently illegal—can remain virtually impossible to stamp out.

The policies that may affect online search are incredibly varied, including contract law, laws that affect expression and media producers more generally, copyright, fraud, privacy, and antitrust. For the most part, the applicable law was developed in offline contexts and will continue to apply there as well. Still, Internet search is an area filled with its own vexing policy questions. In many cases, these are questions of secondary liability—whether the search provider is liable for search results that link to websites beyond its control. In other areas, though, the behavior of search providers comes under specific scrutiny. While many of these questions could be or actually are asked in countries around the world, this article focuses primarily on the legal regimes in the United States and the European Union.

Keywords: Internet, search engines, law and policy, copyright, privacy, antitrust

Who Gets Regulated and What Is Included by Policies on Search

In ordinary conversation, and even in much scholarship, the term “search engine” is often used narrowly—largely to refer to general-purpose Internet search portals such as Google and Bing. These companies are undoubtedly important; they anchor much of how people use the web, and Google alone is one of the world’s most valued and influential companies. Yet the place for and importance of technologies to search databases—and even web-based databases—is far wider and more diverse than that. Any program that digs through one or more computer databases in response to user queries is, in a very literal sense, a search engine. Thus, anybody who designs such a search technology is at least potentially subject to many of the same regulations that might apply to a major commercial web search portal, especially if this search technology is made available to the public.

Thus, in addition to general-purpose Internet search services, others regulated by Internet search policy may include sites that deploy their own custom search engines, specialized search services that focus on topics such as news or social media content, companies that host searchable user-created content such as reviews of products or services, and computer programmers who develop new search technologies for purposes like fun, research, and education. This means many of the policies that affect search are of concern not only to Yahoo! and DuckDuckGo, but also to a large share of the technology ecosystem. Much of the legal discussion, however, is understandably focused on commercial search operators.

The scope of the laws and policies that govern Internet search is hard to overstate. Internet law generally, and the law that touches Internet search specifically, is mostly a collection of other areas of law (Grimmelmann, 2015). Thus, all of the areas included here, such as contract law, copyright law, and the freedom of expression, are also important for offline life. This article is thus an exploration of the ways in which these areas of law touch on Internet search, though there are areas where the Internet and search engines present previously unknown challenges that have led to changes, refinements, and specific schemes within several of these areas.

Multiple Jurisdictions

While the term is dated, the geographic component of “World Wide Web” is still a very accurate name for the collection of things available to a user with an Internet-connected web browser. The Internet is not merely international, but transnational; in many cases, users are literally unaware of which country content is coming from, where their personal files such as emails and online backups are being stored, and who owns which services. So it is with potential results from search engines. Excepting the minority of regimes that deploy aggressive technological Internet censorship, users are generally able to visit nearly any site, anywhere in the world. Thus, even when a regulator imposes regulations or obligations on a domestically based search engine, search providers who are based abroad can provide an unregulated alternative to end users. For countries that do not affirmatively restrict Internet content, the transnational nature of the Internet will therefore limit the value of such restrictions.

One potential way that countries can impose some degree of control over foreign websites is to take advantage of the system of domain names. Each website owner registers and pays for the site's domain name with a domain registrar—a retail vendor of web addresses. The registrar then communicates information about purchases and renewals to the domain name registry—the organization that controls each top-level domain (TLD), such as .com or .uk. The organization that oversees the entire system is the Internet Corporation for Assigned Names and Numbers (ICANN). This system gives each country at least some control over some web addresses (at a minimum, those with that country's TLD), and it gives the United States, in particular, some leverage over many websites that are owned and hosted abroad. Most ICANN-accredited registrars are in the United States, as is ICANN itself. So is VeriSign, the registry for two especially common TLDs, .com and .net. U.S. dominance over the domain registration system has been greeted with years of pushback and efforts by other governments to help shape the future of ICANN, to some effect (Christou & Simpson, 2007; Mueller, 2010).
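The hierarchy just described—registrars selling names whose final label is controlled by a single registry—can be made concrete with a short sketch. This is purely illustrative (the helper name `tld` is invented and real domain delegation involves country-code subtleties this ignores); it only shows how a name's last label identifies which registry, and thus which jurisdiction, sits above it.

```python
def tld(domain: str) -> str:
    # The top-level domain is the last dot-separated label; the registry
    # that controls that label sits above every retail registrar.
    return domain.rstrip(".").rsplit(".", 1)[-1]

# A .com name falls under the .com registry (VeriSign, based in the
# United States), while the same second-level name registered under .me
# falls under Montenegro's registry instead.
print(tld("example.com"))  # -> com
print(tld("example.me"))   # -> me
```

The practical consequence, discussed below, is that seizing a name under one TLD does nothing to a sibling registration under another country's TLD.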

Drawing on this U.S.-based leverage over domain names, in June 2010 U.S. Immigration and Customs Enforcement (ICE) launched "Operation In Our Sites" to seize domain names of websites accused of engaging in illegal online behavior (Herman, 2013). Targeted sites have included those accused of selling counterfeit handbags, trafficking in child pornography, and facilitating widespread copyright infringement. Some of the affected sites are primarily intended to help users find content, instead of hosting that content directly. One such site was a Spanish website, Rojadirecta, which helps users find unlicensed online streams of sports matches. In 2011, the site had its .com and .org domain names seized, so that users who tried to visit were instead directed to an ICE message stating that the domain had been seized and warning against copyright infringement. This occurred even though a Spanish court had already found the site to be legal in that country. The international nature of the web and of TLD registries, however, limits the likely impact of such an action. The site simply reappeared on a different TLD, .me (the TLD of Montenegro), and kept serving users—even U.S. users—with the same content, and it was easy for users to find this new site using general-purpose search engines such as Google or Bing. Unless the U.S. government is willing to leverage ICANN being based in the United States to seize foreign websites—and, so far, it is not willing—this is a substantial loophole that will limit the effect of domain seizures. An alternate course of action is that the state could restrict the ability of search engines to return links to (allegedly) forbidden content. This, too, would likely have little effect in restricting access to forbidden content; when such a change was proposed in the name of copyright enforcement as part of the Stop Online Piracy Act, it was rejected in 2012 after the largest online protest in history (Herman, 2013, pp. 1–5, 180–205).
Across the range of Internet policy, the global nature of the Internet reduces the ability of most governments—even the global Internet hegemon—to limit what their users have access to.

The range of jurisdictions also creates difficulties for anyone trying to study the varying policies that affect online behavior—or anyone trying to create an online business that must account for multiple jurisdictions. This article provides several comparative examples for a range of relevant laws, and these are merely illustrative rather than comprehensive. Yet the differences are even more profound than that implies, even reaching down to the foundations of different countries’ legal systems. The laws of some countries (such as those of continental Europe) are based in the civil law tradition, with its emphasis on trying to create a comprehensive, continuously updated code of laws. Others (most notably the United Kingdom and its former colonies) are based on the common law tradition, which asks judges to integrate a cacophony of constitutional clauses, statutes, relevant cases, and even customs and business practices to reach their decisions (The Common Law and Civil Law Traditions, n.d.). This creates potential confusion when jurisdictions are mixed, such as in the European Union where common law and civil law systems must co-exist (Tetley, 1999).

Multiple Stakeholders

In addition to being transnational, another important trait of the Internet is that it represents the coordinated efforts of a wide range of stakeholders who serve different roles. This article may focus on state regulation, but that sells short the degree and diversity of forces that shape the broader set of rules that govern Internet systems. This means that, in addition to governments, a whole host of private and public sector actors also matter; these include telecommunications companies, website operators, web hosts, domain name registrars and registries, creators of operating systems and web browsers, hardware vendors, content companies, educational institutions, end users, and many more. The rules that shape the contours of the Internet, then, include decisions by governments, but they also include a large set of corporate policies, contracts, industry and cultural norms, expectations, and market decisions—a combination of regulation, self-regulation, and co-regulation (Marsden, 2012). Further, these rules take shape across widely divergent areas of Internet governance. Dutton and Peltu (2007) break these into three broad areas: Type 1, "Internet centric" rules about technical infrastructure and standards such as domain name addressing; Type 2, "Internet-user focused" issues, such as spam and unauthorized intrusion into computer systems, that do not exist offline; and Type 3, "non-Internet centric" issues, such as copyright and free expression, that also matter offline. These tend to be managed differently, but the lines are not always clear. Standards bodies and industry practices tend to determine outcomes in Type 1, and this alone is a highly contested affair—not only within standards bodies and between companies, but also in a tug of war between these private actors and states (DeNardis, 2014).
Even in Type 1 conflicts, though, questions such as antitrust law, trademark law (as applied to domain name registrations), and more, can create a role for legislators and courts. Similarly, while Type 3 matters are often handled under civil and criminal law, stakeholders from industry and even user cultures also affect the lived experience of, for example, who is more or less free to speak their minds online. Type 2 issues, such as spam, are open to intervention by both state and non-state actors, but with somewhat limited efficacy all around.

A general- or special-purpose search engine will typically serve in the role of a website, connecting to other websites. This means interdependencies with one or more domain registrars, the registry or registries for each TLD that is used, ICANN, and potentially one or more webhosting providers. Also, the very purpose of the search engine depends on there being an array of other websites, of sufficient value and interest to the end user and in enough variety that a separate search tool is required to index them all. These other websites must implicitly or explicitly allow search engines to crawl their pages, so that the engines can better understand how each site might or might not satisfy a given user as expressed by that user's search terms. These other sites will also often "game" the search results of popular engines, each site manager seeking to maximize the site's ranking in search results. Finally, all of the parties in the Internet ecosystem depend on last-mile connections to the Internet, and last-mile providers in turn need the backbone providers that move bits across long distances (Nuechterlein & Weiser, 2013).
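The permission that websites extend to crawlers is most commonly expressed through a robots.txt file published by the site owner. As a minimal sketch of how a crawler checks that permission—using Python's standard-library parser, with an invented user-agent name and invented example rules—a crawler might do the following before fetching a page:

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt, as a site owner might publish it to tell
# crawlers which parts of the site may be fetched and indexed.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# A well-behaved crawler consults the rules before each fetch.
print(rp.can_fetch("ExampleBot", "https://example.com/products"))   # True
print(rp.can_fetch("ExampleBot", "https://example.com/private/x"))  # False
```

Compliance with such rules is voluntary—a matter of industry norms rather than law in most jurisdictions—which is part of why governance of search is a multi-stakeholder affair.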

Search engines thus serve a central role in a highly interdependent system; they coordinate many other actors in the Internet ecosystem, yet they also depend upon and are vulnerable to decisions made by many of these other actors. Similarly, the other actors and even unrelated third parties can quickly become affected by the behavior of many of these actors. This is why the state’s role in governing Internet search is often a matter of the kinds of law that govern relationships between people or other actors such as corporations. In particular, this often means contract law.

Contract Law

Contract law is most notably involved in search via the end-user license agreement, or EULA. Through these agreements, search providers give users a take-it-or-leave-it set of terms that govern the use of each site. These contracts “are notoriously cumbersome and go unread by the majority of users, including some judges” (Manta & Olson, 2015, p. 148). These contracts are also the subject of withering criticism, especially by those who do not see them as constituting meaningful consent by end users (Custers, 2016). They are still used, though, because they do a great deal to indemnify service providers such as search engines. They also provide at least some warning about the types of activities that will lead providers to suspend user accounts, remove websites, and so on.

In addition, search engines maintain a large number of contractual relationships with other companies that serve important roles in the Internet ecosystem. These include hardware and software contracts, business partnerships, and agreements about the carriage of data. The types of terms that are allowed, required, or forbidden by law in a given contract will vary substantially across jurisdictions, and these laws and policies generally have not kept up with the rapidly evolving search environment.

Internet Search, Free Expression, and Censorship

In most developed countries, laws and policies that restrict the freedom of expression must be justified in more extraordinary terms than rules that do not govern content. In the United States, this is embodied in the First Amendment to the Constitution, which reads in part, “Congress shall make no law … abridging the freedom of speech, or of the press …” Similarly, Article 10 of the European Convention for the Protection of Human Rights and Fundamental Freedoms (also known as the European Convention on Human Rights, or ECHR) includes the promise, “Everyone has the right to freedom of expression.” These rights may be subject to a number of limitations, such as restrictions that seek to protect national security, prevent defamation, and preserve privacy and confidentiality. The ECHR is broadly understood to allow countries to prohibit hate speech, and many European countries—such as Germany, France, Sweden, and the U.K.—have such prohibitions.

Despite limitations, one generally has more freedom under the law to engage in expression than to engage in similar non-expressive activities. This is why, for instance, burning a national flag is more likely to be legal than setting fire to a non-symbolic piece of cloth. The question of whether search results count as expression is therefore one with substantial legal significance. If search engines are expressive, this calls into question a substantial range of potential and attempted regulations. These range from lawsuits seeking to force search engines to link to certain sites or rank sites in a given way, to prohibitions on linking to obscene or indecent content. In the United States, a few courts have examined this question, and each has held that search results are expression that is protected under the First Amendment. With this principle in hand, the courts have turned away plaintiffs who sought to compel given search results from a search engine. The clearest such case to date is Zhang v. Baidu.com, Inc. (S.D.N.Y. 2013).

Search engines also generally benefit from the principle that they are not considered publishers of the content to which they direct users, cutting off the bulk of concerns that a search provider could be held liable as if they were the publishers or distributors of such content. In the United States, this is most clearly codified in a part of the Communications Decency Act (1996), which specifies, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider” (47 U.S.C. § 230(c)(1)). While automated search results retrieve words, images, and other materials that may violate the law—such content being just enough to give the user a sense of what the target website has to offer—search providers are nonetheless regarded as not having an editorial function or serving as publishers of such content.

There are limitations, though. For instance, when Roommates.com designed its website, it included listing and search criteria that asked users about demographic factors such as sexual orientation and familial status. In Fair Housing Council v. Roommates.com (9th Cir., 2008), the court held that, because the company designed these features, it was actually responsible for housing discrimination rather than an innocent operator of a website and search platform where some users may be engaging in discrimination. The court specifically distinguished this from neutral search platforms such as Google and Yahoo!

European law, as well, generally holds search providers not to be publishers of the content that they help users to find. Some, though, have begun to worry that this principle is being encroached upon. Several developments, such as the newly created “right to be forgotten” (discussed below, in “Privacy”), have some concerned that the broad immunity of search providers is being eroded one topic at a time (Griffith, 2016).

Some countries impose substantial restrictions on Internet content, including search results. Most notable of these is China, due to its size and economic clout, though there are many other examples, including Cuba, North Korea, and Burma. In these countries, search engines foreign and domestic may accede to government mandates. Alternately, end users may try to visit foreign websites that do not comply with local mandates. This can be accomplished via tools that allow one to encrypt the content of data packets, to hide which sites are being visited, and even to hide the fact that these techniques are being used.

For any attempt to regulate search content, there is also the substantial problem of false positives and false negatives. No computer is capable of correctly determining, with total accuracy, which sites do or do not contain forbidden content. Thus, regulators will always capture some innocent information providers—blocking harmless sites by mistake. Conversely, censors will also regularly fail to restrict access to at least some forbidden content.
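A deliberately naive keyword filter makes the false-positive/false-negative problem concrete. The blocklist and example strings below are invented for illustration; real filters are more sophisticated, but the structural problem is the same.

```python
# A toy content filter: block any text containing a forbidden keyword.
BLOCKLIST = {"pirate"}

def is_blocked(text: str) -> bool:
    words = text.lower().split()
    return any(w in BLOCKLIST for w in words)

# False positive: a harmless page is blocked by mistake.
print(is_blocked("History of pirate radio in Britain"))    # True

# False negative: trivially obfuscated content slips through.
print(is_blocked("Download movies from p1rate sites"))     # False
```

Hardening the filter against the second failure (say, by fuzzy matching) tends to worsen the first, and vice versa; no tuning eliminates both error types.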

Secondary Liability

Search engines, as the connecting hubs of the Internet, are especially vulnerable to charges that they have enabled or facilitated legally problematic online behavior. Broadly speaking, this is a question of secondary liability—the area of the law that determines whether and under what circumstances some people or organizations can be held partially or entirely responsible for the behavior of others. Examples of secondary liability in the offline world include parents being generally responsible for the conduct of children, employers being potentially responsible for the behavior of employees, and establishments such as bars and restaurants being responsible if they allow their premises to become a place where the law is openly flouted. Unlike these examples, though, secondary liability can be harder to delineate online. “The scale and pace of the Internet have strained long-accepted tenets of secondary liability doctrine in both the common law and civil law world” (Goldstein & Hugenholtz, 2013, p. 339). Whatever theories of secondary liability apply in a given jurisdiction, search engines may have some exposure to civil or (in extreme cases) criminal legal liability.

There are some limitations, but in most areas of problematic content, automated search engines are largely immune to questions of secondary liability under U.S. law. This is so even when the content in question is obviously illegal or even dangerous, such as sites illegally marketing drugs (both pharmaceuticals and street drugs), sites that defame or threaten others, explicit images of minors, sites engaged in fraud and deception, and so on. In Europe, the range of content that may be illegal is somewhat broader, notably including hate speech. Article 10 of the ECHR stipulates that freedom of expression also comes with substantial responsibilities and is thus subject to a range of potential limitations, such as the various national prohibitions on hate speech. Still, the European Union’s 2000 e-commerce directive provides relatively robust protections against liability for providers of online services who handle other people’s information, and this includes search services (European Commission, 2000).

The threat of secondary liability has still not vanished. Courts, legislatures, international treaty-making bodies, and public opinion could still turn against search providers for a host of potential reasons, only some of which are even foreseeable. Thus, major search engines such as Google and Yahoo! have for years engaged in self-regulation to try to limit the extent to which they facilitate obviously illegal behavior (Nathenson, 2013). They have done so in a way that does not compromise their core business model—as would happen if, for instance, search companies began manually approving or rejecting individual links for inclusion. In most cases around the world, these efforts at self-regulation have generally been sufficient to prevent substantial secondary liability and even to forestall legislative proposals that might have tied search providers' hands. Two particularly significant examples of potential secondary liability are trademark and copyright (discussed in the following two sections), and these both illustrate limited liability combined with substantial efforts by search providers that exceed the minimums required by the law.

Trademark
Trademark law is a specific area of substantial concern for search engines—and it can be a question of primary or secondary liability, depending on the facts of the case and the jurisdiction. If one wants to purchase or sell a knock-off handbag, search engines can be especially helpful. Also consider, though, the problem of famous marks being valuable search terms to identify a class of products. If one wants to sell a brand of cotton swabs other than Q-tips, the ability to buy ads to be shown to users who search for “Q-tips” would be especially valuable. If a search engine such as Google then sells such ads to other brands, it is understandable that Unilever, the maker of Q-tips, would be upset that their competitors are using Unilever’s brand name to identify consumers and sell competing goods. In this scenario, though, Unilever would generally have little legal recourse, anywhere in the world, as discussed below. A somewhat less cut-and-dried situation is where a competitor of a more famous brand may use deceptive or misleading online communication in a way that could confuse consumers about which brand is for sale. Whether in this case, or in cases of clear infringement—the sale of fake Louis Vuitton handbags, for instance—search engines generally need to maintain vigilance to ensure that they are not accessories to infringement.

Overall, search providers face limited risk of liability due to trademark infringement based merely on the practice of selling search ads based on brand names as keywords. This holds in the United States, where “to date, search engines have been successful in resisting such claims” (Dinwoodie, 2014, p. 475 n. 70). Google conducts their business in the belief that it simply is not infringement to sell ads to competitors using famous marks as keywords (Tuneski, 2011, p. 206). In practice, this means other brands of cotton swabs can pay search companies for the right to show up in advertisements when a user searches for “Q-tips.” This view is also becoming the standard in other jurisdictions. One example is the 2013 decision in which the High Court of Australia similarly held Google to be not liable for any potential consumer confusion that may result from advertisers’ uses of Google’s AdWords service (Goldman, 2013; Google Inc v Australian Competition and Consumer Commission, 2013). The EU, as well, has concluded that the mere act of buying or selling Internet ads using famous marks, by itself, does not constitute trademark infringement (Grigoriadis, 2014, p. 177).

The situation gets livelier when there is a genuine potential for consumer confusion—when either search results or advertisements could lead consumers to believe that a famous brand is being sold when it is actually a competitor’s goods. In the United States, these cases are technically matters of primary infringement, but they still hinge on questions that are more befitting secondary liability, such as whether a search operator knew about the infringement, and whether they induced the infringing behavior. Trademark owners are even more anxious when deliberate counterfeits of their famous marks show up in search results. Search providers generally have no direct control over these occurrences, though, and the large commercial search operators all have robust systems whereby trademark owners can contact them and ask for the removal of offending content, thereby generally evading secondary liability for facilitating wanton acts of infringement. Combined with the doctrine that search ads served on brand names are not trademark infringement, the notice-and-takedown system seems to have mostly immunized these sites from trademark liability in the United States (Dinwoodie, 2014).

In the European context, too, there are limited possibilities for holding providers responsible for trademark infringement, and yet search providers have shown substantial and growing support for trademark holders in the removal of offending content. In a 2009 ruling (consolidating cases between Google and trademark holders headlined by Louis Vuitton), the European Court of Justice held that Google is not responsible for potential trademark infringement by those who purchase advertising services from online search providers (Joined Cases C-236, C-237, & C-238/08). This is even if the search terms purchased are based in large part on famous brands, even if the selling of ads based on trademarked names drives up the price for such ads for the company that owns the famous mark, and even if those purchasing ads use this technology to commit trademark infringement. Despite this protection for online platforms, the companies that operate those platforms have taken part in a voluntary effort to reduce such infringement:

[T]he European Commission has superintended a dialogue among more than thirty stakeholders consisting of brand owners and Internet platforms regarding their respective roles in tackling counterfeiting online. And, in 2011, this led to the adoption of a Memorandum of Understanding (MOU), which may now be the vehicle by which the European Union seeks to universalize its approach to this question.

(Dinwoodie, 2014, p. 468)

This MOU is an especially illustrative example of co-regulation; it does not amend or interpret existing law, but is a voluntary agreement that provides a more streamlined notice-and-takedown system so that trademark owners can remove infringing items from sites.

Copyright
Copyright is an area that has garnered substantial legal anxiety for search operators, and the resulting conflicts and even legal battles between search operators and copyright holders have been the subject of substantial scholarship (Elkin-Koren, 2014; Proskine, 2006; Riordan, 2013). These involve questions of direct liability—generally for displaying too much of a copyrighted work in search results—and of secondary liability.

Search engines themselves can infringe on the original works, either in the course of copying protected works or via the search results displayed to users. If copyright’s prohibition on making unauthorized reproductions were interpreted in the strictest way possible, with no limitations or exceptions, creating a search engine would be virtually impossible. The process of creating a searchable database of the web necessitates making unauthorized copies of online works in order for the search technology to be able to search the content more rapidly and deliver relevant excerpts. Despite some push by copyright holders to demand permission to have their content indexed, courts and legislatures have decided to allow search engines to reproduce entire websites for the purpose of indexing their contents, without legal liability. Self-regulation and technological solutions have helped this cause; changes in HTML specifications in the mid-1990s added options (such as the “noindex” and “noarchive” meta tags), whereby websites can ask search engines not to index or archive a website. Reputable search engines accede to this request. This creates a system of implied consent; as long as a website does not include the instruction to search engines to exclude the site, the site owner is implicitly allowing search providers to index a site and even show cached versions in search results (Gasser, 2006, p. 213; Field v. Google, 2006, pp. 1115–16).
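The opt-out mechanism underlying this implied-consent model can be sketched with a small parser built on Python's standard library. The class name and example page below are invented for illustration; the "noindex" and "noarchive" directives themselves are the real meta-tag values that reputable crawlers honor.

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects directives from <meta name="robots" content="..."> tags,
    the opt-out signal that underpins implied consent to indexing."""
    def __init__(self):
        super().__init__()
        self.directives = set()

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            for d in a.get("content", "").split(","):
                self.directives.add(d.strip().lower())

# A hypothetical page whose owner opts out of indexing and caching.
page = '<html><head><meta name="robots" content="noindex, noarchive"></head></html>'
p = RobotsMetaParser()
p.feed(page)

may_index = "noindex" not in p.directives    # False: do not index
may_cache = "noarchive" not in p.directives  # False: do not show cached copies
print(may_index, may_cache)
```

Because the default—no meta tag at all—permits indexing, the burden falls on site owners to opt out, which is exactly the implied-consent arrangement courts have endorsed.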

Even without the theory of implied consent, search engines are on reasonably firm ground when it comes to direct liability in the United States, though the legal basis for this is less clear in other countries. In the United States, the doctrine of fair use—as found in the U.S. Code at 17 U.S.C. § 107—provides a broad, context-dependent defense against charges of copyright infringement. It provides four factors that courts are to weigh, in combination: the purpose of the use, the nature of the copied work, the amount copied, and the effect on the market value of the original work (LaFrance, 2011, § 10.2). Courts have provided a very broad interpretation of fair use when it comes to search engines. These findings generally start from the premise that Internet search is a highly transformative reason for using an original work, and even before the Internet was widely adopted, transformative uses were highly privileged and especially likely to be found to be fair uses. When a use is highly transformative—such as research, teaching, criticism, and parody—courts have regularly held that the remaining three factors are of less importance. Since courts have regularly found that finding information via a search engine is a transparently different purpose from consuming that information, these courts have generally held that search engines are engaged in fair use (Herman, 2015).

While merely indexing websites is not generally viewed as grounds for a charge of infringement, search results themselves sometimes are—by containing “too much” of the language of an original work or reproducing images, sounds, and videos that infringe on original works. This has become less of a concern over time for search sites, as long as they do not reproduce too much of a protected work. In two U.S. cases (Kelly v. Arriba Soft Corp., 2003; Perfect 10, Inc. v. Amazon.com, Inc., 2007), the search engine was copying and storing photographic images, even reproducing reduced-quality versions as part of search results. The 9th Circuit Court of Appeals held, in both cases, that the purpose of enabling end users to find and recognize images was sufficiently transformative and of enough social importance as to warrant a finding of fair use. Two other, related cases (Authors Guild, Inc. v. HathiTrust, 2014; Authors Guild v. Google, 2015) were decided in the 2nd Circuit Court of Appeals. HathiTrust (a nonprofit that coordinates the activities of dozens of colleges and universities) and Google acted together to scan entire libraries’ worth of books in order to create a searchable database, and the court held that both HathiTrust and Google were engaged in fair use. As with the image search cases, the court held that creating a searchable database of books is a highly transformative use and justifies a finding of fair use.

These cases are especially important and clear examples of a much broader trend in U.S. jurisprudence toward an ever-firmer consensus around the legality of creating searchable indexes and providing reasonably modest excerpts or previews to end users (Samuelson, 2015; Tushnet, 2015). Yet the trend is not unconditional or uninterrupted. Even in the United States, and even following years of case law that is generally quite favorable to search operators, it is still possible for a search operator to be held liable largely for being too useful to the user—albeit in a case involving a particularly aggravating defendant and a plaintiff engaged in noble work (Herman, 2015; Quinn, 2014).

This is a matter of potentially more serious concern in other jurisdictions, which tend not to have anything like the broad and malleable fair use defense found in the United States. In Europe, for instance, copyright law contains far narrower and more specific exemptions—nothing like the general-purpose, ambiguous defense of fair use that is so important in U.S. copyright. Quite the contrary, the 2001 EU Copyright Directive provides a specific list of exemptions that countries may include in their national laws (European Commission, 2001). In the relevant section (Article 5), the directive contains only one requisite exemption, giving legal cover for “temporary acts of reproduction … which are transient or incidental … whose sole purpose is to enable (a) a transmission in a network between third parties by an intermediary, or (b) a lawful use.” This provision is of substantial value to Internet service providers (ISPs), the companies that provide the infrastructure that facilitates Internet connectivity; transmitting data between third parties while acting as an intermediary is the core of their business model. This exemption is less obviously useful to search engines, however, since the copies they store of Internet content are neither transient nor incidental but are retained indefinitely, and retaining and analyzing these copies is a core part of a search engine’s business model.

Despite this lack of either an explicit exception or a generally applicable defense akin to fair use, search companies have been permitted to operate in Europe without gaining explicit permission from all included sites. They have, however, been found liable in a number of cases that cut in quite the opposite direction from the trend in U.S. law. News aggregation is an especially perilous venture, subjecting many search providers to legal liability across several European countries (Weaver, 2012) for activities that would probably be legal in the United States. Image search has been a thorny issue in the German courts, and though the higher courts pulled that country toward the U.S. position on this question, the lack of a general-purpose defense such as fair use strains the reasoning required to do so (Potzlberger, 2013). Such findings are frustrating for search companies, but good news for rights holders. They are also, in part, an outcome of the difference between the U.S. common law tradition and the civil law systems that hold in continental Europe. When all exceptions must be described in advance, and the underlying technology moves quickly, the civil law model can be caught flat-footed. The trade-off in the U.S. judicial system—with its fair use doctrine and reliance on judges’ interpretations—is that outcomes on copyright can be harder to predict, even if they have become relatively more predictable over time (Samuelson, 2015).

Another area of copyright law that is of very substantial concern to search operators is the question of secondary liability for infringement that takes place thanks in part to the search service. There is a far clearer path to problems of secondary liability for services that host data than there is for search operators. For instance, Google has far more potential liability for infringing copies of works posted on sites such as YouTube and Blogger, where the infringing copies that are available to the public actually reside on Google’s own servers. Still, at the end of the 20th century, there was a potential argument to be made that a general-purpose search engine also has potential secondary liability for helping users to find infringing copies of works.

In 1998, the U.S. Congress created a system by which copyright holders can ask online providers to remove infringing content (Harris, 2015). This statute (17 U.S.C. § 512) was passed as part of the Digital Millennium Copyright Act, or DMCA, and it sets up what is commonly referred to as a notice-and-takedown regime. Virtually every kind of online service provider who handles data produced by third parties is covered, and this includes general-purpose search portals. Under this law, a copyright holder can contact any host of online information with notice about an infringing copy of a work found on the provider’s network. The online service provider is under no binding obligation to remove the allegedly infringing work, but for any copies that are removed promptly, the service provider is immunized against any further legal actions. This creates a strong incentive to comply with such takedown requests; compared even with the slim chance of a major lawsuit, the anger of a typical end user (even one whose work is taken down by mistake or through an overly aggressive assertion of copyright) is quite tolerable. The relevant EU Directive (European Commission, 2000) also contains something like the broad principles entailed in the U.S. notice-and-takedown regime, but it refers explicitly to services that provide “the storage of information provided by a recipient of the service” (Art. 14), not the panoply of service providers listed in the U.S. code. Search services thus seem not to be covered.

In any case, neither the U.S. nor EU notice-and-takedown immunity is actually of much significance to automated search engines. At the dawn of the web there was something of an open question about whether offering a link to illegal content could itself be illegal, and this created tremendous anxiety for (and corresponding legal and political advocacy by) Internet firms in general. In the years since, however, a global consensus has emerged that automated search engines have no liability for linking to infringing content, and thus that they have little to worry about from the threat of secondary liability for copyright infringement. As an illustration, consider the 2015 version of a standardized form Google offers for asking that content be removed from its search engine. On this form, there is a choice for alleging a trademark violation, but no such choice for alleging a copyright violation (Google, n.d. b). Perhaps more strikingly, a user can use Google to search for infringing copies of any file, and if the file is reasonably popular—any recent blockbuster film, for instance—Google will return many results. This works in EU countries as well. Thus, while legislatures could target more stringent copyright laws at search providers and disincentivize search results that link to infringing content, governments around the world seem to have chosen not to do so. Some policy proposals have been floated that would seek such an effect, such as the Stop Online Piracy Act (Stop Online Piracy Act, 2011) that was proposed in the United States, but they have not passed so far.


One major concern for the bottom line of several kinds of search operators is the problem of online fraud. For general- and special-purpose web search companies that thrive on advertising, as well as the companies that are paying for online ads, the threat of advertising fraud is real and especially thorny to resolve. This problem doubles for companies that place advertisements on other companies’ websites (Grimmelmann, 2007). Consider, for example, all of the fraud that Google needs to worry about. If it accepts ads from fraudulent actors, those who are victimized by such fraud or government regulators may come after Google and demand action or even reimbursement. Such claims have generally fallen on deaf ears in the courts, but it is still bad business to allow one’s service to be used for defrauding others, if this can be helped. More directly connected to Google’s bottom line is fraud in its AdWords program. This service places context-appropriate ads in webpages, collecting a fee from advertisers for each click. For these ads, the hosting web pages get a cut of Google’s fee charged to the advertiser, giving such sites a strong incentive to cheat and make it appear as though extra people are clicking on such ads. Competitors, too, can click on a company’s ads to drive up its marketing costs. Google is thus in a perpetual battle to out-engineer sites that devise technical and social schemes to generate more clicks than real users actually produce.
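Click-fraud detection in practice is a closely guarded, constantly evolving discipline; the sketch below is only a toy heuristic (with invented names, and in no way Google's actual method) showing the simplest possible signal, an implausibly high click count from a single source on a single ad:

```python
from collections import Counter

def flag_suspicious_clicks(click_log, threshold=10):
    """Toy heuristic: flag any source that clicks the same ad
    more than `threshold` times in the log window."""
    counts = Counter((src, ad) for src, ad in click_log)
    return {src for (src, ad), n in counts.items() if n > threshold}

# One source hammers ad-42 fifty times; another clicks it once.
log = [("203.0.113.7", "ad-42")] * 50 + [("198.51.100.2", "ad-42")]
print(flag_suspicious_clicks(log))  # {'203.0.113.7'}
```

Real systems weigh many more signals (click timing, conversion behavior, device fingerprints), precisely because simple thresholds like this one are easy for fraudsters to stay under.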

A different kind of fraud riddles another kind of search site—e-commerce storefronts that host consumer reviews. Companies with visible user review sections, such as Amazon, Best Buy, and Wal-Mart, each host what are surely thousands if not millions of fake reviews generated by interested companies. In October of 2015, for instance, Amazon filed a lawsuit in the United States against over 1,000 people, alleging that each had generated a substantial number of fake reviews (Gani, 2015). According to the suit, these reviewers did so in order to get paid by the companies whose products looked more desirable as a result. Amazon has a strong incentive to try to leverage whatever laws are at hand to discourage this practice, because serving as a searchable hub of product reviews is an important part of Amazon’s marketing strategy. Reducing or eliminating review fraud makes its searchable reviews more trusted and thus more valuable, directing still more search traffic toward the e-commerce giant. Other review sites face similar challenges and legal options; this includes those whose reviews discuss restaurants, hotels, home cleaning or repair services, and so on. Fraudulent sellers and fraudulent reviews are also a bedeviling problem for third-party markets such as eBay, which have a similar dual desire to limit fake reviews in order to prevent unhappy buyers and to preserve their sites’ trustworthiness as platforms for reviewing both sellers and products.


Thanks to ever-cheaper computing, networking, and recording capacity, it has never been easier to track the behavior of others, and people do not have a reasonable opportunity to opt out of this surveillance (Brunton & Nissenbaum, 2015, § 3.3). Correspondingly, the likelihood that any given action will be tracked has never been higher. This raises substantial, understandable anxieties for many users of digital technologies, and it is also the reason for substantial—and, in some cases, successful—pushes to implement additional regulations of private actors in order to protect user privacy. It has also led to concerns about state surveillance and use of digital data in policing and intelligence gathering. Thus, the quest for privacy protections can include both concerns that the state is doing too much and that it is doing too little.

State surveillance of digital data was, for years, the focus of relatively little public discussion, but it became a topic of heated public debate in 2013, when former U.S. National Security Agency (NSA) contractor Edward Snowden leaked tens of thousands of internal NSA documents describing the agency’s activities. Working with documentary filmmaker Laura Poitras and journalist Glenn Greenwald, who broke the story in the London newspaper The Guardian, Snowden revealed a state surveillance apparatus that collects and analyzes enormous amounts of information. Such collection includes data about telephone calls (landline and cellular), web browsing, social media, Internet search, and more (Greenwald, 2014).

The NSA and its supporters point out that the data collected by the agency is “metadata,” meaning that the agency does not routinely eavesdrop on the actual messages that are being sent in private communication. This means, for instance, that the agency is usually not examining the contents of actual messages such as emails, social media messages, phone calls, and so on. For people located in the United States, the agency claims that it does not eavesdrop on such messages without first obtaining a warrant. Even when all persons are abroad, and the law therefore does not require a warrant, the agency claims it is only interested in the metadata as a means of narrowing its focus before examining the much smaller amount of data circulated by those the agency determines to be a potential threat or otherwise of strategic interest to the United States.

Critics of NSA surveillance argue that, even if the agency is only examining metadata, it is still sucking up and analyzing a tremendous amount of data, a good deal of it potentially sensitive (Schneier, 2015). This includes one’s landline and cellular calling records, including who calls whom, the time and duration of each call, and the location (including GPS data) of each person’s cell phone at any given time; the sender and recipient(s) of every email message, the IP address (which usually implies one’s location) of the sender and recipient(s), and the number and size of attachments; which Internet sites one visits, at what times, from which devices, and (by implication) much of one’s Internet search history; social media usage habits and connections, including time of day, interpersonal relationships and their connections, and which devices are used when, where, and for how long; and much more. Further, Snowden’s documents revealed that this surveillance of metadata is applied not only to persons of direct interest in terrorism investigations, but to those up to three degrees of separation, or three “hops” (shared phone calls, social media connections, email connections, etc.), away from such persons of interest. Depending on how many people are directly of interest and how “three hops” is interpreted, such a scope includes somewhere between several million people and virtually everyone with a digital identity across the entire globe. Supporters of the surveillance programs contend that this strategy is necessary to preserve public safety and defend the national interest.
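The reach of the "three hops" rule can be made concrete with a small sketch. Assuming, purely for illustration, a synthetic contact graph in which every person has 40 distinct contacts (real contact lists overlap heavily, which shrinks the total), a breadth-first search shows how quickly the circle expands:

```python
import itertools
from collections import deque

def within_hops(graph: dict, start, max_hops: int) -> set:
    """Breadth-first search: everyone reachable from `start`
    in at most `max_hops` hops (excluding `start` itself)."""
    seen = {start}
    frontier = deque([(start, 0)])
    reached = set()
    while frontier:
        node, depth = frontier.popleft()
        if depth == max_hops:
            continue
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                reached.add(neighbor)
                frontier.append((neighbor, depth + 1))
    return reached

# Build a synthetic contact tree: each person has 40 distinct contacts.
CONTACTS = 40
counter = itertools.count(1)
graph = {}

def add_contacts(node, depth):
    if depth == 0:
        return
    graph[node] = [next(counter) for _ in range(CONTACTS)]
    for n in graph[node]:
        add_contacts(n, depth - 1)

add_contacts(0, 3)
print(len(within_hops(graph, 0, 1)))  # 40: direct contacts only
print(len(within_hops(graph, 0, 3)))  # 65640: 40 + 40**2 + 40**3
```

Starting from even a few thousand persons of direct interest, the same multiplication reaches into the millions, which is precisely the critics' point.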

Since much of this surveillance strategy captures a great deal of data that is handled by search providers, it is a topic with major implications for these providers. Snowden’s revelations have sparked several changes among service providers, including large providers of search services. Perhaps most notably among these, large-scale providers have begun encrypting their communication between servers at different locations; it is widely believed that, before this, the NSA was eavesdropping on the unencrypted messages as they passed between servers along ultra-high-bandwidth, long-distance fiber optic connections. Ironically, had the NSA employed better encryption of its own documents when Snowden was a contractor, his leak might have been averted (Strohm, 2014).

It is also likely that there was or still is a court battle about whether search data (and similar Internet data) can be subjected to such surveillance under U.S. law. The public, however, does not and may never have access to the record of this court battle. Since the passage of the 1978 Foreign Intelligence Surveillance Act, such hearings take place before the Foreign Intelligence Surveillance Court (FISC, also called the FISA Court). This court’s proceedings are classified, so the public knows almost nothing about the court’s rulings—except, of course, for what has been leaked. Thus, this part of the debate over Internet search policy is likely happening but almost certain to occur mostly away from public view (Schneier, 2015, pp. 171–77).

Governments are not the only threat to online privacy, and search providers also have a range of potential policy questions to address that relate to private actors revealing private information about other private actors. On this count, search engines in the United States mostly have the option to remove or not remove results as they see fit. For instance, Google makes it clear that it will remove content when legally required, but that it also voluntarily helps remove sensitive information such as government identification numbers (such as Social Security numbers), images of one’s signature, and unwelcome images of one’s nude body (Google, n.d. a). Otherwise, unsavory search results are included, and even these efforts by Google to remove highly sensitive content are entirely discretionary and not at all mandated by law. The same laissez-faire regulatory stance is also in play when it comes to the use of personal information by online firms to conduct marketing research, targeted advertising, and so on. Search firms’ attorneys have ensured that the companies’ right to do so is included in the end-user license agreements—the binding contracts that govern the relationships between users and service providers. The provisions of these agreements that permit the collection and analysis of user data have, to date, faced no serious legal challenge in the United States.

In contrast, the EU has what is widely regarded as the strongest set of privacy rules available anywhere in the world. Of particular relevance for search engines is the “right to be forgotten” online. This was cemented in a 2014 case (Google Spain SL v. Costeja, 2014), which held that a European citizen can, in some circumstances, demand that a search engine remove results that reveal personal information. Some of the relevant conditions include:

Private persons will have the right to delete links to their own postings and repostings by third parties. They will have a right to delete links to postings created by third parties upon proof that the information serves no legitimate purpose other than to embarrass or extort payment from the data subject. Public officials and public figures will have a right to remove links to their own postings and re-postings by third parties, but not postings about them by third parties, unless the third party was acting with actual malice and the posting does not implicate the public’s right to know. In addition, all right to be forgotten requests will be subject to a general exemption for the public’s right to know.

(Rustad & Kulevska, 2015, p. 354)

Less specifically targeted at search engines but still affecting them is the right of European citizens to know what data media companies have collected about them. In 2011, Austrian law student Max Schrems leveraged this right to reveal just how much data Facebook was collecting about its users. He asked for all of the data the company had collected about him, and it sent him a CD with over 1,200 pages of data (Chander, 2012, p. 1825). Search companies can thus be expected to turn over similarly comprehensive data to European citizens upon request.

The contrast between strong EU privacy protections and comparatively weaker ones in other countries also creates potential concerns for Internet firms that seek to move data across borders. In 2015, with Schrems as the plaintiff, the EU Court of Justice ruled that data can no longer be shared as freely from the EU to the United States (Schrems v. Data Protection Commissioner). Under a 2000 international safe harbor agreement, companies had been broadly permitted to move data—even data about European citizens—between the United States and Europe, on the belief that the United States provided adequate protections of user privacy. The Schrems ruling, however, came in light of the revelations by Edward Snowden about U.S. surveillance practices, implying that European regulators are now somewhat less comfortable entrusting their citizens’ data to the privacy regulations available in the United States.

Ranking and Reputation, Market Power, and Neutrality

Nearly every individual, group, or company that can be included in search results has some reason to worry about those results—whether their ranking is higher than their competitors’, whether flattering portrayals outweigh unflattering ones, whether and how their content is included in a search index in the first place, and so on. Thus, major search engines are subject to regular criticism by all kinds of actors who allege unfair, inaccurate, or misleading results. In the United States, these complaints carry little legal weight and thus present no major threat to operators; among other cases, Zhang v. Baidu (2014) is especially clear (see the section “Internet Search, Free Expression, and Censorship”). Search engine operators are widely viewed as publishers who have the right to include, exclude, and rank sites as they (and their algorithms) see fit. This protection for search operators also applies broadly to actions alleging that the search engine participated in libel by linking to libelous speech. Under U.S. statutory and case law, search firms are virtually untouchable on this count; unless a search company authors the content itself, it is not considered a publisher and thus utterly escapes liability for the words of others that appear in and are more visible thanks to search results (Communications Decency Act, 1996, 47 U.S.C. § 230). Even in the United Kingdom, which is widely regarded as having the most stringent protections against defamation anywhere in the world, search engine providers have just a bit more to worry about (than in the United States) when it comes to potentially libelous claims that are returned in their search results. The 2013 Defamation Act sets up a notice-and-takedown regime for libel (Defamation Act 2013 (UK), 2013, § 5) that is similar to the copyright system created by the U.S. Digital Millennium Copyright Act (see the section “Copyright”).
The Ministry of Justice clarifies, however, that this does not apply to search engines, meaning that search providers are not even expected to take down links upon request (Ministry of Justice (UK), 2014). While not exactly a matter of libel law, European protections for the right to be forgotten (see the section “Privacy”) set up a major potential avenue for the protection of one’s reputation. Citizens who believe their reputations are unfairly maligned are thus most likely to pursue remedies under that regime, where possible.

A related concern is the reputation and market effect of search ranking and placement. An entire industry has sprung up around search engine optimization (SEO), with many professionals and firms promising to improve placement in rankings, suppress links that feature negative reviews, and more. For the most part, search firms are immunized from legal concerns related to such placements, and their response to such strategies has largely been to modify their algorithms to reduce the extent to which such deliberate efforts can change search rankings. This sets up a cat-and-mouse game in which both search algorithms and SEO strategies constantly evolve, with the latter fighting to retain the ability to affect search rankings and the former fighting to limit the extent to which search results can be gamed.

Larger online players often bristle at their search placement as well, and this can and sometimes does reach the level of an explicit antitrust action. In particular, Google’s dominance in the search market, globally and in most individual industrialized countries, has given rise to a range of complaints from other online content companies that Google is leveraging its dominance in search to unfairly benefit its other online services such as maps, email, online comparison shopping, image searches, and more. Google’s search dominance has certainly driven a great deal of the traffic for the company’s other products, but it is far from settled whether this constitutes an illegal use of market power. The U.S. Federal Trade Commission conducted an investigation, but closed it in 2013 without taking any action (Morris, 2013). The European Commissioner responsible for competition policy also conducted an investigation, raising concerns serious enough that Google proposed several remedies to address the Commission’s critiques of its use of market power. “The parties were unable to come to terms, and the European Commission issued a formal Statement of Objections to Google in April 2015” (Hyman & Franklyn, 2015, n.p.). This call to curtail the ability of dominant search providers (and really, Google’s ability) to make algorithmic and design choices free from regulatory oversight is often called “search neutrality” (Manne & Wright, 2012). The term is a play on the term network neutrality (the next topic of discussion in this section). Those who oppose network neutrality sometimes invoke the concept of search neutrality as a turnabout-is-fair-play argument (or even a reductio ad absurdum) against the search operators who generally support network neutrality.

A parallel area of law also concerns search engines and deals with fairness, neutrality, and the concentration of market power: network neutrality and related policy questions. This area of law and policy concerns whether the telecommunications firms that operate the infrastructure serving last-mile connections for end users will be permitted to decide which content gets relatively higher and lower priority (Grimmelmann, 2015, pp. 618–658). There is relatively widespread agreement that the Internet will be better off if last-mile service providers are not able to leverage their position in noncompetitive markets (as is most often the case) in order to sell such priority to the highest bidder, to favor their own content over that of competitors for nakedly economic reasons, or to advance one political agenda over another. For instance, a networking provider that also owns a portfolio of media properties could make its own media properties appear much more rapidly on its Internet subscribers’ computers by slowing down competitors’ content. Scholars disagree, however, about the extent to which ISPs have engaged in such discrimination and whether they will do (more of) this going forward, whether the situation is bad enough to warrant regulation in general, and whether and how such a regulatory scheme can be created to respond sensibly, nimbly, and proportionately (Nuechterlein & Weiser, 2013).

The network neutrality debate is not generally a debate about search engines, but it is very much one that concerns them. Even with relatively basic pages delivered for each search query, the sheer volume of search traffic makes search engines a substantial share of the data delivered to the typical end user. Search providers are therefore wary of a policy situation in which regulators would have little or no capacity to rein in potential discrimination by last-mile ISPs who provide infrastructure that delivers Internet connectivity to home users and small businesses. In the United States, the policy component of this debate has mostly followed in the wake of Tim Wu’s article, “Network Neutrality, Broadband Discrimination” (Wu, 2003), and observers generally agree that the scholar most closely associated with opposition to network neutrality policy is Christopher Yoo (Wu & Yoo, 2007). The debate started much later in Europe, and much of that debate has borrowed heavily from the arguments and concerns in the U.S. debate (Sluijs, 2010).

In a 2002 ruling, the FCC classified broadband connectivity under a section of the 1996 Telecommunications Act that leaves the agency with almost no regulatory powers (Declaratory Ruling and Notice of Proposed Rulemaking, 2002). The agency concluded that broadband is an “information service,” and because such services are almost completely unregulated under the statute, this decision foreclosed the possibility of much if any regulation. The decision was upheld by the Supreme Court as an action that was permitted (but not required) by the statute (National Cable & Telecommunications Assn. v. Brand X Internet Services, 2005). In 2015, the FCC under the Obama administration reversed course, reclassifying broadband as a “telecommunications service,” a category that admits a wide range of potential regulations, and promised to use this reclassification as a vehicle for preventing discriminatory actions on the part of broadband providers (In re Protecting & Promoting the Open Internet, 2015). Telecommunications firms such as Verizon have sued to block this decision, with the outcome still pending at the time of writing. If the FCC’s ruling is upheld, this is generally viewed as a victory for search engines, which will be less dependent on the market power of broadband service providers.


The vast majority of scholarship on Internet search consists of the application of broader legal theories and principles (Grimmelmann, 2015). This means the evolution in relevant scholarship is largely a combination of two evolutions—which topics are covered, and how those topics are discussed—reducing the ability to identify overarching trends. A few vectors are worth noting, however. One is the move away from Internet exceptionalism—the belief that life, and thus policy, online can, must, or should be substantially different from offline existence. This is best encapsulated by John Perry Barlow’s (1996) “A Declaration of the Independence of Cyberspace.” It begins, “Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.” A somewhat less hyperbolic version of this same sentiment infuses much of the legal scholarship of the time, including large servings of technological determinism, optimism that the Internet will almost inherently be equitable and liberating, and righteous indignation that industries and states would dare to try to tame the Internet—especially via censorship and copyright regulation.

The exceptionalist paradigm was short-lived. In the preface to the second edition of his groundbreaking book Code, Lawrence Lessig (2006) writes:

The first edition of this book was published in 1999. It was written in a very different context, and, in many ways, it was written in opposition to that context. As I describe in the first chapter, the dominant idea among those who raved about cyberspace then was that cyberspace was beyond the reach of real-space regulation. Governments couldn’t touch life online. And hence, life online would be different, and separate, from the dynamic of life offline. Code v1 was an argument against that then common view.

In the years since, that common view has faded. The confidence of the Internet exceptionalists has waned. The idea—and even the desire—that the Internet would remain unregulated is gone.

(p. ix)

In just a decade, the collective wisdom shifted dramatically, and it is not an exaggeration to say that the material world’s material impact on the Internet made the shift all but inescapable. The birth and explosive growth of the search engine industry, and Google in particular, was a major part of this transition. In 1996, there was still no good way to search the bulk of the (already substantial) material on the web, and the URL had not yet been registered; by 2006, Google was a publicly traded company worth over $100 billion. Google had a Washington, DC lobbying office by 2005—and, if anything, opening one earlier would have served the company’s interests (Mohammed & Goo, 2006). While Internet firms were young (versus weary) and built on silicon (versus steel), it became undeniable that they, too, could be giant and had to deal with the flesh-and-blood world. Thus, just a few years into the new millennium, Internet exceptionalism was already mostly a thing of the past.

Another noteworthy trend, of more recent vintage, is a growing distance between many scholars and the policy positions staked out by search firms. This is largely a function of the evolution of topics of concern to legal scholars. Much of the early scholarship on Internet law, coming at roughly the turn of the millennium, centered on issues where scholars and search companies were likely to agree, such as copyright, trademark, defamation, freedom from government invasions of privacy, and freedom of expression (Goldsmith, 1998; Johnson & Post, 1996; Lessig, 1999). For instance, on copyright, most scholars have taken positions that call for substantially reduced copyright protections, or they have spoken out against the increases in copyright protections proposed by content companies such as music and motion picture producers. (This is true for most of those cited herein and literally hundreds more.) Since this aligns with the position of search engines, there was a natural alliance. The same is true for questions of freedom of expression; when the web was first becoming a mainstream medium, there were efforts such as the U.S. Communications Decency Act (1996) to limit access to indecent content such as pornography. Scholars mostly panned such efforts (Gey, 1998; Mailland, 2001; Ross, 2000), so again there was a natural alliance with search engines’ interests. Even then, this alliance was generally understood as provisional, and its limits have become apparent as other issues have moved to the center of scholarly attention.

The topic of privacy has gone from relatively under-studied to one of increasingly central importance. It is also an area where scholarship is no longer so neatly aligned with the interests of search engines. While state surveillance continues to be a major concern among scholars, concern about private-sector collection, warehousing, analysis, and marketing of private information is also growing (Kesan, Hayes, & Bashir, 2016; Ohm, 2015; Schneier, 2015). Privacy is just one example; as copyright and freedom of expression lose their central place in the study of Internet law, more fissures have opened between scholars and search engines across additional areas such as economic and employment policy, contract law (including critiques of end user license agreements), and more. The collective weight of legal scholarship is still often on the same side as major search companies in important policy debates, but it is now also increasingly likely that such scholarship cuts against these interests.

Another major shift is the growing interest among Internet law scholars in questions about democratic accountability and process. These are generally broached as they affect the areas of law that most directly concern the Internet generally and search specifically, but some scholars have turned this focus into a broader political critique. After years of writing about copyright law, for instance, famed Harvard Law professor Lawrence Lessig became convinced that copyright policy is being made in a misguided way due to what he calls the institutional corruption of the U.S. policy process—and not, as he had formerly assumed, due to a lack of understanding on the part of policymakers (Lessig, 2011). This concern is so sincerely held that Lessig even led an unsuccessful bid for the Democratic Party’s nomination in the 2016 presidential election.

Similarly, Tim Wu of Columbia Law School, a highly visible scholar of Internet law and policy, ran for the 2014 Democratic nomination for Lieutenant Governor of New York. Wu ran as the running mate of Fordham Law professor Zephyr Teachout—who, as a scholar of policy process and outspoken critic of institutional corruption, was in large part running for Governor to make a deliberate point about policy process. Wu himself has also been quite critical of the political process that leads to communication law and policy outcomes. For example, in his popular book The Master Switch (Wu, 2010), he writes, “It has been the aim of this book to show that our information industries—the defining business ventures of our time—have from their inception been subject to the same cycle of rise and fall, imperial consolidation and dispersion, and that the time has come when we must pay attention” (p. 299). To a large extent, this book casts Google as the hero against the forces that seek to close off the user experience (Apple) or consolidate the powers to create and deliver content (cable companies and content companies, especially as symbolized by the merger of Comcast and NBC Universal). Other scholars are less sanguine about the role Google plays in society, such as Siva Vaidhyanathan (2012) in his book The Googlization of Everything: (And Why We Should Worry). In any case, these scholars and many more have taken up the call to take a more critical look not only at the policies of state actors, but also at the policies and behaviors of Internet companies and how these can present potential problems for users and society.

This critical turn also involves more scrutiny of the policymaking process that leads to policy outcomes as they affect the development of new technologies. In their introduction to a special issue of The Information Society, Milton Mueller and Becky Lentz (2004) call for just such a partial redirection of the efforts of communication law and policy scholarship. They write:

Typically, policy research focuses on how regulators, governments, and public policies shape communication–information industries and social practices. This special issue tried to reverse that equation and investigate what our call for papers termed “the social determinants of public policy”—that is, how social practices, long-term socioeconomic changes, cultural norms, and the interest groups engaged in communication and information influence the development of laws, regulations, and policies.

(p. 155)

While this does not represent the first time communication law and policy scholars had taken this approach (Robert McChesney is particularly associated with this paradigm), it does roughly mark the beginning of a shift toward a more sustained focus on the backstory of policy outcomes among fellow scholars. Providers of online search are strewn throughout the stories that have thereby been told, though as is true throughout, the focus is generally on the specific area of law being studied—and the broad range of actors that play a role in policy outcomes—rather than on search providers specifically.

Primary Sources

In looking for primary sources of law, one must begin in the jurisdiction of interest, which means that no brief list of primary sources could possibly be complete for a global audience. A particularly valuable source of primary U.S. case law is the casebook maintained by James Grimmelmann (2015), in its 5th edition as of this writing. In addition to its high quality, it is available online on a pay-what-you-wish basis. The author is not aware of any similarly comprehensive casebook for other jurisdictions, but an especially helpful research guide for European Internet policy is the Research Handbook on EU Internet Law, edited by Andrej Savin and Jan Trzaskowski (2014). Either work would be an especially useful place to start for researchers interested in related issues in each jurisdiction.

Further Reading

The following works, some cited above and some additional, are especially fruitful starting places for researchers:

Center for Information Technology Policy (Princeton University). (n.d.). Freedom to tinker: Research and expert commentary on digital technologies in public life.

Chadwick, A., & Howard, P. N. (Eds.). (2009). Routledge handbook of Internet politics. London: Routledge.

Craig, B. (2012). Cyberlaw: The law of the Internet and information technology. Upper Saddle River, NJ: Pearson Education/Prentice Hall.

Goldman, E. (Ed.). (n.d.). Technology & marketing law blog.

Grimmelmann, J. (2007). The structure of search engine law. Iowa Law Review, 93(1), 1–63.

Grimmelmann, J. (2013). Speech engines. Minnesota Law Review, 98, 868–952.

Grimmelmann, J. (2015). Internet law: Cases and problems (5th ed.). Oswego, OR: Semaphore Press.

Harris, D. P. (2015). Time to reboot? DMCA 2.0. Arizona State Law Journal, 47, 801–855.

Lane, J., Stodden, V., Bender, S., & Nissenbaum, H. (2014). Privacy, big data, and the public good: Frameworks for engagement. New York: Cambridge University Press.

Mueller, M. (2010). Networks and states: The global politics of Internet governance. Cambridge, MA: MIT Press.

Nuechterlein, J. E., & Weiser, P. J. (2013). Digital crossroads: Telecommunications law and policy in the Internet age (2d ed.). Cambridge, MA: MIT Press.

Pacifici, S. I. (Ed.). (n.d.). beSpacific: Accurate, focused research on law, technology and knowledge discovery since 2002.

Rustad, M. (2013). Global Internet law in a nutshell (2d ed.). St. Paul, MN: West Academic.

Savin, A., & Trzaskowski, J. (2014). Research handbook on EU Internet law. Cheltenham, U.K.: Edward Elgar.

Schneier, B. (2015). Data and Goliath: The hidden battles to collect your data and control your world. New York: W. W. Norton.

Vaidhyanathan, S. (2012). The Googlization of everything: (And why we should worry). Berkeley: University of California Press.


References

Authors Guild v. Google, Slip Op. No. 13-4829 (2nd Cir. October 16, 2015).

Authors Guild, Inc., v. HathiTrust, 755 F.3d 87 (2nd Cir. 2014).

Barlow, J. P. (1996, February 8). A declaration of the independence of cyberspace. Electronic Frontier Foundation.

Brunton, F., & Nissenbaum, H. (2015). Obfuscation: A user’s guide for privacy and protest. Cambridge, MA: MIT Press.

Chander, A. (2012). Social networks and the law: Facebookistan. North Carolina Law Review, 90, 1807–1844.

Christou, G., & Simpson, S. (2007). Gaining a stake in global Internet governance: The EU, ICANN, and strategic norm manipulation. European Journal of Communication, 22(2), 147–164.

The Common Law and Civil Law Traditions. The Robbins Collection, University of California at Berkeley.

Communications Decency Act of 1996, Pub. L. No. 104-104 (1996).

Custers, B. (2016). Click here to consent forever: Expiry dates for informed consent. Big Data & Society, 3(1), 1–6.

Craig, B. (2012). Cyberlaw: The law of the Internet and information technology. Upper Saddle River, NJ: Pearson Education/Prentice Hall.

Declaratory Ruling and Notice of Proposed Rulemaking (Cable Modem Declaratory Ruling), 17 FCC Rcd 4798 (2002).

Defamation Act 2013 (UK), 2013 c. 26.

DeNardis, L. (2014). The global war for Internet governance. New Haven, CT: Yale University Press.

Dinwoodie, G. B. (2014). Secondary liability for online trademark infringement: The international landscape. Columbia Journal of Law & the Arts, 37, 463–501.

Dutton, W. H., & Peltu, M. (2007). The emerging Internet governance mosaic: Connecting the pieces. Information Polity, 12(1–2), 63–81.

Elkin-Koren, N. (2014). After twenty years: Copyright liability of online intermediaries. In S. Frankel & D. J. Gervais (Eds.), The evolution and equilibrium of copyright in the digital age (pp. 29–51). New York: Cambridge University Press.

European Commission. (2000). E-Commerce Directive, Directive 2000/31/EC. EUR-Lex: Access to European Law.

European Commission. (2001). Copyright Directive, Directive 2001/29/EC.

Fair Housing Council of San Fernando Valley v., LLC, 521 F.3d 1157 (9th Cir. 2008).

Field v. Google, 412 F.Supp.2d 1106 (D. Nev. 2006).

Gani, A. (2015, October 18). Amazon sues 1,000 “fake reviewers”. The Guardian.

Gasser, U. (2006). Regulating search engines: Taking stock and looking ahead. Yale Journal of Law & Technology, 8, 201–234.

Gey, S. G. (1998). Fear of freedom: The new speech regulation in cyberspace. Texas Journal of Women & the Law, 8, 183–206.

Goldman, E. (2013, February 19). With its Australian court victory, Google moves closer to legitimizing keyword advertising globally. Technology & Marketing Law Blog.

Goldsmith, J. (1998). Regulation of the internet: Three persistent fallacies. Chicago-Kent Law Review, 73, 1119–1130.

Goldstein, P., & Hugenholtz, P. B. (2013). International copyright: Principles, law, and practice. New York: Oxford University Press.

Google. (n.d. a). Removal Policies—Search Help.

Google. (n.d. b). Removing Content From Google—Legal Help.

Google Inc. v. Australian Competition and Consumer Commission, No. 1. High Court of Australia (February 6, 2013).

Google Spain SL v. Costeja, C-131/12. (E.C.R.I. May 13, 2014). InfoCuria.

Greenwald, G. (2014). No place to hide: Edward Snowden, the NSA, and the U.S. surveillance state. New York: Metropolitan Books.

Griffith, M. E. (2016). Downgraded to “Netflix and chill”: Freedom of expression and the chilling effect on user-generated content in Europe. Columbia Journal of European Law, 22, 355–381.

Grigoriadis, L. G. (2014). Comparing the trademark protections in comparative and keyword advertising in the United States and European Union. California Western International Law Journal, 44, 149–205.

Grimmelmann, J. (2007). The structure of search engine law. Iowa Law Review, 93(1), 1–63.

Grimmelmann, J. (2013). Speech engines. Minnesota Law Review, 98, 868–952.

Grimmelmann, J. (2015). Internet law: Cases and problems (5th ed.). Oswego, OR: Semaphore Press.

Harris, D. P. (2015). Time to reboot? DMCA 2.0. Arizona State Law Journal, 47, 801–855.

Herman, B. D. (2013). The fight over digital rights: The politics of copyright and technology. New York: Cambridge University Press.

Herman, B. D. (2015). Dissolving innovation in Meltwater: Copyright and online search. Journal of Information Policy, 5, 204–244.

Hyman, D. A., & Franklyn, D. J. (2015). Search bias and the limits of antitrust: An empirical perspective on remedies. Jurimetrics, 55(3), 339–379.

In re Protecting & Promoting the Open Internet, FCC 15-24 (Report and Order) (No. 14-28) (2015).

Johnson, D. R., & Post, D. G. (1996). Law and borders: The rise of law in cyberspace. Stanford Law Review, 48, 1367–1402.

Joined Cases C-236, C-237, & C-238/08, Google France SARL v. Louis Vuitton Malletier SA, Google France SARL v. Viaticum SA, Google France SARL v. CNRRH SARL (2009).

Kelly v. Arriba Soft Corp., 336 F.3d 811 (9th Cir. 2003).

Kesan, J. P., Hayes, C. M., & Bashir, M. (2016). A comprehensive empirical study of data privacy, trust, and consumer autonomy. Indiana Law Journal, 91, 267–352.

LaFrance, M. (2011). Copyright law in a nutshell (2d ed.). St. Paul, MN: West Academic.

Lessig, L. (1999). Code: And other laws of cyberspace (1st ed.). New York: Basic Books.

Lessig, L. (2006). Code (2.0). New York: Basic Books.

Lessig, L. (2011). Republic, lost: How money corrupts Congress—and a plan to stop it (1st ed.). New York: Twelve.

Mailland, J. (2001). Freedom of speech, the Internet, and the costs of control: The French example. NYU Journal of International Law & Policy, 33, 1179–1234.

Manne, G. A., & Wright, J. D. (2012). If search neutrality is the answer, what’s the question? Columbia Business Law Review, 2012(1), 151–239.

Manta, I. D., & Olson, D. S. (2015). Hello Barbie: First they will monitor you, then they will discriminate against you. Perfectly. Alabama Law Review, 67, 135–187.

Marsden, C. T. (2012). Internet co-regulation and constitutionalism: Towards European judicial review. International Review of Law, Computers & Technology, 26, 211–228.

Ministry of Justice (UK). (2014). Complaints about defamatory material posted on websites: Guidance on Section 5 of the Defamation Act 2013 and Regulations.

Mohammed, A., & Goo, S. K. (2006, June 7). Google is a tourist in D.C., Brin finds. Washington Post, p. D1.

Morris, P. S. (2013). Solving Google’s antitrust dilemma: Cognitive habits and linking rivals when there is large market share in the relevant online search market. Wake Forest Journal of Business and Intellectual Property Law, 13, 303–338.

Mueller, M. L. (2010). Networks and states: The global politics of Internet governance. Cambridge, MA: MIT Press.

Mueller, M., & Lentz, B. (2004). Revitalizing communication and information policy research. The Information Society, 20(3), 155–157.

Nathenson, I. S. (2013). Super-intermediaries, code, human rights. Intercultural Human Rights Law Review, 8, 19–172.

National Cable & Telecommunications Assn. v. Brand X Internet Services, 545 U.S. 967 (2005). Legal Information Institute, Cornell University Law School.

Nuechterlein, J. E., & Weiser, P. J. (2013). Digital crossroads: Telecommunications law and policy in the Internet age (2d ed.). Cambridge, MA: MIT Press.

Ohm, P. (2015). Sensitive information. Southern California Law Review, 88, 1125–1196.

Perfect 10, Inc. v., Inc., 508 F.3d 1146 (9th Cir. 2007).

Potzlberger, F. (2013). Google and the thumbnail dilemma: Fair use in German copyright law. I/S: A Journal of Law and Policy for the Information Society, 9, 139–169.

Proskine, E. A. (2006). Google’s technicolor dreamcoat: A copyright analysis of the Google Book Search Library Project. Berkeley Technology Law Journal, 21(1), 213–239.

Quinn, D. J. (2014). Associated Press v. Meltwater: Are courts being fair to news aggregators? Minnesota Journal of Law, Science, & Technology, 15(2), 1189–1220.

Riordan, J. (2013). The liability of Internet intermediaries. Oxford, U.K.: Oxford University Press.

Ross, C. J. (2000). Anything goes: Examining the state’s interest in protecting children from controversial speech. Vanderbilt Law Review, 53, 427–524.

Rustad, M. (2013). Global Internet law in a nutshell (2d ed.). St. Paul, MN: West Academic.

Rustad, M. L., & Kulevska, S. (2015). Reconceptualizing the right to be forgotten to enable transatlantic data flow. Harvard Journal of Law & Technology, 28, 349–593.

Samuelson, P. (2015). Possible futures of fair use. Washington Law Review, 90, 815–868.

Savin, A., & Trzaskowski, J. (2014). Research handbook on EU Internet law. Cheltenham, U.K.: Edward Elgar.

Schneier, B. (2015). Data and Goliath: The hidden battles to collect your data and control your world. New York: W. W. Norton.

Schrems v. Data Protection Commissioner, C‑362/14 (October 6, 2015). InfoCuria.

Sluijs, J. P. (2010). Network neutrality between false positives and false negatives: Introducing a European approach to American broadband markets. Federal Communications Law Journal, 62, 77–117.

Stop Online Piracy Act, H.R. 3261, 112th Cong. (2011).

Strohm, C. (2014, February 28). Encryption would have stopped Snowden from using secrets.

Tetley, W. (1999). Mixed jurisdictions: Common law vs. civil law (codified and uncodified). Louisiana Law Review, 60, 677–737.

Tuneski, A. G. (2011). Hey, that’s my name! Trademark usage on the Internet. Franchise Law Journal, 31, 203–214.

Tushnet, R. (2015). Content, purpose, or both. Washington Law Review, 90, 869–892.

Vaidhyanathan, S. (2012). The Googlization of everything: (And why we should worry). Berkeley: University of California Press.

Weaver, A. B. (2012). Aggravated with aggregators: Can international copyright law help save the news room? Emory International Law Review, 26(2), 1161–1200.

Wu, T. (2003). Network neutrality, broadband discrimination. Journal on Telecommunications and High Technology Law, 2, 141–176.

Wu, T. (2010). The master switch: The rise and fall of information empires (1st ed.). New York: Alfred A. Knopf.

Wu, T., & Yoo, C. (2007). Keeping the Internet neutral? Tim Wu and Christopher Yoo debate. Federal Communications Law Journal, 59, 575–592.

Zhang v. Inc., 10 F. Supp. 3d 433 (S.D.N.Y. 2014).