
Digital Media Ethics

Summary and Keywords

Since the early 2000s, Digital Media Ethics (DME) has emerged as a relatively stable subdomain of applied ethics. DME seeks nothing less than to address the ethical issues evoked by computing technologies and digital media more broadly, such as cameras, mobile phones and smartphones, GPS navigation systems, biometric health monitoring devices, and, eventually, “the Internet of things,” as these have developed and diffused into more or less every corner of our lives in the (so-called) developed countries. DME can be characterized as demotic—“of the people”—in three important ways. One, in contrast with specialist domains such as Information and Computing Ethics (ICE), it is intended as an ethics for the rest of us—namely, all of us who use digital media technologies in our everyday lives. Two, these manifold contexts of use dramatically expand the range of ethical issues computing technologies evoke, well beyond the comparatively narrow circle of issues confronting professionals working in ICE. Three, while drawing on the expertise of philosophers and applied ethicists, DME likewise relies on the ethical insights and sensibilities of additional communities, including (a) the multiple communities of those whose technical expertise comes into play in the design, development, and deployment of information and communication technology (ICT); and (b) the people and communities who use digital media in their everyday lives.

DME further employs both ancient ethical philosophies, such as virtue ethics, and modern frameworks of utilitarianism and deontology, as well as feminist ethics and ethics of care: DME may also take up, for example, Confucian and Buddhist approaches, as well as norms and customs from relevant indigenous traditions where appropriate. The global distribution and interconnection of these devices means, finally, that DME must also take on board often profound differences between basic ethical norms, practices, and related assumptions as these shift from culture to culture. What counts as “privacy” or “pornography,” to begin with, varies widely—as do the more fundamental assumptions regarding the nature of the person that we take up as a moral agent and patient, rights-holder, and so on. Of first importance here is how far we emphasize the more individual vis-à-vis the more relational dimensions of selfhood—with the further complication that these emphases appear to be changing locally and globally.

Nonetheless, DME can now map out clear approaches to early concerns with privacy, copyright, and pornography that help establish a relatively stable and accepted set of ethical responses and practices. By comparison, violent content (e.g., in games) and violent behavior (cyber-bullying, hate speech) are less well resolved. Still, as with the somewhat more recent issues of online friendship and citizen journalism, an emerging body of literature and analysis points to initial guidelines and resolutions that may become relatively stable. Such resolutions must be pluralistic, allowing for diverse applications and interpretations in different cultural settings, so as to preserve and foster cultural identity and difference.

Of course, still more recent issues and challenges are in the earliest stages of analysis and efforts at forging resolutions. Primary issues include “death online” (including suicide websites and online memorial sites, evoking questions of censorship, the right to be forgotten, and so on); “Big Data” issues such as pre-emptive policing and “ethical hacking” as counter-responses; and autonomous vehicles and robots, ranging from Lethal Autonomous Weapons to carebots and sexbots. Clearly, not every ethical issue will be quickly or easily resolved. But the emergence of relatively stable and widespread resolutions to the early challenges of privacy, copyright, and pornography, coupled with developing analyses and emerging resolutions vis-à-vis more recent topics, can ground cautious optimism that, in the long run, DME will be able to take up the ethical challenges of digital media in ways reasonably accessible and applicable for the rest of us.

Keywords: privacy, copyright, information ethics, computing ethics, utilitarianism, deontology, virtue ethics, sexbots, social robots, carebots, social media, social networking sites, citizen journalism, digital media

Introduction: Digital Media Ethics—An Impossible Project?

Since the start of this century, Digital Media Ethics (DME) has emerged as a relatively stable territory at the crossroads between applied ethics, Information and Computing Ethics (ICE), professional ethics of several kinds (such as journalism ethics and research ethics), and, most recently, Machine Ethics or Robot Ethics (MRE). As we will see in the first section, DME develops out of an array of sources and origins: these range from the world’s oldest ethical (and political) philosophies through the emergence of computational technologies and, thereby, information and computing ethics. As computing technologies and digital media more broadly (e.g., cameras, mobile phones and smartphones, GPS navigation systems, wearables including biometric health monitoring devices, and, eventually, the Internet of things) have developed and diffused into more or less every corner of our lives in the (so-called) developed countries—so the ethical challenges and issues that once concerned primarily small professional communities (such as computer scientists and ICE philosophers) have expanded dramatically into a staggering range of ethical challenges and issues “for the rest of us.” DME is thus radically interdisciplinary: it must take on board methods, approaches, insights, findings, and reflections from an array of academic disciplines that otherwise very strongly tend to keep to themselves. These include, as we have started to see, disciplines such as philosophical and applied ethics, as well as the disciplines engaged with the design and development of information and communication technology (ICT), including Artificial Intelligence (AI) and (social) robots, beginning with software engineering and computer science. This exceptionally interdisciplinary background is thereby part and parcel of DME as demotic,1 as an ethics “for the rest of us.” This is to say that DME draws not only from applied ethics per se, but also from the ethical sensibilities and intuitions of the computing and engineering professionals who design and deploy digital devices—as well as of those who use and sometimes hack these devices as part of our everyday lives. In turn, DME works to make the contributions of professional philosophers, computer scientists, and practitioners from other relevant disciplines as clear, accessible, applicable, and thereby useful for persons of more or less every demographic and educational category across the globe who seek to enhance their ethical understanding and responsible usages of digital media.

All of this may suggest that DME is an impossible project. Indeed, a host of additional features of DME, explored below, adds further complications and apparent obstacles. At the same time, it is apparent that DME has established a certain measure of ground, stability, and at least modest success. This suggests that, while DME is certainly ongoing and unending as it takes on novel ethical challenges evoked by new technological developments and applications—it is not a quixotic endeavor. On the contrary, two examples—privacy and carebots—demonstrate that DME follows the larger pattern of ethics and technology: new technologies often initially outrun our extant ethical frameworks and resolutions—but given enough time and reflection, new approaches are developed that manage to resolve many new difficulties in useful and satisfactory ways.

To see how all of this is so, a working definition of DME is developed, followed by a review of the emergence of ICE and then MRE as defining and shaping much of the work in DME. This initial exploration highlights how any definitional boundaries are necessarily dynamic and frequently blurred—first of all, as these technologies advance and diffuse, they often open up new ethical challenges that require new approaches. For example, digital media have made possible the emergence of citizen journalism—the now commonplace practice of individuals or groups on the street uploading video, tweets, and so on surrounding an unfolding event, which then serve as primary sources for professional journalists and news organizations. Accordingly, DME has turned to the professional ethics of more traditional journalism to develop a new hybrid ethics for citizen journalists (Couldry, 2013; Ess, 2013, pp. 151–156).

The second section reviews a number of ethical frameworks that are frequently employed in efforts to analyze and resolve the ethical challenges and issues taken up in DME. These require further consideration of matters of selfhood and culture, along with the meta-ethical difficulties that result from seeing how these frameworks often derive from and correlate with diverse cultural traditions, norms, and practices. Not surprisingly, as digital media often implicate interactions and the impacts of these interactions that cross multiple national and cultural boundaries, it often happens that the diverse ethical frameworks correlative to diverse cultural and national domains lead to different analyses of and responses to a given ethical concern or dilemma. In particular, key culturally variable components of these frameworks are the basic assumptions regarding personhood, identity, and moral agency—ranging from more individual to more relational emphases. These diverse responses further require us to attend to the meta-ethical positions of monism, relativism, and pluralism.

The third section examines two specific issues as primary examples of contemporary DME analyses and approaches. The first is privacy, as both facilitated and challenged in multiple ways by digital technologies. The second is a look at the primary ethical arguments surrounding social robots as designed and deployed for therapy, warfare, and sex. The primary point in these analyses is to show how diverse ethical frameworks may be usefully applied to help clarify, if not fully resolve, some of the central ethical challenges in play in these examples.

Digital Media Ethics: Working Definition, Origins

The Emergence of Information and Computing Ethics

An overview of the characteristics of DME includes a working definition of DME as an ethics “for the rest of us.” While DME draws on any number of disciplines, it is centrally rooted in ICE and in MRE: a review of ICE and MRE clarifies the defining issues, approaches, and resources of DME. A number of specific issues in DME, beginning with privacy and copyright, are first explored in ICE and MRE, by way of several ethical frameworks, including deontology, consequentialism, and virtue ethics. The review of ICE leads directly to the next section on ethical frameworks and meta-ethics.

Digital Media Ethics: Working Definition

Digital Media Ethics may be understood as demotic, beginning with its radically interdisciplinary origins. DME begins in (a) ICE as a specific branch of applied ethics in philosophy (one that is strongly interdisciplinary, as it conjoins applied ethics with various branches of information and computer sciences). DME further draws on a range of more technical disciplines, such as computer and software engineering, to develop (b) informed understandings of the facilities and affordances of computing technologies, as coupled with (c) empirically informed insight into real-world uses, practices, and impacts (real and potential) of these technologies. Hence, DME requires the insights, methods, and findings of computer scientists, ICT designers, experts in AI, Big Data, and so on, along with those of social scientists who take up various methods (qualitative and quantitative) to discern actual impacts of these technologies. At the same time, DME absolutely rejects any suggestion that non-philosophers and non-computer scientists are somehow “ethical dopes” who will be inevitably lost without the guidance of highly trained professionals. Rather, DME rests on the Aristotelian view of human beings as acculturated in ethical ways since birth, and as having innate potentials for recognizing and coming to grips with ethical difficulties and demands—potentials that are realized and developed precisely in our specific practices, both as human beings per se and as practitioners in more specialized fields. On this view, it is not surprising that some of the first ethical analyses and responses to issues raised in the course of using digital media arose from communities of practice, that is, participants in USENET who were among the first to develop ethical guidelines called netiquette (Pfaffenberger, 1996). DME takes such basic responses and analyses as crucial starting points and important sources for further reflection and development that can helpfully inform and exploit the findings and insights offered by philosophers, social scientists, and computer professionals. Finally, these exceptionally extensive backgrounds aim towards an exceptionally extensive audience—nothing less than all who use digital media technologies in their everyday lives—whose ethical sensibilities and intuitions must likewise be taken on board in our ongoing reflections and debates. These manifold contexts of use dramatically expand the range of ethical issues computing technologies evoke well beyond the comparatively narrow circle of issues confronting professionals working in ICE.

Lastly, this demotic emphasis makes an important distinction between two possible understandings of DME as defined by digital media in turn. Manifestly, if the notion of a digital era is taken literally, every medium in such an era would be digital. If this is coupled further with the claim of Medium Theory that every technology is a media technology, beginning with speaking itself (Ong, 1988)—DME would be committed, in principle, to taking up nothing less than every ethical issue evoked in the contemporary world. This is the thrust and worry of those concerned with what is variously described as ambient intelligence or, more concretely, an impending Internet of things made up of more or less the entire range of items in our world, from shoes to refrigerators to every conceivable product for consumers as well as for industry, as these are increasingly fitted with sensors and other devices that communicate, in turn, via the internet (Rouvroy, 2008). The ethical challenges certain to unfold alongside these developments will be considerable indeed.

Happily, the demotic emphasis points towards a narrower definition of DME. To begin with, analogue media—and the analogue world more broadly—are very much still with us, despite the commonplace talk of the digital era and its parallels, such as the information age; indeed, some of us argue that, as human beings remain embodied and thereby analogue beings, it is more accurate to speak of a post-digital era, one that recognizes the “hegemony” of digital technologies while at the same time arguing that the digital and the analogue are conjoined and modulated in different ways in different contexts (Lindgren, 2017). At the same time, the term digital media, for most of us, refers to the devices of our everyday experiences and practices. Prominent examples begin with computers and computer/telephone networks in various forms, including their ever-more mobile versions in the form of tablets and smartphones, and extend to devices such as digital cameras, sound recorders, and playback devices for CDs, DVDs, and Blu-ray discs; (increasingly) radio and TV broadcasting and reception, and streaming services for music and videos; GPS technologies embedded not only in smartphones but also in cameras; a dizzying array of health-oriented devices—and so on. DME thus focuses in the first instance on the sorts of ethical challenges and issues that arise in conjunction with our everyday use of these more or less pedestrian digital media.

Information and Computing Ethics (ICE)—Digital Media Ethics (DME)—Machine Ethics/Robot Ethics

DME has its origins in ICE. ICE in turn is generally acknowledged to have begun with the work of Norbert Wiener (1950/1954), who is more broadly known as the father of cybernetics. ICE gradually developed through the 1950s to the 1980s, as computing technologies, including computer networking, rapidly progressed. There are good reasons to take James Moor’s 1985 paper, “What Is Computer Ethics?,” as the foundational work of the current phase of ICE as a branch of applied ethics (Miller & Taddeo, 2017). Moor points out that new possibilities of choice and action opened up by new computing technologies present ethical conundrums that confront us with conceptual muddles and policy vacuums as extant ethics and policy guidelines fail to offer adequate responses to these new possibilities: hence, new efforts at developing ethical frameworks and guidelines are required to come to grips with new ways of using and exploiting these technologies for good and for ill.

ICE, however, is largely oriented towards and undertaken by professionals—namely, comparatively few philosophers and computer professionals who jointly recognize these sorts of problems and who, by learning how to bridge their otherwise strongly separate disciplines, begin to establish positions and precedents regarding issues such as privacy and anonymity, computer crime and security, intellectual property and copyright.

These foundations are critical, first as they set the patterns and precedents for how philosophers and ethically informed computer professionals wrestle with the range of new ethical challenges evoked by computer technologies. For example, Wiener is significant not only for being the first to consider in a systematic way some of the large ethical (as well as social and political) problems associated with computing machinery; he also takes up virtue ethics as a primary source and framework for his ethical reflections. Specifically, Wiener highlights liberty in the motto of the French Revolution (liberté, egalité, fraternité) as “the liberty of each human being to develop in his freedom the full measure of the human possibilities embodied in him” (1954, p. 106). Our unfolding freedom in this way is central to the good life, as conceived in virtue ethics as a life of flourishing (Bynum, 2010). In the subsequent decades of ICE, much of the work instead takes up ethical frameworks that, especially at the time, were far more prevalent—namely, utilitarianism and deontology (see the definitions of these in the next section). In the past decade, however, especially as DME has emerged as an ethics aimed more broadly at the challenges not just of computer professionals but also of “the rest of us”—all who make use of digital technologies throughout our everyday lives—virtue ethics has again come to play a major role (for reasons we will explore more fully below). Virtue ethics, in turn, is often conjoined with feminist ethics and ethics of care, as these began in Western societies in the 1970s and 1980s. At the same time, virtue ethics is arguably the oldest and most widespread ethical framework—one that operates in what may be categorized as Western, Eastern, and still older indigenous societies, and in both ancient and modern times (Ess, 2013, p. 238ff.). Last, ICE takes up a range of specific problems and issues that are foundational for DME, beginning with matters of privacy and copyright.

As an ethics for the rest of us, DME began to emerge in the 1980s and 1990s, first as a consequence of the “PC revolution”—the introduction of personal computers in the 1980s. Increasingly, this revolution diffused computing technologies, including early forms of computer networking and computer-mediated communication (CMC) beyond the small circles of computer professionals. This diffusion rapidly accelerated in the 1990s, fueled by falling prices for computing devices and by the transformation of the Internet from a university and research-based network to an increasingly demotic network used ever more for everyday activities.

Obviously, this accelerated diffusion of computing technologies exposed more or less every person who used these devices to what is now a familiar array of ethical issues, beginning with privacy, copyright, freedom of expression versus potentially harmful expression and materials (such as pornography), surveillance, identity theft, and cyber-bullying, to name a few (cf. Conger & Loch, 1995). Equally obviously, as these ethical issues came to the fore for more and more people, more and more philosophers, alongside colleagues in an increasing range of disciplines, took up specific ethical issues for analysis and resolution (e.g., social science research ethics such as Ess & the Association of Internet Researchers, 2002; Kraut, Olson, Banaji, Bruckman, Cohen, & Cooper, 2004). At the same time, these early efforts were demotic in the sense that initial ethical responses and emerging guidelines were almost always developed by specific communities—with little or no contribution from philosophically trained ethicists. For example, the first efforts to develop “netiquette”—rules for discourse and discussion online—emerged in response to the ethical conundrums surrounding commitments to anonymity and freedom of expression online, versus the often disruptive, if not destructive, responses of some participants in the form of trolling and flame wars, responses facilitated precisely by online anonymity and initial hopes of fostering an entirely unlimited freedom of expression online (Pfaffenberger, 1996; see Tavani, 2013, pp. 6–9, for a brief history of what he prefers to call cyber-ethics). These examples are important, especially as they illustrate the largely successful processes of ethical responses emerging from the bottom up, that is, from the individual and collective ethical insights and sensibilities of the people involved, who in almost every instance have little to no formal training in philosophical ethics. This is a key feature of DME—namely, relying on the ethical sensibilities and insights of “the rest of us,” in contrast with (worst case) more theoretical approaches to ethics that work top-down, from extant frameworks and principles.

DME—as an effort to provide a more comprehensive set of ethical frameworks, possible resolutions, and guidelines for the many ethical issues confronted by users of digital media—has come into its own within the past decade or so. This development has been made possible, in part, as ICE has made significant progress—including the development of more comprehensive philosophical approaches to contemporary digital technologies that can be helpfully exploited by the rest of us who hope to move beyond more fragmentary, one-off responses to specific problems and issues. These parallel advances in ICE provide ever more comprehensive and appropriate philosophical frameworks for taking up the specific concerns of DME. Of primary importance here are the recent works of Luciano Floridi (2010), Peter-Paul Verbeek (2011), Shannon Vallor (2016), and Michel Puech (2016). Puech’s volume, titled The Ethics of Ordinary Technology, provides an extensive and sophisticated account of how digital technologies diffuse through our everyday lives, and so provides a critical contribution to DME as a demotic ethics. Puech’s volume and Vallor’s Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting (2016) stand as the most recent contributions to virtue ethics approaches in DME.

Indeed, one of the most striking developments in recent ICE is its parallel turn to virtue ethics and feminist ethics of care. A primary example here is the development of “networked systems ethics” (Zevenbergen, 2016). This ethical framework exemplifies the demotic, bottom-up emphasis of DME, as it results from a global, two-year project of evoking ethical sensibilities and frameworks from computer scientists and affiliated professionals engaged in networked systems research. We will explore the specific virtue ethics recommendations more fully below.

At the same time, the past decade has witnessed the emergence of “machine ethics” or “robot ethics” (e.g., Anderson & Anderson, 2011; Lin, Abney, & Bekey, 2012; Trappl, 2015; Wallach & Allen, 2009; Wallach & Asaro, 2017). We might initially think of robot ethics as something of a subfield of ICE. At the same time, however, robots—especially social robots, as we will see—evoke a range of familiar and novel ethical challenges: as material devices designed to replicate embodied human beings in a number of ways, robots further implicate an entire range of philosophical questions, beginning with our understandings of human identity and agency, the role of emotions in communication and ethical decision-making, and so on. These developments have led to the recent emergence of the field of “robo-philosophy” (e.g., Nørskov, 2016). Moreover, robots are increasingly part of our everyday lives—initially as vacuum cleaners or lawnmowers, but increasingly as social robots in various forms. Their further development and diffusion into our lives, most especially in the form of social robots, promises to accelerate dramatically over the next few decades. For these reasons, MRE now provides important resources for DME: and, as with ICE, robot ethics likewise shares a number of foci with DME as well (see Figure 1).


Figure 1. This Venn diagram provides an initial map of (only) some of the ethical topics and issues that are both shared within and distinct from the three domains of ICE, DME, and Machine Ethics/Robot Ethics. The ellipses indicate space for additional or future issues.

The list of critical issues has expanded accordingly. Tavani (2013), for example, takes up issues of free speech, anonymity, legal jurisdiction (for globally interconnected communication and commerce), and behavioral norms in virtual communities (pp. 6–9). Ess (2013, pp. 120–196) focuses on the ethical dimensions of citizen journalism and electronic democracy, friendship online, and violent content in games. Social robots, including carebots and sexbots, open up questions of robot and AI autonomy and rights, and related matters such as the ethical possibilities and limits of how human beings may best interact with them.

Ethical Frameworks and Meta-Ethical Considerations

This section briefly reviews some of the essential characteristics and features of the primary ethical frameworks in play in DME—namely, ethical egoism and utilitarianism as important versions of consequentialist approaches; deontology; and virtue ethics. This list is by no means complete, especially for DME as oriented towards globally shared and distributed media and correlative ethical issues: a more complete account includes feminist ethics and ethics of care, Confucian ethics, Buddhist ethics, and attention to African and indigenous traditions (see Ess, 2013, pp. 229–235, 245–252). In this context, however, the focus is restricted to virtue ethics. This is in part because of its origins and use in these diverse traditions (and still others, such as Hinduisms): virtue ethics is sufficiently extensive and representative of global traditions that it can appropriately serve as the occasion and primary example of the cross-cultural dimensions of DME. The manifold differences between diverse cultural approaches and traditions in ethics require examination of the three meta-ethical positions of relativism, monism, and pluralism.

Consequentialism: Ethical Egoism, Utilitarianism

As the name implies, these approaches proceed by seeking to develop a kind of cost-benefit analysis of the likely and possible consequences of a given ethical choice. In classical consequentialist theory, these choices are understood primarily in terms of the pleasure and/or pain they result in—whether exclusively physical (so Jeremy Bentham) or more inclusive of intellectual (and related psychological) pleasures (John Stuart Mill). A key question (and critical deficit) for consequentialist approaches is “consequences for whom?” So-called ethical egoists take the view that the only ethically relevant consequences of possible choices and acts are those that directly affect the given individual. Utilitarians, by contrast, seek to apply consequentialist approaches to larger groups. Either way, the actions or choices that maximize pleasure and minimize pain are the ethically preferred and legitimate ones. The well-known slogan of utilitarianism, “the greatest good for the greatest number,” aims at maximizing pleasure (both physical and intellectual) for a larger community, such as nation-states (Ess, 2013, p. 201ff.; Sinnott-Armstrong, 2015).
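The contrast between ethical egoism and utilitarianism can be put schematically in code. The following is a minimal, purely illustrative sketch: the parties and utility numbers are invented, and real consequentialist deliberation is of course far less mechanical.

```python
# A minimal sketch of the consequentialist "calculus" described above.
# The utility numbers are hypothetical: positive values stand for pleasure,
# negative values for pain.

def net_utility(outcomes, affected):
    """Sum pleasure (+) and pain (-) over the ethically relevant parties."""
    return sum(u for person, u in outcomes.items() if person in affected)

# Hypothetical consequences of one possible act for three parties:
outcomes = {"me": -2, "neighbor_a": 5, "neighbor_b": 4}

# The ethical egoist counts only the consequences for the agent ...
print(net_utility(outcomes, affected={"me"}))  # -2: the act is rejected

# ... while the utilitarian counts the consequences for everyone affected,
# seeking "the greatest good for the greatest number."
print(net_utility(outcomes, affected={"me", "neighbor_a", "neighbor_b"}))  # 7: the act is endorsed
```

The point of the sketch is only that the two views share the same aggregation logic and differ on whose consequences count.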

Maximizing pleasure for the many, however, can be justified in utilitarianism at the cost of profoundly negative consequences for the few. Utilitarian approaches are frequently used in the often agonizing ethical choices of war. First of all, warfare requires that individuals and groups risk—and often lose—their very lives, in hopes that the larger consequences will benefit the many, in the form of greater security, peace, national sovereignty, and so on. In particular, there are well-known examples of utilitarian thinking at work in World War II—most prominently, the decision to drop the atomic bombs over Hiroshima and Nagasaki. Quite simply, while ca. 200,000 civilian lives were lost, ca. 500,000 Allied soldiers’ lives—the estimated cost of a direct invasion of Japan—were saved (Ess, 2013, p. 204). This utilitarian calculus was also used following the 9/11 terrorist attacks on the United States to justify suspension of rights in the name of national security, where national security, it was argued, required massive and profoundly intrusive surveillance. Utilitarianism is more broadly an ethical approach that predominates in the English-speaking world (the United States, the United Kingdom, Australia, and so on; Burk, 2007, p. 98ff.; Ess, 2013, p. 65ff.).

Deontology

By contrast, deontological approaches take up the language of rights, duties, and obligations, coupled with the near-absolute insistence that basic rights be recognized and protected, even in the face of considerable risks or financial costs. Modern deontological ethics begins in the work of Immanuel Kant and the primary focus on the human being as a rational autonomy—that is, a radical freedom whose capacities include not simply choice but, still more fundamentally, the capacity for self-rule, where self-rule specifically entails the ability to formulate one’s own moral laws. Kant’s understanding of ethics as an emphatically rational enterprise, one modelled on reason’s work in mathematics and the natural sciences, partly grounds his argument that rational self-rule would not end in a chaos of diverse ethical laws and principles: on the contrary, just as human reason achieves apparently universally valid results and findings in the natural sciences and mathematics, so Kant argued that human reason in its ethical expression would do the same (1785/1959; 1788/1956).

Kant argued specifically that human autonomy, as our primary point of departure, issues first of all in a duty of respect for other rational autonomies around me. In one formulation of his well-known categorical imperative, I am always to treat others as ends in themselves, never as means only. Again, our defining capacity as free beings is to determine our own moral laws and thereby pursue the goals or ends that they prescribe. But if I treat another as a means to those ends—for example, if I coerce another to serve as my slave or sex object—doing so annihilates their own capacity to determine their own moral laws and ends. Stated differently, to turn another human being (or any other form of rational autonomy) into an object or thing in this way is to fail to respect their fundamental freedom and capacity for rational self-rule. A presumption of foundational equality among all rational autonomies, then, immediately issues in the primary duty of respect for the other as an end, never as a means only (Alexander & Moore, 2015; Kant, 1785/1959, p. 47).

This sort of deontology issues in and supports modern understandings of human rights as inalienable and universal. In contrast with a utilitarian justification of sacrificing the few for the many—deontological approaches explicitly oppose such cost-benefit approaches. For example, Joel Reidenberg has stated bluntly: “In a democracy, privacy is a basic political right that cannot be sold out in the marketplace” (2000). Moreover, as seen below with regard to privacy in particular, these more deontological approaches appear to be more prevalent in the European and Scandinavian contexts (Burk, 2007, p. 100ff.; Ess, 2013, pp. 206–210; Stahl, 2004, p. 17).

Virtue Ethics

Virtue ethics proceeds from the straightforward and, it would seem, nearly universal human question: What must I do to be happy?—where happiness is understood in terms of a specific sense of contentment or well-being (eudaimonia). This contentment is experienced as a result of the practice and cultivation of specific abilities—virtues—which in turn contribute to a sense of a good life as flourishing. Virtue ethics thereby foregrounds the importance of “moral wisdom or discernment, friendship and family relationships, a deep concept of happiness, the role of the emotions in our moral life, and the questions of what sort of person I should be,” where these are not explicitly taken up in deontology and consequentialism (Ess, 2013, p. 241; Hursthouse, 1999, p. 3).

By “moral wisdom or discernment,” Hursthouse refers to phronēsis, a specific form of reflective (in contrast with determinative) judgment that comes into play exactly when the usual rules and principles offer conflicting directions: phronēsis works from the ground up, precisely within the fine-grained details of a given context—in part, so as to discern what larger norms and principles should apply, and with what relative weight and priority, to that context. Phronēsis, like the other virtues, must be cultivated through long experience—in part as an embodied or partially tacit form of knowledge. We have already seen that Norbert Wiener has highlighted human liberty in terms of virtue ethics: again, Wiener understands “liberty” in the motto of the French Revolution (liberté, egalité, fraternité) as meaning “the liberty of each human being to develop in his freedom the full measure of the human possibilities embodied in him” (Bynum, 2010; Wiener, 1954, p. 106). At the same time, Wiener’s use of the term “cybernetics” points, perhaps unwittingly, to phronēsis as invoked by Plato. In The Republic, Plato uses the cybernetes—a pilot or steersman, who knows (from experience) what is possible and not possible, and is able to correct the course if he makes an error (hence the sense of contemporary cybernetics as self-steering systems)—as an analogue for phronēsis as an ethical judgment capable of learning from experience and correcting errors in judgment (Ess, 2013, p. 239; Plato, 1991; Weizenbaum, 1976).
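The “self-steering” sense of contemporary cybernetics invoked here can be illustrated with a minimal feedback loop. This is purely an illustration of the steersman analogy, not anything drawn from Wiener’s or Plato’s texts; the course, heading, and gain values are invented for the example.

```python
# A steersman repeatedly compares the actual heading with the intended course
# and corrects a fraction of the observed error (simple proportional feedback).

target_heading = 90.0   # the intended course, in degrees
heading = 70.0          # the current, off-course heading
gain = 0.5              # how strongly each observed error is corrected

for step in range(6):
    error = target_heading - heading  # observe the deviation from course
    heading += gain * error           # correct part of the error
    print(f"step {step}: heading = {heading:.2f}")

# The heading converges on the target: each pass acts, observes the result,
# and corrects again, as in the analogy of phronesis learning from its errors.
```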

More broadly, the range of virtue lists developed across human time and global space is extensive. This article focuses especially on the virtues highlighted by Shannon Vallor (2011b) and Sara Ruddick (1975). Vallor points to empathy, patience, and perseverance as virtues that are key for in-depth communication, long-term friendship, and intimate relationships; Ruddick highlights loving itself as a virtue, one that requires practice and cultivation, explored more fully below in the discussion of “complete sex” and sexbots. It seems clear that liberty, communication, friendship, and long-term intimate relationships are core components of a life of contentment and flourishing.

Whereas utilitarianism and deontology emerge and prevail primarily in modern Western ethics, virtue ethics is found in both ancient and contemporary Western ethics, as well as globally—in multiple indigenous traditions, in Confucian and Buddhist thought, and in the world’s major (and some minor) religious traditions (Ess, 2013, pp. 238–243). This combination of ancient heritage and global scope makes virtue ethics especially relevant to contemporary DME as it focuses on ethical issues that arise in conjunction with digital media that are often interconnected around the world via computer networks. As the discussion of meta-ethics demonstrates, virtue ethics is thereby strongly pluralistic.

In the contemporary world, virtue ethics has enjoyed a considerable renaissance, relying in part on Wiener’s foundations. Perhaps most prominently, virtue ethics is increasingly taken on board in approaches to the design of ICTs (Spiekermann, 2016), including carebots (e.g., van Wynsberghe, 2013). More recently, Spiekermann’s implementations of virtue ethics in ICT design underlie an important new initiative, “The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems.” As seen initially, virtue ethics, along with feminist ethics of care, has been invoked in the recent emergence of “networked systems ethics” (Zevenbergen, 2016). Further reflecting extensive dialogue with philosophers and applied ethicists, Bendert Zevenbergen and his colleagues have determined that “virtue ethics should be applied to Internet research and engineering—where the technical persons must fulfill the character traits of the ‘virtuous agent’” (Zevenbergen et al., 2015, p. 31; emphasis added; cf. Jackson, Aldrovandi, & Hayes, 2015). In these ways, Spiekermann, Zevenbergen, and their colleagues come full circle back to the foundations of ICE in the work of Norbert Wiener. At the same time, this endorsement of virtue ethics and feminist ethics, by those in the more technical domains of computer and software engineering, intersects with and reinforces the developments and applications of virtue ethics in DME (Vallor, 2011a, 2011b, 2016; van Wynsberghe, 2013).

It is important to recognize that the rising importance of virtue ethics does not necessarily amount to a complete replacement of either consequentialism or deontology. On the contrary, as the two example issues of privacy and social robots illustrate, virtue ethics approaches often reinforce deontological considerations and complement more utilitarian analyses. In addition, both feminist ethics and ethics of care share with virtue ethics an explicit emphasis on the importance of emotions in orienting our ethical concerns and helping us make critical ethical decisions (Ess, 2013, pp. 229–235).

Cross-Cultural Considerations: Selfhood, Personhood, Identity

Virtue ethics as it emerges in both Western philosophy, including Socrates and Aristotle, and Eastern traditions, such as Confucian thought, further highlights the foundational importance of our basic understandings and assumptions surrounding what it means to be a human self or person. Very broadly speaking, these early traditions emphasize the self as relational. Such a self is defined, first of all, in terms of one’s family relations: the self is the relationships constituted with one’s parents, grandparents, aunts, uncles, siblings, nieces, and nephews, and, eventually, one’s own spouse and children. In these usually hierarchical societies, one’s place and role are further defined by one’s friendships and network of relationships in the larger world. Religious traditions entail relationships with a still larger set of entities—perhaps the spirits of the ancestors, animist spirits, the gods and goddesses of polytheistic traditions, the transcendent God of the Abrahamic traditions, and so on. Broadly speaking, the virtues highlighted in these traditions emphasize our cultivating the abilities and qualities that foster larger social, political, and religious harmony—e.g., filial piety in Confucian tradition, as the honor and submission a son owes his father, and, by extension, the larger authorities of teachers, the state, and the Emperor (MacIntyre, 1994, p. 190).

By contrast, the emergence of more individualistic conceptions of selfhood can lead to sometimes strikingly different virtues. Starting with the virtue of self-care, especially as facilitated by literacy and writing (Foucault, 1988, p. 19), the more individual emphasis on selfhood can be seen in Wiener’s account of liberté, as well as in Kant’s interest in virtue ethics, including the injunction, “sapere aude!—have the courage to use your own understanding!”—as the motto of the Enlightenment (Kant, 1784/1991, p. 54). This article explores more fully the contrast between more relational and more individual emphases on selfhood that are centrally at work in conceptions of privacy. Most briefly, in many historical Asian traditions, individual “privacy” for a relational self can only be understood as the desire to cut oneself off from the relationships that define one: such a notion is uniformly regarded as a moral negative—something dirty or shameful. It is only as the self becomes increasingly understood as a rational autonomy in the modern West that individual “privacy” becomes articulated as a positive good and right—indeed, one that is foundational to both self-flourishing and the functioning of democratic polity itself (Ess, 2013, pp. 59–62; Lü, 2005).

Finally, these broad differences must be understood as differences in emphasis across a spectrum, not differences defining an oppositional binary. As Soraj Hongladarom has pointed out, there are relational emphases in modern Western philosophy alongside the stress on the individual (Hongladarom, 2016). In societies fostering strongly relational selfhood, individual human beings, of course, understand themselves as distinct persons. Moreover, the multiple changes brought about in the past decades by globalization, the global distribution of media that thereby exposes diverse cultures to one another in dramatic new ways, and the global diffusion of the internet itself all lead to notable shifts in emphases in understandings of selfhood. Most dramatically, as shifting attitudes and practices regarding privacy demonstrate, more strongly individual understandings of the self have emerged in China, Japan, Thailand, and elsewhere in societies formerly noted for relational emphases: that is, individual privacy is now acknowledged and increasingly protected in law as a positive good and right (Ess, 2013, p. 64ff.). And while individual emphases remain alive and well in Western societies, the contemporary era of “networked individualism” (Baym, 2011, p. 385) is also marked by stark shifts towards more relational understandings of “group privacy” (Lange, 2007) and “networked privacy”—indeed, in many instances, the willingness to abandon individual privacy altogether (Ess, 2013, p. 55ff.; cf. Vignoles et al., 2016).

Meta-Ethics: Relativism, Monism, Pluralism

Competing ethical claims, such as the claim that individual privacy can only be understood negatively versus the insistence that individual privacy is a foundational right and good, may well initially appear to present us with an irreconcilable opposition—an either/or choice that insists that only one of these can be ethically legitimate, and the other must hence be ethically illegitimate. The same response may occur to us in our first encounters with the underlying ethical frameworks such as deontology, utilitarianism, virtue ethics, and so on. But in fact, such an either/or response is but one of three possible choices about both specific ethical issues, such as individual privacy, and about ethical frameworks themselves. Since these three choices include choices about such ethical frameworks, they are usually referred to as meta-ethical positions.

The initial response of either/or—of insisting that one position is right, and thereby any alternative view that disagrees with that position is wrong—enjoys various names, including ethical monism or ethical absolutism. Such a position is relatively easy to hold in traditional societies that tend to be closed and static: such a position is also helpful, perhaps critical, as it helps individuals who share the same ethical (and larger) orientations to build and sustain stable societies (cf. Ess, 2013, p. 218ff.). But ethical monism is more profoundly challenged by the experience—both individual and at social and cultural levels—of encountering not just one or two differing viewpoints, but a multiplicity of competing and apparently contradictory claims and ethics. Such is the context today, of course, as globalization and global media networks confront us all with a staggering diversity of cultures, each of which is defined by specific beliefs, practices, customs, and ethical norms that often vary widely from one to the other. A tempting and sometimes beneficent response to this diversity is, in effect, to give up on ethical monism and take the meta-ethical view of ethical relativism instead. Contra the underlying assumption of ethical monism that there are universally legitimate ethical norms and standards—norms that are somehow valid and ethically binding for everyone (e.g., as decreed by an accepted Divinity)—ethical relativism insists that no such universal norms exist. “Everything is relative,” we like to say; “when in Rome, do as the Romans do,” and so on. The advantages of ethical relativism are significant. First of all, it allows us to be tolerant of the multitude of differing beliefs surrounding us, and thereby more capable of living in some modicum of harmony with people from a wide range of religious and cultural backgrounds—instead of feeling compelled either to condemn them as wrong or to seek to convert them to our understanding of the one and only truth. A further advantage of ethical relativism is that it relieves us of the cognitive and emotional burdens of having to consider these difficult matters any further. These are no small advantages for all of us whose lives and vocations are demanding enough as it is.

Nonetheless, ethical relativism rests on a basic logical mistake regarding how to interpret the often very great differences we encounter in ethical positions. Ethical relativism argues that these differences can only be accounted for by assuming that no universally legitimate ethical norms or standards exist. In simple logical terms: If (A), there are no universal standards, then (B), one should encounter great diversity in ethics between individuals and cultures. So far, so good. The mistake consists in arguing further: since (B), we do encounter great diversity in ethics between individuals and cultures, therefore (A), there are no universal standards. The logical error here is called the fallacy of affirming the consequent. The “consequent” refers to the claim following the “then” in an if-then statement. To affirm the consequent is, in effect, to reverse the if-then statement: schematically, starting from if (A) then (B), one argues if (B) (on encountering great diversity), then (A) (there are no universal standards). But this does not follow. Quite simply, there may well be other grounds, circumstances, conditions, and so on that would lead to (B), an encounter with great ethical diversity—not only and exclusively (A), the absence of universal norms. Schematically, (C) (other grounds, circumstances, conditions, and so on) can lead to or imply (B): if (C), then (B) holds as well. But this means, in turn, that asserting (B) alone does not lead necessarily to (A) (the absence of universal norms): (B) may equally be explained by (C) (other possible explanations for ethical diversity) (Ess, 2013, p. 213ff.).
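The schema can be stated compactly in standard propositional notation; the following simply restates the reasoning above:

```latex
\begin{align*}
&\text{Valid (modus ponens):} && A \to B,\; A \;\vdash\; B \\
&\text{Invalid (affirming the consequent):} && A \to B,\; B \;\nvdash\; A \\
&\text{Why it fails:} && \text{if also } C \to B, \text{ then } B \text{ may hold because of } C \text{, not } A.
\end{align*}
```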

A primary alternative explanation, in fact, is offered by the third meta-ethical position of ethical pluralism (Ess, 2013, p. 221ff.). Ethical pluralism grants the empirical facts highlighted by (B)—great individual and cultural diversity of ethical norms, practices, and so on. But ethical pluralism argues that this diversity may follow from our interpreting, applying, or understanding shared norms and values (as thereby at least quasi-universal) in just the distinctive and diverse ways identified by (B). To use a favorite example: both Norway and the United States agree that individual privacy is a fundamental right and norm. But this norm is practiced or understood in sometimes strikingly different ways. For example, research ethics in the United States uniformly focuses on protecting individual privacy (along with anonymity and confidentiality) exclusively: by contrast—and reflecting a somewhat more relational understanding of selfhood in Norway—researchers are further obliged to protect the privacy of both the individual and those persons in close relationship with the individual, within the individual’s intimsfære or “intimate sphere” (Ess & Fossheim, 2013; NESH, 2006, §13). These significant differences do not mean, as the ethical relativist argues, that there are no universally legitimate or binding ethical norms (B ➔ A). Rather, these differences result quite clearly from a shared acceptance of the ethical norm of privacy that is then interpreted and applied differently, as refracted through the different emphases on selfhood at work in each national context (C ➔ B).

Ethical pluralism thus provides a critical alternative to both ethical monism and ethical relativism as ways of understanding and responding to often profound ethical differences. In contrast with the ethical monist, such pluralism allows for a limited version of the tolerance ethical relativists insist upon. To see this, one must first recognize a basic logical contradiction in relativism: if all norms are relative, why should tolerance be favored over intolerance? At the same time, the ethical relativist cannot coherently distinguish between a Hitler on the one hand and a Mother Teresa on the other: both must be accepted as legitimate within a relative framework. Ethical pluralism, by contrast, endorses (quasi-) universal norms, such as basic human rights to life and respect for persons; these can and sometimes must be interpreted differently (e.g., the right to life in a wealthy country such as Norway includes the right to free public health care, whereas this right remains contested in the United States)—hence, the culturally-rooted differences can be accepted and tolerated as the relativist would insist. At the same time, however, this flexibility of interpretation for the pluralist does not stretch to justifying wars of aggression and genocide. On the contrary: by insisting on basic rights to life and respect for persons, the pluralist can sustain the ethical distinction between a Mother Teresa (who respects such rights) and a Hitler (who systematically does not).

Finally, it is important to recognize that these three meta-ethical positions are not mutually exclusive. In particular, ethical pluralism will not resolve all ethical differences into coherent harmony, so one may be forced to take up one of the alternatives. For example, we may well find ourselves arguing for some version of ethical monism in matters such as human trafficking and genocide as irreconcilable with basic rights to life and equality, while holding to some version of ethical relativism with regard to politeness norms (shaking hands vs. one to three, sometimes four, kisses on the cheek vs. American hugging, etc.).

With these ethical and meta-ethical frameworks in mind, two examples show how they work in practice, by way of privacy and social robots.

Specific Issues in DME

Given how far digital media in all their applications and usages interweave with more or less every aspect of our lives in contemporary (late) industrialized societies—the range of ethical challenges and issues surrounding digital media is proportionately extensive. This section explores only two of the most significant issues, moving from the “classic” and well-examined issue of privacy (and so leaving aside copyright, pornography, and violent content and behavior) to the more recent focus on social robots, including carebots and sexbots (and so leaving aside topics such as death online and Big Data issues such as pre-emptive policing). The goal is to provide at least a reasonable sketch of the primary issues and how these may be approached by way of the ethical frameworks reviewed above. Of first importance is to illustrate how ethical frameworks may be applied—and/or how some of these challenges suggest a turn towards new sorts of ethical analyses. Second, this review provides an overview of what may be taken to be primary exemplars and case-studies that, in many instances, will serve as starting points for contemporary and future ethical issues that are certain to emerge as digital technologies continue their rapid development and diffusion into our lives.

Privacy

There is perhaps no more vexed and complicated topic in DME than privacy—first of all, because of the essential role privacy plays in contemporary conceptions of individual selfhood, and second, because of the role it plays in our understandings of democratic norms, practices, and polity. At the same time, especially in an era characterized by mass surveillance and hackers of various stripes (whether working as lone wolves, for criminal organizations, and/or for nation-states rich and poor), privacy online is threatened on multiple fronts—including our own willingness to sacrifice privacy for the convenience of “free” online services such as email, social networking sites, and so on.

To navigate these demanding waters, it is necessary to review primary definitions of privacy as these have emerged vis-à-vis new media (from photography to the Internet). Then the interconnections between privacy and culturally variable conceptions of selfhood are explored, especially as these intersect in the most recent privacy theories. Next, the contrasting approaches of the United States and the European Union to privacy protection are examined in terms of the ethical frameworks of utilitarianism and deontology, along with recent legal advances in protecting individual privacy. The conclusion offers suggestions for how individuals can enhance their privacy online, while further discussing the privacy paradox: these suggestions further argue that protecting privacy is also a matter of virtue ethics.

Privacy: History, Key Definitions, Significance

People are often surprised to learn how comparatively recent and, in some ways, culturally specific contemporary understandings of privacy are. In the United States context, privacy is first explicitly articulated in a landmark legal paper by Samuel Warren and Louis Brandeis (1890), defining privacy as the right to “being let alone” or “being free from intrusion” (Tavani, 2013, p. 135). As is often noted (e.g., Miller & Taddeo, 2017), this articulation of privacy appears to be occasioned by the emergence of photography and the possibilities it opened up for publicizing the private lives of prominent people via expanding newspapers. By the same token, the rise of ICTs has evoked new understandings of privacy as tied to new possibilities of intrusion. As early as 1967, Alan F. Westin defined privacy as “the claim of an individual to determine what information about himself or herself should be known to others” (2003, p. 431). Philip Agre is often cited for his definition of privacy, on which “control over personal information is control over an aspect of the identity one projects to the world, [such that] the right to privacy is the freedom from unreasonable constraints on the construction of one's own identity” (Agre & Rotenberg, 1998, p. 3; cited in Rouvroy, 2008, p. 4; cf. Miller & Taddeo, 2017). Somewhat more broadly, Herman Tavani has summarized the view of many privacy theorists regarding informational privacy as “one’s ability to restrict access to and control the flow of one’s personal information” (Ess, 2013, p. 72; Tavani, 2013, p. 136).

Last, decisional privacy is especially critical to our basic understandings of democratic polity and norms. Tavani defines decisional privacy as freedom from interference by others in "one’s personal choices, plans, and decisions" (Ess, 2013, p. 72; Tavani, 2013, p. 135ff.). As should be manifest, these variant understandings of privacy are not exclusive of one another; rather, they interweave and often reinforce one another. In an information age, for example, it would seem that decisional privacy requires informational privacy as well.

Privacy, Personhood, and Culture

At this point, a foundational assumption shaping all of these definitions should be noted: namely, privacy is conceived of as primarily an individual right—a right further rooted in the U.S. origins of privacy rights as rights against unwarranted intrusion ("search and seizure") into personal spaces (Debatin, 2011). But over the past 15 years or so, new conceptions of privacy have been developed that rest on more relational understandings of human beings—leading to notions of "group privacy" (Lange, 2007). Such relational understandings of human beings further undergird the most significant recent theory of privacy as oriented towards the online world—namely, Helen Nissenbaum’s (2010, 2011) account of privacy as "contextual integrity," which relies in part on still earlier work by James Rachels (1975). Both Rachels and Nissenbaum explicitly shift from high modern (and primarily Western) conceptions of the human being as strongly individual towards more relational understandings; as the term suggests, the latter conception foregrounds the importance of multiple relationships in defining one’s sense of selfhood. On these accounts, the relational self is built up within the family and close relatives, and then extends to larger social relationships, beginning with friendships and extending to various relationships in social, professional, political, perhaps religious spheres, and so on. Relational selves shaped by religious traditions include relationships with divinities—whether these are understood as more inextricably interwoven with the natural-material order (e.g., the kami in Japanese animism and Shinto) and/or as more transcendent of the natural-material order (e.g., God as understood in many—but by no means all—of the Abrahamic traditions of Judaism, Christianity, and Islam). So Rachels highlights relationships such as "businessman to employee, minister to congregant, doctor to patient, husband to wife, parent to child, and so on" (Rachels, 1975, p. 328, cited in Nissenbaum, 2010, pp. 65, 123; cf. Ess, 2015, p. 64ff.). Rachels then links what we have seen in terms of an initial right to be let alone and informational privacy to specific relationships: "there is a close connection between our ability to control who has access to us and to information about us, and our ability to create and maintain different sorts of social relationships with different people" (1975, p. 326, cited in Nissenbaum, 2010, p. 65; see Ess, 2015, p. 65). Nissenbaum in turn develops her account of privacy as "contextual integrity"—an understanding of privacy, that is, that shifts focus from either place or a given individual to the specific set of relationships within which specific information is shared. To use one of her examples: patients share what is often highly intimate and personal information with their physicians and other healthcare professionals, as it is needed, obviously, for effective diagnoses and treatment. The relationship between a physician and a pharmaceutical company is different, however. So if a physician were to share information given in the patient-physician relationship and context with, say, a pharmaceutical company seeking to identify likely targets for advertising its products—this would violate the contextual integrity of the first relationship. Privacy is now defined in terms of a right to an "appropriate" flow of information as defined by a specific context (Nissenbaum, 2010, p. 107ff.; cf. Ess, 2015, p. 62ff.).

Broadly speaking, then, as the sense of selfhood in Western cultures becomes ever more relational—precisely as such selfhood is facilitated by networked ICTs and their applications, perhaps most importantly social media—these more recent conceptions of privacy would appear to be both appropriate and necessary. At the same time, as the examples of Japanese animism and Shinto initially suggest, such relational selfhood has strongly prevailed in non-Western cultures, as well as in the pre-modern West and indigenous cultures (Ess, 2013, pp. 64, 98, 250ff.). In fact, in societies and traditions emphasizing relational conceptions of selfhood—including those cultures shaped by Buddhist and Confucian thought—there is originally no such thing as individual privacy as presumed in the (late) modern West. On the contrary, privacy for a relational self can only be conceptualized in negative terms—for example, as the desire to hide something shameful or bad (Ess, 2013, pp. 62–65; Lü, 2005). To be sure, there are examples of group privacy—e.g., of familial privacy vis-à-vis the larger community and the state (Kitiyadisai, 2005). Moreover, in part as ICTs have woven the world ever more closely together over the past several decades, it appears that in some Asian societies, the sense of selfhood is shifting towards more individual emphases. As Lü (2005) points out, privacy has now become a positive term and right in the People’s Republic of China: so much so, in fact, that individual privacy rights are being written into China’s constitution (Ess, 2013, pp. 67–68; Sui, 2011; cf. Greenleaf, 2011).

As something of a middle ground between these two broad contrasts, in Germany, Denmark, and Norway, discussions of privacy often involve two key terms: Privatleben (German) or privatlivet (Danish and Norwegian) and Intimsphäre or intimsfære. Roughly translated as "private life" and "intimate sphere," the latter especially points precisely to the webs of close relationships among family and friends. As Nissenbaum’s understanding of privacy as contextual integrity articulates, what needs to be protected in our private lives and our intimate sphere is not solely bits of information about ourselves as individuals: in addition, our private life and intimate spheres require the protection of information shared through the close relationships they encompass. These strongly relational understandings of selfhood, and thus of privacy as a form of group privacy, are in fact robust enough to be encoded in the Norwegian Internet research ethics guidelines (NESH, 2006, §13).

What Does Privacy Mean in the (Post-)Digital Era? Cultural and Ethical Contrasts

A central—in effect, operational—definition of privacy in conjunction with digital media turns on what counts as personal data. In the European Union’s initial legislation on protecting personal data, the definition is quite broad:

“personal data” shall mean any information relating to an identified or identifiable natural person (“data subject”); an identifiable person is one who can be identified, directly or indirectly, in particular by reference to an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural, or social identity. (DIRECTIVE 95/46/EC, Article 2 [a])

A critical development of this definition emerged in 2008, as the EU Data Commissioners ruled that Internet Protocol (IP) addresses count as personal data (White, 2008). Briefly, IP addresses are exchanged constantly between an individual’s computer and the various internet services it connects to and uses—whether an email service such as Gmail or any webpage that one might call up. Under most circumstances, if someone has access to the IP address of your computer, they can quickly and easily acquire a great deal of information about you—information that indeed counts as "personal" by the above definition. U.S.-based companies such as Google strenuously objected to including IP addresses as part of the definition of personal data: Google—and any other company that offers web-based services—would thereby be prevented from sharing IP addresses with, for example, the advertisers who need this information as part of the larger business of tracking consumers’ interests and shopping patterns for the sake of targeted advertising (White, 2008).
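To make the connection concrete, the following is a minimal Python sketch of one of the simplest such lookups—a reverse DNS query linking an IP address back to a named host. The address 8.8.8.8 (Google's public DNS resolver) serves purely as a stand-in for any address that might appear in a server log.

```python
import socket

# Reverse DNS lookup: one small illustration of how an IP address recorded
# in a server log can be linked back to identifying information.
# 8.8.8.8 (Google's public DNS resolver) stands in for any logged address.
ip_address = "8.8.8.8"

try:
    hostname, _aliases, _addresses = socket.gethostbyaddr(ip_address)
    print(f"{ip_address} resolves to: {hostname}")  # e.g., dns.google
except socket.herror:
    print(f"No reverse DNS entry found for {ip_address}")
```

Combined with geolocation databases and the browsing history affiliated with the same address, even so simple a lookup begins to assemble exactly the sort of profile that counts as "personal data" under the definition above.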

This conflict crystallizes a larger contrast between the approaches of the United States and the European Union to privacy and the protection of personal data. Broadly, the contrast can be understood in terms of a more utilitarian orientation in the United States versus a greater deontological emphasis in the European Union. That is, as with copyright law and practice in the United States, privacy law and practice are frequently justified in terms of a utilitarian cost-benefit analysis. Broadly speaking, the U.S. view argues that fewer regulations and legal restrictions on companies make for greater economic efficiencies, and thus comparatively greater market activity and profit. Especially given the (now clearly questionable) 1980s’ assumptions of "trickle-down" or supply-side economics, these increased economic benefits will be distributed broadly: "a rising tide lifts all boats"—or, in more directly utilitarian terms, greater economic benefits, versus presumably fewer such benefits (resulting from greater regulation), are justified precisely as the former promise the greatest good for the greatest number (Burk, 2007, pp. 96, 98, 100).

By contrast, the European Union has justified its decisions in terms of protecting basic privacy rights, as rooted more fundamentally in the deontological emphasis on individual autonomy: as Burk puts it, “EU privacy law elevates considerations of regard for personal autonomy over considerations of cost and benefit” (Burk, 2007, p. 98). This is to say that from deontological perspectives, basic human rights, including privacy, are not to be superseded by market considerations. Indeed, as Burk continues, these regulations are costly: “compliance with EU data protection requirements imposes a substantial financial and administrative burden on a broad array of businesses that may handle personalized data” (2007, p. 98).

In certain respects, these contrasts between a more utilitarian United States and a more deontological European Union have only sharpened since 2008. In 2012, the European Union introduced new legislation that required websites to first ask for the consent of the user to the site’s use of cookies (small files that, among other things, allow the site to keep track of a specific user, beginning with the IP address affiliated with the machine and extending into browsing history, and so on; the mechanism is sketched below). In 2018, a new regulation will take full effect—one still aimed at ensuring individuals "the right to the protection of personal data concerning him or her"—including significant fines on companies that violate the new requirements (REGULATION [EU] 2016/679). At the same time, however, the Regulation seeks to facilitate a "Digital Single Market," one that "will allow European citizens and businesses to fully benefit from the digital economy" (European Commission, Justice: Building a European Area of Justice).
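The cookie mechanism at issue is simple enough to illustrate directly. The following minimal Python sketch (standard library only) constructs the kind of Set-Cookie header that assigns a visitor a persistent identifier; the cookie name tracker_id and its value are hypothetical stand-ins for what an advertising network might set.

```python
from http.cookies import SimpleCookie

# Construct the kind of Set-Cookie header at issue: it assigns the visitor
# a persistent identifier that the browser returns on every later request,
# allowing the site (or an ad network) to recognize the same user.
cookie = SimpleCookie()
cookie["tracker_id"] = "a1b2c3d4e5"                   # hypothetical identifier
cookie["tracker_id"]["max-age"] = 60 * 60 * 24 * 365  # persists for one year
cookie["tracker_id"]["domain"] = "example.com"

print(cookie.output())
# Set-Cookie: tracker_id=a1b2c3d4e5; Domain=example.com; Max-Age=31536000
```

It is consent to precisely this sort of header—trivial to emit and persistent by design—that the 2012 rules require sites to obtain.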

By contrast, the utilitarian approach became especially clear on the U.S. side of the pond after the terrorist attacks of 9/11, as the U.S. government moved rapidly to implement and develop new data surveillance technologies. "The greatest good for the greatest number" argued that national security superseded individual rights to privacy—specifically in the form of suspending due process rights that previously required government agencies to justify specific wiretapping and other surveillance techniques before a special court (Cohen, 2012, p. 166; cf. Braman, 2011). As the Edward Snowden revelations made especially clear, the resulting mass surveillance consistently and systematically violated individual privacy rights—both from the perspective of U.S. law and most especially from the definitions and regulations of data privacy protection in the European Union. The latter conflict came to a head in a recent case before the European Court of Justice (ECoJ), as Austrian law student Max Schrems accused Facebook of violating his privacy rights as defined in the European Union. That is, a so-called Safe Harbour agreement, in effect since 2000, required that personal data transferred from the European Union to the United States be protected at the same levels as required by the E.U. data privacy protection regulations. The Snowden revelations, however, made clear that the E.U. requirements were not met once such data was transferred to the U.S. by transnational companies such as Facebook. The ECoJ declared the Safe Harbour agreement to be invalid—requiring a massive shift in how U.S.-based transnational corporations must now take up matters of data privacy with their European customers (Gibbs, 2015). In ethical terms, the ECoJ has insisted that the strongly deontological E.U. protections of personal data override the U.S. consequentialist arguments for compromising individual data privacy in the name of national security.

Protecting Privacy: Current Options and the Privacy Paradox

In some ways, the ongoing debates and developments concerning privacy online amount to an ever-escalating arms race: as hackers, especially those sponsored by criminal organizations and nation-states, become ever more proficient, so nation-states and private companies increase the sophistication of their defenses. In the midst of all of this, those of us concerned with protecting individual privacy have a limited range of options. Beyond the protections rooted in national and, in the case of the European Union, international law, commercial products that promise greater security as well as so-called open source alternatives can be useful. The latter include increasingly well-known and popular services such as PGP ("Pretty Good Privacy"), which encrypts documents and email, and the Tor browser, which encrypts and anonymizes one’s browsing across the web. As made clear by the recent, apparently Russian-sponsored hack of the U.S. National Security Agency, which exposed previously unmatched decryption and related surveillance tools, even the most powerful defenses can be breached, given enough time and resources (Sanger, 2016).
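Returning to PGP: for a concrete sense of the modest effort such tools require, here is a minimal sketch using the python-gnupg wrapper. It assumes that GnuPG itself is installed, that the recipient's public key has already been imported into the local keyring, and that friend@example.org is a hypothetical placeholder address.

```python
import gnupg  # python-gnupg: a wrapper that drives an installed GnuPG binary

# Encrypt a message to a recipient whose public key is already in the local
# keyring; only the holder of the matching private key can decrypt it.
gpg = gnupg.GPG()
encrypted = gpg.encrypt("A private message.", recipients=["friend@example.org"])

if encrypted.ok:
    print(str(encrypted))  # ASCII-armored ciphertext, safe to paste into email
else:
    print("Encryption failed:", encrypted.status)
```

Even this small example hints at the setup costs—installing software, managing keys—that, as discussed next, most users prove unwilling to pay.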

At the same time, many of us appear to be our own worst enemies when it comes to privacy. As the "privacy paradox" articulates: we may well say that we are concerned with protecting privacy—but in practice, the population at large has been generally unwilling to undertake much effort to do so. The most obvious example is the willingness to sign up for various free services—whether a mail service, a social media site, and/or a convenient app for our phone—without bothering to read the Terms of Service (ToS), which define just how much personal data will be made available to the suppliers of the services and applications and their third-party customers. Similarly, while many people are aware of commercial and open-source products and services that would enhance online privacy, very, very few of us are willing to pay even the modest costs involved, whether in terms of money and/or the additional effort required to download, install, and then use an application such as PGP or Tor (Kaupang, 2014; Utz & Kramer, 2009).

The privacy paradox helps highlight a contrast in the ethical approaches examined here. Broadly, more utilitarian approaches justify compromising privacy rights if such compromises can lead to a greater good for a greater number of people, as in the example of national security arguments. More deontological approaches will seek to minimize such compromises: most strongly, given the view that the modern nation-state derives its legitimacy precisely from its work in protecting privacy rights and other rights seen as essential to democratic governance and human freedom—if the state knowingly compromises such rights, it begins to undermine its own grounds for existence.

Last, a virtue ethics approach would suggest that the positive practices of protecting privacy—in effect, running counter to the prevailing patterns highlighted by the privacy paradox—amount to acquiring and practicing virtues that not only enhance privacy but also, it would seem, contribute more broadly to a good life of flourishing and contentment.

As more and more of us experience and are impacted by privacy breaches of various sorts, such a virtue ethics approach may well become more common. That is, as often happens in the development and diffusion of new technologies, a certain amount of time must pass—time in which not simply the professionals and the academics, but a larger number of citizens and consumers, are confronted by the sometimes severe consequences and high costs that follow from the unethical exploitation of these technologies. Such hard experience seems necessary for "the rest of us" to become convinced and inspired to exercise more caution, demand better legal protection from the relevant authorities, and/or learn to use the new technologies in safer ways (cf. Feenberg, 2010). These oftentimes painful and difficult experiences appear to be required, however, to inspire people to come to grips with the new ethical challenges that confront us—in ways that are not simply consequentialist and deontological, but also virtuous.

In all events, these experiences and growing ethical sensibilities will continue to help shape DME as a demotically informed ethics. The Snowden revelations and the Max Schrems case suggest that we are indeed learning in helpful ways with regard to privacy online.

In these directions, the ethics of privacy are relatively mature and well-grounded, both within ICE and DME. By contrast, the second inquiry—into social robots—is at its earliest stages.

Social Robots: All’s Fair in Love and War?

Especially in a post-digital era, it is difficult to think of a topic for DME more fundamentally challenging and compelling than that of social robots. First of all, robots make digital media literally full-bodied and inextricably conjoined with our analogue senses, beginning with vision and hearing. These technologies are fully digital—including the Artificial Intelligence (AI) that is key to their (semi-)autonomous capacities. But far more than current digital media devices (screens for TV or visual telephony, phones, and so on), social robots communicate with us in fully analogue fashion, as they speak and use their faces and bodies for communication (including facial gestures to express [artificial] emotion, and body distance [proxemics] to convey respect, curiosity, intimacy, concern, and so on). This (re)turn to the analogue is most complete in the case of carebots and sexbots: humans will touch and be touched by these devices, employing much of the full range of communication played out in and through our bodies.

It hardly needs saying that embodied forms of communication—especially in the domains of sexuality and intimate relationships—are among the most fundamental and defining forms of communication we enjoy as human beings. Indeed, researchers have demonstrated physiological arousal in human beings when simply asked to touch the buttocks of the otherwise perfectly sexless Japanese robot Pepper (Li, Ju, & Reeves, 2016). In these ways, then, carebots and (eventual) sexbots will, in effect, catapult digital media and communication fully back into the embodied analogue world.

Moreover, the diffusion of robots, including carebots, into our world is more advanced than some may be aware. Examples of carebots have been employed in therapeutic care for a number of years now—perhaps most prominently, Paro, which models a baby harp seal, used in eldercare, including therapy for those with dementia (Sandry, 2015). KASPAR is used in conjunction with autistic children—sometimes with astonishing results (see Kaspar the Social Robot). More recently, the Telenoid robot has been used experimentally in eldercare in Denmark, with promising results (e.g., Seibt & Nørskov, 2012). In this light, it seems an easy prediction to make: just as the PC revolution, by diffusing computers out of specialist institutes and labs into offices and homes, inaugurated a demotic turn in information and computing ethics that became part of DME—so the comparable diffusion of robots into our everyday lives is gradually forcing "the rest of us" to come to grips with ethical concerns and questions that were exclusively the stuff of science fiction but a decade ago.

Indeed, the ethical dimensions of both the design of these devices and our interactions with them have come increasingly to the foreground in ICE, emerging within the distinct domain of Machine Ethics and Robot Ethics (MRE). At the same time, this work is strongly cross-cultural, most especially because of Japan’s leading role in robotics. A primary example of such ethically oriented and cross-cultural work is a major new research project called integrative social robotics, which aims at nothing less than integrating “robotics research with a wide scope of research disciplines that investigate human social interactions, including empirical, conceptual, and value-theoretical research in the Humanities.” The humanistic, specifically ethical dimensions of this project draw on virtue ethics and its focus on well-being to “guide the development of social robotics applications from idea to implementation” (both quotes from Integrative Social Robotics: A New Framework for Culturally Sustainable Technology Solutions).

To be sure, it is still early days with regard to sexbots, compared with carebots and warbots. Some devices have been brought to market that appear to begin the transition from “love dolls” to sexbots, driven at least in part by AI—inspiring at least one newspaper to proclaim that sexbots could be the “biggest trend of 2016” (Parsons, McCrum, & Watkinson, 2016). Perhaps the most dramatic indication of their impending reality is a recent campaign to stop their development and deployment altogether. As will be seen in more detail, Dr. Kathleen Richardson has argued for a ban on sexbots—one modeled on similar efforts to ban warbots—on important ethical grounds (2015).

The next section, then, explores the spectrum of ethical positions that have emerged in the past decade or so in ICE with regard to sexbots. This spectrum will provide the starting point for subsequent reflection and development in DME proper—beginning with an initial account of responses to sexbots from “the rest of us.”

Poles of the Spectrum: Marriage or Ban?

The current philosophical, and more popular, discussion of sexbots begins with David Levy’s Love and Sex With Robots: The Evolution of Human-Robot Relationships (2007). Levy assumes that, by 2050 or so, robots will have complete natural-language capacities that will allow them to converse with humans "on any subject, at any desired level of intellect and knowledge, in any language, and with any desired voice—male, female, young, old, dull, sexy" (2007, p. 10). Most critically, he further believes that "The robots of the mid-twenty-first century will also possess humanlike or superhuman-like consciousness and emotions" (2007, p. 10). The assumption that robots will indeed acquire human-like emotions is deeply questionable. Levy seems aware of this difficulty, as he argues that artificial emotions—a robot’s capacity to evoke the gestures and expressions that mimic human expression of emotion—might trigger in us the anthropomorphizing belief that the machine really does care, and that this will be sufficient for falling in love:

There are those who doubt that we can reasonably ascribe feelings to robots, but if a robot behaves as though it has feelings, can we reasonably argue that it does not? If a robot’s artificial emotions prompt it to say things such as “I love you,” surely we would be willing to accept these statements at face value, provided that the robot’s other behavior patterns back them up.

(Levy, 2007, p. 11; cf. p. 12)

On the contrary: this reliance on artificial emotions will emerge as a central point of subsequent argument and critique.

Given these assumptions, Levy initially bases his enthusiastic case for sexbots on what he believes is a strong analogy between such entities and pets: just as many of us enjoy long-lasting and deeply satisfying emotional bonds with our pets, so the capacity of robots, specifically sexbots, to display what appears to be emotional care coupled with sexual prowess will result in satisfying sex and further inspire love sufficient to justify marriage. He then relies on a primarily psychological account of why people have sex—focusing specifically on sex as pleasure, release of tension and stress, pursuit of novelty, and escape from boredom (e.g., 2007, p. 187). Any connection between love and sex seems, on his account, relatively arbitrary—and more important for women than for men (e.g., 2007, p. 183). In any event, Levy is convinced from the outset that if it is love we seek alongside sheer sex, robots will be able to provide at least a simulacrum of such an emotion.

Levy acknowledges that there are ethical issues involved here as well—including, very much to his credit, the importance of our recognizing the rights of such robots as they become increasingly human-like in terms of their own autonomy (2007, pp. 98, 305, 309). In this direction, Levy thus appears to invoke a more deontological emphasis. Otherwise, while Levy does not use the term, his enthusiasm for sexbots rests squarely on consequentialist arguments—both ethical egoism and more broadly utilitarian considerations. These benefits are psychological as well as physical, as Levy initially explores by way of the multiple psychological benefits we receive from our relationships with pets. Levy further extols potential economic benefits (e.g., p. 139), as well as diverse social benefits such as "the likely reduction in teenage pregnancy, abortions, sexually transmitted diseases, and pedophilia . . ." alongside the "clear personal benefits when sexual boundaries widen, ushering in new sexual opportunities, some bizarre, others exciting" (2007, p. 300). In these directions, Levy identifies a key target group for sexbots—namely, "social misfits, social outcasts, or even worse" who will (ostensibly) become better-balanced human beings for having access to a sexbot. By the same token, in his view, sexbots will help "those who are devastated by the breakdown of their most significant human relationship" by ostensibly offering a speedy emotional recovery (2007, p. 304).

The climactic conclusion (pun intended) of Levy’s consequentialist argument is worth noticing. He envisions an ever-accelerating cycle of new sophistication in sexbots, in turn driving ever more demand. This means that:

People will want better robot sex, and even better robot sex, and better still robot sex, their sexual appetites becoming voracious as the technologies improve, bringing ever higher levels of joy with each experience. And it is quite possible that the terms "sex maniac" and "nymphomaniac" will take on new meanings, or at least new dimensions, as what are perceived to be natural levels of human sexual desire change to conform to what is newly available—great sex on tap for everyone, 24/7.

(2007, p. 310)

To state it kindly, this is not everyone’s vision—or ethics—of love and sex. On the contrary, this enthusiastic extreme has inspired the opposite response.

The Campaign Against Sex Robots: Kathleen Richardson

Kathleen Richardson is an anthropologist with a strong background in the therapeutic use of social robots for children with autism. Richardson has articulated a substantive critique of Levy, beginning with Levy’s arguments for sexbots as replacements for prostitutes. Richardson characterizes Levy’s account as showing that “the sellers of sex are seen by the buyers of sex as things and not recognized as human subjects”: the immediate ethical danger here is that doing so “. . . legitimates a dangerous mode of existence where humans can move about in relations with other humans but not recognise them as human subjects in their own right” (Richardson, 2015, p. 290). In ethical terms, we can understand Richardson’s point first of all as a deontological one—namely, the objection to denying human autonomy, dignity, and so on by treating the human person as instead a thing, an object (cf. 2015, p. 291).

But Richardson further counters Levy in his own ethical terms—namely, his consequentialist claims that replacing prostitutes with sexbots will thereby reduce human prostitution and the acknowledged negative costs thereof. On the contrary, Richardson argues, there is evidence to suggest that the expansion of the technologies of the sex industry—including more sophisticated sex toys as well as ever-greater kinds of pornography made ever more widely available via the internet—correlates with an increased rate of prostitution, not its decline (2015, p. 291).

Richardson further highlights the importance of empathy in our human interactions, where empathy is defined as "an ability to recognise, take into account, and respond to another person’s genuine thoughts and feelings" (2015, p. 291). The capacity for empathy is a major deficit in autism spectrum disorders—and, Richardson continues, in men who buy sex from prostitutes. This lack of empathy then reinforces Richardson’s deontological point—namely, "The buyer of sex is at liberty to ignore the state of the other person as a human subject who is turned into a thing" (2015, p. 291). As we have seen, empathy, as the critical capacity for understanding the emotions and intentions of others, thereby allowing us to properly interpret their behaviors and respond appropriately, is a primary virtue in human communication, friendship, and intimate relationships (Vallor, 2015). In this way, Richardson implicitly invokes a virtue ethics argument against sexbots. In Vallor’s terms, the risk here can be put in terms of an ethical deskilling: the more our sexual and emotional interests are satisfied by sexbots, and the more freely they are treated as objects (first of all, as they are our property), the less we are required to practice and enhance our abilities as empathic beings. This argument is more fully developed below.

Last, Richardson raises concerns as to how the robotics industry, by developing sexbots that are predominantly female, young, attractive, and designed for service roles, mirrors and reinforces prevailing "cultural models of race, class, and gender" (2015, p. 292). At least implicitly, this objection seems aimed again in a deontological direction—namely, against primary deontological commitments to respect for persons and equality. Moreover, we will see commitments of this sort undergirding a central set of further philosophical arguments against the sort of objectification of women that Richardson objects to.

Directly counter to Levy, then, Richardson has inaugurated a campaign against sex robots, modeled on earlier campaigns to prohibit Lethal Autonomous Weapons (LAWs) or so-called “warrior bots” (Campaign Against Sex Robots). In ethical terms, the contrast is even more foundational—namely, between Levy’s almost exclusively utilitarian approach versus Richardson’s predominately deontological and virtue ethics approach. Over against these apparent either/or oppositions, however, more recent work has carved out important middle grounds that help fill out the range of possible ethical responses to sexbots.

Eros and Complete Sex: Middle Grounds Between Levy and Richardson

A primary and directly philosophical critique of Levy’s arguments was developed by John Sullins (2012). To begin with, Sullins directly attacks Levy’s assumption that future robots will be capable of genuine emotions, including, presumptively, love. To the contrary, the developments in AI and robotics over the past decade have increasingly focused on artificial emotion—as we have seen, efforts to build into robots various ways of expressing what seem to be real emotions, ranging from facial expression and tone of voice to gesture. This focus comes in part from an increasing recognition that developing real emotions in AIs and thus robots is simply out of reach—and will likely remain so for a very long time to come. This is in part because the experience of emotions in human beings depends more foundationally on our first-person phenomenal consciousness, our capacity for self-consciousness, and a self-reflective "I" that, among other things, is the arena within which we experience our emotions (Bringsjord, Licato, Govindarajulu, Ghosh, & Sen, 2015). Absent such consciousness and thus real emotions, artificial emotions entail simulating the appearance of emotions as informed by the now well-established psychological response of human beings to such simulation—namely, our coming to feel as if the machine in fact cares for us (Turkle, 2011, among others). Sullins responds to Levy’s question, cited above, "but if a robot behaves as though it has feelings, can we reasonably argue that it does not?" (Levy, 2007, p. 11) with an emphatic yes. Where for Levy, evoking in human beings an emotional response of being loved through artificial emotions is apparently good enough—for Sullins, this is ethically objectionable on two grounds: first, it is an intentional deception, and second, it is wrong "to play on deep-seated human psychological weaknesses put there by evolutionary pressure as this is disrespectful of human agency" (Sullins, 2012, p. 408; cf. Ess, 2016, p. 65). This latter point is a specifically deontological one—an objection to the failure to respect our human autonomy.

Sullins then draws on virtue ethics and Plato’s understanding of eros and erotic love, as developed especially in the dialogue The Symposium. Within the framework of virtue ethics, Sullins highlights the guiding question as first put by Mark Coeckelbergh: how far robots may help us (and, at some point, themselves) lead a good life, a life of flourishing (Coeckelbergh, 2009, cited in Sullins, 2012, p. 402). For its part, erotic love highlights the autonomy of the beloved as a complete human being, one who brings into such a relationship the full range of distinctive interests, desires, experiences, fallibilities, strengths, demands, emotive responses, and so on that are specific to just that person. Erotic love thereby entails a kind of ignorance and correlative surprise. We do not fully or completely know what it is we seek in an erotic relationship that fulfills us in deeply emotional ways: part of the joy of erotic relationships is precisely the unexpected discovery of the Other, who surprises us with the various gifts and abilities that she or he brings into the relationship—gifts and abilities that fulfill us in ways we could not anticipate, because it is only in the meeting of such an Other that we first come to recognize the deficits in ourselves and our lives that the Other begins to fill and complement. This means, first and foremost, that the erotic Other, as fully autonomous and unique, cannot be “constructed” ahead of time, much less fully controlled. As Sullins puts it: “the main lesson Socrates was trying to give us in the Symposium is that we come into a relationship impoverished, only half knowing what we need; we can only find the philosophically erotic through the encounter with the complexity of the beloved, complexity that not only includes passion, but may include a little pain and rejection, from which we learn and grow” (2012, p. 408; cf. Ess, 2016, p. 65ff.).

I take it that the "pain and rejection" Sullins refers to here includes those experiences that follow directly from the autonomy of the Beloved as an Other. This freedom and independence allow the Beloved to sometimes choose not to fulfill our erotic interests and desires. (Indeed, the independence of the Beloved includes his or her physical abilities and preferences: s/he may simply not be able to conform to every desire sparked by the erotic imagination.) The autonomy of choice, especially, can lead to the pain of outright rejection. But these experiences may also help us learn to practice critical virtues—including empathy, compassion and forgiveness, and patience. That is, in the best circumstances, empathy may help us understand and accept a Beloved’s rejection, thereby fostering our compassion and, ideally, forgiveness for perceived slights. And patience will be necessary to weather these storms. As with all virtues, these require practice—and help us "learn and grow," as Sullins puts it, thereby contributing more broadly to a good life as a life of flourishing. That is, we are clearly more capable of developing and enjoying such a life the better we are at empathizing, understanding, forgiving, and being patient with others—whether in relationships of eros, friendship, family, colleagueship, and so on.

Good Sex and Complete Sex: Virtues and Deontology

These important connections between erotic love, virtues central to the good life, and deontological commitments rooted in human autonomy are made still clearer in the work of Sara Ruddick (1975). Ruddick is a primary founder of the ethics of care, and further draws on virtue ethics and deontology in conjunction with a sophisticated phenomenological account of sex. Ruddick affirms that “Any sexual act that is pleasurable is prima facie good” (1975, p. 101), but she distinguishes between good sex, better sex, and complete sex. To do so, Ruddick first points to our experiences as human beings, in which we are no longer aware of any distinction between our mind and subjectivity vis-à-vis our body. In philosophical terms, Ruddick thus contrasts a Cartesian, dualistic view of the mind as radically distinct from body with a phenomenological view that foregrounds our various experiences of embodiment. Ruddick specifically points to our experiences in sport, in which we are no longer aware of ourselves as minds somehow driving our bodies: rather, we enjoy the experience of complete embodiment. The self or subject is fully intermeshed with all the body is engaged in. In these experiences, we are our bodies as fully infused with our subjectivity and choice—rather than somehow disembodied minds precariously attached to a lumbering body (Ruddick, 1975, pp. 88–89).

Ruddick then develops a very careful and sensitive phenomenological description of what she calls complete sex. Complete sex entails sexual engagements that are not “just sex” (my term)—bodies being manipulated, as it were, from the distance of an observing, steering mind, oriented primarily towards maximizing sensation. Rather, complete sex engages both partners as embodied persons, whose bodily movements, gestures, and responses are inextricably interwoven with the person as a distinct autonomy and subjectivity. Directly contrary to the temptation in sex to treat the body of the other precisely as an object (a “thing,” to use Richardson’s language) taken up as a means for one’s immediate desires and gratifications—complete sex between persons thus invokes the deontological demand for respect for persons as autonomies and thereby as equals. On Ruddick’s account, this equality is interwoven with a second feature of complete sex—namely, the mutuality of sexual desire. Such mutuality is complex. It is not simply that each of the lovers desires the other: moreover, complete sex entails our desire that our desire for the Beloved is desired in turn. This mutuality of desire thus undergirds and reinforces the ethical demand that we regard the Other as an autonomy deserving respect as an equal, not simply a body made conveniently available for our use. (1975, pp. 89ff., 99ff.; cf. Ess, 2016, pp. 67–70).

As fully engaging the embodied subject as desiring both the Beloved and the desire of the Beloved, complete sex is thereby fully entangled with the critical virtue of love itself. On Ruddick’s showing—echoing Plato and anticipating Sullins—erotic love, like the other virtues, is difficult and requires practice. Again, the sexual context, most especially as fueled by strong desire, makes it all too easy to regard the Other primarily in terms of a body qua sexual object, a means for achieving our sexual ends. By contrast, the erotic love within complete sex, as shaped by embodiment and the correlative insistence on recognizing the Beloved as an autonomous person requiring respect as an equal, thus requires choice and practice. To be sure, such love and complete sex are thereby difficult and likely rare. And Ruddick is equally clear that the absence of these demanding conditions does not necessarily equate to bad sex. Rather, good sex can be experienced within relationships of care and respect and thereby has its ethically justifiable place (1975, p. 101).

Both Sullins and Ruddick thus point to the ethical importance of full respect for the autonomy of the Beloved as an equal person: in this way, they offer more elaborate ethical support for Richardson’s critique of prostitution and Levy’s larger vision of sexbots, as these violate the ethical necessity to recognize and respect the Other as not solely a body to be manipulated at will, but as a distinctive, autonomous, and thereby equal subject. Insofar as we are interested in sex and sexuality that qualify as erotic and complete in these ways, the correlative ethical commitments would preclude sex with prostitutes and sexbots as primarily bodies, not subjects. Contra Richardson, however, Ruddick’s account leaves open the possibility of using sexbots for good sex. In particular, specific sorts of populations, as Levy suggested, might be well served by sexbots used in therapeutic ways—for persons whose physical and/or emotional and/or social attributes and abilities may render them starkly unattractive and/or incapable of the sorts of relationships that would foster complete sex.

At the same time, however, the therapeutic value of sophisticated sexbots might well be limited. Recall that a requirement for complete sex is mutuality of desire. But however much sexbots might be able to fake emotions and thereby desire—they remain a kind of zombie, lacking both first-person phenomenal consciousness and thereby real emotion and desire. There might be circumstances in which sex with a zombie lover would count as good sex—as the robot analogue to sex between lovers in which one partner’s desire is low or perhaps even faked, for the sake of sustaining relationship and/or out of ongoing love and affection. But, contra Levy, I suspect that for many of us most of the time, a steady diet of zombie sex will quickly grow boring.

As Levy suggested, and as Richardson reinforces, our responses and attitudes here will vary somewhat by gender. In a recent survey on responses to having sex with robots, men were consistently more favorable than women (Scheutz & Arnold, 2016). But much remains to be seen as "the rest of us" gain more everyday opportunities to engage with sexbots as they further develop. In the meantime, however, this initial exploration helps us at least mark out primary starting points for DME. Broadly, if crudely speaking, more utilitarian approaches will find good reasons to endorse sex with robots as maximizing pleasure. More deontological approaches will argue against our use of sexbots, insofar as machine-human sex may thereby indirectly encourage us to treat real women and men as objects. Last, a virtue ethics approach makes room for "good sex" with sexbots, including for specific groups otherwise challenged to develop romantic and sexual relationships. But VE also warns against the dangers of deskilling—the loss of virtues critical not only to "complete sex," but also to human relationships more broadly, including the virtues of empathy, compassion and forgiveness, and patience. Positively, virtue ethics would encourage us rather to pursue and cultivate these virtues, along with the virtue of love itself, as core components of lives of flourishing and contentment.

Acknowledgments

I am very grateful to Herman Tavani for critical comments and helpful suggestions, especially with regard to the discussion of machine ethics and for a Venn diagram for illustrating the interrelationships between ICE, DME, and Machine Ethics/Robot Ethics.

Discussion of the Literature

A few general collections form a primary literature for DME. These include:

Davisson, A., & Booth, P. (Eds.). (2016). Controversies in digital ethics. London: Bloomsbury Academic. Contributions address privacy and surveillance, participatory culture, professional communication, and identity in diverse contexts, including games and social networking sites (SNS).

Ess, C. (2013). Digital media ethics (2d ed.). Oxford: Polity Press. The book provides an overview of major ethical frameworks taken up in DME (in chapter 6), and it explores their application vis-à-vis a range of issues. Organized as a textbook, each chapter includes discussion, reflection, and writing questions, as well as an overview of primary sources in the literature and suggestions for further reading.

Grimshaw, M. (Ed.). (2014). The Oxford handbook of virtuality. Oxford: Oxford University Press. Several contributions address ethical issues evoked specifically by virtual realities and virtual worlds, including sexual ethics (Albright & Simmens, Stenslie). The contribution of Charles Ess takes up "virtual adultery" and virtual child pornography.

Heider, D., & Massanari, A. (Eds.). (2012). Digital ethics: Research and practice. Oxford: Peter Lang. This volume includes a number of useful contributions on the topics of privacy, copyright, surveillance, games, sexting, and citizen journalism.

Ethical Frameworks

Feminist ethics is introduced and summarized in:

Gilligan, C. (1982). In a different voice: Psychological theory and women’s development. Cambridge, MA: Harvard University Press.

Noddings, N. (1984). Caring: A feminine approach to ethics and moral education. Berkeley, CA: University of California Press.

Ruddick, S. (1975). Better sex. In R. Baker & F. Elliston (Eds.), Philosophy and sex (pp. 280–299). Amherst, NY: Prometheus Books.

Tong, R., & Williams, N. (2016). Feminist ethics. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy archive. This article includes attention to the Ethics of Care: see section 2.1.

Virtue Ethics

As virtue ethics becomes a rapidly growing domain of both ICE and DME, its literature is expanding apace. The following are primary, often watershed contributions and resources:

Hursthouse, R. (1999). On virtue ethics. New York: Oxford University Press.

Vallor, S. (2015). Moral deskilling and upskilling in a new machine age: Reflections on the ambiguous future of character. Philosophy & Technology, 28, 107–124.

Vallor, S. (2016). Technology and the virtues: A philosophical guide to a future worth wanting. Oxford: Oxford University Press.

Verbeek, P.-P. (2011). Moralizing technology: Understanding and designing the morality of things. Chicago: University of Chicago Press.

Primary Topics in DME: Additional Resources

Bäcke, M. (2011). Make-believe and make-belief in Second Life role-playing communities. Convergence: The International Journal of Research into New Media Technologies, 18(1), 85–92.

Consalvo, M. (2007). Cheating: Gaining advantage in videogames. Cambridge, MA: MIT Press.

Floridi, L. (2005). The ontological interpretation of informational privacy. Ethics and Information Technology, 7(4), 185–200.

Floridi, L. (2006). Four challenges for a theory of informational privacy. Ethics and Information Technology, 8(3), 109–119.

Fromme, J., & Unger, A. (Eds.). (2012). Computer games and new media cultures: A handbook of digital games studies. Dordrecht, The Netherlands: Springer.

Hick, D. H., & Schmücker, R. (Eds.). (2016). The aesthetics and ethics of copying. New York: Bloomsbury Academic.

Hildebrandt, M., O’Hara, K., & Waidner, M. (Eds.). (2013). Digital enlightenment yearbook 2013: The value of personal data. Amsterdam: IOS Press. Contributions include general philosophical and historical backgrounds to matters of privacy, "privacy by design" approaches, and related matters of personal data management.

Sicart, M. (2007). The ethics of computer games. Cambridge, MA: MIT Press.

Sicart, M. (2014). Play matters. Cambridge, MA: MIT Press.

Smith, C. (2010). Pornographication: A discourse for all seasons. International Journal of Media and Cultural Politics, 6(1), 103–108.

Smith, C., Attwood, F., & Barker, M. (2012). Porn research: Preliminary findings.

Sundén, J., & Sveningsson, M. (2011). Gender and sexuality in online game cultures: Passionate play. New York: Routledge.

Thorn, C. (2013). Introduction: Reflections on game rape, feminism, sadomasochism, and selfhood. In C. Thorn & J. Dibbell (Eds.), Violation: Rape in gaming (pp. 4–23). Lexington, KY: CreateSpace Independent Publishing Platform.

Wiener, N. (1954). The human use of human beings: Cybernetics and society (2d rev. ed.). New York: Doubleday Anchor. (Original work published 1950, Boston: Houghton Mifflin.)

Wittkower, D. E. (Ed.). (2010). Facebook and philosophy: What’s on your mind? Chicago: Open Court Press. The Wittkower volume includes a number of very helpful essays that both defend and critique Facebook friendship.

Media and Journalism Ethics

Carlsson, U. (Ed.). (2016). Freedom of expression and media in transition: Studies and reflections in the digital age. Gothenburg, Sweden: Nordicom.

Ward, S. J. A. (Ed.). (2013). Global media ethics. Malden, MA: Wiley-Blackwell.

Machine Ethics, Robot Ethics, Robo-Philosophy

Anderson, M., & Anderson, S. L. (Eds.). (2011). Machine ethics. Cambridge, U.K.: Cambridge University Press.

Lin, P., Abney, K., & Bekey, G. A. (2012). Robot ethics: The ethical and social implications of robotics. Cambridge, MA: MIT Press.

Vallor, S. (2011a). Carebots and caregivers: Sustaining the ethical ideal of care in the twenty-first century. Philosophy & Technology, 24, 251–268.

Wallach, W., & Allen, C. (2009). Moral machines: Teaching robots right from wrong. New York: Oxford University Press.

Trust in Online Environments

Ess, C., & Thorseth, M. (Eds.). (2011). Trust and virtual worlds: Contemporary perspectives. London: Peter Lang.

Keymolen, E. (2016). Trust on the line: A philosophical exploration of trust in the networked era (PhD diss.). Rotterdam, The Netherlands: Erasmus University.

Taddeo, M. (2010). Trust in technology: A distinctive and a problematic relation. Knowledge, Technology, & Policy, 23(3–4), 283–286.

Taddeo, M., & Floridi, L. (2011). The case for e-trust. Ethics and Information Technology, 13(1), 1–3.

Further Reading

Turkle, S. (2011). Alone together: Why we expect more from technology and less of each other. Cambridge, MA: MIT Press. This is a watershed volume from one of the leading sociologists of new computer and communication technologies. Turkle’s earlier works were milestones and primary resources in our early understanding of how young people interact with ICTs. In contrast with the strongly positive, if not celebratory findings and tone of her earlier research, Alone Together brings forward an extensive array of considerably more critical concerns regarding the impacts of more recent forms of ICTs, including social media and mobile phones. While originally heavily criticized by many who wanted to retain an earlier optimism, Turkle’s volume has become widely recognized as marking out a major turning point in our understanding of the more negative impacts of ICTs.

Information and Computing Ethics (ICE)

Floridi, L. (Ed.). (2010). Information and computer ethics. Cambridge, U.K.: Cambridge University Press.

Himma, K. E., & Tavani, H. (Eds.). (2008). The handbook of information and computer ethics. Hoboken, NJ: John Wiley.

Miller, K., & Taddeo, M. (Eds.). (2017). The ethics of information technologies. Farnham, U.K.: Ashgate. A definitive collection of many of the most significant articles constituting Information and Computing Ethics (ICE), with focus on privacy, online trust, anonymity, values sensitive design, machine ethics, professional conduct, and moral responsibility of software developers.

Van den Hoven, J., & Weckert, J. (Eds.). (2008). Information technology and moral philosophy. Cambridge, U.K.: Cambridge University Press.

References

Agre, P. E., & Rotenberg, M. (Eds.). (1998). Technology and privacy: The new landscape. Cambridge, MA: MIT Press.

Alexander, L., & Moore, M. (2015). Deontological ethics. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy.

Baym, N. (2011). Social networks 2.0. In M. Consalvo & C. Ess (Eds.), The Blackwell handbook of Internet studies (pp. 384–405). Oxford: Wiley-Blackwell.

Braman, S. (2011). Anti-terrorism and the harmonization of media and communication policy. In R. Mansell & M. Raboy (Eds.), The handbook of global media and communication policy (pp. 486–504). Oxford: Blackwell.

Bringsjord, S., Licato, J., Govindarajulu, N. S., Ghosh, R., & Sen, A. (2015). Real robots that pass human tests of self-consciousness. In Robot and Human Interactive Communication (RO-MAN), 2015, The 24th International Symposium (pp. 498–504). August 31–September 4, 2015. Kobe, Japan.

Burk, D. (2007). Privacy and property in the global datasphere. In S. Hongladarom & C. Ess (Eds.), Information technology ethics: Cultural perspectives (pp. 94–107). Hershey, PA: Idea Group Reference.

Bynum, T. W. (2010). The historical roots of information and computer ethics. In L. Floridi (Ed.), The Cambridge handbook of information and computer ethics (pp. 20–38). Cambridge, U.K.: Cambridge University Press.

Coeckelbergh, M. (2009). Personal robots, appearance, and human good: A methodological reflection on roboethics. International Journal of Social Robotics, 1(3), 217–221.

Cohen, J. (2012). Configuring the networked self: Law, code, and the play of everyday practice. New Haven, CT: Yale University Press.

Conger, S., & Loch, K. D. (1995). Introduction. [Special issue on ethics and computer use]. Communications of the ACM, 38(12), 30–32.

Couldry, N. (2013). Why media ethics still matters. In S. J. Ward (Ed.), Global media ethics: Problems and perspectives (pp. 13–29). Oxford: Blackwell.

Debatin, B. (2011). Ethics, privacy, and self-restraint in social networking. In S. Trepte & L. Reinecke (Eds.), Privacy online (pp. 47–60). Berlin: Springer.

DIRECTIVE 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data. Official Journal of the European Communities, L 281/31 (November 23, 1995).

Ess, C. (2015). New selves, new research ethics? In H. Ingierd & H. Fossheim (Eds.), Internet research ethics (pp. 48–76). Oslo, Norway: Cappelen Damm.

Ess, C. (2016). What’s love got to do with it? Robots, sexuality, and the arts of being human. In M. Nørskov (Ed.), Social robots: Boundaries, potential, challenges (pp. 57–79). Farnham, U.K.: Ashgate.

Ess, C., & Fossheim, H. (2013). Personal data: Changing selves, changing privacy expectations. In M. Hildebrandt, K. O’Hara, & M. Waidner (Eds.), Digital enlightenment forum yearbook 2013: The value of personal data (pp. 40–55). Amsterdam: IOS Amsterdam.

Ess, C. et al. (2002). Ethical decision making and Internet research: Recommendations from the AOIR ethics working committee. Retrieved from http://aoir.org/reports/ethics.pdf.

Feenberg, A. (2010). Between reason and experience: Essays in technology and modernity. Cambridge, MA: MIT Press.

Floridi, L. (Ed.). (2010). The Cambridge handbook of information and computer ethics. Cambridge, U.K.: Cambridge University Press.

Foucault, M. (1988). Technologies of the self. In L. H. Martin, H. Gutman, & P. Hutton (Eds.), Technologies of the self: A seminar with Michel Foucault (pp. 16–49). Amherst: University of Massachusetts Press.

Gibbs, S. (2015). What is “safe harbor” and why did the EUCJ just declare it invalid? The Guardian, October 6.

Greenleaf, G. (2011). Asia-Pacific data privacy: 2011, year of revolution (UNSW Law Research Paper No. 2011–29). Retrieved from http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1914212.

Hongladarom, S. (2016). The online self: Externalism, friendship, and games. New York: Springer.

Hursthouse, R. (1999). On virtue ethics. New York: Oxford University Press.

Jackson, D., Aldrovandi, C., & Hayes, P. (2015). Ethical framework for a disaster management decision support system which harvests social media data on a large scale. In N. B. Ben Saoud, C. Adam, & C. Hanachi (Eds.), Information systems for crisis response and management in Mediterranean countries (pp. 167–180). Cham, Switzerland: Springer.

Kant, I. (1956). Critique of practical reason (L. White Beck, Trans.). Indianapolis, IN: Bobbs-Merrill.

Kant, I. (1959). Foundations of the metaphysics of morals (L. White Beck, Trans.). Indianapolis, IN: Bobbs-Merrill.

Kant, I. (1991). What is enlightenment? (H. B. Nisbet, Trans.). In H. Reiss (Ed.), Kant: Political writings (2d ed., pp. 54–60). Cambridge, U.K.: Cambridge University Press.

Kaupang, H. (2014). Information privacy and applications: A case study of user behaviors and attitudes in Norway (Master’s thesis). University of Oslo. Retrieved from https://www.duo.uio.no/handle/10852/45599.

Kitiyadisai, K. (2005). Privacy rights and protection: Foreign values in modern Thai context. Ethics and Information Technology, 7(1), 17–26.

Kraut, R., Olson, J., Banaji, M., Bruckman, A., Cohen, J., & Cooper, M. (2004). Psychological research online: Report of Board of Scientific Affairs’ Advisory Group on the Conduct of Research on the Internet. American Psychologist, 59(2), 105–117.

Lange, P. G. (2007). Publicly private and privately public: Social networking on YouTube. Journal of Computer-Mediated Communication, 13(1), 361–380.

Levy, D. (2007). Love and sex with robots: The evolution of human-robot relationships. New York: HarperCollins.

Li, J., Ju, W., & Reeves, B. (2016). Touching a mechanical body: Tactile contact of a human-shaped robot is physiologically arousing. Paper presented at the annual conference of the International Communication Association, June 9–13, 2016, Fukuoka, Japan.

Lin, P., Abney, K., & Bekey, G. A. (Eds.). (2012). Robot ethics: The ethical and social implications of robotics. Cambridge, MA: MIT Press.

Lindgren, S. (2017). Digital media and society. London: SAGE.

Lü, Y.-H. (2005). Privacy and data privacy issues in contemporary China. Ethics and Information Technology, 7(1), 7–15.

MacIntyre, A. (1994). After virtue: A study in moral theory (2d ed.). Guilford: Duckworth.

Marwick, A., & Boyd, D. (2014). Networked privacy: How teenagers negotiate context in social media. New Media & Society, 16(7), 1051–1067.

Miller, K., & Taddeo, M. (Eds.). (2017). The ethics of information technologies. Surrey, U.K.: Ashgate.

Moor, J. H. (1985). What is computer ethics? Metaphilosophy, 16(4), 266–275.

NESH (The National Committee for Research Ethics in Norway). (2006). Guidelines for research ethics in the social sciences, law, and the humanities. Retrieved from https://www.etikkom.no/globalassets/documents/english-publications/guidelines-for-research-ethics-in-the-social-sciences-law-and-the-humanities-2006.pdf.

Nissenbaum, H. (2010). Privacy in context: Technology, policy, and the integrity of social life. Stanford, CA: Stanford Law Books.

Nissenbaum, H. (2011). A contextual approach to privacy online. Daedalus, 140(4), 32–48.

Nørskov, M. (Ed.). (2016). Social robots: Boundaries, potential, challenges. Farnham, Surrey, England: Ashgate.

Ong, W. (1988). Orality and literacy: The technologizing of the word. London: Routledge.

Parson, J., McCrum, D., & Watkinson, D. (2016). Sex robots could be “biggest trend of 2016” as more lonely humans seek mechanical companions. Mirror. Retrieved from http://www.mirror.co.uk/news/world-news/sex-robots-could-biggest-trend-7127554.

Pfaffenberger, B. (1996). “If I want it, it’s ok”: Usenet and the (outer) limits of free speech. The Information Society, 12, 365–386.

Plato. (1991). The Republic with notes, an interpretive essay, and a new introduction (A. Bloom, Trans.). New York: Basic Books.

Puech, M. (2016). The ethics of ordinary technology. New York: Routledge.

Rachels, J. (1975). Why privacy is important. Philosophy and Public Affairs, 4(4), 323–333.

Regulation (EU) 2016/679 of the European Parliament and of the Council. Official Journal of the European Union, L 119/1.

Reidenberg, J. (2000). Testimony of Joel R. Reidenberg before the Subcommittee on Courts and Intellectual Property Committee on the Judiciary, United States House of Representatives: Oversight Hearing on Privacy and Electronic Commerce, May 18, 2000.

Richardson, K. (2015). The asymmetrical “relationship”: Parallels between prostitution and the development of sex robots. SIGCAS Computers & Society, 45(3), 290–293.

Rouvroy, A. (2008). Privacy, data protection, and the unprecedented challenges of ambient intelligence. Studies in Ethics, Law, and Technology, 2(1), Article 3.

Sandry, E. (2015). Re-evaluating the form and communication of social robots. International Journal of Social Robotics, 7(3), 335–346.

Sanger, D. (2016). “Shadow brokers” leak raises alarming question: Was the N.S.A. hacked? New York Times, August 16.

Scheutz, M., & Arnold, T. (2016). Are we ready for sex robots? In Proceedings of the Eleventh ACM/IEEE International Conference on Human-Robot Interaction (HRI 2016) (pp. 351–358). Piscataway, NJ: IEEE Press.

Seibt, J., & Nørskov, M. (2012). “Embodying” the Internet: Towards the moral self via communication robots? Philosophy & Technology, 25, 285–307.

Sinnott-Armstrong, W. (2015). Consequentialism. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy.

Spiekermann, S. (2016). Ethical IT innovation: A value-based system design approach. New York: Taylor & Francis.

Stahl, B. (2004). Responsible management of information systems. Hershey, PA: Idea Group.

Sui, S. (2011). The law and regulation on privacy in China. Paper presented at the Rising Pan European and International Awareness of Biometrics and Security Ethics (RISE) conference, Beijing, October 20–21.

Sullins, J. (2012). Robots, love, and sex: The ethics of building a love machine. IEEE Transactions on Affective Computing, 3(4), 398–409.

Tavani, H. (2013). Ethics and technology: Controversies, questions, and strategies for ethical computing (4th ed.). Hoboken, NJ: Wiley.

Trappl, R. (Ed.). (2015). A construction manual for robots’ ethical systems: Requirements, methods, implementations. London: Springer.

Turkle, S. (2011). Alone together: Why we expect more from technology and less of each other. Cambridge, MA: MIT Press.

Utz, S., & Krämer, N. C. (2009). The privacy paradox on social network sites revisited: The role of individual characteristics and group norms. Cyberpsychology: Journal of Psychosocial Research on Cyberspace, 3(2).

Vallor, S. (2011a). Carebots and caregivers: Sustaining the ethical ideal of care in the twenty-first century. Philosophy & Technology, 24, 251–268.

Vallor, S. (2011b). Flourishing on Facebook: Virtue friendship & new social media. Ethics and Information Technology, 14(3), 185–199.

Vallor, S. (2016). Technology and the virtues: A philosophical guide to a future worth wanting. Oxford: Oxford University Press.

van Wynsberghe, A. (2013). Designing robots for care: Care centered value-sensitive design. Science and Engineering Ethics, 19, 407–433.

Verbeek, P.-P. (2011). Moralizing technology: Understanding and designing the morality of things. Chicago: University of Chicago Press.

Vignoles, V. L., Owe, E., Becker, M., Smith, P. B., Easterbrook, M. J., Brown, R., et al. (2016). Beyond the “east–west” dichotomy: Global variation in cultural models of selfhood. Journal of Experimental Psychology: General, 145(8), 966–1000.

Wallach, W., & Asaro, P. (Eds.). (2017). Machine ethics and robot ethics. London: Routledge.

Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation. New York: W. H. Freeman.

Westin, A. F. (1970). Privacy and freedom. London: The Bodley Head.

Westin, A. F. (2003). Social and political dimensions of privacy. Journal of Social Issues, 59(2), 431–453.

White, A. (2008). IP addresses are personal data, E.U. regulator says. Washington Post, January 22, p. D1. Retrieved from http://www.washingtonpost.com/wp-dyn/content/article/2008/01/21/AR2008012101340.html.

Zevenbergen, B., Mittelstadt, B., Véliz, C., Detweiler, C., Cath, C., Savulescu, J., et al. (2015). Philosophy meets Internet engineering: Ethics in networked systems research (GTC workshop outcomes paper). Oxford Internet Institute, University of Oxford.

Zevenbergen, B. (2016). Networked systems ethics. Ethics in networked systems research: Ethical, legal, and policy reasoning for Internet engineering. Oxford Internet Institute, University of Oxford. Retrieved from http://ensr.oii.ox.ac.uk/author/ben-zevenbergen/.

Notes:

(1.) In contemporary usage, demotic refers primarily to common or everyday uses of language, as opposed to more complex or literary forms (Merriam-Webster). Here, the term refers to three distinct features of DME that extend from narrower to broader communities and populations.