Hearing loss is common, with approximately 17% of the population reporting some degree of hearing deficit. Hearing loss has profound impacts on health literacy, health information accessibility, and learning. Much existing health information is inaccessible, largely because messages are not tailored to the needs of deaf and hard of hearing (DHH) individuals. DHH individuals face a variety of health knowledge gaps and health disparities, which underscores the importance of providing tailored and accessible health information for this population. While hearing loss is heterogeneous, there are overlapping principles that can benefit everyone. Through adaptation, DHH individuals become visual learners, increasing the demand for appropriate visual medical aids. Health information and materials developed for visual learners will likely benefit not only DHH individuals but the general population as well. The principles of social justice and universal design behoove health message designers to ensure that their health information is not only accessible but also equitable. Wise application of technology, health literacy, and information learning principles, along with creative use of social media, peer exchanges, and community health workers, can help close much of the health information gap that exists among DHH individuals.
In the European Union, “television-like” is a legal concept, introduced in 2007 as part of a political compromise over the scope of the new Audiovisual Media Services Directive (AVMSD). The European Commission had originally intended to expand the new rules on linear television programming to cover all new nonlinear audiovisual content services intended for the same audiences online as well. This approach was opposed by the U.K. government, which saw it as potentially harmful to the growth of the new online media. Although left practically alone in its opposition during the EU decision-making process, the U.K. government, with the support of the U.K. regulator Ofcom and the U.K. industry alliance, managed to limit the new directive to cover only “television-like” online services. According to AVMSD Recital 24, these services should “compete for the same audience as television broadcasts,” while “the concept of ‘programme’ should be interpreted in a dynamic way taking into account developments in television broadcasting.” The vagueness of this concept has left room for very different and even opposing interpretations. A number of national regulatory authorities in Europe, as well as the Court of Justice of the European Union, argue that parts of some newspapers’ websites can also be classified as video-on-demand services, while Ofcom has systematically excluded all the audiovisual services on the websites of British newspapers from regulation.
Creating a clear definition of “TV-like” content or services is difficult not just because of the vague wording of the EU directive or digital media convergence, but because the whole concept rests on another set of concepts whose definitions are highly dependent on time and context: television, program, and channel as a practice of packaging content into a linear transmission schedule. Early TV did indeed show radio programming in production, functioning as radio with pictures. From a contemporary perspective, full-length films may seem to be typical television content, but most of them were originally made for theatrical distribution. Over the years, the audiovisual media formerly known as television has expanded onto multiple platforms, and its content has also been available in different on-demand formats for several decades. So depending on your perspective, there is either a plenitude of “TV-like” content services besides genuine TV or a wide variety of different flavors of television. Currently, it can be argued whether TV is in terminal decline or merely integrating with mobile and online media, but it is obvious that any effort to define “TV-like” content makes sense only as long as the traditional, linear type of (broadcast) TV continues to play an important role in our societies and media cultures.
J. Macgregor Wise
This is an advance summary of a forthcoming article in the Oxford Research Encyclopedia of Communication. Please check back later for the full article.
Gilles Deleuze (1925–1995) was a contemporary philosopher who taught at the University of Paris, Vincennes-St. Denis. He produced a wide range of work, from commentaries on philosophers (Kant, Spinoza, Nietzsche, Bergson, Hume, Leibniz, and Foucault) to analyses of film, literature, and painting. Two of his key contributions to philosophy are The Logic of Sense and Difference and Repetition. With his collaborator, the radical psychoanalyst Félix Guattari, he wrote four influential books, including Anti-Oedipus: Capitalism and Schizophrenia and A Thousand Plateaus. Deleuze did not develop a coherent, fixed framework of concepts, but rather an approach to philosophy based on immanence rather than transcendence, becoming rather than being, and multiplicity rather than singularity. Deleuze’s work is an affirmation of life and creativity, a vitalism. “Everything I’ve written is vitalistic, at least I hope it is, and amounts to a theory of signs and events” (p. 143). Each of Deleuze’s books seems to generate a new collection of concepts to grapple with the problem at hand. Three key concepts for Deleuze are rhizome, multiplicity, and assemblage.
For Deleuze and Guattari, the guiding image of thought was that of the rhizome. The idea of the rhizome is contrasted with that of the tree or the root. In the latter, there is the singular origin, the center. A rhizome is a structure without a center; it grows by sending off shoots (like crabgrass or potatoes). You are always in the middle with a rhizome, never at the start or end. The point is to connect. Like rhizomes, multiplicities must be made, and they are made by subtracting the unique. Multiplicities and rhizomes have sections that get structured, stratified, and pinned down, but then also always have lines of flight by which to escape.
An assemblage “establishes connections between certain multiplicities” (p. 23) and “stake[s] out a territory” (p. 503). An assemblage is always territorializing (bringing together various elements in a particular arrangement) and de-territorializing (opening up onto other territories, de-organizing). In addition to this dimension, an assemblage also involves the stratification of systems of language and systems of technology in a relation of expression and content. The former they call collective assemblages of enunciation and the latter, machinic assemblages (of bodies, “actions and passions”). An assemblage is always articulating arrangements of bodies, discourses, affects, and other elements. Crucially, assemblages are always in process and are not stable structures; they are becomings.
How then to think of communication within this conceptual context? Deleuze and Guattari reject the idea of communication as intersubjective. There is not an individual subject speaking; there is only the collective assemblage of enunciation. They speak instead of language, but a language of order words. Communication is not about representation or signification. Deleuze tends to treat communication as a form of control. Some of Deleuze’s final essays and interviews were devoted to explicating this new social power of control. Contrasting with Michel Foucault’s influential ideas on the rise of disciplinary society, Deleuze maps the emergence of societies of control “that no longer operate by confining people but through continuous control and instant communication” (p. 174).
For those studying communication, Deleuze’s legacy is featured in three areas: the material turn in communication studies and critical theory; the rise in theories of affect; and notions of control with regard to theories of contemporary surveillance.
Since the early 2000s, Digital Media Ethics (DME) has emerged as a relatively stable subdomain of applied ethics. DME seeks nothing less than to address the ethical issues evoked by computing technologies and digital media more broadly, such as cameras, mobile and smartphones, GPS navigation systems, biometric health monitoring devices, and, eventually, “the Internet of things,” as these have developed and diffused into more or less every corner of our lives in the (so-called) developed countries. DME can be characterized as demotic—of the people—in three important ways. One, in contrast with specialist domains such as Information and Computing Ethics (ICE), it is intended as an ethics for the rest of us—namely, all of us who use digital media technologies in our everyday lives. Two, these manifold contexts of use dramatically expand the range of ethical issues computing technologies evoke, well beyond the comparatively narrow circle of issues confronting professionals working in ICE. Three, while drawing on the expertise of philosophers and applied ethicists, DME likewise relies on the ethical insights and sensibilities of additional communities, including (a) the multiple communities of those whose technical expertise comes into play in the design, development, and deployment of information and communication technology (ICT); and (b) the people and communities who use digital media in their everyday lives.
DME further employs both ancient ethical philosophies, such as virtue ethics, and modern frameworks of utilitarianism and deontology, as well as feminist ethics and ethics of care: DME may also take, for example, Confucian and Buddhist approaches, as well as norms and customs from relevant indigenous traditions where appropriate. The global distribution and interconnection of these devices means, finally, that DME must also take on board often profound differences between basic ethical norms, practices, and related assumptions as these shift from culture to culture. What counts as “privacy” or “pornography,” to begin with, varies widely—as do the more fundamental assumptions regarding the nature of the person that we take up as a moral agent and patient, rights-holder, and so on. Of first importance here is how far we emphasize the more individual vis-à-vis the more relational dimensions of selfhood—with the further complication that these emphases appear to be changing locally and globally.
Nonetheless, DME can now map out clear approaches to early concerns with privacy, copyright, and pornography that help establish a relatively stable and accepted set of ethical responses and practices. By comparison, violent content (e.g., in games) and violent behavior (cyber-bullying, hate speech) are less well resolved. Still, as with the somewhat more recent issues of online friendship and citizen journalism, an emerging body of literature and analysis points to initial guidelines and resolutions that may become relatively stable. Such resolutions must be pluralistic, allowing for diverse applications and interpretations in different cultural settings, so as to preserve and foster cultural identity and difference.
Of course, still more recent issues and challenges are in the earliest stages of analysis and efforts at forging resolutions. Primary issues include “death online” (including suicide websites and online memorial sites, evoking questions of censorship, the right to be forgotten, and so on); “Big Data” issues such as pre-emptive policing and “ethical hacking” as counter-responses; and autonomous vehicles and robots, ranging from Lethal Autonomous Weapons to carebots and sexbots. Clearly, not every ethical issue will be quickly or easily resolved. But the emergence of relatively stable and widespread resolutions to the early challenges of privacy, copyright, and pornography, coupled with developing analyses and emerging resolutions vis-à-vis more recent topics, can ground cautious optimism that, in the long run, DME will be able to take up the ethical challenges of digital media in ways reasonably accessible and applicable for the rest of us.
Courtney Barclay and Kearston Wesner
Drones armed with cameras have allowed journalists to capture images from new perspectives and in places previously unreachable. Footage of volcanic eruptions, war-torn villages, and nuclear disaster areas has been made possible by drone technology. However, this same technology presents risks to personal privacy.
Since before Warren and Brandeis penned the oft-cited “The Right to Privacy,” newsgatherers have tested the boundaries of society’s notion of privacy. The development of new technologies at the time, such as the snap camera, made photography a faster, more efficient endeavor. Warren and Brandeis recognized that the increased photographic recording of society threatened individual privacy on a scale never before imagined. More than a century later, the use of new technology—drones outfitted with cameras and other imaging devices—has once again ignited debate over how to protect an individual’s privacy while ensuring journalists’ ability to gather news.
The traditional remedy for intrusive journalism has been through tort law, which requires an individual to show that she or he had a reasonable expectation of privacy. By and large, these laws have favored journalists; however, that result is usually based on the fact that the newsgathering activity occurred in a public place rather than any recognition of the importance of newsgathering. State lawmakers have begun to address drone photography with a wide variety of approaches that would move away from this public place exception—from prohibiting photography over private property to prohibiting any photography without someone’s consent, even in a public place.
The press has recognized the cost to individual privacy incurred by use of technologies such as drone photography. Professional codes of ethics instruct journalists to minimize harm to the public, requiring an “overriding” public interest to invade someone’s privacy. The Professional Society of Drone Journalists’ Code of Ethics addresses the additional responsibilities inherent to drone technology. Under this code, journalists should record only public spaces and delete any images of individuals in a private space.
Drone technology represents only one of the latest developments in surveillance used for law enforcement, commercial enterprise, and journalism. However, its growth and the gaps in privacy tort law underscore the importance of strong codes of ethics that serve the interests of both newsgathering and individual privacy.
Copyright exceptions and limitations in the United States have undergone dynamic evolution in light of new technological developments. There has been significant legal debate in the courts and in the United States Congress about the scope of the defense of fair use. The copyright litigation over Google Books has been a landmark development in the modern history of copyright law. The victory of Google, Inc., over the Authors Guild in the decade-long copyright dispute is an important milestone for copyright law. Judge Leval’s ruling emphasizes that the defense of fair use in the United States plays a critical role in promoting transformative creativity, freedom of speech, and innovation. The Supreme Court of the United States was decisive in its rejection of the Authors Guild’s efforts to challenge Judge Leval’s decision. There has been significant debate in the United States Copyright Office and United States Congress over the development of “the Next Great Copyright Act.” Hearings have taken place within the United States Congressional system about the history, nature, and future of the defense of fair use under United States copyright law. There remains much debate about the internationalization of the defense of fair use, and the need for the trading partners of the United States to enjoy similar flexibilities with respect to copyright exceptions. There has been concern about the impact of mega-regional trade agreements—such as the Trans-Pacific Partnership—upon copyright exceptions, such as the defense of fair use.
Questions related to identity have been central to discussions on online communication since the dawn of the Internet. One of the positions advocated by early Internet pioneers and scholars of computer-mediated communication was that online communication would differ from face-to-face communication in the way traditional markers of identity (such as gender, age, etc.) would be visible to interlocutors. It was theorized that these differences would manifest both as reduced social cues and as greater control over the way we present ourselves to others. This position was linked to ideas about fluid identities and identity play inherent to post-modern thinking. Lately, the technological and societal developments related to online communication have raised questions concerning, for example, the authenticity and traceability of identity.
In addition to the individual level, scholars have been interested in issues of social identity formation and identification in the context of online groups and communities. It has been shown, for example, how apparent anonymity in initial interactions can lead to heightened identification/de-individuation on the group level. A related key question is how group identity and identification with the group relate to intergroup contact in online settings. How do people perceive others’ identity, as well as their own, in such contact situations? To what extent is intergroup contact still intergroup contact if the parties involved do not perceive it as such? As online communication continues to offer a key platform for contact between various types of social groups, questions of identity and identification remain at the forefront of scholarship into human communication behavior in technology-mediated settings.
Sun Joo (Grace) Ahn and Jesse Fox
Immersive virtual environments (IVEs) are systems composed of digital devices that simulate multiple layers of sensory information so that users experience sight, sound, and even touch as they do in the physical world. Users are typically represented in these environments in the form of virtual humans and may interact with other virtual representations such as health-care providers, coaches, future selves, or treatment stimuli (e.g., phobia triggers, such as crowds of people or spiders). These virtual representations can be controlled by humans (avatars) or computer algorithms (agents). By embodying avatars and interacting with agents, patients can experience sensory-rich simulations in the virtual world that may be difficult or even impossible to experience in the physical world but are sufficiently real to influence health attitudes and behaviors. Avatars and agents are infinitely customizable to tailor virtual experiences at the individual level, and IVEs are able to transcend the spatial and temporal boundaries of the physical world. Although still preliminary, a growing number of studies demonstrate IVEs’ potential as a health promotion and therapy tool, complementing and enhancing current treatment regimens. Attempts to incorporate IVEs into treatments and intervention programs have been made in a number of areas, including physical activity, nutrition, rehabilitation, exposure therapy, and autism spectrum disorders. Although further development and research are necessary, the increasing availability of consumer-grade IVE systems may allow clinicians and patients to consider IVE treatment as a routine part of their regimen in the near future.
Brenda L. Berkelaar and Millie A. Harrison
Information visibility refers to the degree to which information is available and accessible. Availability focuses on whether people could acquire particular information if they wanted to. Accessibility focuses on the effort needed to acquire available information. In the scholarly, industry, and popular press, people often conflate information visibility with transparency, yet transparency is generally a valued or ideological concept, whereas visibility is an empirical concept. Growing interest in studying and managing information visibility corresponds with the rapid growth in the use of digital, networked technologies. Yet interest in information visibility existed prior to the introduction of networked information and communication technologies. Research has historically focused on information visibility as a form of social control and as a tool to increase individual, organizational, and social control and coordination. As a research area, information visibility ties into classic communication and interdisciplinary concerns, as well as core concerns of contemporary society, including privacy, surveillance, transparency, accountability, democracy, secrecy, coordination, control, and efficiency. An emerging research area with deep historical roots, information visibility offers a promising avenue for future research.
Internet neutrality—usually net(work) neutrality—encompasses the idea that all data packets circulating on the Internet should be treated equally, without discriminating between users, types of content, platforms, sites, applications, equipment, or modes of communication. The debate about this normative principle revolves around the Internet as a set of distribution channels and around how, and by whom, these channels can be used to control communication. The controversy was spurred by advancements in technology, the increased usage of bandwidth-intensive services, and the changing economic interests of Internet service providers. Internet service providers are not only important technical actors but also central economic actors in the management of the Internet’s architecture. They seek to increase revenue, to recover the cost of sizable infrastructure upgrades, and to expand their business models. This has consequences for the net neutrality principle, for individual users, and for corporate content providers. Where Internet service providers become content providers themselves, net neutrality proponents fear that providers may exclude competitor content, distribute it poorly and more slowly, and require competitors to pay for using high-speed networks. Net neutrality is not only a debate on infrastructure business models carried out in economic expert circles. On the contrary, and despite its technical character, it has become an issue in the public debate, one framed not only in economic but also in political and social terms. The main dividing line in the debate is whether net neutrality regulation is necessary and, if so, what scope net neutrality obligations should have. The Federal Communications Commission (FCC) in the United States passed new net neutrality rules in 2015 and strengthened their legal underpinning regarding the regulation of Internet service providers (ISPs).
With the Telecoms Single Market Regulation, there will for the first time be European Union–wide legislation on net neutrality, albeit with recently diluted requirements. From a communication studies perspective, Internet neutrality is a significant issue because it relates to a number of topics addressed in communication research, including communication rights, diversity of media ownership, media distribution, user control, and consumer protection. The connection between the legal and economic bodies of research that dominate the net neutrality literature and communication studies is largely underexplored. The study of net neutrality would benefit from such a linkage.