Media Literacy as a Consideration in Health and Risk Message Design
Summary and Keywords
Media literacy describes the ability to access, analyze, evaluate, and produce media messages. As media messages can influence audiences’ attitudes and behaviors toward various topics, such as attitudes toward others and risky behaviors, media literacy can counter potential negative media effects, a crucial task in today’s oversaturated media environment. Media literacy in the context of health promotion is addressed by analyzing the characteristics of 54 media literacy programs conducted in the United States and abroad that have successfully influenced audiences’ attitudes and behaviors toward six health topics: prevention of alcohol use, prevention of tobacco use, eating disorders and body image, sex education, nutrition education, and violent behavior. Because media literacy can change how audiences perceive the media industry and critique media messages, it could also reduce the potential harmful effects media can have on audiences’ health decision-making process.
The majority of the interventions have focused on youth, likely because children’s and adolescents’ lack of cognitive sophistication may make them more vulnerable to potentially harmful media effects. The design of these health-related media literacy programs varied. Many studies’ interventions consisted of a single lesson, while others were multi-month, multi-lesson interventions. The majority of these programs’ content was developed and administered by a team of researchers affiliated with local universities and schools, and was focused on three main areas: reduction of media consumption, media analysis and evaluations, and media production and activism. Media literacy study designs almost always included a control group that did not take part in the intervention to confirm that potential changes in health and risk attitudes and behaviors among participants could be attributed to the intervention. Most programs were also designed to include at least one pre-intervention test and one post-intervention test, with the latter usually administered immediately following the intervention. Demographic variables, such as gender, age or grade level, and prior behavior pertaining to the health topic under study, were found to affect participants’ responses to media literacy interventions.
In these 54 studies, a number of key media literacy components were clearly absent from the field. First, adults—especially those from historically underserved communities—were noticeably missing from these interventions. Second, media literacy interventions were often designed with a top-down approach, with little to no involvement from or collaboration with members of the target population. Third, the creation of counter media messages tailored to individuals’ needs and circumstances was rarely the focus of these interventions. Finally, these studies paid little attention to evaluating the development, process, and outcomes of media literacy interventions with participants’ sociodemographic characteristics in mind. Based on these findings, it is recommended that health-related media literacy programs fully engage community members at all steps, including in the critical analysis of current media messages and the production and dissemination of counter media messages. Health-related media literacy programs should also equip participants and community members with tools to advocate for their own causes and health behaviors.
Keywords: media literacy, intervention, alcohol use, eating disorders and body image, nutrition education, sex education, tobacco use, violent behavior, evaluation, sociodemographic characteristics, health and risk message design and processing
Media literacy programs in the context of health and risk message design and processing are explored. Media literacy has gained momentum in disciplines such as communication and public health as one of the health promotion tools used to enhance the well-being of audiences. Community organizations have a long tradition of developing media literacy programs with and without the involvement of academics, but the focus here is only on formal evaluations of media literacy programs published in academic journals. The relationship between media messages and audiences’ perceptions related to these messages is discussed, and the role media literacy plays in modifying such media effects is addressed. Special attention is given to audiences’ sociodemographic characteristics, lesson content, and evaluation measures. Limitations of those published programs and recommendations to enhance media literacy effectiveness are also discussed.
Part I: Background Information
Media messages are increasingly becoming a ubiquitous part of people’s lives. Adults in the United States spend an average of about 10 hours per day consuming media, including slightly more than 5 hours of television (live or time-shifted), about 3 hours online (via a smartphone, computer, or tablet) and almost 2 hours of radio (Nielsen, 2016). Teenagers (12 to 17 years old) spend a little more than 3 hours a day watching television (Nielsen, 2015), and 92% go online daily, with 24% of them reporting being “almost constantly” online (Pew Research Center, 2015). Considering an average 8-hour work or school day and about as many hours spent sleeping, coupled with technology advances that increase ease of access, it is crucial to explore the role media play in people’s lives. While different types of media content can affect people in various ways, it would be hard to argue that such high levels of media exposure would leave audiences immune to potential media effects.
People have been suspicious of the potential effects that media messages can have on audiences since the development of mass communication devices such as the radio and the movies at the beginning of the 20th century (Pavlik & McIntosh, 2011). Some believed that the lyrics of jazz songs broadcast on the radio, coupled with the portrayals of gangsters’ lifestyles in motion pictures, “would sweep away the sexual prohibitions and other standards of decency in society, leaving it in a state of moral ruin” (DeFleur, 2010, p. 122). Early media theories assumed that media messages could reach all audiences with powerful effects that would shape audiences’ thoughts and behaviors (DeFleur, 2010). Since then, our understanding of media effects has become much more nuanced. However, the early idea that media can affect people still holds true under certain circumstances, depending on the type of media messages and the characteristics of audiences.
One of the most influential theories aiming to understand the influence of mass media is cultivation, a term coined by George Gerbner in the late 1960s to refer to the cultural and social consequences television viewing has on audiences (Morgan, Shanahan, & Signorielli, 2009). Cultivation theory states that audiences’ repeated exposure to media shapes their beliefs and attitudes according to the mediated images they see so that exposure to media messages influences people to perceive the world in a certain manner (Signorielli, 1990). Cultivation theory’s main tenet implies that “those who spend more time ‘living’ in the world of television are more likely to see the ‘real world’ in the terms of the images, values, portrayals, and ideologies that emerge through the lens of television” (Morgan et al., 2009, p. 35).
With time, theories of media effects have shifted emphasis from the amount of time spent exposed to media messages to the types of cognitive mechanisms used to process such media messages. For instance, audiences’ motivation and ability to process information can moderate media effects. Audiences with low motivation and ability levels are more likely to be influenced by media messages than those with high motivation and ability levels exposed to the same media messages (Shrum, 2007). In other words, simply thinking about a particular topic can diminish the influence of media representations of that topic on audiences’ thoughts (Shrum, 2001).
Other factors can also moderate the influence of media messages. The availability and accessibility of media messages in audiences’ minds or memories play a role in how likely audiences are to rely on media messages when thinking about a topic (Entman, 1993). The intensity and recency of media exposure also affect how available and accessible media messages are (Arendt, 2013). While these concepts have been empirically related to media effects, it is important to keep in mind audiences’ individual differences. For instance, audiences who are more knowledgeable about a particular topic are less likely to be influenced by media representations of that topic (Scheufele & Tewksbury, 2007). Further research exploring audiences’ individual differences has found that sociodemographic characteristics, such as race (e.g., Appiah, 2003; Brown & Pardun, 2004), gender (e.g., Harrison, Taylor, & Marske, 2006; Knobloch-Westerwick, 2007), age (e.g., Mares & Woodard, 2006; Samson & Grabe, 2012), education (e.g., Hargittai, 2010; Nojin, 1999), and income (e.g., Iversen & Kraft, 2006; Lee, Niederdeppe, & Freres, 2012), have also been shown to influence audiences’ interactions with media and moderate potential media effects.
Media Literacy Definition and History
To counter or at least limit the influence of media messages, scholars have explored ways to help audiences make sense of media by becoming educated media consumers, a strategy referred to as media literacy. Media literacy is traditionally defined as “an individual’s ability to access, analyze, evaluate and produce information for specific outcomes” (Aufderheide, 1993, p. 6). Scholars posit that by knowing how and why media messages are created, thus demystifying media production, audiences will understand the power of persuasion tools, which will help them counter the potential effects of those messages (Banerjee & Greene, 2007).
Interest in media education in the United States emerged in the 1970s and 1980s and was predominantly focused on television literacy, largely due to government funding (Piette & Giroux, 2001). At first, media literacy instruction emphasized the teaching of specific technologies rather than critical thinking about the creation and purpose of media messages. During the 1980s and 1990s, the rise of media literacy education in the United States corresponded to the development of media effects research (for historical development prior to the 1980s, please see Hobbs & Jensen, 2009). These decades also saw an increasing number of partnerships among teachers, community groups, and academic scholars. However, even though media literacy was growing during that time, the United States traditionally lagged behind other English-speaking countries in this area because of systemic barriers, such as the politicization of education and the limited resources offered for pre-service or in-service training (Kubey, 2003). Indeed, many European countries, as well as Australia and Canada, were the front-runners in incorporating media literacy curricula into their educational systems, with training opportunities and resources available for teachers. It is important to note that the scope of media literacy education in the United States and abroad continues to expand, as evidenced by the advancement of technologies and the increasingly interdisciplinary nature of media education practices. The concepts of digital and Internet literacy illustrate how technologies have impacted the understanding and practice of media literacy education (Livingstone, 2003; Livingstone, 2004).
Related concepts, such as advertising literacy, film literacy, visual literacy, and health literacy (Livingstone & Van der Graaf, 2010), have been examined extensively in recent scholarship, demonstrating not only the interdisciplinary nature of media literacy education but also the applicability of media literacy education in the context of health and risk behaviors.
Among publications that have empirically examined the effectiveness of media literacy interventions, media literacy has been conceptualized on three levels of understanding a media message: (1) who the sender and receiver are, (2) what the meanings and interpretations are, and (3) what representation and reality it conveys (Primack & Hobbs, 2009). Indeed, media literacy programs vary in their approaches, from reducing audiences’ media consumption, to exposing audiences to counter portrayals, to critically analyzing media messages, to creating alternative messages. Most programs examining health and risk refer to these approaches as interventions, in which audiences engage in activities with some type of media content related to a particular topic, often a health or risk behavior. Overall, media literacy has been prescribed as an antidote to fight off the effects media messages may have in influencing audiences’ thoughts and behavior pertaining to a particular topic.
Given that media literacy interventions have been successful in countering potential negative media effects, it is important to explore the types of intervention activities and program evaluations, as well as participants’ sociodemographic characteristics, in order to identify both strengths and weaknesses to benefit future media literacy research. To this end, the literature exploring the role media literacy programs play in participants’ perceptions and interpretations of media messages, as well as their attitudes toward and engagement in health and risk behaviors, was reviewed. A total of 54 studies published between 1983 and 2016 were examined to determine participants’ main sociodemographic characteristics, the type of media literacy program implemented, and the evaluation measures used to assess program outcomes. While media literacy programs have been applied to a wealth of topics (e.g., racial perceptions, community relations, commercial messages, news programs, etc.), the discussion focuses on the following six types of health and risk media messages: alcohol use (n = 4), eating disorders and body image (n = 13), nutrition education (n = 3), sex education (n = 3), tobacco use (n = 13), and violent behavior (n = 16). Two additional articles address multiple health behaviors: one targets alcohol and tobacco use, and the other targets violence, tobacco use, and nutrition. These six topics were selected because they represent the main health and risk topics that have been addressed by media literacy programs. These articles were selected by searching key terms related to this topic in the main online research databases and referencing the most recent meta-analysis research (Jeong, Cho, & Hwang, 2012). Additional searches were done by exploring authors’ relevant work.
Part II: Studies and Participants’ Sociodemographic Characteristics
The majority of the published media literacy studies on health and risk examined here have focused either on prevention, targeting populations before they try or start a particular behavior, or on intervention, targeting populations who have practiced or are currently practicing a behavior. Some studies target specific populations because they are more at risk than others, such as young women and eating disorders (e.g., Lew, Mann, Myers, Taylor, & Bower, 2007; Posavac, Posavac, & Weigel, 2001). However, others do not include information about prior or current behavior, such as minors’ tobacco use (e.g., Banerjee & Greene, 2006; Beltramini & Bridge, 2001; Kaestle, Chen, Estabrooks, Zoellner, & Bigby, 2013), probably to avoid reporting or asking about an illegal behavior.
Most media literacy programs reviewed were implemented in schools or universities, as the majority of published media literacy studies focus on school-aged children and adolescents. Some notable exceptions include studies that took place in a community setting, such as Head Start offices, a YMCA, or community health centers (Hindin, Contento, & Gussow, 2004; Kaestle et al., 2013; Zoellner et al., 2016, respectively) and a study in which three out of the four intervention activities (e.g., writing exercise) took place in participants’ homes (Lew et al., 2007). There seems to be no relationship between the type of health and risk behavior and the age or education level of participants. However, apart from programs pertaining to nutrition education that have largely involved adult populations (Evans et al., 2006; Hindin et al., 2004; Zoellner et al., 2016), scholars have given younger audiences more attention, with about twice as many media literacy studies conducted in elementary schools as in middle schools, high schools, or a combination of both.
As can be expected, participants’ sociodemographic characteristics largely correspond to the demographics of the location where the study was conducted. Some scholars do aim to recruit participants who reflect the location’s sociodemographic characteristics (e.g., Gonzales, Glik, Davoudi, & Ang, 2004; Shensa, Phelps-Tschang, Miller, & Primack, 2015; Zoellner et al., 2016). Based on convenience sampling techniques, participants’ racial or ethnic background, for instance, can vary from predominantly White (e.g., Austin, Pinkleton, Hust, & Cohen, 2005; Bickham & Slaby, 2012; McVey & Davis, 2002; Pinkleton, Austin, Chen, & Cohen, 2012, 2013; Pinkleton, Austin, Cohen, Chen, & Fitzgerald, 2008), to predominantly Hispanic (e.g., Banerjee & Greene, 2006; Fingar & Jolls, 2014; Gonzales et al., 2004; Webb & Martin, 2012; Webb, Martin, Afifi, & Kraus, 2009), to predominantly Black (e.g., Evans et al., 2006), to a majority of Asian participants (e.g., Lew et al., 2007). Some scholars plan to recruit participants from a particular sociodemographic background based on their risk vis-à-vis a particular health behavior. For instance, Primack and his colleagues (2014) aimed for their participant pool to include at least 25% African American high school students because this group “bears the greatest burden of morbidity and mortality due to smoking” (p. 4). Apart from programs focused on eating disorder risk factors (e.g., body images and self-esteem), which overwhelmingly focus on female participants, especially college-aged women (e.g., Chambers & Alexander, 2007; Coughlin & Kalodner, 2006; Irving & Berel, 2001; Lew et al., 2007; Posavac et al., 2001; Yamamiya, Cash, Melnyk, Posavac, & Posavac, 2005), most media literacy programs include equal numbers of male and female participants.
In the United States, the majority of health-related media literacy studies have been conducted on the West Coast (e.g., Byrne, 2009; Fingar & Jolls, 2014; Robinson, Wilde, Navracruz, Haydel, & Varady, 2001; Webb & Martin, 2012; Webb et al., 2009). Other programs have been implemented in Australia (Richardson, Paxton, & Thomson, 2009; Wade, Davidson, & O’Dea, 2003; Wilksch, Durbridge, & Wade, 2008; Wilksch, Tiggemann, & Wade, 2006; Wilksch & Wade, 2009), the Netherlands (Vooijs & van der Voort, 1993a, 1993b), and Canada (McVey & Davis, 2002). Some programs’ locations are directly tied to a particular health risk. For instance, tobacco prevention and intervention programs were conducted, respectively, in low-income areas with a strong historical and economical tie to tobacco (Kaestle et al., 2013) and in impoverished urban environments, where tobacco advertisements are more prevalent than in richer areas (Gonzales et al., 2004).
Part III: Media Literacy Programs and Lesson Content
The media literacy programs examined here often include different activities. The overall design of most programs, however, in its simplest form, follows the same structure, revolving around a set of activities anchored in pre- and post-intervention measures related to a particular health and risk topic. Different types of activities and measures are discussed, including: (1) type of lesson and length, (2) type of content, and (3) mode of program delivery. The development and outcomes of media literacy programs are discussed with a specific focus on: (1) research designs, (2) sociodemographic characteristics and intervention effectiveness, and (3) overall program evaluation.
Type of Lesson and Length
One-Shot vs. Multi-Lesson Format
Scholars have taken two primary approaches in lesson format: offering a media literacy program in a one-shot single lesson format or in a multi-lesson format. A variety of factors, such as time constraints and participant availability, may have influenced scholars’ decisions to employ different formats, but limited information about the reasons was provided in the published work.
A one-shot, single lesson is most common when working with children and adolescents in the context of alcohol use (Austin & Johnson, 1997a, 1997b; Chen, 2013) and with college women in the context of eating disorders and body image (Chambers & Alexander, 2007; Irving & Berel, 2001; Yamamiya et al., 2005). While single media literacy lessons have been found to modify key decision-making variables (Brown, 2006), a recent meta-analysis has found that media literacy programs with multiple sessions tend to be more effective (Jeong et al., 2012).
Indeed, more and more media literacy programs have been designed with a multi-lesson approach in mind, administered consecutively in a week (Pinkleton et al., 2008, 2012, 2013) or throughout a semester (e.g., Bickham & Slaby, 2012; Rosenkoetter, Rosenkoetter, & Acock, 2009; Rosenkoetter, Rosenkoetter, Ozretich, & Acock, 2004). Notably, media literacy programs that focused on younger participants (e.g., elementary schoolers and middle schoolers) were more likely to include multiple lessons (e.g., Evans et al., 2006; McVey & Davis, 2002; Richardson et al., 2009; Rosenkoetter et al., 2004; Wade et al., 2003; Wilksch & Wade, 2009) than programs implemented at the university level (Chambers & Alexander, 2007; Irving & Berel, 2001; Yamamiya et al., 2005).
The durations of these programs, however, vary, mostly depending on the availability of participants and local partners (e.g., school principals and teachers). For instance, among the multi-lesson programs, the number of lessons ranged from two lessons in tobacco research (Banerjee & Greene, 2006, 2007) to as many as 31 in media violence research (Rosenkoetter et al., 2004). Similarly, the length of media literacy programs has been found to vary greatly for both one-shot and multi-lesson formats. One intervention was as short as seven minutes, administered as a video lecture (Posavac et al., 2001). The most common lesson lengths ranged from about 25–30 minutes (Chambers & Alexander, 2007), to 40 minutes (Banerjee & Greene, 2006; Bickham & Slaby, 2012), to 50 minutes (Goldberg, Niedermeier, Bechtel, & Gorn, 2006), to one hour (Kaestle et al., 2013), to 90 minutes (Coughlin & Kalodner, 2006; Zoellner et al., 2016).
Interestingly, and as referenced previously, media violence programs have tended to include more comprehensive lessons than programs pertaining to other health and risk behaviors, especially the work done by Rosenkoetter et al. (2004) and Rosenkoetter et al. (2009). Additionally, given that media literacy effects tend to be short-term, some scholars have implemented “booster” lessons in an effort to sustain the positive effects of the intervention (Rosenkoetter et al., 2009).
Type of Content
Media literacy programs are developed either by scholars, who may reference other media literacy organizations’ work (e.g., Kaestle et al., 2013), or by organizations and university units dedicated to media literacy education (e.g., Coughlin & Kalodner, 2006; Fingar & Jolls, 2014; Pinkleton et al., 2008; Primack, Douglas, Land, Miller, & Fine, 2014; Richardson et al., 2009; Rosenkoetter et al., 2004; Webb et al., 2009). The content can be further organized into three distinctive areas: (1) reduction of media consumption, (2) media analysis and evaluations, and (3) media production and activism. The cornerstone of all media literacy programs, regardless of the behavioral context, is media analysis and evaluation, in part because of how media literacy is defined. Reducing media use mostly applies to interventions focused on media violence and is seldom incorporated into other types of media literacy programs. Finally, media analysis and evaluation are emphasized more than media production and activism in most program content and activity design.
Reduction of Media Consumption
As mentioned, although media use reduction is not a part of the well-accepted media literacy definition, a number of programs—specifically in the media violence context—have used a reduction strategy in addition to critical media analysis and evaluation in order to mitigate the potential negative effects media consumption can have on perceptions and behaviors. Such programs therefore have focused on cutting down the number of hours participants watch television as one of their primary outcomes (Fingar & Jolls, 2014; Robinson et al., 2001; Rosenkoetter et al., 2004; Rosenkoetter et al., 2009).
Media Analysis and Evaluation
Learning how to critically analyze media messages is uniformly applied to media literacy programs in all behavioral contexts (e.g., Austin & Johnson, 1997a, 1997b; Banerjee & Greene, 2006; Chen, 2013; Goldberg et al., 2006; Hindin et al., 2004; Kupersmidt, Scull, & Austin, 2010; Sekarasih, Walsh, & Scharrer, 2015; Zoellner et al., 2016). Three specific components are introduced to build the foundation for critical media analysis: (1) knowledge pertaining to a particular subject, (2) understanding of media production techniques, and (3) critical media analysis by contrasting fiction depicted in the media with reality.
First, various programs start by introducing health statistics and knowledge to participants or focusing on the influences media have on audiences (e.g., Byrne, 2009; Chen, 2013; Hindin et al., 2004; Pinkleton et al., 2008; Scharrer, 2006; Zoellner et al., 2016). For example, Zoellner and colleagues (2016) discussed the negative health consequences of over-consuming sugary beverages by visually demonstrating the amount of sugar that an average adult in the United States consumes on a daily basis. These health-related facts are then scaffolded into activities that take place at later stages.
Second, learning about the techniques media employ (e.g., production values, persuasion tools) and how each message is designed with a specific viewpoint can help participants gain a behind-the-scenes understanding of the message production process (Chen, 2013; Shensa et al., 2015; Webb et al., 2009; Zoellner et al., 2016). Through understanding how messages are manipulated, participants may gain a critical understanding of message construction and therefore become more prepared for the next step, critical media analysis.
Contrasting fiction depicted in the media with reality represents the last step toward fostering audiences’ critical thinking skills vis-à-vis media messages. Some of the activities in this phase may include deconstructing media messages and asking participants to compare media scenarios to “real-world” ones. Activities are often guided by discussion questions to compare the selected media content with perceptions of reality, especially when participants lack first-hand experience with a particular health topic (Chen, 2013; Kaestle et al., 2013; McVey & Davis, 2002; Pinkleton et al., 2008, 2012, 2013; Posavac et al., 2001; Scharrer, 2006; Wade et al., 2003; Webb & Martin, 2012; Webb et al., 2009; Zoellner et al., 2016). Guided discussion questions are often obtained from the Center for Media Literacy (Fingar & Jolls, 2014; Webb & Martin, 2012; Webb et al., 2009), the National Association for Media Literacy Education’s Core Principles of Media Literacy Education (Phelps-Tschang, Miller, Rice, & Primack, 2015; Primack et al., 2014; Shensa et al., 2015) or from various theoretical frameworks (Austin & Johnson, 1997a; Banerjee & Greene, 2006; Bickham & Slaby, 2012; Zoellner et al., 2016). These guided questions often accompany selected media clips chosen from popular media (e.g., animation, television programs, movies, magazines, advertising, promotions, product placements), with some scholars paying specific attention to how these selected media examples appeal to the participants’ sociodemographic characteristics, such as age (Scharrer, 2006) or ethnic background (Shensa et al., 2015; Yamamiya et al., 2005).
During the media analysis activities, some scholars have asked participants to develop their own persuasive arguments and contrast them with the arguments presented in various media messages (Yamamiya et al., 2005). Others have asked participants to compare the arguments in antismoking advertising and cigarette advertising in an effort to contrast the truth presented in antismoking advertising and the glamorized picture in cigarette advertising (Banerjee & Greene, 2006). Scharrer (2006) has moved beyond critical media analysis by asking participants to evaluate the ethical responsibility of media producers, the industry, and members of the public for the potential consequences of the content created, disseminated, and consumed.
Novel approaches also have been used to encourage participants to think critically about the media messages they consume. Rosenkoetter et al. (2004) worked with local police departments to contrast violence depicted on television with police officers’ day-to-day operations. The program invited a police officer to share how reality on the ground differs sharply from the violence portrayed in Cops, in which hours and hours of footage are condensed to present a lopsided depiction of the use of violence to settle conflicts. Another novel approach has used videos that counter mainstream media messages. Such video applications are predominantly seen in body image media literacy programs and feature Slim Hopes, a video lecture that examines the use of digitally modified images to critique how media constantly emphasize a thin ideal for women (Chambers & Alexander, 2007; Irving & Berel, 2001; Irving, Dupen, & Berel, 1998). Other video approaches have featured lectures from psychologists demystifying mainstream media messages so that participants would become less likely to compare themselves with the thin body ideals displayed by models (Posavac et al., 2001), which may also prevent them from identifying with those models.
Media Production and Activism
In contrast to the proliferation of media analysis and evaluation activities, media production and activism have received relatively little attention and incorporation into media literacy programs. The fundamental goal of media production activities is to give participants a chance to talk back to the media industry by applying the critical thinking and persuasion tools they have learned to demystify media messages. Programs that have included media production activities have mainly used print-based media, as it is user friendly and easier to administer in a group setting (e.g., Banerjee & Greene, 2006; Evans et al., 2006; Gonzales et al., 2004; Kaestle et al., 2013; Pinkleton et al., 2008; Zoellner et al., 2016). Notably, media production activities are most often observed in programs pertaining to alcohol use, tobacco use, sex education, and nutrition education. However, and as further discussed, media production activities remain the exception rather than the norm in media literacy programs.
Some of the main production activities include counter advertising, in which, as the label indicates, participants modify the advertising content by revealing the true consequences of using such products. Using the anti-smoking media literacy programs as an example, production activities ask participants to create counter advertising by turning the pro-product content (e.g., featuring a youthful model enjoying a cigarette) into a pro-health message by revealing the truth about smoking (e.g., drawing wrinkles on the model) (Austin et al., 2005; Banerjee & Greene, 2006; Gonzales et al., 2004; Kaestle et al., 2013). Other scholars take a fun approach to media production. In their 12-session nutrition and media literacy program, Evans et al. (2006) worked with 4th and 5th graders to design pro-vegetable and fruit messages across eight different sessions. Specifically, children participated in the development of the media campaign to deliver three messages about “the benefits of fruits and vegetables,” “eat the real thing!” and “buy fruits and vegetables” by designing the following media products: table toppers, refrigerator magnets, a website, a commercial, and a rap song.
While media production gives participants a voice, media activism expands participants' level of empowerment and engagement with the issue at hand. The few studies that have integrated perspectives of advocacy into lesson design and implementation have primarily been in the context of tobacco use (Austin et al., 2005; Bier et al., 2011; Gonzales et al., 2004), eating disorders and body image (Irving & Berel, 2001), and media violence (Fingar & Jolls, 2014; Webb & Martin, 2012; Webb et al., 2009). Media activism activities may be as simple as passing out postcards from a media activism organization (Irving & Berel, 2001). Others take a more elaborate approach. For example, Austin et al. (2005) shared counter ads and anti-smoking activities from other youth around the world to foster a sense of collective self-efficacy. In their study, youth developed a project to use media to spread anti-smoking efforts and met with other like-minded teens engaged in anti-smoking activities to promote collective activism. Gonzales et al. (2004) took the approach even further by integrating petition writing, poster creation, and tobacco advertising surveillance analysis into their media literacy program.
Mode of Program Delivery
In-Person vs. Non-In-Person
Most media literacy programs are administered in-person, except for a few programs that deliver their interventions either via videos without any in-person component (Posavac et al., 2001; Vooijs & van der Voort, 1993a) or on the Web (Shensa et al., 2015). For example, Vooijs and van der Voort (1993a) delivered their program via a school television program with the aid of a workbook, so students could watch the school television program and work through the workbook content. Similarly, Posavac et al. (2001) played a video featuring a psychologist discussing a specific intervention condition to female college participants. Only one study delivered a media literacy prevention intervention—modified from a prior in-person program (Phelps-Tschang et al., 2015; Primack et al., 2014)—entirely online (Shensa et al., 2015). It is important to acknowledge that a mixture of in-person delivery and video for content reinforcement is common in media literacy programs (e.g., Irving & Berel, 2001; Irving et al., 1998).
Research Team vs. Non–Research Team
The majority of surveyed media literacy programs are delivered by qualified research team members, including trained teachers and peer educators. Few programs have non–research team members (e.g., untrained teachers) deliver the lesson. In these instances, non–research team members follow a standard media literacy program without receiving any official training from the research team members. While almost every study uses adults in lesson administration, a few studies use trained peer educators in body image (Irving et al., 1998) and sex education (Pinkleton et al., 2008, 2012, 2013) programs to deliver lesson content.
Part IV: Media Literacy Programs and Their Evaluation Measures
Media literacy programs are implemented with particular goals pertaining to participants' perceptions and behavior vis-à-vis a specific topic. To this end, all elements of a program are conceptualized to achieve those goals. Similarly, scholars need to be able to evaluate the success of their programs in order to determine if they have met their goals. Discussed here are how scholars have conceptualized their research designs, explored relationships between sociodemographic characteristics and intervention effectiveness, and evaluated media literacy program interventions.
The majority of health-focused media literacy programs include a control group in their design to ensure that potential changes in health and risk attitudes and behaviors among intervention participants do not also occur among control-group participants who are not exposed to the intervention. In the case of pilot studies conducted to develop a particular intervention, scholars have used a post-test-only design with a control group (e.g., Pinkleton et al., 2008). Studies with no control group are rare and usually aim either to pilot test a particular intervention or to explore whether one intervention is better than another (e.g., Hindin et al., 2004). Most programs are also designed to include at least one pre-test and one post-test in order to measure the effectiveness of the intervention. Post-test measures are usually administered immediately following the intervention, mostly for convenience purposes, as participants are readily available (e.g., Chambers & Alexander, 2007; Chen, 2013; Pinkleton et al., 2008, 2012, 2013). In some cases, scholars also use delayed post-test measures to assess longer-term effects of the media literacy program, ranging from two hours (Goldberg et al., 2006), to one week (Huesmann, Eron, Klein, Brice, & Fischer, 1983; Webb & Martin, 2012), to five–six weeks (Banerjee & Greene, 2006, 2007; Byrne, 2009), to three months (Austin & Johnson, 1997a, 1997b; Wade et al., 2003), to up to one year (McVey & Davis, 2002).
In most cases, scholars develop one media literacy intervention, which, as mentioned, can consist of one workshop (i.e., one-shot, single lesson format) or a series of classes (i.e., multi-lesson format), and compare its effect with the control group. A smaller number of studies have tested different types of interventions or activities within the same program to explore whether one intervention approach is more effective than the other (e.g., Byrne, 2009; Chen, 2013; Posavac et al., 2001). For instance, studies have tested the type of lesson delivery format (video vs. print) (e.g., Chambers & Alexander, 2007), type of activity (media analysis vs. media production) (e.g., Banerjee & Greene, 2006), or type of intervention (general vs. context specific) (e.g., Austin & Johnson, 1997b). Others have examined differences between trained and untrained administrators (e.g., Fingar & Jolls, 2014; Webb & Martin, 2012), while some even compared the effects of media literacy programs to non–media literacy programs, such as psychology-based or standard health trainings (e.g., Irving & Berel, 2001; Primack et al., 2014; Wade et al., 2003).
When a control group is included, participants in media literacy programs are randomly assigned to the intervention group or the control group. Depending on the setting in which the media literacy program takes place, the scope of the program, and the overall duration of the intervention, participant randomization is done at the school level (Bickham & Slaby, 2012; Kupersmidt et al., 2010; Pinkleton et al., 2007; Vooijs & van der Voort, 1993a; Webb & Martin, 2012; Webb et al., 2009), classroom level (Banerjee & Greene, 2006, 2007; Goldberg et al., 2006), or individual level (Austin & Johnson, 1997a, 1997b; Byrne, 2009; Huesmann et al., 1983; Kaestle et al., 2013; Zoellner et al., 2016).
Relationship between Sociodemographic Characteristics and Intervention Effectiveness
Participation in a media literacy program is usually not the sole explanation for potential changes in health and risk attitudes and behaviors. Three main factors have been explored to explain participants' responses to media literacy interventions: gender, age or grade level, and prior behavior pertaining to the health topic under study. Some of those findings are addressed by comparing within-group results, that is, among participants who took part in the interventions, as opposed to comparing effects between intervention and control groups.
Some media literacy programs pertaining to alcohol use found that boys became less interested in alcohol-branded merchandise than girls (Kupersmidt et al., 2010), while girls perceived alcohol advertising to be less realistic and were more skeptical toward alcohol ads than boys (Austin & Johnson, 1997a; Chen, 2013, respectively). Grade level also played a role in alcohol-related media literacy interventions, as fifth-grade children were better able to deconstruct alcohol ads and understand the ads' persuasive intent after having taken part in the intervention than third- and fourth-grade children (Kupersmidt et al., 2010). Never having consumed alcohol also contributed to higher levels of self-efficacy to refuse alcohol as well as to more complex deconstructions of alcohol ads (Kupersmidt et al., 2010). In terms of eating disorders and body image, media literacy interventions led boys to develop weaker sociocultural attitudes toward appearance than girls (Wilksch et al., 2006). However, female participants who were identified as having a high risk of eating disorders reported lower levels of drive for thinness, body dissatisfaction, and internalization of thin beauty standards after the intervention than other female participants (Coughlin & Kalodner, 2006).
Participants' gender also affects their attitudes toward sexual behavior. For example, after having completed a media literacy program, girls displayed more positive attitudes toward abstinence than boys (Pinkleton et al., 2012). Similarly, girls were more likely to report wanting to delay sexual activity than boys, whereas the latter had more positive attitudes toward engaging in risky sexual behavior than girls (Pinkleton et al., 2013). Gender also contributed to participants' attitudes toward tobacco use, as female participants were more likely than male participants to report less pro-tobacco attitudes after having completed a media literacy program (Gonzales et al., 2004). Additionally, younger participants benefited more from tobacco-related media literacy interventions than older participants, as younger participants were less likely to believe that their peers engaged in risky behavior such as smoking (Austin et al., 2005). Interestingly, demographic characteristics and prior behavior do not seem to interfere as much with the outcomes of media literacy programs pertaining to nutrition education or violent behavior.
Media Literacy Program Intervention Evaluation
Overall, the effectiveness of media literacy programs is assessed in three main ways, measuring participants' (1) attitudes toward media messages in general and messages pertaining to a specific health and risk behavior in particular, (2) attitudes and intentions toward a specific health and risk behavior, and (3) actual engagement with a specific health and risk behavior. Most studies focus on the first two evaluation measures, as behavior can be more challenging to measure, especially if scholars aim to directly observe participants' behavior, as opposed to relying on self-reporting, in order to increase the results' reliability and validity.
As previously mentioned, media literacy aims to enhance an individual’s ability to analyze, evaluate, and produce media messages (see Aufderheide, 1993). Participants who took part in media literacy interventions displayed higher levels of critical analysis and deconstruction of media messages, and lower levels of perception of media realism than participants in control groups. Such results were found for alcohol use (e.g., Goldberg et al., 2006; Kupersmidt et al., 2010), eating disorders and body image (e.g., Irving & Berel, 2001), nutrition education (e.g., Bickham & Slaby, 2012; Zoellner et al., 2016), sex education (e.g., Pinkleton et al., 2008, 2013), tobacco use (e.g., Austin et al., 2005; Phelps-Tschang, Miller, Rice, & Primack, 2015; Primack, Fine, Yang, Wickett, & Zickmund, 2009), and violent behavior (e.g., Huesmann et al., 1983; Rosenkoetter et al., 2009; Scharrer, 2005).
In terms of attitudes and intentions toward a specific health and risk behavior, participation in a media literacy intervention reduced desire for any products with alcohol logos (Austin & Johnson, 1997a, 1997b; Kupersmidt et al., 2010). Similarly, media literacy interventions modified knowledge (Gonzales et al., 2004), attitudes (Gonzales et al., 2004; Shensa et al., 2015), self-efficacy (Austin et al., 2005; Kupersmidt et al., 2010), normative perceptions (Austin et al., 2005; Shensa et al., 2015), and behavioral intention (Beltramini & Bridge, 2001; Banerjee & Greene, 2007; Kupersmidt et al., 2010) toward tobacco use. Participants’ attitudes, motivation, and self-efficacy pertaining to nutrition education were also affected by taking part in a media literacy intervention (Evans et al., 2006; Hindin et al., 2004). Media literacy interventions focused on eating disorders and body image led to lower levels of body image disturbance (Posavac et al., 2001; Wilksch et al., 2006) and weight dissatisfaction (Lew, Mann, Myers, Taylor, & Bower, 2007), as well as more realistic perceptions of ideal body image (Chambers & Alexander, 2007) for both college women and young female adolescents.
Lastly, participants in media literacy interventions pertaining to sexual behavior reported higher levels of self-efficacy, lower levels of expectancies related to sexual behavior, and more accurate understanding of peers' sexual norms (Pinkleton et al., 2008, 2012, 2013). Despite such promising findings, a few studies found that participation in media literacy interventions did not influence participants' intentions to drink alcohol in the future (Austin & Johnson, 1997b; Goldberg et al., 2006). Some interventions revealed effects opposite to those intended, as participants developed higher levels of pro-tobacco attitudes (Bier et al., 2011) and higher levels of future expectations of smoking (Kaestle et al., 2013), as well as increased willingness to use violence (Byrne, 2009).
Fewer studies measured or reported behavioral variables (i.e., health behaviors or media consumption), and results have been mixed. Participants who took part in media literacy interventions drank fewer sugary drinks (Zoellner et al., 2016) and smoked fewer cigarettes (Gonzales et al., 2004) than participants in control groups. Media literacy interventions seemed particularly effective in reducing participants’ viewing of violent TV (Fingar & Jolls, 2014; Rosenkoetter et al., 2004, 2009) and aggressive behavior (Huesmann et al., 1983; Rosenkoetter et al., 2004, 2009). One study found that while a media literacy intervention decreased participants’ verbal aggression, it did not change their physical aggression (Robinson et al., 2001). Other interventions, however, were not as successful in affecting behavior. For instance, media literacy interventions did not reduce problematic eating behaviors (McVey & Davis, 2002) and did not increase participants’ consumption of fruits and vegetables, despite having revealed higher levels of motivation to eat more fruits and vegetables (Evans et al., 2006).
Part V: Limitations in Media Literacy Programs
As reported, media literacy interventions work, especially in terms of decreasing participants’ perceptions of realism and increasing their critical analysis of media messages. Media literacy interventions have also demonstrated varying degrees of success in modifying attitudes or behavior. Yet, a survey of the 54 health-related media literacy studies reveals some limitations in the areas of participants’ sociodemographic characteristics, target population involvement, media production activities, and evaluations of program effectiveness. It is important to note that limitations discussed here apply only to the reviewed health-related media literacy studies published in scholarly journals. These gaps are addressed and recommendations to advance future media literacy research are presented.
Limitation 1: Adults Are Rarely Recruited as Participants in Media Literacy Programs
A comprehensive survey of the media literacy programs reveals a startling trend in participants' characteristics and program focus. First, children, adolescents, and young college women dominate the participant pool. From a developmental perspective, children and adolescents are likely more receptive and vulnerable to media influences and, as a result, may benefit more from media literacy. In addition, studies dealing with certain health topics, such as body image and self-esteem, tend to recruit only women under the assumption that women are greatly affected by unhealthy portrayals of body images in mass media. Other studies conducted with college students also tend to oversample young women because of the gender gap in college enrollment. Second, almost all health-promoting media literacy programs tend to focus on individual-level changes, rather than changes at the home-environment level.
In contrast with the individual behavior change focus, particularly on younger participants, adults are rarely the key targets. One reason for the scarcity of adult participants in media literacy programs is perhaps the generally accepted notion that adults have a relatively sophisticated level of cognitive skills, making them less vulnerable to media influences. In fact, more often than not, adults have been regarded as those who have higher media literacy knowledge and skills and thus are capable of administering a standard media literacy curriculum with little guidance and training (Fingar & Jolls, 2014; Webb & Martin, 2012). Another reason for the lack of adult participants is perhaps the limited focus on improving the home environment. This overemphasis on individual behavior change without an attempt to modify the home environment could potentially undermine the overall effectiveness of media literacy programs. It is possible, for example, that what children have learned (e.g., smoking leads to cancer) may contradict the practices and behavior in their family environment (e.g., seeing parents or guardians smoke).
Adults are not entirely immune to pervasive media influences. In fact, adults from low-income neighborhoods, rural areas, and medically underserved communities are more likely to be targeted by marketing messages for unhealthy products (Kumanyika & Grier, 2006; Yancy et al., 2009) and thus more likely to experience health disparities and be at risk of undesirable health consequences. Health disparities—defined as "a particular type of health difference that is closely linked with social, economic, or environmental disadvantage" (Healthy People 2020, n.d.)—disproportionally affect those who are older, of low socioeconomic status, or from certain geographical areas (see also HRMDP Encyclopedia II: Social Determinants of Health and Risks, 9: Health Disparities). The health disparity gap calls for a need to reach out to these populations, who may benefit more from learning how to discern media messages and counter their persuasive intent.
Despite the importance of targeting adults, only a handful of studies—specifically in the nutrition education context—have targeted adults (Evans et al., 2006; Hindin et al., 2004; Zoellner et al., 2016). These programs recognize that without the involvement of adults, children's healthy eating behavior (e.g., fruit and vegetable intake) is unlikely to develop. These programs shape the home environment directly, by deepening parents' communication skills related to discussing television advertising with their children (Hindin et al., 2004), or indirectly, by exposing parents to the pro-nutrition messages (e.g., magnets, posters, websites, songs) designed by their children (Evans et al., 2006). In other words, there is no doubt that adults also benefit from receiving health and media literacy interventions that have the potential to improve their health outcomes (Zoellner et al., 2016). Recruiting adults as participants, especially those who are more at risk, could improve their health and enhance health equity by providing them with an ability to escape the myriad marketing influences and to outsmart media messages. Lessons learned from a number of youth-oriented media literacy programs that target certain populations (i.e., Hispanics and African Americans) due to their at-risk status (Gonzales, Glik, Davoudi, & Ang, 2004; Phelps-Tschang, Miller, Rice, & Primack, 2015; Primack, Douglas, Land, Miller, & Fine, 2014; Shensa et al., 2015) further provide insight into ways lessons could be crafted to speak to these adult populations.
Limitation 2: Little Involvement from the Target Population
In the majority of health-focused media literacy programs, scholars select a health and risk behavior based on their area of expertise or interest and recruit participants from either local schools or universities. Such approaches often prevent scholars from targeting populations who may be more at risk of certain behaviors and also prevent participants or local representatives from being involved in the development of the programs and interventions. In addition, most of those programs are presented as one-time interventions, with very little data available pertaining to participant follow-up and longer-term implementation of the program or future similar efforts. Several steps can be taken to address those limitations, thus enhancing the health benefits of media literacy programs, especially for the most at-risk populations.
Partnering with schools has several advantages, such as having access to the same groups of students for longitudinal measures and training teachers to conduct multiple interventions (e.g., Vooijs & van der Voort, 1993a; Webb & Martin, 2012). However, the location of media literacy activities is always the same, as students and teachers or members of the research team mainly interact only in classrooms. Activities pertaining to media literacy, such as the ones previously mentioned, can be conducted almost anywhere; yet, they seem mostly restricted to classrooms for convenience purposes. Media literacy scholars should consider implementing programs in other locations and involving parents in programs that target children. For instance, Hindin et al. (2004) conducted their intervention in a Head Start program to teach parents how to discuss TV advertising with their children. Parents can also be approached as more than just the beneficiaries of media literacy programs. Training parents in the same manner as teachers are trained would extend media literacy discussions outside of the classroom. Parents can become crucial partners for the long-term effects of media literacy programs by reinforcing key concepts in informal settings, whether their children are watching a show that portrays health and risk behaviors or talking about such behaviors. This would echo the established parental mediation approach, which stresses the role parent-child communication plays in mediating how children interpret media messages, thus limiting the strength of potential media effects on young audiences (Buijzen & Valkenburg, 2005; Nathanson, 1999).
Parent training could take place face-to-face, similar to the one for teachers, to allow parents to be more actively engaged and ask questions. Such training could take place at school, but also at local organizations’ offices, community centers, or churches. Scholars could also develop “training modules” packaged as booklets or videos, depending on available resources and participants’ preferences, in order to meet the needs of parents who may not have the flexibility or resources to attend face-to-face training sessions. Online and mobile phone training could also be viable options if Internet access is not a problem for a particular community. For instance, Shensa et al. (2015) and Phelps-Tschang et al. (2015) both successfully implemented online anti-smoking media literacy interventions targeting teenagers. A similar approach could be developed to train parents on how to enhance their and their children’s media literacy skills.
In addition to involving parents in the implementation of an intervention, scholars should also involve students, parents, and other community members in the development of a particular media literacy program. Indeed, there is no indication that scholars who trained teachers to implement media literacy interventions consulted with the latter in the development of those interventions or asked teachers for feedback about the interventions. Conducting quantitative or qualitative preliminary research among target populations and community members, such as teachers in the case of school-based programs, would shed light on what community members perceive as the most important health behaviors to address. Data collected prior to developing a program would also provide scholars with activity ideas that may better suit (at a cognitive or conceptual level) the needs of a particular population than the activities they would have implemented without any local input.
Some scholars explain how a particular population is affected by a particular behavior and then aim to recruit participants from that population (e.g., Chambers & Alexander, 2007; Gonzales et al., 2004; Irving & Berel, 2001; Primack et al., 2014). However, despite those commendable recruitment efforts, participants or other community members are often not given any agency in the process, as their role is mainly limited to being the recipients of an intervention. Recognizing that it may not always be possible to conduct preliminary research with community members, scholars should nevertheless not overlook participants' active involvement in media literacy projects. Even children as young as five or six years old are capable of expressing their thoughts for research purposes (e.g., Haerens et al., 2009; Ross & Harradine, 2005). Better understanding how members of the target population perceive a particular health and risk behavior should be the first step in determining how such perceptions should be countered or reinforced in a media literacy program.
Scholars from various disciplines have more recently tried to promote research partnerships with members of local communities in which scholars and community representatives collaborate equally to identify and solve a problem specific to that community (Holland, Powell, Eng, & Drew, 2010). This approach, known as community-based participatory research (CBPR), relies on the different strengths and expertise of all parties involved at all steps of the research process (Wallerstein & Duran, 2006; see also IV: Participatory Decision-Making Approaches, Normative Approaches). Media literacy scholars should consider applying tenets of CBPR when developing interventions in order to involve and thus reach audiences beyond program participants.
Involving community members and developing participants’ agency vis-à-vis a particular media literacy program could also contribute to the long-term sustainability of the program. Indeed, training participants to become facilitators would provide future opportunities for community members to directly fight a particular health and risk behavior problem. Scholars could also partner with community members in the collection of longitudinal data to better assess the effectiveness of the program, as well as conduct process evaluations, as further discussed in Limitation 4, to ensure that the intervention activities continue to meet participants’ needs beyond the initial interventions.
Limitation 3: Lack of Production and Content Development
Media literacy intervention activities examined here mainly focus on analyzing and deconstructing media messages. As mentioned earlier, simply thinking about how a particular topic is represented in the media can diminish the likelihood that audiences will perceive that topic in terms of its media representation (see Shrum, 2001). Therefore, several media literacy interventions have been successful by engaging participants with current media messages. However, and as further discussed, long-term effects of media literacy interventions have not been as well-documented as short-term ones. Pedagogy studies have found that "hands-on" activities, such as creating a tangible product or solving concrete problems, can lead to longer knowledge retention than more "passive" activities, such as class discussions or individual essays (Haak, HilleRisLambers, Pitre, & Freeman, 2011; McInerney & Fink, 2003; Taraban, Box, Myers, Pollard, & Bowen, 2007). Health- and risk-focused media literacy programs conducted by academic researchers should therefore consider adopting a more hands-on approach by guiding participants in the development of their own media messages.
Indeed, the adage of “learning by doing” is particularly apt here, as learning how to produce media messages will emphasize how every aspect of a message is consciously created with a particular purpose in mind. Creating media messages would force participants to think about an intended target audience and how the characteristics of that audience, such as sociodemographic ones, would affect the presentation of their messages. Such messages could vary from “traditional” media such as print posters, brochures, and short videos, to more creative items such as nutrition labels (e.g., Zoellner et al., 2016) or magnets (e.g., Evans et al., 2006). As participants select specific texts and illustrations, they would be able to discuss both the process of developing media messages, as well as the different ways those messages could be interpreted. This, in turn, would allow participants to further deconstruct media messages using the same steps they would have used to create their own messages, thinking of themselves as members of a particular target audience whose values or realities are supposed to be reflected in the media messages aiming to reach them.
Creating media messages can also empower participants, especially those who identify with marginalized groups in society based on their social identities, to develop their own counter-narratives to the dominant ones represented in the media. While a discussion of media portrayals of "others" (i.e., non–White, male, heterosexual, middle-class, Christian) and their effects on self-perceptions is beyond the scope here (see Bryant & Oliver, 2009; Dines & Humez, 2011; Grossberg, Wartella, Whitney, & Wise, 2006), media have been creating certain identities that have influenced health and risk behaviors for audiences who identify with these identities, such as the ideas that smoking and being violent are masculine (Escamilla, Cradock, & Kawachi, 2000; Hetsroni, 2011, respectively) or that being skinny and sexual is feminine (Collins, 2011). In contrast, by creating their own counter messages, participants can re-define those identities to promote healthy behaviors in their community.
Stuart Hall, one of the fathers of cultural studies, explains how media present and re-present specific messages in order to give meaning to a particular reality (Hall, 2011). The idea that media can define audiences' reality about a particular phenomenon such as health and risk behaviors represents one of the core notions that media literacy interventions aim to demystify. Indeed, as previously mentioned, one of the main measures of an intervention's success pertains to lowering participants' perceptions of media realism. Therefore, engaging participants in the creation of their own media messages would allow them to use their own words and images to create a reality that counters the media's and matches their community's.
Media literacy programs should harness relatively inexpensive and accessible technology, such as mobile phones and editing software, to incorporate content development activities into their interventions. Participants should also be encouraged to use the same tools to publish and disseminate their messages in their community. Furthermore, participants should investigate how their target audiences, community members who meet certain sociodemographic characteristics, respond to the messages, thus conducting informal media effects research exploring potential changes in perceptions and behavior vis-à-vis a particular health and risk behavior. Creating and evaluating media messages would empower participants by giving them a voice absent from the media while contributing to the well-being of their community.
Limitation 4: Missing Components in Evaluations
The previous three gaps concerned specific participants and components missing from current media literacy research. This gap focuses on the components missing from evaluations. The three main types of evaluation (i.e., formative, process, and summative) are defined and addressed in turn. In addition, components missing at various stages of evaluation are discussed and ways to expand evaluation research are proposed. Because research is often time- and resource-intensive, documenting how media literacy programs are designed and implemented across all three phases is not always possible; the suggestions offered here for improving the current state of media literacy research therefore represent an ideal-case scenario for consideration.
Scholars have mainly been interested in summative evaluations, that is, how well media literacy programs perform in terms of outcomes of interest. Thus, summative evaluation—an evaluation that assesses whether a program significantly changes perceptions, attitudes, and behaviors—plays a key role in the current media literacy scholarship. Implementation of the other two types of evaluations (i.e., formative evaluation and process evaluation) is relatively rare but could be incorporated to guide the initial design, refinement, and implementation of media literacy programs.
As mentioned in Limitation 2, involving relevant partners, including students, parents, teachers, and community members, in the development of media literacy programs (i.e., the formative evaluation or research phase) is crucial. Indeed, formative research is the foundation for developing the specific program components that could influence behavior change (Gittelsohn et al., 2006). It specifically “ensures that a program or program activity is feasible, appropriate and acceptable before it is fully implemented” and is usually “conducted when a new program or activity is being developed or when an existing one is being adapted or modified” (Centers for Disease Control and Prevention, n.d.). Involving key partners in this early research phase, as community-based participatory research posits, helps scholars understand the priorities set forth by community partners, whose perspective on the utility of media literacy programs may differ from that of scholars. In particular, assessing these partners’ baseline media literacy skillsets, whether quantitatively or qualitatively, prior to their involvement could help customize lesson design and training materials (for those who implement the program) so that the lesson content speaks to partners’ priorities and concerns.
Because media literacy scholars rely heavily on results from summative evaluations to modify future program design, process evaluation is often overlooked. Process evaluation, which “determines whether program activities have been implemented” (Centers for Disease Control and Prevention, n.d.), plays a significant role in helping scholars and partners understand how well program activities are being executed. Despite its importance, few studies reported process evaluation measures, such as implementation dosage (e.g., participants’ attendance at media literacy lessons) and fidelity of implementation (e.g., self-reported data from those who implement the program on how closely the program follows its design) (Kupersmidt et al., 2010). These pieces of information provide considerable insight into how attractive certain program components are, how well attended the programs are, and how certain lessons could be enhanced. They could also explain the results of summative evaluation and help scholars and intervention administrators reinforce key concepts that may otherwise have been overlooked. It is important to acknowledge, however, that programs with multiple lessons offer more opportunities for process evaluation than programs with a one-shot, single-lesson format.
When assessing outcomes of interest, scholars often select outcome variables that correspond to their theoretical underpinnings and research questions. These choices inevitably affect how scholars conceptualize and operationalize “program effectiveness.” Some, for example, focus on media-related processes (e.g., media realism), while others focus on attitudinal and behavioral variables (e.g., norms, self-efficacy, behavioral intention, and behavior change). Including both types of measures could advance the field by clarifying how media literacy programs change the way participants interpret media messages and how those changes may in turn mediate the behavior change process.
As previously mentioned, the last phase of evaluation (i.e., summative evaluation) receives the most attention in media literacy programs. Some common themes are observed among the surveyed media literacy studies: reliance on immediate post-tests, limited assessment of actual behavior change, limited assessment of content components and of lesson administrators’ experiences, and, finally, little integration of sociodemographic characteristics in data analysis.
It is common for media literacy programs to assess participants on the primary outcomes of interest immediately after program implementation. The limitation of such immediate evaluation is the inability to observe long-term behavior change. A compromise between the two is often made by asking questions about behavioral intention, a proxy for behavior change. While behavioral intention does predict behavior change, it may be helpful, if time and budget allow, to follow up with participants over time to determine whether media literacy programs can generate lasting attitudinal and behavioral change.
There is also limited assessment of content components, particularly media production and advocacy, in summative evaluation. Examining the content of participants’ media productions and asking participants about their perceptions of their collective activism efforts, quantitatively or qualitatively, could add nuance to the evaluation. For example, textual and visual information could be assessed for re-presentation of hegemonic messages and for participants’ level of critical thinking. Feedback gained from analyzing how closely the productions match program components could in turn help scholars refine program design. As for advocacy activities, interviewing participants about their likelihood of engaging with other youth and community members could add another layer of information that broadens the scope of media literacy research.
Similarly, while it is common to assess participants on outcomes of interest, involving program administrators (e.g., trained or untrained teachers, peer educators, and college students who implemented the programs) in evaluations, quantitatively or qualitatively, is extremely rare. Examples from Webb et al. (2009), Scharrer and Cooks (2006), Scull and Kupersmidt (2011), and Vooijs and van der Voort (1993a) provide ways to understand these administrators’ perceptions of the media literacy design as well as whether their own media literacy skillsets are enhanced by the training and by experience with the program. For example, learning teachers’ motivations for participating in research (e.g., encouraging students to think critically) and assessing teachers’ pre- and post-training or intervention media literacy skills could help foster a long-term, win-win partnership.
Finally, some studies have begun to explore how moderating variables, such as participants’ sociodemographic factors (e.g., gender, race or ethnicity, and grade level) (Chen, 2013; Kupersmidt et al., 2010; Phelps-Tschang et al., 2015; Pinkleton et al., 2008, 2012, 2013; Primack et al., 2014; Scharrer, 2005; Shensa et al., 2015) and prior experience (e.g., alcohol and tobacco use) (Kupersmidt et al., 2010; Pinkleton et al., 2007), affect the effectiveness of media literacy programs. Results from the surveyed studies not only provide insight into how future media literacy prevention and intervention efforts could be designed to speak to participants’ characteristics but also suggest ways to evaluate media literacy programs. For example, sociodemographic moderators that have been found to predict health status could also be considered in future evaluations. These moderators should likewise serve as important markers for future recruitment efforts to enhance effective targeting. Overall, evaluations, regardless of phase, should be carefully conceptualized and planned in parallel with initial program development.
Part VI: Conclusion
Media can influence audiences’ health and risk behaviors, and designing and implementing appropriate media literacy interventions has the potential to enhance audiences’ health. An in-depth examination of media literacy studies focused on health and risk from the past three decades has revealed certain trends pertaining to areas such as participants’ sociodemographic characteristics, lesson content, and evaluation measures, along with recurring limitations in these same areas. Addressing those limitations in future health-related media literacy programs could enhance their effectiveness, specifically in reaching sociodemographic groups who are underrepresented yet most at risk for health disparities. Particular attention should be paid to message recipients’ sociodemographic characteristics during all stages, from design (e.g., involvement of target populations), to intervention (e.g., recruitment of at-risk populations and implementation in community locations), to evaluation (e.g., analysis of the relationship between sociodemographic characteristics and program effectiveness). More emphasis should also be placed on media production and activism activities to further engage participants and give them a voice in their community. Ideally, research conducted in full partnership with community members should guide the design, implementation, and evaluation of media literacy programs. Given the increasing importance of media in our lives, media literacy truly has the potential to become the “go-to” strategy for health and risk communication research by teaching audiences how to critically process health and risk messages.
Additional Resources: Links to Digital Materials
Media Education Lab
The Media Education Lab is an initiative of the Harrington School of Communication and Media at the University of Rhode Island. Its mission is “to improve digital and media literacy education through scholarship and community service.” It offers resources pertaining to research, teacher/staff education, curriculum development, advocacy, and youth and community media production.
MediaSmarts
MediaSmarts is a Canadian organization that promotes digital and media literacy. Its mission is to ensure “that children and youth have the critical thinking skills to engage with media as active and informed digital citizens.” It offers resources on various topics catered toward youth, parents, and teachers.
Media Literacy Now
Media Literacy Now is an advocacy organization that promotes media literacy and digital citizenship education policy. Its mission is to “spark policy change in every state and at the national level to ensure all K–12 students receive comprehensive media literacy education and skills.” It partners with parents and community members to develop legislation to make media literacy part of the school curriculum. Its website aggregates resources for teachers and parents.
Media Education Foundation
The Media Education Foundation produces educational videos pertaining to various aspects of media, culture, and society. Its mission states that it “produces and distributes documentary films and other educational resources to inspire critical thinking about the social, political, and cultural impact of American mass media.” Titles of all videos are organized by topics on its website, which also includes trailers for most videos.
Media Literacy Project
Media Literacy Project closed in June 2015, but its founders continue to offer media literacy trainings and workshops, and its resources remain accessible online. It developed programs aimed at engaging youth in media literacy activities related to social issues.
Further Reading
Bergsma, L. J., & Carney, M. E. (2008). Effectiveness of health-promoting media literacy education: A systematic review. Health Education Research, 23(3), 522–542.
Cheung, C. (Ed.). (2016). Media literacy education in China. Singapore: Springer.
Hobbs, R. (1998). The seven great debates in the media literacy movement. Journal of Communication, 48(1), 16–32.
Kickbusch, I. S. (2001). Health literacy: Addressing the health and education divide. Health Promotion International, 16(3), 289–297.
Potter, W. J. (2004). Theory of media literacy: A cognitive approach. Thousand Oaks, CA: SAGE.
Rich, M. (2004). Health literacy via media literacy: Video intervention/prevention assessment. American Behavioral Scientist, 48(2), 165–188.
Sørensen, K., Van den Broucke, S., Fullam, J., Doyle, G., Pelikan, J., Slonska, Z., et al. (2012). Health literacy and public health: A systematic review and integration of definitions and models. BMC Public Health, 12(1), 80–92.
References
Appiah, O. (2003). Americans online: Differences in surfing and evaluating race-targeted web sites by black and white users. Journal of Broadcasting & Electronic Media, 47(4), 537–555.
Arendt, F. (2013). News stereotypes, time, and fading priming effect. Journalism and Mass Communication Quarterly, 90(2), 347–362.
Aufderheide, P. (1993). Media literacy: A report of the National Leadership Conference on Media Literacy. Aspen Institute, Communications and Society Program.
Austin, E. W., & Johnson, K. K. (1997a). Immediate and delayed effects of media literacy training on third graders’ decision making for alcohol. Health Communication, 9(4), 323–349.
Austin, E. W., & Johnson, K. K. (1997b). Effects of general and alcohol-specific media literacy training on children’s decision making about alcohol. Journal of Health Communication, 2(1), 17–42.
Austin, E. W., Pinkleton, B. E., Hust, S. J., & Cohen, M. (2005). Evaluation of an American Legacy Foundation/Washington State Department of Health media literacy pilot study. Health Communication, 18(1), 75–95.
Banerjee, S., & Greene, K. (2006). Analysis versus production: Adolescent cognitive and attitudinal responses to antismoking interventions. Journal of Communication, 56(4), 773–794.
Banerjee, S., & Greene, K. (2007). Antismoking initiatives: Effects of analysis versus production media literacy interventions on smoking-related attitude, norm, and behavioral intention. Health Communication, 22(1), 37–48.
Beltramini, R., & Bridge, P. (2001). Relationship between tobacco advertising and youth smoking: Assessing the effectiveness of a school-based, antismoking intervention program. Journal of Consumer Affairs, 35(2), 263–277.
Bickham, D. S., & Slaby, R. G. (2012). Effects of a media literacy program in the U.S. on children’s critical evaluation of unhealthy media messages about violence, smoking, and food. Journal of Children and Media, 6(2), 255–271.
Bier, M. C., Schmidt, S. J., Shields, D., Zwarum, L., Sherblom, S., Primack, B., et al. (2011). School-based smoking prevention with media literacy: A pilot study. Journal of Media Literacy Education, 2(3), 185–198.
Brown, J. D. (2006). Media literacy has the potential to improve adolescents’ health. Journal of Adolescent Health, 39(4), 459–460.
Brown, J. D., & Pardun, C. J. (2004). Little in common: Racial and gender differences in adolescents’ television diets. Journal of Broadcasting & Electronic Media, 48(2), 266–278.
Bryant, J., & Oliver, M. B. (Eds.). (2009). Media effects: Advances in theory and research (3d ed.). New York: Routledge.
Buijzen, M., & Valkenburg, P. M. (2005). Parental mediation of undesired advertising effects. Journal of Broadcasting & Electronic Media, 49(2), 153–165.
Byrne, S. (2009). Media literacy interventions: What makes them boom or boomerang? Communication Education, 58(1), 1–14.
Centers for Disease Control and Prevention. (n.d.). Types of evaluation.
Chambers, K. L., & Alexander, S. M. (2007). Media literacy as an educational method for addressing college women’s body image issues. Education, 127(4), 490–497.
Chen, Y. (2013). The effectiveness of different approaches to media literacy in modifying adolescents’ responses to alcohol. Journal of Health Communication, 18(6), 723–739.
Collins, R. L. (2011). Content analysis of gender roles in media: Where are we now and where should we go? Sex Roles, 64(3), 290–298.
Coughlin, J. W., & Kalodner, C. (2006). Media literacy as a prevention intervention for college women at low- or high-risk for eating disorders. Body Image, 3(1), 35–43.
DeFleur, M. L. (2010). The “magic bullet” theory of uniform effects. In M. L. DeFleur (Ed.), Mass communication theories: Explaining origins, processes, and effects (pp. 122–132). New York: Routledge.
Dines, G., & Humez, J. M. (Eds.). (2011). Gender, race, and class in media: A critical reader (3d ed.). Thousand Oaks, CA: SAGE.
Entman, R. M. (1993). Framing: Toward clarification of a fractured paradigm. Journal of Communication, 43(4), 51–58.
Escamilla, G., Cradock, A. L., & Kawachi, I. (2000). Women and smoking in Hollywood movies: A content analysis. American Journal of Public Health, 90(3), 412–414.
Evans, A. E., Dave, J., Tanner, A., Duhe, S., Condrasky, M., Wilson, D., et al. (2006). Changing the home nutrition environment: Effects of a nutrition and media literacy pilot intervention. Family Community Health, 29(1), 43–54.
Fingar, K. R., & Jolls, T. (2014). Evaluation of a school-based violence prevention media literacy curriculum. Injury Prevention, 20(3), 183–190.
Gittelsohn, J., Steckler, A., Johnson, C. C., Pratt, C., Griesser, M., Pickrel, J., et al. (2006). Formative research in school and community-based health programs and studies: “State of the art” and the TAAG approach. Health Education & Behavior, 33(1), 25–39.
Goldberg, M. E., Niedermeier, K. E., Bechtel, L. J., & Gorn, G. J. (2006). Heightening adolescent vigilance toward alcohol advertising to forestall alcohol use. Journal of Public Policy and Marketing, 25(2), 147–159.
Gonzales, R., Glik, D., Davoudi, M., & Ang, A. (2004). Media literacy and public health: Integrating theory, research, and practice for tobacco control. American Behavioral Scientist, 48(2), 189–201.
Grossberg, L., Wartella, E., Whitney, D. C., & Wise, J. M. (2006). MediaMaking: Mass media in a popular culture (2d ed.). Thousand Oaks, CA: SAGE.
Haak, D. C., HilleRisLambers, J., Pitre, E., & Freeman, S. (2011). Increased structure and active learning reduce the achievement gap in introductory biology. Science, 332(6034), 1213–1216.
Haerens, L., De Bourdeaudhuij, I., Barba, G., Eiben, G., Fernandez, J., Hebestreit, A., et al. (2009). Developing the IDEFICS community-based intervention program to enhance eating behaviors in 2- to 8-year-old children: Findings from focus groups with children and parents. Health Education Research, 24(3), 381–393.
Hall, S. (2011). The whites of their eyes: Racist ideologies and the media. In G. Dines & J. Humez (Eds.), Gender, race, and class in media: A critical reader (3d ed., pp. 81–84). Thousand Oaks, CA: SAGE.
Hargittai, E. (2010). Digital na(t)ives? Variation in Internet skills and uses among members of the “Net generation.” Sociological Inquiry, 80(1), 92–113.
Harrison, K., Taylor, L. D., & Marske, A. L. (2006). Women’s and men’s eating behavior following exposure to ideal-body images and text. Communication Research, 33(6), 507–529.
Healthy People 2020. (n.d.). Disparities.
Hetsroni, A. (2011). Violence in television advertising: Content analysis and audience attitudes. Atlantic Journal of Communication, 19(2), 97–112.
Hindin, T. J., Contento, I. R., & Gussow, J. D. (2004). A media literacy nutrition education curriculum for Head Start parents about the effects of television advertising on their children’s food requests. Journal of the American Dietetic Association, 104(2), 192–198.
Hobbs, R., & Jensen, A. (2009). The past, present, and future of media literacy. Journal of Media Literacy Education, 1(1), 1–11.
Holland, D., Powell, D. E., Eng, E., & Drew, G. (2010). Models of engaged scholarship: An interdisciplinary discussion. Collaborative Anthropology, 3(1), 1–36.
Huesmann, L., Eron, L., Klein, R., Brice, P., & Fischer, P. (1983). Mitigating the imitation of aggressive behaviors by changing children’s attitudes about media violence. Journal of Personality and Social Psychology, 44(3), 899–910.
Irving, L. M., & Berel, S. R. (2001). Comparison of media-literacy programs to strengthen college women’s resistance to media images. Psychology of Women Quarterly, 25(2), 103–111.
Irving, L. M., Dupen, J., & Berel, S. (1998). A media literacy program for high school females. Eating Disorders: The Journal of Treatment & Prevention, 6(2), 119–131.
Iversen, A. C., & Kraft, P. (2006). Does socio-economic status and health consciousness influence how women respond to health related messages in media? Health Education Research, 21(5), 601–610.
Jeong, S., Cho, H., & Hwang, Y. (2012). Media literacy interventions: A meta-analytic review. Journal of Communication, 62(3), 454–472.
Kaestle, C. E., Chen, Y., Estabrooks, P. A., Zoellner, J., & Bigby, B. (2013). Pilot evaluation of a media literacy program for tobacco prevention targeting early adolescents shows mixed results. Health Promotion, 27(6), 366–369.
Knobloch-Westerwick, S. (2007). Gender differences in selective media use for mood management and mood adjustment. Journal of Broadcasting & Electronic Media, 51(1), 73–92.
Kubey, R. W. (2003). Why U.S. media education lags behind the rest of the English-speaking world. Television & New Media, 4(4), 351–370.
Kumanyika, S., & Grier, S. (2006). Targeting interventions for ethnic minority and low-income populations. Future of Children, 16(1), 187–207.
Kupersmidt, J. B., Scull, T. M., & Austin, E. W. (2010). Media literacy education for elementary school substance use prevention: Study of media detective. Pediatrics, 126(3), 525–531.
Lee, C. J., Niederdeppe, J., & Freres, D. (2012). Socioeconomic disparities in fatalistic beliefs about cancer prevention and the Internet. Journal of Communication, 62(6), 972–990.
Lew, A. M., Mann, T., Myers, H., Taylor, S., & Bower, J. (2007). Thin-ideal media and women’s body dissatisfaction: Prevention using downward social comparisons on non-appearance dimensions. Sex Roles, 57(7), 543–556.
Livingstone, S. (2003). The changing nature and uses of media literacy. Media@LSE electronic working papers, 4.
Livingstone, S. (2004). Media literacy and the challenge of new information and communication technologies. Communication Review, 7(1), 3–14.
Livingstone, S., & Van der Graaf, S. (2010). Media literacy. In W. Donsbach (Ed.), International encyclopedia of communication. Oxford: Blackwell.
Mares, M. L., & Woodard, E. M. (2006). In search of the older audience: Adult age differences in television viewing. Journal of Broadcasting & Electronic Media, 50(4), 595–614.
McInerney, M. J., & Fink, L. D. (2003). Team-based learning enhances long-term retention and critical thinking in an undergraduate microbial physiology course. Journal of Microbiology & Biology Education, 4(1), 3–12.
McVey, G. L., & Davis, R. (2002). A program to promote positive body image: A 1-year follow-up evaluation. Journal of Early Adolescence, 22(1), 96–108.
Morgan, M., Shanahan, J., & Signorielli, N. (2009). Growing up with television: Cultivation processes. In J. Bryant & M. B. Oliver (Eds.), Media effects: Advances in theory and research (3d ed., pp. 34–49). New York: Routledge.
Nathanson, A. I. (1999). Identifying and explaining the relationship between parental mediation and children’s aggression. Communication Research, 26(2), 124–143.
Nielsen. (2015). Kids’ audience behavior across platforms.
Nielsen. (2016). The total audience report: Q1 2016.
Nojin, K. (1999). Revisiting the knowledge gap hypothesis: Education, motivation, and media use. Communication Research, 26(4), 385–413.
Pavlik, J. V., & McIntosh, S. (2011). Converging media: A new introduction to mass communication (2d ed.). New York: Oxford University Press.
Pew Research Center (2015). Teens, social media & technology overview 2015: Smartphones facilitate shifts in communication landscape for teens.
Phelps-Tschang, J. S., Miller, E., Rice, K., & Primack, B. A. (2015). Web-based media literacy to prevent tobacco use among high school students. Journal of Media Literacy Education, 7(3), 29–40.
Piette, J., & Giroux, D. (2001). The theoretical foundation of media education programs. In R. Kubey (Ed.), Media education in the information age (pp. 89–134). New Brunswick, NJ: Transaction Books.
Pinkleton, B. E., Austin, E. W., Chen, Y., & Cohen, M. (2012). The role of media literacy in shaping adolescents’ understanding of and responses to sexual portrayals in mass media. Journal of Health Communication, 17(4), 460–476.
Pinkleton, B. E., Austin, E. W., Chen, Y., & Cohen, M. (2013). Assessing effects of a media literacy-based intervention on US adolescents’ responses to and interpretations of sexual media messages. Journal of Children and Media, 7(4), 463–479.
Pinkleton, B. E., Austin, E. W., Cohen, M., Chen, Y.-C., & Fitzgerald, E. (2008). Effects of a peer-led media literacy curriculum on adolescents’ knowledge and attitudes toward sexual behavior and media portrayals of sex. Health Communication, 23(5), 462–472.
Pinkleton, B. E., Austin, E. W., Cohen, M., Miller, A., & Fitzgerald, E. (2007). A statewide evaluation of the effectiveness of media literacy training to prevent tobacco use among adolescents. Health Communication, 21(1), 23–34.
Posavac, H. D., Posavac, S. S., & Weigel, R. G. (2001). Reducing the impact of media images on women at risk for body image disturbance: Three targeted interventions. Journal of Social and Clinical Psychology, 20(3), 324–340.
Primack, B. A., Douglas, E. L., Land, S. R., Miller, E., & Fine, M. J. (2014). Comparison of media literacy and usual education to prevent tobacco use: A cluster randomized trial. Journal of School Health, 84(2), 106–115.
Primack, B. A., Fine, D., Yang, C. K., Wickett, D., & Zickmund, S. (2009). Adolescents’ impressions of antismoking media literacy education: Qualitative results from a randomized controlled trial. Health Education Research, 24(4), 608–621.
Primack, B. A., & Hobbs, R. (2009). Association of various components of media literacy and adolescent smoking. American Journal of Health Behavior, 33(2), 192–201.
Richardson, S. M., Paxton, S. J., & Thomson, J. S. (2009). Is BodyThink an efficacious body image and self-esteem program? A controlled evaluation with adolescents. Body Image, 6(2), 75–82.
Robinson, T., Wilde, M., Navracruz, L., Haydel, K., & Varady, A. (2001). Effects of reducing children’s television and video game use on aggressive behavior: A randomized controlled trial. Archives of Pediatrics and Adolescent Medicine, 155(1), 17–23.
Rosenkoetter, L. I., Rosenkoetter, S. E., & Acock, A. C. (2009). Television violence: An intervention to reduce its impact on children. Journal of Applied Developmental Psychology, 30(4), 381–397.
Rosenkoetter, L. I., Rosenkoetter, S. E., Ozretich, R. A., & Acock, A. C. (2004). Mitigating the harmful effects of violent television. Journal of Applied Developmental Psychology, 25(1), 25–47.
Ross, J., & Harradine, R. (2005). I’m not wearing that! Branding and young children. Journal of Fashion Marketing and Management, 8(1), 11–26.
Samson, L., & Grabe, M. L. (2012). Media use and the sexual propensities of emerging adults. Journal of Broadcasting & Electronic Media, 56(2), 280–298.
Scharrer, E. (2005). Sixth graders take on television: Media literacy and critical attitudes of television violence. Communication Research Reports, 22(4), 325–333.
Scharrer, E. (2006). “I noticed more violence:” The effects of a media literacy program on critical attitudes toward media violence. Journal of Mass Media Ethics, 21(1), 69–86.
Scharrer, E., & Cooks, L. (2006). Violence, conflict, and community service-learning: Measuring impact on students and community. Journal of Higher Education Outreach and Engagement, 11(1), 71–86.
Scheufele, D., & Tewksbury, D. (2007). Framing, agenda setting, and priming: The evolution of three media effects models. Journal of Communication, 57(1), 9–20.
Scull, T. M., & Kupersmidt, J. B. (2011). An evaluation of a media literacy program training workshop for late elementary school teachers. Journal of Media Literacy Education, 2(3), 199–208.
Sekarasih, L., Walsh, K. R., & Scharrer, E. (2015). “Media violence is made to attract and entertain people”: Responses to media literacy lessons on the effects of and institutional motives behind media violence. Journal of Media Literacy Education, 6(3), 1–13.
Shensa, A., Phelps-Tschang, J., Miller, E., & Primack, B. A. (2015). A randomized crossover study of Web-based media literacy to prevent smoking. Health Education Research, 31(1), 48–59.
Shrum, L. J. (2001). Processing strategy moderates the cultivation effect. Human Communication Research, 27(1), 94–120.
Shrum, L. J. (2007). The implication of survey method for measuring cultivation effects. Human Communication Research, 33(1), 64–80.
Signorielli, N. (1990). Television’s mean and dangerous world: A continuation of the cultural indicators perspective. In N. Signorielli & M. Morgan (Eds.), Cultivation analysis: New directions in media effects research (pp. 85–106). Newbury Park, CA: SAGE.
Taraban, R., Box, C., Myers, R., Pollard, R., & Bowen, C. W. (2007). Effects of active‐learning experiences on achievement, attitudes, and behaviors in high school biology. Journal of Research in Science Teaching, 44(7), 960–979.
Vooijs, M. W., & van der Voort, T. H. A. (1993a). Teaching children to evaluate television violence critically: The impact of a Dutch schools television project. Journal of Educational Television, 19(3), 139–152.
Vooijs, M. W., & van der Voort, T. H. A. (1993b). Learning about television violence: The impact of a critical viewing curriculum on children’s attitudinal judgments of crime series. Journal of Research and Development in Education, 26(3), 133–142.
Wade, T. D., Davidson, S., & O’Dea, J. A. (2003). A preliminary controlled evaluation of a school‐based media literacy program and self‐esteem program for reducing eating disorder risk factors. International Journal of Eating Disorders, 33(4), 371–383.
Wallerstein, N. B., & Duran, B. (2006). Using community-based participatory research to address health disparities. Health Promotion Practice, 7(3), 312–323.
Webb, T., & Martin, K. (2012). Evaluation of a US school-based media literacy violence prevention curriculum on changes in knowledge and critical thinking among adolescents. Journal of Children and Media, 6(4), 430–449.
Webb, T., Martin, K., Afifi, A. A., & Kraus, J. (2009). Media literacy as a violence-prevention strategy: A pilot evaluation. Health Promotion Practice, 11(5), 714–722.
Wilksch, S. M., Durbridge, M. R., & Wade, T. D. (2008). A preliminary controlled comparison of programs designed to reduce risk of eating disorders targeting perfectionism and media literacy. Journal of the American Academy of Child and Adolescent Psychiatry, 47(8), 937–947.
Wilksch, S. M., Tiggemann, M., & Wade, T. D. (2006). Impact of interactive school-based media literacy lessons for reducing internalization of media ideals in young adolescent girls and boys. International Journal of Eating Disorders, 39(5), 385–393.
Wilksch, S. M., & Wade, T. D. (2009). Reduction of shape and weight concern in young adolescents: A 30-month controlled evaluation of a media literacy program. Journal of the American Academy of Child and Adolescent Psychiatry, 48(6), 652–661.
Yamamiya, Y., Cash, T. F., Melnyk, S. E., Posavac, H. D., & Posavac, S. S. (2005). Women’s exposure to thin-and-beautiful media images: Body image effects of media-ideal internalization and impact-reduction interventions. Body Image, 2(1), 74–80.
Yancey, A. K., Cole, B. L., Brown, R., Williams, J. D., Hiller, A., Kline, R. S., et al. (2009). A cross-sectional prevalence study of ethnically targeted and general audience outdoor obesity-related advertising. Milbank Quarterly, 87(1), 155–184.
Zoellner, J. M., Hedrick, V. E., You, W., Chen, Y., Davy, B. M., Porter, K. J., et al. (2016). Effects of a behavioral and health literacy intervention to reduce sugar-sweetened beverages: A randomized-controlled trial. International Journal of Behavioral Nutrition and Physical Activity, 13(1), 38–49.