Statistical Evidence in Health and Risk Messaging
Summary and Keywords
Persuasive messages use statistical evidence to convince an audience to accept a conclusion. Statistical evidence represents a compilation of experiences structured and collected in a manner that permits expression in mathematical form. Research demonstrates that the use of statistical evidence increases the persuasiveness of a message, and that a message combining statistical and narrative evidence is the most persuasive.
Statistical evidence can take the form of summarizing the collective opinion of experts on a topic or an expression of the collective set of experiences. The challenge becomes gaining acceptance of statistical expressions of experience versus what is perceived as the narrative or lived experience of the single person. Statistical evidence is often presented using a mathematical expression to indicate the size or force of the evidence.
The accumulation of statistical evidence often involves the use of meta-analysis to reduce Type I (false positive) and Type II (false negative) error. The use of evidence is strategic and can target specific elements of belief by understanding the structure of beliefs and the connectivity among elements. The use of the Subjective Probability Model provides a means to capitalize on the use of evidence by changing probabilities in beliefs to increase the effectiveness of a message campaign.
Statistical evidence, however, may be ineffective under circumstances referred to as the “base-rate fallacy.” The base-rate fallacy occurs when the presentation of statistical information is accepted, but examples are used that contradict the base-rate. The impact of the use of the example is to create a shift in the belief in the typicality of the example, despite knowledge of the base-rate.
Fear appeals provide a particularly useful and important application of statistical evidence in public health campaigns. The tenets of the Extended Parallel Process Model indicate that message effectiveness relies on a combination of: (a) perceived severity of the threat, (b) perceived vulnerability to the threat, (c) perceived efficacy of the solution, and (d) perceived personal efficacy of the solution. Each element is largely impacted by the application of statistical information to make claims. Statistics generally outline the argument and support the conclusion offered to the message recipient.
Statistical evidence when used in a message often offers data or information that becomes the justification for a conclusion. A large part of a message becomes gaining acceptance of information by an audience, then explaining (reasoning) to the audience how those facts support a conclusion, often involving some type of recommendation for behavior. Understanding statistical evidence requires understanding how the material functions within the context of the belief system of the individual.
Keywords: statistical evidence, subjective probability model, base rate fallacy, meta-analysis, expert opinion, narrative evidence, fear appeals, extended parallel process model, type I error, type II error
Persuasive messages rely on a combination of evidence and argument to justify the acceptance of a message conclusion. This article focuses on the contribution of statistical evidence in making a conclusion. The function of evidence is to provide support that would justify accepting the conclusion as valid. Evidence gives support by supplying experience that makes the position advocated by the message logical or reasonable based on a collection of information. Evidence functions as a source of information that provides the justification for the conclusion offered. Statistical evidence provides a particular form of proof related to the collection of information that is aggregated and then summarized using some mathematical representation.
In one model of argument, the Toulmin Model (Toulmin, 1959), the three major elements are: (a) data, (b) warrant, and (c) claim. The claim provides the conclusion or result of the argument that the message sender wants the audience to accept. The inevitable question someone can ask when a conclusion or claim is advanced is “why should the conclusion be accepted?” Data provides the information that the communicator argues serves as the basis for acceptance of the conclusion. The evidence (data) provided in the message is used by the warrant to justify acceptance of the conclusion. The term “warrant” represents the process of reasoning from the evidence that connects the evidence to the claim or makes the claim appear reasonable. For example, if I claim that global warming exists, I might point out that, of the ten warmest years on earth, eight took place in the last decade. The claim, I believe, is supported or justified on the basis of the evidence (fact) just stated. If one accepts the fact, then I must argue or reason that the fact justifies or makes reasonable the conclusion. This essay concerns how one particular kind of evidence or data, statistical evidence, functions to provide a basis for making a claim. Several different applications of statistical evidence serve as a basis for understanding how such evidence justifies a conclusion (claim).
Statistical evidence offers a collection of experiences summarized in mathematical form to provide the basis for a conclusion (Reinard, 1988). For example, one could say that 75% of the population wants to see the legislature increase the number of restrictions on the sale of handguns. The evidence assumes that this fact is relevant in justifying the conclusion and represents some process that provides a summary of experiences. Often, there may be arguments about the value of such evidence (e.g., if the sample for this statistic was provided by three of four persons at a gathering of progressive party members, there may exist a basis for a number of arguments about the authoritativeness of the evidence). Any statistic may be accurate, but the context and an understanding of how the information was generated may require articulation. The goal of offering statistical evidence becomes the generation of a sense of an accumulation of data presented in a simple form that justifies the conclusion offered. The question of whether the use of such evidence operates as an appropriate application and provides enough justification provides one basis of questioning the methodology and the implications of the conclusions.
Nonstatistical evidence usually offers examples, case studies, analogies, or other more singular instances that provide a justification for a conclusion (Reinard, 1988). The use of such evidence, often called “narrative” evidence, offers proof or support for a conclusion. The offering of a detailed case study or an analogy provides an experience, offered in depth or detail, as a means to justify a conclusion. A person who was a “witness,” present at some event or situation, may claim superior understanding, despite what other sources of experience (however accumulated) provide. The use of a single relevant or important example serves as a basis for attitude and behavioral change in many messages. Clearly, many criminal justice trials involve and place weight on what a witness saw or heard during the commission of a crime.
Two meta-analyses support the position that statistical evidence is more persuasive than narrative evidence (Allen & Preiss, 1997; Zebregs, van den Putte, Neijens, & de Graaf, 2015). The conclusion, however, comes with a suggestion, supported by a large empirical study, that the effect of evidence is additive (Allen et al., 2000). The additivity of evidence types indicates that the most persuasive message is one that combines both forms of evidence (narrative and statistical), as opposed to either form alone. The data indicates that one should not view statistical evidence as the only means of generating support; statistical evidence is most persuasive when combined with other forms of evidence and support (Kim, Allen, Preiss, & Peterson, 2014). Combining relevant examples with overall statistical evidence provides simple but effective advice for the generation of messages. Rather than viewing narrative and statistical evidence as oppositional, the combination of the two forms offers a means of mutual support and increases acceptance of the claim. The findings of the Zebregs et al. (2015) meta-analysis suggest that statistical evidence generates more impact on beliefs and attitudes, while narrative evidence generates larger impacts on behavioral intentions. The expanded and updated meta-analysis supports an additive view and begins to clarify why the combination of types of evidence proves most effective.
What follows in the discussion of statistical evidence is not a claim that all such evidence operates independently and apart from all other forms of proof; instead, statistical evidence offers one method of generating data that could be used in the justification for a claim. Part of the consideration of any evidence becomes the consistency with other forms of support for a claim. Clearly, when many forms of evidence reach the same conclusion, then the argument in favor of accepting the conclusion should be viewed as stronger. The current experimental and survey evidence fails to provide a complete understanding of the mechanism that produces the effect on message receivers.
The next part of this article considers the various types of statistical evidence that may exist, how the amount of statistical evidence changes effectiveness (with application to the subjective probability model), and issues in the quality of statistical evidence (particularly as it relates to meta-analysis). Finally, the article considers three issues in the application of statistical evidence to health and risk messages: (a) the base-rate fallacy, (b) elements of the use of fear appeals related to the message impacts of changing statistical evidence, and (c) the functional challenge of using statistical evidence within a health and risk message. The goal of the article is to provide those generating health and risk messages with guidance on issues to evaluate when examining any message strategy.
Types of Statistical Evidence
Different forms of evidence exist, and each provides a level of support for a claim. The question is whether one form of evidence provides more persuasive impact than other forms of evidence. The basis of the claim and the function of the claim play important roles in the process of providing a justification for the conclusion. The critical feature is to understand the relationship between the data or fact provided by the evidence and then the claim advanced to the audience.
Some statistical evidence provides support by demonstrating that other experts accept a position. For example, the popular slogan by one chewing gum company, “three out of four dentists recommend,” indicates that a person should accept the conclusion of others who serve as experts on some issue. In the case of this form of evidence, the reliance becomes a belief that expert consensus provides a basis for accepting an opinion. The same is true when persons argue for the acceptance of a human cause for global warming, stating that 90% of climate scientists have concluded that this association exists. In the case of global warming, the statistical evidence provides a summary of expert opinion, arguing that the majority of the relevant expert community accepts some conclusion. When formulating action or policy, if the majority of scientists believe that HIV infection leads to AIDS, the argument runs that a policy attempting to reduce the level of HIV infection becomes warranted. The function of the summary of expert opinion becomes a basis to demonstrate that the conclusions offered represent the best view of the community of experts.
Part of the frustration that many in the public health community face becomes the structure of media in representing health issues. Journalists work on an ethic of presenting the two sides of a story. What this means is that a story about immunizations and autism has the expert medical person arguing that the existing evidence supports no connection. The journalistic account often includes some parent or other nonexpert believing that the evidence actually supports a connection. Brainard, writing in the Columbia Journalism Review in 2013, points out how the expectation for “balanced” coverage sustained the bogus claim that childhood vaccines can cause autism. Even when the relevant expert community is nearly unanimous in rendering a verdict based on the empirical data, the practice of journalism operates to frustrate health and medical practice by providing a sense of controversy when none exists. The relevant argument, that the overwhelming majority (near unanimity) of the scientific community accepts a particular conclusion, is lost. The challenge of continuing to answer uninformed or biased challenges creates disbelief and reduces the effectiveness of public health efforts.
Another use of evidence to generate message acceptance is advocacy for an action or outcome likely based on previous experience. For example, arguing that 90% of a disease is preventable or curable with regular medical checkups provides an estimate to justify a practice based on the collected experience provided for in the statistic. The justification stems from the belief that the collected experience, expressed as a statistic, provides a basis for a conclusion. What many studies employ as a method involves the collection of experiences that are coded, using some systematic approach to understanding the record. The statistic functions as a means of summarizing all those experiences in a systematic manner for presentation as a fact that provides a justification for a conclusion (Allen & Kim, 2016).
The challenge becomes dealing with the independent example; the cases that are nonconforming to the general rule represent serious and important issues. For example, the scientific evidence indicates that childhood immunizations do not cause or relate to the incidence of autism. However, many parents will point to a child becoming diagnosed after some immunization and believe that the example of this particular child provides evidence of the connection. Even though the collected experience across thousands or millions of cases indicates that the connection does not exist, the parent maintains the conclusion that the immunizations caused the autism because direct experience with a child serves as sufficient evidence to refute the statistics. The tension between what a person feels and believes to be true versus what collected experience demonstrates focuses the challenge between the cognitive and the emotional. The problem of how individuals make sense of the world always operates against what some external agent provides as information.
Amount of Statistical Evidence
One consideration becomes what amount of evidence should be employed in making an argument. The question is, essentially, whether to provide one statistical analysis as opposed to many forms of statistical evidence when making an argument. When a large amount of evidence is used, each element providing support for the proof plays an important role in understanding the justification for a position. Research by Kim, Allen, and Cole (2016) indicates that using multiple statistical proofs in a message to make an argument may function additively. For example, one could argue that every professional organization dealing with the atmosphere, weather, and meteorology has endorsed the conclusion that human behavior (the consumption of fossil fuels increasing the level of carbon dioxide in the atmosphere) led to a rise in global temperature, justifying acceptance of the conclusion. If many case studies exist, however, the recitation of case after case becomes both tiresome and redundant; eventually, the impact is not cumulative but, instead, dull and boring. Instead, the need for simplicity and accuracy implies a method of incorporating multiple investigations into a single estimate or conclusion. Under these conditions, the statistical evidence becomes an inductive shorthand that summarizes existing examples for efficient presentation.
The evidence is linked in a set of reasoning for a chain of effects in some forms of argument. A question of reasoning becomes the linking of evidence in a causal chain, when one event leads to another event (outcome), which then causes another outcome. The formal test of a chain of arguments linked by evidence involves the subjective probability model (SPM; see Wyer, 1970). SPM argues that acceptance of the conclusion is affected by messages that change the probability of beliefs in the events that set the sequence in action.
The question of cause, and how events become linked, serves as a basis for understanding the world. Attitude change could focus on the perception of the links between events and increase the probability of some prior event having taken place, which would cause the outcome. Alternatively, if event A causes event B, then the message could target the probability of the connection between event A and event B. The cause (event A), when taking place, increases the probability of the outcome event (event B). Messages would increase the belief in the outcome event (event B), not by addressing event B directly, but indirectly, by increasing the belief in event A taking place (which causes event B) or by increasing the belief that event A will cause event B. The message takes an existing set of belief structures and provides a message to target one element of that system. The impact of targeting of that element is to increase belief in the outcome (event B) without ever mentioning the outcome.
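The chain logic described above follows the law of total probability. The sketch below, using hypothetical belief values, shows how a message that raises only the belief that a cause (event A) occurs indirectly raises belief in the outcome (event B), without the outcome ever being mentioned:

```python
# Sketch of the subjective probability model (Wyer, 1970), using
# hypothetical belief values. The perceived probability of an outcome
# (event B) follows from beliefs about a cause (event A):
#   P(B) = P(A) * P(B|A) + P(not A) * P(B|not A)

def outcome_belief(p_a: float, p_b_given_a: float, p_b_given_not_a: float) -> float:
    """Predicted belief in outcome B, given beliefs about cause A."""
    return p_a * p_b_given_a + (1 - p_a) * p_b_given_not_a

# Baseline beliefs (hypothetical): cause A is moderately likely,
# and A strongly raises the chance of outcome B.
before = outcome_belief(p_a=0.4, p_b_given_a=0.8, p_b_given_not_a=0.1)

# A message targets only the belief that A occurs (0.4 -> 0.7);
# belief in the outcome B rises even though B is never addressed.
after = outcome_belief(p_a=0.7, p_b_given_a=0.8, p_b_given_not_a=0.1)

print(f"belief in B before: {before:.2f}")  # 0.38
print(f"belief in B after:  {after:.2f}")   # 0.59
```

The same structure extends to longer chains by composing the function, which is how the model accommodates sequences such as A causes B causes C.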
The subjective probability model achieves a high degree of empirical validation when tested under a variety of conditions, messages, and probability levels (see work by Allen & Burrell, 1990; Allen, Burrell, & Egan, 2000). The model holds even when the chain is long (A causes B causes C causes D causes E) and when multiple causes for an event are considered (A causes C, and B causes C). The ability of persons to reason statistically and maintain accuracy when messages impact the level of belief has been consistently demonstrated. The findings indicate that participants remain able to reason statistically and consistently when working with a system of cause and effect. The importance of this for statistical evidence becomes the ability to demonstrate that the use of quantitative evidence, when believed, can predict the nature of the change in the outcomes associated with those causes. Rather than viewing statistical evidence as something persons are unable to process accurately, the evidence demonstrates that persons can and do maintain the ability to represent and incorporate statistics accurately.
Health messages often capitalize on the belief that various health outcomes are caused by some sequence of events. The ability to target a particular element in the sequence means that the fear of the outcome event takes place without mentioning directly the outcome event. Understanding the belief system of the target audience permits a wider and greater ability to generate favorable outcomes, using messages to change the understanding of how causes and effect become connected.
Quality of Statistical Evidence
Part of the issue in using statistical evidence is the problem of inconsistency in the available evidence for a particular issue or outcome. Suppose, for example, that ten studies exist; five of the outcomes demonstrate a significant association or effect, and five studies demonstrate no significant association. The cynicism or resistance to scientific study can be found in the old maxim, “if you lay ten economists end to end, they still could not reach a conclusion.” The challenge for many persons, particularly when dealing with health issues, becomes the seeming inability of scientific studies to consistently reach conclusions. The typical pattern of a press release has a study, finding some positive value for a practice, being contradicted by another study, finding no such positive benefit for the practice. The result of the inconsistency in empirical findings creates resistance to the acceptance of any particular finding, since the belief is that such a finding will be challenged or discarded in a short time.
Inconsistency in empirical research does not operate as the exception but instead generates the expected outcome of empirical research. Type I error (false positive) rates typically run at 5%, whereas Type II (false negative) error rates often run in excess of 50%, particularly in social science research (Allen, 1993, 1998, 1999, 2009). The problem of using or interpreting statistical evidence involves issues of inconsistency among findings generated by empirical investigations (Allen & Preiss, 1993, 2007).
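The scale of Type II error can be illustrated with a standard normal-approximation power calculation. The effect size and sample size below are hypothetical, chosen to show how an underpowered literature produces the half-significant, half-null pattern described above:

```python
import math

def phi(z: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def power_one_sample(d: float, n: int, crit_z: float = 1.96) -> float:
    """Approximate power of a one-sample z-test for standardized effect d."""
    # Expected location of the test statistic when the effect is real.
    noncentrality = d * math.sqrt(n)
    # Probability the statistic clears the (two-sided) critical value.
    return 1 - phi(crit_z - noncentrality)

# A small but real effect (d = 0.3) studied with n = 20 per study:
power = power_one_sample(d=0.3, n=20)
type_ii = 1 - power

print(f"power:           {power:.2f}")
print(f"Type II error:   {type_ii:.2f}")
```

With these values, most studies of the same true effect fail to reach significance, so a literature of "contradictory" findings is the statistically expected result rather than a sign that the effect is absent.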
The impact of the statistical problem becomes a source of inconsistency and a real problem for public health communication issues. Very often, a study will be announced on a morning or other news program, proclaiming the relationship between some element of behavior or practice (e.g., consumption of alcohol or coffee) and some health outcome (i.e., cancer, death). Often, within a few months or a year, a new study is announced indicating some outcome represented or perceived as inconsistent with the previously announced study finding. The challenge, for a person paying attention and consuming such stories, is that the advice appears inconsistent and difficult to meaningfully implement (Preiss & Allen, 1995, 2002, 2006).
The view that persons should take greater control of health care practice and become more assertive consumers becomes more difficult to sustain as research inconsistencies continue to grow. Multiple voices giving contradictory sets of advice create difficulty for the public and for public health professionals. Essentially, a study is released and represented in the media, portraying some practice as related to increasing or decreasing the probability of some outcome. Given the level of Type I and/or Type II error, the next study, released a few months later, may provide a very different, or at least inconsistent, conclusion from the previously reported investigation. The result of multiple statistical studies with contradictory outcomes, particularly related to health issues, provides the basis for confusion and the inability to act rationally. Strangely enough, the inconsistency becomes the basis for a problem precisely when persons try to rely on rationality and science as a basis for action (Allen & Preiss, 2014; Allen & Preiss, 1993).
The solution to the contradiction in the outcomes of empirical findings is a statistical technique known as meta-analysis. Meta-analysis provides a means of resolving inconsistencies in the empirical results of investigations that are caused by both random and systematic sources of error. The question of what assumptions exist about how research findings should be integrated plays an important role in representing the research. Essentially, meta-analysis provides a statistical solution to a set of inconsistencies (Type I or Type II error) generated as a result of underlying statistical assumptions played out across the entire terrain of the literature. The challenge is to generate a set of statistical outcomes that provides a basis for rational and scientific discourse.
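As a sketch of the basic mechanism, a fixed-effect (inverse-variance) meta-analysis pools study results by weighting each effect size by the inverse of its sampling variance. The five study results below are hypothetical; note that individually some would be "significant" and some not:

```python
import math

# Hypothetical study results: (effect size d, sampling variance).
studies = [
    (0.45, 0.040),
    (0.10, 0.050),
    (0.38, 0.030),
    (0.05, 0.060),
    (0.30, 0.045),
]

# Fixed-effect pooling: weight each study by the inverse of its variance,
# so more precise studies count for more.
weights = [1 / v for _, v in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)

# Standard error of the pooled estimate and a 95% confidence interval.
se = math.sqrt(1 / sum(weights))
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se

print(f"pooled d = {pooled:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

Real meta-analyses also address systematic sources of error (moderator variables, artifact corrections), but the pooling step alone shows how a set of individually inconsistent studies can yield a single coherent estimate.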
The assumption of empirical research involves the ability of the accumulated set of evidence to provide a better set of advice and options about what conclusions to draw from existing research. A simple question, like whether children should receive vaccines, or whether such medical advice risks adverse outcomes, provides a set of actions that must be considered. The argument about whether or not to administer childhood vaccines reflects a belief in the amount of evidence for a particular conclusion. When evidence or published research generates the perception of inconsistent findings, the actions undertaken by the population become less uniform and more diverse. The justification for the actions, as well as public policy recommendations, reflect confusion rather than knowledge. One of the roles of meta-analysis becomes the resolution of the inconsistencies to provide a coherent and unified representation of the existing literature. The improved rationality is related to increased effectiveness and efficiency in dealing with statistical evidence to generate a conclusion.
The Base-Rate Fallacy

The base-rate fallacy involves the acceptance or establishment of a base rate for some event in the mind of the audience. Base-rate fallacies, or errors in reasoning, fall into three categories: (a) errors in applying the base rate when encountering counterexamples, (b) errors in understanding changed data without accompanying information about the base rate, and (c) errors in understanding statistics involving change when definitions have been modified.
Errors in Application When Encountering Counterexamples
What happens is that a subsequent message provides evidence or an example that runs contrary to the base rate, and the opinion of the audience moves away from the base rate in favor of the inconsistent example provided in the message. Even when, in essence, a sample of persons knows better, the subsequent example becomes more persuasive or influential than the known overall statistical data. The participants remain able to provide the base-rate estimate and indicate continued belief in that value. The example, even though represented as a single instance, provides a basis for an error in judgment when its features are applied to the underlying reasoning.
The example often used is information that 75% of the recipients of governmental social aid programs for the poor are European Americans. However, a video message, based on a news report, portrays an aid recipient as an African American. The measurement of subsequent opinion reveals that the estimated percentage of persons receiving aid who are African American increases, despite the given evidence of the base-rate information. The argument or conclusion is fallacious because the failure of the person to apply the base rate in the presence of an inconsistent example constitutes an error in reasoning.
For public health campaigns, a corresponding example involves establishing a base rate of a disease that is 90% fatal; but then, the media reports a story on a person who is surviving the condition—essentially, the example of someone, diagnosed with a condition, who survives using some particular therapy or intervention. The impact of the story of the survivor reduces the perceived level of fatality or severity of the occurrence of the disease by an audience. A popular advertisement for a facility that treats cancer presents a patient who was cured of a cancer that is almost always fatal. The advertisement even indicates (in the fine print) that this example does not reflect a typical outcome, but the example provides a case study that runs contrary to the established or expected outcome known on the basis of experience. Asked about the fatality of the disease, persons would have reduced faith in the base-rate data, in favor of the belief in the direction of the counterexample (Allen, Preiss, & Gayle, 2006).
Evidence of this effect occurred within the gay male community, as people began to view HIV infection as a less serious condition once additional anti-retroviral treatments became established. The impact of advertising by drug companies demonstrated atypical stories inconsistent with the established base rates known to the population (Chigwedere & Essex, 2010). The result of these stories was a reduction in the concern or fear of the disease and a corresponding decrease in the utilization of prevention methods. As the perception of severity diminishes, the impact of this type of information is a reduced urgency of the need to take precautions to prevent the disease. The logic represents a problem for public health messages—improvements in treatment correspondingly reduce the fear of the disease and lower the motivation to adhere to any recommendations.
Efforts to provide information that runs contrary to established base-rate information typically employ examples. Whether intentional or incidental, the publicity or advertising of such examples, while offering hope and alternatives, changes the nature of the belief in the base rate. The implications of contrary information therefore require care and consideration.
Misapplication of a Base-Rate
A second issue in base-rate information involves a misunderstanding of the implications of statistical information. Suppose, for example, the popular press reports on a study published in a reputable medical journal finding that consuming some type of food is associated with a 50% increase in some disease or negative health outcome. The study is reported to involve over 100,000 persons and to consider health records over a ten-year period. The data may seem alarming and generate fear and a perceived need to change a behavior.
While such conclusions may appear warranted, an important piece of statistical information is left out of the above statement. Suppose, for example, that the base rate of diagnosis of the illness is 1 out of 100,000 persons. A 50% increase would mean that 1.5 out of 100,000 persons would be diagnosed with the disease. Considering a United States population of 350 million, the incidence of the disease goes from 3,500 persons diagnosed each year to 5,250 persons diagnosed each year. The modification of a major behavior, along with the accompanying impact, may or may not be warranted given the relatively low incidence of the disease. Basically, the percentage of impact provided by statistical forms like odds ratios (logistic regression) fails to incorporate the starting probability.
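The arithmetic in this example can be restated directly, making the gap between the relative ("50% increase") and absolute framing explicit:

```python
# Relative vs. absolute risk, using the numbers from the example above:
# a rare disease (1 per 100,000 per year) and a "50% increase" headline.

base_rate = 1 / 100_000        # annual incidence before the exposure
relative_increase = 0.50       # the headline relative-risk increase
population = 350_000_000       # approximate United States population

cases_before = base_rate * population
cases_after = base_rate * (1 + relative_increase) * population

print(f"cases before: {cases_before:,.0f}")              # 3,500
print(f"cases after:  {cases_after:,.0f}")               # 5,250
print(f"extra cases:  {cases_after - cases_before:,.0f}")  # 1,750
```

The same 50% relative increase applied to a common condition would mean a very different number of additional cases, which is why the starting probability matters.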
Changes in Definitions or Medical Procedures
A third application of understanding base-rate statistical information involves what happens as diagnostic ability or the definition of a disease changes over time. Comparing disease rates may be difficult when the definition of a disease or condition, or the ability to diagnose it, changes. Consider that the definition of “autism” now uses the term “autism spectrum” and includes many conditions that may not previously have been diagnosed or viewed as part of autism. Determining whether the incidence of a disease is changing, for whatever reason, requires a consideration of whether the definitions or diagnostic ability have undergone change (see, e.g., definitional changes over time for AIDS and low-birthweight infants). Unless there exists a means to go back and recalculate the incidence of events using the new definition or procedures, comparisons to old statistics become difficult and potentially misleading.
Unless historical data is recalculated using the changed standards, the ability to evaluate trends simply fails to exist. Statements about the changing rate of particular illnesses may reflect changes in the definition rather than in the underlying incidence of the condition, and caution is required when interpreting the outcomes. The impact may be an increase (or decrease) in fear or concern that remains unwarranted, because the only change that took place was a change in definition or in the ability to test for and diagnose some condition that was previously underdiagnosed or overdiagnosed. The change in the statistical incidence of some event may reflect the definitional change and not a real change in the probability of the event.
Issues involving public health are particularly susceptible to changing definitions applied to diagnosis. For example, a new test that is very sensitive to the existence of abnormal cells may increase the detection of cancer or precancerous growths. The result of using the test becomes a spike in the diagnosis of a condition, but not necessarily a change in the actual incidence of the disease. With any new disease or new diagnostic tool, the perception of change in the level of a disease may take place without an actual change in the frequency of a condition. The challenge becomes providing a perspective on the statistical data that maintains an understanding of the historical context within which to place the data.
Fear Appeals and Statistics
Fear, when used in a message, generates an emotional response to the perception of a threat; the message then provides a means to reduce or eliminate that threat. Using the extended parallel process model (EPPM; see Witte, 1992; Witte & Allen, 2000), the understanding of the fear appeal involves four elements of the message: (a) severity of the threat, (b) vulnerability to the threat, (c) efficacy of the proposed solution, and (d) personal efficacy of the proposed solution. The severity of the threat describes what outcome occurs if the threat becomes realized (death, loss of income, loss of freedom, etc.). Vulnerability to the threat describes the susceptibility of the target of the message to the threat. Efficacy of the proposed solution describes how effective the solution is likely to be at removing or eliminating the threat. Finally, the personal efficacy of the proposed solution provides information about the ability of a person to implement the solution to alleviate the threat.
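The interaction of the four elements can be sketched as a toy decision rule. This is an illustrative simplification of the EPPM logic just described, not Witte's formal model: the 1–7 scales, the additive combination, and the thresholds are all assumptions made for the sketch.

```python
# A toy sketch of the EPPM logic (after Witte, 1992). Illustrative only:
# the 1-7 scales, additive combination, and thresholds are assumptions,
# not part of the formal model.

def eppm_response(severity, vulnerability, response_efficacy, self_efficacy):
    """Classify a predicted reaction to a fear appeal.

    All inputs are perceived levels on an arbitrary 1-7 scale.
    """
    threat = severity + vulnerability
    efficacy = response_efficacy + self_efficacy
    if threat <= 4:                # low perceived threat: message ignored
        return "no response"
    if efficacy > threat:          # high threat met by higher efficacy:
        return "danger control"    # adopt the recommended action
    return "fear control"          # high threat, low efficacy: deny or avoid

# High threat with an accessible, effective solution -> danger control.
print(eppm_response(severity=7, vulnerability=6, response_efficacy=7, self_efficacy=7))
# Same threat, but the solution seems out of personal reach -> fear control.
print(eppm_response(severity=7, vulnerability=6, response_efficacy=7, self_efficacy=1))
```

The sketch captures the model's central claim: raising threat perceptions only helps when efficacy perceptions rise with them; otherwise the message drives fear control rather than behavior change.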
Consider that several of the elements are often described in statistical terms. For example, a message might state that “50% of males will experience some form of erectile dysfunction.” The statement provides a means to describe how common the condition is, or how likely a male is to experience it. The use of the statistical information may also establish that the experience of any single individual represents a common one, especially for conditions that a person may find embarrassing to admit. The vulnerability to the threat indicates the probability of the threat. The general application of the information involves an estimation of how likely the particular threat event is to impact the person who is the target of the message. A fear message, to increase effectiveness, needs to increase the perception that the threat applies to that person.
The severity often becomes expressed in statistical terms. For example, suppose that, for some disease (like Ebola), indications are that 80% of those untreated die after contracting the disease. The combination of the end result and the high probability of that consequence indicates the severity of contracting the disease. Unlike contracting a case of influenza or a cold, the impact of contracting Ebola becomes very severe. The statistical probability of the outcome thus provides a basis for indicating the severity of the threat.
The efficacy of a solution refers to the ability of some proposed action to counteract the threat. For example, the development of a vaccine for a disease may provide almost 100% immunity from contracting a serious case of infection. The vaccine provides a highly efficacious solution to the threat of the disease. Often, the efficacy of the solution is represented by some probability of success, like the probability of a successful cancer treatment resulting in five-year survival. The presentation of some type of improved outcome usually takes the form of a statistical representation.
The final issue in the use of a fear appeal in public health communication concerns the ability of a person to implement the solution. The personal efficacy of the solution provides the person receiving the message an indication of the ability to implement the solution. For example, a vaccine may prove 100% effective, but if the cost of vaccination lies beyond the financial ability of individuals, the solution lacks personal efficacy. Lack of access to finances may mean that only a few wealthy persons benefit, while most of the population remains unable to implement or participate in a prevention program.
The representation of the personal efficacy of a solution points out the degree to which a solution can be reasonably implemented by the population. For example, a free vaccine program available at any medical clinic or hospital in the United States provides close to a 100% chance of personal efficacy for implementation. One example of the changing efficacy of a public health policy involves the availability of abortion for women in the United States. Laws in some states set standards for medical practice that drastically reduce the number of clinics able to perform the procedure. What sometimes is reported is that only a single clinic in a state meets the new standard. The result of such policy is a “legal” practice that, while available, becomes inaccessible to most women, and the personal efficacy for using the procedure becomes very low. Any solution or option becomes less effective when access by the message receivers is unavailable, indicating a problem of implementation. Such inaccessibility undermines public health campaigns when the options are viewed as something that the message targets simply cannot implement.
Functional Challenge of Statistical Evidence
Evidence supports the conclusion as a justification for change. In the context of a fear appeal, the need to convince an audience of the severity or vulnerability of a threat often involves an invocation of statistical information. For example, the argument for a screening procedure may contain a statement about the percentage of persons who are diagnosed with a particular condition. The assumption is that the higher the percentage of persons diagnosed with the condition, the greater the perception of the threat. The use of statistics becomes a means to interpret or provide a perception of some element of the context. By changing the perception of the situation, the perception of the conclusion changes, and messages may become more or less effective.
The same logic applies to solutions or prevention procedures. Demonstrating that a person taking a particular drug is cured provides justification for undergoing the treatment. The problem of statistical evidence is its abstraction, or removal from the lived reality of the individualized story or enactment of circumstances. The counterweight offered by narrative evidence is its “realness,” rooted in the examination of something with names and circumstances. Statistical evidence can be distanced or ignored if persons view themselves as outside the norm or involved in some unique circumstance that makes the general rule inapplicable. The belief in exceptionalism, as applied to an individual, creates resistance to some public health messages. The findings of both a large-scale study (Allen et al., 2000) and a meta-analysis (Zebregs et al., 2015) indicate that the combination of statistical and narrative evidence may prove most effective in generating change in beliefs, attitudes, intentions, and behaviors of message receivers.
Statistical evidence offers advice for an “expected” set of outcomes and should be modified as circumstances or applications warrant. While the practice of persuasion is guided by scientific research, the generation of messages operates as an art. Messages still require artistry in the generation of statements that will move an audience to action. The inclusion of narrative evidence provides a means of making the statistical real by placing it within a context. The adage that one death constitutes a tragedy while a million deaths remain a statistic captures the need to combine both emotional and cognitive elements of the situation. Narrative analysis provides a means to interpret and understand the overall set of statistics associated with the particular process. Understanding the situation through a well-developed example provides details and knowledge about the process that may not be possible by simply presenting overall statistical features.
Meta-analyses can provide evidence that fear appeals (and the associated elements) work to change attitudes, intentions, and behaviors. What a meta-analysis fails to provide are the specific means, in a particular message targeted at a specific audience, to impact a specific behavior. The cultural elements of values and semantics require a creative art that applies the scientific knowledge in pursuit of an outcome. The need to translate the findings of any quantitative social science finding into the practical generation of a message remains a challenge (Preiss & Allen, 2006).
One of the principal limitations of meta-analysis stems from the lack of phronesis (equipment for living) that the statistics fail to generate (Allen & Preiss, 1997, 2007; Preiss & Allen, 1995, 2002, 2006). Understanding that high fear provides a more effective message, particularly for health risk messages, does not supply the content of a particular message. Knowing how the message elements work provides very useful information but does not tell the message writer how to generate the elements of that message. The generation of the message requires an understanding of the audience and the context, viewed from the art of message construction.
Understanding the Use of Statistical Evidence in Health Communication Messages
There exist many forms of evidence and proof that an advocate can offer for a claim. Offering statistical evidence in support of a conclusion provides an important example of offering support for a claim. The problem with statistics is the lack of emotional connection that a narrative or a case study example provides in support of a conclusion. Statistical evidence offers the potential to demonstrate the normal or expected outcome of some action or condition. The statistic, representing the sum of a vast number of experiences, provides a simple and effective means of creating a norm for the existing narratives or examples.
Even when the statistical or scientific evidence seems clear to the medical community, the narrative or experienced reality may substitute for that evidence. The best use of statistical evidence pairs it with the circumstances of a lived reality, through which statistical analysis can provide a justification for action. The example of a person impacted by a successful treatment or diagnosis gives understanding to an existing statistic. The focus should not treat the persuasive effort as one that chooses between statistics and other forms of proof. Instead, the use of statistics should be combined with other forms of proof to provide a larger picture that joins the expected norm with the emotional attachment.
Discussion of the Literature
The existing literature examining the impact of statistical evidence involves essentially two primary types of manuscripts: (a) empirical tests comparing message strategies, and (b) literature summaries using meta-analyses. The empirical tests use as a prototype design some message content (like participation in colon cancer screenings or HPV immunizations) presented in multiple versions of the same message. Each version contains a different form of evidence as part of the appeal. Using independent-group designs, each group receives one version of the message and then responds to some type of dependent measure (attitude, behavioral intention, behavior). The comparison of message types determines which message provides the most persuasive option.
Meta-analysis provides an examination of the accumulation of existing literature to evaluate an entire body of research to establish some claim (Allen, 2009). Unlike individual studies that may rely on small samples, the meta-analysis combines samples to provide the basis to fundamentally reduce error in the estimation of the parameters (Schmidt & Hunter, 2014). The ability to make claims that transcend the limitations of individual studies provides the basis for universal claims of empirical generalizability, and for generation of more authoritative claims about the basis for additional research.
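The error-reduction claim can be illustrated with the standard inverse-variance (fixed-effect) pooling technique. The sketch below shows the general method described by Schmidt and Hunter (2014); the effect sizes and standard errors are hypothetical values chosen for illustration:

```python
# A minimal sketch of fixed-effect meta-analytic pooling by
# inverse-variance weighting. The study values below are hypothetical.
import math

def fixed_effect(effects, std_errors):
    """Combine per-study effect sizes into a pooled estimate and its SE."""
    weights = [1 / se**2 for se in std_errors]   # more precise studies weigh more
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))      # shrinks as studies accumulate
    return pooled, pooled_se

# Three small hypothetical studies, each too noisy to settle the question alone.
effects = [0.30, 0.10, 0.25]
std_errors = [0.15, 0.20, 0.12]
d, se = fixed_effect(effects, std_errors)
print(f"pooled d = {d:.3f}, SE = {se:.3f}")
# The pooled SE is smaller than any single study's SE, which is the
# sense in which combining samples reduces Type I and Type II error.
```

The pooled standard error is always smaller than the smallest per-study standard error, which formalizes the claim that accumulation across studies reduces estimation error.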
The consideration of message topic or context is an issue when considering generalization for any particular claim about statistical evidence. One of the implications for meta-analysis becomes the consideration of whether or not a separate analysis becomes necessary for each separate medical condition or disease. For example, suppose a meta-analysis exists dealing with skin cancer and the evaluation of a public health campaign to promote the need to wear protective clothing. Any finding of the meta-analysis addressing the use of fear appeals may or may not generalize to research using the same evidence strategy to persuade persons to undergo a colonoscopy. The same issue could exist for other medical conditions (breast cancer mammograms, immunizations, yearly physical examinations, etc.). A central question becomes whether or not a finding for one condition about a message strategy will work for other conditions.
One example of a finding that may not generalize to a specific condition is the use of one-sided or two-sided message appeals. Meta-analyses find that, in general, two-sided refutational messages are more persuasive than one-sided messages (Allen, 1991; Allen et al., 1990). One exception to the general finding does exist. Public health messages for organ donation demonstrate increased persuasiveness for one-sided messages when compared to two-sided refutational messages. The reason is that the other side of the argument involving organ donation addresses the fear of premature organ removal. Essentially, the argument in favor of organ donation becomes the ability to help others after the death of the donor. The other side represents a fear about the possibility of negative consequences of signing the organ donation card, which the message refutes and rejects as a basis for a conclusion (Kopfman, Smith, Ah Yun, & Hodges, 1998). Simply raising that fear, however, seems to generate opposition that no amount of refutation can reassure or reduce (Ford & Smith, 1991), making the one-sided message more effective. While other message conditions may exist that violate the general rule, this is a clear case of a message condition running contrary to the general rule that two-sided refutational messages generate larger amounts of persuasion.
The next stage of research examines how evidence works in combination with other potential message elements as potentially additive factors. While the inclusion of statistical evidence may increase the persuasiveness of the message, other factors—like message source credibility, fear, and counter-attitudinal advocacy—play potential roles in determining the impact of the inclusion of statistical evidence in a message. The question of parsimony plays a role in the understanding of message design and analysis because messages seldom consist of a single appeal or set of arguments, reasoning, and evidence. A message provides a set of claims that come from an identified source within a particular context as applied to some set of arguments.
The problem of generalizing requires a broader approach to understanding the persuasiveness of messages. Consider that the use of statistical evidence provides one element for understanding the persuasiveness of a message. The next step in the examination of statistical evidence generates the need to provide a more holistic view of the message. The link between how statistical evidence functions within a system of arguments and claims provides the basis for the next step in research. Health messages dealing with risk and prevention are probably the largest single source of application for the discipline of communication for research and analysis. Public health campaigns provide a simple but important set of messages with very identifiable and desirable outcomes for the message creator.
The underlying theories used in this context are related to the Theory of Reasoned Action, Theory of Planned Behavior, and the Elaboration Likelihood Model. Each theory deals, to some extent, with how evidence would be processed to change opinions. The distinctions for each theory and the associated research play important roles in understanding the influence of statistical evidence.
The Theory of Reasoned Action (TRA) and the Theory of Planned Behavior (TPB) both fall into the same family of theories, sharing the assumption that humans engage in a form of internal reasoning that follows a sequence from attitude to behavioral intention to behavior (Sheeran & Taylor, 1999). Essentially, the assumption is that manifested behavior is consistent with attitude. The idea that attitudes predict behaviors plays an important role in the approach to understanding messages. Attitudes serve as the valence by which a person begins to evaluate the acceptability of behaviors and forms intentions about what behaviors to enact. Empirical evidence provides substantial support for this model across a large number of meta-analyses, many of them involving health message issues. The role of statistical evidence becomes that of information that contributes to the formation of attitudes. In particular, the theory makes reference to social norms of behavior as a basis for action, and statistical information can indicate the norms for both expert opinion and accepted practice. The use of statistics to indicate how most persons and/or experts assess the circumstances provides the ability to indicate what constitutes the accepted or expected normal attitudes and behavior. A number of experimental investigations indicate support for this conclusion.
The main negative reaction, reactance, occurs when the person who receives the message believes that the message sender is attempting to restrict the freedom of the receiver. The response to the message involves anger and frustration with the message content. The message receiver views the persuasive attempt as an illegitimate use of evidence and norms in an effort to persuade.
The elaboration likelihood model (ELM), developed by Petty and Cacioppo (1986), indicates that attitude change occurs through two routes of persuasion: central or peripheral. The central route operates through message elaboration, where the message recipient thinks about the message content. Attitude change through the central route is, on average, greater and more permanent. The use of statistical evidence as a means to generate attitude change should generate elaboration as the message recipient starts to think about the information and integrates it into the belief system.
The peripheral route of attitude change involves the use of message cues that, while responded to as positive or negative, do not invoke message elaboration. Often, the peripheral route considers issues that are emotional and situational (like message source credibility). The impact of peripheral route processing leads to the production of temporary attitude change and requires little cognitive effort.
ELM has generated substantial evidence consistent with various tenets or expectations related to cognitive processing of information. However, a number of expectations, when subjected to tests using meta-analysis, find the model lacking. The underlying account of message processing within ELM may have some element of validity, but the dual routes of processing remain difficult to sustain given the empirical evidence.
The use of statistical evidence in public health campaign messages remains a desirable method for generally changing attitudes and increasing acceptance of behavioral recommendations. However, such general advice is far from universal or certain. Examples like the base-rate fallacy indicate that human reasoning, while displaying some element of rationality and logic (as evident in Subjective Probability Model data), remains far from infallible. The conclusion to be drawn from the available data is a general recommendation that statistical evidence improves message persuasiveness. Such a conclusion, like many in the sciences, carries no universal guaranteed outcome, but it is a recommendation that should raise the average expected value of a message.
Resources to Understand Issues in Statistical Evidence
Allen, M., & Preiss, R. (1997). Comparing the persuasiveness of narrative and statistical evidence using meta-analysis. Communication Research Reports, 14, 125–131.
Allen, M., Preiss, R. W., & Gayle, B. M. (2006). Meta-analytic examination of the base-rate fallacy. Communication Research Reports, 23, 1–7.
Baesler, E. J. (1997). Persuasive effects of story and statistical evidence. Argumentation and Advocacy, 33, 170–175.
Reinard, J. C. (1988). The empirical study of the persuasive effects of evidence. Human Communication Research, 15, 3–59.
Casey, M., Allen, M., Emmers-Sommer, T., Sahlstein, E., DeGooyer, D., Winters, A., et al. (2003). When a celebrity contracts a disease: The example of Earvin “Magic” Johnson’s announcement that he was HIV positive. Journal of Health Communication, 8, 249–266.
Dillard, J. P., & Shen, L. (2005). On the nature of reactance and its role in persuasive health communication. Communication Monographs, 72, 144–188.
Jacobs, S., Allen, M., Jackson, S., & Patrell, D. (1985). Can ordinary actors recognize a logical conclusion if it comes up and bites them on the butt? In J. Cox, M. Sillars, & G. Walker (Eds.), Argument and social practice: Proceedings of the fourth SCA/AFA conference on argumentation (pp. 665–674). Annandale, VA: Speech Communication Association.
Kim, S., Allen, M., Gattoni, A., Grimes, D., Herrman, A.M., Huang, H., et al. (2012). Testing an additive model for the effectiveness of evidence on the persuasiveness of a message. Social Influence, 7, 65–77.
Kitchen, P., Kerr, G., Schultz, D., McColl, R., & Pals, H. (2014). The elaboration likelihood model: Review, critique, and research agenda. European Journal of Marketing, 48, 2033–2050.
Lane, R., Miller, A. N., Brown, C., & Vilar, N. (2013). An examination of the narrative persuasion with epilogue through the lens of the Elaboration Likelihood Model. Communication Quarterly, 61, 431–445.
Sheeran, P., & Taylor, S. (1999). Predicting intentions to use condoms: A meta-analysis and comparison of Theories of Reasoned Action and Planned Behavior. Journal of Applied Social Psychology, 29, 1624–1675.
Turkiewicz, K. L., Allen, M., Venetis, M., & Robinson, J. D. (2014). Observed communication between oncologists and patients: A causal model of communication competence. World Journal of Meta-Analysis, 2(4), 186–193.
Zhang, J., Chen, G. M., Makana, C. T., Wang, Y., Ni, L., & Schweisberger, V. (2016). A psychophysiological study of processing HIV/AIDS public service announcements: The effects of novelty appeals, sexual appeals, narrative versus statistical evidence, and viewer’s sex. Health Communication, 31, 853–862.
References
Allen, M. (1991). Meta-analysis comparing effectiveness of one and two-sided messages. Western Journal of Speech Communication, 55, 390–404.
Allen, M. (1993). Critical and traditional science: Implications for communication research. Western Journal of Communication, 57, 200–209.
Allen, M. (1998). Methodological considerations when examining a gendered world. In D. Canary & K. Dindia (Eds.), Handbook of sex differences & similarities in communication: Critical essays and empirical investigations of sex and gender in interaction (pp. 427–444). Mahwah, NJ: Lawrence Erlbaum.
Allen, M. (1999). The role of meta-analysis for connecting critical and scientific approaches: The need to develop a sense of collaboration. Critical Studies in Mass Communication, 16, 373–379.
Allen, M. (2009). Meta-analysis. Communication Monographs, 76, 398–407.
Allen, M., Bruflat, R., Fucilla, R., Kramer, M., McKellips, S., Ryan, D., et al. (2000). Testing the persuasiveness of evidence: Combining narrative and statistical evidence. Communication Research Reports, 17, 331–336.
Allen, M., & Burrell, N. (1990). Resolving arguments accurately. Argumentation, 4, 213–221.
Allen, M., Burrell, N., & Egan, T. (2000). Effects with multiple causes: Evaluating arguments using the subjective probability model. Argumentation and Advocacy, 37, 109–116.
Allen, M., Emmers-Sommer, T., & Crowell, T. (2002). Couples negotiating safer sex behaviors: A meta-analysis of the impact of conversation and gender. In M. Allen, R. Preiss, B. Gayle, & N. Burrell (Eds.), Interpersonal communication research: Advances through meta-analysis (pp. 263–280). Mahwah, NJ: Lawrence Erlbaum.
Allen, M., Emmers-Sommer, T. M., D’Alessio, D., Timmerman, L., Hanzal, A., & Korus, J. (2007). The connection between the physiological and psychological reactions to sexually explicit materials: A literature summary using meta-analysis. Communication Monographs, 74, 541–560.
Allen, M., Hale, J., Mongeau, P., Berkowitz-Stafford, S., Stafford, S., Shanahan, W., et al. (1990). Testing a model of message sidedness: Three replications. Communication Monographs, 57, 275–291.
Allen, M., & Kim, S. (2016). Meta-analysis. In C. Berger & M. Roloff (Eds.), International Encyclopedia of Interpersonal Communication (pp. 1–6). New York: Wiley.
Allen, M., & Preiss, R. (1993). Replication and meta-analysis: A necessary connection. Journal of Social Behavior and Personality, 8, 9–20.
Allen, M., & Preiss, R. (1997). Comparing the persuasiveness of narrative and statistical evidence using meta-analysis. Communication Research Reports, 14, 125–131.
Allen, M., & Preiss, R. (2007). Media, messages, and meta-analysis. In R. Preiss, B. Gayle, N. Burrell, M. Allen, & J. Bryant (Eds.), Mass media effects research: Advances through meta-analysis (pp. 15–30). Mahwah, NJ: Lawrence Erlbaum.
Allen, M., & Preiss, R. W. (2014). Meta-analysis and conflict research. In N. Burrell, M. Allen, B. Gayle, & R. Preiss (Eds.), Managing interpersonal conflict: Advances through meta-analysis (pp. 7–21). New York: Routledge.
Allen, M., Preiss, R. W., & Gayle, B. M. (2006). Meta-analytic examination of the base-rate fallacy. Communication Research Reports, 23, 1–7.
Allen, M., & Reynolds, R. (1993). The Elaboration Likelihood Model and the sleeper effect: An assessment of attitude change over time. Communication Theory, 3, 73–82.
Allen, M., Timmerman, L., Ksobiech, K., Valde, K., Gallagher, E. B., Hookham, L., et al. (2008). Persons living with HIV: Disclosing to partners. Communication Research Reports, 25, 192–199.
Bradford, L., Allen, M., Casey, M., & Emmers-Sommer, T. (2002). A meta-analysis examining the relationship between Latino acculturation levels and HIV/AIDS risk behaviors, condom use, and HIV/AIDS knowledge. Journal of Intercultural Communication Research, 31, 167–180.
Brainard, C. (2013). Sticking with the truth: How “balanced” coverage helped sustain the bogus claim that childhood vaccines can cause autism. Columbia Journalism Review.
Chigwedere, P., & Essex, M. (2010). AIDS denialism and public health practice. AIDS and Behavior, 14, 237–247.
Emmers-Sommer, T., & Allen, M. (1999). Surveying the effect of media effects: A meta-analytic summary of the media effects research in Human Communication Research. Human Communication Research, 25, 478–497.
Ford, L. A., & Smith, S. W. (1991). Memorability and persuasiveness of organ donation message strategies. American Behavioral Scientist, 34, 695–711.
Hunter, J., Hamilton, M., & Allen, M. (1989). The design and analysis of language experiments in communication. Communication Monographs, 56, 341–363.
Kim, S., Allen, M., & Cole, A. W. (2016). Testing the evidence effect of Additive Cues Model (ACM). Studies in Communication Sciences. Corrected proof available online.
Kim, S., Allen, M., Preiss, R. W., & Peterson, B. (2014). Meta-analysis of counterattitudinal advocacy data: Evidence for an additive cues model. Communication Quarterly, 62, 607–620.
Kim, S., Levine, T. R., & Allen, M. (2014). The intertwined model of reactance for resistance and persuasive boomerang. Communication Research, 41, 1–21.
Kopfman, J. E., Smith, S. W., Ah Yun, J. K., & Hodges, A. (1998). Affective and cognitive reactions to narrative versus statistical evidence organ donation messages. Journal of Applied Communication Research, 26, 279–300.
Petty, R. E., & Cacioppo, J. T. (1986). Communication and persuasion: Central and peripheral routes to attitude change. New York: Springer-Verlag.
Preiss, R., & Allen, M. (1995). Understanding and using meta-analysis. Evaluation & the Health Professions, 18, 315–335.
Preiss, R., & Allen, M. (2002). Preface: On numbers, narratives, and insights regarding interpersonal communication. In M. Allen, R. Preiss, B. Gayle, & N. Burrell (Eds.), Interpersonal communication research: Advances through meta-analysis (pp. ix–xvii). Mahwah, NJ: Lawrence Erlbaum.
Preiss, R., & Allen, M. (2006). Meta-analysis, classroom communication, and instructional processes. In B. Gayle, R. Preiss, N. Burrell, & M. Allen (Eds.), Classroom communication and instructional processes: Advances through meta-analysis (pp. 3–14). Mahwah, NJ: Lawrence Erlbaum.
Schmidt, F., & Hunter, J. (2014). Methods of meta-analysis: Correcting for error and bias in research findings (3d ed.). Beverly Hills, CA: SAGE.
Toulmin, S. (1959). Uses of argument. Cambridge, U.K.: Cambridge University Press.
Venetis, M. K., Robinson, J. K., Turkiewicz, K. L., & Allen, M. (2009). An evidence base for patient-centered cancer care: A meta-analysis of studies of observed communication between cancer specialists and their patients. Patient Education and Counseling, 77, 379–383.
Witte, K. (1992). Putting the fear back into fear appeals: The extended parallel process model. Communication Monographs, 59, 329–349.
Witte, K., & Allen, M. (2000). A meta-analysis of fear appeals: Implications for effective health campaigns. Health Education & Behavior, 27, 591–615.
Wyer, R. S. (1970). Quantitative prediction of belief and opinion change: A further test of the subjective probability model. Journal of Personality and Social Psychology, 16, 559–570.
Zebregs, S., van den Putte, B., Neijens, P., & de Graaf, A. (2015). The differential impact of statistical and narrative evidence on beliefs, attitudes, and intention: A meta-analysis. Health Communication, 30, 282–289.