Patterns of Reasoning
Summary and Keywords
Health professionals and the public puzzle through new or controversial issues by deploying patterns of reasoning that are found in a variety of social contexts. While particular issues and vocabulary may require field-specific training, the patterns of reasoning used by health advocates and authors reflect rhetorical forms found in society at large. The choices made by speakers often impact the types of evidence used in constructing an argument. For scholars interested in issues of policy, attending to the construction of arguments and the dominant cultural modes of reasoning can help expand the understanding of a persuasive argument in a health context. Argumentation scholars have been attentive to the patterns of reasoning for centuries. Deductive and inductive reasoning have been the most widely studied patterns in the disciplines of communication, philosophy, and psychology. The choice of reasoning, from generalization to specific case or from specific case to generalization, is often portrayed as an exclusive one. The classical pattern of deductive reasoning is the syllogism. Since its introduction to the field of communication in 1957, the Toulmin model has been the most impactful device used by critics to map inductive reasoning. Both deductive and inductive modes of argumentative reasoning draw upon implicit, explicit, and affective reasoning. While the traditional study of reasoning focused on the individual choice of a pattern of reasoning to represent a claim, in the last 40 years, there has been increasing attention to social deliberative reasoning in the field of communication. The study of social (public) deliberative reasoning allows argument scholars to trace patterns of argument that explain policy decisions that can, in some cases, exclude some rhetorical voices in public controversies, including matters of health and welfare.
Deductive reasoning is the use of logical syllogism to reach a necessarily valid conclusion, applying a general principle to the particular case. Deduction is characterized by reasoning from a set of premises, with the conclusion arising from two factors: the truth or falsity of the premises, and the construction of the argument. A classic example of deductive reasoning has two premises, with an undeniable conclusion:
- All people are mortal.
- Socrates is a person.
- Therefore, Socrates is mortal.
This example is both valid and sound. Validity refers to whether or not the structure of the argument supports a necessarily true conclusion, if the premises are all true. Soundness describes just such a situation, where the argument is valid, and the premises are in fact true (Hacking, 2001). Rhetorical reasoning of this type has its roots in the works of Aristotle, most prominently in the Prior Analytics (Striker, 2009). While Aristotle was the first to systematize reasoning, our modern understanding of Aristotelian syllogistic logic is a synthesis of his work and the later work of Stoic logicians, which incorporated conditional statements (If P, then Q), conjunctions (A and B), and other logical operators (Striker, 2009).
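The distinction between validity and soundness can be sketched computationally. In the minimal Python sketch below, the domain and its members are hypothetical stand-ins; the set relations model the two premises of the Socrates syllogism, and the assertion shows that whenever both premises hold, the conclusion cannot fail:

```python
# Premise 1 (major): all people are mortal  ->  people is a subset of mortals
# Premise 2 (minor): Socrates is a person   ->  "Socrates" in people
# Conclusion:        Socrates is mortal     ->  "Socrates" in mortals
# The domain below is an illustrative, hypothetical set of individuals.

people = {"Socrates", "Plato", "Hypatia"}
mortals = people | {"Fido"}              # everything in this toy world is mortal

premise_1 = people <= mortals            # True: all people are mortal
premise_2 = "Socrates" in people         # True
conclusion = "Socrates" in mortals

# Validity as material implication: if both premises are true,
# the set relations guarantee the conclusion.
assert (not (premise_1 and premise_2)) or conclusion
print(conclusion)  # True
```

Soundness is the further, empirical question of whether the premise sets actually describe the world; the code can only exhibit the formal guarantee.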
Deductive reasoning has long been recognized for the capacity to create undeniable conclusions and was held up by Aristotle as the ideal form of reasoning. The process of logical disputation could provide a demonstration (apodeixis) of truth, rather than creating a more circumspect view (Kennedy, 2007). This pursuit of universal and timeless truths created through deduction has been the province of Western analytic philosophy for more than 2,000 years (Lakoff & Johnson, 1999). In this sense, deductive reasoning is a property of arguments and applies solely to message design, but there are elements of deductive reasoning that can be studied for how messages are processed as well. The study of deductive reasoning continues, but there is now a thriving psychological and emerging neurological perspective on this pattern of reasoning, with the attendant implications for message processing.
Psychological analysis of deductive reasoning has incorporated an appreciation for the way deductive activity informs a human reasoning process that often fails to achieve idealized logical coherence. In other words, people aren’t perfectly rational, but nevertheless, they can engage in deductive reasoning (Evans, 2014). Initial psychological studies troubled the philosophical classification of deduction as distinct from other, less certain forms of reasoning, including induction and implicit conclusion drawing (Evans, 2014). Generally, psychological approaches posit a difference between reasoning and argumentation; the former describes the psychological processes of an individual working through a logical sequence, while the latter describes the form of the logical sequence(s) (Rips, 1994). This is a pivotal difference in understanding why even a properly designed deductive message may be processed using implicit or background information, rather than with the given premises.
Psychological perspectives on deduction also illuminate how individuals arrive at logical conclusions, and when they attribute logic to others. For example, individuals have strong beliefs in the deductive, and thus rational, powers of others (Rips, 1994). Deductive reasoning patterns provide a foundational basis for intersubjective interactions and create expectations for individuals’ own behavior. Additionally, psychological accounts of reasoning point towards a mixed, two-process pattern of cognition, where deduction relies more heavily on analytic capabilities, while induction is driven by heuristic assessments (Heit & Rotello, 2010). As reasoning processes are sped up or pressured, however, individuals rely more heavily on background information or heuristics to make judgments, even in cases of deduction (Heit & Rotello, 2010). While psychological study of deductive process is largely done through social science methods employing surveys and behavioral studies, there is an emerging strand of research into the neuroscientific process of deductive reasoning.
Neuroscience has offered some concrete demonstrations of deductive reasoning in specific regions of the brain, though there isn’t consensus even in that field of inquiry. Early neuroscientific study of deductive reasoning centered on two competing hypotheses: (a) logical reasoning is primarily centered in the areas of the brain that control language processing, and (b) logical reasoning is primarily centered in the areas of the brain that control visuospatial processing (Goel, 2007). Study participants were exposed to simple deductive reasoning examples and, using early positron emission tomography (PET), researchers were able to track the reasoning centers of the brain. Different studies have shown different results, however, depending on the context and method of study (Goel, 2007; Prado, Chadha, & Booth, 2011). Meta-analyses of the differences have revealed that additional disambiguation along the lines of familiar and unfamiliar information may support a more limited set of conclusions. Familiar material is assessed and analyzed in the left lateralized frontal-temporal centers of the brain (conceptual/language center), while unfamiliar material primarily activates the bilateral parietal (visuospatial) systems (Goel, 2007). Different deductive processes can also be traced to differing regions of the brain, with the locus of activity in the left hemisphere. Dividing studies into the type of argument that participants are asked to evaluate creates additional distinctions in the region of the brain employed. Categorical (all As are Bs), propositional (if P, then Q), and relational (Beyoncé is taller than Becky) arguments all activate different areas. Specifically, these types of arguments correspond to the inferior frontal gyrus (categorical), the bilateral precentral gyrus (propositional), and the left parietal cortex (relational) (Prado et al., 2011).
Finally, there is also a demonstrable difference in the speed of processing between deductive patterns of reasoning and probabilistic ones. Specifically, individuals typically process deductive messages more quickly than probabilistic ones (Malaia, Tommerdahl, & McKee, 2015).
In the health and risk messaging context, deductive reasoning plays a strong role in message creation and design, but an overreliance on this pattern of reasoning can undermine message processing. Scientists typically engage in both inductive and deductive reasoning as part of their research process. Deductive reasoning plays a role in the creation of research hypotheses (as they apply general knowledge to specific cases), while induction is useful for extrapolating results from a series of case studies. Focusing too heavily on the deductive aspect of science (with the capacity to create certainty) can, however, prompt the media to assign a more sensationalist tone to stories regarding health risks or advances in research (Willis, Willis, & Okunade, 1997). In some cases, the presence of a public understanding of science that conflates scientific inquiry with deductive certainty has created problems for attuning the public to probabilistic risks. Climate change skepticism and inaction can be explained in part by the public’s belief in the infallibility of the deductive powers of science, which creates a perception that probabilistic assessments by science must be, by their very nature, not worth acting upon (Lessl, 2008; Mosley-Jensen, 2011). These limitations with deductive reasoning in public health messaging provide a clear warrant for exploring other reasoning patterns, including informal logic and inductive reasoning, an approach that conforms more closely with observed patterns of human activity (Cummings, 2012).
Inductive reasoning is the use of specific information to reach a general conclusion. Inductive reasoning relies on empirical observation or data to draw a correlation, making a probabilistic claim about the world (Smith, 2003). This isn’t to suggest that inductive reasoning generally fails, or that it is unreliable, simply that it doesn’t provide the same certainty as deductive methods. Message design will typically involve at least some inductive reasoning, as the scientific process is thoroughly bound up with this reasoning pattern. For example, experimentation utilizes inductive reasoning to reach provisional conclusions about observable phenomena (Willis et al., 1997). Additionally, risk messaging involves inductive appeals because individuals must often make judgments without dispositive evidence in support of their actions. Health decisions invariably involve weighing probabilities, and so induction is vitally important to message design and processing in this context.
Scholarly treatments of induction follow methods similar to those applied to deduction, focusing on the structure of a formal argument with explicit premises and conclusions. In each case, these are propositional statements, or statements with a truth-value (Hacking, 2001). Induction can be expressed using logical operators, with inputs such as background knowledge (K), observational statement (S), and inductive hypotheses (H), where H expresses a valid induction if, and only if, it is a generalization of S and is consistent with K (Orłowska, 1986). These expressions of induction can be assessed according to their soundness and validity, in much the same way that deductive arguments can be assessed along those lines. Analytic philosophical approaches to inductive reasoning focus on the sets of objects and their attributes that constitute certain associations, creating mechanisms for assessing the relational aspect of background knowledge, observational statements, and inductive hypotheses (Orłowska, 1986).
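The two-part condition on H (generalizing S while remaining consistent with K) can be rendered as a toy program. In the Python sketch below, the observations, the candidate hypothesis, and the background constraint are all hypothetical illustrations, not drawn from Orłowska's formalism itself:

```python
# Toy rendering of the induction condition: a hypothesis H counts as a
# valid induction iff it (a) generalizes the observations S and
# (b) is consistent with background knowledge K.
# S, H, and K below are illustrative, hypothetical examples.

S = {2, 4, 8, 16}                      # observational statements

def H(x):                              # candidate hypothesis: "x is a power of two"
    return x > 0 and (x & (x - 1)) == 0

K = [lambda h: h(0) is False]          # background knowledge: 0 is not in the class

generalizes = all(H(s) for s in S)             # H covers every observation
consistent = all(check(H) for check in K)      # H violates no background constraint

valid_induction = generalizes and consistent
print(valid_induction)  # True
```

The conclusion remains probabilistic in the sense the text describes: a future observation outside S (say, 10) could falsify H without any flaw in the inference procedure.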
Induction can also be described as a pattern of “informal” reasoning, informal because it occupies a perspective distinct from the strictures of formal logical systems. Informal methods of evaluation are not without specific criteria, but these criteria offer a qualitative approach towards reasoning and argumentation. When examining the argument that an inductive message postulates, it can be broken down into six parts: claim, data, warrant, backing, qualifier, and rebuttal (Toulmin, 2003). The claim represents the assertion made in the message, for example “smoking tobacco increases the likelihood that an individual will develop lung cancer.” The data represents the “proof” that is offered in favor of the claim, for example “rising rates of cigarette smoking strongly correlate with increased incidence of lung cancer” (Cornfield, Haenszel, Hammond, Lilienfeld, Shimkin, & Wynder, 2009). The warrant describes the relationship between the data and the claim, detailing the research methodology for arriving at the correlation, and in this case discussing why the finding might be statistically significant, etc. (Toulmin, 2003). The backing of a warrant provides the generalized criteria for trusting in that warrant, and others like it. Typically, this corresponds to a field of inquiry, as the backing in the cigarette smoking case would justify the use of large-scale data sets, or the extrapolation of correlative claims from a representative sample size. The backing in this case is the field of epidemiological studies. The qualifier is any statement designed to moderate the strength of a claim, to introduce a probability assessment. This could be qualitative or quantitative, depending on the case. Rebuttal is the practice of anticipating possible responses and addressing those objections in the creation of a message.
In furthering the case that smoking tobacco is largely the cause of increased rates of lung cancer, it could be argued that increased lifespans are at least partially responsible for the higher incidence of fatal cancer (Cornfield et al., 2009). Anticipating and responding to this objection is a rebuttal. This scheme (claim, data, warrant, backing, qualifier, and rebuttal) is largely used to craft messages, but does have some relationship to how a message is processed.
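The Toulmin scheme lends itself to representation as a simple record. In the Python sketch below, the structure is an illustrative rendering (not a standard library or established API), and the field contents paraphrase the smoking example developed in the text:

```python
# Illustrative sketch: the six Toulmin parts as fields of a record.
# The example values paraphrase the smoking argument discussed above.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ToulminArgument:
    claim: str                  # the assertion the message makes
    data: str                   # the "proof" offered in favor of the claim
    warrant: str                # why the data supports the claim
    backing: str                # field-level grounds for trusting the warrant
    qualifier: str              # moderates the strength of the claim
    rebuttals: List[str] = field(default_factory=list)  # anticipated objections

smoking = ToulminArgument(
    claim="Smoking tobacco increases the likelihood of developing lung cancer.",
    data="Rising rates of cigarette smoking strongly correlate with "
         "increased incidence of lung cancer.",
    warrant="The correlation holds across large samples and is "
            "statistically significant.",
    backing="The methods of epidemiological research.",
    qualifier="probably",
    rebuttals=["Increased lifespans may partly explain higher cancer incidence."],
)

print(smoking.qualifier)  # probably
```

Separating the rebuttals into a list reflects the practice the text describes: a well-crafted message anticipates multiple objections, each of which can be addressed in turn.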
Application of informal reasoning to specific case studies is often understood as the use of “practical reasoning.” Practical reasoning attempts to recognize the informal nature of many human patterns of reasoning as they occur in everyday situations. For example, claims are often made through the use of metaphor or analogy, with the data remaining implicit, and the backing found in the language community of the metaphor user. The use of practical reasoning done through metaphor and analogy (along with the examination of its salience) is a concern for bioethicists as well. During public exchanges for and against the practice of physician-assisted suicide, it is common to hear the claim that physicians put into that position are “playing God,” a powerful metaphor (Childress, 1997). The debate over physician-assisted suicide can be characterized in part by our evaluation of this metaphor, and its explanatory power in illuminating the controversy.
Psychological research indicates that induction is part and parcel of human reasoning (Heit, 2000). Behavioral analysis suggests that individuals take specific cases and abstract those cases away from their details, mapping the probability of an event onto their own lives. There are several characteristics that are important when considering if this process will be successful (Heit & Rubinstein, 1994). One important characteristic is similarity: how much is the first case analogous to the second? Another characteristic is the fixity, or permanence of a condition or property. Individuals are more willing to abstract cases where the property or condition isn’t idiopathic, but represents a more general feature (Heit & Rubinstein, 1994). Another characteristic that is important for determining whether induction results in an abstracted case is how typical the example appears. Even small children use these criteria in crafting and applying a general principle to further cases (Heit, 2000). The more typical the initial example appears, the stronger the impulse to create an association and project characteristics onto other cases. That said, effectively using induction is an acquired skill. Though individuals do have some innate capacity for engaging in probability judgments about the future, those judgments are vastly improved through practice and an informed perspective about the rules governing probability (Jepson, Nisbett, & Krantz, 1993). In particular, individuals typically do not apply statistical rules for inductive inferences correctly (Nisbett, Krantz, Jepson, & Kunda, 1983). Societal stereotypes can swamp otherwise accurate statistical applications, and counter-intuitive statistical principles, such as the law of large numbers, can be difficult for most reasoners to apply without training or direction.
Neuroscientific research into inductive reasoning suggests that the locus of induction occurs in the left pre-frontal cortex (Babcock & Vallesi, 2015), a location that is also active during relational deductions (Prado et al., 2011). Most studies of inductive reasoning rely on verbal promptings, which is consistent with the current consensus regarding the left hemisphere as the primary seat of verbal processing. Though induction, which requires spatial reasoning as well as verbal reasoning, activates the right prefrontal cortex as part of the domain of the right hemisphere, the inductive process is uniquely active in the left ventrolateral prefrontal cortex (Babcock & Vallesi, 2015). For rule-based inductive reasoning, where individuals focus on generating a hypothesis and then applying that rule, the frontopolar cortex is the most active region of the brain (Crescentini, Seyed-Allaei, De Pisapia, Jovicich, Amati, & Shallice, 2011). This accords with the function of the frontopolar cortex as the location for the exploration and acquisition of complex, higher-order behavior (Boschin, Piekema, & Buckley, 2015).
Inductive reasoning in the health and risk-messaging context is important for individuals assessing their general risks and for patient adherence to treatment regimens. One key cause for concern among physicians is a failure of patients to follow recommendations regarding prescription drug use, in particular in the early stages of taking a new prescription (Kreps et al., 2011). A key barrier to address in this context is the patient’s uncertainty regarding the new medication, including whether it is really necessary, whether it will have any side effects, and whether the cost is worth the benefit. Inductive methods of persuasion can help overcome these concerns, by creating a clear connection between similar cases and the patient at hand. Specific, evidence-based messaging strategies have been the most successful at overcoming patient intransigence in starting a new drug regimen (Kreps et al., 2011). In the assessment of health risks, a person’s self-conception is important for how they understand and process relevant information (Klein & Monin, 2009). Health risks represent a threat to an individual’s conception of self, especially as it regards how they rate themselves compared to other individuals. Research on the attitudes of young smokers reveals that the level of negative self-reflection that an individual experiences can increase their inductive powers when considering the health risks of long-term tobacco use (Klein & Monin, 2009). Induction can be used to dampen a person’s belief in their invincibility and sober them to the realities of threats to their health.
When analyzing message design, the differences between inductive and deductive reasoning are important to note, as they can have a significant bearing on the construction of a persuasive message. Whether an argument is made based on an overriding principle (deduction) or developed through a specific case (induction), the attendant features of a given message could differ significantly. In the practice of reasoning, however, humans rarely engage in only inductive or deductive processes. Message processing can (and usually does) involve both reasoning methods. In addition, other identifiable reasoning patterns such as implicit, explicit, affective, and social deliberative reasoning all display elements of induction and deduction. Though certain forms lend themselves more easily to one or the other, the complexity of human reasoning processes means that even analyzing a relatively simple message is rarely associated with a single pattern of reasoning.
Human reasoning processes include implicit methods of reaching a decision or conclusion, where heuristics are used to govern the approach. Individuals create and sustain a number of implicit beliefs about the world (Harman, 2008). These beliefs include an accumulation of both inductive and deductive reasoning efforts, where general principles can configure an individual’s response to environmental stimuli and persuasive messages. Implicit reasoning methods can be distinguished from explicit forms, constituting distinct processes of human reasoning (E. R. Smith & DeCoster, 2000). Heuristic processing can occur with little cognitive effort on the part of an individual, where convenient shortcuts step in to provide guidance in message internalization (Chaiken, 1987). The use of implicit processes to guide decision-making presents some difficulties for assessing message design, as the implicit attitude can overwhelm even contrary information (Rydell, McConnell, Strain, Claypool, & Hugenberg, 2007).
Implicit reasoning is supported by the underlying process of information gathering and retrieval. Two forms of human learning are relevant to understanding implicit reasoning: slow and fast. Slow learning is characterized by an associative process, where new information is acquired and fit into existing frameworks of interpretation, while fast learning is related to high-order cognitive processing of verbal and logical symbols (Rydell & McConnell, 2006). Decisions about the future are made based on the balance of examples that an individual can draw on, where associative processes can inform deductive rule-based approaches (Sloman, 1996). From an evolutionary perspective, implicit reasoning provides a useful short-timeframe decision-making apparatus, as it relies on the accretion of information as a substitute for drawn-out systematic processes. Implicit attitudes thus typically correspond with spontaneous behaviors or judgments (Rydell et al., 2007), where individuals intuitively recognize a pattern and act on that intuition. Pattern activation is distinct from the propositionally guided explicit reasoning processes, which are more labor intensive and require longer processing time (Gawronski & Bodenhausen, 2006). Social systems generally promote rule-based reasoning methods, as implicit associative processes are more highly correlated with individual intuition than with generally accessible criteria (E. R. Smith & DeCoster, 2000). Interpersonal interaction in the social space is, however, an intuitive process, invoking implicit modes of reasoning (Frith & Frith, 2008).
Neuroscientific research bears out the significance of implicit reasoning, especially in a persuasive context. Research of this kind is relatively new, with the majority simply focusing on the broad effect of persuasive messages, not on distinguishing between explicit and implicit judgments in response to those messages (Vezich, Falk, & Lieberman, 2016). In general, neuroscientific investigation (through fMRI research) has shown that persuasive messages typically activate areas of the brain associated with processing semantic information, such as the left dorsomedial prefrontal cortex (Klucharev, Smidts, & Fernández, 2008). This area of the brain is also excited during powerful persuasive speeches, but fails to be similarly affected by weak speeches (Schmälzle, Häcker, Honey, & Hasson, 2015). Additionally, research into working memory provides evidence that even very short-term processes can become part of an individual’s implicit working memory, where little conscious effort is required for repetition, particularly for visuospatial information (Hassin, Bargh, Engell, & McCulloch, 2009). The integration of short-term information relies on a working hippocampus, however, as the hippocampus is the seat of new memory formation (Eichenbaum, Dudchenko, Wood, Shapiro, & Tanila, 1999). Implicit reasoning based on long-term associations and contiguous information can survive even in individuals with significant hippocampal damage, such as that resulting from Alzheimer’s disease (Smith & DeCoster, 2000).
Pharmaceutical drugs can affect implicit reasoning processes. Short-term memory integration is governed largely by the hippocampus, and so medications that affect this area of the brain can cause an increased reliance on implicit reasoning (Frank, O’Reilly, & Curran, 2006). The benzodiazepine midazolam is a memory inhibitor that has the effect of blocking access to an individual’s explicit memories. In the absence of explicit information, individuals who have been administered midazolam rely on their relational, implicit judgments (Frank et al., 2006). This suggests a competitive neurological environment where the most readily accessible information is utilized by an organism, with higher-order explicit forms of reasoning available in the absence of implicit heuristics and vice versa.
Implicit attitudes pose potential problems for message design, as they may be deeply held and only tacitly acknowledged. With the presentation of counter-attitudinal information, implicitly formed attitudes are slow to reverse, reflecting their aggregate nature (Rydell & McConnell, 2006; Rydell et al., 2007). Social influence can also play a sizeable role in sustaining specific attitudes, as individuals display a great degree of attitude intransigence with socially oriented beliefs (Wood, 2000). Implicit reasoning can co-occur with explicit processes, making it difficult to disentangle the two (Chaiken & Maheswaran, 1994). A key element of implicit reasoning is the use of cues to trigger a broader heuristic, making the manipulation of specific cues important for overall message design (Bellur & Sundar, 2014). Media cues can shape the public’s awareness of specific issues; for example, breast cancer is viewed as a greater threat to women’s health than cardiovascular disease, in part because of the widespread campaigns associated with the former (Bellur & Sundar, 2014). Recognizable cues can provide for greater memory accessibility of a message, increasing the reception and internalization of key areas of concern, especially for conveying specific risks.
In the context of risk messaging, implicit methods can guide much of the message internalization. There are two overriding factors that correlate with implicit reasoning: motivation and information sufficiency. Implicit or heuristic processing is inversely correlated with motivation (rising as motivation decreases) and directly correlated with information sufficiency (the more confident an individual feels with the information they have acquired, the more likely they are to rely on heuristics) (Griffin, Dunwoody, & Neuwirth, 1999). Individuals without direct experience of a risk who use implicit reasoning methods are also likely to rely more heavily on expert opinion, which can correlate more strongly with established scientific opinion (Trumbo, 2002). On the other hand, individuals with direct experience of a risk are likely to have formed an implicit heuristic for similar situations and are more likely to experience attitude intransigence (Griffin et al., 1999). Prior knowledge (or perceived knowledge) can also dampen implicit reasoning methods and emphasize the importance of source credibility for conveying risk (Smith et al., 2016).
Explicit reasoning is characterized by a deliberate and conscious effort to seek information and apply logical and systematic efforts to a message. Explicit reasoning is a recurrent rhetorical option found in both deductive and inductive argumentation. Explicit reasoning can occur in circumstances where individuals encounter erratic or unconventional information and must come to a decision regarding the unknown, or where a person prefers to centrally process arguments surrounding an issue (Petty & Cacioppo, 1986). Central processing of messages occurs along a continuum, with motivation and ability dictating an individual’s likelihood of engaging in more systematic processes (Lien, 2001). Systematic processing is more cognitively demanding and assesses linguistic and symbolic data for their logical coherence. In this way, explicit reasoning patterns can provide a route to override initial implicit conclusions, including affective or emotional reactions to stimuli (MacDonald, 2008).
Cognitive neuroscience supports an understanding of explicit processing as integrative rather than exclusively relegated to a specific region of the brain. Through coordinating a variety of reactions, such as memory, emotions, and uncertainty in response to environmental stimuli, explicit processing provides an iterative mode of reasoning (Cunningham & Zelazo, 2007). Some reactions are implicit, such as a fear response, and those tend to produce brain activation in the amygdala (Öhman & Mineka, 2001). Other types of responses, such as racial prejudice, can be characterized as implicit or explicit. When an individual has competing impulses regarding a reaction, then activity can be observed in the amygdala, followed by responses in the dorsolateral prefrontal cortex and anterior cingulate cortex, the seats of logical reasoning. This represents the cognitive recognition of the implicit bias, and the explicit processing of that implicit reasoning (Frith & Frith, 2008). While initial stimuli are processed through the implicit systems, including the fastest and most accessible areas (beginning with the sensory cortex communicating with the amygdala), there are a series of subsequent processing possibilities for increasing the attention that a problem receives. The orbitofrontal cortex coordinates activity between the amygdala (with initial affective assessments) and the hypothalamus (responsible for hormone regulation and maintaining homeostasis), and provides basic assessment of risks and rewards associated with behavior, potentially moderating the initial physiological response (Cunningham & Zelazo, 2007). Even non-human animals have the ability to engage in this limited form of risk/reward explicit processing, as this function of the orbitofrontal cortex is a feature of mammals generally (MacDonald, 2008).
These systems can facilitate decision-making processes that provide longer-term benefits over short-term desires, demonstrating the usefulness of explicit reasoning from an evolutionary perspective (MacDonald, 2008).
Humans have a multi-layered hierarchical set of neurological systems that are activated when the need arises for increasingly complex processing of information. The activation of the orbitofrontal cortex and the initial assessment of potential risks and rewards is only the first step. Further uncertainty arising from an unresolved problem is associated with activation of the anterior cingulate cortex, which deals with higher levels of reward anticipation and problem solving (Cunningham & Zelazo, 2007). Extensive consideration of the matter at hand moves activity into the prefrontal cortex, the region of the brain associated with higher cognitive functions (Bunge & Zelazo, 2006). Different portions of the lateral prefrontal cortex are responsible for processing distinct elements of rule-based reasoning, with the ventrolateral and dorsolateral prefrontal cortices handling conditional rules, and the rostrolateral prefrontal cortex associated with contemplation of task sets (Cunningham & Zelazo, 2007). The prefrontal cortex thus serves an integrative command and control function, with initial assessments filtered through risk and reward considerations, and the application of rule-based reasoning to moderate physiological and emotional responses.
In the risk communication context, motivation and information sufficiency are important variables in predicting the likelihood that individuals will engage in explicit reasoning processes rather than rely on implicit heuristics. Motivation is directly correlated with explicit reasoning patterns, as individuals with greater incentive to seek out information are typically highly motivated to engage in explicit, central processing. For example, individuals who live in suspected “cancer clusters” are likely to examine the evidence and arguments regarding their specific risks much more closely than individuals who do not reside in potentially afflicted areas (Trumbo, 2002). Information sufficiency is inversely correlated with explicit or systematic modes of processing: the more information an individual believes they need, the more likely they are to explicitly seek and process risk messages (Kahlor, Dunwoody, Griffin, Neuwirth, & Giese, 2003). Explicit reasoning can include the elaboration of a message, where individuals actively process why a particular piece of information may or may not be typical, or why it does or does not apply to them (Petty, Baker, Gleicher, Donohew, Sypher, & Bukoski, 1991). In elaborating messages, individuals can move through possible counter-arguments, accepting or rejecting them, or apply other conditional or rule-based reasoning methods to the case at hand.
In assessing the effectiveness of persuasive communication, the mere presence of explicit, central processing does not guarantee that an individual will align their thinking with the intended message. In some cases, heuristic reasoning processes can result in a conclusion that is more in line with accepted expert descriptions of risk (Trumbo, 2002). For example, when individuals trust authoritative sources, and there is consensus on the part of experts, then reasoning through implicit, heuristic methods may lead a person to more accurately judge their particular risk. Explicit reasoning thus does not necessarily lead to the conclusion that is supported by the preponderance of evidence. Individuals can also engage in motivated reasoning, where unconscious biases can influence their assessment of science and risk management (Sinatra, Kienhues, & Hofer, 2014). Scientifically derived information may also pose specific difficulties for message design, as it could challenge an individual’s worldview, especially if their experience seems to disconfirm the recommendations of an obscure expert. Additional difficulties can be encountered when discredited scientific communication supports an individual’s desire to explain phenomena that are otherwise difficult to explain. For example, in the case of the discredited link between vaccines and autism, some individuals seem unwilling to accept evidence that undermines the comfortable knowledge of the cause of their child’s autism (Sinatra et al., 2014). Despite these problems, inducing explicit patterns of reasoning is the best way to ensure the durability of attitudes and increase the likelihood that the established attitude correlates with behavioral outcomes (Petty et al., 1991).
Affective reasoning is associated with the activation of emotions and the impact that this has on overall decision making. Classical theories subsumed emotions under rational-cognitive approaches to reasoning, but further study demonstrated that emotional reactions occur prior to cognitive assessment (James, 1884). Emotions include physiological changes coincident with perception, including an increase in heart rate, faster breathing, or even running away from a source of danger (Kandel, 2013). Emotions are event focused, appraisal driven, have some behavioral impact, and are associated with specific action tendencies (Scherer, 2005). Emotions motivate actions and invite individuals and groups to consider certain actions depending on the emotion. Affective processes have special bearing on decisions regarding risk and risk messaging, as individuals under threat might feel and act differently than they otherwise would (Öhman & Mineka, 2001).
Emotions and the motivation to act are intimately connected, with specific emotions creating specific action tendencies (Condit, 2014). Two aspects of emotion, intent and behavior, demonstrate a motivation for action (Frijda, 2004). The action tendencies of an individual may vary based on cultural and social context as well as idiosyncratic variance across people. Despite this wide variability, the study of emotion has identified specific action tendencies associated with a number of emotions. Anger may create a tendency to wish harm upon the object of anger, while fear can induce a fight or flight response (Elster, 2004). The reaction to an emotion can itself be a rational choice, so emotions need not override rationality, but they do inform decision making. A key inducing factor for emotions can be the beliefs that an agent holds, which then serve to modify their action tendencies (Elster, 2004). The action tendencies for an emotion need not lead directly to some particular and concrete movement, but can also facilitate action readiness, where options are opened up or constrained (Mesquita, 2003). The cultural context for emotion expression and realization works to shape these outcomes, where environmental factors include other people and the perception of collective and individual commitments. The preferred means by which individuals achieve their end goals can be configured through the type of action readiness that they are culturally primed for and emotionally invested in. Affective reasoning thus has a bearing on social and group decision-making processes. For example, affective affiliations can help explain how democracy overcomes the collective action problem, where there is tension between the public good of political activity and the individual’s interest in minimizing their commitment to action (Groenendyk, 2011). A purely rational agent would choose to forgo participation in political contests if their participation was unlikely to affect the outcome.
Despite this simple cost-benefit calculation, people do choose to participate in politics. In a democratic country, affinity groups are created through a variety of means, including common belief in country, party affiliation, religious beliefs, and ethnic and racial identification, overcoming the possibility of rational disinvestment from the system (Groenendyk, 2011).
Neuroscientific study of emotional reactions demonstrates that they are fast, intense, and can have a considerable behavioral impact (Cunningham, Raye, & Johnson, 2004). This is especially true for strong emotional reactions, such as fear (Öhman & Mineka, 2001). Functional magnetic resonance imaging studies of individuals exposed to emotionally triggering content suggest that the amygdala plays a lead role in unconscious and automatic evaluative processes (Cunningham & Zelazo, 2007). This effect is apparent even when individuals are exposed to masked expressions of fear and consciously report that the expressions are neutral, which suggests that the amygdala operates independently of conscious recognition and can be activated implicitly (Whalen, Rauch, Etcoff, McInerney, Lee, & Jenike, 1998). The amygdala is of particular importance for the recognition of social emotions, through the understanding of nuanced facial expressions (Adolphs, Baron-Cohen, & Tranel, 2002). Individuals with severe cases of autism may be experiencing impaired function of the amygdala, with an attendant reduction in the ability to read facial expressions and social emotions.
Affective reasoning is relevant for the design and processing of risk communication messages. Managing risk and communicating the possible risks an individual faces creates the possibility of invoking strong emotional reactions. Affective reasoning can induce insensitivity to probability, which can impact even expert assessments of risk (Slovic, Peters, Finucane, & MacGregor, 2005). For example, individuals facing increased risk of developing cancer from family history are more likely to opt for preventative treatment options, even if those options are likely unnecessary, in part because they may perceive their risk to be greater than it is (Slovic et al., 2005). Alternatively, young smokers tend to underestimate the risks of starting a tobacco habit, including the possibility of addiction or the development of smoking-related illness. They do so in part because of the excitement that accompanies smoking, which satisfies their affective reasoning process while bypassing a more rational assessment of the risks (Slovic et al., 2005). One method of overcoming insensitivity to probability is to provide information over a longer time scale, with the frequency of occurrence as the focus rather than the probability. This facilitates a process whereby individuals can visualize the risks they incur (Keller, Siegrist, & Gutscher, 2006). There are also demographic differences in the prevalence of affective reasoning. Older adults are more likely to engage in affective reasoning about health risks for a variety of reasons: older individuals have more attuned emotional memory than younger adults, as well as a greater need to conserve diminishing cognitive resources (Finucane, 2008).
Civic deliberation of social risks is also subject to the influence of affective reasoning, on the part of both experts and the public. Even expert deliberation of scientific risk is not immune to the influence of affective reasoning, in part because the experts chosen may have specific interests that create affiliative networks impacting their decision-making process (Condit, 2014). In the case of climate change, the emotions that individuals experience correlate very strongly with their preferred policy actions, with individuals who express worry being the most likely to support national climate policies (Smith & Leiserowitz, 2014). Fear is a more complex emotion than worry, and messages that invoke fear can fail to motivate the public just as easily as they succeed, because fear appeals require a strong message of efficacy to be effective (Smith & Leiserowitz, 2014).
Social Deliberative Reasoning
Social deliberative reasoning describes the process by which collectives encounter, define, and resolve shared problems. There is an idealized and theoretical understanding of deliberative reasoning, as well as research into how deliberation occurs in practice. In general, investigations of social deliberative reasoning processes are driven by a need to explain how society comes to incorporate individual needs and interests, and how those are balanced in a collective arrangement. In the contemporary complex environment, where diverse individuals engage in different (and multiple) reasoning processes and have divergent interests, identifying deliberative processes of reasoning is increasingly difficult (Bohman, 2000). However, tracking deliberation across public communities is necessary for understanding social encounters with messages and noting the dynamic and fluid nature of shared meanings. This is especially true for understanding how and why institutional policy is crafted, and what barriers it may face in implementation (Pan & Kosicki, 2001).
Early American theories of social deliberative reasoning processes note the difficulties of crafting public argument and successfully persuading majority interests. John Dewey’s The Public and Its Problems (1954) represents a pivotal moment in the development of American political theory and is centrally concerned with the future of democracy in the United States. In the early 20th century, there were fears that technological change was creating too many difficulties for a reasoning public to grapple with, increasing worries surrounding technocratic, elite control of discourse. Additionally, the rise of distinct interests and interest groups threatened the possibility of crafting a coherent public response to emerging risks (Dewey, 1954). Argument and rhetoric played a crucial role in the creation and maintenance of healthy public dialogue on a given issue, with the necessity of face-to-face communication especially noted. With the rise of mass media and the easier dissemination of information, there was increasing concern surrounding the presentation of overly general “great principles,” which tend to undermine specific and active public deliberation (Dewey, 1954).
Idealized conceptions of public reason focus on the procedural elements of social and community discourse, specifically, on the ways social institutions legitimate themselves to the public and the process in which citizens engage to assess the effectiveness of these legitimation claims (Habermas, 2015). Ideals of this procedure include the impartiality and inclusiveness of the proceedings, as well as the use of reasoned and evidence-driven argument. The purpose of crafting procedural norms of communication is to encourage a process that more closely reflects democratic ideals, in the hopes of improving the deliberation itself. Empirical phenomena, however, do not generally match the idealized conception of public deliberation, leading some to lament the decline of this form of collective reasoning (Habermas, 1991). This can also be true of elite-led discussions, as interested parties can dominate even highly technical, scientifically informed policy discussions (Condit, 2014).
Empirical study of deliberative reasoning processes provides a method for evaluating the potential benefits expounded in the theoretical literature. In particular, providing a measurable method for assessing pessimism regarding the absence of healthy deliberative activity is a key goal of social scientific approaches to public deliberation. Empirical research has demonstrated that, in small-group contexts, deliberation can shift group consensus towards a majority opinion, but that minority opinions can also be effective (Carpini, Cook, & Jacobs, 2004). Findings on information gathering by participants in deliberative settings are also mixed. Individuals who believe they share the majority opinion are less likely to engage in “opposition research,” seeking out the arguments of the opposition, while individuals who perceive they are in the minority typically engage in more research of this type (Carpini et al., 2004). A key problem in assessing the empirical effectiveness of public deliberative reasoning methods is disagreement about how to define the practice and the conditions for success. While some research finds positive outcomes in certain contexts, other research is less optimistic (Thompson, 2008). Practically speaking, most deliberative reasoning does not occur in an ideal context, so it is no surprise that empirical examinations produce different outcomes.
In the health and risk-messaging context, social deliberative reasoning is an emerging area of research. The scope of public deliberation in this context varies widely, ranging from very local examples of risk to national health care policy. The typical assumption is that more information will increase the public’s trust in scientific recommendations surrounding a health risk, but increased deliberation on an issue can also have a polarizing effect (Kronberger, Holtz, & Wagner, 2011). Increasing public deliberative efforts around a specific, sophisticated, technical controversy can provide the public with more relevant information about an issue and provide the background for future decision-making regarding health risks. Additionally, the more exposure that individuals have to technical information, the more likely they are to trust credible, scientific institutions (Kronberger et al., 2011). This increased trust in social institutions translates into decreased risk assessments when individuals evaluate potentially harmful technologies (Griffin et al., 1999).
Creating and testing messages for adequate deliberative potential is time-consuming and resource intensive, but generally effective at outlining public concerns surrounding an issue. Applying the theoretical idealizations of impartiality and equity, as well as promoting the goals of reasoned debate utilizing well-researched, scientifically generated information, is possible even in very technical arenas, such as bio-banking (Molster et al., 2013). Using representative samples of individuals in deliberative forums can also provide the opportunity to bring forward minority voices and perspectives that might otherwise be lost in the political debate surrounding an issue. For example, engaging the African American community in public deliberation on the application of genomics research could help to reframe a historically sensitive issue (Bonham, Citrin, Modell, Franklin, Bleicher, & Fleck, 2009). Community stakeholders are generally best able to represent the specific views generated through diverse experiences and understandings, and thus can positively influence message framing. Participation in a deliberative forum on risk and policy assessment provides the opportunity to infuse different perspectives into the dialogue, ultimately improving communication with affected populations. Framing is of particular importance in the deliberative reasoning process because the interpretative scheme that is brought to bear upon a message can influence further dialogue and debate on that issue (Pan & Kosicki, 2001).
Discussion of the Literature
When communicating, humans engage in a series of complex, multivariate reasoning processes. Despite the complexity, it is possible to identify distinct patterns in reasoning methods and behaviors. Communication scholars identify two major categories of reasoning methods, deductive and inductive. These scholars also discuss different individual reasoning behaviors, such as implicit, explicit, and affective. Furthermore, human collectivities combine and manage the reasoning methods and behaviors of individuals through a process of social, deliberative reasoning. Empirical research indicates that deductive reasoning is a powerful method of creating organizing principles for human interaction, where most individuals create generalizable (deductive) rules and apply them easily. Psychological study reveals that, in time-pressured situations, individuals fall back on previously constructed heuristics, applying a deductive process implicitly. Individuals also generally display greater speed in processing deductive argumentation than probabilistic statements, pointing to the utility of this form of reasoning for message design and processing. Neuroscientific investigation has sought to provide a definitive location for higher cognitive functioning, with particular attention paid to differing reasoning processes. In general, the left hemisphere of the brain has been identified as the likely locus or seat of reason. Examinations of inductive reasoning point to the “informal” nature of the process, as well as its utility in practical applications. In communication and health contexts, the use of metaphors and analogies is noted for the powerful meanings that are conveyed in building a case through association. Research demonstrates the ubiquity of induction, as it pervades human life. Scholars note that even small children are constantly looking for paradigm cases, so as to create a generally applicable heuristic.
Unfortunately, the displayed need to generalize creates difficulties when assessing statistical information or extrapolating the probability of a systemic issue affecting a single person. In health messaging studies, researchers note that clearly connecting the individual’s case to the proposed risk is essential in bridging the psychological realities of inductive reasoning processes. Implicit reasoning patterns occur through the creation of heuristics to guide behavior. Communication research suggests that there are two processes for engaging implicit modes, slow and fast-learning pathways, with the former describing visual or intuitive associations and the latter related to verbal or symbolic information. Some biologists speculate that implicit reasoning processes exist to provide an evolutionary edge in crafting faster, better decisions on the spot, where patterns can be recognized and prior information quickly retrieved. In processing persuasive messages, neuroscientific investigation bears out the significance of implicit reasoning, finding that the left dorsomedial prefrontal cortex is a key area of activation for effective messages. Additionally, studies of individuals with impaired memory functioning (such as Alzheimer’s patients) have found that these individuals are more likely to engage in implicit reasoning based on their long-term associations. Once implicit beliefs are formed, they are difficult to modify or change, posing a problem for messages challenging deeply held or socially prevalent attitudes. Activating specific, powerful associations is the key to effectively activating implicit reasoning processes. Studies of explicit reasoning processes note that it is likely to occur when individuals encounter new or contradictory information, or when an important decision about the future has to be made. 
Explicit reasoning engages central processing modes, integrating symbolic and verbal information, and providing a route for possibly overriding affective or emotional responses. Explicit reasoning is coordinated across different regions of the brain, with processes beginning in the amygdala and the hypothalamus, and the orbitofrontal cortex coordinating risk/reward assessments. Moving past basic assessments activates the anterior cingulate cortex and the prefrontal cortex, where processing of rule-based reasoning, conditional rules, and other complex decisional calculus occurs. Communication scholars suggest that, for health messages, individuals typically engage in explicit reasoning when they believe they need more information about a subject, or are likely to be specifically impacted by a risk factor. This research shows that if an individual can be compelled to engage in explicit reasoning, then expert advice becomes more salient. Affective reasoning has been shown to influence decisions, especially when individuals feel under threat. Neuroscientific investigation demonstrates that individuals are highly attuned to the emotional reactions of others, and can be influenced by them. Participants in communication research studies have also been found to be highly competent at sorting out “real” emotional reactions from “false” emotional displays, with the possibility of discrediting false or faked emotional content. Studies focused on messaging also suggest that affective reactions can modulate the perceived probability of various risk factors, where individuals with a family history of a disease might be more likely to opt for preventative treatment options due to the fear that it is likely to strike them (even if their actual probability is low). Research into social deliberative reasoning processes seeks to extrapolate the examinations of individual reasoning patterns into the collective decision-making context.
Communication researchers note that public examination of risk is not immune from the influence of emotions, even when experts are at the head of deliberations. One risk that is noted by scholars working in this area is the possibility of information overload, where individuals become desensitized over time. However, the general exposure to scientific information has been found to increase trust in expert-produced knowledge, which can be useful across health and risk contexts. Empirical research into communicative efficacy suggests that creating forums and facilitating genuine public deliberation is effective at crafting better, more responsive policy in health management.
Further Reading
Bostrom, A., & Löfstedt, R. E. (2003). Communicating risk: Wireless and hardwired. Risk Analysis, 23(2), 241–248.
Buck, R., & Ferrer, R. (2012). Emotion, warnings, and the ethics of risk communication. In S. Roeser, R. Hillerbrand, P. Sandin, & M. Peterson (Eds.), Handbook of risk theory: Epistemology, decision theory, ethics, and social implications of risk. Dordrecht, The Netherlands: Springer.
Cheng, P. W., & Holyoak, K. J. (1985). Pragmatic reasoning schemas. Cognitive Psychology, 17(4), 391–416.
Cimpian, A., Brandone, A. C., & Gelman, S. A. (2010). Generic statements require little evidence for acceptance but have powerful implications. Cognitive Science, 34, 1452–1482.
Fairclough, N. (2003). Analysing discourse: Textual analysis for social research. New York: Routledge.
Hitchcock, D., & Verheij, B. (Eds.). (2006). Arguing on the Toulmin model: New essays in argument analysis and evaluation. Dordrecht, The Netherlands: Springer.
Johnson, B. B. (2005). Testing and expanding a model of cognitive processing of risk information. Risk Analysis, 25(3), 631–650.
Jonsen, A. R., & Toulmin, S. (1988). The abuse of casuistry: A history of moral reasoning. Berkeley: University of California Press.
Kahlor, L. A. (2007). An augmented risk information seeking model: The case of global warming. Media Psychology, 10(3), 414–435.
Kellner, D. (2014). Habermas, the public sphere, and democracy. In D. Boros & J. M. Glass (Eds.), Re-imagining public space: The Frankfurt School in the 21st century. New York: Palgrave Macmillan.
Leeper, T. J., & Slothuus, R. (2014). Political parties, motivated reasoning, and public opinion formation. Political Psychology, 35(S1), 129–156.
Lemke, A. A., Halverson, C., & Ross, L. F. (2012). Biobank participation and returning research results: Perspectives from a deliberative engagement in South Side Chicago. American Journal of Medical Genetics. Part A, 158(5), 1029.
Lynch, J. (2006). Making room for stem cells: Dissociation and establishing new research objects. Argumentation and Advocacy, 42(3), 143.
McComas, K. A. (2006). Defining moments in risk communication research: 1996–2005. Journal of Health Communication, 11(1), 75–91.
Natter, H. M., & Berry, D. C. (2005). Effects of active information processing on the understanding of risk information. Applied Cognitive Psychology, 19(1), 123–135.
Richardson, H. S. (2002). Democratic autonomy: Public reasoning about the ends of policy. New York: Oxford University Press.
Smith, N., & Leiserowitz, A. (2012). The rise of global warming skepticism: Exploring affective image associations in the United States over time. Risk Analysis, 32(6), 1021–1032.
Stewart, C. O., Dickerson, D. L., & Hotchkiss, R. (2009). Beliefs about science and news frames in audience evaluations of embryonic and adult stem cell research. Science Communication, 30(4), 427–452.
Visschers, V. H. M., Wiedemann, P. M., Gutscher, H., Kurzenhäuser, S., Seidl, R., Jardine, C. G., et al. (2012). Affect-inducing risk communication: Current knowledge and future directions. Journal of Risk Research, 15(3), 257–271.
Adolphs, R., Baron-Cohen, S., & Tranel, D. (2002). Impaired recognition of social emotions following amygdala damage. Journal of Cognitive Neuroscience, 14(8), 1264–1274.Find this resource:
Babcock, L., & Vallesi, A. (2015). The interaction of process and domain in prefrontal cortex during inductive reasoning. Neuropsychologia, 67, 91–99.Find this resource:
Bellur, S., & Sundar, S. S. (2014). How can we tell when a heuristic has been used? Design and analysis strategies for capturing the operation of heuristics. Communication Methods and Measures, 8(2), 116–137.Find this resource:
Bohman, J. (2000). Public deliberation: Pluralism, complexity, and democracy. Cambridge, MA: MIT press.Find this resource:
Bonham, V. L., Citrin, T., Modell, S. M., Franklin, T. H., Bleicher, E. W., & Fleck, L. M. (2009). Community-based dialogue: Engaging communities of color in the United States’ genetics policy conversation. Journal of Health Politics, Policy and Law, 34(3), 325–359.Find this resource:
Boschin, E. A., Piekema, C., & Buckley, M. J. (2015). Essential functions of primate frontopolar cortex in cognition. Proceedings of the National Academy of Sciences, 112(9), E1020–E1027.Find this resource:
Bunge, S. A., & Zelazo, P. D. (2006). A brain-based account of the development of rule use in childhood. Current Directions in Psychological Science, 15(3), 118–121.Find this resource:
Carpini, M. X. D., Cook, F. L., & Jacobs, L. R. (2004). Public deliberation, discursive participation, and citizen engagement: A review of the empirical literature. Annual Review of Political Sciences, 7, 315–344.Find this resource:
Chaiken, S. (1987). The heuristic model of persuasion. In M. P. Zanna, J. M. Olson, & C. P. Herman (Eds.), Social influence: The Ontario symposium (pp. 3–38). Mahwah, NJ: Lawrence Erlbaum.Find this resource:
Chaiken, S., & Maheswaran, D. (1994). Heuristic processing can bias systematic processing: Effects of source credibility, argument ambiguity, and task importance on attitude judgment. Journal of Personality and Social Psychology, 66(3), 460.Find this resource:
Childress, J. F. (1997). Practical reasoning in bioethics. Bloomington, IN: Indiana University Press.Find this resource:
Condit, C. M. (2014). Insufficient fear of the “super-flu”? The World Health Organization’s global decision-making for health. Poroi: An Interdisciplinary Journal of Rhetorical Analysis & Invention, 10(1), 1–31Find this resource:
Cornfield, J., Haenszel, W., Hammond, E. C., Lilienfeld, A. M., Shimkin, M. B., & Wynder, E. L. (2009). Smoking and lung cancer: Recent evidence and a discussion of some questions. International Journal of Epidemiology, 38(5), 1175–1191.Find this resource:
Crescentini, C., Seyed-Allaei, S., De Pisapia, N., Jovicich, J., Amati, D., & Shallice, T. (2011). Mechanisms of rule acquisition and rule following in inductive reasoning. The Journal of Neuroscience, 31(21), 7763–7774.Find this resource:
Cummings, L. (2012). The public health scientist as informal logician. International Journal of Public Health, 57(3), 649–650.Find this resource:
Cunningham, W. A., Raye, C. L., & Johnson, M. (2004). Implicit and explicit evaluation: fMRI correlates of valence, emotional intensity, and control in the processing of attitudes. Journal of Cognitive Neuroscience, 16(10), 1717–1729.Find this resource:
Cunningham, W. A., & Zelazo, P. D. (2007). Attitudes and evaluations: A social cognitive neuroscience perspective. Trends in Cognitive Sciences, 11(3), 97–104.Find this resource:
Dewey, J. (1954). The public and its problems: Athens, OH: Ohio University Press. Originally published in 1927.Find this resource:
Eichenbaum, H., Dudchenko, P., Wood, E., Shapiro, M., & Tanila, H. (1999). The hippocampus, memory, and place cells: Is it spatial memory or a memory space? Neuron, 23(2), 209–226.Find this resource:
Elster, J. (2004). Emotions and rationality. In N. F. Antony, S. R. Manstead, & A. Fischer (Eds.), Feelings and emotions: The Amsterdam symposium (pp. 30–48). Cambridge, U.K.: Cambridge University Press.Find this resource:
Evans, J. S. B. (2014). The psychology of deductive reasoning (psychology revivals). New York: Psychology Press.Find this resource:
Finucane, M. L. (2008). Emotion, affect, and risk communication with older adults: Challenges and opportunities. Journal of Risk Research, 11(8), 983–997.Find this resource:
Frank, M. J., O’Reilly, R. C., & Curran, T. (2006). When memory fails, intuition reigns: Midazolam enhances implicit inference in humans. Psychological Science, 17(8), 700–707.
Frijda, N. H. (2004). Emotions and action. In A. S. R. Manstead, N. Frijda, & A. Fischer (Eds.), Feelings and emotions: The Amsterdam symposium (pp. 158–173). Cambridge, U.K.: Cambridge University Press.
Frith, C. D., & Frith, U. (2008). Implicit and explicit processes in social cognition. Neuron, 60(3), 503–510.
Gawronski, B., & Bodenhausen, G. V. (2006). Associative and propositional processes in evaluation: An integrative review of implicit and explicit attitude change. Psychological Bulletin, 132(5), 692.
Goel, V. (2007). Anatomy of deductive reasoning. Trends in Cognitive Sciences, 11(10), 435–441.
Griffin, R. J., Dunwoody, S., & Neuwirth, K. (1999). Proposed model of the relationship of risk information seeking and processing to the development of preventive behaviors. Environmental Research, 80(2), S230–S245.
Groenendyk, E. (2011). Current emotion research in political science: How emotions help democracy overcome its collective action problem. Emotion Review, 3(4), 455–463.
Habermas, J. (1991). The structural transformation of the public sphere: An inquiry into a category of bourgeois society (T. Burger, Trans.). Cambridge, MA: MIT Press.
Habermas, J. (2015). Between facts and norms: Contributions to a discourse theory of law and democracy. New York: John Wiley & Sons.
Hacking, I. (2001). An introduction to probability and inductive logic. Cambridge, U.K.: Cambridge University Press.
Harman, G. (2008). Change in view: Principles of reasoning. Cambridge, U.K.: Cambridge University Press.
Hassin, R. R., Bargh, J. A., Engell, A. D., & McCulloch, K. C. (2009). Implicit working memory. Consciousness and Cognition, 18(3), 665–678.
Heit, E. (2000). Properties of inductive reasoning. Psychonomic Bulletin & Review, 7(4), 569–592.
Heit, E., & Rotello, C. M. (2010). Relations between inductive reasoning and deductive reasoning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 36(3), 805–812.
Heit, E., & Rubinstein, J. (1994). Similarity and property effects in inductive reasoning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 20(2), 411.
James, W. (1884). What is an emotion? Mind, 9(34), 188–205.
Jepson, C., Nisbett, R. E., & Krantz, D. H. (1993). Inductive reasoning: Competence or skill? In R. E. Nisbett (Ed.), Rules for reasoning. Hillsdale, NJ: Lawrence Erlbaum.
Kahlor, L., Dunwoody, S., Griffin, R. J., Neuwirth, K., & Giese, J. (2003). Studying heuristic‐systematic processing of risk communication. Risk Analysis, 23(2), 355–368.
Kandel, E. (2013). The new science of mind and the future of knowledge. Neuron, 80(3), 546–560.
Keller, C., Siegrist, M., & Gutscher, H. (2006). The role of the affect and availability heuristics in risk communication. Risk Analysis, 26(3), 631–639.
Kennedy, G. A. (2007). On rhetoric: A theory of civic discourse. New York: Oxford University Press.
Klein, W. M., & Monin, M. M. (2009). When focusing on negative and positive attributes of the self elicits more inductive self-judgment. Personality and Social Psychology Bulletin, 35(3), 376–384.
Klucharev, V., Smidts, A., & Fernández, G. (2008). Brain mechanisms of persuasion: How “expert power” modulates memory and attitudes. Social Cognitive and Affective Neuroscience, 3(4), 353–366.
Kreps, G. L., Villagran, M. M., Zhao, X., McHorney, C. A., Ledford, C., Weathers, M., et al. (2011). Development and validation of motivational messages to improve prescription medication adherence for patients with chronic health problems. Patient Education and Counseling, 83(3), 375–381.
Kronberger, N., Holtz, P., & Wagner, W. (2011). Consequences of media information uptake and deliberation: Focus groups’ symbolic coping with synthetic biology. Public Understanding of Science, 12(2), 174–187.
Lakoff, G., & Johnson, M. (1999). Philosophy in the flesh: The embodied mind and its challenge to Western thought. New York: Basic Books.
Lessl, T. M. (2008). Scientific demarcation and metascience. In F. H. van Eemeren & B. Garssen (Eds.), Controversy and confrontation: Relating controversy analysis with argumentation theory (Vol. 6). Philadelphia: John Benjamins.
Lien, N.-H. (2001). Elaboration likelihood model in consumer research: A review. Proceedings of the National Science Council, 11(4), 301–310.
MacDonald, K. B. (2008). Effortful control, explicit processing, and the regulation of human evolved predispositions. Psychological Review, 115(4), 1012.
Malaia, E., Tommerdahl, J., & McKee, F. (2015). Deductive versus probabilistic reasoning in healthy adults: An EEG analysis of neural differences. Journal of Psycholinguistic Research, 44(5), 533–544.
Mesquita, B. (2003). Emotions as dynamic cultural phenomena. In R. J. Davidson, K. R. Scherer, & H. H. Goldsmith (Eds.), Handbook of affective sciences. Oxford: Oxford University Press.
Molster, C., Maxwell, S., Youngs, L., Kyne, G., Hope, F., Dawkins, H., et al. (2013). Blueprint for a deliberative public forum on biobanking policy: Were theoretical principles achievable in practice? Health Expectations, 16(2), 211–224.
Mosley-Jensen, W. (2011). The climate change controversy: A technical debate in the public sphere. Saarbrücken, Germany: VDM Verlag Dr Müller.
Nisbett, R. E., Krantz, D. H., Jepson, C., & Kunda, Z. (1983). The use of statistical heuristics in everyday inductive reasoning. Psychological Review, 90(4), 339.
Öhman, A., & Mineka, S. (2001). Fears, phobias, and preparedness: Toward an evolved module of fear and fear learning. Psychological Review, 108(3), 483–522.
Orłowska, E. (1986). Semantic analysis of inductive reasoning. Theoretical Computer Science, 43, 81–89.
Pan, Z., & Kosicki, G. M. (2001). Framing as a strategic action in public deliberation. In S. D. Reese, O. H. Gandy, & A. E. Grant (Eds.), Framing public life: Perspectives on media and our understanding of the social world (pp. 35–65). Mahwah, NJ: Lawrence Erlbaum.
Petty, R. E., Baker, S. M., & Gleicher, F. (1991). Attitudes and drug abuse prevention: Implications of the elaboration likelihood model of persuasion. In L. Donohew, H. E. Sypher, & W. J. Bukoski (Eds.), Persuasive communication and drug abuse prevention (pp. 71–90). New York: Lawrence Erlbaum.
Petty, R. E., & Cacioppo, J. T. (1986). The elaboration likelihood model of persuasion. In R. E. Petty & J. T. Cacioppo (Eds.), Communication and persuasion: Central and peripheral routes to attitude change (pp. 1–24). New York: Springer.
Prado, J., Chadha, A., & Booth, J. R. (2011). The brain network for deductive reasoning: A quantitative meta-analysis of 28 neuroimaging studies. Journal of Cognitive Neuroscience, 23(11), 3483–3497.
Rips, L. J. (1994). The psychology of proof: Deductive reasoning in human thinking. Cambridge, MA: MIT Press.
Rydell, R. J., & McConnell, A. R. (2006). Understanding implicit and explicit attitude change: A systems of reasoning analysis. Journal of Personality and Social Psychology, 91(6), 995.
Rydell, R. J., McConnell, A. R., Strain, L. M., Claypool, H. M., & Hugenberg, K. (2007). Implicit and explicit attitudes respond differently to increasing amounts of counterattitudinal information. European Journal of Social Psychology, 37(5), 867–878.
Scherer, K. R. (2005). What are emotions? And how can they be measured? Social Science Information, 44(4), 695–729.
Schmälzle, R., Häcker, F. E., Honey, C. J., & Hasson, U. (2015). Engaged listeners: Shared neural processing of powerful political speeches. Social Cognitive and Affective Neuroscience, 10, 1137–1143.
Sinatra, G. M., Kienhues, D., & Hofer, B. K. (2014). Addressing challenges to public understanding of science: Epistemic cognition, motivated reasoning, and conceptual change. Educational Psychologist, 49(2), 123–138.
Sloman, S. A. (1996). The empirical case for two systems of reasoning. Psychological Bulletin, 119(1), 3.
Slovic, P., Peters, E., Finucane, M. L., & MacGregor, D. G. (2005). Affect, risk, and decision making. Health Psychology, 24(4S), S35.
Smith, E. R., & DeCoster, J. (2000). Dual-process models in social and cognitive psychology: Conceptual integration and links to underlying memory systems. Personality and Social Psychology Review, 4(2), 108–131.
Smith, N., & Leiserowitz, A. (2014). The role of emotion in global warming policy support and opposition. Risk Analysis, 34(5), 937–948.
Smith, P. (2003). An introduction to formal logic. Cambridge, U.K.: Cambridge University Press.
Smith, S. W., Hitt, R., Russell, J., Nazione, S., Silk, K., Atkin, C. K., et al. (2016). Risk belief and attitude formation from translated scientific messages about PFOA, an environmental risk associated with breast cancer. Health Communication, 32(3), 279–287.
Striker, G. (2009). Aristotle: Prior analytics: Book 1. Oxford: Clarendon.
Thompson, D. F. (2008). Deliberative democratic theory and empirical political science. Annual Review of Political Science, 11, 497–520.
Toulmin, S. (2003). The uses of argument (updated ed.). Cambridge, U.K.: Cambridge University Press.
Trumbo, C. W. (2002). Information processing and risk perception: An adaptation of the heuristic‐systematic model. Journal of Communication, 52(2), 367–382.
Vezich, S., Falk, E., & Lieberman, M. (2016). Persuasion neuroscience: New potential to test dual process theories. In E. Harmon-Jones & M. Inzlicht (Eds.), Social neuroscience: Biological approaches to social psychology. New York: Psychology Press.
Whalen, P. J., Rauch, S. L., Etcoff, N. L., McInerney, S. C., Lee, M. B., & Jenike, M. A. (1998). Masked presentations of emotional facial expressions modulate amygdala activity without explicit knowledge. The Journal of Neuroscience, 18(1), 411–418.
Willis, J., Willis, W. J., & Okunade, A. A. (1997). Reporting on risks: The practice and ethics of health and safety communication. Westport, CT: Praeger.
Wood, W. (2000). Attitude change: Persuasion and social influence. Annual Review of Psychology, 51(1), 539–570.