
Satisficing in Survey Design

Scott Daniel
Swinburne University of Technology
sdaniel@swin.edu.au

Survey design is beguiling in its apparent simplicity. It can seem straightforward to investigate an issue by writing a few quick questions, confident that the analysis of the responses will be unambiguous. Yet details such as the order of questions and responses, word choice, visual layout, and question length can have a profound effect on how participants respond. In this paper I explore the concept of 'satisficing', a decision-making strategy in which the easiest adequate solution is chosen, and how it relates to good survey design.

Bias in Survey Design

Surveys and questionnaires continue to be widely used in a variety of research settings, and awareness has grown in recent years of the effects that subtle changes in survey design can have. Two good entry points into the growing research literature are Schuman and Presser's comprehensive summary (1996) and Choi and Pak's catalogue of biases (2005). Although the Choi and Pak paper was published in a public health journal, the issues they identify transfer readily to other contexts and can serve as a quick checklist for identifying possible sources of bias. They classify 48 different types of bias into categories and sub-categories, with three main categories:

• biases in question design;
• biases in questionnaire design; and
• biases in the administration of the questionnaire.

Several of the biases they identify can be understood as the respondent picking the easiest adequate response. This decision-making strategy is known as 'satisficing' and is the focus of this paper.

Satisficing

Herbert Simon (cited in Krosnick, 1991) coined the term 'satisficing' in 1957 in the context of economic decision-making. Rather than exhaustively pondering every decision in order to maximise profits (i.e. optimising), he argued that people do only what is sufficient to obtain a satisfactory outcome. He combined the terms 'satisfy' and 'suffice' into one: 'satisfice'. The concept can also be understood as a combination of 'satisfy' and 'sacrifice', in that the outcome satisfies the essential criteria but sacrifices some of the superfluous ones.
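To make the contrast concrete, the sketch below recasts the two strategies as search rules over the same pool of candidates: an optimiser scores every candidate and keeps the best, while a satisficer stops at the first candidate that clears an acceptability threshold. This is a minimal illustration rather than anything from the cited studies; the quality scores and threshold are invented.

```python
"""Optimising versus satisficing as two search rules.

A minimal sketch: the candidate scores and the acceptability
threshold below are invented for illustration.
"""
from typing import Callable, Iterable, Optional, TypeVar

T = TypeVar("T")

def optimise(options: Iterable[T], score: Callable[[T], float]) -> T:
    # Examine every candidate and return the single best one.
    return max(options, key=score)

def satisfice(options: Iterable[T], score: Callable[[T], float],
              threshold: float) -> Optional[T]:
    # Return the first candidate that is good enough, then stop looking.
    for option in options:
        if score(option) >= threshold:
            return option
    return None  # nothing acceptable was found

if __name__ == "__main__":
    scores = [3, 9, 6, 10, 2]                     # hypothetical quality scores
    print(optimise(scores, float))                 # 10: scans the whole pool
    print(satisfice(scores, float, threshold=8))   # 9: stops at the second candidate
```

The optimiser must touch every candidate, while the satisficer touches only as many as it takes to find something acceptable; it is precisely this economy of effort that survey respondents exploit.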

Satisficing versus Optimising

A simple example (Wikipedia, 2012) helps illuminate the difference between satisficing and optimising. Imagine you need to sew a patch onto your jeans, and the perfect needle for the job would be 4 inches long with a 3 mm eye. You have a giant pile of thousands of needles of all different sizes. An optimiser would search relentlessly for the perfect needle, whereas a satisficer would quickly take the first needle that was good enough for the job.

Table 1
Satisficing versus Optimising

Optimise                    Satisfice
Find the perfect needle!    Find a needle that will work.
More time                   Fast
More effort                 Easy
Not always possible         'Good enough'

Satisficing in Question Design

Several different types of survey question can evoke a satisficing response and thus bias the data. Some examples are illustrated below, with strategies to address them.

Long banks of identical ratings scales

Ratings scales such as semantic differentials (choosing a position between two opposing adjectives, e.g. 'valuable' / 'worthless'), Likert-scale agree/disagree items, or numerical scores (out of 10, for example) are common survey questions that can extract meaningful information about respondents' attitudes and beliefs. However, if they are used repetitively at length, the temptation for a satisficer is to give the same response to every item. For example, if different chocolate bars were being rated out of 10 for enjoyment, it is easy to imagine a satisficer quickly giving them all a score of 8 simply because they like chocolate.

Several strategies can be employed to avoid collecting such potentially biased data. Firstly, long sets of repetitive questions can simply be avoided. Otherwise, checks can be built into the questions. Internal consistency is the extent to which questions relating to the same construct are answered equivalently, and this measure is often used to establish the reliability of psychology questionnaires. For example, it would not be consistent to agree with both "I don't like riding bicycles" and "I look forward to riding a bicycle in the future". An alternative strategy is used by the Colorado Learning Attitudes about Science Survey (Adams et al., 2006), which consists solely of 42 Likert-scale ratings questions. To check that students are reading each statement, one of its items reads:

    31. We use this question to discard the survey of people who are not reading the statements. Please select agree - option 4 (not strongly agree) to preserve your answers.
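One way to apply such checks when cleaning the collected data is sketched below. It is illustrative only: the records, item names ('q31' standing in for an attention-check item like the one above, 'q7' and 'q7r' for a positively and a negatively worded version of the same statement on a 1-5 scale), and the tolerance are all hypothetical.

```python
"""Screening out likely satisficers during data cleaning.

Illustrative only: the records, item names, and cut-offs are hypothetical.
'q31' mimics an attention-check item (instructed answer: 4, 'agree');
'q7' and 'q7r' are a positively and a negatively worded version of the
same statement on a 1-5 agree/disagree scale.
"""

EXPECTED_CHECK = 4   # the answer the attention-check item instructs
SCALE_MAX = 5        # top of the 1-5 Likert scale

def failed_attention_check(response: dict) -> bool:
    # Anyone not giving the instructed answer was probably not reading.
    return response["q31"] != EXPECTED_CHECK

def inconsistent_pair(response: dict, tolerance: int = 1) -> bool:
    # A reversed item should mirror its pair: a 5 on q7 implies roughly
    # a 1 on q7r. Flag responses where the two drift further apart.
    mirrored = (SCALE_MAX + 1) - response["q7"]
    return abs(mirrored - response["q7r"]) > tolerance

responses = [
    {"id": 1, "q31": 4, "q7": 5, "q7r": 1},  # passes both checks
    {"id": 2, "q31": 5, "q7": 4, "q7r": 2},  # failed the attention check
    {"id": 3, "q31": 4, "q7": 4, "q7r": 4},  # agreed with both versions
]

clean = [r for r in responses
         if not failed_attention_check(r) and not inconsistent_pair(r)]
print([r["id"] for r in clean])  # -> [1]
```

In practice, flagged responses might be inspected rather than discarded outright, since an occasional inconsistency can also reflect a genuinely mixed view.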

Choosing one response from a long list

Rather than expending the effort to read each response option in detail and carefully weigh its merits against all the others, respondents may simply choose the first adequate response. For example, consider a question about the biggest barriers to faculty engaging in research into learning and teaching. If the first response option were "time", it might appear adequate, since certainly 'all academics are time-poor'. Yet a bigger barrier may in fact be "different epistemological underpinnings" (Mann, Chang, & Mazzolini, 2011); if this option were placed at the end of the list, the respondent might never read that far.

Two strategies can minimise this bias. If the population norms for the question are known (or can be reasonably inferred), listing the response options from least to most popular will offset the satisficing response. In practice, however, the population norms can rarely be ordered with much certainty. A better option, easily implemented with online survey tools such as Survey Monkey, is to randomise the response order for each respondent (a sketch of this appears at the end of this section). Even though individual respondents may still satisfice in choosing their response, with a large enough sample this primacy effect is washed out by being distributed across the whole set of response options.

Neutral or 'Don't Know' Responses

Krosnick et al. (2002) argue that offering a no-opinion response option, such as "don't know" or the neutral point of a Likert scale, may attract satisficing respondents who are not motivated enough to consider the question carefully and settle on a more definitive response. The usual concern is the opposite: that leaving out such options forces respondents who truly hold no opinion to choose randomly among the remaining options, adding noise to the data. Krosnick's team found no evidence of such a negative effect on data quality, and argue instead that omitting no-opinion responses unmasks meaningful views from respondents who would otherwise satisfice. They also identified several 'risk factors' for satisficing responses, which are outlined in the next section.

To minimise this effect, avoid "don't know" responses and use even-numbered Likert scales or semantic differentials. Odd-numbered scales necessarily include the 'middle' neutral response, whereas even-numbered scales (e.g. Strongly Agree / Agree / Disagree / Strongly Disagree) exclude it.
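Below is a sketch of the per-respondent randomisation referred to above. Online tools such as Survey Monkey provide this natively, so the code only illustrates the idea; the question is the hypothetical 'biggest barrier' item, where "time" and "different epistemological underpinnings" come from the text and the other two options are invented fillers.

```python
"""Per-respondent randomisation of response-option order.

A toy sketch: real survey platforms handle this natively. Two of the
response options below are invented fillers for illustration.
"""
import random

OPTIONS = [
    "time",
    "lack of funding",            # hypothetical filler option
    "institutional culture",      # hypothetical filler option
    "different epistemological underpinnings",
]

def options_for(respondent_id: int) -> list[str]:
    # Seeding with the respondent id keeps each respondent's order
    # reproducible while spreading any primacy effect evenly across
    # the options over a large enough sample.
    rng = random.Random(respondent_id)
    shuffled = list(OPTIONS)
    rng.shuffle(shuffled)
    return shuffled

if __name__ == "__main__":
    for rid in (101, 102, 103):
        print(rid, options_for(rid))
```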

Risk Factors for Satisficing

In their research, Krosnick et al. (2002) identified a number of risk factors for satisficing, as measured by increased rates of no-opinion responses.

Time pressure

Respondents under some form of time pressure were more likely to satisfice. The same effect was observed in a study comparing student feedback surveys administered in class versus online (Mazzolini, Daniel, & Mann, 2012). The in-class data showed a greater rate of neutral responses, which may be explained by the data having been collected with audience response devices (i.e. clickers) while a countdown timer was displayed to close voting.

Motivation

More motivated respondents are less likely to satisfice. The lower rate of neutral responses from online respondents in the student feedback study just mentioned could reflect their greater motivation, in taking the trouble to respond to the survey online in the first place.

Anonymity

Krosnick et al. (2002) found that respondents surveyed face-to-face were less likely to satisfice than respondents who answered the same questions anonymously.

Cognitive skills

Krosnick's team took education level as a proxy for cognitive skills, and found that respondents of greater cognitive skill were less likely to satisfice. This could be because such respondents are more familiar with the subtleties of the topic in question, may already have considered the question and reached a conclusion, or are simply more adept at complex mental comparisons.

Task difficulty

The more difficult a task, the more likely it is to evoke a satisficing response.

Survey length

Questions towards the end of long surveys are more likely to attract satisficing responses. One strategy to minimise this effect is to randomise question order, but this must not be done without considering the other effects of question order (Schuman & Presser, 1996).

Conclusion

Rigorous survey design and research require an awareness of satisficing. By avoiding the risk factors above and implementing some of the simple strategies outlined earlier, its effects can be minimised and the resulting survey data interpreted with greater confidence.

References

Adams, W. K., Perkins, K. K., Podolefsky, N. S., Dubson, M., Finkelstein, N. D., & Wieman, C. E. (2006). New Instrument for Measuring Student Beliefs about Physics and Learning Physics: The Colorado Learning Attitudes about Science Survey. Physical Review Special Topics - Physics Education Research, 2(1).

Choi, B. C. K., & Pak, A. W. P. (2005). A Catalog of Biases in Questionnaires. Preventing Chronic Disease, 2(1).

Krosnick, J. A. (1991). Response Strategies for Coping with the Cognitive Demands of Attitude Measures in Surveys. Applied Cognitive Psychology, 5(3), 213-236.

Krosnick, J. A., Holbrook, A. L., Berent, M. K., Carson, R. T., Hanemann, W. M., Kopp, R. J., . . . Conaway, M. (2002). The impact of 'no opinion' response options on data quality: Non-attitude reduction or an invitation to satisfice? Public Opinion Quarterly, 66(3), 371-403. doi: 10.1086/341394

Mann, L., Chang, R., & Mazzolini, A. (2011). Hidden Barriers to Academic Staff Engaging in Engineering Education Research. Proceedings of the Research in Engineering Education Symposium 2011, Madrid.

Mazzolini, A. P., Daniel, S., & Mann, L. (2012). A Comparison of On-line and 'In-class' Student Feedback Surveys: Some Unexpected Results. Paper presented at the Australasian Association for Engineering Education 2012 Annual Conference, Melbourne, Australia.

Satisficing. (2012, December 17). In Wikipedia, The Free Encyclopedia. Retrieved 23:19, December 20, 2012, from http://en.wikipedia.org/w/index.php?title=Satisficing&oldid=528470073

Schuman, H., & Presser, S. (1996). Questions and Answers in Attitude Surveys: Experiments on Question Form, Wording, and Context. Thousand Oaks, CA: Sage.