
QUALITATIVE RESEARCH PROPOSAL

THE SCHOLARLY LITERATURE Everything that we research is driven by the scholarly literature, and more precisely, the recent scholarly literature. If a topic is not contained in the recent scholarly literature, then the discipline is not interested in that topic and it cannot be researched for a dissertation. The scholarly literature not only lets us know what topics are appropriate to research but also which methodology (qualitative or quantitative) should be used and what theories are appropriate to use as an underpinning for the dissertation.

RESEARCH TOPIC The research topic emerges from the literature of the discipline and must be acceptable within that discipline. All disciplines have topics relevant to them and topics that lie outside the bounds of the discipline. For example, researchers in the field of criminal justice investigate different topics than researchers in the fields of health care or emergency management. One of the best places to obtain ideas for appropriate dissertation research topics is to look at current dissertations (those written within the past five years) and read the sections in them concerning suggestions for future research. All dissertations must have a section that addresses future research. These sections reflect the cutting edge of current research.

RESEARCH PROBLEM The research problem emerges from the literature. If the literature does not support a research problem, then it cannot be researched. Likewise, if a research problem has already been answered by the discipline, then it cannot be researched again. The research problem is generally written in the following format:

A is known (reference from the literature).

B is known (another reference from the literature).

C is known (another reference from the literature).

What is not known is D (D then becomes the problem that is being researched).

Keep in mind that the known pieces of the literature must logically lead to the research problem; they must all be connected; otherwise, you are just citing random pieces of literature.

Scientific Merit Review Form Content On the Scientific Merit Review Form, you must present the research problem for your dissertation in exactly the format discussed here: "A is known (reference from the literature). B is known (another reference from the literature). C is known (another reference from the literature). What is not known is D (D then becomes the problem that is being researched)."

In addition, you must answer the following three questions:

1. Does the study address something that is not known or has not been studied before? Explain how this study is new or different from other studies.


2. If your research questions are studied, how could your findings affect your field of interest? This is the answer to the "So What?" question.

3. What possible practical implications do you predict the results of your research will have? Explain how the results will affect your sample, your site location, or your workplace. This is the answer to the "Who cares?" question.

RESEARCH QUESTIONS The research questions also emerge from the literature. The goal of the study, which guides the proposed research, is to answer the research questions. The goal must be feasible and the questions answerable by the method proposed in the research. In addition, the research questions must be specific. A question such as, "What is leadership?" is too general. If the answers are already known and reflected in the literature, there is no reason to conduct the study.

The research questions clearly identify the concepts under investigation by the study. (In a quantitative study, these concepts are referred to as variables. In a qualitative study they are referred to as concepts.) Concepts in qualitative studies are often perceptions or experiences. A qualitative study most often focuses on only one concept, whereas a quantitative study always includes at least two variables that are under investigation.

Scientific Merit Review Form Content On the SMR form, you must list your research questions and clearly identify your variables (concepts).

METHODOLOGY The methodology also emerges from the literature. For this course, the methodology must be qualitative, but when an actual research study is designed, the methodology must be accepted by the discipline and must be appropriate for the state of the overall research on that topic. Topics where considerable research has already been conducted are generally investigated using quantitative methods, whereas topics with little research are more often investigated using qualitative methods. This is because the nature of qualitative research is to "see what's out there" when we don't already know. On the other hand, the nature of quantitative research is to make generalizable statements.

CREDITS

Interactive Design, Instructional Design, and Project Management: Gregory Romero, Stephen Sorenson, Kathryn Green

Licensed under a Creative Commons Attribution 3.0 License.

ACC 690 Milestone One Guidelines and Rubric

Overview: The final project for this course is the creation of a portfolio consisting of a report, spreadsheets, and a PowerPoint presentation. You will be placed in a real-world scenario in which you will take the role of an associate in a certified public accountant (CPA) firm. The CPA partners in the scenario would like to help you grow within the firm by getting you more contact with some of the larger clients. You will address questions from one of the firm’s most influential and growing clients by assembling and presenting the necessary information in report and presentation format. Your presentation should include spreadsheet examples. Topics addressed in the portfolio will cover partnership formation, bankruptcy, and acquisition of another company (which may be international). Your three milestone assignments will consist of shorter reports and supporting spreadsheets, which will prepare you for completing a comprehensive report, spreadsheets, and a presentation. You should use your instructor’s feedback from the milestone submissions to improve your final report, spreadsheets, and presentation.

Prompt: In Milestone One, you will complete a report covering Section I and Part A of Section II of the final project. In the report you will discuss the key issues of partnership, such as formation, splitting profits/losses, dissolution, and cash distribution schedules. You will also discuss the issue of bankruptcy, both voluntary and forced, as well as liquidation. Specifically, the following critical elements must be addressed:

I. Partnership: The company is considering forming a partnership and wants to be sure it understands the key issues regarding partnership formation, income distribution, and liquidation.

A. Explain the process and methods used to account for partnership formation. How do these methods impact the firm’s balance sheet?

B. Illustrate how the company could split profits and losses.

C. Describe what happens if the partnership doesn’t do well and the company has to dissolve it, or one of the partners becomes insolvent.

D. Illustrate the dissolution process by creating a hypothetical cash distribution schedule, as in the sketch below. Ensure all information is entered accurately.
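To make critical element D concrete, here is a minimal computational sketch of one common way to build a cash distribution schedule: the "safe payments" approach covered in most advanced accounting texts. All partner names, capital balances, and profit-and-loss ratios below are invented for illustration; they are not from the course scenario, and your actual schedule should follow your textbook's format.

```python
# Hypothetical "safe payments" cash distribution schedule for a
# partnership liquidation. All names and amounts are invented for
# illustration; follow your textbook's format for the assignment.

capital = {"Partner A": 60_000, "Partner B": 40_000, "Partner C": 10_000}
pl_ratio = {"Partner A": 0.5, "Partner B": 0.3, "Partner C": 0.2}
noncash_assets = 70_000  # book value of assets not yet converted to cash

def safe_payments(capital, pl_ratio, max_possible_loss):
    """Distribute available cash as if all remaining noncash assets
    were a total loss. Each partner absorbs the hypothetical loss in
    their profit-and-loss ratio; any resulting capital deficit is
    treated as uncollectible and reallocated to the solvent partners.
    The positive balances that remain are safe to pay out now."""
    adjusted = {p: capital[p] - pl_ratio[p] * max_possible_loss
                for p in capital}
    while any(bal < 0 for bal in adjusted.values()):
        deficit = -sum(bal for bal in adjusted.values() if bal < 0)
        solvent = [p for p, bal in adjusted.items() if bal > 0]
        ratio_sum = sum(pl_ratio[p] for p in solvent)
        adjusted = {p: (bal - deficit * pl_ratio[p] / ratio_sum
                        if p in solvent else max(bal, 0.0))
                    for p, bal in adjusted.items()}
    return {p: round(bal, 2) for p, bal in adjusted.items()}

# With consistent books (total capital 110,000 - noncash 70,000),
# the safe payments sum to the 40,000 of cash on hand:
print(safe_payments(capital, pl_ratio, max_possible_loss=noncash_assets))
# -> {'Partner A': 22500.0, 'Partner B': 17500.0, 'Partner C': 0.0}
```

In the spreadsheet version of the same schedule, these steps become columns: beginning capital balances, hypothetical maximum loss allocated by profit-and-loss ratio, deficit reallocation, and the resulting safe payment to each partner.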

II. Corporation: The company is also considering structuring its business as a corporation, but is aware that there are a lot of complex issues to consider when accounting for an incorporated entity. The company is concerned about the following key areas:

A. Differentiate between various forms of bankruptcy and restructuring that the firm should understand.

1. Summarize the key points of interest if the firm fell on hard times and had to file voluntary bankruptcy. What ethical implications should be considered when debating whether or not to file bankruptcy?

2. Identify the key areas of concern if the firm fell on hard times and its creditors forced it into bankruptcy. What defenses are available in this situation?

3. Illustrate hypothetical calculations that would be done to help creditors understand how much money they might receive if the company were to liquidate. Ensure all information is entered accurately. Please refer to the illustration (Exhibit 13.2) on page 592 from your textbook to view potential calculations.
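As a rough companion to item 3, the sketch below shows the arithmetic behind a cents-on-the-dollar estimate for unsecured creditors. It is a simplified illustration with invented figures (it assumes pledged assets exactly cover the fully secured claims); the statement-of-affairs layout in Exhibit 13.2 of the textbook is the authoritative format.

```python
# Hypothetical estimate of what unsecured creditors might receive in a
# liquidation. Figures are invented, and the model is simplified: it
# assumes pledged assets exactly cover the fully secured claims.

asset_liquidation_value = 500_000  # estimated proceeds from all assets
fully_secured_claims = 180_000     # claims covered by pledged assets
priority_claims = 60_000           # e.g., wages, taxes, administrative costs
unsecured_claims = 400_000         # remaining unsecured claims

free_assets = asset_liquidation_value - fully_secured_claims
available_for_unsecured = max(free_assets - priority_claims, 0)
dividend = min(available_for_unsecured / unsecured_claims, 1.0)

print(f"Available for unsecured creditors: ${available_for_unsecured:,}")
print(f"Estimated recovery: {dividend:.0%}, "
      f"about {dividend * 100:.0f} cents on the dollar")
```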

Guidelines for Submission: Your report must be submitted as a 2- to 3-page Word document with double spacing, 12-point Times New Roman font, one-inch margins, and at least two sources (in addition to your textbook) cited in APA format. Your accompanying spreadsheets must be submitted as Microsoft Excel files.

Rubric

Each critical element is scored as Proficient (100%), Needs Improvement (75%), or Not Evident (0%), weighted by the value shown.

Partnership: Formation (value: 12.86%)
Proficient: Explains the process and methods used to account for partnership formation and how they impact the firm’s balance sheet.
Needs Improvement: Explains the process and methods used to account for partnership formation but does not explain how they impact the firm’s balance sheet, or explanation is cursory or has inaccuracies.
Not Evident: Does not explain the process and methods used to account for partnership formation.

Partnership: Split (value: 12.86%)
Proficient: Illustrates the options for distribution of profits and losses and ensures all information is entered accurately.
Needs Improvement: Illustrates the options for distribution of profits and losses, but there are inaccuracies.
Not Evident: Does not illustrate the options for distribution of profits and losses.

Partnership: Dissolve (value: 12.86%)
Proficient: Describes what happens in partnership dissolution or partner insolvency.
Needs Improvement: Describes what happens in partnership dissolution or partner insolvency, but description is cursory or has inaccuracies.
Not Evident: Does not describe what happens in partnership dissolution or partner insolvency.

Partnership: Cash Distribution Schedule (value: 12.86%)
Proficient: Illustrates the dissolution process by providing an example of a cash distribution schedule, ensuring accuracy.
Needs Improvement: Illustrates the dissolution process by providing an example of a cash distribution schedule, but there are inaccuracies.
Not Evident: Does not provide an example of a cash distribution schedule.

Corporation: Voluntary Bankruptcy (value: 12.86%)
Proficient: Summarizes the key points of interest if the firm had to file voluntary bankruptcy and discusses the ethical implications that should be considered.
Needs Improvement: Summarizes the key points of interest if the firm had to file voluntary bankruptcy but does not discuss the ethical implications that should be considered, or discussion is cursory or has inaccuracies.
Not Evident: Does not summarize the key points of interest if the firm had to file voluntary bankruptcy.

Corporation: Forced Bankruptcy (value: 12.86%)
Proficient: Identifies the key areas of concern if the firm was forced into bankruptcy and the defenses available in this situation.
Needs Improvement: Identifies the key areas of concern if the firm was forced into bankruptcy but does not identify defenses available, or identification is cursory or has inaccuracies.
Not Evident: Does not identify the key areas of concern if the firm was forced into bankruptcy.

Corporation: Liquidate (value: 12.86%)
Proficient: Illustrates calculations and ensures all information is entered accurately.
Needs Improvement: Illustrates calculations, but there are inaccuracies.
Not Evident: Does not illustrate calculations.

Articulation of Response (value: 9.98%)
Proficient: Submission has no major errors related to citations, grammar, spelling, syntax, or organization.
Needs Improvement: Submission has major errors related to citations, grammar, spelling, syntax, or organization that negatively impact readability and articulation of main ideas.
Not Evident: Submission has critical errors related to citations, grammar, spelling, syntax, or organization that prevent understanding of ideas.

Total: 100%

Language Differences

The data from interviews are words. It is tricky enough to be sure what a person means when using a common language, but words can take on a very different meaning in other cultures. In Sweden, I participated in an international conference discussing policy evaluations. The conference was conducted in English, but I was there two days, much of the time confused, before I came to understand that their use of the term policy corresponded to my American use of the term program. I interpreted policies, from an American context, to be fairly general directives, often very difficult to evaluate because of their vagueness. In Sweden, however, policies were articulated and even legislated at such a level of specificity that they resembled programmatic prescriptions more than the vague policies that typically emanate from the legislative process in the United States.

The situation becomes more precarious when a translator or interpreter must be used. Special and very precise training of translators is critical. Translators need to understand what, precisely, you want them to ask and that you will need full and complete translation of responses as verbatim as possible. Interpreters often want to be helpful by summarizing and explaining responses. This contaminates the interviewee’s actual response with the interpreter’s explanation to such an extent that you can no longer be sure whose perceptions you have—the interpreter’s or the interviewee’s.

Some words and ideas simply can’t be translated directly. People who regularly use the language come to know the unique cultural meaning of special terms. One of my favorites from the Caribbean is liming, meaning something like hanging out, just being, doing nothing—guilt free. In interviews for a Caribbean program evaluation, a number of participants said that they were just “liming” in the program. That was not meant as criticism, however, for liming is a highly desirable state of being, at least to the participants. Funders viewed the situation somewhat differently.

Rheingold (2000) published a whole book on “untranslatable words and phrases” with special meanings in other cultures. Below are four examples that are especially relevant to researchers and evaluators.

1. Schlimmbesserung (German): A so-called improvement that makes things worse

2. bigapeula (Kiriwina, New Guinea): Potentially disruptive, unredeemable true statements

3. animateur (French): A word of respect for a person who can communicate difficult concepts to general audiences

4. ta (Chinese): To understand things and thus take them lightly

In addition to the possibility of misunderstandings, there may be the danger of contracting a culturally specific disease, including for some what the Chinese call koro—“the hysterical belief that one’s penis is shrinking” (Rheingold, 1988, p. 59).

Attention to language differences cross-nationally can, hopefully, make us more sensitive to barriers to understanding that can arise even among those who speak the same language. Joyce Walker undertook a collaborative study with 18 women who had written to each other annually for 25 years, from 1968 to 1993. She involved them actively in the study, including having them confirm the authenticity of her findings. In reviewing the study’s findings, one participant reacted to the research language used: “Why call us a cohort? There must be something better—a group, maybe?” (Walker, 1996, p. 10).

Differing Norms and Values

The high esteem in which science is held has made it culturally acceptable in Western countries to conduct interviews on virtually any subject in the name of scholarly inquiry and the common good. Such is not the case worldwide. Researchers cannot simply presume that they have the right to ask intrusive questions. Many topics discussed freely in Western societies are taboo in other parts of the world. I have experienced cultures where it was simply inappropriate to ask a subordinate questions about a superordinate. Any number of topics may be too sensitive or indelicate for strangers to bring up, for example, family matters, political views, who owns what, how people came to be in certain positions, and sources of income.

Interviewing farmers for an agricultural extension project in Central America became nearly impossible because, for many, their primary source of income came from growing illegal crops. In an African dictatorship, we found that we could not ask about “local” leadership because the country could have only one leader. Anyone taking on or being given the designation “local leader” would have been endangered. Interviewees can be endangered by insensitive and inappropriate questions; so can naive interviewers. I know of a case where an American female student was raped following an evening interview in a foreign country because the young man interpreted her questions about local sexual customs and his own dating experiences as an invitation to have sex.

EXHIBIT 7.15 Ten Examples of Variations in Cross-Cultural Norms That Can Affect Interviewing and Qualitative Fieldwork

As noted in the previous section on group interviews, different norms govern cross-cultural interactions. I remember going to an African village to interview the chief and finding the whole village assembled. Following a brief welcoming ceremony, I asked if we could begin the interview. I expected a private, one-on-one interview. He expected to perform in front of and involve the whole village. It took me a while to understand this, during which time I kept asking to go somewhere else so we could begin the interview. He did not share my concern about and preference for privacy. What I expected to be an individual interview soon became a whole village group dialogue.

In many cultures, it is a breach of etiquette for an unknown man to ask to meet alone with a woman. Even a female interviewer may need the permission of a husband, brother, or parent to interview a village woman. A female colleague created a great commotion, and placed a woman in jeopardy, by pursuing a personal interview without permission from the male headman. Exhibit 7.15 lists some of the variations in cultural behaviors that can affect cross-cultural interviewing and qualitative fieldwork.

The Value of Cross-Cultural Interviewing

As difficult as cross-cultural interviewing may be, it is still far superior to standardized questionnaires for collecting data from nonliterate villagers. Salmen (1987) described a major water project undertaken by The World Bank based on a needs assessment survey. The project was a failure because the local people ended up opposing the approach used. His reflection on the project’s failure includes a comparison of survey and qualitative methods.

Although it is difficult to reconstruct the events and motivation that led to the rejection, there is little question that a failure of adequate communication between project officials and potential beneficiaries was at least partly responsible. The municipality’s project preparation team had conducted a house-to-house survey in Guasmo Norte before the outset of the project, primarily to gather basic socioeconomic data such as family size, employment and income. The project itself, however, was not mentioned at this early stage. On the basis of this survey, World Bank and local officials had decided that standpipes would be more affordable to the people than household connections. It now appears, from hindsight, that the questionnaire survey method failed to elicit the people’s negative attitude toward standpipes, their own criterion of affordability, or the opposition of their leaders who may have played on the negative feelings of the people to undermine acceptance of the project. Qualitative interviews and open discussions would very likely have revealed people’s preferences and the political climate far better than did the preconstructed questionnaire. (Salmen, 1987, pp. 37–38)

Appropriately, Salmen’s book is called Listen to the People and advocates qualitative methods for international project evaluation of development efforts.

Interviewers are not in the field to judge or change values and norms. Researchers are there to understand the perspectives of others. Getting valid, reliable, meaningful, and usable information in cross-cultural environments requires special sensitivity to and respect for differences, including concerns about decolonizing research in cross-cultural contexts (Denzin & Lincoln, 2011, pp. 92–93; L. T. Smith, 2012; Mutua & Swadener, 2011) and cultural competence (American Evaluation Association, 2013). Moreover, specific challenges emerge in specific countries, as Bubaker, Balakrishnan, and Bernadine (2013) demonstrated when conducting qualitative studies in Libya and Malaysia. (For additional discussion and examples of cross-cultural research and evaluation, see Ember & Ember, 2009; Lonner & Berry, 1986; Patton, 1985a.)

One final observation on international and cross-cultural evaluations may help emphasize the value of such experiences. Connor (1985) found that doing international evaluations made him more sensitive and effective in his domestic evaluation work. The heightened sensitivity we expect to need in exotic, cross-cultural settings can serve us well in our own cultures. Sensitivity to and respect for other people’s values, norms, and worldviews is as much needed at home as abroad.

SIDEBAR

CROSS-CULTURAL PERSPECTIVES ON DEVELOPMENT

The titles of these books communicate the challenges and importance of generating cross-cultural understanding and appropriate action around the world. These books also illustrate the global scale at which mixed-methods inquiries are now being done.

• Can Anyone Hear Us? Voices of the Poor

The voices of more than 40,000 poor women and men in 50 countries from the World Bank’s participatory poverty assessments (Narayan, Patel, Schafft, Rademacher, & Koch-Schulte, 2000)

• Crying Out for Change: Voices of the Poor

Fieldwork on poverty conducted in 23 countries (Narayan, Chambers, Shah, & Petesch, 2000)

• Voices of the Poor: From Many Lands

Regional patterns of poverty and country case studies (Narayan & Petesch, 2002)

MODULE 62

Creative Modes of Qualitative Inquiry

Creativity is just connecting things. When you ask creative people how they did something, they feel a little guilty because they didn’t really do it, they just saw something. It seemed obvious to them after a while. That’s because they were able to connect experiences they’ve had and synthesize new things.

—Steve Jobs (1955–2011), Apple Computer founder

We have looked in some depth at various kinds of standard qualitative interviewing approaches: conversational interviews during fieldwork; in-depth, open-ended, one-on-one formal interviews; interactive, relationship-based interviews; focus group interviews; and cross-cultural interviews. These represent mainstream qualitative interviewing approaches. Now, we move beyond these well-known and widely used interviewing approaches to consider innovative, creative, and pioneering approaches to qualitative interviewing.

Photo Elicitation Interviewing

Photo elicitation involves using photographs to stimulate reflections, support memory recall, and elicit stories as part of interviewing. Photo elicitation methods are proving to be powerful supplements to in-depth interviewing. For example, to study the meaning of change in dairy farming in northern New York, sociologist Douglas Harper (2008) showed elderly farmers photographs from the 1940s (a period when they had been teenage or young adult farmers) and asked them to remember events, stories, or commonplace activities that the photos brought to mind. He used archived documentary photographs from the era that the elderly farmers had experienced at the beginnings of their careers. The photographs inspired “detailed and often deep memories.”

The farmers described the mundane aspects of farming, including the social life of shared work. But more important, they explained what it meant to have participated in agriculture that had been neighbor based, environmentally friendly, and oriented toward animals more as partners than as exploitable resources.

In this and other photo elicitation studies, photographs proved to be able to stimulate memories that word-based interviewing did not. The result was discussions that went beyond “what happened when and how” to themes such as “this was what this had meant to us as farmers.” (pp. 198–199)

Visual data generally are becoming increasingly important in qualitative research and evaluation (Azzam & Evergreen, 2013a, 2013b; Banks, 2007; Prosser, 2011). A classic example by Hedy Bach (1998) involved documenting the daily life of four schoolgirls who, in addition to regular schoolwork, engaged in art, drama, ballet, and music programs in and out of school. Using disposable cameras, the students visually documented their lives both inside and outside classrooms. Their photos served as the basis for one-on-one interviews with them about their images. The analysis of the images and interviews revealed an “evaded curriculum” within adolescent life, exposing pain, pleasure, and the intensity of joy in making and creating schoolgirl culture.

Using photos as part of interviewing can reduce

the awkwardness that an interviewee may feel from being put on the spot and grilled by the interviewer . . . ; direct eye contact need not be maintained, but instead interviewee and interviewer can both turn to the photographs as a kind of neutral third party. Awkward silences can be covered as both look at the photographs, and in situations where the status difference between interviewer and interviewee is great (such as between an adult and a child) or where the interviewee feels they are involved in some kind of test, the photographic content always provides something to talk about. (Banks, 2007, pp. 65–66)

Lorenz (2011) used photo elicitation as one way to generate empathy in research with acquired brain injury survivors. Brain injury patients can experience

a lack of empathy that leads to feelings of being disrespected and powerless. . . . Practicing empathy by using photos to create discursive spaces in research relationships may help us to learn about ourselves as we learn with patients. (p. 259)

As powerful as photography is, a picture being legendarily worth a thousand words, visual methods have limitations, as do all methods. Qualitative methodologists debate “the myth of photographic truth,” recognizing that “photographs represent a highly selected sample of the ‘real’ world” (Prosser, 2011, p. 480). Seeing is not necessarily believing, as photos can be manipulated, distorted, and removed from context (Morris, 2011). In this regard, visual methods do not stand alone but are most credible and useful when used in conjunction with other data: interviews, direct observations, and supporting documents.

Video Elicitation Interviewing

Technological developments have increased access to video and lowered costs, making video elicitation interviewing and storytelling practical as well as innovative—the frontier of visual qualitative inquiry. During video elicitation interviews, researchers interview using video to stimulate responses. An example is using video elicitation interviews for investigating physician–patient interactions (Henry & Fetters, 2012). Patients or physicians were interviewed about a recent clinical interaction using a video recording of that interaction as an elicitation tool.

Video elicitation is useful because it allows researchers to integrate data about the content of physician–patient interactions gained from video recordings with data about participants’ associated thoughts, beliefs, and emotions gained from elicitation interviews. This method also facilitates investigation of specific events or moments during interactions. Video elicitation interviews are logistically demanding and time-consuming, and they should be reserved for research questions that cannot be fully addressed using either standard interviews or video recordings in isolation. As many components of primary care fall into this category, high-quality video elicitation interviews can be an important method for understanding and improving physician–patient interactions in primary care (Henry & Fetters, 2012, p. 118).

Nick Agafanoff (2014) calls the use of video real ethnography—using documentary film techniques to capture and communicate stories and narratives. An example of the kind of experimentation going on in video interviewing is the approach of award-winning documentary filmmaker Errol Morris (The Thin Blue Line). He invented the “Interrotron” to increase rapport and deepen eye contact when videotaping interviews. Morris believes that Americans are so comfortable with television sets that doing his interviews through a television enhances rapport. Morris asks his questions through a specially designed video camera in the same room with the interview subject. The interviewee sees and hears him on a television and responds by talking into the television. Morris, in turn, watches the interview live on TV. The whole interaction takes place face-to-face through televisions placed at right angles to each other in the same room. For the results of this approach, see Morris (2000, 2003, 2012).

Writing as Preparation for or as Part of the Interview

The subject–object interview methodology illustrates another basis for interviewing—including writing as part of the interview. Prior to being interviewed about the 10 ideas used in that method (described under Projection Elicitation Techniques below), research participants are given 15 to 20 minutes to jot down things on the index cards. They subsequently choose which cards to talk about and can use their jottings to facilitate their verbal responses. Such an approach gives interviewees a chance to think through some things before responding verbally.

In reflective practice group interviews (Patton, 2011, pp. 266–269), participants are asked to come with written stories that address the focused question for the session:

• Tell about an effective collaboration you have experienced.

• Tell about a time you felt excluded from a group.

• Tell about a time when you overcame a huge obstacle.

Writing the stories in advance facilitates documenting the participants’ experiences and helps the group move more quickly to identifying patterns and themes across their stories. The process of asking and clarifying questions about one another’s stories deepens and enriches the stories.

Critical incidents are a revealing focus for reflective practice with groups. Interviewees can be invited to identify and write about incidents they consider “critical” to their own development, their family’s history, or the work of their organization. Dialogue ensues to understand what made the incident “critical.”

Projection Elicitation Techniques

Projection techniques are widely used in psychological assessment to gather information from people. The best-known projective test is probably the Rorschach Test. The general principle involved is to have people react to something other than a question—an inkblot, a picture, a drawing, a photo, an abstract painting, a film, a story, a cartoon, or whatever is relevant. This approach is especially effective in interviewing children, but it can be helpful with people of any age. I found, for example, when doing follow-up interviews two years after completion of a program, that some photographs of the program site and a few group activities greatly enhanced recall.

Students can be interviewed about work they have produced. In the wilderness program evaluation, we interviewed participants about the entries they shared from their journals. Walker (1996) used letters exchanged between friends as the basis for her study of a generation of American women. Holbrook (1996) contrasted official welfare case records with a welfare recipient’s journals to display two completely different constructions of reality. Hamon (1996) used proverbs, stories, and tales as a starting point for her inquiry into Bahamian family life. Rettig, Tam, and Magistad (1996) extracted quotes from transcripts of public hearings on child support guidelines as a basis for their fieldwork. Laura Palmer (1988) used objects left in memory of loved ones and friends at the Vietnam War Memorial in Washington, D.C., as the basis for her inquiry and later interviews. An ethnomusicologist will interview people as they listen and react to recorded music. The options for creative interviewing stretch out before us like an ocean teeming with myriad possibilities, some already known, many more waiting to be discovered or created.

Robert Kegan (1982) and colleagues (Helsing, Howell, Kegan, & Lahey, 2008) have had success basing interviews on reactions to 10 words in what they call the subject–object interview. To understand how the interviewee organizes interpersonal and intrapersonal experience, real-life situations are elicited from a series of 10 uniform probes. The interviewee responds to 10 index cards, each listing an idea, concept, or emotion: Angry, Success, Sad, Moved/Touched, Change, Anxious/Nervous, Strong Stand/Conviction, Torn, Lost Something, and Important To Me.

Reactions to these words provide data for the interviewer to explore the interviewee’s underlying epistemology or “principle of meaning-coherence” based on Kegan’s work The Evolving Self (1982). The subject–object interview is a complex and sophisticated method that requires extensive training for proper application and theoretical interpretation. For my purposes, the point is that a lengthy and comprehensive interview interaction can be based on reaction to 10 deceptively simple ideas presented on index cards rather than fully framed questions.

Think-Aloud Protocol Interviewing

“Protocol analysis” or, more literally, the “think-aloud protocol” approach, aims to elicit the inner thoughts or cognitive processes that illuminate what’s going on in a person’s head during the performance of a task, for example, painting or solving a problem. The point is to undertake interviewing as close to the action as possible. While someone engages in an activity, the interviewer asks questions and probes to get the person to talk about what he or she is thinking as he or she does the task. In “teaching rounds” at hospitals, senior physicians do a version of this when they talk aloud about how they’re engaging in a diagnosis while medical students listen, presumably learning the experts’ thinking processes by hearing them in action. (For details of the think-aloud protocol method, see Ericsson & Simon, 1993; Krahmer & Ummelen, 2004; Pressley & Afflerbach, 1995.)

Wilson (2000) used a protocol research design in a doctoral dissertation that investigated student understanding and problem solving in college physics. Twenty students in individual 45-minute sessions were videotaped and asked to talk aloud as they tried to solve three introductory physics problems of moderate difficulty involving Newton’s second law. Wilson was able to pinpoint the cognitive challenges that confronted the students as they tried to derive the acceleration of a particle moving in various directions and angles with respect to a particular reference frame.
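For readers unfamiliar with the task domain, a worked example of the kind of problem Wilson's students likely faced (my illustration, not one of Wilson's actual items) shows how short the expert solution path is compared with the lengthy think-aloud transcripts it generates:

```latex
% Block of mass m on a frictionless incline at angle \theta;
% the x-axis is chosen parallel to the incline surface.
\sum F_x = m a_x
\quad\Longrightarrow\quad
m g \sin\theta = m a_x
\quad\Longrightarrow\quad
a_x = g \sin\theta
% Numerically, for \theta = 30^\circ:
% a_x = (9.8\ \mathrm{m/s^2})(\sin 30^\circ) = 4.9\ \mathrm{m/s^2}
```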

Real-Time Qualitative Data Collection

The experience sampling method was developed to collect information on people’s reported feelings in real time in natural settings during selected moments of the day. Participants in experience sampling studies carry a handheld computer that prompts them several times during the course of the day (or days) to answer a set of questions immediately. They indicate their physical location, the activities in which they were engaged just before they were prompted, and the people with whom they were interacting. They also report their current subjective experience by indicating their feelings (Kahneman & Krueger, 2006, p. 9). While the experience sampling method as originally developed had participants respond on quantitative scales, the pervasive use of handheld devices in contemporary society means that qualitative data (e.g., texts, tweets) can be collected with this technique.
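Because smartphones have replaced the dedicated handheld computers of early experience sampling studies, the prompting logic is easy to sketch. The following is a minimal Python illustration of random daytime signals with open-ended (qualitative) responses; it is my sketch of the general technique, not code from any published study, and a real study would use a mobile app rather than a console script.

```python
# Minimal experience-sampling sketch: random prompt times during the
# day, open-ended answers recorded verbatim with a timestamp.

import random
from datetime import datetime, time, timedelta

QUESTIONS = [
    "Where are you right now?",
    "What were you doing just before this prompt?",
    "Who are you with?",
    "In a few words, how do you feel at this moment?",
]

def draw_prompt_times(n=5, start=time(9, 0), end=time(21, 0)):
    """Pick n random moments between start and end of the signaling day."""
    day = datetime.now().replace(hour=start.hour, minute=start.minute,
                                 second=0, microsecond=0)
    window = (end.hour - start.hour) * 60 + (end.minute - start.minute)
    return sorted(day + timedelta(minutes=random.randrange(window))
                  for _ in range(n))

def run_prompt(moment):
    """Record timestamped open-ended answers for one signal."""
    record = {"timestamp": moment.isoformat()}
    for q in QUESTIONS:
        record[q] = input(f"{q} ")  # open-ended text, kept verbatim
    return record

if __name__ == "__main__":
    responses = [run_prompt(t) for t in draw_prompt_times(n=2)]
    print(responses)  # in practice, saved to a datastore for analysis
```

The design choices mirror the method itself: signals arrive at unpredictable moments, and the response fields stay open-ended so the data remain words rather than ratings.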

Creative Interviewing

As the preceding examples illustrate, qualitative inquiry need not be confined to traditional written interview protocols and taking field notes. Researchers and evaluators have considerable freedom to creatively adapt qualitative methods to specific situations and purposes using anything that comes to mind—and works—as a way to enter into the world and worldview of others. Not only are there many variations in what stimuli to use and how to elicit responses, but creative variations also exist for who conducts interviews. We turn now to those options.

SIDEBAR

PERFORMING THE INTERVIEW

Interviews are performance texts. A performative social science uses the reflexive, active interview as a vehicle for producing moments of performance theater, a theater that is sensitive to the moral and ethical issues of our time. This interview form is gendered and dialogic. In it, gendered subjects are created through their speech acts. Speech is performative. It is action. The act of speech, the act of being interviewed, becomes a performance itself. The reflexive interview, as a dialogic conversation, is the site and occasion for such performances; that is, the interview is turned into a dramatic, poetic text. In turn, these texts are performed, given dramatic readings.

—Norman K. Denzin (2003, p. 84), Performance Ethnography

Collaborative and Participatory Interviewing

Chapter 4, “Practical and Actionable Qualitative Applications,” discussed conducting research and evaluation in a collaborative mode in which professionals and nonprofessionals become co-inquirers. Participatory approaches are widely used in community research, health research, educational evaluation, and organizational development. For example, participatory action research encourages collaboration within a mutually respectful inquiry relationship to understand and/or solve organizational or community problems. Participatory evaluation involves program staff and participants in all aspects of the evaluation process to increase evaluation understanding and use while also building capacity for future inquiries. Empowerment evaluation aims to foster self-determination among those who participate in the inquiry process. This can involve forming empowerment collaborations between researchers and participants and teaching participants to do research themselves. Feminist methods are often highly participatory (Ackerly & True, 2010; Bamberger & Podems, 2002; Brisolara, Seigart, & SenGupta, 2014; Hesse-Biber, 2011; Hesse-Biber & Leavy, 2006b; Hughes, 2002; Mertens, 2005; Olesen, 2011; Podems, 2014b).

In-depth interviewing is especially useful for supporting collaborative inquiry because the methods are accessible to and understandable by nonresearchers and community-based people without much technical expertise. Exhibit 4.13 (p. 222) presents Principles of Fully Participatory and Genuinely Collaborative Inquiry. The following sections provide some specific examples of collaborative and participatory interviewing approaches.

SIDEBAR

PARTICIPATORY ACTION RESEARCH

Participatory action research is the sum of its individual terms, which have had and continue to have multiple combinations and meanings, as well as a particular set of assumptions and processes. . . . Participation is a major characteristic of this work, not only in the sense of collaboration, but in the claim that all people in a particular context (for both epistemological and, with it, political reasons) need to be involved in the whole of the project undertaken. Action is interwoven into the process because change, from a situation of injustice toward envisioning and enacting a “better” life (as understood from those in the situation) is a primary goal of the work. Research as a social process of gathering knowledge and asserting wisdom belongs to all people, and has always been part of the struggle toward greater social and economic justice locally and globally.

—Brydon-Miller, Kral, Maguire, Noffke, and Sabhlok (2011, p. 388), “Jazz and the Banyan Tree: Roots and Riffs on Participatory Action Research”

Participant Interview Chain

As a participant-observer in the wilderness training program for adult educators, I was involved in (a) documenting the kinds of experiences program participants were having and (b) collecting information about the effects of those experiences on the participants and their work situations. In short, the purpose of the evaluation was to provide formative insights that could be used to help understand the personal, professional, and institutional outcomes of intense wilderness experiences for these adult educators. But the two of us doing the evaluation didn’t have sufficient time and resources to track everyone—40 people—in depth. Therefore, we began discussing with the program staff ways in which the participants might become involved in the data collection effort to meet both program and evaluation needs. The staff liked the idea of involving participants, thereby introducing them to observation and interviewing as ways of expanding their own horizons and deepening their perceptions.

The participants’ backpacking field experience was organized in two groups. We used this fact to design a data collection approach that would fit with the programmatic needs of sharing information between the two groups. Participants were paired to interview each other. At the very beginning of the first trip, before people knew each other, all of the participants were given a short, open-ended interview of 10 questions. They were told that each of them, as part of their project participation, was to have responsibility for documenting the experiences of their pairmate throughout the year. They were given a little bit of interview training and a lot of encouragement about probing and were told to record responses fully, thereby taking responsibility for helping build this community record of individual experiences. They were then sent off in pairs and given two hours to complete the interviews with each other, recording the responses by hand.

At the end of the 10-day experience, when the separate groups came back together, the same pairs of participants, consisting of one person from each group, were again given an interview outline and sent off to interview each other about their respective experiences. This served the program’s need to share information and the evaluation’s need to collect information. The trade-off, of course, was that with the minimal interview training given to the participants and the impossibility of carefully supervising, controlling, and standardizing the data collection, the results were of variable quality. This mode of data collection also meant that confidentiality was minimal and certain kinds of information might not be shared. But we gathered a great deal more data than we could have obtained if we had had to do all the interviews ourselves.

Exhibit 7.16 presents an example of participatory inquiry in a study of a program aimed at helping women engaged in prostitution transition into a new life. The program participants, women who had themselves been prostitutes, were trained to conduct the evaluation interviews.

EXHIBIT 7.16 Training Nonresearchers as Focus Group Interviewers: Women Leaving Prostitution

Rainbow Research studied the feasibility of developing a transitional housing program for prostituted women. To assist us we recruited 5 women who had been prostituted, trained them in focus group facilitation and had them do our interviews with women leaving prostitution. For them the experience was empowering and transformational. They were excited about learning a new skill, pleased to be paid for this work and thought it rewarding that it might benefit prostituted women. Especially thrilling for them during the interviews was the validation and encouragement they received from their peers for the work they were doing. Our work together also had its light moments. During a group simulation, our interviewers loudly and provocatively bantered with one another as they might have on the street.

Our interviewers were proud of their contribution. At project’s end they requested certificates acknowledging the training they had received and the interviews successfully performed. And, because they had performed well, we were pleased to oblige. In the simulations they critiqued our interview guide, leading us to edit the language, content, order and length, introduce new questions and drop others. Clearly they had rapport with their peers based on shared discourse and experience, allowing them to gather information others without the experience of prostitution would have been hard pressed to secure. This was apparent in the reliability of our data. Comparing across the interviews, responses to the same items were highly consistent. For all concerned it was a positive experience, with findings that most definitely shaped our final recommendations.

—Barry B. Cohen

Executive Director, Rainbow Research

Minneapolis, Minnesota

Limitations certainly exist as to how far one can push participant involvement in data collection and analysis without interfering in the program or burdening participants. But before those limits are reached, a considerable amount of useful information can be collected by involving program participants in the actual data collection process. I have since used similar participant interview pairs in a number of program evaluations with good results. The trick is to integrate the data collection into the program. Such cooperative and interactive interviewing can deepen the program experience for those involved by making qualitative data collection and reflection part of the change process (Cousins, Whitmore, & Shulha, 2014; Fetterman, Rodríguez-Campos, Wandersman, & O’Sullivan, 2014; King & Stevahn, 2013; Patton, 2011, 2012a).

Photovoice

Earlier, I discussed photo elicitation approaches to enhance interviewing. Incorporating photography into data collection can also be a highly participatory process. The best-known and most widely used participatory photo elicitation approach is Photovoice, which combines photography and qualitative inquiry with grassroots social action (see sidebar). The Minnesota Department of Health (2013) sponsored a Photovoice project in which youth from 13 communities were trained in Photovoice and given cameras to answer questions about their everyday lives through photography.

SIDEBAR

PHOTOVOICE

Photovoice combines photography and qualitative inquiry with grassroots social action. Community participants present their experiences and perspectives by taking photographs, developing narratives to go with their photos, analyzing them together, and using the results to advocate for change with policymakers and philanthropic funders. Photovoice, as a form of qualitative participatory photography, was developed by Caroline C. Wang and Mary Ann Burris while working in Beijing, China, in 1992. They sought a way for rural women of Yunnan Province, China, to tell their stories and influence the policies and programs that affected them (Wang & Burris, 1994). It has evolved into a global movement to combine visual evidence, qualitative narratives, and social change (http://www.photovoice.org/).

Data Collection by Program Staff

Program staff constitute another resource for data collection that is often overlooked. Involving program staff in data collection raises objections about staff subjectivity, data contamination, loss of confidentiality, the vested interests of staff in particular kinds of outcomes, and the threat that staff can pose to clients or students from whom they are collecting the data. Balancing these objections are the things that can be gained from staff involvement in data collection: (a) greater staff commitment to the evaluation, (b) increased staff reflectivity, (c) enhanced understanding of the data collection process that comes from training staff in data collection procedures, (d) increased understanding by staff of program participants’ perceptions, (e) increased data validity because of staff rapport with participants, and (f) cost savings in data collection.

One of my first evaluation experiences involved studying a program to train teachers in open education at the University of North Dakota. The faculty were interested in evaluating that program, but there were almost no resources available for a formal evaluation. Certainly not enough funds existed to bring in an external evaluation team to design the study, collect data, and analyze the results. The main means of data collection consisted of in-depth interviews with student teachers in 24 different schools and classrooms throughout North Dakota and structured interviews with 300 parents who had children in those classrooms. The only evaluation monies available would barely pay for the transportation and the actual mechanical costs of data collection. Staff and students at the university agreed to do the interviews as an educational experience. I developed structured interview forms for both teacher and parent interviews and trained all of the interviewers in a full-day session. Interviewers were assigned to geographical areas, making sure that no staff collected data from their own student teachers. The interviews were recorded and transcribed. I did follow-up interviews with a 5% sample as a check on the validity and reliability of the student and staff data.

After data collection, seminars were organized for staff and students to share their personal perceptions based on their interview experiences. Their stories had considerable impact on both staff and students. One outcome was the increased respect both staff and students had for the parents. They found the parents to be perceptive, knowledgeable, caring, and deeply interested in the education of their children. Prior to the interviewing, many of the interviewers had held fairly negative and derogatory images of North Dakota parents. The systematic interviewing had put them in a situation where they were forced to listen to what parents had to say, rather than telling parents what they (as educators) thought about things, and in learning to listen, they had learned a great deal. The formal analysis of the data yielded some interesting findings that were used to make some changes in the program, and the data provided a source of case materials that were adapted for use in training future program participants, but it is likely that the major and most lasting impact of the evaluation came from the learnings of students and staff who participated in the data collection. That experiential impact was more powerful than the formal findings of the study—an example of “evaluation process use” (Patton, 2008, 2012a), using an evaluation process for participant and organizational learning.

By the way, had the interviewers been paid at the going commercial rate, the data collection could have cost at least $30,000 in personnel expenses. As it was, there were no personnel costs, and a considerable human contribution was made to the university program by both students and staff.

Such participatory action research remains controversial. As Kemmis and McTaggart (2000) noted in their extensive review of participatory approaches,

In most action research, including participatory action research, the researchers make sacrifices in methodological and technical rigor in exchange for more immediate gains in face validity: whether the evidence they collect makes sense to them in their context. For this reason, we sometimes characterize participatory action research as “low-tech” research: it sacrifices in methodological sophistication in order to generate timely evidence that can be used and further developed in a real-time process of transformation (of practices, practitioners, and practice settings). (p. 591)

Whether some loss of methodological sophistication is merited depends on the primary purpose of the inquiry and the primary intended users of the results. Participatory research will have lower credibility among external audiences, especially among scholars who make rigor their primary criterion for judging quality. Participants involved in improving their work or lives, however, lean toward pragmatism where what is useful determines what is true. As Kemmis and McTaggart (2000) conclude,

The inevitability—for participants—of having to live with the consequences of transformation provides a very concrete “reality check” on the quality of their transformative work, in terms of whether their practices are more efficacious, their understandings are clearer, and the settings in which they practice are more rational, just, and productive of the kinds of consequences they are intended to achieve. For participants, the point of collecting compelling evidence is to achieve these goals, or, more precisely, to avoid subverting them intentionally or unintentionally by their action. Evidence sufficient for this kind of “reality checking” can often be low-tech (in terms of research methods and techniques) or impressionistic (from the perspective of an outsider who lacks the contextual knowledge that the insider draws on in interpreting this evidence). But it may still be “high-fidelity” evidence from the perspective of understanding the nature and consequences of particular interventions in transformations made by participants, in their context—where they are privileged observers. (p. 592)

Interactive Group Interviewing and Dialogues

The involvement of program staff or clients as colleagues or coresearchers in action research and program evaluation changes the relationship between evaluators and staff, making it interactive and cooperative rather than one-sided and antagonistic. William Tikunoff (1980), a pioneer in “interactive research” in education projects, found that putting teachers, researchers, and trainer/developers together as a team increased both the meaningfulness and the validity of the findings because teacher cooperation with and understanding of the research made the research less intrusive, thus reducing rather than increasing reactivity. Their discussions were a form of group interview in which they all asked each other questions.

The problem of how research subjects or program clients will react to staff involvement in an evaluation, particularly involvement in data collection, needs careful scrutiny and consideration in each situation in which it is attempted. Reactivity is a potential problem in both conventional and nonconventional designs. Breaches of confidence and/or reactivity-biased data cannot be justified in the name of creativity. On the other hand, as Tikunoff’s experiences indicate, interactive designs may increase the validity of data and reduce reactivity by making evaluation more visible and open, thereby making participants or clients less resistant or suspicious (King & Stevahn, 2013).

These approaches can reframe inquiry from a duality (interviewer–interviewee) to a dialogue in which all are co-inquirers. Miller and Crabtree (2000) advocate such a collaborative approach even in the usually closed and hierarchical world of medical and clinical research:

We propose that clinical researchers investigate questions emerging from the clinical experience with the clinical participants, pay attention to and reveal any underlying values and assumptions, and direct results toward clinical participants and policy makers. This refocuses the gaze of clinical research onto the clinical experience and redefines its boundaries so as to answer three questions: Whose question is it? Are hidden assumptions of the clinical world revealed? For whom are the research results intended? . . . Patients and clinicians are invited to explore their own and/or each other’s questions and concerns with whatever methods are necessary. Clinical researchers share ownership of the research with clinical participants, thus undermining the patriarchal bias of the dominant paradigm and opening its assumptions to investigation. This is the situated knowledge . . . where space is created to find a larger, more inclusive vision of clinical research. (p. 616)

SIDEBAR

MEDIATED CONVERSATIONS

Mediated conversations are an innovative method for the generation of rich qualitative data on complex issues such as those that researchers often face in education contexts. The method draws from the sociocultural paradigm (Wertsch, 1991), which highlights the role of artifacts and audience in mediating participants’ actions and thoughts. Mediated conversations were developed during the Curriculum Implementation Exploratory Studies (in New Zealand), when we invited school leaders and teachers to a series of one-day data-generating “workshops.” The conversations that took place were mediated by two kinds of artifacts: one that participants had been asked to bring and discuss and the other the 10 contract research questions. The artifacts served as a practical situated exemplification of the participants’ descriptions of their work in relation to curriculum implementation. They served as a locus and prompt for questions and discussion. We convened the mediated conversations for our research purposes, but the participants experienced them as rich professional learning. Mediated conversations appear to provide a participatory method for data generation that has immediate benefits for researchers and research participants.

SOURCE: Cowie et al. (2009) and Hipkins, Cowie, Boyd, Keown, and McGee (2011).

Creativity and Data Quality: Qualitative Bricolage

I realized quite early in this adventure that interviews, conventionally conducted, were meaningless. Conditioned clichés were certain to come. The question-and-answer technique may be of some value in determining favored detergents, toothpaste and deodorants, but not in the discovery of men and women.

—Studs Terkel (Quoted by Douglas, 1985, p. 7)

No definitive list of creative interviewing or inquiry approaches can or should be constructed. Such a list would be a contradiction in terms. Creative approaches are those that are situationally responsive and appropriate, credible to primary intended users, and effective in opening up new understandings. The approaches just reviewed are deviations from traditional research practice. Each idea is subject to misuse and abuse if applied without regard for the ways in which the quality of the data collected can be affected. I have not discussed such threats and possible errors in depth because I believe it is impossible to identify in the abstract and in advance all the trade-offs involved in balancing concerns for accuracy, utility, feasibility, and propriety. For example, having program staff do client interviews in an outcomes evaluation could (a) seriously reduce the validity and reliability of the data, (b) substantially increase the validity and reliability of the data, or (c) have no measurable effect on data quality. The nature and degree of effect would depend on staff relationships with clients, how staff were assigned to clients for interviewing, the kinds of questions asked, the training of the interviewers, the attitudes of clients toward the program, the purposes to be served by the evaluation, the environmental turbulence of the program, and so on and so forth. Program staff might make better or worse interviewers than external evaluation researchers depending on these and other factors. An evaluator must grapple with these kinds of data quality questions for all designs, particularly nontraditional approaches.

Practical, but creative, data collection consists of using whatever resources are available to do the best job possible. Constraints always exist and do what constraints do—constrain. Our ability to think of alternatives is limited. Resources are always limited. This means data collection will be imperfect, so dissenters from research and evaluation findings who want to attack a study’s methods can always find some grounds for doing so. A major reason for actively involving intended evaluation users in methods decisions is to deal with weaknesses and weigh trade-offs and threats to data quality before data are collected. By strategically calculating threats to utility, as well as threats to validity and authenticity, it is possible to make practical decisions about the strengths of creative and nonconventional data collection procedures (Patton, 2008, 2012a).

The creative, adaptive inquirer using diverse techniques may be thought of as a “bricoleur.” The term comes from Lévi-Strauss (1966), who defined a bricoleur as a “jack of all trades or a kind of professional do-it-yourself person” (p. 17).

The qualitative researcher as bricoleur or maker of quilts uses the aesthetic and material tools of his or her craft, deploying whatever strategies, methods, or empirical materials are at hand. If new tools or techniques have to be invented, or pieced together, then the researcher will do this. (Denzin & Lincoln, 2000, p. 4)

Creativity begins with being open to new possibilities, the bricolage of combining old things in new ways, including alternative and emergent forms of data collection, transformed observer–observed relations, and reframed interviewer–interviewee interconnections. Naturalistic inquiry calls for ongoing openness to whatever emerges in the field and during interviews. This openness means avoiding forcing new possibilities into old molds. The admonition to remain open and creative applies throughout naturalistic inquiry, from design through data collection and into analysis. Failure to remain open and creative can lead to the error made by a traveler who came across a peacock for the first time, a story told by Halcolm.

A traveler to a new land came across a peacock. Having never seen this kind of bird before, he took it for a genetic freak. Taking pity on the poor bird, which he was sure could not survive for long in such deviant form, he set about to correct nature’s error. He trimmed the long, colorful feathers; cut back the beak; and dyed the bird black. “There now,” he said, with pride in a job well done, “you now look more like a standard guinea hen.”

Adapting Interviewing for Particular Interviewees

This chapter has reviewed general principles of and approaches to in-depth qualitative interviewing. However, most researchers, evaluators, and practitioners specialize in working with and studying specific target populations using finely honed interviewing techniques. Interviewing “elites” or “experts” often requires an interactive style.

Elites respond well to inquiries about broad topics and to intelligent, provocative, open-ended questions that allow them the freedom to use their knowledge and imagination. In working with elites, great demands are placed on the ability of the interviewer, who must establish competence by displaying a thorough knowledge of the topic or, lacking such knowledge, by projecting an accurate conceptualization of the problem through shrewd questioning. (Rossman & Rallis, 1998, p. 134)

Robert Coles (1990) became adept at interviewing children, as have Guerrero-Manalo (1999), Graue and Walsh (1998), Holmes (1998), and Greig and Taylor (1998). Rita Arditti (1999) developed special culturally and politically sensitive approaches for gaining access to and interviewing grandmothers whose children were among “the disappeared” during Argentina’s military regime. Guerrero (1999a, 1999b) developed special participatory approaches for interviewing women in developing countries. Judith Arcana (1981, 1983) drew on her own experiences as a mother to become expert at interviewing 180 mothers for two books about the experiences of mothering (one about mothers and daughters, one about mothers and sons).

SIDEBAR

ON INTERVIEWING POLITICIANS

Political people could always be more frank. . . . There is a truth that can’t be spoken. You can come to a shorthand understanding of certain realities—I know what we can’t talk about.

—Mark Leibovich (2013)

New York Times journalist

At the other end of the continuum from Coles’s delightful stories of childhood innocence are Jane Gilgun’s haunting interviews with male sexual offenders and Angela Browne’s intensive interviews with women incarcerated in a maximum security prison. Gilgun (1991, 1994, 1995, 1996, 1999) conducted hundreds of hours of interviews with men who had perpetrated violent sex offenses against women and children, most of them with multiple victims. She learned to establish relationships with these men through repeated life history interviews but did so without pretending to condone their actions, sometimes challenging their portrayals. Two examples, both involving long hours and horrific details, offer a sense of the challenges facing someone undertaking such work.

One man, engaged to be married at the time of his arrest, confessed to seven rapes; Gilgun interviewed him for a total of 14 hours over 12 different interviews, including detailed descriptions of his sexual violence. Another man molested more than 20 boys; at the time of his arrest, he was married, sexually active with his wife, and a stepfather of two boys; she obtained 20 hours of tape over 11 interviews. These cases were particularly intriguing as purposeful samples because both men were white college graduates in their early 30s who were employed as managers with major supervisory responsibilities and who came from upper-middle-class, two-parent, never-divorced families where the fathers held executive positions and the mothers were professionals. Gilgun worked with an associate in transcribing and interpreting the interviews.

The data were so emotionally evocative that we spent a great deal of time working through our personal responses. Almost two years went by before we found we had any facility in articulating the meanings of the discourses we identified in the informants’ accounts. Their way of thinking was for the most part outside our frames of reference. As we struggled through these interpretive processes, we made notes of our responses. . . . Most compelling to us [was the men believing they were] entitled to take what they wanted and of defining persons and situations as they wished . . . , to suit themselves. Overall, the discourses they invoked served oppressive hegemonic ends. We also found that the men experienced chills, thrills, and intense emotional gratification as they imposed their wills on smaller, physically weaker persons. (Gilgun & McLeod, 1999, p. 175)

The life work of Angela Browne illustrates a similar commitment to in-depth, life history interviewing with people who are isolated from the mainstream and whose experiences are little understood by the general culture. After her groundbreaking study of women who kill violent partners in self-defense (When Battered Women Kill, 1987), Browne began gathering life history narratives from women incarcerated in a maximum security prison. These interviews, each six hours in length and conducted in a small room off a tunnel in the middle of the facility, included women with lifetime histories of trauma, much of it at the hands of family members in early childhood. Some had witnessed brutal homicides; others were serving time for crimes of violence they had committed. Their stories were painful to tell and to hear. Interviews often were so emotionally draining that Browne came away exhausted, sometimes needing to debrief on both the impact of what she had heard and the dynamics of the interviewing process. On several occasions, we had lengthy phone conversations immediately following the interviews, while she was still within the prison walls. This kind of extreme interviewing takes unusual skill, dedication, and self-knowledge, coupled with a keen interest in the dynamics of human interaction. Although Browne returned home drained after each full week of conducting these day-long interviews, her enthusiasm for the task and her appreciation of the respondents’ strength and lucidity never dimmed.

The works of Gilgun and Browne illustrate the intensity, commitment, long hours, and hard work involved with certain in-depth and life history approaches to interviewing.

Adapting Interviewing to Particular Target Populations

Applications and methods of qualitative inquiry, especially interviewing techniques for specially targeted populations and specialized disciplinary approaches, continue to evolve as interest in qualitative methods grows exponentially (a metaphoric rather than a statistical estimation). General interview skills include asking open-ended questions, listening carefully to ask follow-up questions, effective and sensitive probing, distinguishing different kinds of questions, and pacing the interview. These skills are the necessary foundation for in-depth qualitative interviewing, but additional challenges arise with particular target populations, so the skills must be adapted to particular groups and situations. Exhibit 7.17 on the next page highlights some specialized interviewing approaches and accompanying challenges, including online interviewing. As applications and techniques have proliferated, so have concerns about the ethical challenges of qualitative inquiry, our next topic.

EXHIBIT 7.17 Special Interviewing Challenges for Particular Target Populations: Five Examples

This table highlights some of the specific challenges that arise in interviewing particular target populations. This is by no means an exhaustive list of target populations or challenges. It is meant to illustrate how general interview skills are necessary as a foundation for in-depth qualitative interviewing, but those skills must then be adapted to particular target groups and situations.

MODULE

63 Ethical Issues and Challenges in Qualitative Interviewing

Interviews are interventions. They affect people. A good interview lays open thoughts, feelings, knowledge, and experience not only to the interviewer but also to the interviewee. The process of being taken through a directed, reflective process affects the persons being interviewed and leaves them knowing things about themselves that they didn’t know—or at least were not fully aware of—before the interview. Two hours or more of thoughtfully reflecting on an experience, a program, or one’s life can be change inducing; 10, 15, or 20 hours of life history interviewing can be transformative—or not. Therein lies the rub. Neither you nor the interviewee can know in advance, or sometimes even after the fact, what impact an interviewing experience will have or has had.

The purpose of a research interview is first and foremost to gather data, not change people. Earlier, in the section on neutrality, I asserted that an interviewer is not a judge. Neither is a research interviewer a therapist. Staying focused on the purpose of the interview is critical to gathering high-quality data. Still, there will be many temptations to stray from that purpose. It is common for interviewees to ask for advice, approval, or confirmation. Yielding to these temptations, the interviewer becomes the interviewee—answering more questions than he or she asks.

On the other hand, the interviewer, in establishing rapport, is not a cold slab of granite—unresponsive to learning about great suffering and pain that may be reported and even reexperienced during an interview. In a major farming systems needs assessment project to develop agricultural extension programs for distressed farm families, I was part of a team of 10 interviewers (working in pairs) who interviewed 50 farm families. Many of these families were in great pain. They were losing their farms. Their children had left for the city. Their marriages were under stress. The two-hour interviews traced their family history, their farm situation, their community relationships, and their hopes for the future. Sometimes questions would lead to husband–wife conflict. The interviews would open old wounds, lead to second-guessing decisions made long ago, or bring forth painful memories of dreams never fulfilled. People often asked for advice—what to do about their finances, their children, government subsidy programs, even their marriages. But we were not there to give advice. Our task was to get information about needs that might, or might not, lead to new programs of assistance. Could we do more than just ask our questions and leave? Yet, as researchers, could we justify in any way intervening? Yet again, our interviews were already an intervention. Such are the ethical dilemmas that derive from the power of interviews.

What we decided to do in the farm family interviews was leave each family a packet of information about resources and programs of assistance, everything from agricultural referrals to financial and family counseling. To avoid having to decide which families really needed such assistance, we left the information with all families—separate and identical packages for both husband and wife. When interviewees asked for advice during the interview, we could tell them that we would leave them referral information at the end of the interview.

While interviews may be intrusive in reopening old wounds, they can also be healing. In doing follow-up interviews with families who had experienced child sexual abuse, we found that most of the mothers appreciated the opportunity to tell their stories, vent their rage against the system, and share their feelings with a neutral, but interested, listener. Our interviews with elderly residents participating in a program to help them stay in their homes and avoid nursing home institutionalization typically lasted much longer than planned because the elderly interviewees longed to have company and talk. When interviewees are open and willing to talk, the power of interviewing poses new risks. People will tell you things they never intended to tell you. This can be true even with reluctant or hostile interviewees—a fact journalists depend on. Indeed, it seems at times that the very thing someone is determined not to say is the first thing he or she tells, just to release the psychological pressure of secrecy or deceit.

I repeat, people in interviews will tell you things they never intended to tell. Interviews can become confessions, particularly under the promise of confidentiality. But beware that promise. Social scientists can be summoned to testify in court. We do not have the legal protection of clergy and lawyers. In addition, some information must be reported to the police, for example, evidence of child abuse. Thus, the power of interviewing can put the interviewees at risk. The interviewer needs to have an ethical framework for dealing with such issues.

There are also direct impacts on interviewers. The previous section described the wrenching interviews conducted by Jane Gilgun with male sex offenders and Angela Browne with incarcerated women, and the physical and emotional toll of those interviews on them as interviewers, exposed for hours on end to horrendous details of violence and abuse. In a family sex abuse project (Patton, 1991), we found that the interviewers needed to be extensively debriefed, sometimes in support groups together, to help them process and deal with the things they heard. They could only take in so much without having some release, some safety valve for their own building anger and grief. Middle-class interviewers going into poor areas may be shocked and depressed by what they hear and see. It is not enough to do preparatory training before such interviewing. Interviewers may need debriefing—and their observations and feelings can become part of the data on team projects.

These examples are meant to illustrate the power of interviewing and why it is important to anticipate and deal with the ethical dimensions of qualitative inquiry. Because qualitative methods are highly personal and interpersonal, because naturalistic inquiry takes the researcher into the real world where people live and work, and because in-depth interviewing opens up what is inside people, qualitative inquiry may be more intrusive and involve greater reactivity than surveys, tests, and other quantitative approaches.

Exhibit 7.18 presents a checklist of 12 common ethical issues as a starting point for thinking through ethics in design, data collection, analysis, and reporting. The next sections elaborate some issues of special concern.

• Informed consent and confidentiality

• Confidentiality versus people owning their own stories

• How much of an interview must be approved in advance?

• Reciprocity: Should interviewees be compensated? If so, how?

• How hard to push for sensitive information?

• Being careful in the face of danger in the field

EXHIBIT 7.18 Ethical Issues Checklist

1. Explaining purpose. How will you explain the purpose of the inquiry and the methods to be used in ways that are accurate and understandable?

• What language will make sense to the participants in the study?

• What details are critical to share? What can be left out?

• What’s the expected value of your work to society and to the greater good?

Guiding principle: Be clear, honest, and transparent about purpose.

2. Reciprocity. What’s in it for the interviewee?

• Why should the interviewee participate in the interview?

• What is appropriate compensation?

Guiding principle: Honor the gift of an interviewee’s time in a meaningful and tangible way.

3. Promises. Don’t make promises lightly, for example, promising a copy of the recording or the report.

Guiding principle: If you make promises, keep them.

4. Risk assessment. In what ways, if any, will conducting the interview put people at risk?

• Psychological stress

• Legal liabilities

• In evaluation studies, continued program participation (if certain things become known)

• Ostracism by peers, program staff, or others for talking

• Political repercussions

How will you describe these potential risks to interviewees?

How will you handle them if they arise?

Guiding principle: First, do no harm.

5. Confidentiality. What are reasonable promises of confidentiality that can be fully honored? Know the difference between confidentiality and anonymity. (Confidentiality means you know but won’t tell. Anonymity means you don’t know, as in a survey returned anonymously.)

• What things can you not promise confidentiality about, for example, illegal activities or evidence of child abuse or neglect?

• Will names, locations, and other details be changed? Or do participants have the option of being identified? (See discussion of this in the text.)

• Where will data be stored?

• How long will data be maintained?

Guiding principle: Know the ethical and legal dimensions of confidentiality.

6. Informed consent. What kind of informed consent, if any, is necessary for mutual protection?

• What are your local institutional review board (IRB) guidelines and/or requirements, or those of an equivalent committee for protection of human subjects in research?

• What has to be submitted, under what timelines, for IRB approval, if applicable?

Guiding principle: Know and follow the standards of your discipline or field.

7. Data access and ownership. Who will have access to the data? For what purposes?

• Who owns the data in an evaluation? (Be clear about this in the contract.)

• Who has right of review before publication? (For example, of case studies, by the person or organization depicted in the case, or of the whole report, by a funding or sponsoring organization)

Guiding principle: Don’t wait until publication to deal with data ownership issues; anticipate data access and ownership issues from the beginning.

8. Interviewer mental health. How will you and other interviewers likely be affected by conducting the interviews?

• What might be heard, seen, or learned that may merit debriefing and processing?

• Who can you talk to about what you experience without breaching confidentiality?

Guiding principle: Fieldwork is engaging, intellectually and emotionally. Take care of yourself and your coresearchers.

9. Ethical advice. Who will be the researcher’s confidant and counselor on matters of ethics during a study? Not all issues can be anticipated in advance. Knowing who you will go to in the event of difficulties can save precious time in a crisis and bring much needed comfort.

Guiding principle: Plan ahead, and know who you will consult on emergent ethical issues.

10. Data collection boundaries. How hard will you push for responses from interviewees?

• What lengths will you go to in trying to gain access to data you want? What won’t you do?

• How hard will you push interviewees to respond to questions about which they show some discomfort?

Guiding principles: Know yourself. Err on the side of caution. Don’t let the ends justify the means in overstepping boundaries.

11. Intersection of ethical and methodological choices.

• Methods and ethics are intertwined. Understand the intersection.

• As a qualitative methodologist, based on your own work, contribute to the field by reporting the ethical challenges you face.

Guiding principle: Include ethical dilemmas faced and handled in your methods discussion.

12. Ethical versus legal. What ethical framework and philosophy informs your work and ensures respect and sensitivity for those you study beyond whatever may be required by law?

• What disciplinary or professional code of ethical conduct will guide you?

Guiding principles: Don’t make up ethical responses along the way. Know your profession’s ethical standards. Know what the law in your jurisdiction requires.

For additional guidance on ethics in qualitative inquiry, see Christians (2000), Rubin and Rubin (2012, pp. 85–93), and Silverman and Marvasti (2008).

Informed Consent and Confidentiality

Informed consent protocols and opening statements in interviews typically cover the following issues:

What is the purpose of collecting the information?

Who is the information for? How will it be used?

What will be asked in the interview?

How will responses be handled, including confidentiality?

What risks and/or benefits are involved for the person being interviewed?

The interviewer often provides this information in advance of the interview and then again at the beginning of the interview. Providing such information does not, however, require making long and elaborate speeches. Statements of purpose should be simple, straightforward, transparent, and understandable. Long statements at the beginning of the interview about what the interview is going to be like and how it will be used are usually either boring or anxiety producing. The interviewee will find out soon enough what kinds of questions are going to be asked and, from the nature of the questions, will make judgments about the likely use of such information. The basic messages to be communicated in the opening statement are (a) that the information is important, (b) why it is important, and (c) that the interviewer is willing to explain the purpose of the interview out of respect for the interviewee. Here’s an example of an opening interview statement from an evaluation study.

I’m a program evaluator brought in to help improve this program. As someone who has been in the program, you are in a unique position to describe what the program does and how it affects people. And that’s what the interview is about: your experiences with the program and your thoughts about your experiences.

The answers from all the people we interview, and we’re interviewing about 25 people, will be combined for our report. Nothing you say will ever be identified with you personally. As we go through the interview, if you have any questions about why I’m asking something, please feel free to ask. Or if there’s anything you don’t want to answer, just say so. The purpose of the interview is to get your insights into how the program operates and how it affects people.

Any questions before we begin?

This may seem straightforward enough, but in dealing with real people in the real world, all kinds of complications can arise. Moreover, some genuine quandaries have arisen in recent years around the ethics of research in general and of qualitative inquiry in particular.

How Much of an Interview Must Be Approved in Advance?

In the chapter on observation and fieldwork, I discussed the problems posed by approval protocols aimed at protecting human subjects given the emergent and flexible designs of naturalistic inquiry. IRBs for the protection of human subjects often insist on approving actual interview questions. That can be done when using the standardized interview format discussed in this chapter, works less well when using an interview guide, and doesn’t work at all for conversational interviewing, where the questions emerge in the moment, in context. A compromise is to specify those questions one can anticipate and list other possible topics while treating the conversational component as probes, which are not typically specified in advance. Adopt a strategy of “planned flexibility” insofar as possible, and be prepared to educate your IRB about why this is appropriate both methodologically and ethically (Silverman & Marvasti, 2008, p. 48). In a fully naturalistic inquiry design, interview questions can and should change as the interviewer understands the situation more fully and discovers new pathways for questioning. The tension between specifying questions for approval in advance and allowing questions to emerge in context in the field led Elliott Eisner (1991) to ask, “Can qualitative studies be informed . . . [since] we have such a hard time predicting what we need to get consent about?” (p. 215). An alternative to specifying precise questions for approval in advance is to specify areas of inquiry that will be avoided—that is, to anticipate ways in which respondents might be put at risk and affirm that the interviewer will avoid such areas.

SIDEBAR

WHAT DO YOU DO WITH DANGEROUS REVELATIONS? THE ETHICAL CHALLENGE OF CONFIDENTIALITY

Robert Weiss (1994) reported the problem posed by a woman respondent who was HIV positive.

She said that all her life, from the time she was a child, she had been treated brutally by men. Contracting HIV from a boyfriend was only the most recent instance. Now she wanted to get even with the whole male sex. She visited barrooms every evening to pick up men with whom she could have intercourse, in the hope that she would infect them. The woman’s sister had already reported her to a public health agency, mostly because she wanted the woman stopped before she was hurt by some man she had tried to infect. The public health agency did nothing.

In our final interview I learned the woman was no longer seeking revenge through sex. She had met a man who had become her steady boyfriend and who remained with her even after he was told—by that same sister—that she was HIV positive. (His first reaction was to yell at the woman and, I think, push her around.) If in our final interview the woman had reported continuing her campaign to spread HIV among men, I would have told her to stop. I can’t believe that would have done much good, but I would have told her anyway. I also would have discussed her report with the head of the clinic where she was being treated, with the thought of devising some way to interrupt her behavior. (p. 132)

[Cartoon: “Uninformed Consent Seeker,” ©2002 Michael Quinn Patton and Michael Cochran]

Conversational Interviewing Poses Special Informed Consent Problems:

Whipping out an informed consent statement and asking for a signature can be awkward at best. To the extent that interviews are an extension of a conversation and part of a relationship, the legality and formality of a consent form may be puzzling to your conversational partner or disruptive to the research. On the one hand, you may be offering conversational partners anonymity and confidentiality, and on the other asking them to sign a legal form saying they are participating in the study. How can they later deny they spoke to you—which they may need to do to protect themselves—if you possess a signed form saying they were willing to participate in the study? (Rubin & Rubin, 1995, p. 95)

Qualitative methodologists find that many IRBs are not competent to judge qualitative research. When engaging with an IRB, distinguish between legal compliance with human subjects protection requirements and conscientious ethical behavior:

You cannot achieve ethical research by following a set of preestablished procedures that will always be correct. Yet, the requirement to behave ethically is just as strong in qualitative interviewing as in other types of research on humans—maybe even stronger. You must build ethical routines into your work. You should carefully study codes of ethics and cases of unethical behavior to sensitize yourself to situations in which ethical commitments become particularly salient. Throughout your research, keep thinking and judging what are your ethical obligations. (Rubin & Rubin, 1995, p. 96)

New Directions in Informed Consent: Confidentiality Versus People Owning Their Own Stories

Confidentiality norms are also being challenged by new directions in qualitative inquiry. Traditionally, researchers have been advised to disguise the locations of their fieldwork and change the names of respondents, usually giving them pseudonyms, as a way of protecting their identities. The presumption has been that the privacy of research subjects should always be protected. This remains the dominant presumption, as well it should. It is being challenged, however, by participants in research who insist on “owning their own stories.” Some politically active groups take pride in their identities and refuse to be involved in research that disguises who they are. Some programs that aim at empowering participants emphasize that participants “own” their stories and should insist on using their real names. In a program helping women overcome histories of violence and abuse, I encountered participants who were combating the stigma of their past by telling their stories and attaching their real names to those stories as part of healing, empowerment, and pride. Does the researcher, in such cases, have the right to impose confidentiality against the wishes of those involved? Is it patronizing and disempowering for a university-based human subjects committee to insist that these women are incapable of understanding the risks involved if they choose to turn down an offer of confidentiality? On the other hand, by identifying themselves, they give up not only their own privacy but also perhaps that of their children, other family members, and current or former partners.

A doctoral student studying a local church worked out an elaborate consent form in which the entire congregation decided whether to let itself be identified in his dissertation. Individual church members also had the option of using their real names or choosing pseudonyms. Another student studying alternative health practitioners offered them the option of confidentiality and pseudonyms or using their real identities in their case studies. Some chose to be identified, while some didn’t. A study of organizational leaders offered the same option. In all of these cases, the research participants also had the right to review and approve the final versions of their case studies and transcripts before they were made public. In cases of collaborative inquiry, where the researcher works with “coresearchers” and data collection involves more of a dialogue than an interview, the coresearchers may also become coauthors as they choose to identify themselves and share in publication.

These are examples of how the norms about confidentiality are changing and being challenged as tension has emerged between the important ethic of protecting people’s privacy and, in some cases, their desire to own their stories. Protection of human subjects properly insists on informed consent, but informed consent does not automatically mean confidentiality. As these examples illustrate, informed consent can mean that participants understand the risks and benefits of having their real names reported and choose to do so.

Reciprocity: Should Interviewees Be Compensated? If So, How?

Quid pro quo: “something for something” (Ancient Roman principle)

The issues of whether and how to compensate interviewees involve questions of both ethics and data quality. Will payment, even of small amounts, affect people’s responses, increasing acquiescence or, alternatively, enhancing the incentive to respond thoughtfully and honestly? Is it somehow better to appeal to people on the basis of the contribution they can make to knowledge or, in the case of evaluation, improving the program, instead of appealing to their pecuniary interest? Can modest payments in surveys increase response rates to ensure an adequate sample size? Does the same apply to in-depth interviewing and focus groups? The interviewer is usually getting paid. Shouldn’t the time of interviewees be respected, especially the time of low-income people, by offering compensation? What alternatives are there to cash for compensating interviewees? In Western capitalist societies, issues of compensation are arising more and more often, both because people in economically disadvantaged communities are reacting to being overstudied and undervalued and because private sector marketing firms routinely compensate focus group participants, a practice that has spread to the public and nonprofit sectors.

Professionals in various fields differ about compensation for interviewees. Below are some comments from a lively discussion of these issues that took place on EvalTalk, the American Evaluation Association Internet Listserv.

• I believe in paying people, particularly in areas of human services. I am thinking of parenting and teen programs where it can be very difficult to get participation in interviews. If their input is valuable, I believe you should put your money where your mouth is. However, I would always make it very clear to the respondents that, although they are being paid for their time, they are not being paid for their responses and should be as candid and forthright as possible.

• One inner-city project offered vouchers to parent participants in a focus group to buy books for their kids, which for some low-income parents proved to be their first experience owning books rather than always borrowing them.

• Cash payment for participation in interviews is considered income and is therefore taxable. This can create problems if the payment comes from a public agency, as our county attorney has pointed out in the past. Consequently, when we “pay” for participation, we use incentives other than cash, for example, vouchers or gift certificates donated by local commercial vendors, such as a discount store. These seem to be as effective.

• If you are detaining a person with a face-to-face interview, and it isn’t a friendly conversation, rather it is a business exercise, it is only appropriate to offer to pay the prospective respondent for his or her time and effort. This should not preclude, and it would certainly help to explain, the importance of his or her contribution.

• We have paid and not paid incentives for focus groups for low-income folks as well as professionals and corporate CEOs. The bottom line is that in most cases the incentive doesn’t make a lot of difference in terms of participation rates, especially if you have well-trained interviewers and well-designed data collection procedures.

One of my concerns is that we are moving in a direction in which it is assumed (with very little substantive foundation) that people will only respond if given incentives. My plea here is that colleagues not fall into the trap of using incentives as a crutch but that they constantly examine and reexamine the whole issue of incentives and not simply assume that they are either needed or effective.

Alternatives to cash can instill a deeper sense of reciprocity. In doing family history interviews, I found that giving families a copy of the interview was much appreciated and increased the depth of responses because they were speaking not just to me, the interviewer, but to their grandchildren and great-grandchildren in telling the family’s story. In one project in rural areas, we carried a tape duplicator in the truck and made copies for them instantly at the end of the interview. Providing complete transcripts of interviews can also be attractive to participants. In an early-childhood parenting program where data collection included videotaping parents playing with their children, copies of the videotapes were prized by the parents. The basic principle informing these exchanges is reciprocity. Participants in research provide us with something of great value—their stories and their perspectives on their world. We show that we value what they give us by offering something in exchange.

How Hard to Push for Sensitive Information?

Skillful interviewers can get people to talk about things they may later regret having revealed. Or sharing revelations in an interview may unburden a person, letting one get something off one’s chest. Since one can’t know for sure, interviewers are often faced with an ethical challenge concerning how hard to push for sensitive information, a matter in which the interviewer has a conflict of interest since the interviewer’s predilection is likely to be to push for as much as possible. Herb and Irene Rubin tell of interviewing an administrator in Thailand and learning that two months after their fieldwork, he committed suicide, “leaving us wondering if our encouraging him to talk about his problems may have made them more salient to him” (Rubin & Rubin, 1995, p. 98).

In deciding how hard to push for information, the interviewer must balance the value of a potential response against the potential distress for the respondent. This requires sensitivity, but it is not a burden the interviewer need take on alone. When I see that someone is struggling for an answer, or seems hesitant or unsure, or when I simply know that an area of inquiry may be painful or uncomfortable, I prefer to make the interviewee a partner in the decision about how deeply to pursue the matter. I say something like this:

I realize this is a difficult thing to talk about. Sometimes people feel better talking about something like this and, of course, sometimes they don’t. You decide how much is comfortable for you to share. If you do tell me what happened and how you feel and later you wish you hadn’t, I promise to delete it from the interview. Okay? Obviously, I’m very interested in what happened, so please tell me what you’re comfortable telling me.

Be Careful. It’s Dangerous Out There.

In our teaching and publications we tend to sell students a smooth, almost idealized, model of the research process as neat, tidy, and unproblematic. . . . Perhaps we should be more open and honest about the actual pains and perils of conducting research in order to prepare and forewarn aspiring researchers.

—Maurice Punch (1986, pp. 13–14)

In the old police television show Hill Street Blues, the precinct duty sergeant ended each daily briefing of police officers by saying, “Let’s be careful out there.” The same warning applies to qualitative researchers doing fieldwork and interviewing: “Be careful. It’s dangerous out there.” It’s important to protect those who honor us with their stories by participating in our studies. It’s also important to protect yourself.

I was once interviewing a young man at a coffee shop for a recidivism study when another man showed up, an exchange took place, and I realized I had been used as a cover for a drug purchase. In doing straightforward outcomes evaluation studies, I have discovered illegal and unethical activities that I would have preferred not to have stumbled across. When we did the needs assessment of distressed farm families in rural Minnesota, we took the precaution of alerting the sheriffs’ offices in the counties where we would be interviewing in case any problems arose. One sheriff called back and said that a scam had been detected in the county that involved a couple in a pickup truck soliciting home improvement work and then absconding with the down payment. Since we were interviewing in couple teams and driving pickup trucks, the sheriff, after assuring himself of the legitimacy of our work, offered to provide us with a letter of introduction, an offer we gratefully accepted.

I supervised a dissertation that involved interviews with young male prostitutes. We made sure to clear that study with the local police and public prosecutors and to get their agreement that promises of confidentiality would be respected given the potential contribution of the findings to reducing both prostitution and the spread of AIDS. This, by the way, was a clear case where it would have been inappropriate to pay the interviewees. Instead of cash, the reciprocity incentive the student offered was the result of a personality instrument he administered.

One of the more famous cases of what seemed like straightforward fieldwork that became dangerous involved dissertation research on the culture of a bistro in New York City. Through in-depth interviews, graduate student Mario Brajuha gathered detailed information from people who worked and ate at the restaurant—information about their lives and their views about others involved with the restaurant. He made the usual promise of confidentiality. In the midst of his fieldwork, the restaurant was burned, and the police suspected arson. Learning of his fieldwork, they subpoenaed his interview notes. He decided to honor his promises of confidentiality and ended up going to jail rather than turning over his notes. This case, which dragged on for years, disrupting his graduate studies and his life, reaffirmed that researchers lack the protection of clergy and lawyers when subpoenas are involved, promises of confidentiality notwithstanding. (For details, see Brajuha & Hallowell, 1986.)

It helps to think about potential risks and dangers prior to gathering data, but Brajuha could not have anticipated the arson. Anticipation, planning, and ethical reflection in advance only take you so far. As Maurice Punch (1986) has observed, sounding very much like he is talking from experience,

How to cope with a loaded revolver dropped in your lap is something you have to resolve on the spot, however much you may have anticipated it in prior training. (p. 13)

So be careful. It’s dangerous out there.

MODULE

64 Personal Reflections on Interviewing, and Chapter Summary and Conclusion

The word interview has roots in Old French and meant something like “to see one another.”

—Narayan and George (2003, p. 449)

Personal Reflections on Interviewing

Though there are dangers, there are also rewards.

I find interviewing people invigorating and stimulating—the opportunity for a short period of time to enter another person’s world. If participant observation means “walk a mile in my shoes,” in-depth interviewing means “walk a mile in my head.” New worlds are opened up to the interviewer on these journeys.

I’m personally convinced that to be a good interviewer you must like doing it. This means being interested in what people have to say. You must yourself believe that the thoughts and experiences of the people being interviewed are worth knowing. In short, you must have the utmost respect for these persons who are willing to share with you some of their time to help you understand their world. There is a Sufi story that describes what happens when the interviewer loses this basic sensitivity to and respect for the person being interviewed.

An Interview With the King of the Monkeys

A man once spent years learning the language of monkeys so that he could personally interview the king of monkeys. Having completed his studies, he set out on his interviewing adventure. In the course of searching for the king, he talked with a number of monkey underlings. He found that the monkeys he spoke to were generally, to his mind, neither very interesting nor very clever. He began to doubt whether he could learn very much from the king of the monkeys after all.

Finally, he located the king and arranged for an interview. Because of his doubts, however, he decided to begin with a few basic questions before moving on to the deeper meaning-of-life questions that had become his obsession.

“What is a tree?” he asked.

“It is what it is,” replied the king of the monkeys. “We swing through trees to move through the jungle.”

“And what is the purpose of the banana?”

SIDEBAR

THE PERSONAL EXPERIENCE OF QUALITATIVE INQUIRY

Question. What’s the most memorable or meaningful evaluation that you have been a part of—and why?

I worked briefly on an evaluation of a rail safety intervention program aimed at reducing safety-related incident rates through peer-to-peer observation (Zuschlag, Ranney, Coplen, & Harnar, 2012). To collect some qualitative insights, I was sent to San Antonio, where I got a full tour, a ride aboard a locomotive, and the opportunity to talk with some engineers and conductors about changes they saw in safety related to the program. One of their stories, in particular, touches me every time I tell it.

A trained safety observer noticed an older engineer not wearing any hearing protection, even though he was right alongside a locomotive. The observer said something to the effect of “Don’t you want to hear your granddaughter’s voice when she talks to you?” This relatively casual but purposeful effort had a tremendous impact. After that, every time the “old-timer” saw our observer he would point to his hearing protection to say, “Look, I’m wearing them!”

In my youth, I spent long days playing in the dead yards of the Reading Railroad (yes, the one from the Monopoly game, pronounced Redding), and I never dreamed of someday riding the tracks as a researcher and learning about that world through the eyes of its practitioners. Every time I recall the story it takes me back to that day in the locomotive and the hours I spent talking with those engineers and conductors, and it reminds me that one conversation can change a person’s world . . . and maybe let him hear his granddaughter’s voice in his old age.

—Michael A. Harnar, Mosaic Network, Inc.

Senior Associate, Research and Evaluation

American Evaluation Association Newsletter, November, 2012

“Purpose? Why, to eat.”

“How do animals find pleasure?”

“By doing things they enjoy.”

At this point, the man decided that the king’s responses were rather shallow and uninteresting, and he went on his way, crushed and cynical. Soon afterward, an owl flew into the tree next to the king of the monkeys. “What was that man doing here?” the owl asked.

“Oh, he was only another silly human,” said the king of the monkeys. “He asked a bunch of simple and meaningless questions, so I gave him simple and meaningless answers.”

Not all interviews are interesting, and not all interviews go well. Certainly, there are uncooperative respondents, people who are paranoid, respondents who seem overly sensitive and easily embarrassed, aggressive and hostile interviewees, timid people, and the endlessly verbose, who go on at great length about very little. When an interview is going badly, it is easy to call forth one of these stereotypes to explain how the interviewee is ruining the interview. Such blaming of the victim (the interviewee), however, does little to improve the quality of the data. Nor does it improve interviewing skills.

I prefer to believe that there is a way to unlock the internal perspective of every interviewee. My challenge and responsibility as an interviewer involve finding the appropriate and effective interviewing style and question format for a particular respondent. It is my responsibility as the interviewer to establish an interview climate that facilitates open responses. When an interview goes badly, as it sometimes does, even after all these years, I look first at my own shortcomings and miscalculations, not the shortcomings of the interviewee. That’s how, over the years, I’ve gotten better and come to value reflexivity, not just as an intellectual concept but also as a personal and professional commitment to learning and engaging people with respect.

Overview and Conclusion

Interviewing has become central in our modern knowledge-age Interview Society. This chapter began by discussing different approaches to interviewing for different purposes: journalism, therapeutic interviews, forensic investigations, celebrity interviews, and personnel interviewing, among others (see Exhibit 7.1, pp. 423–424). We examined how interviews are used differently within diverse theoretical and methodological traditions: ethnography, phenomenology, social constructionism, hermeneutics, and other frameworks for inquiry (see Exhibit 7.3, pp. 432–436).

A major focus of this chapter has been that obtaining high-quality data from interviews requires skilled interviewing. Exhibit 7.2 (p. 428) presented 10 core interviewing skills. We then moved to different types of interview formats: standardized questions, interview guides, and conversational interviewing (see Exhibit 7.4, pp. 437–438). We looked at how to phrase questions and probe responses. A point of emphasis was the importance of anticipating analysis and reporting to organize, sequence, and format interviews (see Exhibit 7.6, p. 443).

Every interview is also an observation, a two-way interaction, and, therefore, a relationship. Exhibit 7.12 (pp. 462–463) summarized six distinct approaches to undertaking interviewing as a relationship. We also compared individual interviews with group interviews, including focus groups and 11 other types of group interviews (see Exhibit 7.14, pp. 475–476).

Cross-cultural interviewing and qualitative fieldwork present special challenges (see Exhibit 7.15, p. 482). Special target populations pose particular challenges in qualitative inquiries: interviewing children, older people, elites, and marginalized people or conducting online interviews (see Exhibit 7.17, pp. 493–494).

The interpersonal nature of in-depth qualitative interviewing raises special ethical dilemmas and concerns (see Exhibit 7.18, pp. 496–497). The people interviewed can be affected by the inquiry, but so can the person conducting the interviews. Nora Murphy (2014) interviewed homeless youth for her dissertation.

This also took a personal toll. At times, the harsh reality of the experiences the youths have gone through had me in tears at home. In fact, in some interviews, the youth and I cried together. The stories are so powerful because they are real and true, and sharing these truths means opening wounds. I felt guilty that I could not help them more and feared exploiting them. Sometimes I emailed my advisors just to vent about how difficult it was; playing the role of an objective researcher who could not step in to help felt unnatural and uncomfortable. It helped when the youth thanked me for listening and glowed when I read their stories back to them. They often said things like, “You made connections about me that I didn’t realize . . . No one else in my life knows all of this . . . I want to share this with my brother. It will help me explain things to him that I’ve never been able to say before . . . I want to make this into a book.” I left each interview with the feeling that the youths appreciated being listened to, that they felt gratitude toward me and the process. I still harbor guilt that I could not do more for each of them, but I also believe the process, overall, has done more good than harm. I still carry some of the sadness I experienced as I sat with the youth and listened to their tales of their difficult journeys, but I also carry a hope inspired by their optimism and their strength. (p. 428)

While this chapter has emphasized skill and technique as ways of enhancing the quality of interview data, no less important is a genuine interest in and caring about the perspectives of other people. If what people have to say about their world is generally boring to you, then you will never be a great interviewer. Unless you are fascinated by the rich variation in human experience, qualitative interviewing will become drudgery. On the other hand, a deep and genuine interest in learning about people is insufficient without disciplined and rigorous inquiry based on skill, technique, and a deep capacity for empathic understanding.

Conclusion: Halcolm on Interviewing

Ask.

Listen and record.

Ask.

Listen and record.

Asking involves a grave responsibility.

Listening is a privilege.

Researchers, listen and observe. Remember that your questions will be studied by those you study. Evaluators, listen and observe. Remember that you shall be evaluated by your questions.

To ask is to seek entry into another’s world. Therefore, ask respectfully and with sincerity. Do not waste questions on trivia and tricks, for the value of the answering gift you receive will be a reflection of the value of your question.

Blessed are the skilled questioners, for they shall be given mountains of words to ascend.

Blessed are the wise questioners, for they shall unlock hidden corridors of knowledge.

Blessed are the listening questioners, for they shall gain perspective. —From Halcolm’s Beatitudes

Looking Forward

Part I of this book presented the niche of qualitative methods in studying the world (Chapter 1), presented 12 fundamental qualitative strategies (Chapter 2), reviewed major theoretical traditions that inform diverse approaches to qualitative inquiry (Chapter 3), and examined practical applications for qualitative methods (Chapter 4). Part II presented qualitative design options (Chapter 5), fieldwork approaches (Chapter 6), and interview methods (Chapter 7). We now turn to Part III: analyzing and reporting qualitative findings (Chapter 8) and ways of enhancing the credibility and utility of qualitative results (Chapter 9). As we make the transition from Part II to Part III, from ways of gathering data to ways of making sense of the data gathered, the Halcolm story that ends this chapter reminds us that things are not always what they seem. The story also reminds us of the centrality of Thomas’s theorem for the interpretation of qualitative data: What is perceived as real is real in its consequences.

EXHIBIT 7.19 Examples of Standardized Open-Ended Interviews

The edited interviews below were used in the evaluation of an Outward Bound program for the disabled. Outward Bound is an organization that uses the wilderness as an experiential education medium. This particular program consisted of a 10-day experience in the Boundary Waters Canoe Area of Minnesota. The group consisted of half able-bodied participants and half disabled participants, including paraplegics; persons with cerebral palsy, epilepsy, or other developmental disabilities; blind and deaf participants; and, on one occasion, a quadriplegic. The first interview was conducted at the beginning of the program; the second interview was used at the end of the 10-day experience; and the third interview took place six months later. To save space, many of the probes and elaboration questions have been deleted, and space for writing notes has been eliminated. The overall thrust and format of the interviews have, however, been retained.

Precourse Interview: Minnesota Outward Bound School Course for the Able-Bodied and the Disabled

This interview is being conducted before the course as part of an evaluation process to help us plan future courses. You have received a consent form to sign, which indicates your consent to this interview. The interview will be recorded.

1. First, we’d be interested in knowing how you became involved in the course. How did you find out about it?

a. What about the course appealed to you?

b. What previous experiences have you had in the outdoors?

2. Some people have difficulty deciding to participate in an Outward Bound course, and others decide fairly easily. What kind of decision process did you go through in thinking about whether or not to participate?

a. What particular things were you concerned about?

b. What is happening in your life right now that stimulated your decision to take the course?

3. Now that you’ve made the decision to go on the course, how do you feel about it?

a. How would you describe your feelings right now?

b. What lingering doubts or concerns do you have?

4. What are your expectations about how the course will affect you personally?

a. What changes in yourself do you hope will result from the experience?

b. What do you hope to get out of the experience?

5. During the course you’ll be with the same group of people for an extended period of time. What feelings do you have about being part of a group like that for nine full days?

a. Based on your past experience with groups, how do you see yourself fitting into your group at Outward Bound?

For the Disabled

6. One of the things we’re interested in understanding better as a result of these courses is the everyday experience of disabled people. Some of the things we are interested in are as follows:

a. How does your disability affect the types of activities you engage in?

b. What are the things that you don’t do that you wish you could do?

c. How does your disability affect the kinds of people you associate with? Clarification: Some people find that their disability means that they associate mainly with other disabled persons. Others find that their disability does not affect their contacts with people. What has your experience been along these lines?

d. Sometimes people with disabilities find that their participation in groups is limited. What has been your experience in this regard?

For the Able-Bodied

7. One of the things we’re interested in understanding better as a result of these courses is feelings that able-bodied people have about being with disabled folks. What kinds of experiences with disabled people have you had in the past?

a. What do you personally feel you get out of working with disabled people?

b. In what ways do you find yourself being different from your usual self when you’re with disabled people?

c. What role do you expect to play with disabled people on the Outward Bound course? Clarification: Are there any particular things you expect to have to do?

d. As you think about your participation in this course, what particular feelings do you have about being part of an outdoor course with disabled people?

8. About half of the participants on the course are disabled people, and about half are people without disabilities. How would you expect your relationship with the disabled people to be different from your relationship with course participants who are not disabled?

9. We’d like to know something about how you typically face new situations. Some people kind of like to jump into new situations, whether or not some risk is involved. Other people are more cautious about entering situations until they know more about them. Between these two, how would you describe yourself?

10. Okay, you’ve been very helpful. Are there other thoughts or feelings you’d like to share with us to help us understand how you’re seeing the course right now? Anything at all you’d like to add?

Postcourse Interview

We’re conducting this interview right at the end of your course with Minnesota Outward Bound. We hope this will help us better understand what you’ve experienced so that we can improve future courses. You have signed a form giving your consent for material from this interview to be used in a written evaluation of the course. This interview is being tape-recorded.

1. To what extent was the course what you expected it to be?

a. How was it different from what you expected?

b. To what extent did the things you were concerned about before the course come true?

b-1. Which things came true?

b-2. Which didn’t come true?

2. How did the course affect you personally?

a. What changes in yourself do you see or feel as a result of the course?

b. What would you say you got out of the experience?

3. During the past nine days, you’ve been with the same group of people constantly. What kind of feelings do you have about having been a part of the same group for that time?

a. What feelings do you have about the group?

b. What role do you feel you played in the group?

c. How was your experience with this group different from your experiences with other groups?

d. How did the group affect you?

e. How did you affect the group?

f. In what ways did you relate differently to the able-bodied and disabled people in your group?

4. What is it about the course that makes it have the effects it has? What happens on the course that makes a difference?

a. What do you see as the important parts of the course that make an Outward Bound course what it is?

b. What was the high point of the course for you?

c. What was the low point?

5. How do you think this course will affect you when you return to your home?

a. Which of the things you experienced this week will carry over to your normal life?

b. What plans do you have to change anything or do anything differently as a result of this course?

For the Disabled

6. We asked you before the course about your experience of being disabled. What are your feelings about what it’s like to be disabled now?

a. How did your disability affect the type of activities you engaged in on the course? Clarification: What things didn’t you do because of your disability?

b. How was your participation in the group affected by your disability?

For the Able-Bodied

7. We asked you before the course your feelings about being with disabled people. As a result of the experiences of the past nine days, how have your feelings about disabled people changed?

a. How have your feelings about yourself in relation to disabled persons changed?

b. What did you personally get out of being/working with disabled people on this course?

c. What role did you play with the disabled people?

d. How was this role different from the role you usually play with disabled people?

8. Before the course, we asked you how you typically faced a variety of new situations. During the past nine days, you have faced a variety of new situations. How would you describe yourself in terms of how you approached these new experiences?

a. How was this different from the way you usually approach things?

b. How do you think this experience will affect how you approach new situations in the future?

9. Suppose you were being asked by a government agency whether or not they should sponsor a course like this. What would you say?

a. What arguments would you give to support your opinion?

10. Okay, you’ve been very helpful. We’d be very interested in any other feelings and thoughts you’d like to share with us to help us understand your experience of the course and how it affected you.

Six-Month Follow-Up Interview

This interview is being conducted about six months after your Outward Bound course to help us better understand what participants experience so that we can improve future courses.

1. Looking back on your Outward Bound experience, I’d like to ask you to begin by describing for me what you see as the main components of the course. What makes an Outward Bound course what it is?

a. What do you remember as the highlight of the course for you?

b. What was the low point?

2. How did the course affect you personally?

a. What kinds of changes in yourself do you see or feel as a result of your participation in the course?

b. What would you say you got out of the experience?

3. For nine days, you were with the same group of people. How has your experience with the Outward Bound group affected your involvement with groups since then?

For the Disabled

(*Check previous responses before the interview. If the person’s attitude appears to have changed, ask if he or she perceives a change in attitude.)

4. We asked you before the course to tell us what it’s like to be disabled. What are your feelings about what it’s like to be disabled now?

a. How does your disability affect the types of activities you engage in? Clarification: What are some of the things you don’t do because you’re disabled?

b. How does your disability affect the kinds of people you associate with? Clarification: Some people find that their disability means that they associate mainly with other disabled persons. Other people with disabilities find that their disability in no way limits their contacts with people. What has been your experience?

c. As a result of your participation in Outward Bound, how do you believe you’ve changed the way you handle your disability?

For the Able-Bodied

5. We asked you before the course to tell us what it’s like to work with the disabled. What are your feelings about what it’s like to work with the disabled now?

a. What do you personally feel you get out of working with disabled persons?

b. In what ways do you find yourself being different from your usual self when you are with disabled people?

c. As you think about your participation in the course, what particular feelings do you have about having been part of a course with disabled people?

6. About half of the people on the course were disabled people, and about half were people without disabilities. To what extent did you find yourself acting differently with disabled people compared with the way you acted with able-bodied participants?

7. Before this course, we asked you how you typically face new situations. For example, some people kind of like to jump into new situations even if some risks are involved. Other people are more cautious, and so on. How would you describe yourself along these lines right now?


EXHIBIT 8.21 Immigration Roadmap Into the United States

Based on interviews, document analysis, case studies, and key informants’ expertise, this diagram depicts the barriers to foreigners getting a green card in the United States as of 2009. Describing (in words) what is on this one-page diagram would take 30 to 50 pages. The daunting nature of the visual and the seven “Sorry” hexagons that represent failure give a visceral feel to what it would be like to be caught somewhere in this maze.

SOURCE: ImmigrationRoad.com (2013). Used with permission.

EXHIBIT 8.22 Interpersonal Systems

In a program for low-income, drug-addicted, pregnant teenagers, each young woman was asked to draw a map of her closest relationships as they were at the start of the program (baseline)—people who could support her during her pregnancy and after her baby was born. The program aimed to strengthen that network of support, in keeping with the mantra that it takes a village. The interpersonal maps show the diversity of relationships these young women had, including one who was completely isolated and alone (Map 3). The program used the maps to help participants gauge the strength of their interpersonal support systems. The maps were used in the evaluation to graphically depict change, accompanied by a case study of each young woman.


SOURCE: McDonnell, Tran, and McCoy (2010).

EXHIBIT 8.23 Mountain of Accountability

I have worked with the Blandin Foundation in Grand Rapids, Minnesota, on and off over more than 30 years. In 2013, the senior staff undertook a strategic reflective practice exercise that included examining and making sense of the many different kinds of evaluation going on at the foundation. A lot was happening, but the pieces didn’t seem to fit together. As we examined the various elements and what each did, we looked for a metaphor or graphic depiction that would bring order and cohesion to the messy, multifaceted evaluation functions. What emerged was a Mountain of Accountability, based on Maslow’s hierarchy framework, with basic necessities at the bottom and organizational excellence at the top. Information and patterns from lower levels inform higher levels, and understandings, lessons, and insights from higher levels flow back down to support lower levels and facilitate adaptation and ongoing learning.

(For the full explanation of the Mountain of Accountability, see the website http://blandinfoundation.org/who-we-are/accountability.php.)


EXHIBIT 8.24 Depicting Interconnected Factors


In the discussion on causal attribution, I featured the example of how the GEM was used to determine how an advocacy campaign might have contributed to and influenced a U.S. Supreme Court decision. The preponderance of evidence pointed to some influence (see p. 595). Based on our synthesis of the findings, we generated a systems model depicting the interdependent elements of an integrated approach to policy reform that makes coalition building a centerpiece strategy. The generic model consists of six factors that, together, contribute to strengthening policy reform:

1. Strong, high-capacity coalitions: Working through coalitions is a common centerpiece of advocacy strategy.

2. Strong national–state grassroots coordination: Effective policy change coalitions in the United States have to be able to work bottom up and top down, with national campaigns supporting and coordinating state and grassroots efforts while state efforts infuse national campaigns with local knowledge and grassroots energy. Strengthening national–state coordination is part of coalition development and field building.

3. Disciplined and focused messages with effective communication: Effective communication must occur within movements (message discipline) and to target audiences (focused messaging). Strengthening communication has been a key component of advocacy coalition building.

4. Solid research and knowledge base: The content of effective messages must be based on solid research and timely knowledge. In the knowledge age, policy coalitions must be able to marry their values with relevant research and real-time data about the dynamic policy environment.

5. Timely, opportunistic lobbying and judicial engagement: The evaluation findings emphasize that effective lobbying requires connections, skill, flexibility, coordination, and strategy.

6. Collaborating funders engaged in strategic funding: Effective funding involves not only financial support but also infusion of expertise and strategy as part of field building.

Overall Lesson for Effective Advocacy

In essence, strong national–state–grassroots coordination depends on having a high-capacity coalition. A solid knowledge and research base contributes to a focused message and effective communication. Message discipline depends on a strong coalition and national–state coordination, as does timely and opportunistic lobbying and judicial engagement. To build and sustain a high-capacity coalition, funders must use their resources and knowledge to collaborate around shared strategies. In combination and through mutual reinforcement, these factors strengthen advocacy efforts. In classic systems framing, the whole is greater than the sum of the parts, and the optimal functioning of each part depends on the integrated functioning of the whole.

We needed a graphic that conveyed the dynamic nature of the relationships among these factors, with the interconnections looking like a fluid spider web.

The interdependent system of factors that contribute to effective advocacy and change


EXHIBIT 8.25 Depicting Organizational Tensions

An organizational development study revealed a number of fundamental tensions. The study was based on interviews, focus groups, extensive review of documents, and observations of meetings, operational implementation, and program delivery sessions. To report the findings, we created a set of graphic representations highlighting the tensions. Two ovals represent the two arenas of action that are in tension (e.g., strategy vs. execution). Within each oval are listed the key elements that define that dimension (i.e., key elements of strategy in one oval and key elements of execution in the other). Connecting the two ovals is a large arrow that shows the two dimensions as interrelated. Highlighted between the two ovals are some of the principal tensions we identified between them.


EXHIBIT 8.26 Depicting Evaluation Tensions

No tension more deeply permeates evaluation than the admonition that evaluators should work closely with stakeholders to enhance mutual understanding, increase relevance, and facilitate use while maintaining independence to ensure credibility. Managing this tension was at the center of the way the evaluation of the Paris Declaration on Development Aid was structured, administered, and governed.

• At the country level, the national reference group in each country selected the evaluation team, based on published terms of reference and competitive processes; country evaluation teams were to operate independently in carrying out the evaluation.

• At the international level, the secretariat administered the evaluation, with an evaluation management group that selected the core evaluation team and provided oversight and guidance, while the core evaluation team conducted the evaluation independently.

• The international reference group of country representatives, donor members, and international organization participants provided a stakeholder forum for engaging with the core evaluation team and the evaluation’s draft findings.

Thus, at every level, the evaluation was structured to separate evaluation functions from political and management functions, while also providing forums and processes for meaningful interaction. Such structures and processes are fraught with tensions and risks. Yet, in the end, the credibility and integrity of the evaluation depended on doing both well: engaging key stakeholders to ensure relevance and buy-in while maintaining independence to ensure the credibility of findings. In our report, we created a graphic to depict this tension (Patton & Gornick, 2011a, p. 37).

6/2/2018 Bookshelf Online: Qualitative Research & Evaluation Methods: Integrating Theory and Practice

https://online.vitalsource.com/#/books/9781483314815/cfi/6/42!/4/2/4/2@0:0 9/37

The Paris Declaration evaluation team committed to a bottom-up design with country evaluations and donor studies feeding into a final synthesis of findings. But to make the synthesis manageable and facilitate aggregation of findings, the evaluation approach included building overarching, common elements:

• A common evaluation framework

• Common core questions

• A standardized data collection matrix

• Standardized scales for rendering judgments

• Standardized analytical categories for describing and interpreting progress: direction of travel, distance traveled, and pace

• A common glossary of terms

So the evaluation had a top-down, standardized framework to facilitate synthesis. The joint, collaborative, and participatory nature of the evaluation meant that these bottom-up and top-down processes had to be carefully managed. We found some confusion and tension about the relationship between country evaluations and the overall global synthesis, so we created a graphic to summarize, highlight, and depict the tensions that emerged in the data (Patton & Gornick, 2011a, p. 51).
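To make the mechanics of such a standardized framework concrete, here is a minimal sketch in Python (illustrative questions, countries, and ratings only; not the actual Paris Declaration evaluation instrument) of how common core questions and standardized scales let country-level judgments be aggregated into a synthesis:

```python
from collections import Counter

# Illustrative core questions and rating scale; the actual Paris
# Declaration evaluation used its own instruments and categories.
CORE_QUESTIONS = ["ownership", "alignment", "harmonisation"]
SCALE = ["none", "little", "some", "substantial"]  # "distance traveled"

# Each country evaluation fills in the same matrix, enabling synthesis.
country_matrix = {
    "Country A": {"ownership": "substantial", "alignment": "some", "harmonisation": "little"},
    "Country B": {"ownership": "some", "alignment": "some", "harmonisation": "some"},
}

# Aggregate the standardized judgments across countries for each question.
synthesis = {
    question: Counter(ratings[question] for ratings in country_matrix.values())
    for question in CORE_QUESTIONS
}

for question, tally in synthesis.items():
    print(question, dict(tally))
```

The point of the sketch is the design principle: because every country evaluation answers the same questions on the same scales, bottom-up findings can be synthesized top-down without forcing the country narratives themselves into a single mold.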

Strengths and Weaknesses of Visualization

The visualization examples I’ve shared are not meant to stand alone. They supplement and focus qualitative reporting but are not a substitute for narrative. Exhibit 8.27 concludes this section with a graphic overview of strengths and weaknesses in visualizing qualitative data and findings.

EXHIBIT 8.27 Visualization of Qualitative Data and Findings: Strengths and Weaknesses


Resources for Qualitative Data Visualization

• Azzam and Evergreen (2013a, 2013b). Data Visualization

• Ball and Smith (1992). Analyzing Visual Data

• Banks (2007). Using Visual Data in Qualitative Research

• Bryson, Ackermann, and Eden (2014). Visual Strategy: A Workbook for Strategy Mapping for Public and Nonprofit Organizations

• Bryson, Ackermann, Eden, and Finn (2004). Visible Thinking

• Davis (2014). Conversations About Qualitative Communication Research

• Evergreen (2012). “Potent Presentations”

• Heath et al. (2010). Video in Qualitative Research: Analyzing Social Interaction in Everyday Life

• Henderson and Segal (2013). Visualizing Qualitative Data in Evaluation

• Kistler (2014). “Innovative Reporting”

• Margolis and Pauwels (2011). The SAGE Handbook of Visual Research Methods

• Mathison (2009). “Seeing Is Believing: The Credibility of Image-Based Research and Evaluation”

• Miles et al. (2014). Qualitative Data Analysis: A Methods Sourcebook


SOURCE: Brazilian cartoonist Claudius Ceccon. Used with permission.


MODULE 74

Special Analysis and Reporting Issues: Mixed Methods, Focused Communication, Principles-Focused Report Exemplar, and Creativity

For some critics, qualitative research has been getting better and more inventive. Even such a demanding veteran as Van Maanen suggests that ethnographers, especially those in the vanguard of the new ethnographic genres, are “learning to write better, less soothing, more faithful and ultimately more truthful accounts of their fellow human beings than ever before.” And Denzin sees methodological blossoming in the cross-fertilization of new “epistemologies . . . and new genres.”

—Huberman and Miles (2002, pp. ix–x) The Qualitative Researcher’s Companion

Integrating Qualitative and Quantitative Data

Mixed methods researchers are extending our understandings of how to understand complex social phenomena, as well as how to use research to develop effective interventions to address complex social problems.

—Donna M. Mertens (2013) Editor, Journal of Mixed Methods Research

“Emerging Advances in Mixed Methods: Addressing Social Justice”

Exhibit 5.8, on purposeful sampling (pp. 266–272), includes several mixed-methods strategies for picking cases. Here, again, design frames analysis. Thus, a mixed-methods design anticipates a mixed-methods analysis. For example, a random, representative sample of recent immigrants in Minnesota was surveyed to determine their priority needs. A small number of respondents in different categories of need were then interviewed to understand in depth their situations and priorities and to make the statistical findings more personal through stories. The qualitative sample was also used to validate the accuracy and meaningfulness of the survey responses.
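Here is a minimal sketch in Python of the sampling logic in such a sequential design (hypothetical field names and data, not the actual Minnesota study): survey respondents are grouped by priority-need category, and a small purposeful subsample is drawn from each category for in-depth follow-up interviews.

```python
import random

# Hypothetical survey records: each respondent has a priority-need
# category derived from the quantitative survey (illustrative data only).
survey = [
    {"id": 1, "need": "housing", "score": 4.2},
    {"id": 2, "need": "language", "score": 3.1},
    {"id": 3, "need": "housing", "score": 4.8},
    {"id": 4, "need": "employment", "score": 2.5},
    # ... the full representative sample
]

def sample_for_interviews(records, per_category=2, seed=42):
    """Draw a small purposeful subsample in each need category
    for in-depth follow-up interviews."""
    random.seed(seed)
    by_category = {}
    for record in records:
        by_category.setdefault(record["need"], []).append(record)
    return {
        need: random.sample(group, min(per_category, len(group)))
        for need, group in by_category.items()
    }

interviewees = sample_for_interviews(survey)
```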

Mixed methods have increased in both importance and credibility as the strengths and weaknesses of both quantitative and qualitative approaches have become more evident. Quantitative data are strong in measuring how widespread a phenomenon is. Qualitative methods are strong in explaining what the phenomenon means. Combining methods would seem straightforward. It is not. Often, mixed-methods reports are written in two sections, a qualitative section and a quantitative section, and the two self-contained studies never engage each other. The two reports, often written by two different people, are like toddlers engaging in parallel play in a sandbox. They’re old enough to recognize each other but not developed enough to interact. Ideally, however, studies are designed to be genuinely mixed (valuing both kinds of data) and, because the design makes it possible, the qualitative and quantitative data are integrated in analysis and reporting (Bergman, 2008; Greene, 2007; Mertens, 2013; Morgan, 2014). Exhibit 8.28 on pages 622–623 reviews challenges and solutions in mixed-methods analysis and reporting. But, first, to get into an appreciative mind-set for integrating qualitative and quantitative data, consider this intriguing example from the reflections of Abd Al-Rahman III, an emir and caliph of Córdoba in 10th-century Spain. He was an absolute ruler who lived in complete luxury. Here’s how he assessed his life:


I have now reigned above 50 years in victory or peace; beloved by my subjects, dreaded by my enemies, and respected by my allies. Riches and honors, power and pleasure, have waited on my call, nor does any earthly blessing appear to have been wanting to my felicity. I have diligently numbered the days of pure and genuine happiness which have fallen to my lot: They amount to 14. (Quoted by Brooks, 2014, p. SR1)

SIDEBAR

INSIGHTS FROM AN EXPERIENCED MIXED-METHODS EVALUATOR

Quantitative evidence is the bones; qualitative evidence is the flesh; and evaluative reasoning is the vital organs. If you are missing any of these, you don’t have the full evaluative picture.

I frame all my evaluation work around asking and answering important evaluative questions. These are the “what are we trying to find out from this whole evaluation?” questions (not the interview questions, survey questions, etc., which are much lower level).

I’ve yet to conduct an evaluation that didn’t require some fairly substantial qualitative evidence to convincingly support the conclusions (answers to these high-level questions). In most cases I use mixed methods, but I have done a few evaluations that are 99–100% qualitative.

But more important than that, I use evaluation-specific methods. These are the ones that answer not just what’s happening and what’s changed, but the really tricky questions like whether the change is big enough, fast enough, worth the time, money and effort invested.

I also use qualitative methods a lot for causal inference. It’s a rare case where quantitative data alone makes a convincing case for a causal contribution, and qualitative evidence can be used to substantially strengthen the case—and in some cases is enough on its own.

—Jane Davidson Evaluation Methodology Basics: The Nuts and Bolts of Sound Evaluation (2005)

Actionable Evaluation Basics: Getting Succinct Answers to the Most Important Questions (2012)

Cofounder of Genuine Evaluation blog (http://genuineevaluation.com/)

Focused Communications: Audience-Sensitive Presentations and Reports

I emphasized earlier that reports should be written to communicate what you want to say to specific audiences. The particular challenge of qualitative reporting is reducing the sheer volume of data into digestible morsels. Book-length monographs may be the only way to do justice to the rich descriptions and detail that undergird long-term fieldwork and the in-depth case studies that in-depth interviews yield. But even if you get the opportunity to do a book, write a dissertation, or publish a full monograph, opportunities will arise for shorter and more focused articles and presentations. In such cases, focus is critical.

Even a comprehensive report will have to omit a great deal of information collected in a qualitative inquiry. If you try to include everything, you risk losing your readers or audience members in the sheer volume of the presentation. To enhance a report’s coherence or a presentation’s impact, follow the adage that less is more. This translates into covering a few key findings or conclusions well rather than lots of them poorly.

An evaluation report, for example, should clearly address each major evaluation question. The extent to which minor questions are addressed is a matter of negotiation with primary intended users. Sometimes, answers to minor questions are provided orally or in an appendix to keep the focus on major questions. For each major question, present the descriptive findings, analysis, and interpretation succinctly. An evaluation report should be readable, understandable, and relatively free of academic jargon. The data should impress the reader beyond the academic credentials of the evaluator.


The advice I find myself repeating most often to students when they are writing articles or preparing presentations is FOCUS! FOCUS! FOCUS! The agony of omitting on the part of the qualitative researcher or evaluator is matched only by the readers’ or listeners’ agony in having to read or hear those things that were not omitted but should have been. Exhibit 8.29 presents an image of audience- and utilization-focused reporting.

EXHIBIT 8.28 Mixed-Methods Challenges and Solutions

The column on the left presents common challenges in mixing methods (qualitative and quantitative) during analysis and reporting. As summary items in tabular format, these are admittedly overstated and oversimplified contrasts. Those who are sophisticated about mixing methods will certainly find the contrasts exaggerated. That said, the literature on mixed methods (e.g., the Journal of Mixed Methods Research) and my own experience facilitating integration of different types of data lead me to emphasize the contrasts to highlight the potential and actual challenges of integration. Integrating different kinds of data is neither easy nor straightforward. That’s the core message of this exhibit. The column on the right, therefore, offers integrative strategies and solutions.


Connecting With the Audience

An Evaluative Example Using Indigenous Typologies

Earlier in this chapter (pp. 546–548), I discussed analyzing findings through the framework of an indigenous typology—that is, using the language, concepts, and categories of the people studied to organize the data and make sense of the findings. Such an indigenous typology can also serve as an organizing framework for reports and presentations. Here’s an example.


SIDEBAR

COLLABORATIVE OUTCOMES REPORTING

Collaborative Outcomes Reporting is a participatory approach to impact evaluation. Multiple kinds of evidence of outcomes are pulled together and presented as a “performance story.” To ensure the validity and credibility of the performance story, it is independently reviewed by both technical experts and knowledgeable stakeholders, which may include both program participants and community members. A collaborative approach to developing the performance story and review by independent experts are the distinctive elements of this approach (Dart & Andrews, 2014).

Collaborative Outcomes Report Structure

The report explores and documents the extent to which a program has contributed to outcomes.

In Collaborative Outcomes Reporting, reports are short and generally structured in terms of the following sections:

1. A narrative section explaining the program context and rationale

2. A “results chart” summarizing the achievements of the program against a program logic model

3. A narrative section describing the implications of the results, for example, the achievements (expected and unexpected), the issues, and the recommendations

4. A section that provides a number of “vignettes” providing instances of significant change, usually first-person narratives

5. An index providing more detail on the sources of evidence

COR [Collaborative Outcomes Reporting] is based on the premise that the values of stakeholders, program staff, and key stakeholders are of highest importance in an evaluation. The evaluators attempt to “bracket off” their opinions and instead present a series of data summaries to panel and summit participants for them to analyze and interpret. Values are surfaced and debated throughout the process. Participants debate the value and significance of data sources and come to agreement on the key findings of the evaluation. In addition, qualitative inquiry is used to capture unexpected outcomes, and deliberative processes are used to make sense of the findings. (Dart & Johnson, 2014)

EXHIBIT 8.29 Audience- and Utilization-Focused Reporting


Our evaluation of a rural leadership program included participant observation. The program involved a six-day residential retreat experience. After six days and evenings of intense (and sometimes tense) participation, observation, interviewing participants, and taking extensive field notes, our team of three evaluators needed a framework for providing feedback to program staff about what we were finding in a way that could be heard. We knew that the staff were heavily ego involved in the program and would be very sensitive to an approach that might appear to substitute our concept of the program for theirs. Yet a major purpose of the evaluation was to help them identify and make explicit their operating assumptions as evidenced in what actually happened during the six-day retreat.

As our team of three accumulated more and more data, debriefing each night what we were finding, we became increasingly worried about how to focus feedback. The problem was solved the fifth night when we realized that we could use their frameworks for describing to them what we were finding. For example, a major component of the program was having participants work with the Myers-Briggs Type Indicator, an instrument that measures individual personality type based on the work of Carl Jung (see Berens & Nardi, 1999; Hirsh & Kummerow, 1987; Kroeger & Thuesen, 1988; Myers & Meyers, 1995). The Myers-Briggs Type Indicator gives individuals scores on four bipolar scales: extraversion–introversion, sensing–intuition, thinking–feeling, and judging–perceiving.

In the feedback session, we began by asking the six staff members to characterize the overall retreat culture using the Myers-Briggs framework. The staff shared their separate ratings, on which there was no consensus, and then we shared our perceptions. We spent the whole morning discussing the database for and implications of each scale as a manifestation of the program’s culture. We ended the session by discussing where the staff wanted the program to be on each dimension. Staff were able to hear what we said without becoming defensive because we used their own framework, one that they, when giving individual feedback to participants, described as nonjudgmental, facilitative, and developmental. Thus, they could experience our feedback as, likewise, nonjudgmental, facilitative, and developmental.

We formatted our presentation to staff using a distinction between “observations” and “perceived impacts” that program participants were taught as part of the leadership training. Observation: “You interrupted me in mid-sentence.” Perceived impact: “I felt cut off and didn’t contribute after that.” This simple distinction, aimed at enhancing interpersonal communication, served as a comfortable, familiar format for program staff to receive formative evaluation feedback. Our reporting, then, followed this format. Three of the 20 observations from the report are reproduced in Exhibit 8.30.

EXHIBIT 8.30 Example of Reporting Feedback to Program Staff: Distinguishing Observations From Perceived Impacts Based on Their Indigenous Framework for Working With Participants in the Leadership Program


The critical point here is that we presented the findings using their categories and their frameworks. This greatly facilitated the feedback and enhanced the subsequent formative, developmental discussions. Capturing and using indigenous typologies can be a powerful analytical approach for making sense of and reporting qualitative data in evaluations.

The next example provides a quite different reporting format and illustrates a collaborative analysis and reporting effort.

Collaborative Reporting Example

A Principles-Focused Evaluation Report: An Example of the Flow of an Inquiry From Design to Data Collection and Analysis to Reporting

In 2012, six agencies serving homeless youth in Minneapolis and Saint Paul, Minnesota, began collaborating to evaluate the implementation and effectiveness of their shared principles. They first identified shared values and common principles and found that their work was undergirded and informed by eight principles. At the outset, these principles were just sensitizing concepts: labels and phrases without definition. Where a descriptive definition did exist, it varied among the agencies.

1. Trauma-informed care

2. Nonjudgmental engagement

3. Harm reduction

4. Trusting youth–adult relationships

5. Strengths-based approach

6. Positive youth development

7. Holistic approach

8. Collaboration

Working together, we designed a qualitative study to examine whether the principles actually guided the agencies’ work with youth and whether the outcomes the youth achieved could be linked to the principles. Nora Murphy, a graduate student at the University of Minnesota, was commissioned to conduct the study, which also served as her doctoral dissertation (Murphy, 2014). Selecting a maximum variation sample of homeless youth who had been involved with several of the agencies, Murphy generated 14 case studies of homeless youth based on interviews with the youth and program staff and review of case records. The results of the case studies were then synthesized. The collaborative participated in reviewing every case study to determine which principles were at work and their effectiveness in helping the youth meet their needs and achieve their goals. The story of Thmaris (not his real name) is one of the cases (see Exhibit 7.20, pp. 511–516).

Coding the Case Studies and Interviews to Illuminate the Principles

The six agency directors participated in reviewing and validating the coding, as did the staff of the philanthropic foundation that funded the study. In pairs, they read through the case studies and identified passages, quotations, and events and experiences reported by the youth that seemed to illustrate a principle. Independently, as principal investigator, Murphy (2014) coded all the case studies against the sensitizing concept framework of the principles. Below, as an example, are three quotations from three different case studies (different youth) that were coded as illustrating the principle of trusting youth–adult relationships. (Pseudonyms were selected by the youth.)

• Pearl: And you be like, “Okay, I have all this on my plate. I have to dig in and look into [the choices I’m making] to make my life more complete.” And I felt that on my own, I really couldn’t. Not even the strongest person on God’s green Earth can do it. I couldn’t do it. So I ended up reaching out to [the youth shelter], and they opened their arms. They were like just, “Come. Just get here,” and they got me back on track.

• Maria: If I was to sit in a room and think about, like, everything that happened to me or I’ve been through, I’ll get to cryin’ and feelin’ like I don’t wanna be on Earth anymore—like I wanted to die. When I talk to somebody [at the youth program] about it, it makes me feel better. The people I talk to about it give me good advice. They tell me how much they like me and how [good] I’m doin’. They just put good stuff in my head, and then I think about it and realize I am a good person and everything’s gonna work out better.

• Thmaris: [My case worker] is not going to send me to the next man, put me onto the next person’s caseload. He just always took care of me. . . . I honestly feel like if I didn’t have [him] in my corner, I would have been doing a whole bunch of dumb shit. I would have been right back at square one. I probably would have spent more time in jail than I did. I just felt like if it wasn’t for him, I probably wouldn’t be here right now, talking to you.
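For analysts who track their coding in software, here is a minimal sketch in Python (a hypothetical data structure, not Murphy’s actual procedure) of tallying coded passages by principle across case studies, the kind of bookkeeping that supports a cross-case synthesis:

```python
from collections import Counter

# Hypothetical coded passages: each record pairs a case study with the
# principle a passage was judged to illustrate. In practice, the codes
# come from careful human reading, not from any automated procedure.
coded_passages = [
    {"case": "Pearl", "principle": "Trusting youth-adult relationships"},
    {"case": "Maria", "principle": "Trusting youth-adult relationships"},
    {"case": "Thmaris", "principle": "Trusting youth-adult relationships"},
    {"case": "Pearl", "principle": "Strengths-based approach"},
    # ... one record per coded passage across all 14 case studies
]

# Tally how many passages evidence each principle and in how many
# distinct cases each principle appears.
passage_counts = Counter(p["principle"] for p in coded_passages)
cases_per_principle = {
    principle: {p["case"] for p in coded_passages if p["principle"] == principle}
    for principle in passage_counts
}

for principle, count in passage_counts.most_common():
    print(f"{principle}: {count} passages across {len(cases_per_principle[principle])} cases")
```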

From Analysis to Reporting

Once all the case studies were coded against the principles, Murphy (2014) examined the data for additional principles. She found four possibilities and, in the end, confirmed one: being journey oriented in working with youth. In addition to the case studies, Murphy reviewed other available research (the literature review for the dissertation) for definitions of and evidence of the effectiveness of each principle. Murphy used the cross-case analysis of the 14 interviews with the youth and the research literature to draft a definition for each principle. She and I then facilitated a process with the six agency directors to revise and finalize the definitions. In December 2013, the six agencies, acting together as a collaboration, formally adopted the nine principles (the original eight plus the one that emerged during the inquiry). Exhibit 8.31 presents the nine principles with the definitions developed and adopted. Next, the agency directors and Murphy (2014) worked together to summarize the evidence, explaining each principle in a two-page statement. The outline for the report the group produced is also presented in Exhibit 8.31. One of the most frequent questions I’m asked is how to consolidate, synthesize, and summarize all the rich data that come from a qualitative inquiry into a useful report that can be communicated publicly and used to inform practice and policy. There is no recipe for doing so, but Exhibit 8.31 provides an example to show that it can be done. Even though heavily edited, this is a lengthy exhibit. I have included it because of the importance of having some sense of how one moves from analysis to reporting when the purpose of the inquiry includes communicating results to a larger audience, as in evaluation and policy studies.

EXHIBIT 8.31 Principles-Focused Qualitative Evaluation Report Example

Nine Evidence-Based, Guiding Principles to Help Youth Overcome Homelessness

The Homeless Youth Collaborative on Developmental Evaluation

Part 1. The report begins with one page on each of the six agencies in the collaboration organized by type of service provided to homeless youth:

Outreach: StreetWorks

Drop-in Centers: Face to Face: SafeZone and YouthLink

Shelters: Avenues for Homeless Youth; Catholic Charities: Hope Street; and The Salvation Army: Booth Brown House

Other contributors to the process and the report

Funder: The Otto Bremer Foundation, Saint Paul, Minnesota

Technical assistance: Nora Murphy (2013) and Michael Quinn Patton

Part 2. The nine evidence-based, guiding principles to help youth overcome homelessness are presented.

The principles begin with the perspective that youth are on a journey; all of our interactions with youth are filtered through that journey perspective. This means we must be trauma-informed and non-judgmental and must work to reduce harm. By holding these principles, we can build a trusting relationship that allows us to focus on youths’ strengths and opportunities for positive development. Through all of this, we approach youth as whole beings through a youth-focused collaborative system of support.

Journey Oriented: Interact with youth to help them understand the interconnectedness of past, present and future as they decide where they want to go and how to get there.

Trauma-Informed Care: Recognize that all homeless youth have experienced trauma; build relationships, responses and services on that knowledge.

Non-Judgmental Engagement: Interact with youth without labeling or judging them on the basis of background, experiences, choices or behaviors.

Harm Reduction: Contain the effects of risky behavior in the short-term and seek to reduce its effects in the long-term.

Trusting Youth-Adult Relationships: Build relationships by interacting with youth in an honest, dependable, authentic, caring and supportive way.


Strengths-based Approach: Start with and build upon the skills, strengths and positive characteristics of each youth.

Positive Youth Development: Provide opportunities for youth to build a sense of competency, usefulness, belonging and power.

Holistic: Engage youth in a manner that recognizes that mental, physical, spiritual and social health are interconnected and interrelated.

Collaboration: Establish a principles-based, youth-focused system of support that integrates practices, procedures and services within and across agencies, systems and policies.

Part 3. Introduction to and explanation of principles-focused evaluation (excerpts)

All homeless young people have experienced serious adversity and trauma. The experience of homelessness is traumatic enough, but most also have faced poverty, abuse, neglect or rejection. They have been forced to grow up way too early. Most have serious physical or mental health issues. Some are barely teenagers; others may be in their late teens or early twenties.

Some homeless youth have family connections, some do not; all crave connection and value family. They come from the big city, small towns and rural areas. Most are youth of color and have been failed by systems with institutionalized racism and programs that best serve the white majority. Homeless youth are straight, gay, lesbian, bisexual, transgender or questioning. Some use alcohol or drugs heavily. Some have been in and out of homelessness. Others are new to the streets.

The main point here is that, while all homeless youth have faced trauma, they are all unique. Each homeless youth has unique needs, experiences, abilities and aspirations. Each is on a personal journey through homelessness and, hopefully, to a bright future.

Because of their uniqueness, how we approach and support each homeless young person also must be unique. No recipe exists for how to engage with and support homeless youth. As homeless youth workers and advocates, we cannot apply rigid rules or standard procedures. To do so would result in failure, at best, and reinforce trauma in the young person, at worst. Rules don’t work. We can’t dictate to any young person what is best. The young people know what is best for their future and need the opportunity to engage in self-determination.

This is where Principles come in. Organizations and individuals that successfully support homeless youth take a principles-based approach to their work, rather than a rules-based approach. Principles provide guidance and direction to those working with homeless youth. They provide a framework for how we approach and view the youth, engage and interact with them, build relationships with them and support them. The challenge for youth workers is to meet and connect with each young person where they are and build a supportive relationship from there. Principles provide the anchor for this relationship-building process.

Evaluation Design

Nora Murphy conducted 14 in-depth case studies of homeless youth who had interacted with the agencies, including interviews with the youth and program staff, and review of case records. The results of the case studies were then synthesized. The collaborative participated in reviewing every case study to determine which principles were at work and their effectiveness in helping the youth meet their needs and achieve their goals.

The results showed that all the participating organizations were adhering to the principles and that the principles were effective (even essential) in helping the youth make progress out of homelessness. While staff could not necessarily give a label to every principle at the time of the evaluation interviews, they clearly talked about and followed the principles in practice. In the interviews with the youth, their stories showed how the staff’s principles-based approaches made a critical difference in their journeys through homelessness. (Reference for full dissertation: Murphy, 2013)

Part 4. Statements of 2–3 pages in which each principle is defined and discussed. For each principle, the summary statement explains why the principle matters, summarizes what key research shows, spells out the practice implications, and concludes with illustrative quotes from the evaluation case studies.

Part 5. Following the principles statements are 2-page excerpts from each of the 14 case studies, excerpts that provide an overview of the outcomes the youth experienced through engagement with the youth-serving agencies.

Part 6. One full 15-page case study

Report Conclusion: Principles-Based Practice

Taken together, this set of principles provides a cohesive framework that guides practice in working with homeless youth.

For the full report, see Homeless Youth Collaborative on Developmental Evaluation (2014), Nine Evidence-Based, Guiding Principles to Help Youth Overcome Homelessness (public report length, 71 pp.).

Use of the Report Findings and Case Studies

The six participating agencies are using the principles and case studies for program and staff development, recruitment and selection of new staff, policy advocacy on behalf of homeless youth, fund-raising, and expanded collaboration with other youth-serving organizations locally, statewide, and nationally. For further discussion of how and why qualitative inquiry is especially appropriate for evaluating principles-based programs and collaborations, see Chapters 4 (pp. 189–194) and 5 (p. 292).

Meeting the Challenge of Saying a Lot in a Small Space: The Executive Summary and Research Abstract

The executive summary is a fiction. —Robert Stake (1998, p. 370)

The fact that qualitative reports tend to be relatively lengthy can be a major problem when busy decision makers do not have the time (or, more likely, will not take the time) to read a lengthy report. Stake’s insistence on telling the whole story reflects a preference I share, but my pragmatic, living-in-the-real-world side leads me to conclude that qualitative researchers and evaluators must develop the ability to produce an executive summary of one or two pages that presents the essential findings, conclusions, and reasons for confidence in the summary. The executive summary or research abstract is a dissemination document, a political instrument, and cannot be—nor is it meant to be—a full and fair representation of the study. An executive summary or abstract should be written in plain language, be highly focused, and state the core findings and conclusions. When writing the executive summary or research abstract, keep in mind that more people are likely to read that summary than any other document you produce.


So here are a few tips:

1. Focus on findings, not methods: I see lots of summaries and abstracts where it’s clear that the person reporting is more in love with his or her methods than his or her findings. But the reader is more likely to want to know about your findings than your methods. Report just enough about the methods to explain how the findings were generated, but highlight the most important findings.

2. Pilot test your drafts: Get some people you know to read your executive summary or abstract and tell you what it says to them.

3. Take time to write a well-crafted executive summary or abstract: French philosopher Blaise Pascal wrote a witty and oft-repeated apology in 1657: “I would have written a shorter letter if I had had more time.”

Carpe Diem Briefings

As the hymn book is to the sound of music, the executive summary is to the oral briefing. Legendary are the stories of having spent a year of one's life gathering data, poring over it, and writing a rigorous and conscientious report and then encountering some "decision maker" (I use the term lightly here) who says, "Well, now, I know that you put a lot of work into this. I'm anxious to hear all about what you've learned. I've got about 10 minutes before my next appointment."

Should you turn heel and show him your backside? Not if you want your findings to make a difference. Use that 10 minutes well! Be prepared to make it count. Carpe Diem.

MODULE 75

Chapter Summary and Conclusion, Plus Case Study Exhibits

The role of the qualitative researcher in research projects is often determined by the researcher’s stance and intent. . . . All senses are used to understand the context of the phenomenon under study, the people who are participants in the study and their beliefs and behaviors, and of course the researcher’s own orientation and purposes. . . . It is complicated, filled with surprises, and open to serendipity, and it often leads to something unanticipated in the original design of the research project. At the same time, the researcher works within the frame of a disciplined plan of inquiry, adheres to the high standards of qualitative inquiry, and looks for ways to complement and extend the description and explanation of the project through multiple methods of research, providing that this is done for a specific reason and makes sense. Qualitative researchers do not accept the misconception that more methods mean a better or richer analysis. Rather, the rationale for using selected methods is what counts. The qualitative researcher wants to tell a story in the best possible configuration.

—Valerie Janesick (2011, p. 176) “Stretching” Exercises for Qualitative Researchers

Chapter Overview and Summary

This chapter opened by cautioning that no formula exists for transforming qualitative data into findings. Purpose drives analysis. Design frames analysis. But even with a clear purpose and a strong, appropriate design, the challenge of qualitative analysis remains: making sense of massive amounts of data. This involves reducing the volume of raw information, sifting trivia from significant data, identifying significant patterns, distinguishing signal from noise, and constructing a framework for communicating the essence of what the data reveal.

Exhibit 8.1 (pp. 522–523) offered 12 tips for laying a strong foundation for qualitative analysis. Exhibit 8.2 (p. 528) provided guidance in connecting design and analysis by linking purposeful sampling and purpose- driven analysis. Part of that connection is that in qualitative inquiry, analysis begins during fieldwork. Quantitative research texts, focused on surveys, standardized tests, and experimental designs, typically make a hard-and-fast distinction between data collection and analysis. But the fluid and emergent nature of naturalistic inquiry makes the distinction between data gathering and analysis far less absolute. In the course of fieldwork, ideas about directions for analysis will occur. Patterns take shape. Signals start to emerge from the noise. Possible themes spring to mind. Thinking about implications and explanations deepens the final stage of fieldwork. While earlier stages of fieldwork tend to be generative and emergent, following wherever the data lead, later stages bring closure by moving toward confirmatory data collection—deepening insights into and confirming (or disconfirming) patterns that seem to have appeared. Indeed, sampling confirming and disconfirming cases requires a sense of what there is to be confirmed or disconfirmed. Thus, ideas for making sense of the data that emerge while still in the field constitute the beginning of analysis.

Once formal analysis begins, the first task is organizing the raw data for analysis; then coding and content analysis begin systematically and rigorously. Qualitative software can help manage the data, but sense making remains a quintessential human responsibility. Analysis includes identifying and reflecting on patterns and themes that emerge during analysis as well as being reflective and reflexive about the analysis process. How do you know what you know? How do you make sense of the perspectives of the people you've observed and interviewed? What do you know about the primary audience to whom your findings are addressed? These reflective questions are the doorway into reflexivity.
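For readers who manage transcripts with scripts rather than dedicated qualitative software, it may help to see what the clerical side of coding looks like. The following minimal Python sketch is not from the source; the codes, segments, and function name are invented for illustration. It shows only the bookkeeping of attaching analyst-assigned codes to transcript segments and tallying them; the sense making, as emphasized above, remains the analyst's responsibility.

    # Minimal illustration of the clerical side of qualitative coding:
    # attaching analyst-assigned codes to transcript segments and
    # tallying how often each code appears. (Hypothetical data.)

    from collections import Counter

    # Each segment pairs a chunk of transcript with the codes an
    # analyst assigned to it after reading it in context.
    coded_segments = [
        ("The other parents really helped me the most.", ["peer_support"]),
        ("The schedule breaks into my day at a bad time.", ["logistics", "scheduling"]),
        ("The director treats us like equals.", ["staff_quality", "respect"]),
    ]

    # Tally code frequencies across all segments.
    code_counts = Counter(code for _, codes in coded_segments for code in codes)

    # Retrieve every segment tagged with a given code.
    def segments_with_code(code):
        return [text for text, codes in coded_segments if code in codes]

    for code, n in code_counts.most_common():
        print(f"{code}: {n} segment(s)")
    print(segments_with_code("peer_support"))

A tally like this supports pattern spotting, but, as the chapter's checklist warns, frequency alone does not establish substantive significance.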

Constructing, analyzing, comparing, and interpreting case studies constitute core sense-making processes in many qualitative inquiries. Exhibit 8.33 (pp. 638–642), at the end of this chapter, presents an extensive example of an individual-level case study. Analytical approaches other than case studies are also part of qualitative analysis. Exhibit 8.10 (pp. 551–552) presented 10 types of qualitative analysis. Module 70 discussed interpreting findings, determining substantive significance, and elucidating phenomenological essence. Throughout the chapter, we've examined and discussed how and why analysis and reporting involve making distinctions between organizing data, presenting descriptive patterns and findings, interpreting themes, determining substantive significance, and, a particularly difficult challenge in qualitative inquiry, inferring causal relationships. Exhibit 8.19 (pp. 600–601) presents 12 approaches to qualitative causal analysis. The Halcolm graphic comic at the end of this chapter (pp. 635–637) offers an African fable to stimulate reflection on the nature of causal explanation.

Module 73 presented guidance on writing up and reporting findings, including incorporating creative visualizations that can powerfully and succinctly communicate core findings. Exhibits 8.20 to 8.27 presented an array of examples of visualization of findings. Exhibit 8.26 (pp. 616–618) summarized the strengths and weaknesses of visualization of qualitative data and findings. Exhibit 8.27 highlighted mixed-methods challenges and solutions, especially integrating qualitative and quantitative data and results. Exhibit 8.31 (pp. 627–628) presented an example of a qualitative report focused on principles for serving homeless youth. Exhibit 8.32 on the next page provides a checklist for data analysis, reporting, and interpretation that highlights some of the main points presented in this chapter. Exhibit 8.35 (pp. 643–649) presents excerpts from a qualitative report based on interviews.

Conclusion: The Creativity of Qualitative Inquiry

Creativity will dominate our time after the concepts of work and fun have been blurred by technology.

—Award-winning science fiction writer Isaac Asimov (1983, p. 42)

I have commented throughout this book that the human element in qualitative inquiry is both its strength and its weakness—its strength in allowing human insight and experience to blossom into new understandings and ways of seeing the world, its potential weakness in being so heavily dependent on the inquirer’s skills, training, intellect, discipline, and creativity. Because the researcher is the instrument of qualitative inquiry, the quality of the result depends heavily on the qualities of that human being. Nowhere does this ring more true than in analysis. Being an empathetic interviewer or astute observer does not necessarily make one an insightful analyst—or a creative one. Creativity seems to be one of those special human qualities that play an especially important part in qualitative analysis, interpretation, and reporting. Therefore, I close this chapter with some observations on creativity in qualitative inquiry.

I opened this chapter by commenting on qualitative inquiry as both science and art, especially qualitative analysis. The scientific part demands systematic and disciplined intellectual work, rigorous attention to details within a holistic context, and a critical perspective in questioning emergent patterns even while bringing evidence to bear in support of them. The artistic part invites exploration, metaphorical flourishes, risk taking, insightful sense making, and imaginative connection making. While both science and art involve critical analysis and creative expression, science emphasizes critical faculties more, especially in analysis, while art encourages creativity. The critical thinker assumes a stance of doubt and skepticism: Things have to be proven. Faulty logic, slippery linkages, tautological theories, and unsupported deductions are targets of the critical mind. The critical thinker studies details and looks beyond appearances to find out what is really happening. Evaluators are trained to be rigorous and unyielding in critically thinking about and analyzing programs. Indeed, evaluation is built on the foundation of critical analysis.

Critical thinkers, however, tend not to be very creative. The creative mind generates new possibilities; the critical mind analyzes those possibilities, looking for inadequacies and imperfections. In summarizing research on critical and creative thinking, Barry F. Anderson (1980) has warned that the centrality of doubt in critical thinking can lead to a narrow, skeptical focus that hampers the creative ability to come up with innovative linkages or new insights.

EXHIBIT 8.32 Checklist for Qualitative Data Analysis, Interpreting Findings, and Reporting Results

1. Keep analysis grounded in the data. Return over and over to your field notes, interview transcripts, and the documents or artifacts you’ve collected that inform your analysis.

2. Present actual data for readers to experience firsthand. Use direct quotations, detailed observations, and document excerpts as the core of your reporting and as empirical support for your analysis and interpretations. “In this case, more is more. Convince your reader of your argument with evidence from transcripts, observations, reflective journals, and any other documentary evidence” (Janesick, 2011, p. 177).

3. Distinguish description from interpretation and explanation. Interpretation and explanation go beyond the data. Interpreting and explaining are part of the analyst’s responsibility, but the first and foremost responsibility is organizing and presenting accurate and meaningful descriptive findings.

4. Do justice to each case before doing cross-case analysis. This admonition follows from the previous one. Well-constructed, well-written, thorough, and rich case studies are worthy in and of themselves. Don’t let the allure of finding and presenting cross-case patterns and themes undercut full attention to the integrity and particularity of each case. And when distinguishing signal from noise, describe and interpret both the signal and the noise.

5. Purpose drives analysis and interpretation. If your purpose is to generate or test theory, then a significant part of your discussion will involve connecting your findings and analysis to inform and support theory-focused propositions. If your purpose is evaluative, to support program improvements and inform decision making about programs and policies, organize your findings to address core evaluation issues and provide actionable answers to evaluation questions.

6. Throughout the study track, document, and report your inquiry methods and analytical procedures. Findings flow from methods. Methods flow from inquiry questions. Inquiry questions flow from inquiry purpose. Connect these dots so that the linkages are clear. Report your methods and analytical process in sufficient detail that readers know from whence come your findings and interpretations and why you did what you did.

7. Be reflective and reflexive. As a qualitative inquirer, you are the instrument of the inquiry. Who you are, what brings you to this inquiry, your background, experience, knowledge, and training—all these matter. Report your reflexive inquiry in the methods section of your report (or elsewhere as appropriate). Acknowledge your prior beliefs. You should work to reduce your biases, but to say you have none is a sign that you have many. To state your beliefs up front—to say “Here’s where I’m coming from”—is a way to operate in good faith and to recognize that you perceive reality through a subjective filter. . . . Distinguishing the signal from noise requires both scientific knowledge and self-knowledge. (Silver, 2012, pp. 451, 453)

8. Work at the writing. How you present your findings will matter. Get editorial assistance as needed. Solicit feedback. Take the writing seriously. High school English teachers are right when they say that excellence in writing reflects excellence in thinking. All of your rigorous fieldwork and diligent analysis can come to naught if you present your findings poorly, incoherently, or in boring or error- plagued prose.

9. Make the case for substantive significance. Substantive significance is a matter of judgment. It’s where you synthesize the evidence, consider its import, make transparent weaknesses and limitations in the data, balance strengths against weaknesses, weigh alternative possibilities, and state why the findings are important and merit attention. Don’t overstate the case. Don’t overgeneralize. But don’t become timid and afraid of naysayers. You know the data better than anyone. You’ve immersed yourself in them, worked with them, and lived with them. Say what you’ve come to know and what you don’t know, and provide the evidence for both judgments. Then, make the case for substantive significance. And, by all means, stay qualitative. Don’t determine significance by the numbers of people who said something. It’s not how many said something that matters. It’s the import, wisdom, relevance, insightfulness, and applicability of what was said, by many or by a few. (See MQP Rumination #8, pp. 557–560.)

10. Use both your critical and your creative faculties. Qualitative analysis offers myriad opportunities for both critical and creative thinking. These are not antagonists. Both capabilities dwell within you. Draw on both. Critical thinking is manifest in rigorous attention to alternative interpretations and explanations and in settling on those interpretations and explanations that best fit the preponderance of evidence. Creative thinking is manifest in well-told stories, evocative case studies, astute selection and juxtaposition of quotes and observations, and powerful, captivating, and accurate visual representations of the findings.

The critical attitude and the creative attitude seem to be poles apart. . . . On the one hand, there are those who are always telling you why ideas won’t work but who never seem able to come up with alternatives of their own; and, on the other hand, there are those who are constantly coming up with ideas but seem unable to tell good from the bad.

There are people in whom both attitudes are developed to a high degree . . . , but even these people say they assume only one of these attitudes at a time. When new ideas are needed, they put on their creative caps, and when ideas need to be evaluated, they put on their critical caps. (Anderson, 1980, p. 66)

Qualitative inquiry draws on both critical and creative thinking—both the science and the art of analysis. But the technical, procedural, and scientific side of analysis is easier to present and teach. Creativity, while easy to prescribe, is harder to teach and perhaps harder to learn, but here's some guidance derived from research and training on creative thinking (De Bono, 1999; Kelley & Littman, 2001; Patton, 1987, pp. 247–248; Von Oech, 1998).

1. Be open: Creativity begins with openness to multiple possibilities.

2. Generate options: There's always more than one way to think about or do something.

3. Diverge–converge–integrate: Begin by exploring a variety of directions and possibilities before focusing on the details. Branch out, go on mental excursions, and brainstorm multiple perspectives before converging on the most promising.

4. Use multiple stimuli: Creativity training often includes exposure to many different avenues of expression: drawing, music, role-playing, story boarding, metaphors, improvisation, playing with toys, and constructing futuristic scenarios. Synthesizing through triangulation (see Chapter 9) promotes creative integration of multiple stimuli.

5. Side track, zigzag, and circumnavigate: Creativity is seldom a result of purely linear and logical induction or deduction. The creative person explores back and forth, round and about, in and out, over and under, and so on.

6. Change patterns: Habits, standard operating procedures, and patterned thinking pose barriers to creativity. Become aware of and change your patterned ways of thinking and behaving.

7. Make linkages: Many creative exercises include practice in learning how to connect the seemingly unconnected. The matrix approaches presented in this chapter push linkages. Explore linking qualitative and quantitative data. Work at creative syntheses of case studies and findings.

8. Trust yourself: Self-doubt short-circuits creative impulses. If you say to yourself, "I'm not creative," you won't be. Trust the process.

9. Work at it: Creativity is not all fun. It takes hard work, background research, and mental preparation.

10. Play at it: Creativity is not all work. It can and should be play and fun.

Drawing Conclusions and Closure

Certainty is false closure.

Ambiguity is honest closure.

Staying open eschews closure.

Enough is enough yields pragmatic closure.

Closure is hard. Not closing is harder.

Close . . . for now. —From Halcolm’s Benedictions

In his practical monograph Writing Up Qualitative Research, Wolcott (1990) considers the challenge of how to conclude a qualitative study. Purpose again rules in answering this question. Scholarly articles, dissertations, and evaluation reports have different norms for drawing conclusions. But Wolcott goes further by questioning the very idea of conclusions.

Give serious thought to dropping the idea that your final chapter must lead to a conclusion or that the account must build toward a dramatic climax. . . . In reporting qualitative work, I avoid the term conclusion. I do not want to work toward a grand flourish that might tempt me beyond the boundaries of the material I have been presenting or detract from the power (and exceed the limitations) of an individual case. (p. 55)

This admonition reminds us not to take anything for granted or fall into following some recipe for writing. Asking yourself, “When all is said and done, what conclusions do I draw from all this work?” can be a focusing question that forces you to get at the essence. Indeed, in most reports, you will be expected to include a final section on conclusions. But, as Wolcott (1990) suggests, it can be an unnecessary and inappropriate burden.

Or it can be a chance to look to the future. Spanish-born philosopher and poet George Santayana concluded thus when he retired from Harvard. Students and colleagues packed his classroom for his final appearance. He gave an inspiring lecture and was about to conclude when, in midsentence, he caught sight of a forsythia flower beginning to blossom in the melting snow outside the window. He stopped abruptly, picked up his coat, hat, and gloves, and headed for the door. He turned at the door and said emphatically, “Gentlemen, I should not be able to finish that sentence. I have just discovered that I have an appointment with April.”

Or as Halcolm would say, not concluding is its own conclusion.

I close this chapter with a practical reminder that both the science and the art of qualitative analysis are constrained by limited time. Some people thrive under intense time pressure, and their creativity blossoms. Others don't. The way in which any particular analyst combines critical and creative thinking becomes partly a matter of style and partly a function of the situation, and it often depends on how much time can be found to play with creative possibilities. But exploring possibilities can also become an excuse for not finishing. There comes a time for bringing closure to analysis (or a book chapter) and getting on with other things. Taking too much time to contemplate creative possibilities may involve certain risks, a point made by the following story (to which you can apply both your critical and your creative faculties).

The Past and the Future: Deciding in Which Direction to Look

A spirit appeared to a man walking along a narrow road. “You may know with certainty what has happened in the past, or you may know with certainty what will happen in the future, but you cannot know both. Which do you choose?”

The startled man sat down in the middle of the road to contemplate his choices. “If I know with certainty what will happen in the future,” he reasoned to himself, “then the future will soon enough become the past and I will also know with certainty what has happened in the past. On the other hand, it is said that the past is a prologue to the future, so if I know with certainty what has happened in the past, I will know much about what will happen in the future without losing the elements of surprise and spontaneity.” Deeply lost to the present in the reverie of his calculations about the past and future, he was unaware of the sound of a truck approaching at great speed. Just as he came out of his trance to tell the spirit that he had chosen to know with certainty the future, he looked up and saw the truck bearing down on him, unable to stop its present momentum.

—From Halcolm’s Evaluation Parables


EXHIBIT 8.33 Mike’s Career Education Experience: An Illustrative Case Study

Background

Sitting in a classroom at Metro City High School was difficult for Mike. In some classes, he was way behind. In math, he was always the first to finish a test. “I loved math and could always finish a test in about 10 minutes, but I wasn’t doing well in my other classes,” Mike explained.

He first heard about Experience-Based Career Education (EBCE) when he was a sophomore. “I really only went to the assembly to get out of one of the classes I didn’t like,” Mike confessed.

But after listening to the EBCE explanation, Mike was quickly sold on the idea. He not only liked the notion of learning on the job but also thought the program might allow him to work at his own speed. The notion of no grades and no teachers also appealed to him.

Mike took some descriptive materials home to his parents, and they joined him for an evening session at the EBCE learning center to find out more about the program. Now, after two years in the program, Mike is a senior, and his parents want his younger brother to get into the program.

Early EBCE testing sessions last year verified the inconsistency of Mike’s experiences in school. While his reading and language scores were well below the average scored by a randomly selected group of juniors at his school, he showed above-average abilities in study skills and demonstrated superior ability in math.

On a less tangible level, EBCE staff members early last school year described Mike as being hyperactive, submissive, lacking in self-confidence, and unconcerned about his health and physical appearance when he started the EBCE program. He was also judged to have severe writing deficiencies. Consequently, Mike’s EBCE learning manager devised a learning plan that would build his communication skills (in both writing and interpersonal relations) while encouraging him to explore several career possibilities. Mike’s job experiences and projects were designed to capitalize on his existing interests and broaden them.

First-Year EBCE Experiences

A typical day for Mike started at 8:00 a.m., just as in any other high school, but the hours in between varied considerably. On arriving at the EBCE learning center, Mike said he usually spent some time "fooling around" with the computer before he worked on projects under way at the center.

On his original application, Mike had indicated that his career preference would be computer operator. This led to an opportunity in the EBCE program to further explore that area and to learn more about the job. During April and May, Mike’s second learning-level experience took place in the computer department of City Bank Services. He broke up his time there each day into morning and afternoon blocks, often arriving before his employer-instructor did for the morning period. Mike usually spent that time going through computer workbooks. When his employer-instructor arrived, they went over flow charts together and worked on computer language.

Mike returned to the high school for lunch and a German class he selected as a project. EBCE students seldom take classes at the high school, but Mike had a special interest in German since his grandparents speak the language.

Following German class, Mike returned to the learning center for an hour of work on other learning activities and then went back to City Bank. “I often stayed there until 5:00 p.m.,” Mike said, even though high school hours ended at three.

Mike’s activities and interests widened after that first year in the EBCE program, but his goal of becoming a computer programmer was reinforced by the learning experience at City Bank. The start of a new hobby—collection of computer materials—also occurred during the time he spent at City Bank. “My employer-instructor gave me some books to read that actually started the collection,” Mike said.

Mike’s interest in animals also was enhanced by his EBCE experience. Mike has always liked animals, and his family has owned a horse since he was 12 years old. By picking blueberries, Mike was able to save enough to buy his own colt two years ago. One of Mike’s favorite projects during the year related to his horse. The project was designed to help Mike with basic skills and to improve his critical thinking skills. Mike read about breeds of horses and how to train them. He then joined a 4-H group with hopes of training his horse for a show.

Several months later, Mike again focused on animals for another EBCE project. This time he used the local zoo as a resource, interviewing the zoo manager and doing a thorough study of the Alaskan brown bear. Mike also joined an Explorer Scouting Club of volunteers to help at the zoo on a regular basis. “I really like working with the bears,” Mike reflected. “They were really playful. Did you know when they rub their hair against the bars it sounds like a violin?” Evaluation of the zoo project, one of the last Mike completed during the year, showed much improvement. The learning manager commented to Mike, “You are getting your projects done faster, and I think you are taking more time than you did at first to do a better job.”

Mike got off to a slow start in the area of life skills development. Like some of his peers, he went through a period described by one of the learning managers as “freedom shock,” when removed from the more rigid structure normally experienced in a typical school setting. Mike tended to avoid his responsibility to the more “academic” side of his learning program. At first, Mike seldom followed up on commitments and often did not let the staff know what he was doing. By the end of the year, he had improved remarkably in both of these behavior areas.

Through the weekly writing required in maintaining his journal, Mike demonstrated a significant improvement in written communications, both in terms of presenting ideas and feelings and in the mechanics of writing. Mike also noted an interesting change in his behavior. “I used to watch a lot of TV and never did any reading.” At the beginning of the following year, Mike said, “I read two books last year and have completed eight more this summer. Now I go to the book instead of the television.” Mike’s favorite reading material is science fiction.

Mike also observed a difference in his attitude to homework. “After going to school for six hours, I wouldn’t sit down and do homework. But in the EBCE program, I wasn’t sitting in a classroom, so I didn’t mind going home with some more work on my journal or projects.”

Mike's personal development was also undergoing change. Much of this change was attributed to one of his employer-instructors, an elementary school teacher, who told him how important it is in the work world to wear clean clothes. Both she and the project staff gave Mike much positive reinforcement when his dress improved. That same employer also told Mike that she was really interested in what he had to say and therefore wanted him to speak more slowly so he could be understood.

Mike’s school attendance improved while in the EBCE program. During the year, Mike missed only six days. This was better than the average absence for others in the program, which was found to be 12.3 days missed during the year, and much improved over his high school attendance.

Like a number of other EBCE students in his class, Mike went out on exploration-level experiences but completed relatively few other program requirements during the first three months of the school year. By April, however, he was simultaneously working on eight different projects and pursuing a learning experience at City Bank. By the time Mike completed his junior year, he had finished 9 of the required 13 competencies, explored nine business sites, completed two learning levels, and carried through on 11 projects. Two other projects were dropped during the year, and one remains uncompleted but could be finished in the coming year.

On a more specific level, Mike’s competencies included transacting business on a credit basis, maintaining a checking account, designing a comprehensive insurance program, filing taxes, budgeting, developing physical fitness, learning to cope with emergency situations, studying public agencies, and operating an automobile.

Mike did not achieve the same level of success on all of his job sites. However, his performance consistently improved throughout the year. Mike criticized the exploration packages when he started them in the first months of the program and, although he couldn't pinpoint how, said they could be better. His own reliance on the questions provided in the package was noted by the EBCE staff, with a comment that he rarely followed up on any cues provided by the person he interviewed. The packets reflected Mike's lack of interest in the exploration portion of EBCE work. They showed little effort and a certain sameness in the remarks about his impressions at the various sites.

Mike explored career possibilities at an automobile dealer, an audiovisual repair shop, a supermarket, an air control manufacturer, an elementary school, a housing development corporation, a city public works, a junior high school, and a bank services company.

Mike's first learning-level experience was at the elementary school. At the end of three and one-half months, the two teachers serving as his employer-instructors indicated concern about attendance, punctuality, initiative in learning, and the amount of supervision needed to see that Mike's time was used constructively. Mike did show significant improvement in appropriate dress, personal grooming, and quality of work on assignments.

Reports from the second learning-level experience—at the computer department of the bank services company—showed a marked improvement. The employer-instructor there rated Mike as satisfactory in all aspects and, by the time of the final evaluation, gave excellent ratings in 10 categories—(1) attendance/punctuality, (2) adhering to time schedules, (3) understanding and accepting responsibility, (4) observing employer rules, (5) showing interest and enthusiasm, (6) showing poise and self-confidence, (7) using initiative in seeking opportunities to learn, (8) using employer site learning resources, (9) beginning assigned tasks promptly, and (10) completing the tasks assigned.

Asking Singular Questions

One of the basic rules of questionnaire writing is that each item must be singular—that is, no more than one idea should be contained in any given question. Consider this example:

How well do you know and like the staff in this program?

a. A lot
b. Pretty much
c. Not too much
d. Not at all

This item is impossible to interpret in analysis because it asks two questions:

1. How well do you know the staff?

2. How much do you like the staff?
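Where an instrument is maintained as structured data (in a script or a survey platform export), the repair is mechanical once the two ideas are identified: retire the compound item and add two singular items on the same response scale. Here is a minimal Python sketch; the item IDs and field names are hypothetical, with wording taken from the example above.

    # Replacing one double-barreled item with two singular items that
    # share the same response scale. (Hypothetical instrument data.)

    SCALE = ["A lot", "Pretty much", "Not too much", "Not at all"]

    # The flawed compound item asks two questions at once.
    compound_item = {
        "id": "q7",
        "text": "How well do you know and like the staff in this program?",
        "scale": SCALE,
    }

    # The repaired instrument asks each idea singularly.
    singular_items = [
        {"id": "q7a", "text": "How well do you know the staff?", "scale": SCALE},
        {"id": "q7b", "text": "How much do you like the staff?", "scale": SCALE},
    ]

    for item in singular_items:
        print(item["id"], "-", item["text"])

Keeping the two replacement items on the identical scale preserves comparability while making each response interpretable on its own.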

When one turns to open-ended interviewing, however, many people seem to think that singular questions are no longer called for. Precision gives way to vagueness and confused multiplicity, as in the illustration in this section. I’ve seen transcripts of interviews conducted by experienced and well-known field researchers in which several questions have been thrown together, which they might think are related but which are likely to confuse the person being interviewed about what is really being asked.

To help the staff improve the program, we’d like to ask you to talk about your opinion of the program: What you think are the strengths and weaknesses of the program? What you like? What you don’t like? What you think could be improved or should stay the same? Those kinds of things—and any other comments you have.

The evaluator who used this question regularly in interviewing argued that by asking a series of questions he could find out which was most salient to the person being interviewed because the interviewee was forced to choose what he or she most cared about in order to respond to the question. The evaluator would then probe more specifically in those areas that were not answered in the initial question.

It’s necessary to distinguish, then, between giving an overview of a series of questions at the beginning of a sequence, and then asking each one singularly, versus laying out a whole series of questions at once, and then seeing which one strikes a respondent’s fancy. In my experience, multiple questions create tension and confusion because the person being interviewed doesn’t really know what is being asked. An analysis of the strengths and weaknesses of a program is not the same as reporting what one likes and dislikes. Likewise, recommendations for change may be unrelated to strengths, weaknesses, likes, and dislikes. The following is an excerpt from an interview with a parent participating in a family education program aimed at helping parents become more effective as parents.

Q: Based on your experience, what would you say are the strengths of this program?

A: The other parents. Different parents can get together and talk about what being a parent is like for them. The program is really parents with parents. Parents really need to talk to other parents about what they do, and what works and doesn't work. It's the parents, it really is.

Q: What about weaknesses?

A: I don’t know. . . . I guess I’m not always sure that the program is really getting to the parents who need it the most. I don’t know how you do that, but I just think there are probably a lot of parents out there who need the program and . . . especially maybe single-parent families. And fathers. It’s really hard to get fathers into something like this. It should just get to everybody and that’s real hard.

Q: Let me turn now to your personal likes and dislikes about the program. What are some of the things that you have really liked about the program?

A: I’d put the staff right at the top of that. I really like the program director. She’s really well educated and knows a lot, but she never makes us feel dumb. We can say anything or ask anything. She treats us like people, like equals even. I like the other parents. And I like being able to bring my daughter along. They take her into the child’s part of the program, but we also have some activities together. But it’s also good for her to have her activities with other kids, and I get some time with other parents.

Q: What about dislikes? What are some things you don’t like so much about the program?

A: I don’t like the schedule much. We meet in the afternoons after lunch, and it kind of breaks into the day at a bad time for me, but there isn’t any really good time for all the parents, and I know they’ve tried different times. Time is always going to be a hassle for people. Maybe they could just offer different things at different times. The room we meet in isn’t too great, but that’s no big deal.

Q: Okay, you’ve given us a lot of information about your experiences in the program, strengths and weaknesses you’ve observed, and some of the things you’ve liked and haven’t liked so much. Now I’d like to ask you about your recommendations for the program. If you had the power to change things about the program, what would you make different?

A: Well, I guess the first thing is money. It’s always money. I just think they should put, you know, the legislature should put more money into programs like this. I don’t know how much the director gets paid, but I hear that she’s not even getting paid as much as school teachers. She should get paid like a professional. I think there should be more of these programs and more money in them.

Oh, I know what I'd recommend. We talked about it one time in our group. It would be neat to have some parents who have already been through the program come back and talk with new groups about what they've done with their kids since they've been in the program, you know, like problems that they didn't expect or things that didn't work out, or just getting the benefit of the experiences of parents who've already been through the program to help new parents. We talked about that one day and thought that would be a neat thing to do. I don't know if it would work, but it would be a neat thing. I wouldn't mind doing it, I guess.

Notice that each of these questions elicited a different response. Strengths, weaknesses, likes, dislikes, and recommendations—each question meant something different and deserved to be asked separately. Qualitative interviewing can be deepened through thoughtful, focused, and distinct questions.

A consistent theme runs through this discussion of question formulation: The wording used in asking questions can make a significant difference in the quality of responses elicited. The interviewer who throws out a bunch of questions all at once to see which one takes hold puts an unnecessary burden on the interviewee to decipher what is being asked. Moreover, multiple questions asked at the same time suggest that the interviewer hasn't figured out what question should be asked at that juncture in the interview. Taking the easy way out by asking several questions at once transfers the burden of clarity from the interviewer to the interviewee.

Asking several questions at once can also waste precious interview time. In evaluation interviews, for example, both interviewers and respondents typically have only so much time to give to an interview. To make the best use of that time, it is helpful to think through priority questions that will elicit relevant responses. This means that the interviewer must know what issues are important enough to ask questions about and to ask those questions in a way that the person being interviewed can clearly identify what he or she is being asked—that is, to ask clear questions.

Clarity of Questions

If names are not correct, language will not be in accordance with the truth of things. —Confucius

The interviewer bears the responsibility to pose questions that make it clear to the interviewee what is being asked. Asking understandable questions facilitates establishing rapport. Unclear questions can make the person being interviewed feel uncomfortable, ignorant, confused, or hostile. Asking singular questions helps a great deal to make things clear. Other factors also contribute to clarity.

First, in preparing for an interview, find out what special terms are commonly used by people in the setting. For example, state and national programs often have different titles and language at the local level. CETA (Comprehensive Employment and Training Act Programs) was designed as a national program in which local contractors were funded to establish and implement services in their area. We found that participants only knew these programs by the name of the local contractor, such as “Youth Employment Services,” “Work for Youth,” and “Working Opportunities for Women.” Many participants in these programs did not even know that they were in CETA programs. Conducting an interview with these participants where they were asked about their “CETA experience” would have been confusing and disruptive to the interview.

When I was doing fieldwork in Burkina Faso, the national government was run by the military after a coup d'état. Local officials carried the title "commandant" (commander). However, no one referred to the government as a military government. To do so was not only politically incorrect but risky too. The appropriate official phrase mandated by the rulers in the capital, Ouagadougou, was "the people's government."

Second, clarity can be sharpened by understanding what language participants use among themselves in talking about a setting, activities, or other aspects of life. When we interviewed juveniles who had been placed in foster group homes by juvenile courts, we had to spend a good deal of preparatory time trying to find out how the juveniles typically referred to the group home parents, to their natural parents, to probation officers, and to each other in order to ask questions clearly about each of those sets of people. For example, when asking about relationships with peers, should we use the word juveniles, adolescents, youth, teenagers, or what? In preparation for the interviews, we checked with a number of juveniles, group home parents, and court authorities about the proper language to use. We were advised to refer to "the other kids in the group home." However, we found no consensus about how "kids in the group home" referred to group home parents. Thus, one of the questions we had to ask in each interview was "What do you usually call Mr. and Mrs. __________?" We then used the language given to us by that youth throughout the rest of the interview to refer to group home parents.
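Interviewers who prepare guides electronically can build this substitution in: capture the respondent's own term once, then pipe it into every later question. The following minimal Python sketch is illustrative only; the guide questions, variable name, and example answer are hypothetical, not from the study described above.

    # Pipe a respondent's own term for the group home parents into
    # every subsequent question in the guide. (Hypothetical questions.)

    from string import Template

    guide = [
        Template("What do you like about living with $gh_parents?"),
        Template("How do $gh_parents handle disagreements in the house?"),
    ]

    # Asked once, early in the interview:
    # "What do you usually call Mr. and Mrs. __________?"
    respondents_term = "Bob and Carol"  # example answer

    for q in guide:
        print(q.substitute(gh_parents=respondents_term))

The design point is that the guide carries a placeholder rather than the researcher's own label, so every question is worded in the respondent's language.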

Third, providing clarity in interview questions may mean avoiding using labels altogether. This means that when asking about a particular phenomenon or program component, it may be better to first find out what the interviewee believes that phenomenon to be and then ask questions about the descriptions provided by the person being interviewed. In studying officially designated “open classrooms” in North Dakota, I interviewed parents who had children in those classrooms. (Open classrooms were designed to be more informal, integrated, community based, project oriented, and experiential than traditional classrooms.) However, many of the teachers and local school officials did not use the term open to refer to these classrooms because they wanted to avoid political conflicts and stereotypes that were sometimes associated with the notion of “open education.” Thus, when interviewing parents, we could not ask their opinions about “open education.” Rather, we had to pursue a sequence of questions like the following:

What kinds of differences, if any, have you noticed between your child’s classroom in the past year and the classroom this year? (Parent responds.)

Ok, you’ve mentioned several differences. Let me ask you your opinion about each of the things you’ve mentioned. What do you think about _______?

This strategy avoids the problem of collecting responses that later turn out to be uninterpretable because you can’t be sure what respondents meant by what they said. Their opinions and judgments are grounded in descriptions, in their own words, of what they’ve experienced and what they’re assessing.

A related problem emerged in interviewing children about their classrooms. We wanted to find out how basic skills were taught in "open" classrooms. In preparing for the interviews, we learned that many teachers avoided terms like math time or reading time because they wanted to integrate math and reading into other activities. In some cases, we learned during parent interviews, children reported to parents that they didn't do any "math" in school. These same children would be working on projects, such as the construction of a model of their town using milk cartons, that required geometry, fractions, and reductions to scale, but they did not perceive these activities as "math" because they associated math with worksheets and workbooks. Thus, to find out the kind of math activities children were doing, it was necessary to talk with them in detail about specific projects and work they were engaged in without asking them the simple question, "What kind of math do you do in the classroom?"

Another example of problems in clarity comes from follow-up interviews with mothers whose children were victims of sexual abuse. A major part of the interview focused on experiences with and reactions to the child protection agency, the police, welfare workers, the court system, the school counselor, probation officers, and other parts of the enormously complex system constructed to deal with child sexual abuse. We learned quickly that mothers could seldom differentiate the parts of the system. They didn't know when they were dealing with the courts, the child protection people, the welfare system, or some treatment program. It was all "the system." They had strong feelings and opinions about "the system," so our questions had to remain general, about the system, rather than specifically asking about the separate parts of the system (Patton, 1991).

The theme running through these suggestions for increasing the clarity of questions centers on the importance of using language that is understandable and part of the frame of reference of the person being interviewed. It means taking special care to find out what language the interviewee uses. Questions that use the respondent’s own language are most likely to be clear. This means being sensitive to “languaculture” by attending to “meanings that lead the researcher beyond the words into the nature of the speaker’s world” (Agar, 2000, pp. 93–94). This sensitivity to local language, the “emic perspective” in anthropology, is usually discussed in relation to data analysis in which a major focus is illuminating a setting or culture through its language. Here, however, we’re discussing languaculture not as an analytical framework but as a way of enhancing data collection during interviewing by increasing clarity, communicating respect, and facilitating rapport.

Using words that make sense to the interviewee, words that reflect the respondent’s worldview, will improve the quality of data obtained during the interview. Without sensitivity to the impact of particular words on the person being interviewed, an answer may make no sense at all—or there may be no answer. A Sufi story makes this point quite nicely.

A man had fallen between the rails in a subway station. People were all crowding around trying to get him out before the train ran him over. They were all shouting, "Give me your hand!" but the man would not reach up.

Mulla Nasrudin elbowed his way through the crowd and leaned over the man. “Friend,” he asked, “what is your profession?”

“I am an income tax inspector,” gasped the man.

“In that case,” said Nasrudin, “take my hand!”

The man immediately grasped the Mulla's hand and was hauled to safety. Nasrudin turned to the amazed bystanders. "Never ask a tax man to give you anything, you fools," he said. (Shah, 1973, p. 68)

Before leaving the issue of clarity, let me offer one other suggestion: Be especially careful asking “why” questions.

Why to Take Care Asking “Why?”

Three Zen masters were discussing a flapping flag on a pole. The first observed dryly: “The flag moves.”

“No,” said the second. “Wind is moving.”

“No,” said the third. “It is not flag. It is not wind. It is mind moving.”

“Why” questions presuppose that things happen for a reason and that those reasons are knowable. “Why” questions presume cause–effect relationships, an ordered world, and rationality. “Why” questions move beyond what has happened, what one has experienced, how one feels, what one opines, and what one knows to the making of analytical and deductive inferences.

The problems in deducing causal inferences have been thoroughly explored by philosophers of science (Bunge, 1959; Nagel, 1961). On a more practical level and more illuminative of interviewing challenges, reports from parents about “why” conversations with their children document the difficulty of providing causal explanations about the world. The infinite regression quality of “why” questions is part of the difficulty engendered by using them as part of an interview. Consider this parent–child exchange:

Dad, why does it get dark at night?

Because our side of the earth turns away from the sun.

Dad, why does our side of the earth turn away from the sun?

Because that’s the way the world was made.

Dad, why was the world made that way?

So that there would be light and dark.

Dad, why should there be dark? Why can’t it just be light all the time?

Because then we would get too hot.

Why would we get too hot?

Because the sun would be shining on us all the time.

Why can’t the sun be cooler sometimes?

It is, that’s why we have night.

But why can’t we just have a cooler sun?

Because that’s the way the world is.

Why is the world like that?

It just is. Because . . . .

Because why?

Just because.

Oh.

Daddy?

Yes.

Why don’t you know why it gets dark?

In a program evaluation interview, it might seem that the context for asking a “why” question would be clearer. However, if a precise reason for a particular activity is what is wanted, it is usually possible to ask that question in a way that does not involve using the word why. Let’s look first at the difficulty posed for the respondent by the “why” question and then look at some alternative phrases.

“Why did you join this program?” The actual reasons for joining the program probably consist of some constellation of factors, including the influences of other people, the nature of the program, the nature of the person being interviewed, the interviewee’s expectations, and practical considerations. It is unlikely that an interviewee can sort through all of these levels of possibility at once, so the person to whom the question is posed must pick out some level at which to respond.

• “Because it takes place at a convenient time.” (programmatic reason)

• “Because I’m a joiner.” (personality reason)

• “Because a friend told me about the program.” (information reason)

• “Because my priest told me about the program and said he thought it would be good for me.” (social influence reason)

• “Because it was inexpensive.” (economic reason)

• “Because I wanted to learn about the things they’re teaching in the program.” (outcomes reason)

• “Because God directed me to join the program.” (personal motivation reason)

• “Because it was there.” (philosophical reason)

Anyone being interviewed could respond at any or all of these levels. The interviewer must decide before conducting the interview which of these levels carries sufficient importance to make it worth asking a question. If the primary evaluation question concerns characteristics of the program that attracted participants, then instead of asking “Why did you join?” the interviewer should ask something like the following: “What was it about the program that attracted you to it?” If the evaluator is interested in learning about the social influences that led to participation in a program, either voluntary or involuntary participation, a question like the following could be used:

Other people sometimes influence what we do. What other people, if any, played a role in your joining this program?

In some cases, the evaluator may be particularly interested in the characteristics of participants, so the question might be phrased in the following fashion:

I’m interested in learning more about you as a person and your personal involvement in this program. What is it about you—your situation, your personality, your desires, whatever—what is it about you that you think led you to become part of this program?
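One way to operationalize this advice is to prepare, for each inquiry dimension you care about, a specific alternative to the bare "why" before the interview begins. Here is a minimal Python sketch; the dimension labels are illustrative groupings of the example wordings above, not a fixed scheme from the source.

    # Prepared alternatives to a bare "Why did you join?", keyed by the
    # inquiry dimension the evaluator actually cares about.

    why_alternatives = {
        "program_features": "What was it about the program that attracted you to it?",
        "social_influence": "What other people, if any, played a role in your joining this program?",
        "participant_characteristics": "What is it about you that you think led you to become part of this program?",
    }

    def question_for(dimension):
        # Fall back to a neutral probe rather than a bare "why."
        return why_alternatives.get(dimension, "Tell me more, if you will, about your thinking on that.")

    print(question_for("social_influence"))

Deciding in advance which dimension matters is the real work; the lookup simply keeps the interviewer from improvising a "why" under time pressure.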

When used as a probe, “why” questions can imply that a person’s response was somehow inappropriate. “Why did you do that?” may sound like doubt that an action (or feeling) was justified. A simple “Tell me more, if you will, about your thinking on that” may be more inviting.

The point is that by thinking carefully about what you want to know and being sensitive to what the interviewee will hear in your question, there is a greater likelihood that respondents will supply answers that make sense—and are relevant, usable, and interpretable. My cautions about the difficulties raised with “why” questions come from trying to analyze such questions when responses covered such a multitude of dimensions that it was clear different people were responding to different things. This makes analysis unwieldy.

Perhaps my reservations about the use of “why” questions come from having appeared the fool when asking such questions during interviews with children. In our open classroom interviews, several teachers had mentioned that children often became so involved in what they were doing that they chose not to go outside for recess. We decided to check this out with the children.

“What’s your favorite time in school?” I asked a first grader.

“Recess,” she answered quickly.

“Why do you like recess?”

“Because we go outside and play on the swings.”

“Why do you go outside?” I asked.

“Because that’s where the swings are!”

She replied with a look of incredulity that adults could ask such stupid questions, then explained helpfully, “If you want to swing on the swings, you have to go outside where the swings are.”

Children take interview questions quite literally, and so it becomes clear quickly when a question is not well thought out. It was during those days of interviewing children in North Dakota that I learned about the problems with “why” questions.

©2002 Michael Quinn Patton and Michael Cochran

MODULE 60

Rapport, Neutrality, and the Interview Relationship

The four essential elements of the universe are light, energy, time, and rapport. —Halcolm

Rapport Through Neutrality

As an interviewer, I want to establish rapport with the person I am interviewing, but that rapport must be established in a way that does not undermine my neutrality concerning what the person tells me. I must be nonjudgmental. Neutrality means that the person being interviewed can tell me anything without engendering either my favor or disfavor. I cannot be shocked; I cannot be angered; I cannot be embarrassed; I cannot be saddened. Nothing the person tells me will make me think more or less of her or him. Openness and trust flow from nonjudgmental rapport.

At the same time that I am neutral with regard to the content of what is being said to me, I care very much that that person is willing to share with me what he or she is saying. Rapport is a stance vis-à-vis the person being interviewed. Neutrality is a stance vis-à-vis the content of what that person says. Rapport means that I respect the people being interviewed, so what they say is important because of who is saying it. I want to convey to them that their knowledge, experiences, attitudes, and feelings are important. Yet I will not judge them for the content of what they say to me.

SIDEBAR

EMPATHIC NEUTRALITY

Empathic neutrality is one of the 12 core strategies of qualitative inquiry discussed in Chapter 2. (See Exhibit 2.1 and pages 46–47.)

Naturalistic inquiry involves fieldwork that puts one in close contact with people and their problems. What is to be the researcher’s cognitive and emotional stance toward those people and problems? No universal prescription can capture the range of possibilities, for the answer will depend on the situation, the nature of the inquiry, and the perspective of the researcher. But thinking strategically, I offer the phrase “empathic neutrality” as a point of departure. It connotes a middle ground between becoming too involved, which can cloud judgment, and remaining too distant, which can reduce understanding. What is empathic neutrality? In essence, it is understanding a person’s situation and perspective without judging the person—and communicating that understanding with authenticity to build rapport, trust, and openness.

Reflections on Empathic Neutrality From an Experienced International Program Evaluator

For me there is a big difference between being empathetic when you interview (like when I work with full-blown AIDS patients, abused women, or orphans—seriously how can you not be touched?) and taking on evaluations because you are empathetic (as I often do) and therefore care about getting good empirical data. Let me explain.

Being empathetic, and really caring, should make an evaluator do an even better EMPIRICAL job. For me, when I am involved emotionally (which I usually am), I need to try even harder because I realize that it is UP TO ME (I take that all on. . . . ) to make sure that when the critics come around, I have ensured that the program has a strong Theory of Change and Theory of Action and data to evaluate the program, and, data to support making changes in order to make it a stronger program. That holds true for impact evaluations as well. If I don’t find a program effective then I also think how that money could be spent on a MORE EFFECTIVE program that WILL change people's lives. It’s not that I want the program to be successful in itself (thus my neutrality), I want people to have better lives (the basis for empathy). If I was neutral about influencing people’s lives, I wouldn’t try so hard. I care, so I work very hard to do a good job.

I am sometimes told, “But you are [an] inside evaluator so you are biased.” AGHH! I am an inside evaluator so I know all the dirty laundry, which makes me better able to identify critical points of data to inform improvements. If anything being inside enables me to [be] MORE critical, because you can’t hide anything from me.

—Donna Podems, PhD

Director, Otherwise Research and Evaluation

Cape Town, South Africa

Rapport is built by conveying empathy and understanding without judgment. In this section, I want to focus on ways of wording questions that are particularly aimed at conveying that important sense of neutrality.

Using Illustrative Examples in Questions

One kind of question wording that can help establish neutrality is the illustrative examples format. When phrasing questions in this way, I want to let the person I’m interviewing know that I’ve pretty much heard it all—the bad things and the good—and so I’m not interested in something that is particularly sensational, particularly negative, or especially positive. I’m really only interested in what that person’s genuine experience has been. I want to elicit open and honest judgments from them without making them worry about my judging what they say. Consider this example of the illustrative examples format from interviews we conducted with juvenile delinquents who had been placed in foster group homes. One section of the interview was aimed at finding out how the juveniles were treated by group home parents.

Okay, now I’d like to ask you to tell me how you were treated in the group home by the parents. Some kids have told us they were treated like one of the family; some kids have told us that they got knocked around and beat up by the group home parents; some kids have told us about sexual things that were done to them; some of the kids have told us about fun things and trips they did with the parents; some kids have felt they were treated really well and some have said they were treated pretty bad. What about you—how have you been treated in the group home?

A closely related approach is the illustrative extremes format—giving examples only of extreme responses. This question is from a follow-up study of award recipients who received a substantial fellowship with no strings attached.

How much of the award, if any, did you spend on entirely personal things to treat yourself well? Some fellows have told us they spent a sizable portion of the award on things like a new car, a hot tub, fixing up their house, personal trips, and family. Others spent almost nothing on themselves and put it all into their work. How about you?

In both the illustrative examples format and the illustrative extremes format, it is critical to avoid asking leading questions. Leading questions are the opposite of neutral questions; they give the interviewee hints about what would be a desirable or appropriate kind of answer. Leading questions “lead” the respondent in a certain direction. Below are questions I found on transcripts during review of an evaluation project carried out by a reputable university center.

“We’ve been hearing a lot of really positive comments about this program. So what’s your assessment?”

or

“We’ve already heard that this place has lots of troubles, so feel free to tell us about the troubles you’ve seen.”

or

“I imagine it must be horrible to have a child abused and have to deal with the system, so you can be honest with me. How bad was it?”

Each of these questions builds in a response bias that communicates the interviewer’s belief about the situation prior to hearing the respondent’s assessment. The questions are “leading” in the sense that the interviewee can be led into acquiescence with the interviewer’s point of view.

In contrast, the questions offered earlier to demonstrate the illustrative examples format included several dimensions to provide a balance between what might be construed as positive and negative kinds of responses. I prefer to use the illustrative examples format primarily as a clarifying strategy after having begun with a simple, straightforward, and truly open-ended question: “What do you think about this program?” or “What has been your experience with the system?” Only if this initial question fails to elicit a thoughtful response, or if the interviewee seems to be struggling, will I offer illustrative examples to facilitate a deeper response.

Role-Playing and Simulation Questions

Providing context for a series of questions can help the interviewee home in on relevant responses. A helpful context provides cues about the level at which a response is expected. One way of providing such a context is to role-play with persons being interviewed, asking them to respond to the interviewer as if he or she were someone else.

Suppose I was a new person who just came into this program, and I asked you what I should do to succeed here, what would you tell me?

or

Suppose I was a new kid in this group home, and I didn’t know anything about what goes on around here, what would you tell me about the rules that I have to be sure to follow?

These questions provide a context for what would otherwise be quite difficult questions, for example, “How does one get the most out of this program?” or “What are the rules of this group home?” The role-playing format emphasizes the interviewee’s expertise—that is, it puts him or her in the role of expert because he or she knows something of value to someone else. The interviewee is the insider with inside information. The interviewer, in contrast, as an outsider, takes on the role of novice or apprentice. The “expert” is being asked to share his or her expertise with the novice. I’ve often observed interviewees become more animated and engaged when asked role-playing questions. They get into the role.

A variation on the role-playing format involves the interviewer dissociating somewhat from the question to make it feel less personal and probing. Consider these two difficult questions for a study of a tough subject: teenage suicide.

“What advice would you give someone your age who was contemplating suicide?”

versus

“Think of someone you know and like who is moody. Suppose that person told you that he or she was contemplating suicide. What would you tell that person?”

The first question comes across as abrupt and demanding, almost like an examination to see if he or she knows the right answer. The second question, with the interviewee allowed to create a personal context, is softened and, hopefully, more inviting. This technique can be overused, and it can sound phony if asked insensitively. Used sparingly and with subtlety, however, with an intonation that communicates genuine interest, the role-playing format can ease the asking of difficult questions, deepen answers, and enhance the quality of responses.

Simulation questions provide context in a different way, by asking the person being interviewed to imagine himself or herself in the situation in which the interviewer is interested.

Suppose I was present with you during a staff meeting. What would I see going on? Take me there.

or

Suppose I was in your classroom at the beginning of the day when the students first come in. What would I see happening as the students came in? Take me to your classroom, and let me see what happens during the first 10 to 15 minutes as the students arrive—what you’d be doing, what they’d be doing, what those first 15 minutes are like.

In effect, these questions ask the interviewee to become an observer. In most cases, a response to this question will require the interviewee to visualize the situation to be described. I frequently find that the richest and most detailed descriptions come from a series of questions that ask a respondent to reexperience and/or simulate some aspect of an experience.

Presupposition Questions

Presupposition questions involve a twist on the theme of empathic neutrality. Presuppositions have been identified by linguists as a grammatical structure that creates rapport by assuming shared knowledge and assumptions (Bandler & Grinder, 1975; Karttunen, 1973). Natural language is filled with presuppositions. In the course of our day-to-day communications, we often employ presuppositions without knowing we’re doing so. By becoming aware of the effects of presupposition questions, we can use them strategically in interviewing. The skillful interviewer uses presuppositions to increase the richness and depth of responses.

What, then, are presuppositions? Linguists Bandler and Grinder (1975) define presuppositions as follows:

When each of us uses a natural language system to communicate, we assume that the listener can decode complex sound structures into meanings, i.e., the listener has the ability to derive the Deep-Structure meaning from the Surface-Structure we present to him auditorily. . . . We also assume the complex skill of listeners to derive extra meaning from some Surface-Structures by the nature of their form. Even though neither the speaker nor the listener may be aware of this process, it goes on all the time. For example, if someone says: I want to watch Kung Fu tonight on TV we must understand that Kung Fu is on TV tonight in order to process the sentence “I want to watch . . . ” to make any sense. These processes are called presuppositions of natural language. (p. 241)

Used in interviewing, presuppositions communicate that the respondent has something to say, thereby increasing the likelihood that the person being interviewed will, indeed, have something to say. Consider the following question: “What is the most important experience you have had in the program?” This question presupposes that the respondent has had an important experience. Of course, the response may be “I haven’t had any important experiences.” However, it is more likely that the interviewee will internally access an experience to report as important rather than dealing first with the question of whether or not an important experience has occurred.

EXHIBIT 7.10 Illustrative Dichotomous Versus Presupposition Questions

Listed below, on the left, are typical dichotomous questions used to introduce a longer series of questions. On the right are presupposition questions that bypass the dichotomous lead-in query and, in some cases, show how adding “if any” retains a neutral framing.

Contrast the presupposition format—“What is the most important experience you have had in the program?”—to the following dichotomous question: “Have you had any experiences in the program that you would call really important?” This dichotomous framing of the question requires the person to make a decision about what an important experience is and whether one has occurred. The presupposition format bypasses this initial step by asking directly for description rather than asking for an affirmation of the existence of the phenomenon in question. Exhibit 7.10 contrasts typical dichotomous response questions with presuppositions that bypass the dichotomous lead-in query and, in some cases, show how adding “if any” retains a neutral framing. Compare the two question formats, and think about how you would likely respond to each.

A naturalness of inquiry flows from presuppositions, which make more comfortable what might otherwise be embarrassing or intrusive questions. The presupposition carries the implication that what is presupposed is the natural way things occur—for example, that it is natural for there to be conflict in programs. The presupposition provides a stimulus that asks the respondent to mentally access the answer to the question directly, without first making a decision about whether or not something has actually occurred.

I first learned about interview presuppositions from a friend who worked with the agency in New York City that had responsibility for interviewing carriers of venereal disease. His job was to find out about the carrier’s previous sexual contacts so that those persons could be informed that they might have venereal disease. He had learned to avoid asking men “Have you had any sexual relationships with other men?” Instead, he asked, “How many sexual contacts with other men have you had?” The dichotomous question carried the burden for the respondent of making a decision about some admission of homosexuality and/or promiscuity. The presupposition form of the open-ended question implied that some sexual contacts with other men might be quite natural and focused on the frequency of occurrence rather than on whether or not such sexual contacts had occurred at all. The venereal disease interviewers found that they were much more likely to generate open responses with the presupposition format than with the dichotomous response format.

The purpose of in-depth interviews is to find out what someone has to say. Presupposing that the person being interviewed does, indeed, have something to say may enhance the quality of the descriptions received. However, a note of warning: Presuppositions, like any single form of questioning, can be overused. Presuppositions are one option. There are many times when it is more comfortable and appropriate to check out the relevance of a question with a dichotomous inquiry (Did you go to the lecture?) before asking further questions (What did you think of the lecture?).

EXHIBIT 7.11 Summary of Question Formats to Facilitate Communicating Interviewer Neutrality

Summary of Skillful Questioning to Communicate Neutrality

We’ve reviewed five question formats to communicate neutrality and help establish rapport. Exhibit 7.11 reviews and summarizes these question types: (1) illustrative examples format, (2) illustrative extremes format, (3) role-playing questions, (4) simulation questions, and (5) presupposition questions. But as important as skillful questioning is, and I believe it’s quite important, it is even more important to be attentive to and mindful about the overall patterns of interaction that emerge during an interview and ultimately determine the kind of relationship that gets built between interviewer and interviewee. Rapport and empathy reside in that relationship.

Beyond Skillful Questioning: Relationship-Focused, Interactive Interview Approaches

The interview is a specific form of conversation where knowledge is produced through the interaction between an interviewer and an interviewee. . . .

If you want to know how people understand their world and their lives, why not talk with them? Conversation is a basic mode of human interaction. Human beings talk with each other, they interact, pose questions and answer questions. Through conversations we get to know other people, get to learn about their experiences, feelings and hopes and the world they live in. In an interview conversation, the researcher asks about, and listens to, what people themselves tell about their lived world, about their dreams, fears and hopes, hears their views and opinions in their own words, and learns about their school and work situation—their family and social life. The research interview is an interview where knowledge is constructed in the interaction between the interviewer and the interviewee. (Kvale, 2007, pp. xvii, 1)

As we discussed earlier in this chapter, some approaches to interviewing focus on standardizing the interview protocol so that each person interviewed is asked the same questions in the same way. That standardization is considered the foundation of validity and reliability in traditional social science interviewing (see Exhibit 7.1, pp. 423–424). In contrast, other qualitative methodologists emphasize that the depth, honesty, and quality of responses in an interview depend on the relationship that develops between the interviewee and the interviewer (Josselson, 2013). Exhibit 7.12 presents six relationship-focused, interactive interview approaches aimed at establishing rapport, empathy, mutual respect, and mutual trust. It is important not to get so caught up in trying to word questions perfectly that you miss the dynamics of the unfolding relationship that is at the heart of interactive interviewing.

EXHIBIT 7.12 Six Relationship-Focused, Interactive Interview Approaches

Below are six relationship-focused, interactive interview approaches aimed at establishing rapport, empathy, mutual respect, and mutual trust.

Pacing and Transitions in Interviewing

The longer an interview, the more important it is to be aware of pacing and transitions. The popular admonition to “go with the flow” alerts us to the fact that flow varies. Sometimes there are long stretches of calm water in a river trip; then around a corner, there are rapids. A river narrows and widens as it flows. To “go with the flow,” you must be aware of the flow. Interviews have their own flow. Here are some ways to manage the flow.

Prefatory Statements and Announcements

The interaction between interviewer and interviewee is greatly facilitated when the interviewer alerts the interviewee to what is about to be asked before it is actually asked. Think of it as warming up the respondent, or ringing the interviewee’s mental doorbell. This is done with prefatory statements that introduce a question. These can serve two functions. First, a preface alerts the interviewees to the nature of the question that is coming, directs their awareness, and focuses their attention. Second, an introduction to a question gives respondents a few seconds to organize their thoughts before responding. Prefaces, transition announcements, and introductory statements help smooth the flow of the interview. Any of several formats can be used.

The transition format announces that one section or topic of the interview has been completed and a new section or topic is about to begin.

We’ve been talking about the goals and objectives of the program. Now I’d like to ask you some questions about actual program activities. What are the major activities offered to participants in this program?

or

We’ve been talking about your childhood family experiences. Now I’d like to ask you about your memories of and experiences in school.

The transition format essentially says to the interviewee, “This is where we’ve been . . . , and this is where we’re going.” Questions prefaced by transition statements help maintain the smooth flow of an interview.

An alternative format is the summarizing transition. This involves bringing closure to a section of the interview by summarizing what has been said and asking if the interviewee has anything to add or clarify before moving on to a new subject.

Before we move on to the next set of questions, let me make sure I’ve got everything you said about the program goals and objectives. You said the program had five goals. First, . . . Second, . . .

Before I ask you some questions about program activities related to these goals, are there any additional goals or objectives that you want to add?

The summarizing transition lets the person being interviewed know that the interviewer is actively listening to and recording what is being said. The summary invites the interviewee to make clarifications, corrections, or additions before moving on to a new topic.

The direct announcement format simply states what will be asked next. A preface to a question that announces its content can soften the harshness or abruptness of the question itself. Direct prefatory statements help make an interview more conversational and easily flowing. The examples below show the same question asked two ways, first without a prefatory statement and then with one.

• Question without preface: How have you changed as a result of the program?

• Question with preface: Now, let me ask you to think about any changes you see in yourself as a result of participating in this program. (Pause.) How, if at all, have you been changed by your experiences in this program?

The attention-getting preface goes beyond just announcing the next question to making a comment about the question. The comment may concern the importance of the question, the difficulty of the question, the openness of the question, or any other characteristic of the question that would help set the stage. Consider these examples:

• This next question is particularly important to the program staff. How do you think the program could be improved?

• This next question is purposefully vague so that you can respond in any way that makes sense to you. What difference has this program made to the larger community?

SIDEBAR

CONTEXTUALIZING A QUESTION MAKES A DIFFERENCE

How you introduce a question can have a significant impact on how interviewees respond. Kim Manturuk, Senior Research Associate at the University of North Carolina Center for Community Capital, interviewed low-income people about whether they had enough food to eat. Manturuk (2013) found that asking the question straight out could be offensive.

We added an introduction along the lines of “In these difficult economic times, more people are finding it difficult to always have the kinds of foods they like.” . . . We found that, by setting it up so that the cause of food insecurity was the economy (and not an individual failing), people seemed more comfortable answering. (p. 1)

• This next question may be particularly difficult to answer with certainty, but I’d like to get your thoughts on it. In thinking about how you’ve changed during the past year, how much has this program caused those changes compared with other influences on your life at this time?

• This next question is aimed directly at getting your unique perspective. What’s it like to be a participant in this program?

• As you may know, this next issue has been both controversial and worrisome. What kinds of staff are needed to run a program like this?

The common element in each of these examples is that some prefatory comment is made about the question to alert the interviewee to the nature of the question. The attention-getting format communicates that the question about to be asked has some unique quality that makes it particularly worthy of being answered.

Making statements about the questions being asked is a way for the interviewer to engage in some conversation during the interview without commenting judgmentally on the answers being provided by the interviewee. What is said concerns the questions and not the respondent’s answers. In this fashion, the interview can be made more interesting and interactive. However, all of these formats must be used selectively and strategically. Constant repetition of the same format or mechanical use of a particular approach will make the interview more awkward rather than less so. Exhibit 7.13 summarizes the pacing and transition formats we’ve just discussed.

EXHIBIT 7.13 Summary of Pacing and Transition Formats

Probes and Follow-Up Questions

Probes are used to deepen the response to a question, increase the richness and depth of responses, and give cues to the interviewee about the level of response that is desired. The word probe is usually best avoided in interviews—a little too proctological. The expression “Let me probe that further” can sound as if you’re conducting an investigation of something illicit or illegal. Quite simply, a probe is a follow-up question used to go deeper into the interviewee’s responses. As such, probes should be conversational, offered in a natural style and voice, and used to follow up initial responses.

One natural set of conversational probes consists of detail-oriented questions. These are the basic questions that fill in the blank spaces of a response.

When did that happen?

Who else was involved?

Where were you during that time?

What was your involvement in that situation?

How did that come about?

Where did that happen?

These detail-oriented probes are the basic “who,” “where,” “what,” “when,” and “how” questions that are used to obtain a complete and detailed picture of some activity or experience.


At other times, an interviewer may want to keep a respondent talking about a subject by using elaboration probes. The best cue to encourage continued talking is nonverbal—gently nodding your head as positive reinforcement. However, overenthusiastic head nodding may be perceived as endorsement of the content of a response or as wanting the person to stop talking because the interviewer has already understood what the respondent has to say. Gentle and strategic head nodding is aimed at communicating that you are listening and want to go on listening.

The verbal corollary of head nodding is the quiet “uh-huh.” A combination may be necessary; when the respondent seems about to stop talking and the interviewer would like to encourage more comment, a combined “uh-huh” with a gentle rocking of the whole upper body can communicate interest in having the interviewee elaborate.

Elaboration probes also have direct verbal forms:

• Would you elaborate on that?

• That’s helpful. I’d appreciate a bit more detail.

• I’m beginning to get the picture. (The implication is that I don’t have the full picture yet, so please keep talking.)

If something has been said that is ambiguous or an apparent non sequitur, a clarification probe may be useful. Clarification probes tell the interviewee that you need more information, a restatement of the answer, or more context.

You said the program is a “success.” What do you mean by “success”?

I’m not sure I understand what you meant by that. Would you elaborate, please?

I want to make sure I understand what you’re saying. I think it would help me if you could say some more about that.

A clarification probe should be used naturally and gently. It is best for the interviewer to convey the notion that the failure to understand is the fault of the interviewer and not a failure of the person being interviewed. The interviewer does not want to make the respondent feel inarticulate, stupid, or muddled. After one or two attempts at achieving clarification, it is usually best to leave the topic that is causing confusion and move on to other questions, perhaps returning to that topic at a later point.

Another kind of clarifying follow-up question is the contrast probe (McCracken, 1988, p. 35). The purpose of a contrast probe is to “give respondents something to push off against” by asking, “How does x compare with y?” This is used to help define the boundaries of a response. How does this experience/feeling/action/term compare with some other experience/feeling/action/term?

A major characteristic that separates probes from general interview questions is that probes are seldom written out in an interview guide. Probing is a skill that comes from knowing what to look for in the interview, listening carefully to what is said and what is not said, and being sensitive to the feedback needs of the person being interviewed. Probes are always a combination of verbal and nonverbal cues. Silence at the end of a response can indicate as effectively as anything else that the interviewer would like the person to continue. Probes are used to communicate what the interviewer wants. More detail? Elaboration? Clarity?

SIDEBAR

ENTERING INTO THE WORLDS OF OTHERS THROUGH INTERVIEWING IN THE FIELD

In Far From the Tree, Andrew Solomon (2012) reports his inquiry into family experiences of deafness, dwarfism, autism, schizophrenia, disability, prodigies, transgender, crime, and children born of rape. Over 10 years, he interviewed more than 300 families. He reflects,

I had to learn a great deal to be able to hear these men and women and children. On my first day at my first dwarf convention, I went over to help an adolescent girl who was sobbing. “This is what I look like,” she blurted between gasps, and it seemed she was half laughing. “These people look like me.” Her mother, who was standing nearby, said, “You don’t know what this means to my daughter. But it also means a lot to me, to meet these other parents who will know what I’m talking about.” She assumed I, too, must be a parent of a child with dwarfism; when she learned that I was not, she chuckled, “For a few days, now, you can be the freakish one” (p. 41).

Follow-Up Questions: Listening for Markers

Follow-up questions pick up on cues offered by the interviewee. While probes are used to get at information the interviewer knows is important, follow-up questions are more exploratory. The interviewee says something almost in passing, maybe as an afterthought or a side comment, and you note that there may be something there worth following up on. Weiss (1994) calls these “markers.”

I define a marker as a passing reference made by a respondent to an important event or feeling state. . . . Because markers occur in the course of talking about something else, you may have to remember them and then return to them when you can, saying, “A few minutes ago you mentioned . . . .” But it is a good idea to pick up a marker as soon as you conveniently can if the material it hints at could in any way be relevant for your study. Letting the marker go will demonstrate to the respondent that the area is not of importance for you. It can also demonstrate that you are only interested in answers to your questions, not in the respondent’s full experience. (p. 77)

In the interviews with participants in the wilderness-based leadership program, one section asked about any concerns leading up to participation. In the midst of talking about having diligently followed the suggested training routine (hiking at least a half-hour every day leading up to the program), a participant mentioned that she knew a friend whose boyfriend had died in a climbing accident on a wilderness trip; then, she went on talking about her own preparation. When she had finished describing how she had prepared for the program, I returned to her comment about the friend’s boyfriend. Quite unexpectedly, pursuing this casual offhand comment (marker) opened up strong feelings, not about possible injuries or death but about the possibility of forming romantic relationships on the 10-day wilderness experiences. It turned out to mark a decisive shift in the interview into a series of important and revealing interpersonal issues that provided a crucial context for understanding her subsequent program experiences.

Probes and follow-up questions, then, provide guidance to the person interviewed. They also provide the interviewer with a way to facilitate the flow of the interview, the subject to which we now turn.

Process Feedback During the Interview

Previous sections have emphasized the importance of thoughtful wording so that interview questions are clear, understandable, and answerable. Skillful question formulation and probing concern the content of the interview. This section emphasizes feedback about how the interview process is going.

A good interview feels like a connection has been established in which communication is flowing two ways. Qualitative interviews differ from interrogations or detective-style investigations. The qualitative interviewer has a responsibility to communicate clearly what information is desired and why that information is important, and to let the interviewee know how the interview is progressing.

An interview is an interaction in which an interchange occurs and a temporary interdependence is created. The interviewer provides stimuli to generate a reaction. That reaction from the interviewee, however, is also a stimulus to which the interviewer responds. You, as the interviewer, must maintain awareness of how the interview is flowing, how the interviewee is reacting to questions, and what kinds of feedback are appropriate and helpful to maintain the flow of communication.

Support and Recognition Responses

A common mistake among novices is failing to provide reinforcement and feedback—that is, letting the interviewee know from time to time that the purpose of the interview is being fulfilled. Words of thanks, support, and even praise will help make the interviewee feel that the interview process is worthwhile and will support ongoing rapport. Here are some examples:

• Your comments about program weaknesses are particularly helpful, I think, because identification of the kind of weaknesses you describe can really help in considering possible changes in the program.

• It’s really helpful to get such a clear statement of what it feels like to be an “outsider” as you’ve called yourself. Your reflections are just the kind of thing we’re trying to get at.

• We’re about halfway through the interview now, and from my point of view, it’s going very well. You’ve been telling me some really important things. How’s it going for you?

• I really appreciate your willingness to express your feelings about that. You’re helping me understand—and that’s exactly why I wanted to interview you.

You can get clues about what kind of reinforcement is appropriate by watching the interviewee. When verbal and nonverbal behaviors indicate that someone is really struggling with a question—going mentally deep within, working hard to form an answer—it can be helpful to say something like the following after the response: “I know that was a difficult question and I really appreciate your working with it, because what you said was very meaningful and came out very clearly. Thank you.”

At other times, you may perceive that only a surface or shallow answer has been provided. It may then be appropriate to say something like the following:

I don’t want to let that question go by without asking you to think about it just a little bit more, because I feel you’ve really given some important detail and insights on the other questions and I’d like to get more of your reflections about this question.

In essence, the interviewer, through feedback, is “training” the interviewee to provide high- quality and relevant responses.

Maintaining Control and Enhancing the Quality of Responses

Time is precious in an interview. Long-winded responses, irrelevant remarks, and digressions reduce the amount of time available to focus on critical questions. These problems are exacerbated when the interviewer fails to maintain a reasonable degree of control over the process. Control is facilitated by (a) knowing what you want to find out, (b) asking focused questions to get relevant answers, (c) listening attentively to assess the quality and relevance of responses, and (d) giving appropriate verbal and nonverbal feedback to the person being interviewed.

Knowing what you want to find out means being able to recognize and distinguish relevant from irrelevant responses. It is not enough just to ask the right questions. You, the interviewer, must listen carefully to make sure that the responses you receive provide meaningful answers to the questions you ask. Consider the following exchange:

SIDEBAR

CREATIVE BUMBLING

If doing nothing can produce a useful reaction, so can the appearance of being dumb. You can develop a distinct advantage by waxing slow of wit. Evidently, you need help. Who is there to help you but the person who is answering your questions? The result is the opposite of the total shutdown that might have occurred if you had come on glib and omniscient. If you don’t seem to get something, the subject will probably help you get it. If you are listening to speech and at the same time envisioning it in print, you can ask your question again, and again, until the repeated reply will be clear in print. Who is going to care if you seem dumber than a cardboard box? Reporters call that creative bumbling.

—John McPhee (2014, p. 50)

Journalist

Q: What happens in a typical interviewer training session that you lead?

A: I try to be sensitive to where each person is at with interviewing. I try to make sure that I am able to touch base with each person so that I can find out how she or he is responding to the training, to get some notion of how each person is doing.

Q: How do you begin a session, a training session?

A: I believe it’s important to begin with enthusiasm, to generate some excitement about interviewing.

Earlier in this chapter, I discussed the importance of distinguishing different kinds of questions and answers: behavior, opinion, knowledge, feelings (see Exhibit 7.7, p. 445). In the interaction above, the interviewer is asking descriptive, behavioral questions. The responses, however, are about beliefs and hopes. The answers do not actually describe what happened. Rather, they describe what the interviewee thinks ought to happen (opinions). Since the interviewer wants behavioral data, it is necessary to first recognize that the responses are not providing the kind of data desired, and then to ask appropriate follow-up questions that will lead to behavioral responses, something like this:

Interviewer: Okay, you try to establish contact with each person and generate enthusiasm at the beginning. What would help me now is to have you actually take me into a training session. Describe for me what the room looks like, where the trainees are, where you are, and tell me what I would see and hear if I were right there in that session. What would I see you doing? What would I hear you saying? What would I see the trainees doing? What would I hear the trainees saying? Take me into a session so that I can actually experience it, if you will.

It is the interviewer’s responsibility to work with the person being interviewed to facilitate the desired kind of responses. At times, it may be necessary to give very direct feedback about the difference between the response given and what is desired.

Interviewer: I understand what you try to do during a training session—what you hope to accomplish and stimulate. Now I’d like you to describe to me what you actually do, not what you expect, but what I would actually see happening if I were present at the session.

It’s not enough to simply ask a well-formed and carefully focused initial question. Neither is it enough to have a well-planned interview with appropriate basic questions. The interviewer must listen actively and carefully to responses to make sure that the interview is working. I’ve seen many well-written interviews that have resulted in largely useless data because the interviewer did not listen carefully and thus did not recognize that the responses were not providing the information needed. The first responsibility, then, in facilitating the interview interaction is knowing what kind of data you are looking for and managing the interview so as to get quality responses.

Verbal and Nonverbal Feedback

Giving appropriate feedback to the interviewee is essential in pacing an interview and maintaining control of the interview process. Head nodding, taking notes, “uh-huhs,” and silent probes (remaining quiet when a person stops talking to let him or her know that you’re waiting for more) are all signals to the person being interviewed that the responses are on the right track. These techniques encourage greater depth in responses, but you also need skill and techniques to stop a highly verbal respondent who gets off the track. The first step in stopping the long-winded respondent is to cease giving the usual cues that encourage talking: stop nodding the head, interject a new question as soon as the respondent pauses for breath, stop taking notes, or call attention to the fact that you’ve stopped taking notes by flipping the page of the writing pad and sitting back, waiting. When these nonverbal cues don’t work, you simply have to interrupt the long-winded respondent.

SIDEBAR

SKILLFUL AND EFFECTIVE INTERVIEWING

Anyone who uses interviewing in their evaluation work—and, let’s face it, most of us do—learns over time the importance of following a fruitful line of questioning:

Focused but not narrow; flexible but not directionless.

We need to make sure that our interview (1) goes after what’s most important; (2) doesn’t go off on tangents; (3) keeps the interviewee engaged; and (4) doesn’t get prematurely evaluative by throwing in conclusions while still gathering the evidence.

—E. Jane Davidson and Patricia Rogers

Genuine Evaluation Blog (January 4, 2013)

Let me stop you here for a moment. I want to make sure I fully understand something you said earlier. (Then ask a question aimed at getting the response more targeted.)

or

Let me ask you to stop for a moment because some of what you’re talking about now I want to get later in the interview. First, I need to find out from you. . . .

Interviewers are sometimes concerned that it is impolite to interrupt an interviewee. It certainly can be awkward, but when done with respect and sensitivity, the interruption can actually help the interview. It is both patronizing and disrespectful to let the respondent run on when no attention is being paid to what is said. It is respectful of both the person being interviewed and the interviewer to make good use of the short time available to talk. It is the responsibility of the interviewer to help the interviewee understand what kind of information is being requested and to establish a framework and context that makes it possible to collect the right kind of information.

Reiterating the Purpose of the Interview as a Basis for Focus and Feedback

Helping the interviewee understand the purpose of the overall interview, and the relationship of particular questions to that overall purpose, conveys important information that goes beyond simply asking questions. While the reason for asking a particular question may be absolutely clear to the interviewer, don’t assume it’s clear to the respondent. You communicate respect for the persons being interviewed by giving them the courtesy of explaining why the questions are being asked. Understanding the purpose of a question will increase the motivation of the interviewee to respond openly and in detail.

The overall purpose of the interview is conveyed in an opening statement. Specific questions within the interview should have a connection to that overall purpose. (We’ll deal later with issues of informed consent and protection of human subjects in relation to opening statements of purpose. The focus here is on communicating purpose to improve responses. Later, we’ll review the ethical issues related to informing interviewees about the study’s purpose.) While the opening statement at the beginning of an interview provides an overview about the purpose of the interview, it will still be appropriate and important to explain the purpose of particular questions at strategic points throughout the interview. Here are two examples from evaluation interviews.

• This next set of questions is about the program staff. The staff has told us that they don’t really get a chance to find out how people in the program feel about what they do, so this part of the interview is aimed at giving them some direct feedback. But as we agreed at the beginning, the staff won’t know who said what. Your responses will remain confidential.

• This next set of questions asks about your background and experiences before you entered this program. The purpose of these questions is to help us find out how people with varying backgrounds have reacted to the program.

The One-Shot Question

Informal, conversational interviewing typically takes place as a natural part of fieldwork. It is opportunistic and often unscheduled. A chance to talk with someone arises, and the interview is under way. More structured and scheduled interviewing takes place by way of formal appointments and site visits. Yet the best-laid plans for scheduled interviews can go awry. You arrive at the appointed time and place only to find that the person to be interviewed is unwilling to cooperate or needs to run off to take care of some unexpected problem. When faced with such a situation, it is helpful to have a single, one-shot question in mind to salvage at least something. This one-shot question is the one you ask if you are going to get only a few minutes with the interviewee.

For an agricultural extension needs assessment, I was interviewing farmers in rural Minnesota. The farmers in the area were economically distressed, and many of them felt alienated from politicians and professionals. I arrived at a farm for a scheduled interview, but the farmer refused to cooperate. At first, he refused to even come out of the barn to call off the dogs surrounding my truck. Finally, he appeared and said,

I don’t want to talk to you tonight. I know I said I would, but the wife and I had a tiff and I’m tired. I’ve always helped with your government surveys. I fill out all the forms the government sends. But I’m tired of it. No more. I don’t want to talk.

I had driven a long way to get this interview. The fieldwork was tightly scheduled, and I knew that I would not get another shot at this farmer, even if he later had a change of heart. And I didn’t figure it would help much to explain that I wasn’t from the government. Instead, to try and salvage the situation, I took my one-shot question, a question stimulated by his demeanor and overt hostility.

I’m sorry I caught you at a bad time. But as long as I’m here, let me ask you just one quick question; then, I’ll be on my way. Is there anything you want to tell the bastards in Saint Paul?

He hesitated for just a moment, grinned, and then launched into a tirade that turned into a full, two-hour interview. I never got out of the truck, but I was able to cover the entire interview (though without ever referring to or taking out the written interview schedule). At the end of this conversational interview, which had fully satisfied my data collection needs, he said, “Well, I’ve enjoyed talkin’ with you, and I’m sorry about refusin’ to fill out your form. I just don’t want to do a survey tonight.”

I told him I understood and asked him if I could use what he had told me as long as he wasn’t identified. He readily agreed, having already signed the consent form when we set up the appointment. I thanked him for the conversation. My scheduled, structured interview had become an informal, conversational interview developed from a last-ditch, one-shot question.

Here’s a different example. The story is told of a young ethnographer studying a village that had previously been categorized in anthropological studies as aggressive and war oriented. He sat outside the school at the end of the day and asked each boy who came out his one-shot, stupid, European question: “What do men do?” The responses he obtained overwhelmingly referred to farming and fishing and almost none to warfare. In one hour, he had a totally different view of the society from that portrayed by previous researchers.

The Final or Closing Question

In the spirit of open-ended interviewing, it’s important in qualitative interviewing to provide an opportunity for the interviewee to have the final say: “That covers the things I wanted to ask. Anything you care to add?” I’ve gotten some of my richest data from this question with interviewees taking me in directions that had never occurred to me to pursue.

In a routine evaluation of an adult literacy program, we were focused on what learning to read meant to the participants. At the end of the interview, I asked, “What should I have asked you that I didn’t think to ask?” Without hesitation one young Hispanic woman replied, “About sexual harassment.” The program had a major problem that the evaluation ended up exposing.

Experienced qualitative methodologist David Morgan (2012) offers this advice about bringing closure to an interview:

I think there are two basic points [for] closure: first [is] to avoid ending with the sense “OK, I’ve got my data, so long.” The second is to give the participant[s] a chance to express their thoughts in their own voice.

The question I use most often is: “Out of all the things we’ve talked about today—or maybe some topics we’ve missed—what should I pay most attention to? What should I think about when I read your interview?” (p. 1)

It can also be helpful to leave interviewees with a way to contact you in case they want to add something that they forgot to mention, or clarify some point.

This way, you don’t “close the door,” but leave it ajar—and it is up to them whether they want to open it again. I think it also gives the participants some power in the process—to decide when the process is over, rather than the researcher doing so. (Gustason, 2012, p. 1)

Beyond Technique

We’ve been looking with some care at different kinds of questions in an effort to polish interviewing technique and increase question precision. Below I’ll offer suggestions about the mechanics of managing data collection—things like recording the data and taking notes. Before moving on, though, it may be helpful to step back and remember the larger purpose of qualitative inquiry so that we don’t become overly technique oriented. You’re trying to understand a person’s world and worldview. That’s why you ask focused questions in a sensitive manner. You’re hoping to elicit relevant answers that are meaningful and useful in understanding the interviewee’s perspective. That’s basically what interviewing is all about.

SIDEBAR

HOW MUCH TECHNIQUE?

Sociologist Peter Berger is said to have told his students, “In science as in love, a preoccupation with technique may lead to impotence.”

To which Halcolm adds, “In love as in science, ignoring technique reduces the likelihood of attaining the desired results. The path of wisdom joins that of effectiveness somewhere between the outer boundaries of ignoring technique and being preoccupied with it.”

This chapter has offered ideas about how to do quality interviews, but ultimately, no recipe can prescribe the single right way of interviewing. No single correct format exists that is appropriate for all situations, and no particular way of wording questions will always work. The specific interview situation, the needs of the interviewee, and the personal style of the interviewer all come together to create a unique situation for each interview. Therein lies the challenge of qualitative interviewing.

Maintaining focus on gathering information that is useful, relevant, and appropriate requires concentration, practice, and the ability to separate that which is foolish from that which is important. In his great novel Don Quixote, Cervantes (1964) describes a scene in which his uneducated sidekick Sancho is rebuked by the knight errant Don Quixote for trying to impress his cousin by repeating deeply philosophical questions and answers that he has heard from others, all the while trying to make the cousin think that these were Sancho’s own insights.

“That question and answer,” said Don Quixote, “are not yours, Sancho. You have heard them from someone else.”

“Whist, sir,” answered Sancho, “if I start questioning and answering, I shan’t be done till tomorrow morning. Yes, for if it’s just a matter of asking idiotic questions and giving silly replies, I needn’t go begging help from the neighbors.”

“You have said more than you know, Sancho,” said Don Quixote, “for there are some people who tire themselves out learning and proving things that, once learned and proved, don’t matter a straw as far as the mind or memory is concerned.” (p. 682)

Regardless of which interview strategy is used—the informal conversational interview, the interview guide approach, or a standardized open-ended interview—the wording of questions will affect the nature and quality of the responses received. So will careful management of the interview process. Constant attention to both content and process, with both informed by the purpose of the interview, will reduce the extent to which, in Cervantes’s words, researchers and evaluators “tire themselves out learning and proving things that, once learned and proved, don’t matter a straw as far as the mind or memory is concerned.”

Mechanics of Gathering Interview Data

Recording the Data

No matter what style of interviewing you use and no matter how carefully you word questions, it all comes to naught if you fail to capture the actual words of the person being interviewed. The raw data of interviews are the actual quotations spoken by interviewees. Nothing can substitute for these data—the actual things said by real people. That’s the prize sought by the qualitative inquirer.

Data interpretation and analysis involve making sense out of what people have said, looking for patterns, putting together what is said in one place with what is said in another place, and integrating what different people have said. These processes occur primarily during the analysis phase, after the data have been collected. During the interviewing process itself—that is, during the data collection phase—the purpose of each interview is to record as fully and fairly as possible that particular interviewee’s perspective. Some method for recording the verbatim responses of people being interviewed is therefore essential.

As a good hammer is essential to fine carpentry, a good recorder is indispensable to fine fieldwork. Recorders do not “tune out” conversations, change what has been said because of interpretation (either conscious or unconscious), or record words more slowly than they are spoken. (Recorders do, however, malfunction.) Obviously, a researcher doing conversational interviews as part of covert fieldwork does not walk around with a recorder. However, most interviews are arranged in such a way that recorders are appropriate if properly explained to the interviewee:

I’d like to record what you say so I don’t miss any of it. I don’t want to take the chance of relying on my notes and maybe missing something that you say or inadvertently changing your words somehow. So, if you don’t mind, I’d very much like to use the recorder. If at any time during the interview you would like to stop the recorder, just let me know.

When it is not possible to use a recorder because of some sensitive situation, interviewee request, or recorder malfunction, notes must become much more thorough and comprehensive. It becomes critical to gather actual quotations. When the interviewee has said something that seems particularly important or insightful, it may be necessary to say,

I’m afraid I need to stop you at this point so that I can get down exactly what you said because I don’t want to lose that particular quote. Let me read back to you what I have and make sure it is exactly what you said.

This point emphasizes again the importance of capturing what people say in their own words.

But such verbatim note taking has become the exception now that most people are familiar and comfortable with recorders. More than just increasing the accuracy of data collection, using a recorder permits the interviewer to be more attentive to the interviewee. If you tried to write down every word said, you’d have a difficult time responding appropriately to interviewee needs and cues. Ironically, verbatim note taking can interfere with listening attentively. The interviewer can get so focused on note taking that the person speaking gets only secondary attention. Every interview is also an observation, and having one’s eyes fixed on a note pad is hardly conducive to careful observation. In short, the interactive nature of in-depth interviewing can be seriously affected by an attempt to take verbatim notes. Lofland (1971) has made this point forcefully:

One’s full attention must be focused upon the interviewee. One must be thinking about probing for further explication or clarification of what he is now saying; formulating probes linking up current talk with what he has already said; thinking ahead to putting in a new question that has now arisen and was not taken account of in the standing guide (plus making a note at that moment so one will not forget the question); and attending to the interviewee in a manner that communicates to him that you are indeed listening. All of this is hard enough simply in itself. Add to that the problem of writing it down—even if one takes shorthand in an expert fashion—and one can see that the process of note-taking in the interview decreases one’s interviewing capacity. Therefore, if conceivably possible, record; then one can interview. (p. 89)

SIDEBAR

ADVICE FROM AN EXPERIENCED JOURNALIST ON RECORDING AN INTERVIEW

Whatever you do, don’t rely on memory. Don’t even imagine that you will be able to remember verbatim in the evening what people said during the day. And don’t squirrel notes in a bathroom—that is, run off to the john and write surreptitiously what someone said back there with the cocktails. From the start, make clear what you are doing and who will publish what you write. Display your notebook as if it were a fishing license. While the interview continues, the notebook may serve other purposes, surpassing the talents of a tape recorder. As you scribble away, the interviewee is, of course, watching you. Now, unaccountably, you slow down, and even stop writing, while the interviewee goes on talking. The interviewee becomes nervous, tries harder, and spills out the secrets of a secret life, or maybe just a clearer and more quotable version of what was said before. Conversely, if the interviewee is saying nothing of interest, you can pretend to be writing, just to keep the enterprise moving forward.

—John McPhee (2014, p. 50)

Distinguished journalist

So if verbatim note taking is neither desirable nor really possible, what kinds of notes are taken during a recorded interview?

Taking Notes During Interviews

The use of the recorder does not eliminate the need for taking notes, but you take strategic and focused notes, not verbatim notes. Notes can serve at least four purposes:

1. Notes taken during the interview can help the interviewer formulate new questions as the interview moves along, particularly where it may be appropriate to check out something said earlier.

2. Looking over field notes before transcripts are done helps make sure the inquiry is unfolding in the hoped-for direction and can stimulate early insights that may be relevant to pursue in subsequent interviews while still in the field—the emergent nature of qualitative inquiry.

3. Taking notes will facilitate later analysis, including locating important quotations from the recording.

4. Notes are a backup in the event the recorder has malfunctioned or, as I’ve had happen, a recording is erased inadvertently during transcription.

When a recorder is being used during the interview, notes will consist primarily of key phrases, lists of major points made by the respondent, and key terms or words shown in quotation marks that capture the interviewee’s own language. It is useful to develop some system of abbreviations and informal shorthand to facilitate taking notes, for example, in an interview on leadership writing “L” instead of the full word. Some important conventions along this line include (a) using quotation marks only to indicate full and actual quotations; (b) developing some mechanism for indicating interpretations, thoughts, or ideas that may come to mind during the interview, for example, the use of brackets to set off one’s own ideas from those of the interviewee; and (c) keeping track of questions asked as well as answers received. Questions provide the context for interpreting answers.
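
To make these note-taking conventions concrete, here is a minimal sketch in Python of one way structured interview notes might be organized. The field names and the example entry are invented for illustration; they are not from the text.

```python
# A minimal sketch of structured interview notes reflecting the
# conventions above: quotation marks reserved for verbatim speech,
# brackets for the interviewer's own ideas, and questions kept
# alongside answers as context. All names here are hypothetical.

from dataclasses import dataclass, field
from typing import List

@dataclass
class NoteEntry:
    question: str                                          # question asked (context for the answer)
    key_points: List[str] = field(default_factory=list)    # paraphrased major points
    quotes: List[str] = field(default_factory=list)        # verbatim quotations only
    interpretations: List[str] = field(default_factory=list)  # [interviewer's own ideas]

notes = [
    NoteEntry(
        question="What does leadership mean in this program?",
        key_points=["L = shared responsibility", "L emerged on day 3 of the hike"],
        quotes=["we all had to lead at some point"],
        interpretations=["[possible link to earlier comment about trust]"],
    )
]
```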

Note taking serves functions beyond the obvious one of taking notes. Note taking helps pace the interview by providing nonverbal cues about what’s important, providing feedback to the interviewee about what kinds of things are especially “noteworthy”—literally. Conversely, the failure to take notes may indicate to the respondent that nothing of particular importance is being said. And don’t start making out your work “to do” list while someone is droning on endlessly. The person might think that you’re taking notes. Enchanted, he or she will keep on talking. The point is that taking notes affects the interview process. Be mindful of those effects.

After the Interview: Quality Control

The period after an interview or observation is critical to the rigor and validity of qualitative inquiry. This is a time for guaranteeing the quality of the data.

Immediately after a recorded interview, check the recording to make sure it worked. If, for some reason, a malfunction occurred, you should immediately make extensive notes of everything that you can remember. Even if the recorder functioned properly, you should go over the interview notes to make certain that they make sense and to uncover areas of ambiguity or uncertainty. If you find things that don’t quite make sense, check back with the interviewee for clarification as soon as possible. This can often be done over the telephone. In my experience, people who are interviewed appreciate such a follow-up because it indicates the seriousness with which the interviewer is taking their responses. Guessing the meaning of a response is unacceptable; if there is no way of following up the comments with the respondent, then those areas of vagueness and uncertainty simply become missing data.

The immediate postinterview review is a time to record details about the setting and your observations about the interview. Where did the interview occur? Under what conditions? How did the interviewee react to questions? How well do you think you did asking questions? How was the rapport?

Answers to these questions establish a context for interpreting and making sense of the interview later. Reflect on the quality of information received. To what extent did you find out what you really wanted to find out in the interview? Note weaknesses and problems: poorly worded questions, wrong topics, poor rapport. Reflect on these issues, and make notes on the interview process while the experience is still fresh in your mind. These process notes will inform the methodological section of your research report, evaluation, or dissertation.
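
One way to discipline this postinterview routine is to capture the same questions in a standard record, as in the following sketch; all field names and example values are hypothetical.

```python
# A hypothetical postinterview quality-control record mirroring the
# review questions above: setting, conditions, rapport, ambiguities
# to clarify with the interviewee, and process notes for the
# methodological section.

from dataclasses import dataclass, field
from typing import List

@dataclass
class PostInterviewReview:
    interview_id: str
    setting: str                   # where the interview occurred, under what conditions
    recording_ok: bool             # recorder checked immediately afterward
    rapport: str                   # e.g., "good after a slow start"
    ambiguities: List[str] = field(default_factory=list)  # follow up with interviewee
    process_notes: str = ""        # poorly worded questions, wrong topics, etc.

review = PostInterviewReview(
    interview_id="site-A-03",
    setting="participant's office, frequent interruptions",
    recording_ok=True,
    rapport="good after a slow start",
    ambiguities=["meaning of 'the incident last spring'"],
    process_notes="question 4 poorly worded; revise before the next interview",
)
```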

Reflection as Qualitative Data

This period after an interview or observation is a critical time for reflection and elaboration. It is a time of quality control to guarantee that the data obtained will be useful. This kind of postinterview ritual requires discipline. Interviewing and observing can be exhausting, so much so that it is easy to forego this time of reflection and elaboration, put it off, or neglect it altogether. To do so is to seriously undermine the rigor of qualitative inquiry. Interviews and observations should be scheduled so that sufficient time is available for data clarification, elaboration, and evaluation afterward. Where a team is working together, the whole team needs to meet regularly to share observations and debrief together. This is the beginning of analysis, because, while the situation and data are fresh, insights can emerge that might otherwise be lost. Ideas and interpretations that emerge following an interview or observation should be written down and clearly marked as emergent, field-based insights to be further reviewed later.

I think about the time after an interview as a period for postpartum reflection, a time to consider what has been revealed and what has been birthed. In eighteenth-century Europe, the quaint phrase “in an interesting condition” became the genteel way of referring to an expectant mother in “polite company.” The coming together of an interviewer and an interviewee makes for “an interesting condition.” The interviewer is certainly expectant, as may be the interviewee. What emerged? What was created? Did it go okay? Is some form of triage necessary? As soon as a child is born, a few basic observations are made, and tests are performed to make sure that everything is alright. That’s what you’re doing right after an interview—making sure that everything came out okay.

Such an analogy may be a stretch for thinking about a postinterview debrief, but interviews are precious to those who hope to turn them into dissertations, contributions to knowledge, and evaluation findings. It’s worth managing the interview process to allow time to make observations about, reflect on, and learn from each interview.

Up to this point, we’ve been focusing on techniques to enhance the quality of the standard one-on-one interview. We turn now to some important variations in interviewing and specialized approaches. We begin with interviewing groups.

MODULE

61 Interviewing Groups and Cross-Cultural Interviewing

Human beings are social creatures. We are social not just in the trivial sense that we like company, and not just in the obvious sense that we each depend on others. We are social in a more elemental way: simply to exist as a normal human being requires interaction with other people.

—Atul Gawande
Surgeon and journalist

Group interviews take a variety of forms and serve diverse purposes. Focus groups, interviews with naturally occurring groups, family interviews, and documenting consciousness-raising groups offer a variety of opportunities to engage in qualitative inquiry. What these group interviewing approaches have in common is the recognition that, because we are social beings, qualitative inquiry with groups makes data collection itself a social experience. That social experience is presumed to increase the meaningfulness and validity of findings because our perspectives are formed and sustained in social groups. Our interactions with each other are how we come to more deeply understand our own views, test our knowledge, get in touch with our feelings, and make sense of our behaviors.

The purpose of the inquiry drives the use of group interviews. We may use groups to get diverse perspectives or to facilitate consensus. We may interview naturally occurring groups to compare and contrast their activities and views. We may conduct group interviews in certain cross-cultural settings because people are most comfortable in groups and are not accustomed to one-on-one inquiries. In the sections that follow, we’ll examine and compare several group-interviewing approaches. Exhibit 7.14 provides an overview of 12 varieties of group interviews with different purposes and priorities.

Focus Group Interviews

A focus group is an interview with a small group of people on a specific topic. Groups typically consist of 6 to 10 people with similar backgrounds who participate in the interview for one to two hours. In a given study, a series of different focus groups will be conducted to get a variety of perspectives and increase confidence in whatever patterns emerge. Focus group interviewing was developed in recognition that many consumer decisions are made in a social context, often growing out of discussions with other people. Thus, market researchers began using focus groups in the 1950s as a way of simulating the consumer group process of decision making to gather more accurate information about consumer product preferences (Higginbotham & Cox, 1979). On the academic side, distinguished sociologist Robert K. Merton pioneered the seminal work on research-oriented focus group interviews in 1956: The Focused Interview (Merton, Fiske, & Kendall).

The focus group interview is, first and foremost, an interview. It is not a problem-solving session. It is not a decision-making group. It is not primarily a discussion, though direct interactions among participants often occur. It is an interview. The twist is that, unlike a series of one-on-one interviews, in a focus group, participants get to hear each other’s responses and to make additional comments beyond their own original responses as they hear what other people have to say. However, participants need not agree with each other or reach any kind of consensus. Nor is it necessary for people to disagree. The object is to get high-quality data in a social context where people can consider their own views in the context of the views of others.

Focus group experts Richard Krueger and Mary Anne Casey (2008) explain that a focused interview should be carefully planned to create a “permissive, nonthreatening environment,” a setting that “encourages participants to share perceptions and points of view without pressuring participants to vote or reach consensus” (p. 4). A focus group is conducted in a way that is comfortable and even enjoyable for participants as they share their ideas and perceptions. Group members influence each other by responding to the ideas and comments they hear.

The term focus group moderator may be used instead of interviewer because

this term [moderator] highlights a specific function of the interviewer—that of moderating or guiding the discussion. The term interviewer tends to convey a more limited impression of two-way communication between an interviewer and an interviewee. By contrast, the focus group affords the opportunity for multiple interactions not only between the interviewer and respondent but among all participants in the group. The focus group is not a collection of simultaneous individual interviews but rather a group discussion where the conversation flows because of the nurturing of the moderator. (Krueger, 1994, p. 100)

EXHIBIT 7.14 Twelve Varieties of Group Interviews

The combination of moderating and interviewing is sufficiently complex that Krueger recommends that teams of two conduct the groups so that one person can focus on facilitating the group while the other takes detailed notes and deals with mechanics like recorders, cameras, and any special needs that arise, for example, someone needing to leave early or becoming overwrought. Even when the interview is recorded, good notes help in sorting out who said what when the recording is transcribed.

SIDEBAR

GIVING VOICE TO MARGINALIZED PEOPLE THROUGH FOCUS GROUPS

By bringing together people who share a similar background, focus groups create the opportunity for participants to engage in meaningful conversations about the topics that researchers wish to understand. This ability to learn about participants’ perspectives by listening to their conversations makes focus groups especially useful for hearing from groups whose voices are often marginalized within the larger society. Focus groups are thus widely used in studies of ethnic and cultural minority groups, along with studies of sexuality and substance use. . . .

For evaluation research, focus groups are used in both preliminary phases, such as needs assessment or program development, and in follow-up or summative evaluation, to hear about the participants’ experiences with a program. (Morgan, 2008a, p. 352)

SIDEBAR

THE QUESTIONING ROUTE AND GROUP INTERACTIONS

The series of questions used in a focused interview—the questioning route—looks deceptively simple. Typically, a focused interview will include about a dozen questions for a two-hour group. If you asked these questions in an individual interview, the respondent could probably tell you everything he or she could think of related to the questions in just a few minutes. But when these questions are asked in a group environment, the discussion can last for several hours. Part of the reason is in the nature of the questions and the cognitive processes of humans. As participants answer questions, their responses spark ideas from other participants. Comments provide mental cues that trigger memories or thoughts of other participants—cues that help explore the range of perceptions. (Krueger & Casey, 2008, p. 35)

©2002 Michael Quinn Patton and Michael Cochran

Postmodern Unfocused Group Interview

Strengths of Focus Groups

Focus group interviews have several advantages for qualitative inquiry.

• Focus groups offer cost-effective data collection: In one hour, you can gather information from eight people instead of only one, significantly increasing sample size. “Focus group interviews are widely accepted within marketing research because they produce believable results at a reasonable cost” (Krueger, 1994, p. 8).

• Focus groups highlight diverse perspectives: Focus groups should be homogeneous in terms of background and not attitudes. Differences of opinion “lend ‘bite’ to focus group discussions. Provided that we are not cavalier about mixing together people who are known to have violently differing perspectives on emotive issues, a little bit of argument can go a long way towards teasing out what lies beneath ‘opinions’ and can allow both focus group facilitators and participants to clarify their own and others’ perspectives. Perhaps, in some contexts, this can even facilitate greater mutual understanding. In terms of generating discussion, a focus group consisting of people in agreement about everything would make for very dull conversation and data lacking in richness” (Barbour, 2007, p. 59).

• Interactions among participants enhance data quality: Participants tend to provide checks and balances on each other, which weeds out false or extreme views (Krueger & Casey, 2000, 2008). Moreover, how people talk about a topic is important, not just what they say about it. The researcher attends “not only the content of the conversation, but also what the conversation situation is like in terms of emotions, tensions, interruptions, conflicts and body language . . . ; how people tell and retell different narratives or how they draw on and reproduce discourses in interaction” (Eriksson & Kovalainen, 2008, p. 175).

• Silences and topics avoided are revealing: What is not said in focus groups and what precipitates periods of silence can generate fruitful insights.

• Analysis unfolds as the interview unfolds: The extent to which there is a relatively consistent, shared view or great diversity of views can be quickly assessed.

• Focus groups tend to be enjoyable to participants: They draw on our human tendencies as social animals to enjoy interacting with others.

Limitations of Focus Groups

Focus groups, like all forms of data collection, also have limitations.

• The number of questions that can be asked is greatly restricted in the group setting.

• The available response time for any particular individual is constrained by the need to hear from everyone. A rule of thumb: With eight people and an hour for the group, plan to ask no more than 10 major questions (see the time-budget sketch following this list).

• Facilitating and conducting a focus group interview requires considerable group process skill beyond simply asking questions. The moderator must manage the interview so that it’s not dominated by one or two people, and so that those participants who tend not to be highly verbal are able to share their views.

• Those who realize that their viewpoint is a minority perspective may not be inclined to speak up and risk negative reactions.

• Focus groups appear to work best when people in the group, though sharing similar backgrounds, are strangers to one another. The dynamics are quite different and more complex when participants have prior established relationships.

• Controversial and highly personal issues are poor topics for focus groups (Kaplowitz, 2000).

• Confidentiality cannot be assured in focus groups. Indeed, in market research, focus groups are often videotaped so that marketers can view them and see for themselves the emotional intensity of people’s responses.

• “The focus group is beneficial for identification of major themes but not so much for the micro-analysis of subtle differences” (Krueger, 1994, p. x).

• Compared with most qualitative fieldwork approaches, focus groups typically have the disadvantage of taking place outside the natural settings where social interactions normally occur (Madriz, 2000, p. 836).
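
The time-budget rule of thumb mentioned in the list above is simple arithmetic, sketched below. The average response length per person is an assumed figure of my own, not one from the text.

```python
# A rough time-budget sketch: divide session time by group size times
# an assumed average response length to estimate how many major
# questions the group can absorb if everyone responds to each.

def max_major_questions(session_minutes: int, group_size: int,
                        avg_response_minutes: float = 0.75) -> int:
    minutes_per_question = group_size * avg_response_minutes
    return int(session_minutes / minutes_per_question)

# Eight people and an hour: 60 / (8 * 0.75) = 10 major questions,
# matching the rule of thumb above.
print(max_major_questions(60, 8))  # -> 10
```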

SIDEBAR

LEARNING IN FOCUS GROUPS

Focus group participants sometimes engage in problem solving as they respond to questions. Sharing what they think and know, participants generate new knowledge as a group that can affect individual knowledge and beliefs, and even subsequent behavior. Expressing disagreement can also stimulate learning as participants challenge each other, defend their own views, and sometimes modify their viewpoints. Thus, while the quotations from focus groups constitute evaluation findings, the interactions and learnings in the group can constitute learning among the participants (Wiebeck & Dahlgren, 2007).

The Focus in Focus Groups

As these strengths and limitations suggest, the power of focus groups resides in their being focused. The focused questions typically seek reactions to something (a product, a program, an idea, or a shared experience) rather than exploring complex life issues in depth and detail. The groups are focused by being formed homogeneously. The facilitation is focused, keeping responses on target. Interactions among participants are focused, staying on topic. Use of time must be focused, because the time passes quickly. Despite some of the limitations introduced by the necessity of sharp focus, applications of focus groups are widespread and growing (Fontana & Frey, 2000; Krueger & Casey, 2000, 2008; Madriz, 2000; Morgan, 1988, 2008a).

Focus groups have entered into the repertoire of techniques for qualitative researchers and evaluators involved in participatory studies with coresearchers. For community research, collaborative action research, and participatory evaluations, local people who are not professional researchers are being successfully trained and supported to do focus groups (King & Stevahn, 2013; Krueger & King, 1997). Rossman and Rallis (1998) have done focus groups effectively with children in elementary schools.

Because the focus group “is a collectivistic rather than an individualistic research method,” focus groups have also emerged as a collaborative and empowering approach in feminist research (Madriz, 2000, p. 836). Sociologist and feminist researcher Esther Madriz explains,

Focus groups allow access to research participants who may find one-on-one, face-to-face interaction “scary” or “intimidating.” By creating multiple lines of communication, the group interview offers participants . . . a safe environment where they can share ideas, beliefs, and attitudes in the company of people from the same socioeconomic, ethnic, and gender backgrounds. . . .

For years, the voices of women of color have been silenced in most research projects. Focus groups may facilitate women of color “writing culture together” by exposing not only the layers of oppression that have suppressed these women’s expressions, but the forms of resistance that they use every day to deal with such oppressions. In this regard, I argue that focus groups can be an important element in the advancement of an agenda of social justice for women, because they can serve to expose and validate women’s everyday experiences of subjugation and their individual and collective survival and resistance strategies. (Madriz, 2000, pp. 835–836)

I experienced firsthand the potential of focus groups to provide safety in numbers for people in vulnerable situations. I conducted focus groups among low-income recipients of legal aid as one technique in an evaluation of services provided to people in a large public housing project. As the interview opened, the participants, who came from different sections of the project and did not know each other, were reserved and cautious about commenting on the problems they were experiencing. As one woman shared in vague terms a history of problems she had had in getting needed repairs, another woman jumped in and supported her saying, “I know exactly what you’re talking about and who you’re talking about. It’s really bad. Bad. Bad. Bad.” She then shared her story. Soon others were telling similar stories, often commenting that they had no idea so many other people were having the same kind of problems. Several also commented at the end of the interview that they would have been unlikely to share their stories with me in a one-on-one interview because they would have felt individually vulnerable, but they drew confidence and a sense of safety and camaraderie from being part of the interview group.

On the other hand, Kaplowitz (2000) studied whether sensitive topics were more or less likely to be discussed in focus groups versus individual interviews. Ninety-seven year-round residents from the Chelem Lagoon region in Yucatan, Mexico, participated in one of 12 focus groups or 19 individual in-depth interviews. A professional moderator used the same interview guide to get reactions to a shared mangrove ecosystem. The 31 sessions generated more than 500 pages of transcripts that were coded for the incidence of discussions of sensitive topics. The findings showed that the individual interviews were 18 times more likely to raise socially sensitive discussion topics than the focus groups. Additionally, the study found the two qualitative methods, focus groups and individual interviews, to be complementary to each other, each yielding somewhat different information.

Internet Focus Groups

Internet and social media platforms have created new opportunities for focus groups (Krueger & Casey, 2012; Lee & O’Brien, 2012). Walston and Lissitz (2000) compared the reactions of Internet-based versus face-to-face focus groups discussing academic dishonesty and found that the Internet environment appeared to reduce participants’ anxiety about what the moderator thought of them, making it easier for them to share embarrassing information.

Interviews With Existing Groups

Not all group interviews are of the focus group variety. During fieldwork, unstructured conversational interviews may occur in groups that are not at all focused. In evaluating a community leadership program, much of the most important information came from talking with groups of people informally during breaks from the formal training. During the fieldwork for the wilderness education program described extensively in the previous chapter, informal group interviews became a mainstay of data collection, sometimes with just two or three people, and sometimes in dinner groups as large as 10.

Parameswaran (2001) found important differences in the data she could gather in group versus individual interviews during her fieldwork in India. Moreover, she found that she had to do group interviews with the young women before she could interview them one-on-one. She was studying the reading of Western romance novels among female college students in India. She reports,

To my surprise, several young women did not seem happy or willing to spend time with me alone right away. When I requested the first group of women to meet me on an individual basis and asked if they could meet me during their breaks from classes, I was surprised and uncomfortable with the loud silence that ensued. . . .

When I faced similar questions from another group of women who also appeared to resist my appeals to meet with them alone, I realized that I had arrogantly encroached into their intimate, everyday rituals of friendship. . . . Knowing well that without collecting data from these possibly recalcitrant subjects, I had no project, I reluctantly changed my plans and agreed to accept their demands. I began talking to them in groups first, and gradually, more than 30 women agreed to meet me in individual sessions. Later, I discovered that they preferred to respond to me as a group first because they were wary about the kinds of questions I planned to ask about their sexuality and romance reading. The more public nature of group discussions meant that it was a safe space where I might hesitate to ask intrusive and personal questions. . . .

Group interviews in which women spoke about love, courtship, and heterosexual relations in Western romance fiction became opportunities to debate, contradict, and affirm their opinions about a range of gendered social issues in India such as sexual harassment of women in public places, stigmas associated with single women, expectations of women to be domestic, pressures on married women to obey elders in husbands’ families, and the merits of arranged versus choice/love marriages. . . . In contrast to these collective sessions where young women’s discussions primarily revolved around gender discrimination toward women as a group, in individual interviews, many women were much more talkative about restrictions on their sexuality, and several women shared their frustrations with immediate, everyday problems pertaining to family members’ control over their movements. (pp. 84–86)

Parameswaran’s (2001) work illustrates well the different kinds of data that can be collected from groups versus individuals, with both kinds of data being important in long-term or intensive fieldwork. I had similar experiences in Burkina Faso and Tanzania, where villagers much preferred group interviews for formal and structured interactions and the only way to get individual interviews was informally when walking somewhere or doing something with an individual. On several occasions, I tried scheduling and conducting individual interviews only to have many other people present when I arrived. Such cross-cultural differences in valuing individual versus group interactions provide a segue to the next section on cross-cultural interviewing.

Cross-Cultural Interviewing

Culture and place demand our attention not because our concepts of them are definitive or authoritative, but because they are fragile and fraught with dispute.

—Jody Berland (1997, p. 9)
Cultural studies scholar, York University, Toronto, Canada

Cross-cultural inquiries add layers of complexity to the already-complex interactions of an interview. The possibility for misunderstandings is increased significantly. Ironically, economic and cultural globalization, far from reducing the likelihood of misunderstandings, may simply make miscommunication more nuanced and harder to detect because of false assumptions about shared meanings. Whiting (1990) tellingly explored cross-cultural differences in his book You Gotta Have Wa, showing how Americans and Japanese play seemingly the same game, baseball, quite differently—and then used those differences as entrées into the two cultures.

Ethnographic interviewing has always been inherently cross-cultural but has the advantage of being grounded in long-term relationships and in-depth participant observations. More problematic, and the focus of this section, are short-term studies for theses or student exchange projects and brief evaluation site visits sponsored by international development agencies and philanthropic foundations. In the latter case, teams of a few Westerners are flown into a developing country for a week to a month to assess a project, often with counterparts from the local culture. These rapid appraisals revolve around cross-cultural interviewing and are more vulnerable to misinterpretation and miscommunication than traditional, long-term anthropological fieldwork. Examples of the potential problems, presented in the sections that follow, will hopefully help sensitize students, short-term site visitors, and evaluators to the precariousness of cross-cultural interviewing. As Rubin and Rubin (1995) have noted,

You don’t have to be a woman to interview women, or a sumo wrestler to interview sumo wrestlers. But if you are going to cross social gaps and go where you are ignorant, you have to recognize and deal with cultural barriers to communication. And you have to accept that how you are seen by the person being interviewed will affect what is said. (p. 39)


These descriptions are progressive in that each new category identifies a person more serious about the exhibit hall.

• The commuter: A person who merely uses the hall as a vehicle to get from the entry point to the exit point. . . .

• The nomad: A casual visitor, a person who is wandering through the hall, apparently open to become interested in something. The Nomad is not really sure why he or she is in the hall and not really sure that s/he is going to find anything interesting in this particular exhibit hall. Occasionally the Nomad stops, but it does not appear that the nomadic visitor finds any one thing in the hall more interesting than any other thing.

• The cafeteria type: This is the interested visitor who wants to get interested in something, and so the entire museum and the hall itself are treated as a cafeteria. Thus, the person walks along, hoping to find something of interest, hoping to “put something on his or her tray” and stopping from time to time in the hall. While it appears that there is something in the hall that spontaneously sparks the person’s interest, we perceive this visitor has a predilection to becoming interested, and the exhibit provides the many things from which to choose.

• The V.I.P.—very interested person: This visitor comes into the hall with some prior interest in the content area. This person may not have come specifically to the hall, but once there, the hall serves to remind the V.I.P.’s that they were, in fact, interested in something in that hall beforehand. The V.I.P. goes through the hall much more carefully, much slower, much more critically—that is, they move from point to point, they stop, they examine aspects of the hall with a greater degree of scrutiny and care. (pp. 10–11)

This typology of types of visitors became important in the full evaluation because it permitted analysis of different kinds of museum experiences. Moreover, the evaluators recommended that when conducting interviews to get museum visitors’ reactions to exhibits, the interview results should be differentially valued depending on the type of person being interviewed—commuter, nomad, cafeteria type, or VIP.

A different typology was developed to distinguish how visitors learn in a museum, “Museum Encounters of the First, Second, and Third Kind,” a takeoff on the popular science fiction movie Close Encounters of the Third Kind, which referred to direct human contact with visitors from outer space.

• Museum encounters of the first kind: This encounter occurs in halls that use display cases as the primary approach to specimen presentation. Essentially, the visitor is a passive observer to the “objects of interest.” Interaction is visual and may occur only at the awareness level. The visitor is probably not provoked to think or consider ideas beyond the visual display.

• Museum encounters of the second kind: This encounter occurs in halls that employ a variety of approaches to engage the visitor’s attention and/or learning. The visitor has several choices to become active in his/her participation. . . . The visitor is likely to perceive, question, compare, hypothesize, etc.

• Museum encounters of the third kind: This encounter occurs in halls that invite high levels of visitor participation. Such an encounter invites the visitor to observe phenomena in process, to create, to question the experts, to contribute, etc. Interaction is personalized and within the control of the visitor. (Wolf & Tymitz, 1978, p. 39)

Here’s a sample of a quite different classification scheme, this one developed from fieldwork by sociologist Rob Rosenthal (1994) as “a map of the terrain” of the homeless.

• Skidders: Most often women, typically in their 30s, who grew up middle or upper class but “skidded” into homelessness as divorced or separated parents

• Street people: Mostly men, often veterans, rarely married; highly visible and know how to use the resources of the street

• Wingnuts: People with severe mental problems, occasionally due to long-term alcoholism, a visible subgroup (Note to readers: Including this example and the label “wingnuts” is not an endorsement of its insensitivity. The label is offensive. Labeling is treacherous and will appear especially inappropriate when removed from the context in which it was generated; in this case, the term was sometimes used among homeless people themselves.)


• Transitory workers: People with job skills and a history of full-time work who travel from town to town, staying months or years in a place and then heading off to greener pastures

EXHIBIT 8.10 Ten Types of Qualitative Analysis

These varying types of qualitative analysis are distinct but not mutually exclusive. An analysis can include, and typically does include, several approaches. It is worth distinguishing them because they involve different ways of approaching the challenge of making sense of qualitative data.


SIDEBAR

COMPLEMENTARY PAIRS AS CONCEPTUALLY SENSITIZING CONTINUA

Nobel laureate Niels Bohr’s maxim is as follows:

Contraria sunt complementa (“Contraries are complementary”)

Contraries are not contradictory: . . . We replace all related but slightly different terms like contraries, polar opposites, duals, opposing tensions, binary oppositions, dichotomies, and the like with the all-encompassing term “complementary pairs.” (Kelso & Engstrom, 2006, p. 7)

Sampling of Complementary Pairs From Various Fields of Endeavor

Anatomy: organ/organism; form/function

Art: foreground/background; original/reproduction

Culture: permissible/taboo; public/private

Economics: boom/bust; equilibrium/disequilibrium

Education: knowledge/ignorance; student/teacher

Entertainment: amateur/professional; comedy/tragedy

Mathematics: problem/solution; finite/infinite

Medicine: curative/palliative; invasive/noninvasive; prevention/cure

Military: ally/enemy; defensive/offensive; peace/war

Mythology: hero/villain; beauty/ugliness

Philosophy: faith/reason; physical/spiritual; truth/falsehood

Politics: conservative/liberal; rights/responsibilities

Psychology: abnormal/normal; extraversion/introversion

Sociology: folk/urban; general/particular; task oriented/process oriented; social/antisocial

SOURCE: Kelso and Engstrom (2006, pp. 257–262).

Categories of How Homeless People Spend Their Time:

• Hanging out
• Getting by
• Getting ahead

As these examples illustrate, the first purpose of typologies is to distinguish aspects of an observed pattern or phenomenon descriptively. Once identified and distinguished, these types can later be used to make interpretations, and they can be related to other observations to draw conclusions, but the first purpose is description based on an inductive analysis of patterns that appear in the data. Kenneth Bailey’s (1994) classic on typologies and taxonomies in qualitative analysis remains an excellent resource.

Summary of Pattern, Theme, and Content Analysis

Purpose drives analysis. Design frames analysis. Purposeful sampling strategies determine the unit of analysis. Different analytical approaches will yield different kinds of findings based on distinct analysis procedures and priorities. There is no single right way to engage in qualitative analysis. Distinguishing signal from noise (detecting patterns and identifying themes) results from immersion in the data, systematic engagement with what the data reveal, and judgment about what is meaningful and useful. The next module gets into some of the nitty-gritty operational processes involved in analysis. Exhibit 8.10 presents the 10 analytical approaches reviewed in this module.


MODULE

68 The Intellectual and Operational Work of Analysis

Classification is Ariadne’s clue through the labyrinth of nature.

—George Sand (1869), Nouvelles Lettres d’un Voyageur

Coding Data, Finding Patterns, Labeling Themes, and Developing Category Systems

Thus far, I’ve provided lots of examples of the fruit of qualitative inquiry: patterns, themes, categories, and typologies. Let’s back up now to consider how you recognize patterns in qualitative data and turn those patterns into meaningful categories and themes. This chapter could have started with this section, but I think it’s helpful to understand what kinds of findings can be generated from qualitative analysis before delving very deeply into the mechanics and operational processes, especially because the mechanics vary greatly and are undertaken differently by analysts in different disciplines working from divergent frameworks. That said, some guidance can be offered.

Raw field notes and verbatim transcripts constitute the undigested complexity of reality. Simplifying and making sense out of that complexity constitutes the challenge of content analysis. Developing some manageable classification or coding scheme is the first step of analysis. Without classification, there is chaos and confusion. Content analysis, then, involves identifying, coding, categorizing, classifying, and labeling the primary patterns in the data. This essentially means analyzing the core content of interviews and observations to determine what’s significant. In explaining the process, I’ll describe it as done traditionally, which is without software, to highlight the thinking involved. Software programs provide different tools and formats for coding, but the principles of the analytical process are the same whether doing it manually or with the assistance of a computer program.

I begin by reading through all of my field notes or interviews and making comments in the margins or even attaching post-it notes that contain my notions about what I can do with the different parts of the data. This constitutes the first cut at organizing the data into topics and files. Coming up with topics is like constructing an index for a book or labels for a file system: You look at what is there and give it a name, a label. The copy on which these topics and labels are written becomes the indexed copy of the field notes or interviews. Exhibit 8.11 shows examples of codes from the field note margins of the evaluation of a wilderness education program. You create your own codes, or in a team situation, you create codes together.

EXHIBIT 8.11 First-Cut Coding Examples

P is for participants, S is for staff.

Sample codes from the field note margins of the evaluation of a wilderness education program


The shorthand codes (abbreviations) are written directly on the relevant data passages or quotations, either in the margins or with an attached tab on the relevant page; the full labels in the second column of Exhibit 8.11 designate the separate files that contain all similarly coded passages. Many passages will illustrate more than one theme or pattern. The first reading through the data is aimed at developing the coding categories or classification system. Then a new reading is done to actually start the formal coding in a systematic way. Several readings of the data may be necessary before field notes or interviews can be completely indexed and coded. Some people find it helpful to use colored highlighting pens—color coding different ideas or concepts. Using self-adhesive colored dots or post-it notes is another option.
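
The filing logic described here, shorthand margin codes mapped to full labels with every coded passage filed under its label, can be sketched in a few lines of Python. The codes and passages below are invented for illustration.

```python
# A minimal sketch of manual first-cut coding: margin abbreviations
# map to full file labels, and each passage is filed under every
# label it was coded with. Codes and passages are hypothetical.

from collections import defaultdict

code_labels = {
    "P/L": "participants: leadership",
    "S/C": "staff: conflict",
}

# (passage, [margin codes]); a passage may illustrate several themes
passages = [
    ("The group chose its own route today ...", ["P/L"]),
    ("Two instructors disagreed about safety ...", ["S/C", "P/L"]),
]

files = defaultdict(list)   # one "file" per coded topic
for text, codes in passages:
    for code in codes:
        files[code_labels[code]].append(text)

for label, contents in files.items():
    print(label, "->", len(contents), "passage(s)")
```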

If sensing a pattern or “occurrence” can be called seeing, then the encoding of it can be called seeing as. That is, you first make the observation that something important or notable is occurring, and then you classify or describe it. . . . The seeing as provides us with a link between a new or emergent pattern and any and all patterns that we have observed and considered previously. It also provides a link to any and all patterns that others have observed and considered previously through reading. (Boyatzis, 1998, p. 4)

Where more than one person is working on the analysis, it is helpful to have each person (or small teams for large projects) develop the coding scheme independently, then compare and discuss similarities and differences. Important insights can emerge from the different ways in which two people look at the same set of data—a form of analytical triangulation.
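
One simple way to structure that comparison, offered as a sketch rather than a prescribed procedure, is to compute the overlap between the code sets two analysts assign to the same passage and flag disagreements for discussion.

```python
# A sketch of analytical triangulation between two coders: the Jaccard
# overlap of their code sets per passage flags items to discuss.
# Passage IDs and codes are hypothetical.

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if (a | b) else 1.0

coder_a = {"passage-1": {"P/L"}, "passage-2": {"S/C", "P/L"}}
coder_b = {"passage-1": {"P/L"}, "passage-2": {"S/C"}}

for pid in coder_a:
    agreement = jaccard(coder_a[pid], coder_b[pid])
    if agreement < 1.0:
        print(f"{pid}: agreement {agreement:.2f} -- discuss and reconcile")
```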

Often an elaborate classification system emerges during coding, particularly in large projects where a formal scheme must be developed that can be used by several trained coders. In our study of evaluation use, which is the basis for Utilization-Focused Evaluation (Patton, 2008), graduate students in the evaluation program at the University of Minnesota conducted lengthy interviews with 60 project officers, evaluators, and federal decision makers. We developed a comprehensive classification system that would provide easy access to the data by any of the student or faculty researchers. Had only one investigator been intending to use the data, such an elaborate classification scheme would not have been necessary. However, to provide access to several students for different purposes, every paragraph in every interview was coded using a systematic and comprehensive coding scheme made up of 15 general categories with subcategories. Portions of the codebook used to code the utilization of evaluation data appear as Exhibit 8.34 at the end of this chapter (pp. 642–643), as an example of one kind of qualitative analysis codebook. This codebook was developed from four sources: (1) the standardized open-ended questions used in interviewing; (2) review of the utilization literature for ideas to be examined and hypotheses to be reviewed; (3) our initial inventory review of the interviews, in which two of us read all the data and added categories for coding; and (4) a few additional categories added during coding when passages didn’t fit well into the available categories.

Every interview was coded twice by two independent coders. Each individual code, including redundancies, was entered into our qualitative analysis database so that we could retrieve all passages (data) on any subject included in the classification scheme, with brief descriptions of the content of those passages. The analyst could then go directly to the full passages and complete interviews from which the passages were extracted to keep quotations in context. In addition, the computer analysis permitted easy cross-classification and cross-comparison of passages for more complex analyses across interviews.
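
The retrieval and cross-classification capability described here can be sketched as follows. The codes, locations, and function names are invented; the study's actual codebook is not reproduced.

```python
# A toy qualitative analysis database: each coded passage keeps its
# source location so quotations can be read back in context, and
# simple queries retrieve or cross-classify passages by code.

coded_passages = [
    {"interview": 3, "paragraph": 12, "codes": {"use", "politics"}},
    {"interview": 3, "paragraph": 14, "codes": {"use"}},
    {"interview": 7, "paragraph": 2,  "codes": {"politics", "methods"}},
]

def retrieve(code: str):
    """All passages coded on a given topic, with source locations."""
    return [p for p in coded_passages if code in p["codes"]]

def cross_classify(code_a: str, code_b: str):
    """Passages where two codes co-occur, across interviews."""
    return [p for p in coded_passages if {code_a, code_b} <= p["codes"]]

print(len(retrieve("use")))               # 2 passages on utilization
print(cross_classify("use", "politics"))  # co-occurrence analysis
```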

Some such elaborate coding system is routine for very rigorous analysis of a large amount of data. Complex coding systems with multiple coders categorizing every paragraph in every interview constitute a labor-intensive form of coding, one that would not be used for most small-scale formative evaluation or action research projects. However, where data are going to be used by several people or where data are going to be used over a long period of time, including additions to the data set over time, such a comprehensive and computerized system can be well worth the time and effort required. This is the case, for example, where an action research project involves a number of people working together in an organizational or community context where the stakes are high.

Classifying and coding qualitative data produces a framework for organizing and describing what has been collected during fieldwork. (For published examples of coding schemes, see Bernard, 2000, pp. 447–450; Bernard & Ryan, 2010, pp. 325–328, 387–389, 491–492, 624; Boyatzis, 1998; Miles, Huberman, & Saldaña, 2014; Strauss & Corbin, 1998.) This descriptive phase of analysis builds a foundation for the interpretative phase, when meanings are extracted from the data, comparisons are made, creative frameworks for interpretation are constructed, conclusions are drawn, significance is determined, and, in some cases, theory is generated.

Convergence and Divergence in Coding and Classifying

In developing codes and categories, a qualitative analyst must first deal with the challenge of “convergence” (Guba, 1978)—figuring out what things fit together. Begin by looking for recurring regularities in the data. These regularities reveal patterns that can be sorted into categories. Categories should then be judged by two criteria: (1) internal homogeneity and (2) external heterogeneity. The first criterion concerns the extent to which the data that belong in a certain category hold together or “dovetail” in a meaningful way. The second criterion concerns the extent to which differences among categories are bold and clear. “The existence of a large number of unassignable or overlapping data items is good evidence of some basic fault in the category system” (Guba, 1978, p. 53). The analyst then works back and forth between the data and the classification system to verify the meaningfulness and accuracy of the categories and the placement of data in categories. If several different possible classification systems emerge or are developed, some priorities must be established to determine which are more important and illuminative. Prioritizing is done according to the utility, salience, credibility, uniqueness, heuristic value, and feasibility of the classification schemes. Finally, the category system or set of categories is tested for completeness.
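
Guba's observation that unassignable or overlapping items signal a faulty category system suggests a simple diagnostic, sketched below as my own operationalization rather than a procedure from the text.

```python
# A rough diagnostic for a category system: unassignable items suggest
# the set is incomplete; items fitting several categories suggest the
# categories are not cleanly distinct. Example data are invented.

def category_diagnostics(assignments):
    """assignments: item -> set of categories the item plausibly fits."""
    unassignable = [i for i, cats in assignments.items() if not cats]
    overlapping = [i for i, cats in assignments.items() if len(cats) > 1]
    fault_rate = (len(unassignable) + len(overlapping)) / len(assignments)
    return unassignable, overlapping, fault_rate

items = {
    "item-1": {"skidders"},
    "item-2": set(),
    "item-3": {"skidders", "street people"},
}
print(category_diagnostics(items))  # a high fault rate -> rework the categories
```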

SIDEBAR

ERRORS TO AVOID OR FIX WHEN FOUND


In a clever graphic comic titled The Good, the Bad, and the Data: Shane the Lone Ethnographer’s Basic Guide to Qualitative Data Analysis, Sally Galman (2013) identifies an alphabet soup of qualitative errors to be avoided or fixed. Here’s a sample:

A is for Anemia. As you code, you find that your data are terribly thin. Go back to the field to beef up your data.

J is for Jaws (the shark). Something is lurking under the surface—you are in denial of the disconfirming evidence, so you avoid it.

O is for “Oh no! I didn’t OBSERVE.” All you have are notes filled with interpretations rather than actual observations.

P is for Procrastination. Don’t let too much time pass before you analyze.

X is for Extra Stuff. You’ve completed your analysis, and you have all this extra data that do not seem to fit. Maybe it’s time to revisit your questions and design.

Z is for Zealotry. Did you make room for discovery or did you only confirm your own ideas? (pp. 82–85)

©2002 Michael Quinn Patton and Michael Cochran

1. The set should have internal and external plausibility, a property that might be termed “integratability.” Viewed internally, the individual categories should appear to be consistent; viewed externally, the set of categories should seem to comprise a whole picture. . . .

2. The set should be reasonably inclusive of the data and information that do exist. This feature is partly tested by the absence of unassignable cases, but can be further tested by reference to the problem that the inquirer is investigating or by the mandate given the evaluator by his client/sponsor. If the set of categories did not appear to be sufficient, on logical grounds, to cover the facets of the problem or mandate, the set is probably incomplete.

3. The set should be reproducible by another competent judge. . . . The second observer ought to be able to verify that a) the categories make sense in view of the data which are available, and b) the data have been appropriately arranged in the category system. . . . The category system auditor may be called upon to attest that the category system “fits” the data and that the data have been properly “fitted into” it.

4. The set should be credible to the persons who provided the information which the set is presumed to assimilate. . . . Who is in a better position to judge whether the categories appropriately reflect their issues and concerns than the people themselves? (Guba, 1978, pp. 56–57)


After analyzing for convergence, the mirror analytical strategy involves examining divergence. By this, Guba means that the analyst must “flesh out” the patterns or categories. This is done by the processes of extension (building on and going deeper into the patterns and themes already identified), bridging (making connections among different patterns and themes), and surfacing (proposing new categories that ought to fit and then verifying their existence in the data). The analyst brings closure to the process when sources of information have been exhausted, when sets of categories have been saturated so that new sources lead to redundancy, when clear regularities have emerged that feel integrated, and when the analysis begins to “overextend” beyond the boundaries of the issues and concerns guiding it. Divergence also includes careful and thoughtful examination of data that do not seem to fit, including deviant cases that don’t fit the dominant identified patterns.
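
The closure criterion, new sources yielding only redundancy, can be expressed as a simple saturation check; the patience threshold and example data are my own illustration.

```python
# A minimal saturation sketch: process sources (interviews, field
# visits) in order and stop once several consecutive sources add no
# code that has not already been seen.

def reached_saturation(sources, patience=3):
    """sources: list of code sets, one per interview or field visit."""
    seen, quiet = set(), 0
    for i, codes in enumerate(sources, start=1):
        quiet = 0 if (codes - seen) else quiet + 1
        seen |= codes
        if quiet >= patience:
            return i        # saturated after this many sources
    return None             # keep collecting data

print(reached_saturation([{"a", "b"}, {"b", "c"}, {"c"}, {"a"}, {"b"}]))  # -> 5
```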

This sequence, convergence then divergence, should not be followed mechanically, linearly, or rigidly. The processes of qualitative analysis involve both technical and creative dimensions. As noted early in this chapter, no abstract processes of analysis, no matter how eloquently named and finely described, can substitute for the skill, knowledge, experience, creativity, diligence, and work of the qualitative analyst. “The task of converting field notes and observations about issues and concerns into systematic categories is a difficult one. No infallible procedure exists for performing it” (Guba, 1978, p. 53).

(For in-depth guidance on qualitative coding and analysis see Bernard & Ryan, 2010, Analyzing Qualitative Data: Systematic Approaches; Boeije, 2010, Analysis in Qualitative Research; Guest, MacQueen, & Namey, 2012, Applied Thematic Analysis; Northcutt & McCoy, 2004, Interactive Qualitative Analysis: A Systems Method for Qualitative Research; Saldaña, 2009, The Coding Manual for Qualitative Researchers.)

SIDEBAR

FEAR OF FINDING NOTHING

Students beginning dissertations often ask me, their anxiety palpable and understandable, “What if I don’t find out anything?” Bob Stake, of “responsive evaluation” and “case study” fame, said at his retirement, “Paraphrasing Milton: They also serve who leave the null hypothesis tenable. . . . It is a sophisticated researcher who beams with pride having, with thoroughness and diligence, found nothing there” (Stake, 1998, p. 364, with a nod to Michael Scriven for inspiration).

True enough. But in another sense, it’s not possible to find nothing there, at least not in qualitative inquiry. The case study is there. It may not have led to new insights or confirmed one’s predilections, but the description of that case at that time and in that place is there. That is much more than nothing. The interview responses and observations are there. They, too, may not have led to headline-grabbing insights or confirmed someone’s eminent theory, but the thoughts and reflections from those people at that time and in that place are there, recorded and reported. That is much more than “nothing.”

Halcolm will tell you this:

You can only find nothing if you stare at a vacuum. You can only find nothing if you immerse yourself in nothing. You can only find nothing if you go nowhere. Go to real places. Talk to real people. Observe real things. You will find something. Indeed, you will find much, for much is there. You will find the world.


MQP Rumination #8

Make Qualitative Analysis First and Foremost Qualitative

I am offering one personal rumination per chapter. These are issues that have persistently engaged, sometimes annoyed, occasionally haunted, and often amused me over more than 40 years of research and evaluation practice. Here’s where I state my case on the issue and make my peace.

Here’s the scenario. I’ve conducted 20 key informant interviews with executive directors of nonprofit agencies that receive funds from the same philanthropic foundation. I’m presenting the results to the foundation’s senior staff and trustees. I report as follows:

Most of those I interviewed report being quite frustrated with your evaluation reporting requirements. They don’t think you’re asking the most important questions and they are dubious that anyone here is reading or using their reports. Most said that they get no feedback after submitting the required reports.

I then share three examples of direct quotes supporting this overall conclusion:

• “I do the reports because we’re required to, and we take them seriously and answer seriously. But there are important things we’d like to report and think they’d like to know that aren’t asked, and there’s no space for. That feels like a lost opportunity.”

• “Look, I’ve been at this for years. It’s very frustrating. We know it’s just a compliance thing. No one reads our reports. We do them because they’re required. That’s it. End of story.”

• “Truth be told, it’s a waste of time, a frustrating waste of time.”

I then invite questions, comments, and reactions.

The board chair asks, “How many said it was a waste of time?”

I take a deep breath, and bite my tongue (metaphorically) to stop myself from saying, “You have a problem here. Does it really matter whether it’s 7 people or 9 or 12? You have a problem! It’s not about the number. It’s about the substance. YOU HAVE A PROBLEM!”

The Allure of Precision

This scenario occurs over and over again. It’s the knee-jerk response to the ambiguities of qualitative findings: “Many said,” “some said,” “a few said,” and so on. When I presented findings from a major international evaluation that involved 20 key informant interviews, the conference chair’s response was to dismiss the report as “evaluation by adjective.” He wanted to know how many said what. “What are the percentages?” he demanded.

I refused. I invite you to refuse. Here are 5 reasons why. (Count them. There are exactly 5 reasons. Now I could have generated 10 reasons or just offered my top 3. But I decided on 5. Elsewhere, I’ve offered lists of 10, 12, or 3, but 5 struck me as about right for a rumination. So that’s what you get: 5.)

1. Open-ended interviews generate diverse responses. That’s the purpose of an open-ended question, to find out what’s salient in the interviewees’ own words. We then group together those responses that manifest a common theme. The three quotes above all fall into a category of Feeling Frustrated. Only one person used the phrase “waste of time.” Another said, “I put it off as long as I can and do it just in time to meet the deadline for submission, because I have a lot of more important things to do and it’s not a great use of my time. But I do it.” Not quite “waste of time,” but pretty close. What responses go together is a matter of interpretation and judgment. Coding, categorizing, and theme analysis are not precise. The result is qualitative. Stay qualitative.

2. The adjectives “most,” “many,” “some,” or “a few” are actually more accurate than a precise number. It’s common to have a couple of responses that could be included in the category or left out, thus changing the number. I don’t want to add a quote to a category just to increase the number. I want to add it because, in my judgment, it fits. So when I code 12 of 20 saying some version of “feeling frustrated,” I’m confident in reporting that “many” felt frustrated. It could have been 10, or it could have been 14, depending on the coding. But it definitely was many.

3. Percentages may be misleading. With a key informant sample of 20, each response is 5%. Thus, going from 12 of 20 to 14 of 20 is a jump from 60% to 70%. In a survey of 300 respondents, a 10-percentage-point difference may well be statistically significant; in a small purposeful sample, it is not. Going from 12 to 14 is still “many.” (The short sketch after this list works through this arithmetic.)

4. The “how many” question can distract from dealing with substantive significance. I regularly conduct workshops with 20 to 40 participants. The workshop sponsors usually have some standardized evaluation form that solicits ratings and then invites an overall open-ended response. Over the years, a single, particularly insightful and specific response has proved more valuable to me than a large number of general comments (e.g., “I learned a lot”). The point of qualitative analysis is not to determine how many said something. The point is to generate substantive insight into the phenomenon. One or two very insightful and substantive responses can easily trump 15 general responses. Here’s an example. I interviewed 15 participants in an employment training program. Two female participants said they were on the verge of dropping out because of sexual harassment by a staff member. That’s “only” 13%. That’s just 2 of 15. But any sexual harassment is unacceptable. The program has a problem, a potentially quite serious problem.

5. Small purposeful samples pose confidentiality challenges. When I’m reporting qualitative findings, I say in the methods section that I will not report that “all” or “no one” responded in a certain way because that would potentially break the confidentiality pledge. In the example that opened this rumination, all 20 of the agency directors I interviewed complained about the foundation’s evaluation reporting process, especially the lack of feedback. But I reported that “many” complained, and I refused attempts to get me to provide a number (which would have been 20 of 20), so as not to put any of the directors at risk.
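
To make the arithmetic in reasons 2 and 3 concrete, here is a minimal sketch in Python. The quantifier cut points are my own illustrative assumptions, not a published standard, and the counts simply echo the 12-of-20 versus 14-of-20 example above.

```python
# Minimal sketch: coding counts in a small purposeful sample (n = 20)
# translated into hedged quantifiers. Cut points are illustrative only.

def quantifier(count: int, n: int) -> str:
    """Map a coding count to a qualitative quantifier (illustrative cut points)."""
    share = count / n
    if share >= 0.75:
        return "most"
    if share >= 0.5:
        return "many"
    if share >= 0.25:
        return "some"
    return "a few"

n = 20  # key informant sample, as in reasons 2 and 3
for count in (12, 14):  # two defensible codings of the same theme
    print(f"{count} of {n} = {count / n:.0%} -> {quantifier(count, n)!r}")

# 12 of 20 = 60% -> 'many'
# 14 of 20 = 70% -> 'many'
# The percentage jumps 10 points; the qualitative report ("many") is stable.
```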

Reasons Galore

So there you have 5 reasons to keep qualitative analysis qualitative. But maybe that doesn’t seem like enough. Maybe you’d be more persuaded and feel more confident if I gave you 10 reasons. No sooner asked, than done. Here are 5 more rumination-inspired reasons to keep qualitative analysis first and foremost qualitative:

6. Doing so demonstrates integrity.

7. It reinforces the message that the inquiry is qualitative.

8. It requires people to think about meanings.

9. Numbers are easily manipulated and analysis is corrupted under pressure to increase the number. (Hmmm, is that 2 reasons or just 1?)

10. Meaning is essentially qualitative and about qualities.

And a bonus item: Generating numbers is not the purpose of qualitative inquiry. If someone wants precise numbers, tell them to do a survey and ask closed questions and count the responses. That’s what quantitative methods are for!

Pragmatism


Readers of this book know by now that I’m fundamentally a pragmatist. Thus, the prior points notwithstanding, sometimes numbers are appropriate, sometimes they are illuminative, and sometimes they are simply demanded by those who commission evaluations. My point is not to be rigid but to place the burden of proof on justifying quantitizing. Do not go gently down that primrose path. Use numbers when appropriate, and then in moderation.

Here’s an example where numbers are appropriate. Psychologist Marvin Eisenstadt studied the link between career achievement and loss of a parent in childhood by identifying famous people from ancient Greece through to modern times whose lives merited significant entries in encyclopedias. He generated a list of 573 eminent people and did extensive research on their childhoods, an inquiry that took 10 years. “A quarter had lost at least one parent before the age of ten. By age 15, 34.5 percent had at least one parent die, and by the age of twenty, 45 percent” (Gladwell, 2013, p. 141). This conversion of qualitative codes to quantitative distributions is appropriate because the sample size is large, the numbers are accurate, and the focus of the inquiry is on a single variable. When there is something meaningful to be counted, then count. As sample sizes increase, especially in mixed-methods studies, quantitizing is likely to become even more pervasive. Now let me offer an example where quantitizing strikes me as considerably less appropriate and meaningful.

Feeding the Quantitative Beast

The opening scenario in this rumination involved a board chair reacting to my qualitative presentation by asking how many said what. But those involved in qualitative studies exacerbate the problem by turning their reports into numbers even before being asked to do so. As I was completing this chapter, I received an analysis from a graduate student who had taken interviews I had given and counted how many times I used various words, a form of so-called content analysis that actually diminishes the meaning of both “content” and “analysis.” Having counted my use of various words, he then correlated them. He was seeking my interpretation of a couple of statistical correlations that he couldn’t explain. My response was that the entire analytical approach struck me as meaningless since I adapt my language in an interview to context, audience, and whatever I’m working on at the time. To lose the contextual meaning of words by counting them as isolated data points strikes me as highly problematic—and certainly not qualitative meaning making.

I receive a substantial number of qualitative evaluations to review each year. The most common pattern I see, and criticize in my review, is a qualitative study filled with numbers. Here’s an example that just came to me the very week I was writing this rumination. I’m afraid my response was rather intemperate.

Qualitative Report Excerpts

• Of the 20 students interviewed, 14 mentioned gaining leadership skills; 8 of 21 staff said leadership skills were important; 7 out of 13 field personnel said this, as did 4 out of 6 community leaders.

• Eighteen of the 20 students said they were more committed to scholarly publication; 2 said they didn’t want to be university scholars.

• Two out of the seven program directors at different universities felt that the purpose of the professional development program was mainly to train advanced students how to write for academic publication; the other five emphasized writing for policymakers.

• Out of the 14 university researchers interviewed, 7 had no opinion about students becoming better teachers because they were not sure what the program was doing to train students as teachers. Four other interviewees claimed that combining teaching skills with research skills caused confusion. Three said combining the two made sense and was valuable.

The 20-page report was filled with this kind of quantitative gibberish—I’m sorry, analytical reporting. Qualitative software easily generates such numbers, so that may feed this trend and give it the appearance of being appropriate and expected. It is not appropriate and should not be expected. Indeed, I urge those involved in qualitative evaluations to make it clear at the outset to those who will be receiving the findings that numbers will generally not be reported. The focus will be on substantive significance. The point is not to be anti-numbers. The point is to be pro-meaningfulness.

Keep qualitative analysis first and foremost qualitative.


MODULE

69 Logical and Matrix Analyses, and Synthesizing Qualitative Studies

Logic: The art of thinking and reasoning in strict accordance with the limitations and incapacities of the human misunderstanding.

—Satirist Ambrose Bierce (1842–1914)

Contrariwise, if it was so, it might be; and if it were so, it would be; but as it isn’t, it ain’t. That’s logic.

—Author Lewis Carroll (1832–1898)

Logical Analysis

While working inductively, the analyst is looking for emergent patterns in the data. These patterns, as noted in the preceding sections, can be represented as dimensions, categories, classification schemes, and themes. Once some dimensions have been constructed, using either participant-generated or analyst-generated constructions, it is sometimes useful to cross-classify different dimensions to generate new insights about how the data can be organized and to look for patterns that may not have been immediately obvious in the initial, inductive analysis. Creating cross-classification matrices is an exercise in logic.

The logical process involves creating potential categories by crossing one dimension or typology with another and then working back and forth between the data and one’s logical constructions, filling in the resulting matrix. This logical system will create a new typology, all parts of which may or may not actually be represented in the data. Thus, the analyst moves back and forth between the logical construction and the actual data in search of meaningful patterns.
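
To make this back-and-forth concrete, here is a minimal Python sketch under stated assumptions: the dimension labels anticipate the dropout-program example that follows, and the coded excerpts and field-note IDs are invented. Cells that remain empty are exactly the logical possibilities the analyst must take back to the data.

```python
from itertools import product

# Two analyst-generated dimensions (labels echo the dropout example below).
beliefs = ["maintenance", "rehabilitation", "punishment"]
behaviors = ["takes responsibility", "shifts responsibility"]

# Hypothetical coded excerpts: (belief code, behavior code, field-note id).
coded = [
    ("rehabilitation", "takes responsibility", "FN-031"),
    ("maintenance", "shifts responsibility", "FN-007"),
    ("punishment", "shifts responsibility", "FN-104"),
]

# Crossing the dimensions creates every logically possible cell up front.
matrix = {cell: [] for cell in product(beliefs, behaviors)}
for belief, behavior, note in coded:
    matrix[(belief, behavior)].append(note)

# Empty cells flag categories to verify against (or note as absent from) the data.
for (belief, behavior), notes in matrix.items():
    label = ", ".join(notes) if notes else "EMPTY: revisit the data"
    print(f"{belief:>14} x {behavior:<22} {label}")
```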

In the high school dropout program described earlier, the focus of the program was reducing absenteeism, skipping classes, and tardiness. An external team of change agents worked with teachers in the school to help them develop approaches to the dropout problem. Observations of the program and interviews with the teachers gave rise to two dimensions. The first dimension distinguished teachers’ beliefs about what kind of programmatic intervention was effective with dropouts—that is, whether they primarily favored maintenance (i.e., caretaking or warehousing of kids to just keep the schools running), rehabilitation efforts (helping kids with their problems), or punishment (no longer letting them get away with the infractions they had been committing in the past). Teachers’ behaviors toward dropouts could be conceptualized along a continuum from taking direct responsibility for doing something about the problem at one end to shifting responsibility to others at the opposite end. Exhibit 8.12 shows what happens when these two dimensions are crossed. Six cells are created, each of which represents a different kind of teacher role in response to the program.

The qualitative analyst working with these data had been struggling in the inductive analysis to find the patterns that would express the different kinds of teacher roles manifested in the program. He had tried several constructions, but none of them quite seemed to work. The labels he came up with were not true to the data. When he described to me the other dimensions he had generated, I suggested that he cross them, as shown in Exhibit 8.12. When he did, he said that “the whole thing immediately fell into place.” Working back and forth between the matrix and the data, he generated a full descriptive analysis of diverse and conflicting teacher roles.

The description of teacher roles served several purposes. First, it gave teachers a mirror image of their own behaviors and attitudes. It could thus be used to help teachers make more explicit their own understanding of roles. Second, it could be used by the external team of consultants to more carefully gear their programmatic efforts toward different kinds of teachers who were acting out the different roles. The matrix makes it clear that an omnibus strategy for helping teachers establish a program that would reduce dropouts would not work in this school; teachers manifesting different roles would need to be approached and worked with in different ways. Third, the description of teacher roles provided insights into the nature of the dropout problem. Having identified the various roles, the evaluator–analyst had a responsibility to report on the distribution of roles in this school and the observed consequences of that distribution.

Abductive Analysis

One must be careful about purely logical analysis. It is tempting for an analyst using a logical matrix to force data into the categories created by the cross-classification to fill out the matrix and make it work. Logical analysis to generate new sensitizing concepts must be tested out and confirmed by the actual data. Such logically derived sensitizing concepts provide conceptual possibilities to test. Levin-Rozalis (2000), following American philosopher Charles Sanders Peirce of the pragmatic school of thought, suggests labeling the logical generation and discovery of hypotheses and findings abduction to distinguish such logical analysis from data-based inductive analysis and theory-derived deductive analysis.

EXHIBIT 8.12 An Empirical Typology of Teacher Roles in Dealing With High School Dropouts


Denzin (1978b) has explained abduction in qualitative analysis as a combination of inductive and deductive thinking with logical underpinnings:

Naturalists inspect and organize behavior specimens in ways which they hope will permit them to progressively reveal and better understand the underlying problematic features of the social world under study. They seek to ask the question or set of questions which will make that world or social organization understandable. They do not approach that world with a rigid set of preconceived hypotheses. They are initially directed toward an interest in the routine and taken-for-granted features of that world. They ask how it is that the persons in question know about producing orderly patterns of interaction and meaning. . . . They do not use a full-fledged deductive hypothetical scheme in thinking and developing propositions. Nor are they fully inductive, letting the so-called facts speak for themselves. Facts do not speak for themselves. They must be interpreted. Previously developed deductive models seldom conform with empirical data that are gathered. The method of abduction combines the deductive and inductive models of proposition development and theory construction. It can be defined as working from consequence back to cause or antecedent. The observer records the occurrence of a particular event, and then works back in time in an effort to reconstruct the events (causes) that produced the event (consequence) in question. (pp. 109–110)

The famous fictional detective Sherlock Holmes relied on abduction more than deduction or induction, at least according to William Sanders’s (1976) review of Holmes’s analytical thinking in The Sociologist as Detective. We’ve already suggested that the qualitative analyst is part scientist and part artist. Why not add the qualitative analyst as detective? Here’s an example.

An Example of Abductive Qualitative Analysis

In the evaluation of the rural community leadership program, we did follow-up interviews with participants to find out how they were using their training when they returned to their home communities. We found ourselves with a case not unlike the “Silver Blaze” story, in which Sherlock Holmes made much of the fact that the dog at the scene of the crime had not barked during the night while the crime was being committed. He inferred that the criminal was someone known to the dog. In our case, we discovered that graduates of the leadership program were not leading. In fact, they weren’t doing much of anything. Were their skills inadequate after only 1 week of intensive training? Did they lack confidence? Were they discouraged? Uninterested? Intimidated? Incompetent? Unmotivated?

So we had a finding. We had an outcome—or more precisely, the lack of an outcome. We worked backward from the experience of the training, examined what had happened during the training right up to the final session, and tried to connect the dots between what had happened and this unexpected result. We also returned to the participants for further reflections that might explain this general lack of follow-up action.

The participants expressed great interest in and commitment to exercising community leadership and engaging in community development. They expressed confidence in their abilities and felt they were competent to use the skills they had learned. But at perhaps the most teachable moment of all, in the final session of training, as the participants enthusiastically prepared to return to their communities and begin to use their learnings, the director of the program had offered a closing word of caution:

Take your time when you return. Don’t go back like a cadre of activists invading your community. You’ve had an intense experience together. Let things settle. It can be pretty overwhelming to the people back home when they get a sense that you’ve been through what for many of you has been a transformative experience. So go easy. Take your time. Resettle.

And so they did—more than he imagined. What he had neglected was any guidance about how to know when it was time to begin engaging after the reentry period of resettling. So they waited. And waited. And waited. Not wanting to get it wrong.

Of all the explanations we considered, that one fit the evidence the best. Its accuracy was further borne out when the director changed his parting advice at the end of subsequent programs and we found a different result in the communities. A qualitative inquirer is a detective, using both data and reasoning to solve the mystery of how a story has unfolded and why it has unfolded in the way documented.

Abductive Analysis Caution

Abduction is not widely known, understood, or appreciated. I think it provides a distinct and important alternative to deductive and inductive reasoning. But one external reviewer of this book worried that including abduction might confuse novice researchers and weaken the emphasis on induction as the core of qualitative reasoning. I disagree but think the caution is worth acknowledging, so here it is:

Although interesting, the discussion of abductive analysis will likely only confuse the novice researcher. Perhaps this section should start with a stronger disclaimer that advises novice qualitative researchers to give their undivided attention to the skills needed for inductive inquiry and clearly identify potential logical positivist pitfalls.

Abductive Matrix Analysis

The empty cell of a logically derived matrix (the cell created by crossing two dimensions for which no name or label immediately occurs) creates an intersection of a possible consequence and antecedent that begs for abductive exploration and explanation. Each such intersection of consequence and antecedent sensitizes the analyst to the possibility of a category of activity or behavior that has either been overlooked in the data or is logically a possibility in the setting but has not yet been documented. The latter cases are important to note precisely because they did not occur. The next section will look in detail at a process–outcomes matrix ripe with abductive possibilities. First, Exhibit 8.13 shows a matrix for mapping stakeholders’ stakes in a program or policy. This matrix can be used to guide data collection as well as analysis.

A Process–Outcomes Matrix

The linkage between processes and outcomes constitutes such a fundamental issue in many program evaluations that it provides a particularly good focus for illustrating qualitative matrix analysis. As discussed in Chapter 4, qualitative methods can be particularly appropriate for evaluation where program processes, impacts, or both are largely unspecified or difficult to measure. This can be the case because the outcomes are meant to be individualized; sometimes one is simply uncertain as to what a program’s outcomes will be; and in many programs, neither processes nor impacts have been carefully articulated. Under such conditions, one purpose of the evaluation may be to illuminate program processes, program impacts, and the linkages between the two. This task can be facilitated by constructing a process–outcomes matrix to organize the data.

EXHIBIT 8.13 Power Versus Interest Grid for Analyzing Diverse Stakeholders’ Engagement With a Program, a Policy, or an Evaluation


Players. High interest, high power: They make things happen.

Context setters. High power, low interest: They watch what unfolds and can become players if they get interested.

Subjects. High interest, low power: They do not act unless organized and empowered.

Crowd. Low interest, low power: They are disengaged until they become aware that they have a stake in what is unfolding.

This matrix can be used to map the stakeholder environment for any initiative by gathering data about the perspectives, interests, and nature of power of people in diverse relationships to the initiative (Bryson & Patton, 2010, p. 42; Patton, 2008, p. 80).
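
As a minimal sketch of how the grid sorts stakeholders, the Python below uses invented stakeholder names and high/low ratings; in a real study, the ratings would come from interview and document data rather than being assigned by fiat.

```python
# Power-versus-interest grid (after Exhibit 8.13); data are hypothetical.

def quadrant(power: str, interest: str) -> str:
    """Return the grid label for one stakeholder's high/low ratings."""
    return {
        ("high", "high"): "Player: makes things happen",
        ("high", "low"): "Context setter: watches, can become a player",
        ("low", "high"): "Subject: acts only if organized and empowered",
        ("low", "low"): "Crowd: disengaged until aware of a stake",
    }[(power, interest)]

# Hypothetical stakeholders with (power, interest) ratings.
stakeholders = {
    "foundation board": ("high", "high"),
    "state agency": ("high", "low"),
    "program participants": ("low", "high"),
    "general public": ("low", "low"),
}

for name, (power, interest) in stakeholders.items():
    print(f"{name:>20}: {quadrant(power, interest)}")
```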

Exhibit 8.14 shows how such a matrix can be constructed. Major program processes or identified implementation components are listed along the left side. Types or levels of outcomes are listed across the top. The category systems for program processes and outcomes are developed from the data in the same way that other typologies are constructed (see previous sections). The cross-classification of any process with any outcome produces a cell in the matrix—for example, the first cell in Exhibit 8.14 is created by the intersection of Process 1 with Outcome a. The information that goes in Cell 1-a (or any other cell in the matrix) describes linkages, patterns, themes, experiences, content, or actual activities that help us understand the relationships between processes and outcomes. Such relationships may have been identified by participants themselves during interviews or discovered by the evaluator in analyzing the data. In either case, the process–outcomes matrix becomes a way of organizing, thinking about, and presenting the qualitative connections between program implementation dimensions and program results.
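
As a minimal sketch of the matrix as a data structure, the Python below collects excerpts cell by cell; the process and outcome labels follow the generic “Process 1 / Outcome a” scheme of Exhibit 8.14, and the excerpts are invented placeholders.

```python
from collections import defaultdict

# Each cell of the process-outcomes matrix collects the evidence linking
# one program process to one outcome. All entries here are hypothetical.
processes = ["Process 1", "Process 2"]
outcomes = ["Outcome a", "Outcome b"]

linkages = [  # (process, outcome, excerpt) triples coded from field data
    ("Process 1", "Outcome a", "Interview 3: participant ties the process to the outcome"),
    ("Process 1", "Outcome b", "Field notes, visit 2: observed linkage"),
    ("Process 2", "Outcome a", "Interview 9: staff explanation of the connection"),
]

cells = defaultdict(list)
for process, outcome, excerpt in linkages:
    cells[(process, outcome)].append(excerpt)

# Walk the full matrix so undocumented linkages (empty cells) surface too.
for p in processes:
    for o in outcomes:
        entries = cells[(p, o)]
        print(f"Cell {p} x {o}: {len(entries)} excerpt(s)")
        for e in entries:
            print(f"  - {e}")
```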

EXHIBIT 8.14 Process–Impact Matrix


Application of the Process–Impact Matrix: An Example

An example will help make the notion of the process–outcomes matrix (Exhibit 8.14) more concrete and, hopefully, useful. Suppose we have been evaluating a juvenile justice program that places delinquent youth in foster homes. We have visited several foster homes, observed what the home environments are like, and interviewed the juveniles, the foster home parents, and the probation officers. A regularly recurring process theme concerns the importance of “letting kids learn to make their own decisions.” A regularly recurring outcomes theme involves “keeping the kids straight” (reduced recidivism). Crossing the program process (“kids making their own decisions”) with the program outcome (“keeping kids straight”) creates a data analysis question: What actual decisions do juveniles make that are supposed to lead to reduced recidivism? We then carefully review our field notes and interview quotations, looking for data that help us understand how people in the program have answered this question based on their actual behaviors and practices. When we describe what decisions juveniles actually make in the program, the decision makers to whom our findings are reported can make their own judgments about the strength or weakness of the linkage between this program process and the desired outcome. Moreover, once the process–outcomes descriptive analysis of linkages has been completed, the evaluator is at liberty to offer interpretations and judgments about the nature and quality of this process–outcome connection.

An In-Depth Analysis Example: Recognizing Processes, Outcomes, and Linkages in Qualitative Data

Because of the centrality of the sensitizing concepts “program process” and “program outcome” in evaluation research, it may be helpful to provide a more detailed description of how these concepts can be used in qualitative analysis. How does one recognize a program process? Learning to identify and label program processes is a critical evaluation skill. This sensitizing notion of “process” is a way of talking about the common action that cuts across program activities, observed interactions, and program content. The example I shall use involves data from the wilderness education program I evaluated and discussed throughout the observations chapter (Chapter 6). That program, titled the Southwest Field Training Project, used the wilderness as a training arena for professional educators in the philosophy and methods of experiential education by engaging those educators in their own experiential learning process. Participants went from their normal urban environments into the wilderness for 10 days at a time, spending at least 1 day and night completely alone in some wilderness spot “on solo.” At times, while backpacking, the group was asked to walk silently so as not to be distracted from the wilderness sounds and images by conversation. In group discussions, participants were asked to talk about what they had observed about the wilderness and how they felt about being in the wilderness. Participants were also asked to write about the wilderness environment in journals. What do these different activities have in common, and how can that commonality be expressed?

We begin with several different ways of abstracting and labeling the underlying process:

• Experiencing the wilderness
• Learning about the wilderness
• Appreciating the wilderness
• Immersion in the environment
• Developing awareness of the environment
• Becoming conscious of the wilderness
• Developing sensitivity to the environment

Any of these phrases, each of which consists of some verb form (experiencing, learning, developing, etc.) and some noun form (wilderness, environment, etc.), captures some nuance of the process. The qualitative analyst works back and forth between the data (field notes and interviews) and his or her conception of what it is that needs to be expressed to find the most fitting language to describe the process. What language do people in the program use to describe what those activities and experiences have in common? What language comes closest to capturing the essence of this particular process? What level of generality or specificity will be most useful in separating out this particular set of things from other things? How do program participants and staff react to the different terms that could be used to describe the process?

It’s not unusual during analysis to go through several different phrases before finally settling on the exact language that will go into a final report. In the Southwest Field Training Project, we began with the concept label “Experiencing the Wilderness.” However, after several revisions, we finally described the process as “Developing Sensitivity to the Environment” because this broader label permitted us to include discussions and activities that were aimed at helping participants understand how they were affected by and acted in their normal institutional environments. “Experiencing the wilderness” became a specific subprocess that was part of the more global process of “developing sensitivity to the environment.” Program participants and staff played a major role in determining the final phrasing and description of this process.

Other processes identified as important in the implementation of the program were as follows:

• Encountering and managing stress
• Sharing in group settings
• Examining professional activities, needs, and commitments
• Assuming responsibility for articulating personal needs
• Exchanging professional ideas and resources
• Formally monitoring experiences, processes, changes, and impacts

As you struggle with finding the right language to communicate themes, patterns, and processes, keep in mind that there is no absolutely “right” way of stating what emerges from the analysis. There are only more and less useful ways of expressing what the data reveal.

Identifying and conceptualizing program outcomes and impacts can involve induction, deduction, abduction, and/or logical analysis. Inductively, the qualitative analyst looks for changes in participants, expressions of change, program ideology about outcomes and impacts, and ways that people in the program make distinctions between “those who are getting it” and “those who aren’t getting it” (where it is the desired outcome). In highly individualized programs, the statements about change that emerge from program participants and staff may be global. Outcomes such as “personal growth,” increased “awareness,” and “insight into self” are difficult to operationalize and standardize. That is precisely the reason why qualitative methods are particularly appropriate for capturing and evaluating such outcomes. The task for the qualitative analyst, then, is to describe what actually happens to people in the program and what they say about what happens to them.

Logically (or abductively), constructing a process–outcomes matrix can suggest additional possibilities. That is, where data on both program processes and participant outcomes have been sorted, analysis can be deepened by organizing the data through a logical scheme that links program processes to participant outcomes. Such a logically derived scheme was used to organize the data in the Southwest Field Training Project. First, a classification scheme that described different types of outcomes was conceptualized:

a. Changes in knowledge
b. Changes in attitudes
c. Changes in feelings
d. Changes in behaviors
e. Changes in skills

These general themes provided the reader of the report with examples of and insights into the kinds of changes that were occurring and how those changes were perceived by participants to be related to specific program processes. I emphasize that the process–outcomes matrix is merely an organizing tool; the data from participants themselves and from field observations provide the actual linkages between processes and outcomes.

What was the relationship between the program process of “developing sensitivity to the environment” and these individual-level outcomes? Space permits only a few examples from the data.

Skills: “Are you kidding? I learned how to survive without the comforts of civilization. I learned how to read the terrain ahead and pace myself. I learned how to carry a heavy load. I learned how to stay dry when it’s raining. I learned how to tie a knot so that it doesn’t come apart when pressure is applied. You think those are metaphors for skills I need in my work? You’re damn right they are.”

Attitudes: “I think it’s important to pay attention to the space you’re in. I don’t want to just keep going through my life oblivious to what’s around me and how it affects me and how I affect it.”

Feelings: “Being out here, especially on solo, has given me confidence. I know I can handle a lot of things I didn’t think I could handle.”

Behaviors: “I use my senses in a different way out here. In the city you get so you don’t pay much attention to the noise and the sounds. But listening out here, I’ve also begun to listen more back there. I touch more things too, just to experience the different textures.”

Knowledge: “I know about how this place was formed, its history, the rock formations, the effects of the fires on the vegetation, where the river comes from, and where it goes.”

A different way of thinking about organizing data around outcomes was to think of the different levels of impact: (a) effects at the individual level, (b) effects on the group, and (c) effects on the institutions from which participants came into the program. The staff hoped to have impacts at all of these levels. Thus, it also was possible to organize the data by looking at what themes emerged when program processes were crossed with levels of impact. How did “developing sensitivity to the environment” affect individuals? How did the process of “developing sensitivity to the environment” affect the group? What was the effect of “developing sensitivity to the environment” on the institutions to which participants returned after their wilderness experiences? The process–outcomes matrix thus becomes a way of asking questions of the data, an additional source of focus in looking for themes and patterns in the hundreds of pages of field notes and interview transcriptions. Exhibit 8.35, at the end of this chapter (pp. 643–649), presents an extended excerpt from the qualitative evaluation report.

A Three-Dimensional Qualitative Analysis Matrix

To study how schools used planning and evaluation processes, Campbell (1983) developed a 500-cell matrix (Exhibit 8.15) that begins (but just begins) to reach the outer limits of what one can do in a three-dimensional space. Campbell used this matrix to guide data collection and analysis in studying how the mandated, statewide educational planning, evaluation, and reporting system in Minnesota was being used. She examined five levels of use (high school, . . . , community, district), 10 components of the statewide project (planning, goal setting, . . . , student involvement), and 10 factors affecting utilization (personal factor, political factors, . . . ). Exhibit 8.15 again illustrates matrix thinking for both data organization and analytical/conceptual purposes.
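
As a tiny sketch of the combinatorics, the Python below enumerates the cells; where the text elides labels (“. . .”), the placeholders are hypothetical stand-ins, not Campbell’s actual categories.

```python
from itertools import product

# Campbell's matrix: 5 levels of use x 10 components x 10 factors = 500 cells.
# Only the labels named in the text are real; the rest are placeholders.
levels = ["high school", "level 2", "level 3", "community", "district"]
components = (["planning", "goal setting"]
              + [f"component {i}" for i in range(3, 10)]
              + ["student involvement"])
factors = ["personal factor", "political factors"] + [f"factor {i}" for i in range(3, 11)]

cells = list(product(levels, components, factors))
print(len(cells))  # 500 logically possible cells to guide data collection and analysis
```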

Miles et al. (2014) have provided a rich source of ideas and illustrations on how to use matrices in qualitative analysis. Their Sourcebook offers a wealth of analytical approaches to qualitative data, including a variety of mapping and visual display techniques.

Synthesizing Qualitative Studies

We turn now to analysis across a different unit of analysis: synthesizing patterns, themes, and findings across qualitative studies where a completed study is the unit of analysis. In Chapter 5, Exhibit 5.8 (pp. 266–272), I introduced two purposeful sampling strategies that involve sampling completed studies as the unit of analysis: (1) qualitative research synthesis and (2) systematic qualitative evaluation reviews. One is research focused, the other evaluation focused. I distinguished these as different purposeful sampling strategies because they serve different purposes and select different kinds of qualitative studies for synthesis. (1) A qualitative research synthesis selects qualitative studies to analyze for cross-cutting research findings and contributions to theory. For example, there have been a substantial number of separate and independent ethnographic studies of coming-of-age and initiation ceremonies across cultures. A synthesis involves analyzing and interpreting findings across those myriad studies. (2) In contrast, a systematic qualitative evaluation review seeks patterns across diverse qualitative evaluations to reach conclusions about what is effective.

EXHIBIT 8.15 Conceptual Guide for Data Collection and Analysis: Utilization of Planning, Evaluation, and Reporting

Qualitative Research Synthesis

A qualitative research synthesis involves seeking patterns across and integrating different qualitative studies (Finlayson & Dixon, 2008; Hannes & Lockwood, 2012; Saini & Shlonsky, 2012). Which studies are included in the final synthesis will depend on what emerges as relevant and meaningful during the synthesis. “Assessing quality is also about examining how study findings fit (or do not fit) with the findings of other studies. How study findings fit with the findings of other studies cannot be assessed until the synthesis is completed” (Harden & Gough, 2012, p. 160). In essence, quality criteria in a qualitative synthesis can be emergent.

Hannes and Lockwood (2012) have examined diverse approaches and frameworks for synthesizing qualitative research. The diversity of synthesis approaches reflects the diversity of qualitative methods and the variety of theoretical perspectives that inform qualitative inquiries (see Chapter 3). What all syntheses share in common is having to address at a minimum three separate processes: (1) identifying and aggregating studies to include; (2) analyzing patterns, themes, and findings across the studies; and (3) interpreting the results.

In one sense, each qualitative study is a case. Synthesis of different qualitative studies on the same subject is a form of cross-case analysis. Such a synthesis is much more than a literature review. Noblit and Hare (1988) describe synthesizing qualitative studies as “meta-ethnography,” in which the challenge is to “retain the uniqueness and holism of accounts even as we synthesize them in the translations” (p. 7).

Systematic Qualitative Evaluation Reviews

Systematic qualitative evaluation serves a meta-evaluation function and is aimed at increasing confidence in actions to be taken to solve particular problems by synthesizing patterns of effectiveness across separate and independent evaluation studies. Such a systematic review seeks to identify, appraise, select, and synthesize all high-quality evaluation research evidence relevant to a particular arena of knowledge that is the basis for interventions (Funnell & Rogers, 2011, pp. 508–514).

Systematic reviews of randomized controlled trial (RCT) studies have been the basis for evidence-based medicine. The cases being synthesized are completed, usually published, studies. The process involves identifying all high-quality, peer-reviewed studies on a problem and synthesizing findings across those separate and diverse studies to reach conclusions about what is effective in dealing with the problem of concern, for example, female hormone replacement therapy or effective treatments for prostate cancer. Informing and setting guidelines for treatment is the instrumental use. The important new direction in systematic reviews is including qualitative evaluation studies (Gough et al., 2012; Wright, 2013). For example, the internationally prestigious Cochrane Collaboration Qualitative & Implementation Methods Group supports the synthesis of qualitative evidence and the integration of qualitative evidence with other evidence (mixed methods) in Cochrane intervention reviews on the effectiveness of health interventions. In effect, systematic reviews serve a meta-evaluation function.

Systematic Evaluation Reviews of Lessons Learned

Evaluators can synthesize lessons from a number of case studies to generate generic factors that contribute to program effectiveness—as, for example, Lisbeth Schorr (1988) did for poverty programs in her review and synthesis Within Our Reach: Breaking the Cycle of Disadvantage. Three decades ago, the U.S. Agency for International Development began commissioning lessons-learned synthesis studies on subjects such as irrigation (Steinberg, 1983), rural electrification (Wasserman & Davenport, 1983), food for peace (Rogers & Wallerstein, 1985), education development efforts (Warren, 1984), contraceptive social marketing (Binnendijk, 1986), agricultural policy analysis and planning (Tilney & Riordan, 1988), and agroforestry (Chew, 1989). In synthesizing separate evaluations to identify lessons learned, evaluators build a store of knowledge for future program development, more effective program implementation, and enlightened policy making.

SIDEBAR

QUALITATIVE RESEARCH SYNTHESES VERSUS SYSTEMATIC QUALITATIVE EVALUATION REVIEWS

I distinguish qualitative research syntheses from systematic qualitative evaluation reviews because they involve different purposeful sampling strategies that serve different purposes. You won’t find this distinction elsewhere. I make the distinction, and believe it is worth making, because this book addresses both research and evaluation methods. Qualitative research synthesis serves research purposes—selecting qualitative studies to analyze for cross-cutting findings and contributions to theory. Systematic qualitative evaluation reviews serve evaluation purposes—analyzing diverse qualitative evaluations to reach conclusions about patterns of effectiveness. The criteria for what studies to include in each case will be different, as will the criteria for judging the final result: contribution to theory (qualitative research syntheses) versus contributions to practice (systematic evaluation reviews).

For scholarly inquiry, the qualitative research synthesis is a way to build theory through induction, deduction, interpretation, and integration. For evaluators, a qualitative systematic review can identify and extrapolate lessons learned to inform future program designs, identify effective intervention approaches, and build program theory.

The sample for synthesis studies usually consists of case studies with a common focus, for example, elementary education, health care for the elderly, and so on. However, one can also learn lessons about effective human intervention processes more generically by synthesizing case studies on quite different subjects. I synthesized three quite different qualitative evaluations conducted for The McKnight Foundation: (1) a major family housing effort, (2) a downtown development endeavor, and (3) a graduate fellowship program for minorities. Before undertaking the synthesis, I knew nothing about these programs, nor did I approach them with any particular preconceptions. I was not looking for any specific similarities, and none were suggested to me by either McKnight or program staff. The results were intended to provide insights into The McKnight Foundation’s operating philosophy and strategies as exemplified in practice by real operating programs. Independent evaluations of each program had already been conducted and presented to The McKnight Foundation, showing that these programs had successfully attained and exceeded the intended outcomes. But why were they successful? That was the intriguing and complex question on which the synthesis study focused.

The synthesis design included fieldwork (interviews with key players and site visits to each project) as well as an extensive review of their independent evaluations. I identified common success factors that were manifest in all three projects. Those were illuminating but not surprising. The real contribution of the synthesis was in how the success factors fit together, an unanticipated pattern that deepened the implications for understanding effective philanthropy.

The 10 success factors common to all three programs were as follows:

1. Strong leadership, developed, engaged, and supported throughout the initiative

2. A sizeable amount of money ($15 million), able to attract attention and generate support

3. Effective use of leverage at every level of program operation (McKnight insisted on sizable matching funds and use of local in-kind resources from participating universities.)

4. A long-term perspective on and commitment to a sustainable program with cumulative impact over time—in perpetuity (Support for the programs was converted to an endowment.)

5. A carefully melded public–private partnership

6. A program based on a vision made real through a carefully designed model that was true to the vision

7. Taking the time and effort to carefully plan in a process that generated broad-based community and political support throughout the state

8. The careful structuring of local board control so that responsibility and ownership resided among key influentials

9. Taking advantage of the right timing and climate for this kind of program

10. Clear accountability and evaluation so that problems could be corrected and accomplishments could be recognized

While each of these factors provided insight into an important element of effective philanthropic programming, the unanticipated pattern was how these factors fit together to form a constellation of excellence. I found that I couldn’t prioritize these factors because they worked together in such a way that no one factor was primary or sufficient; rather, each made a critical contribution to an integrated, effectively functioning whole. The lesson that emerged for effective philanthropy was not a series of steps to follow but rather a mosaic to create; that is, effective philanthropy appears to be a process of matching and integrating elements so that the pieces fit together in a meaningful and comprehensive way as a solution to complex problems. This means matching people with resources; bringing vision and values to bear on problems; and nurturing partnerships through leverage, careful planning, community involvement, and shared commitments—and doing all these things in mutually reinforcing ways. The challenge for effective philanthropy, then, is putting all the pieces and factors together to support integrated, holistic, and high-impact efforts and results—and to do so creatively (Storm & Vitt, 2000, pp. 115–116).

Another example of a major synthesis focused on evaluation of the 2005 Paris Declaration on Aid Effectiveness, endorsed by more than 150 countries and international organizations. An independent evaluation examined what difference, if any, the Paris Declaration made to development processes and results (Wood et al., 2011). The final report was a synthesis of case studies done in 22 developing countries and 18 donor agencies. The synthesis identified the factors that contributed to international aid reform and barriers to more effective aid. (For the lessons learned, see Dabelstein & Patton, 2013a.)

Qualitative synthesis has become a major and important approach to making sense of multiple and diverse qualitative studies. It is possible only because there are now many qualitative studies in diverse fields of interest available for synthesis. Synthesis findings are elevating the contribution of qualitative results to both research and evaluation.

SIDEBAR

REALIST SYNTHESIS

“Realist synthesis might best be thought of as a way of assembling rocks, or nonstandardized pieces of knowledge” (Funnell & Rogers, 2011, p. 514). Developed by British sociologist and evaluator Ray Pawson (2013), realist synthesis selects and integrates any quality evidence on a topic, including experiments, quasi-experimental studies, and case studies. “Quality is not assessed with reference to a hierarchy of research designs for the entire study but by assessing whether threats to validity have been adequately addressed in terms of the specific piece of evidence being used” (p. 515). Realist synthesis uses purposeful sampling of diverse kinds of evidence available, both quantitative and qualitative, to develop, refine, and test theories about how, for whom, and in what contexts policies will be effective . . . [and to] identify causal mechanisms that operate only in particular contexts. Realist synthesis is inherently a process of building, testing, and refining program theory. (p. 515)


MODULE

70 Interpreting Findings, Determining Substantive Significance, Elucidating Phenomenological Essence, and Hermeneutic Interpretation

Simply observing and interviewing do not ensure that the research is qualitative; the qualitative researcher must also interpret the beliefs and behaviors of participants.

—Valerie J. Janesick (2000, p. 387)

Interpreting for Meaning

Qualitative interpretation begins with elucidating meanings. The analyst examines a story, a case study, a set of interviews, or a collection of field notes and asks, “What does this mean? What does this tell me about the nature of the phenomenon of interest?” In asking these questions, the analyst works back and forth between the data or story (the evidence) and his or her own perspective and understandings to make sense of the evidence. Both the evidence and the perspective brought to bear on the evidence need to be elucidated in this choreography in the search for meaning. Alternative interpretations are tried and tested against the data.

Interpretation, by definition, involves going beyond the descriptive data. Interpretation means attaching significance to what was found, making sense of findings, offering explanations, drawing conclusions, extrapolating lessons, making inferences, considering meanings, and otherwise imposing order on an unruly but surely patterned world. The rigors of interpretation and bringing data to bear on explanations include dealing with rival explanations, accounting for disconfirming cases, and accounting for data irregularities as part of testing the viability of an interpretation. All of this is expected—and appropriate—as long as the researcher owns the interpretation and makes clear the difference between description and interpretation. A good example is Reid Zimmerman’s (2014) description and interpretation of the seven deadly sayings of a nonprofit leader.

Schlechty and Noblit (1982) concluded that an interpretation may take one of three forms:

1. Making the obvious obvious

2. Making the obvious dubious

3. Making the hidden obvious

This captures rather succinctly what research colleagues, policymakers, and evaluation stakeholders expect: (1) confirm what we know that is supported by data, (2) disabuse us of misconceptions, and (3) illuminate important things that we didn’t know but should know. Accomplish these three things, and those interested in the findings can take it from there.

Explaining findings is an interpretive process. For example, when we analyzed follow-up interviews with participants who had gone through intensive community leadership training, we found a variety of expressions of uncertainty about what they should do with their training. On the final day of a six-day retreat, after learning how to assess community needs, work with diverse groups, communicate clearly, empower people to action, and plan for change, they were cautioned to go easy in transitioning back to their communities and to take their time in building community connections before taking action. What program staff meant as a last-day warning about not returning to the community as a bull in a china shop and charging ahead destructively had, in fact, paralyzed the participants and made them afraid to take any action at all. The program, which intended to position participants for action, had inadvertently left graduates in “action paralysis” for fear of making mistakes. The meaning-laden phrase “action paralysis” emerged from the data analysis through interpretation. No one used that specific phrase. Rather, we interpreted action paralysis as the essence of what the interviewees were reporting through a haze of uncertainties, ambiguities, worried musings, and wait-and-see-before-acting reflections.

Interpreting Findings at Different Levels of Analysis In the early 1990s, The McKnight Foundation in Minnesota invested $13 million in an innovative initiative titled The Aid to Families in Poverty Program. The initiative funded 34 programs using a variety of strategies. Our evaluation team conducted evaluations at the program level, the overall initiative level (synthesis of findings across all 34 programs), and the level of the broader policy environment that set the context for antipoverty interventions in Minnesota and the nation. The qualitative synthesis team interviewed program staff, conducted site visits, reviewed individual program evaluation reports, met with foundation program officers, and reviewed the national literature about poverty issues and programs. Here are examples of major findings and how we interpreted them. I invite you to pay special attention to how the findings and interpretations change at different levels of analysis for different units of analysis.

SIDEBAR

GENERAL THEORY OF INTERPRETATION

Interpretation is dependent on value, and judgments in all domains requiring interpretation are necessarily value judgments of some kind or other—though they start out from the descriptive facts.

Whenever we are faced with a normative system like law, whose point is to govern conduct, we cannot understand it simply as a pattern of behavior. We have to see it as an internalized set of standards and principles that the participants take to justify their behavior. But we need not share that point of view in order to understand it, even though we must rely on our own capacity for value judgments when we interpret how others see things as right that we believe to be wrong, and vice versa. We will not understand a bad system unless we see how its participants see it as good.

—Dworkin’s general theory of interpretation, from Ronald Dworkin: The Moral Quest (Nagel, 2013, p. 56)

1. Outcomes for Families in Poverty

Finding: Program participants and staff emphasized the importance of helping families move out of crisis. Taking first steps and arresting decline were consistently reported as important outcomes. A common early outcome was increased intentionality—helping families in poverty come up with a plan, a sense of direction, and a commitment to making progress.

Interpretation: Funders, policymakers, and evaluators place heavy emphasis on achieving long-term outcomes—getting families out of poverty. In doing so, they undervalue the huge amount of work it takes to establish trust with families in need, help stabilize families, and begin the process toward long-term outcomes. Early indicators of progress foreshadow longer-term outcomes and should be reported and valued.

2. Characteristics of Effective Program Staff

Finding: Effective staff approach program participants on a case-by-case basis. Their approach is respectful and individualized. They recognize that progress occurs in varying ways and at different rates, and what may be limited progress for one family may represent enormous strides for another. Moreover, effective staff are highly responsive to individual participants’ situations, needs, capabilities, interests, and family context. In being responsive and respectful, they work to raise hopes and empower participants by helping them make concrete, intentional changes.

Interpretation: Effective staff are the key to effective programs and achieving desired program outcomes for families in poverty. It wasn’t the program model or approach that made the difference. More important than the conceptual model or theory of change being implemented was how staff interacted with program participants. This has implications for staff recruitment, training, support, and performance evaluation.

3. Characteristics of Effective Programs

Finding: Effective programs support staff responsiveness by being flexible and giving staff discretion to take whatever actions assist participants to climb out of poverty. Flexible, responsive programs affect the larger systems of which they are a part by pushing against boundaries, arrangements, rules, procedures, and attitudes that hinder their capability to work flexibly and responsively—and therefore effectively—with participants.

Interpretation: Patterns of effectiveness cut across different levels of operation and impact and showed up in what we would call the program culture. How people are treated affects how they treat others. How staff are treated affects how they treat program participants. Responsiveness reinforces responsiveness, flexibility supports individualization, and empowerment breeds empowerment. Program directors, professional staff, and organizational administrators will often need training and technical assistance in setting up and working with flexible, responsive approaches.

4. Philanthropic Foundation Lessons

Finding: The foundation chose the initiative’s name without consultation and review in the community. Our interviews found that many program staff and participants reacted negatively to the initiative’s name: Aid to Families in Poverty Program.

Interpretation: Language matters to people. What an initiative is called sends messages. A name for the initiative that conveyed hope and strength rather than deficiency would have been received more positively. Failure to consult with people outside the foundation about the name increased the risk that the initiative’s title would inadvertently carry negative connotations (Patton, 1993).

Substantive Significance In lieu of statistical significance, qualitative findings are judged by their substantive significance. The analyst makes an argument for substantive significance in presenting findings and conclusions, but readers and users of the analysis will make their own value judgments about significance. In determining substantive significance, the analyst addresses these kinds of questions (a recording sketch follows the list):

• To what extent and in what ways do the findings increase and deepen understanding of the phenomenon studied (verstehen)?

• To what extent are the findings useful for their intended purpose, for example, contributing to theory, informing policy, improving a program, informing decision making about some action, or problem solving in action research?

• To what extent are the findings consistent with other knowledge? A finding supported by and supportive of other work has confirmatory significance. A finding that breaks new ground has discovery or innovative significance.

• How solid, coherent, and consistent is the evidence in support of the findings? Triangulation, for example, can be used in determining the strength of evidence in support of a finding.
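
For analysts who want to make these judgments auditable, the four questions above can be kept as an explicit rubric alongside each finding. The sketch below is a minimal Python illustration, not a procedure from this text: the 1–5 scale, the class name, and the field names are all hypothetical choices; only the four criteria come from the list above.

```python
from dataclasses import dataclass, field

# The four substantive-significance questions from the list above,
# compressed to short labels. The 1-5 rating scale is a hypothetical
# convention, not something prescribed by the text.
CRITERIA = (
    "deepens understanding of the phenomenon (verstehen)",
    "useful for the intended purpose",
    "consistency with other knowledge (confirmatory or discovery)",
    "solidity and coherence of supporting evidence (e.g., triangulation)",
)

@dataclass
class SignificanceRubric:
    finding: str
    ratings: dict = field(default_factory=dict)  # criterion -> score 1..5
    notes: dict = field(default_factory=dict)    # criterion -> supporting note

    def rate(self, criterion: str, score: int, note: str = "") -> None:
        if criterion not in CRITERIA:
            raise ValueError(f"unknown criterion: {criterion}")
        if not 1 <= score <= 5:
            raise ValueError("score must be 1-5")
        self.ratings[criterion] = score
        if note:
            self.notes[criterion] = note

    def summary(self) -> str:
        lines = [f"Finding: {self.finding}"]
        for c in CRITERIA:
            entry = f"  {c}: {self.ratings.get(c, 'unrated')} {self.notes.get(c, '')}"
            lines.append(entry.rstrip())
        return "\n".join(lines)

rubric = SignificanceRubric("Effective staff, not program model, drove outcomes")
rubric.rate(CRITERIA[3], 4, "pattern recurred across all 34 program evaluations")
print(rubric.summary())
```

The point is not the numbers but the audit trail: each rating is tied to a note stating the supporting argument, which readers and users of the analysis can then challenge with their own value judgments.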

Examples of Substantive Significance

Here are three examples of findings that, when interpreted, can be judged as substantively significant:

1. Case studies of 30 postdoctoral fellowship recipients found only 2 whose lives had been disrupted during the fellowship. Those 2 permanently lost the fellowship funds and were deemed failures by the fellowship’s sponsor and the fellows’ institutions. The case studies showed that the failures were due to the inflexibility of the funders in having no process for allowing a temporary sabbatical from the fellowship under conditions of sudden hardship. The consequences for the fellows were dramatic and long term, while a solution was readily at hand that could alleviate such dire consequences, which were likely to occur again in the future.

2. During a polio immunization campaign in India, a community of Muslim mothers heard a rumor that the immunization was a Hindu plot to sterilize Muslim children. So they hid their children from the vaccinators. A year later, those children were the source of an outbreak of polio. The numbers who resisted were small, but the consequences were great. The implication was that immunization campaigns need to monitor community perceptions in real time so that staff can intervene and correct misperceptions as they arise.

3. An international funder spent a large sum to build typhoon shelters in Bangladesh in low-lying areas near the ocean, where thousands of poor people were especially vulnerable to storms. When a typhoon hit, the shelters went largely unused because animals were prohibited, and these poor people’s most valuable resources were their animals, which they would not abandon. The funding agency had been told of this potential problem when it was identified from a few key informant interviews, but the agency dismissed the findings because of the small sample size.

Determining substantive significance requires critical thinking about the broader consequences of findings. Exhibit 8.16 provides another example of substantive significance.

Interpretation Requires Both Critical Thinking and Creativity

Identifying patterns, themes, and categories involves using both creative and critical faculties in making carefully considered judgments about what is meaningful and substantively significant in the data. Since as a qualitative analyst you do not have a statistical test to help tell you when an observation or pattern is significant, you must rely first on your own sense making, understandings, intelligence, experience, and judgment; second, you should take seriously the responses of those who were studied or who participated in the inquiry about what they have reported to you as meaningful and significant; and third, you should consider the responses and reactions of those who read and review the results. Where all three—the qualitative analyst, those studied, and reviewers—agree, one has consensual validation of the substantive significance of the findings. Where disagreements emerge, which is more usual, you get a more interesting life and the joys of debate.
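
That three-way check (analyst, those studied, reviewers) can also be recorded explicitly. The function below is a hypothetical illustration, not a method from the text: it reports consensual validation only when all three parties judge a finding substantively significant, and names the dissenters otherwise.

```python
# Hypothetical illustration of consensual validation: a finding counts as
# consensually validated only when all three parties (analyst, participants,
# reviewers) judge it substantively significant.
def consensual_validation(judgments: dict) -> str:
    parties = ("analyst", "participants", "reviewers")
    missing = [p for p in parties if p not in judgments]
    if missing:
        return "incomplete: no judgment from " + ", ".join(missing)
    if all(judgments[p] for p in parties):
        return "consensual validation of substantive significance"
    dissenters = [p for p in parties if not judgments[p]]
    return "disagreement (" + ", ".join(dissenters) + " dissent): debate follows"

print(consensual_validation(
    {"analyst": True, "participants": True, "reviewers": False}))
# -> disagreement (reviewers dissent): debate follows
```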

EXHIBIT 8.16 Substantive Significance Example: Minimally Disruptive Medicine

Chronic disease requires ongoing, lifetime management. This means that a patient must find a way to fit medical and lab appointments, exercise, medications, and dietary changes into a life already busy with family and work. Victor Montori, a professor of medicine at the Mayo Clinic, Carl May, a professor of medical sociology at Newcastle University, and Frances Mair, a professor of primary care research at the University of Glasgow, decided to study the burden of treatment by interviewing patients with multiple chronic comorbidities or cognitive impairment—two groups that are often excluded from studies examining compliance.

Case Study Examples

• A man who in the previous two years had visited specialist clinics for appointments, tests, and treatment 54 times (the equivalent of a full day every two weeks)

• A woman whose doctors had prescribed medications to be taken at 11 separate times during the day and who was having trouble managing this (Montori, 2014)

Though initially from a small purposeful sample, the findings were sufficiently substantive to inspire conceptualization of a new category of health intervention: Minimally Disruptive Medicine. Minimally disruptive medicine is minimally disruptive not because it is minimal but because it is designed to fit naturally into a patient’s life and be manageable on an ongoing basis.

Determining substantive significance involves distinguishing signal from noise, which involves risking two kinds of errors. First, the analyst may decide that something is not a signal—that is, not significant—when in fact it is; second, and conversely, the analyst may attribute significance to something that is meaningless (just noise). The Halcolm story presented as a graphic comic at the end of Chapter 6 (pp. 418–419) is worth repeating here to illustrate this challenge of making judgments about what is really significant.

Halcolm was approached by a woman who handed him something. Without hesitation, Halcolm returned the object to the woman. The many young disciples who followed Halcolm to learn his wisdom began arguing among themselves about the special meaning of this interchange. A variety of interpretations were offered.

When Halcolm heard of the argument among his young followers, he called them together and asked each one to report on the significance of what they had observed. They offered a variety of interpretations. When they had finished, he said, “The real purpose of the exchange was to enable me to show you that you are not yet sufficiently masters of observation to know when you have witnessed a meaningless interaction.”

SIDEBAR

INTEROCULAR SIGNIFICANCE

If we are interested in real significance, we ignore little differences . . . . We ignore them because, although they are very likely real, they are very unlikely to hold up in replications. Fred Mosteller, the great applied statistician, was fond of saying that he did not care much for statistically significant differences, he was more interested in interocular differences, the differences that hit us between the eyes. (Scriven, 1993, p. 71)

Phenomenology as an Interpretative Framework: Elucidating Essence

Phenomenology asks for the very nature of a phenomenon, for that which makes a some-“thing” what it is—and without which it could not be what it is.

—Van Manen (1990, p. 10)

Phenomenology as a qualitative theoretical framework was discussed at length in Chapter 3 (pp. 115–118). In this module, we’re going to focus on phenomenological analysis as an interpretative framework. Phenomenological analysis seeks to grasp and elucidate the meaning, structure, and essence of the lived experience of a phenomenon for a person or group of people. Before I present the steps of one particular approach to phenomenological analysis, it is important to note that phenomenology has taken on a number of meanings, has a number of forms, and encompasses varying traditions, including transcendental phenomenology, existential phenomenology, and hermeneutic phenomenology (Schwandt, 2007). Moustakas (1994) further distinguishes empirical phenomenology from transcendental phenomenology. Gubrium and Holstein (2000) add the label social phenomenology. Van Manen (1990) prefers “hermeneutical phenomenological reflection.” Sonnemann (1954) introduced the term phenomenography to label phenomenological investigation aimed at “a descriptive recording of immediate subjective experience as reported” (p. 344). Harper (2000) talks of looking at images through “the phenomenological mode”—that is, from the perspective of the self: “From the phenomenological perspective, photographs express the artistic, emotional, or experiential intent of the photographer” (p. 727). To this confusion of terminology is added the difficulty of distinguishing phenomenological philosophy from phenomenological methods and phenomenological analysis, all of which increases the tensions and contradictions in qualitative inquiry (Gergen & Gergen, 2000).

The use of the term phenomenology in contemporary versions of qualitative inquiry in North America tends to reflect a subjectivist, existentialist, and non-critical emphasis not present in the Continental tradition represented in the work of Husserl and Heidegger. The latter viewed the phenomenological project, so to speak, as an effort to get beneath or behind subjective experience to reveal the genuine, objective nature of things, and as a critique of both taken-for-granted meanings and subjectivism. Phenomenology, as it is commonly discussed in accounts of qualitative research, emphasizes just the opposite: It aims to identify and describe the subjective experiences of respondents. It is a matter of studying everyday experience from the point of view of the subject, and it shuns critical evaluation of forms of social life. (Schwandt, 2001, p. 192)

Phenomenological analysis involves and emphasizes different elements depending on which type of phenomenology you are using as a framework. I have chosen to focus on the phenomenological approach to analysis taken by Clark Moustakas, founder of The Center for Humanistic Studies (Detroit, Michigan). More than most, he has focused on the analytical process itself (Douglass & Moustakas, 1985; Moustakas, 1961, 1988, 1990, 1994, 1995). As we go deeper into the perspective and language of phenomenological analysis, let me warn you that the terminology and distinctions can be hard to grasp at first. But don’t skip over them lightly. These distinctions constitute windows into the world of phenomenological analysis. They matter. See if you can figure out why they matter. If you can, you will have grasped phenomenological interpretation.

Consciousness, Intentionality, Noema, and Noesis Husserl’s transcendental phenomenology is intimately bound up in the concept of intentionality. In Aristotelian philosophy the term intention indicates the orientation of the mind to its object; the object exists in the mind in an intentional way. . . .

Intentionality refers to consciousness, to the internal experience of being conscious of something; thus the act of consciousness and the object of consciousness are intentionally related. Included in understanding of consciousness are important background factors such as stirrings of pleasure, shapings of judgment, or incipient wishes. Knowledge of intentionality requires that we be present to ourselves and things in the world, that we recognize that self and world are inseparable components of meaning.

Consider the experience of joy on witnessing a beautiful landscape. The landscape is the matter. The landscape is also the object of the intentional act, for example, its perception in consciousness. The matter enables the landscape to become manifest as an object rather than merely exist in consciousness.

The interpretive form is the perception that enables the landscape to appear; thus the landscape is self-given; my perception creates it and enables it to exist in my consciousness. The objectifying quality is the actuality of the landscape’s existence, as such, while the non-objectifying quality is a joyful feeling evoked in me by the landscape.

Every intentionality is composed of a noema and noesis. The noema is not the real object but the phenomenon, not the tree but the appearance of the tree. The object that appears in perception varies in terms of when it is perceived, from what angle, with what background of experience, with what orientation of wishing, willing, or judging, always from the vantage point of the perceiving individual. . . . The tree is out there present in time and space while the perception of the tree is in consciousness. . . .

Every intentional experience is also noetic. . . .

In considering the noema-noesis correlate, . . . the “perceived as such” is the noema; the “perfect self-evidence” is the noesis. Their relationship constitutes the intentionality of consciousness. For every noema, there is a noesis; for every noesis, there is a noema. On the noematic side is the uncovering and explication, the unfolding and becoming distinct, the clearing of what is actually presented in consciousness. On the noetic side is an explication of the intentional processes themselves. . . .

Summarizing the challenges of intentionality, the following processes stand out:

1. explicating the sense in which our experiences are directed;

2. discerning the features of consciousness that are essential for the individuation of objects (real or imaginary) that are before us in consciousness (Noema);

3. explicating how beliefs about such objects (real or imaginary) may be acquired, how it is that we are experiencing what we are experiencing (Noesis); and

4. integrating the noematic and noetic correlates of intentionality into meanings and essences of experience. (Moustakas, 1994, pp. 28–32)

Epoche

If those are the challenges, what are the steps for meeting them? The first step in phenomenological analysis is called epoche.

Epoche is a Greek word meaning to refrain from judgment, to abstain from or stay away from the everyday, ordinary way of perceiving things. In a natural attitude we hold knowledge judgmentally; we presuppose that what we perceive in nature is actually there and remains there as we perceive it. In contrast, Epoche requires a new way of looking at things, a way that requires that we learn to see what stands before our eyes.

In the Epoche, the everyday understandings, judgments, and knowings are set aside, and the phenomena are revisited, freshly, naively, in a wide-open sense, from the vantage point of a pure or transcendental ego. (Moustakas, 1994, p. 33)

In taking on the perspective of epoche, the researcher looks inward to become aware of personal bias and to eliminate personal involvement with the subject material—that is, to eliminate, or at least gain clarity about, preconceptions. Rigor is reinforced by a “phenomenological attitude shift” accomplished through epoche.

The researcher examines the phenomenon by attaining an attitudinal shift. This shift is known as the phenomenological attitude. This attitude consists of a different way of looking at the investigated experience. By moving beyond the natural attitude or the more prosaic way phenomena are imbued with meaning, experience gains a deeper meaning. This takes place by gaining access to the constituent elements of the phenomenon and leads to a description of the unique qualities and components that make this phenomenon what it is. In attaining this shift to the phenomenological attitude, Epoche is a primary and necessary phenomenological procedure.

Epoche is a process that the researcher engages in to remove, or at least become aware of, prejudices, viewpoints or assumptions regarding the phenomenon under investigation. Epoche helps enable the researcher to investigate the phenomenon from a fresh and open viewpoint without prejudgment or imposing meaning too soon. This suspension of judgment is critical in phenomenological investigation and requires the setting aside of the researcher’s personal viewpoint in order to see the experience for itself. (Katz, 1987, pp. 36–37)

According to Ihde (1977), “Epoche requires that looking precede judgment and that judgment of what is ‘real’ or ‘most real’ be suspended until all the evidence (or at least sufficient evidence) is in” (p. 36). As such, epoche is an ongoing analytical process rather than a single fixed event. The process of epoche epitomizes the data-based, evidential, and empirical (vs. empiricist) research orientation of phenomenology.

Phenomenological Reduction, Bracketing, and Theme Analysis

Following epoche, the second step is phenomenological reduction. In this analytical process, the researcher brackets out the world and presuppositions to identify the data in pure form, uncontaminated by extraneous intrusions.

Bracketing is Husserl’s (1913) term. In bracketing, the researcher holds the phenomenon up for serious inspection. It is taken out of the world where it occurs. It is taken apart and dissected. Its elements and essential structures are uncovered, defined, and analyzed. It is treated as a text or a document; that is, as an instance of the phenomenon that is being studied. It is not interpreted in terms of the standard meanings given to it by the existing literature. Those preconceptions, which were isolated in the deconstruction phase, are suspended and put aside during bracketing. In bracketing, the subject matter is confronted, as much as possible, on its own terms. Bracketing involves the following steps:

(1) Locate within the personal experience, or self-story, key phrases and statements that speak directly to the phenomenon in question.

(2) Interpret the meanings of these phrases, as an informed reader.

(3) Obtain the subject’s interpretations of these phrases, if possible.

(4) Inspect these meanings for what they reveal about the essential, recurring features of the phenomenon being studied.

(5) Offer a tentative statement, or definition, of the phenomenon in terms of the essential recurring features identified in step 4. (Denzin, 1989b, pp. 55–56)
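
Denzin’s five steps translate naturally into a structured analytical record. The sketch below is one hypothetical way of organizing that record in Python (the class and field names are mine, not Denzin’s): each located phrase carries the researcher’s reading, the subject’s reading when obtainable, and the essential features it suggests, from which a tentative definition is drafted.

```python
from dataclasses import dataclass, field
from typing import Optional

# One bracketed phrase, following Denzin's steps 1-4: locate a key phrase,
# interpret it as an informed reader, obtain the subject's interpretation
# if possible, and inspect it for essential recurring features.
@dataclass
class BracketedPhrase:
    text: str                                   # step 1: phrase located
    researcher_reading: str                     # step 2: informed reading
    subject_reading: Optional[str] = None       # step 3: if obtainable
    essential_features: list = field(default_factory=list)  # step 4

def tentative_definition(phenomenon: str, phrases: list) -> str:
    """Step 5: a tentative statement of the phenomenon in terms of the
    essential features that recur across bracketed phrases."""
    counts: dict = {}
    for p in phrases:
        for feat in p.essential_features:
            counts[feat] = counts.get(feat, 0) + 1
    recurring = [feat for feat, n in counts.items() if n > 1]
    return f"{phenomenon}: tentatively defined by {recurring or 'nothing recurring yet'}"

phrases = [
    BracketedPhrase("I'm afraid to take any action at all",
                    "fear of misapplying the training",
                    essential_features=["fear of mistakes", "inaction"]),
    BracketedPhrase("I keep waiting for the right moment",
                    "indefinite postponement of first steps",
                    essential_features=["inaction", "uncertainty"]),
]
print(tentative_definition("action paralysis", phrases))
# -> action paralysis: tentatively defined by ['inaction']
```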

Imaginative Variation and Textural Portrayal

Once the data are bracketed, all aspects of the data are treated with equal value—that is, the data are “horizontalized.” The data are spread out for examination, with all elements and perspectives having equal weight. The data are then organized into meaningful clusters. Then, the analyst undertakes a delimitation process whereby irrelevant, repetitive, or overlapping data are eliminated. The researcher then identifies the invariant themes within the data to perform an imaginative variation on each theme. This can be likened to moving around a statue to see it from differing views. Through imaginative variation, the researcher develops enhanced or expanded versions of the invariant themes.
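
As a rough computational analogy (my construction, not Moustakas’s procedure), horizontalization, clustering, and delimitation can be pictured as operations on a flat list of meaning units: every statement enters with equal weight, units are grouped into meaningful clusters, and irrelevant, repetitive, or overlapping units are dropped, leaving the invariant themes.

```python
# A rough analogy for phenomenological reduction, not an implementation of it.
# Each meaning unit is (statement, theme); the themes are invented examples.
meaning_units = [
    ("fear of making mistakes", "paralysis"),
    ("waiting before acting", "paralysis"),
    ("waiting before acting", "paralysis"),    # repetitive: delimited out
    ("pride in new skills", "empowerment"),
    ("the weather was cold that week", None),  # irrelevant: delimited out
]

# Horizontalization: every unit enters with equal standing.
horizon = list(meaning_units)

# Clustering and delimitation: group by theme; sets collapse duplicates,
# and units with no bearing on the phenomenon are discarded.
clusters: dict = {}
for statement, theme in horizon:
    if theme is None:
        continue
    clusters.setdefault(theme, set()).add(statement)

invariant_themes = sorted(clusters)
print(invariant_themes)        # ['empowerment', 'paralysis']
print(clusters["paralysis"])   # duplicates collapsed
```

Imaginative variation then takes each surviving theme and views it from deliberately varied angles, something no data structure captures; the analogy ends where interpretation begins.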

Using these enhanced or expanded versions of the invariant themes, the researcher moves to the textural portrayal of each theme—a description of an experience that doesn’t contain that experience (e.g., the feelings of vulnerability expressed by rape victims). The textural portrayal is an abstraction of the experience that provides content and illustration but not yet essence.

Phenomenological analysis then involves a “structural description” that contains the “bones” of the experience for the whole group of people studied, “a way of understanding how the co-researchers as a group experience what they experience” (Moustakas, 1994, p. 142). In the structural synthesis, the phenomenologist looks beneath the affect inherent in the experience to deeper meanings for the individuals who, together, make up the group.

Synthesis and Essence

The final step requires “an integration of the composite textural and composite structural descriptions, providing a synthesis of the meanings and essences of the experience” (Moustakas, 1994, p. 144). In summary, the primary steps of the Moustakas transcendental phenomenological model are as follows: (a) Epoche, (b) phenomenological reduction, (c) imaginative variation, and (d) synthesis of texture and structure. Other detailed analytical techniques are used within each of these stages (see Moustakas, 1994, pp. 180–181).

Heuristic Inquiry According to Moustakas (1990), heuristic inquiry applies phenomenological analysis to one’s own experience. As such, it involves a somewhat different, highly personal analytical process. Moustakas describes five basic phases in the heuristic process of phenomenological analysis: (1) immersion, (2) incubation, (3) illumination, (4) explication, and (5) creative synthesis.

Immersion is the stage of steeping oneself in all that is—of contacting the texture, tone, mood, range, and content of the experience. This state “requires my full presence, to savor, appreciate, smell, touch, taste, feel, know without concrete goal or purpose” (Moustakas, 1988, p. 56). The researcher’s total life and being are centered on the experience. He or she becomes totally involved in the world of the experience—questioning, meditating, dialoging, daydreaming, and indwelling.

The second stage, incubation, is a time of “quiet contemplation” where the researcher waits, allowing space for awareness, intuitive or tacit insights, and understanding. In the incubation stage, the researcher deliberately withdraws, permitting meaning and awareness to awaken in their own time. One “must permit the glimmerings and awakenings to form, allow the birth of understanding to take place in its own readiness and completeness” (Moustakas, 1988, p. 50). This stage leads the way toward a clear and profound awareness of the experience and its meanings.

In the phase of illumination, expanding awareness and deepening meaning bring new clarity of knowing. Critical textures and structures are revealed so that the experience is known in all of its essential parameters. The experience takes on a vividness, and understanding grows. Themes and patterns emerge, forming clusters and parallels. New life and new visions appear along with new discoveries.

In the explication phase, other dimensions of meanings are added. This phase involves a full unfolding of the experience. Through focusing, self-dialogue, and reflection, the experience is depicted and further delineated. New connections are made through further explorations into universal elements and primary themes of the experience. The heuristic analyst refines emergent patterns and discovered relationships.

It is an organization of the data for oneself, a clarification of patterns for oneself, a conceptualization of concrete subjective experience for oneself, and integration of generic meanings for oneself, and a refinement of all these results for oneself. (Craig, 1978, p. 52)

What emerges is a depiction of the experience and a portrayal of the individuals who participated in the study. The researcher is ready now to communicate findings in a creative and meaningful way. Creative synthesis is the bringing together of the pieces that have emerged into a total experience, showing patterns and relationships. This phase points the way for new perspectives and meanings, a new vision of the experience. The fundamental richness of the experience and of the experiencing participants is captured and communicated in a personal and creative way. In heuristic analysis, the insights and experiences of the analyst are primary, including drawing on “tacit” knowledge that is deeply internal (Polanyi, 1967).

These brief outlines of phenomenological and heuristic analysis can do no more than hint at the in-depth living with the data that is intended. The purpose of this kind of disciplined analysis is to elucidate the essence of the experience of a phenomenon for an individual or a group. The analytical vocabulary of phenomenological analysis is initially alien, and potentially alienating, until the researcher becomes immersed in the holistic perspective, rigorous discipline, and paradigmatic parameters of phenomenology. As much as anything, this outline reveals the difficulty of defining and sequencing the internal intellectual processes involved in qualitative analysis more generally.

Phenomenology seeks to describe, elucidate, and interpret human experience. The product is a deep understanding of the nature and essence of the phenomenon studied. We turn next to hermeneutic interpretation and then, in the following module, to a quite different analytical priority: not just describing and interpreting but also explaining the world.

The Hermeneutic Circle and Interpretation

Hermes was messenger to the Greek gods. . . . Himself the god of travel, commerce, invention, eloquence, cunning, and thievery, he acquired very early in his life a reputation for being a precocious trickster. (On the day he was born he stole Apollo’s cattle, invented the lyre, and made fire.) His duties as messenger included conducting the souls of the dead to Hades, warning Aeneas to go to Italy, where he founded the Roman race, and commanding the nymph Calypso to send Odysseus away on a raft, despite her love for him. With good reason his name is celebrated in the term “hermeneutics,” which refers to the business of interpreting. . . . Since we don’t have a godly messenger available to us, we have to interpret things for ourselves. (Packer & Addison, 1989, p. 1)

Hermeneutics focuses on interpreting something of interest, traditionally a text or work of art; but in the larger context of qualitative inquiry, it has also come to include interpreting interviews and observed actions. The emphasis throughout concerns the nature of interpretation, and various philosophers have approached the matter differently, some arguing that there is no method of interpretation per se because everything involves interpretation (Schwandt, 2000, 2001). For our purposes here, the hermeneutic circle, as an analytical process aimed at enhancing understanding, offers a particular emphasis in qualitative analysis, namely, relating parts to wholes and wholes to parts.

Construing the meaning of the whole meant making sense of the parts, and grasping the meaning of the parts depended on having some sense of the whole. . . . The hermeneutic circle indicates a necessary condition of interpretation, but the circularity of the process is only temporary—eventually the interpreter can come to something approximating a complete and correct understanding of the meaning of a text in which whole and parts are related in perfect harmony. Said somewhat differently, the interpreter can, in time, get outside of or escape the hermeneutic circle in discovering the “true” meaning of the text. (Schwandt, 2001, p. 112)

The method involves playing the strange and unfamiliar parts of an action, text, or utterance off against the integrity of the action, narrative, or utterance as a whole until the meaning of the strange passages and the meaning of the whole are worked out or accounted for. (Thus, for example, to understand the meaning of the first few lines of a poem, I must have a grasp of the overall meaning of the poem, and vice versa.) In this process of applying the hermeneutic method, the interpreter’s self-understanding and socio-historical location neither affects nor is affected by the effort to interpret the meaning of the text or utterance. In fact, in applying the method, the interpreter abides by a set of procedural rules that help insure that the interpreter’s historical situation does not distort the bid to uncover the actual meaning embedded in the text, act, or utterance, thereby helping to insure the objectivity of the interpretation (Schwandt, 2001, p. 114).
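
Read procedurally, the circle resembles iterative refinement: interpret the parts in light of a provisional whole, revise the whole in light of the parts, and repeat until the two stabilize. The loop below is only a schematic sketch of that back-and-forth; interpret_part and revise_whole stand in for acts of human judgment, not computable operations, and every name here is my invention for illustration.

```python
# Schematic sketch of the hermeneutic circle as iterative refinement.
# interpret_part and revise_whole are placeholders for human judgment.
def hermeneutic_circle(parts, initial_whole, interpret_part, revise_whole,
                       max_rounds=10):
    whole = initial_whole
    readings = []
    for _ in range(max_rounds):
        readings = [interpret_part(p, whole) for p in parts]
        new_whole = revise_whole(whole, readings)
        if new_whole == whole:   # parts and whole in (temporary) harmony
            break
        whole = new_whole
    return whole, readings

# Toy usage: the "whole" is a working set of themes; each part contributes
# themes when read against the current whole. Real interpretation does not
# reduce to set union, of course.
parts = ["fear of mistakes", "waiting to act", "pride in new skills"]
interpret = lambda part, whole: ({"paralysis"}
                                 if "fear" in part or "waiting" in part
                                 else set())
revise = lambda whole, readings: whole.union(*readings)
final_whole, _ = hermeneutic_circle(parts, set(), interpret, revise)
print(final_whole)  # {'paralysis'}
```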

The circularity and universality of hermeneutics (every interpretation is layered in and dependent on other interpretations, like a series of dolls that fit one inside the other, and then another and another) pose for the qualitative analyst the problem of where to begin. How and where do you break into the hermeneutic circle of interpretation? Packer and Addison (1989), in adapting the hermeneutic circle as an inquiry approach for psychology, suggest beginning with “practical understanding”:

Practical understanding is not an origin for knowledge in the sense of a foundation; it is, instead, the starting place for interpretation. Interpretive inquiry begins not from an absolute origin of unquestionable data or totally consistent logic, but at a place delineated by our everyday participatory understanding of people and events. We begin there in full awareness that this understanding is corrigible, and that it is partial in the twin senses of being incomplete and perspectival. Understanding is always moving forward. Practical activity projects itself forward into the world from its starting place, and shows us the entities we are home among. This means that neither commonsense nor scientific knowledge can be traced back to an origin, a foundation. . . . (p. 23)

The circularity of understanding, then, is that we understand in terms of what we already know. But the circularity is not, Heidegger argues, a “vicious” one where we simply confirm our prejudices; it is an “essential” one without which there would be no understanding at all. And the circle is complete; there is accommodation as well as assimilation. If we are persevering and open, our attention will be drawn to the projective character of our understanding and—in the backward arc, the movements of return—we gain an increased appreciation of what the forestructure involves, and where it might best be changed. . . . (p. 34)

Hermeneutic inquiry is not oriented toward a grand design. Any final construction that would be a resting point for scientific inquiry represents an illusion that must be resisted. If all knowledge were to be at last collected in some gigantic encyclopedia this would mark not the triumph of science so much as the loss of our human ability to encounter new concerns and uncover fresh puzzles. So although hermeneutic inquiry proceeds from a starting place, a self-consciously interpretive approach to scientific investigation does not come to an end at some final resting place, but works instead to keep discussion open and alive, to keep inquiry under way. (p. 35)

At a general level and in a global way, hermeneutics reminds us of the interpretive core of qualitative inquiry, the importance of context and the dynamic whole–part interrelations of a holistic perspective. At a specific level and in a particularistic way, the hermeneutic circle offers a process for formally engaging in interpretation.

Theory-Driven Qualitative Findings This module has looked in some depth at how two theory-based inquiry perspectives, phenomenology and hermeneutics, prescribe different analytical processes and produce different kinds of findings. To further emphasize this point, Exhibit 8.17 highlights how 10 different theoretical perspectives yield different kinds of findings due to the distinct focus of inquiry embedded in each theoretical perspective. Ethnography directs the inquiry to elucidate the nature of culture, whether for tribes, organizations, or programs. Social constructionism captures mental models and worldviews. Realism documents contextually operative causal mechanisms. Theories of change differentiate patterns of change and change trajectories. Systems theory illuminates interrelationships. Complexity theory invites studies of adaptation patterns, emergence, and simple rules. Phenomenology aims to elucidate the essence of the phenomenon studied. Grounded theory inquiries yield theoretical propositions and hypotheses. Hermeneutics interprets the meanings of texts. Pragmatism inquires into how things work and how they are used. Thus, operating within a particular theoretical orientation (see Chapter 3) provides focus, inquiry processes, and analytical procedures and yields certain kinds of findings that are a matter of core interest and priority for the community of inquiry engaged in studying the world through the lenses offered by that shared theoretical perspective. Exhibit 8.17 cites research and evaluation examples illuminating the differences in types of findings among these theoretical orientations.
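
The ten pairings in this paragraph can be laid out as a simple lookup table. The dictionary below restates the mapping from the prose; the content comes from the paragraph above, and only the structure is mine.

```python
# Which kinds of findings each theoretical perspective yields,
# restated from the paragraph above.
FINDINGS_BY_PERSPECTIVE = {
    "ethnography": "the nature of culture (tribes, organizations, programs)",
    "social constructionism": "mental models and worldviews",
    "realism": "contextually operative causal mechanisms",
    "theories of change": "patterns of change and change trajectories",
    "systems theory": "interrelationships",
    "complexity theory": "adaptation patterns, emergence, and simple rules",
    "phenomenology": "the essence of the phenomenon studied",
    "grounded theory": "theoretical propositions and hypotheses",
    "hermeneutics": "the meanings of texts",
    "pragmatism": "how things work and how they are used",
}
print(FINDINGS_BY_PERSPECTIVE["phenomenology"])
```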

EXHIBIT 8.17 Findings Yielded by Various Theoretical Perspectives With Research and Evaluation Examples

Different theoretical perspectives yield different kinds of findings due to the distinct focus of inquiry embedded in each theoretical perspective. Conducting research or evaluation within a particular theoretical orientation provides focus, inquiry processes, and analytical procedures and yields certain kinds of findings that are a matter of core interest and priority for the community of inquiry engaged in studying the world through the lenses offered by that shared theoretical perspective. This exhibit cites research and evaluation examples illuminating the differences in types of findings among 10 theoretical orientations.

Summary: Interpretation

This module has focused on interpreting findings, determining substantive significance, elucidating phenomenological essence, understanding hermeneutic interpretation, and distinguishing how different theory-based inquiries yield certain kinds of findings. The core theme has been interpretation. Interpretation is how we make sense of data. Interpretation is both an inevitability and a necessity. But as philosopher Friedrich Nietzsche added, “Necessity is not an established fact, but an interpretation.” The same can be said of causal inference, our next subject.

[Cartoon by Brazilian cartoonist Claudius Ceccon. Used with permission.]

MODULE

71 Causal Explanation Through Qualitative Analysis

A scientist puts a frog on a table and yells, “Jump!” The frog jumps. He then surgically removes a leg from the frog, puts the frog back on the table, and yells, “Jump!” The frog jumps again. The scientist surgically removes another limb and repeats the experiment. The frog jumps when commanded, just like before. He does it a third time. “Jump!” he exclaims after removing another limb, and the frog jumps. He removes the last limb on the frog and tries again. “Jump!” he yells, but the frog remains still. “Jump!” he repeats, but no response.

The scientist writes his findings in his notebook: “Upon removing all four limbs, the frog becomes deaf.”

This rather grotesque frog story is meant to facilitate the transition from Module 70, which focused on interpreting what qualitative data mean, to this module, which focuses on causal explanation. In essence, we are moving from interpreting findings to explaining them. Causal inference is a particularly treacherous and important form of qualitative analysis and interpretation.

Causality as Explanation

We construct reality, not exclusively but importantly, in terms of cause and effect. We must—else our individual worlds would be chaotic jumbles of actions unrelated to consequences.

—Northcutt and McCoy (2004, p. 169), Interactive Qualitative Analysis

Description tells what happened and how the story unfolded. Interpretation elucidates what the description means and judges what makes it significant. Then comes the “why?” question. Why did things unfold as they did? The answer: Because.

“Causation is intimately related to explanation; asking for explanation of an event is often to ask why it happened” (Schwandt, 2001, p. 23).

• The homeless youth moved into housing and enrolled in school or got a job because of what they experienced, how they were treated, and what they learned in the program serving homeless youth.

• The crime rate in the community is low because people look out for and support each other.

• Climate change is occurring because of human activity.

• The gap between rich and poor is increasing because of public policies favoring wealth accumulation by those already rich.

To use the verb “because” is to posit an explanation and assert a causal connection. We make such assertions all the time in casual conversation. “You caused me to be late for my meeting because you didn’t wake me.” “You caused the car accident because you were texting on your phone and not paying attention to the road.” When researchers and evaluators make causal claims, however, evidence is required to support the assertion. Yet what counts as credible evidence is a matter of vociferous debate (Donaldson, Christie, & Mark, 2008). “Exactly how to define a causal relationship is one of the most difficult topics in epistemology and the philosophy of science. There is little agreement on how to establish causation” (Schwandt, 2001, p. 23).

Stake (1995) has emphasized that

explanations are intended to promote understanding and understanding is sometimes expressed in terms of explanation—but the two aims are epistemologically quite different . . . , a difference important to us, the difference between case studies seeking to identify cause and effect relationships and those seeking understanding of human experience. (p. 38)

Appreciating and respecting this distinction means recognizing that once case studies have been written and descriptive typologies have been developed and supported, the tasks of organization and description are largely complete. Whether to take the next step in analysis and posit causality depends on the purpose of the inquiry and whether it has been designed to answer causal questions. As emphasized at the beginning of this chapter, analysis is driven by purpose. Purpose determines design. Design frames and focuses analysis.

In program evaluation, for example, explanations about which things appear to lead to other things, which aspects of a program produce certain effects, and how processes lead to outcomes are natural areas for analysis. When careful study of the data gives rise to ideas about causal linkages, there is no reason to deny those interested in the study’s results the benefit of those insights, with presentation of the supporting evidence from interviews, case studies, and field observations.

SIDEBAR

WHY WE’RE SO OBSESSED WITH CAUSALITY

We are pattern seekers, believers in a coherent world, in which regularities appear not by accident but as a result of mechanical causality or of someone’s intention.

—Daniel Kahneman (cognitive scientist) (Quoted in Pomeroy, 2013, p. 1)

Humans are creatures of causality. We like effects to have causes, and we detest incoherent randomness. Why else would the quintessential question of existence give rise to so many sleepless nights, endear billions to religion, or single-handedly fuel philosophy?

This predisposition for causation seems to be innate. In the 1940s, psychologist Albert Michotte theorized that “we see causality, just as directly as we see color,” as if it is omnipresent. To make his case, he devised presentations in which paper shapes moved around and came into contact with each other. When subjects— who could only see the shapes moving against a solid-colored background—were asked to describe what they saw, they concocted quite imaginative causal stories. . . .

Humanity’s need for concrete causation likely stems from our unceasing desire to maintain some iota of control over our lives. That we are simply victims of luck and randomness may be exhilarating to a madcap few, but it is altogether discomforting to most. By seeking straightforward explanations at every turn, we preserve the notion that we can always affect our condition in some meaningful way. Unfortunately, that idea is a facade. Some things don’t have clear answers. Some things are just random. Some things simply can’t be controlled. (Pomeroy, 2013, p. 1)

Reporting the Causal Assertions of Others Versus Inferring Causality Yourself

To the extent that you describe and report the causal linkages suggested by and believed in by those you’ve interviewed, you haven’t crossed the line from description into causal interpretation. You are simply reporting their beliefs and assertions.

Much qualitative inquiry stops at reporting the explanations of the people studied. This can take the form of presenting indigenous explanations through quotations, presenting case data, and/or identifying cross-case patterns and themes that describe how people explain key phenomena of interest, but without the analyst adding any additional explanation of “why” the indigenous causal assertions take the form they do. People and groups construct causal explanations, for example, that the gods are angry when it thunders. How people explain things can suffice. Indeed, the study of the attributions people make (how they explain things) offers a vast panorama for qualitative inquiry.

Attributions refer to people’s understandings of themselves and their environment. . . . Someone who has just passed a driving test at the first attempt may attribute his or her success to good fortune; another might attribute it to a happy choice of driving school; a third might attribute it to their own natural driving talent; and so on. Attributions such as these are often made about matters of moment in our lives. . . .

Interest in attribution stems from its crucial role as a mediator of perceptions, emotions, motivations and behaviours. Indeed, how people see cause and effect has implications for or may influence their interpersonal relations, their psychopathology, their response to psychotherapy, their decision making, and their adjustment to illness. . . . Attributions may serve many other functions, including enhancement of one’s perception of control, preservation of self-esteem, presentation of a particular picture of the self, and emotional release. (Harvey, Turnquist, & Agostinelli, 1998, pp. 32–33)

Causal Explanation Grounded in Fieldwork

A researcher who has lived in a community for an extensive period of time will likely have insights into why things happen as they do there. A qualitative analyst who has spent hours interviewing people will likely come away from the analysis with possible explanations for how the phenomenon of interest takes the forms and has the effects it does. An evaluator who has studied a program, lived with the data from the field, and reflected at length about the patterns and themes that run through the data is in as good a position as anyone else at that point to interpret meanings, make judgments about significance, and offer explanations about relationships. Moreover, if decision makers and evaluation users have asked for such information—and in my experience they virtually always welcome causal analyses—there is no reason not to share insights with them to help them think about their own causal presuppositions and hypotheses, and to explore what the data do and do not support in the way of interconnections and potential causal relationships. But doing so remains controversial among those who lack training in qualitative causal analysis. I take the position that qualitative causal analysis has now advanced in rigor and credibility to the point where it should be valued on its merits, and this module will demonstrate those merits. We’ll examine different perspectives on qualitative causal explanations and ways of increasing the credibility and utility of such analyses.

Historical Context: Approaching Causation Cautiously

The qualitative/quantitative debate of the past century positioned qualitative methods as primarily descriptive and quantitative methods as explanatory. Internal validity in experimental designs concerns the extent to which causal claims are warranted and can be substantiated. Randomized controlled trials (RCTs) involve manipulating a variable (the independent variable) to measure what effect it has on a second variable of interest (the dependent variable). This quantitative/experimental method, first used in agricultural and pharmaceutical studies and then applied in psychology and social science, became advocated as the “gold standard” for establishing causality. My MQP Rumination opposing this gold standard designation is in Chapter 3 (pp. 93–95). For our purpose here, the historically important point is that the experimental method became synonymous with causal research. RCTs had high internal validity. Case studies had low to no internal validity, meaning that they could describe but not explain (Campbell & Stanley, 1963; Shadish, Cook, & Campbell, 2001).

This remains the dominant perspective to this day, but developments in qualitative methods and analysis over the past two decades have demonstrated that qualitative analysis can yield causal explanations rigorously and credibly. This assertion remains controversial, largely because the advocates of experimental designs have been so successful in positioning experiments as the gold standard and the only way to establish causality. This module will present and explain how qualitative analysis can be used to generate causal explanations. Exhibit 8.19, in Module 72 (pp. 600–601), summarizes approaches to causal explanation in qualitative analysis.

SIDEBAR

THE CAUSAL WARS ARE RAGING

The causal wars are still raging, and the amount of collateral damage is increasing.

The causal wars are about what is to count as scientifically impeccable evidence of a causal connection, usually in the context of the evaluation of interventions into human affairs.

The collateral damage comes from the policy that the RCT camp has been supporting with considerable success, here referred to as “the exclusionary policy,” which recommends that no (or almost no) programs be funded whose claims of good effects cannot be supported by randomized controlled trials (RCT)-based evidence. This means terminating many demonstrably excellent programs currently saving huge numbers of life-years.

After reviewing the causal wars and alternatives for making causal inferences, Scriven (2008) concludes, “In sum, there is absolutely nothing imperative, and nothing in general superior, about . . . RCT designs” (p. 23).

SOURCE: From the introduction to “A Summative Evaluation of RCT Methodology and an Alternative Approach to Causal Research,” by philosopher of science and evaluation research pioneer Michael Scriven (2008, p. 11).

Qualitative Causal Analysis as Speculative: Classic Advice to Proceed With Caution

Qualitative researchers . . . have generally denied that they were seeking causal explanations, arguing that their goal was the interpretive understanding of meanings rather than the identification of causes.

—Maxwell and Mittapalli (2008, pp. 322–323)

Lofland’s (1971) advice in his classic and influential book Analyzing Social Settings offers a cautionary perspective on the role of causal “speculation” in qualitative analysis. He argued that the strong suit of the qualitative researcher is the ability “to provide an orderly description of rich, descriptive detail” (p. 59); the consideration of causes and consequences using qualitative data should be a “tentative, qualified, and subsidiary task” (p. 62).

It is perfectly appropriate that one be curious about causes, so long as one recognizes that whatever account or explanations he develops is conjecture. In more legitimacy-conferring terms, such conjectures are called hypotheses or theories. It is proper to devote a portion of one’s report to conjectured causes of variations so long as one clearly labels his conjectures, hypotheses or theories as being that. (p. 62)

Neuendorf (2002) follows in this cautious tradition by setting such a high standard for determining causality, especially from content analysis of qualitative data, that it “is generally impossible to fully achieve . . . ; true causality is essentially an unattainable goal” (pp. 47–48).

In contrast to this cautious approach to drawing causal inferences from qualitative data, other qualitative analytical frameworks focus on causal explanation as a primary purpose, an attainable outcome, and even the strength of case studies.

Rigorous Qualitative Causal Analysis

The Case for Valuing Direct Observation

The most straightforward form of causal attribution involves direct observation within a short time frame, where cause and effect can be directly linked. For example, you go out to a restaurant with friends, and all of you eat raw oysters. After the meal, all of you experience stomach sickness. It is not wild speculation to conclude that the oysters caused the sickness.

During a site visit to an employment training program, I witnessed a staff member yelling at a participant. The participant immediately left the building. I followed him and asked if I could talk with him for a moment. He said, “No, I’m finished with that program. Done. I won’t stand to be yelled at like that. I’m so out of there.” He turned and went his way. I checked subsequent attendance. He didn’t return. It seems reasonable to conclude that being yelled at was at least a contributing cause of his dropping out.

The idea of establishing causality has taken on such heavy meaning philosophically, methodologically, and epistemologically that the very idea can be daunting and intimidating (Maxwell, 2004).

Thus, qualitative analysts have often been advised to eschew causal language and avoid making causal assertions. Reclaiming causal analysis as a reasonable and valid form of qualitative analysis begins with recognizing that direct, critical observation yields causal understandings.

The real “gold standard” for causal claims is the same ultimate standard as for all scientific claims; it is critical observation. Causation can be directly observed, in [a] lab or home or field, usually as one of many contextually embedded observations, such as lead being melted by heating a crucible, eggs being fried in a pan, or a hawk taking a pigeon. And causation can also be inferred from non-causal direct observations with no experimentation, as by the forensic pathologist performing an autopsy to determine the cause of death. (Scriven, 2008, p. 18)

One end of the observational continuum is seeing an action and a reaction that allows a direct, immediate conclusion that the action caused the reaction. At the other end of the continuum is long-term observation, including participant observation. Such in-depth qualitative studies yield detailed, comprehensive data about what happened in the observed settings, both how and why what happened happened. Spending a long time in a field setting engaging in ongoing observations, interviewing people, and tracking the details of what occurs makes it possible to connect the dots and explain what has occurred. When I read a classic high-quality, in-depth, comprehensive, field-based case study (e.g., Becker, Geer, Hughes, & Strauss, 1961; Goffman, 1961), the causal explanations offered strike me as valid, warranted, well supported, and consistent with the great volume of evidence presented.

What Constitutes Credible Evidence?

The fact that I find in-depth case studies persuasive does not mean that others do. Valuing different kinds of evidence is what the causal attribution war is about. Consider the challenge of eradicating intestinal worms in children, a widespread problem in many developing countries. Suppose we want to evaluate an intervention in which school-age children with diarrhea are given deworming medicine to increase their school attendance and performance. To attribute the desired outcome to the intervention, advocates of RCTs would insist on an evaluation design in which students suffering from diarrhea are randomly divided into a treatment group (those who receive worm medicine) and a control group (those who do not receive the medicine). The school attendance and test performance of the two groups would then be compared. If, after a month on the medicine, those receiving the intervention show higher attendance and school performance at a statistically significant level compared with the control group (the counterfactual), then the improved outcomes can be attributed to the intervention (the worm medicine).

Advocates of qualitative inquiry would question the value of the control group in this case. Suppose that students, parents, teachers, and local health professionals are interviewed about the reasons why students miss school and perform poorly on tests. Independently, each of these groups asserts that diarrhea is a major cause of the poor school attendance and performance. Gathering data separately from different informant groups (students, parents, teachers, and health professionals) is a form of triangulation, a way of checking the consistency of findings from different data sources. Following the baseline interviews, students are given a regimen of worm medicine. Those taking the medicine show increased school attendance and performance, and in follow-up interviews, the students, parents, teachers, and health professionals independently affirm their belief that the changes can be attributed to taking the worm medicine and being relieved of the symptoms of diarrhea. Is this credible, convincing evidence?

Those who find such a design sufficient would argue that the results are both reasonable and empirical and that a costly control group is not needed to establish causality. Nor, they would assert, is it ethical to withhold medicine from students with diarrhea when relieving their symptoms has merit in and of itself. The advocates of RCTs would respond that without the control group, other unknown factors may have intervened to affect the outcomes and that only the existence of a counterfactual (control group) will establish with certainty the impact of the intervention.

As this example illustrates, those evaluators and methodologists on opposite sides of this debate have different worldviews about what constitutes sufficient evidence for attribution and action in the real world. This is not simply an academic debate. At stake are millions of dollars of evaluation funds and the credibility of different kinds of and approaches to evaluations around the world.

Thus far, I’ve been presenting the case for using high-quality, detailed, context-specific qualitative data to make causal inferences. When causal findings from fieldwork are explained through relevant theory, the explanations move to a higher level of generalizability. We turn now to theory-based approaches to causal inference.

Theory-Based Causal Analysis

Realist Analysis to Identify Causal Mechanisms

Qualitative inquiry grounded in realist philosophy and methods makes causal analysis the central focus. The overarching question for realist inquiry is “What are the causal mechanisms that explain how and why reality unfolds as it does in a particular context?” (See Chapter 3, pp. 111–114.) Causal explanation comes from carefully documenting and analyzing the actual mechanisms and processes that are involved in particular events and situations.

These mechanisms and processes can include mental phenomena as well as physical phenomena and can be identified in unique events as well as through regularities. This position’s emphasis on understanding processes, rather than on simply showing an association between variables, provides an alternative approach to causal explanation that is particularly suited to qualitative research. It incorporates qualitative researchers’ emphasis on meaning for actors and on unique contextual circumstances, and by treating causal processes as real events, it implies that these may be observed directly rather than only inferred. Thus, it removes the restriction that causal inference requires the comparison of situations in which the presumed cause is present or absent. (Maxwell & Mittapalli, 2008, p. 323)

Emmel (2013) does not hesitate to make drawing causal inferences a priority purpose of realist inquiries because a “scientific realist sampling strategy” (p. 95) is always based in theory.

Explanation and interpretation in a realist sampling strategy tests and refines theory. Sampling choices seek out examples of mechanisms in action, or inaction towards being able to say something explanatory about their causal powers. Sampling . . . is both pre-specified and emergent, it is driven forward through an engagement with what is already known about that which is being investigated and ideas catalyzed through engagement with empirical accounts. (p. 85)

This illustrates how purpose and design drive analysis. Cases sampled are chosen purposefully because they will illuminate causal mechanisms. The inquiry involves detailed description and analysis of the real connections between events in such a way that the events can be validly explained as “causally connected” (Emmel, 2013, p. 99). Emmel (2013) uses the image of splitting open a chicken (“spatchcocking”) to study its inner parts as analogous to realist qualitative analysis:

Splitting a chicken down its breastbone and opening it up to reveal the details of its thoracic and abdominal cavities. In a similar way, in research we will split these things, these variables . . . open and lay bare their anatomy for scrutiny and explanation through theorisation and empirical investigation. In the process of which we will be able to better describe, interpret, and, ultimately, explain the sample. (p. 100)

SIDEBAR

WHY THE GERMAN BOMBING OF LONDON IN WORLD WAR II DID NOT DEMORALIZE THE BRITISH

Both British and German leaders expected the intense aerial bombing of London in World War II to create panic. London’s residents were expected to flee to the countryside, leaving the city abandoned. Over an eight-month period of incessant bombing in 1940 and 1941, more than 40,000 people were killed, 46,000 injured, and a million buildings damaged or destroyed (Gladwell, 2013, p. 129). But the panic never came. Why?

Canadian psychiatrist J. T. MacCurdy (1943) offered an explanation in his book The Structure of Morale. He categorized three groups of people affected by the bombing: (1) those killed, (2) near misses, and (3) remote misses. He interviewed and conducted case studies of near misses and remote misses. His qualitative comparative analysis led to a causal explanation.

So why were Londoners so unfazed by the Blitz? Because forty thousand deaths and forty-six thousand injuries —spread across a metropolitan area of more than eight million people—means that there were many more remote misses who were emboldened by the experience of being bombed than there were near misses who were traumatized by it. (Gladwell, 2013, p. 133)

“We are all of us not merely liable to fear,” MacCurdy went on,

we are also prone to be afraid of being afraid, and the conquering of fear produces exhilaration. . . . When we have been afraid that we may panic in an air-raid, and, when it has happened, we have exhibited to others nothing but a calm exterior and we are now safe, the contrast between the previous apprehension and the present relief and feeling of security promotes a self-confidence that is the very father and mother of courage. (MacCurdy, quoted by Gladwell, 2013, p. 133)

In the midst of the Blitz, a middle-aged laborer in a button-factory was asked if he wanted to be evacuated to the countryside. He had been bombed out of his house twice. But each time he and his wife had been fine. He refused. “What, and miss all this?” he exclaimed. “Not for all the gold in China! There’s never been nothing like it! Ever! And never will be again.” (Gladwell, 2013, pp. 132–133)

Grounded Theory and Causal Analysis

Theory denotes a set of well-developed categories (e.g., themes, concepts) that are systematically interrelated through statements of relationship to form a theoretical framework that explains some relevant social, psychological, educational, nursing, or other phenomenon. The statements of relationship explain who, what, when, where, why, how, and with what consequences an event occurs. Once concepts are related through statements of relationship into an explanatory theoretical framework, the research findings move beyond conceptual ordering to theory. . . . A theory usually is more than a set of findings; it offers an explanation about phenomena. (Strauss & Corbin, 1998, p. 22)

Chapter 3 provided an overview of grounded theory in the context of other theoretical perspectives like ethnography, constructivism, phenomenology, and hermeneutics. As I noted in Chapter 3, grounded theory has opened the door to qualitative inquiry in many traditional academic social science and education departments, especially as a basis for doctoral dissertations. I believe this is, in part, because of its overt emphasis on the importance of and specific procedures for generating explanatory theory. Being systematic gets particular emphasis.

By systematic, I still mean systematic every step of the way; every stage done systematically so the reader knows exactly the process by which the published theory was generated. The bounty of adhering to the whole grounded theory method from data collection through the stages to writing, using the constant comparative method, show how well grounded theory fits, works, and is relevant. Grounded theory produces a core category and continually resolves a main concern, and through sorting the core category organizes the integration of the theory. . . . Grounded theory is a package, a lock-step method that starts the researcher from a “know nothing” to later become a theorist with a publication and with a theory that accounts for most of the action in a substantive area. The researcher becomes an expert in the Substantive area. . . . And if an incident comes his way that is new he can humbly through constant comparisons modify his theory to integrate a new property of a category.

Grounded theory methodology leaves nothing to chance by giving you rules for every stage on what to do and what to do next. If the reader skips any of these steps and rules, the theory will not be as worthy as it could be. The typical falling out of the package is to yield to the thrill of developing a few new, capturing categories and then yielding to use them in unending conceptual description and incident tripping rather than analysis by constant comparisons. (Glaser, 2001, pp. 1–2)

SIDEBAR

NO PRECONCEPTIONS: THE GROUNDED THEORY DICTUM

Preconceived questions, problems, and codes all block emergent coding, thereby undermining the foundation of Grounded Theory. Entering the field without preconceptions is the fundamental grounded theory dictum. The grounded theory researcher begins an inquiry without knowing the participant’s issues, worldview, basic concepts, or sense-making framework. These emerge through the course of the inquiry.

—Barney G. Glaser (2014b) No Preconceptions: The Grounded Theory Dictum

Other grounded theory resources are as follows:

STOP, WRITE!: Writing Grounded Theory (Glaser, 2012)

Memoing: A Vital Grounded Theory Procedure (Glaser, 2014a)

In their book on techniques and procedures for developing grounded theory, Strauss and Corbin (1998) emphasize that analysis is the interplay between the researcher and data, so what grounded theory offers as a framework is a set of “coding procedures” to “help provide some standardization and rigor” to the analytical process. Grounded theory is meant to “build theory rather than test theory.” It strives to “provide researchers with analytical tools for handling masses of raw data.” It seeks to help qualitative analysts “consider alternative meanings of phenomenon.” It emphasizes being “systematic and creative simultaneously.” Finally, it elucidates “the concepts that are the building blocks of theory.” Grounded theory operates from a correspondence perspective in that it aims to generate explanatory propositions that correspond to real-world phenomena. The characteristics of a grounded theorist, they posit, are these:

1. The ability to step back and critically analyze situations

2. The ability to recognize the tendency toward bias

3. The ability to think abstractly

4. The ability to be flexible and open to helpful criticism

5. Sensitivity to the words and actions of respondents

6. A sense of absorption and devotion to the work process (p. 7)

According to Strauss and Corbin (1998), grounded theory begins with basic description, then moves to conceptual ordering (organizing data into discrete categories “according to their properties and dimensions and then using description to elucidate those categories” [p. 19]) and then theorizing: “conceiving or intuiting ideas—concepts—then also formulating them into a logical, systematic, and explanatory scheme” (p. 21).

In doing our analyses, we conceptualize and classify events, acts, and outcomes. The categories that emerge, along with their relationships, are the foundations for our developing theory. This abstracting, reducing, and relating is what makes the difference between theoretical and descriptive coding (or theory building and doing description). Doing line-by-line coding through which categories, their properties, and relationships emerge automatically takes us beyond description and puts us into a conceptual mode of analysis. (p. 66)

Strauss and Corbin (1998) have defined terms and processes in ways that are quite specific to grounded theory. It is informative to compare the language of grounded theory with the language of phenomenological analysis presented in the previous module. Here’s a sampling of important terminology.

Microanalysis: “The detailed line-by-line analysis necessary at the beginning of a study to generate initial categories (with their properties and dimensions) and to suggest relationships among categories; a combination of open and axial coding” (p. 57).

Theoretical sampling: “Sampling on the basis of the emerging concepts, with the aim being to explore the dimensional range or varied conditions along which the properties of concepts vary” (p. 73).

Theoretical saturation: “The point in category development at which no new properties, dimensions, or relationships emerge during analysis” (p. 143).

Range of variability: “The degree to which a concept varies dimensionally along its properties, with variation being built into the theory by sampling for diversity and range of properties” (p. 143).

Open coding: “The analytic process through which concepts are identified and their properties and dimensions are discovered in data” (p. 101).

Axial coding: “The process of relating categories to their subcategories, termed ‘axial’ because coding occurs around the axis of a category, linking categories at the level of properties and dimensions” (p. 123).

Relational statements: “We call these initial hunches about how concepts relate ‘hypotheses’ because they link two or more concepts, explaining the what, why, where, and how of a phenomenon” (p. 135).
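None of this coding can be automated, but its bookkeeping can be made explicit. The following minimal sketch (in Python, with hypothetical category names and excerpts) records an analyst's open-coding assignments and accumulates each category's properties, so that theoretical saturation, in the sense defined above, can be monitored as the point when new incidents stop contributing new properties. It illustrates the record keeping only, not the analytic judgment.

from dataclasses import dataclass, field

# Minimal sketch of the record keeping behind open coding and constant
# comparison. Category names, properties, and the decision about where
# an incident belongs are the analyst's judgments; the code only stores
# assignments and accumulates each category's properties.

@dataclass
class Category:
    name: str
    properties: set = field(default_factory=set)
    incidents: list = field(default_factory=list)

class Codebook:
    def __init__(self):
        self.categories = {}

    def code_incident(self, excerpt, category_name, properties=()):
        """Assign an excerpt to a category (creating it if new) and
        merge any newly observed properties."""
        cat = self.categories.setdefault(category_name, Category(category_name))
        cat.incidents.append(excerpt)
        cat.properties.update(properties)
        return cat

    def summary(self):
        """Incident counts and accumulated properties per category;
        saturation is suggested when new incidents stop adding
        properties, dimensions, or relationships."""
        return {name: (len(c.incidents), sorted(c.properties))
                for name, c in self.categories.items()}

book = Codebook()
book.code_incident("nurse double-checks chart before speaking",
                   "vigilance", {"anticipatory", "routinized"})
book.code_incident("aide scans hallway while charting",
                   "vigilance", {"continuous"})
print(book.summary())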

Comparative Analysis

According to Strauss and Corbin (1998), comparative analysis is a core technique of grounded theory development. Making theoretical comparisons—systematically and creatively—engages the analyst in “raising questions and discovering properties and dimensions that might be in the data by increasing researcher sensitivity” (p. 67). Theoretical comparisons are one of the techniques used when doing microscopic analysis. Such comparisons enable

identification of variations in the patterns to be found in the data. It is not just one form of a category or pattern in which we are interested but also how that pattern varies dimensionally, which is discerned through a comparison of properties and dimensions under different conditions. (p. 67)

Strauss and Corbin (1998) offer specific techniques to increase the systematic and rigorous processes of comparison, for example, “the flip-flop technique”:

This indicates that a concept is turned “inside out” or “upside down” to obtain a different perspective on the event, object, or actions/interaction. In other words, we look at opposites or extremes to bring out significant properties (p. 94).

In the course of conducting a grounded theory analysis, one moves from lower-level concepts to higher-level theorizing:

Data go to concepts, and concepts get transcended to a core variable, which is the underlying pattern. Formal theory is on the fourth level, but the theory can be boundless as the research keeps comparing and trying to figure out what is going on and what the latent patterns are. (Glaser, 2000, p. 4)

Glaser (2000) worries that the popularity of grounded theory has led to a preponderance of lower-level theorizing without completing the job. Too many qualitative analysts, he warns, are satisfied to stop when they’ve merely generated “theory bits.”

Theory bits are a bit of theory from a substantive theory that a person will use briefly in a sentence or so. . . .

Theory bits come from two sources. First, they come from generating one concept in a study and conjecturing without generating the rest of the theory. With the juicy concept, the conjecture sounds grounded, but it is not; it is only experiential. Second, theory bits come from a generated substantive theory. A theory bit emerges in normal talk when it is impossible to relate the whole theory. So, a bit with grab is related to the listener. The listener can then be referred to an article or a report that describes the whole theory. . . .

Grounded theory is rich in imageric concepts that are easy to apply “on the fly.” They are applied intuitively, with no data, with a feeling of “knowing” as a quick analysis of a substantive incident or area. They ring true with great credibility. They empower conceptually and perceptually. They feel theoretically complete (“Yes, that accounts for it.”). They are exciting handles of explanation. They can run way ahead of the structural constraints of research. They are simple one or two variable applications, as opposed to being multivariate and complex. . . . They are quick and easy. They invade social and professional conversations as colleagues use them to sound knowledgeable. . . . The danger, of course, is that they might be just plain wrong or irrelevant unless based in a grounded theory. Hopefully, they get corrected as more data come out. The grounded theorist should try to fit, correct, and modify them even as they pass his or her lips.

Unfortunately, theory bits have the ability to stunt further analysis because they can sound so correct. . . . Multivariate thinking stops in favor of a juicy single variable, a quick and sensible explanation. . . . Multivariate thinking can continue these bits to fuller explanations. This is the great benefit of trusting a theory that fits, works, and is relevant as it is continually modified. . . . But a responsible grounded theorist always should finish his or her bit with a statement to the effect that “Of course, these situations are very complex or multivariate, and without more data, I cannot tell what is really going on.” (pp. 7–8)

As noted throughout this chapter in commenting on how to learn qualitative analysis, it is crucial to study examples. Bunch (2001) has published a grounded theory study about people living with HIV/AIDS. Glaser (1993) and Strauss and Corbin (1997) have collected in edited volumes a range of grounded theory exemplars, including several studies of health (life after heart attacks, emphysema, chronic renal failure, chronically ill men, tuberculosis, and Alzheimer’s disease), organizational headhunting, abusive relationships, women alone in public places, selfhood in women, prison time, and the characteristics of contemporary Japanese society. The journal Grounded Theory Review began publication in 2000.

The theory generated can take the form of a model and be told as a story based on the coding categories used to organize the data. So, for example, causal conditions will have been identified that produced the phenomenon that is being studied. Strategies show how these causal conditions operated in particular contexts. These strategies are mediated by intervening conditions and produce action and interactions that result in consequences. The model, which articulates a theory, also tells a causal story.
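To make that structure concrete, here is a minimal sketch, with entirely hypothetical labels, showing how the elements named above (causal conditions, context, strategies, intervening conditions, consequences) can be represented and narrated as a single causal story.

# Minimal sketch of the coding paradigm just described, representing a
# single analyzed phenomenon as a structured record and narrating it as
# a causal story. All labels are hypothetical placeholders for
# categories an analyst would ground in data.
paradigm_model = {
    "context": "an urban employment training program",
    "causal_conditions": ["staff turnover", "funding cuts"],
    "phenomenon": "participant disengagement",
    "strategies": ["mentor outreach", "flexible scheduling"],
    "intervening_conditions": ["peer support", "transport barriers"],
    "consequences": ["re-enrollment", "program dropout"],
}

def tell_causal_story(m):
    return (f"In the context of {m['context']}, "
            f"{' and '.join(m['causal_conditions'])} produced "
            f"{m['phenomenon']}; actors responded with "
            f"{' and '.join(m['strategies'])}, mediated by "
            f"{' and '.join(m['intervening_conditions'])}, resulting in "
            f"{' or '.join(m['consequences'])}.")

print(tell_causal_story(paradigm_model))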

Narrative Analysis and Case Studies: Telling a Coherent Causal Story

Narrative analysis (see Chapter 3, pp. 128–131) interprets stories, life history narratives, historical memoirs, and creative nonfiction to reveal cultural and social patterns through the lens of individual experiences. Chapter 2 featured an interpretation of the significance of the story of Henrietta Lacks, whose cells were taken without her knowledge and used for medical research (pp. 78–80). Her story takes much of its meaning from what it reveals about the African American community at that time, the nature and norms of medical research, the policies of a major university, and the larger political, social, and economic context within which her story unfolded. The story explains why her cells were taken for medical research without her consent. The case study traces the consequences that resulted—consequences for the researchers, her family, the university, and people suffering from a great variety of diseases whose treatment came from what was learned from her cells. Case studies can be written in a variety of ways, one of which is “causal narratives” specifically constructed to “elucidate the processes at work in one case, or a small number of cases, using in-depth intensive analysis and a narrative presentation of the argument” (Maxwell & Mittapalli, 2008, p. 324).

Yin (2012) distinguishes descriptive case studies from explanatory case studies. Descriptive case studies generate “rich and revealing insights into the social world of a particular case” (p. 49). An explanatory case study, in contrast, “seeks to explain how and why a series of events occurred. In real-world settings, the explanations can be quite complex and can cover an extended period of time. Such conditions create the need for using the case study method rather than conducting, say, an experiment or a survey” (p. 89). Explanatory case studies use causal reasoning to create a coherent, data-based explanation of how one thing led to another.

The internal validity of an explanatory case study depends on the richness of the explanation, the detailed depiction of processes and actions that lead to observed outcomes and consequences, triangulation of data sources, and exploration of rival explanations to arrive at the one that best fits the data. Causal reasoning is the basis for organizing and making sense of a case study or narrative through systematic process tracing (see Exhibit 8.18).

Qualitative Comparative Analysis (QCA)

Qualitative comparative analysis, developed and championed by political sociologist Charles Ragin (1987, 2000), focuses on systematically making comparisons to generate explanations. He has developed a systematic approach for comparing large case units, like nation-states and historical periods, or macrosocial phenomena, like social movements. He constructs and codes each case as a combination of causal and outcome conditions. These combinations can be compared with each other and then logically simplified through a bottom-up process of paired comparison. He aims to draw on the strength of holistic analysis manifest in context-rich individual cases while making possible systematic cross-case comparisons of relatively large numbers of cases, for example, 15 to 25 or more. Ragin (2000, 2008) draws on fuzzy set theory and calls the result “diversity-oriented research” because he systematically codes and takes into account case variations and uniquenesses as well as commonalities, thereby elucidating both similarities and differences. The comparative analysis involves constructing a “truth table” in which the analyst codes each case for the presence or absence of each attribute of interest (Fielding & Lee, 1998, pp. 158–159). The information in the truth table displays the different combinations of conditions that produce a specific outcome. To deal with the large number of comparisons needed, QCA is done using customized software.

Analysts conducting diversity-oriented research are admonished to assume maximum causal complexity by considering the possibility that no single causal condition may be either necessary or sufficient to explain the outcome of interest. Different combinations of causal conditions might produce the observed result, though singular causes can also be considered, examined, and tested. Despite reducing large amounts of data to broad patterns represented in matrices or some other form of shorthand, Ragin (1987) stresses repeatedly that these representations must ultimately be evaluated by the extent to which they enhance understanding of specific cases. A cause–consequence comparative matrix, then, can be thought of as a map providing guidance through the terrain of multiple cases and causes.
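A minimal sketch of the truth-table step follows, assuming a handful of hypothetical cases already coded 1/0 (crisp sets) on three illustrative conditions and an outcome. Grouping cases by configuration exposes both consistent rows and contradictory ones (here, site_B and site_D share a configuration but differ on the outcome). Dedicated QCA software would go on to perform Boolean minimization of the consistent rows, which this sketch does not attempt.

from collections import defaultdict

# Minimal truth-table sketch for crisp-set QCA. Each hypothetical case
# is coded 1/0 for the presence/absence of causal conditions and the
# outcome; cases sharing a configuration fall into the same row.
CONDITIONS = ["funding", "leadership", "community_support"]

cases = {
    "site_A": {"funding": 1, "leadership": 1, "community_support": 1, "outcome": 1},
    "site_B": {"funding": 1, "leadership": 0, "community_support": 1, "outcome": 1},
    "site_C": {"funding": 0, "leadership": 1, "community_support": 0, "outcome": 0},
    "site_D": {"funding": 1, "leadership": 0, "community_support": 1, "outcome": 0},
}

rows = defaultdict(list)
for name, coding in cases.items():
    config = tuple(coding[c] for c in CONDITIONS)
    rows[config].append((name, coding["outcome"]))

for config, members in sorted(rows.items(), reverse=True):
    outcomes = [o for _, o in members]
    consistency = sum(outcomes) / len(outcomes)  # share of cases showing the outcome
    names = ", ".join(n for n, _ in members)
    print(f"{dict(zip(CONDITIONS, config))} -> [{names}] consistency={consistency:.2f}")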

EXHIBIT 8.18 Process Tracing for Causal Analysis

Process tracing identifies, tests, elucidates, and validates causal mechanisms by describing the “causal chain between an independent variable (or variables) and the outcome of the dependent variable” (George & Bennett, 2005, pp. 206–207).

Beach and Pedersen (2013) assert that process-tracing methods are arguably the only methods that allow us to study causal mechanisms:

Studying causal mechanisms with process-tracing methods enables the researcher to make strong within-case inferences about the causal process whereby outcomes are produced, enabling us to update the degree of confidence we hold in the validity of a theorized causal mechanism . . . [and] enabling us to open up the black box of causality using in-depth case study methods to make strong within-case inferences about causal mechanisms based on in-depth single-case studies that are arguably not possible with other social science methods. (pp. 1–2)

Beach and Pedersen (2013) differentiate three approaches to process tracing: (1) theory testing, (2) theory building, and (3) explaining outcomes:

Theory-testing process tracing deduces a theory from the existing literature and then tests whether evidence shows that each part of a hypothesized causal mechanism is present in a given case, enabling within-case inferences about whether the mechanism functioned as expected in the case and whether the mechanism as a whole was present. . . .

Theory-building process-tracing seeks to build a generalizable theoretical explanation from empirical evidence, inferring that a more general causal mechanism exists from the facts of a particular case. . . .

Finally, explaining-outcome process-tracing attempts to craft a minimally sufficient explanation of a puzzling outcome in a specific historical case. Here the aim is not to build or test more general theories but to craft a (minimally) sufficient explanation of the outcome of the case where the ambitions are more case-centric than theory-oriented. This distinction reflects the case-centric ambitions of many qualitative scholars. (p. 3)
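As a minimal illustration of the theory-testing variant, the sketch below (with a hypothetical mechanism and hypothetical evidence codings) decomposes a hypothesized causal mechanism into ordered parts and records whether within-case evidence for each part was found; the inference that the whole mechanism operated is withheld unless every part is evidenced.

# Minimal sketch of theory-testing process tracing: the hypothesized
# mechanism is decomposed into ordered parts, and the analyst records
# whether within-case evidence for each part was found. The mechanism
# and codings here are hypothetical.
mechanism = [
    ("aid delivered to target households", True),
    ("households invest aid in livestock", True),
    ("livestock income raises household welfare", False),
]

for number, (part, evidenced) in enumerate(mechanism, start=1):
    status = "evidence found" if evidenced else "evidence NOT found"
    print(f"Part {number}: {part} -> {status}")

if all(evidenced for _, evidenced in mechanism):
    print("Within-case inference: the whole mechanism was present.")
else:
    print("Inference withheld: at least one part of the mechanism lacks evidence.")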

QCA [qualitative comparative analysis] seeks to recover the complexity of particular situations by recognizing the conjunctural and context-specific character of causation. Unlike much qualitative analysis, the method forces researchers to select cases and variables in a systematic manner. This reduces the likelihood that “inconvenient” cases will be dropped from the analysis or data forced into inappropriate theoretical moulds. . . .

QCA clearly has the potential to be used beyond the historical and cross-national contexts originally envisioned by Ragin. (Fielding & Lee, 1998, pp. 160, 161–162)

SIDEBAR

CROSS-CULTURAL CASE ANALYSIS COMPARABILITY

In cross-cultural research, the challenge of determining comparable units of analysis has created controversy. For example, when definitions of “family” vary dramatically, can one really do systematic comparisons? Are extended families in nonliterate societies and nuclear families in modern societies such different entities that, beyond the obvious surface differences, they cease to be comparable units for generating theory? “The main problem for ethnologists has been to define and develop adequate and equivalent cultural units for cross-cultural comparison” (De Munck, 2000, p. 279).


Analytic Induction

The analysis of set relations is critically important to social research. . . . Qualitative analysis is fundamentally about set relations. Consider this simple example: if all (or almost all) of the anorectic teenage girls I interview have highly critical mothers (that is, the anorectic girls constitute a consistent subset of the girls with highly critical mothers), then I will no doubt consider this connection when it comes to explaining the causes and contexts of anorexia. This attention to consistent connections (e.g., causally relevant commonalities that are more or less uniformly present in a given set of cases) is characteristic of qualitative inquiry. It is the cornerstone of the technique commonly known as analytic induction. (Ragin, 2008, p. 2)

Analytic induction also involves cross-case analysis in an effort to seek explanations. Ragin’s qualitative comparative analysis formalized and moderated the logic of analytic induction (Ryan & Bernard, 2000, p. 787), but analytic induction was first articulated as a method of “exhaustive examination of cases in order to prove universal, causal generalizations” (Peter Manning, quoted by Vidich & Lyman, 2000, p. 57). In his sociological methods classic The Research Act, Norman Denzin (1978b) identified analytic induction based on comparisons of carefully done case studies as one of the three primary strategies available for dealing with and sorting out rival explanations in generating theory; the other two are experiment-based inferences and multivariate analysis. Analytic induction as a comparative case method

was to be the critical foundation of a revitalized qualitative sociology. The claim to universality of the causal generalizations is . . . derived from the examination of a single case studied in light of a preformulated hypothesis that might be reformulated if the hypothesis does not fit the facts. . . . Discovery of a single negative case is held to disprove the hypothesis and to require its reformulation. (Vidich & Lyman, 2000, p. 57)

Over time, those using analytic induction have eliminated the emphasis on discovering universal causal generalizations and have, instead, emphasized it as a strategy for engaging in qualitative inquiry and comparative case analysis that includes examining preconceived hypotheses—that is, without the pretense of the mental blank slate advocated in purer forms of phenomenological inquiry and grounded theory.

In analytic induction, researchers develop hypotheses, sometimes rough and general approximations, prior to entry into the field or, in cases where data already are collected, prior to data analysis. These hypotheses can be based on hunches, assumptions, careful examination of research and theory, or combinations. Hypotheses are revised to fit emerging interpretations of the data over the course of data collection and analysis. Researchers actively seek to disconfirm emerging hypotheses through negative case analysis, that is, analysis of cases that hold promise for disconfirming emerging hypotheses and that add variability to the sample. In this way, the originators of the method sought to examine enough cases to assure the development of universal hypotheses.

Originally developed to produce universal and causal hypotheses, contemporary researchers have de-emphasized universality and causality and have emphasized instead the development of descriptive hypotheses that identify patterns of behaviors, interactions and perceptions. . . . Bogdan & Biklen (1992) have called this approach modified analytic induction. (Gilgun, 1995, pp. 268–269)

Jane Gilgun (1995) used modified analytic induction in a study of incest perpetrators to test hypotheses derived from the literature on care and justice and to modify them to fit an in-depth subjective account of incest perpetrators. She used the literature-derived concepts to sensitize herself throughout the research while remaining open to discovering concepts and hypotheses not accounted for in the original formulations. And she did gain new insights:

Most striking about the perpetrators’ accounts was that almost all of them defined incest as love and care. The types of love they expressed ranged from sexual and romantic to care and concern for the welfare of the children. These were unanticipated findings. I did not hypothesize that perpetrators would view incest as caring and as romantic love. Rather, I had assumed that incest represented lack of care and, implicitly, an inability to love [literature- derived hypotheses]. It did not occur to me that perpetrators would equate incest and romance, or even incest and feelings of sexualized caring. From previous research, I did assume that incest perpetrators would experience profound sexual gratification through incest. Ironically, their professed love of whatever type was contradicted by many other aspects of their accounts, such as continuing the incest when children wanted to stop, withholding permission to do ordinary things until the children submitted sexually, and letting others think the children were lying when the incest was disclosed. These perpetrators, therefore, did not view incest as harmful to victims, did not reflect on how they used their power and authority to coerce children to cooperate, and even interpreted their behavior in many cases as forms of care and romantic love. (p. 270)

Analytic induction reminds us that qualitative inquiry can do more than discover emergent concepts and generate new theory. A mainstay of science has always been examining and reexamining and reexamining yet again those propositions that have become the dominant belief or explanatory paradigm within a discipline or group of practitioners. Modified analytic induction provides a name and guidance for undertaking such qualitative inquiry and analysis.
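The negative-case logic lends itself to a simple mechanical aid. In the minimal sketch below, which reuses Ragin's anorexia example from above with hypothetical codings, the working hypothesis asserts a subset relation, and the code merely surfaces cases that violate it; revising the hypothesis and deciding what a negative case means remain the researcher's work.

# Minimal sketch of negative case analysis. The working hypothesis
# (here, Ragin's example: anorectic girls form a subset of girls with
# highly critical mothers) is expressed as a pair of predicates over
# coded cases; the loop only surfaces disconfirming cases. All codings
# are hypothetical.
cases = [
    {"id": 1, "anorexia": True, "critical_mother": True},
    {"id": 2, "anorexia": True, "critical_mother": True},
    {"id": 3, "anorexia": True, "critical_mother": False},   # negative case
    {"id": 4, "anorexia": False, "critical_mother": True},   # irrelevant to the subset claim
]

def negative_cases(cases, in_set, in_superset):
    """Cases violating the claim that in_set is a subset of in_superset."""
    return [c for c in cases if in_set(c) and not in_superset(c)]

for c in negative_cases(cases,
                        in_set=lambda c: c["anorexia"],
                        in_superset=lambda c: c["critical_mother"]):
    print(f"Case {c['id']} disconfirms the hypothesis; revise it and re-examine all cases.")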

Forensic Causal Analysis

On August 1, 2007, the major six-lane I-35W interstate highway bridge connecting Minneapolis and Saint Paul collapsed during rush hour. Thirteen people died, and another 145 were injured. The National Transportation Safety Board investigators studied evidence from the accident, including a security video camera that recorded the collapse. The investigators determined that the bridge’s steel gusset plates were undersized and inadequate to support the load of the bridge, a load that had increased substantially since the bridge’s construction. During the recovery of the wreckage, investigators discovered fractured gusset plates at eight different joint locations. The investigation turned up photos from a June 2003 inspection of the bridge that showed gusset plate bowing. The investigation concluded that the primary cause of the collapse was the undersized gusset plates. Contributing to the collapse was the fact that 2 inches of concrete had been added to the road surface over the years, increasing the dead load by 20%. Also contributing was the weight of construction equipment and material resting on the bridge just above its weakest point at the time of the collapse. That load was estimated at 578,000 pounds (262,000 kilograms), consisting of sand, water, and vehicles.

This kind of in-depth case analysis is mandatory for accidents of all kinds: vehicular accidents, fires, building collapses, and so on. Crimes are investigated. Deaths in hospitals are investigated. What all such investigations share are retrospective case study methods that consider possible causes and identify, based on the preponderance of evidence, the most probable cause. Exhibit 5.12 (p. 296) describes the rigorous forensic case analyses of railroad switching operations fatalities. As noted in the exhibit, 55 railroad employees died in switching yard accidents from 2005 to 2010. To analyze the causes of these deaths, investigators from the railroad industry, labor unions, locomotive engineers, and federal regulators worked together to look for patterns in the causes of the accidents. They carefully reviewed every case, coding a variety of variables related to conditions, contributing factors, and kinds of operations involved. They then looked for patterns in the qualitative case data and correlations in the quantitative cross-case data. They spent three to four hours coding each case and additional hours analyzing patterns across cases.

They found that fatalities happen for a reason. Accidents are not random occurrences, unfortunate events, or just plain bad luck. The risks to employees engaged in switching operations are real, ever present, and preventable. The data showed patterns in why switching fatalities occur. Findings about the causes of railroad accidents, accumulated through this rigorous analysis over time and across cases, identified five common behaviors and 10 common hazards that contribute to fatal accidents.
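The cross-case step of such an investigation is, at bottom, disciplined tabulation. A minimal sketch, with hypothetical case records and factor codes standing in for the investigators' far richer codings, shows the basic move of tallying coded contributing factors across cases to surface recurring hazards.

from collections import Counter

# Minimal sketch of cross-case tabulation: each investigated case
# carries the contributing factors coders assigned to it, and tallying
# factors across cases surfaces recurring hazards. Records hypothetical.
case_factors = {
    "case_01": ["riding_equipment", "poor_visibility"],
    "case_02": ["close_clearance", "riding_equipment"],
    "case_03": ["riding_equipment", "radio_miscommunication"],
    "case_04": ["close_clearance", "poor_visibility"],
}

tally = Counter(factor for factors in case_factors.values() for factor in factors)
for factor, n in tally.most_common():
    print(f"{factor}: present in {n} of {len(case_factors)} cases")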

General Elimination Method and Modus Operandi Analysis

The General Elimination Methodology (GEM) approach for case analysis involves identifying rival alternative causal explanations, comparing each with the evidence, and eliminating those that do not conform to the evidence, until the causal explanation that best fits the preponderance of the evidence remains (Scriven, 2008).

To take an example from work in which I have been involved, when looking at the effect of aid given by Heifer or Gates to extremely poor farmers in East Africa, after determining that a substantial improvement in welfare has followed the arrival of aid, and has been sustained for a few years, we check for the presence of more than a dozen other possible causes of this observed subsequent increase in welfare, including: efforts by the country’s government that have actually trickled down to the village level, analogous efforts by other philanthropies, self-help gains resulting from inspired leadership in the local communities, increased income from family members traveling to well-paid job openings elsewhere and remitting money back home, increased prices for milk or calves in the local markets, the beneficial results of a few years of good weather or of improved water supply, or of technology-driven improvements in the quality of available commercial feed, veterinary meds or services, or grass seed for improving pastures. This requires considerable systematic effort, but no sophisticated experimental design, no sophisticated statistics or risk analysis. (Scriven, 2008, p. 22)

Modus operandi (or method of operating, MO) analysis was also conceptualized by evaluation theorist Michael Scriven (1976) as a way of inferring causality when experimental designs are impractical or inappropriate. The MO approach, drawing from forensic science, makes the inquirer a detective. Detectives match clues discovered at a crime scene with the known patterns of possible suspects. Those suspects whose MO does not fit the crime scene pattern are eliminated from further investigation. Translated to research and evaluation, the inquirer/detective observes some pattern and makes a list of possible causes. Evidence from the inquiry is matched with the list of suspects (possible causes). Those possible causes that do not fit the pattern of evidence can be eliminated from further consideration. Then, following the logic of Occam’s razor, as each remaining cause is matched against the evidence, the cause that is supported by the preponderance of evidence and that offers the simplest causal inference among competing possibilities is chosen as most likely.

This is also known as inference to the best explanation: If, based on the facts of the case and cumulative evidence, including interpretation, knowledge, and explanatory theory brought to the analysis, it is possible to identify one explanation as better than the others, then inferences based on that explanation will be warranted as the best explanation.
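The eliminative logic just described can be sketched as set operations over coded evidence. In this minimal illustration, which loosely echoes Scriven's East Africa example with entirely hypothetical evidence labels, each rival explanation specifies traces it requires and traces that would rule it out; rivals inconsistent with the observed record are struck, and any surviving explanation is then weighed on the preponderance of evidence.

# Minimal sketch of General Elimination reasoning. Each rival
# explanation lists evidence it requires and evidence that would rule
# it out; rivals contradicted by the observed record are struck. All
# labels are hypothetical codings, not findings.
observed = {"welfare_rose", "aid_arrived_first", "no_govt_program",
            "weather_normal", "milk_prices_flat"}

rivals = {
    "aid_program": {"requires": {"welfare_rose", "aid_arrived_first"},
                    "excluded_by": set()},
    "govt_trickledown": {"requires": {"welfare_rose"},
                         "excluded_by": {"no_govt_program"}},
    "good_weather": {"requires": {"welfare_rose"},
                     "excluded_by": {"weather_normal"}},
    "price_boom": {"requires": {"welfare_rose"},
                   "excluded_by": {"milk_prices_flat"}},
}

survivors = [name for name, spec in rivals.items()
             if spec["requires"] <= observed            # required traces present
             and not (spec["excluded_by"] & observed)]  # no disqualifying traces

print("Explanations surviving elimination:", survivors)

If more than one rival survived elimination, the analyst would return to the field for discriminating evidence rather than force a choice.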

Forensic case analysis findings are accepted as evidence in courts of law for both criminal and civil proceedings. Clearly, such analyses and their use throughout the world demonstrate that thorough, systematic, and independent case analysis can identify causes at a reasonable level of confidence (preponderance of evidence) for individual cases. Cross-case analyses are used to generate regulations, policies, and operational procedures in fields as varied as transportation, hospitals, utilities, and construction, to name but a few examples.

[Cartoon by Brazilian cartoonist Claudius Ceccon. Used with permission.]

SIDEBAR

GENERAL ELIMINATION CASE STUDY METHOD EXAMPLE: EVALUATING AN ADVOCACY CAMPAIGN AIMED AT INFLUENCING A SUPREME COURT DECISION

Several foundations funded a campaign aimed at influencing a Supreme Court decision. The collaboration of foundations committed more than $2 million to a focused advocacy effort within a window of nine months to potentially influence the Court. The evaluation case study examined the following question: To what extent, if at all, did the final-push campaign influence the Supreme Court’s decision?

The method we used in evaluating the Supreme Court advocacy campaign is what Scriven (2008) has called General Elimination Methodology, or GEM. It is a kind of “inverse epidemiological method.” Epidemiology begins with an effect and searches for its cause. In this application of GEM, we had both an effect (the Supreme Court decision in favor of the campaign position) and an intervention (the advocacy campaign), and we were searching for connections between the two. In doing so, we conducted a retrospective case study. Using evidence gathered through fieldwork—interviews, document analysis, detailed review of the Court arguments and decision, news analysis, and the documentation of the campaign itself—we aimed to eliminate alternative or rival explanations until the most compelling explanation, supported by the evidence, remained. This is also called the forensic method or MO (modus operandi, or method of operating) approach. Scriven brought the concept into evaluation from detective work, in which a criminal’s MO is established as a “signature trace” that connects the same criminal to different crimes (Davidson, 2005, p. 75). The modus operandi method works well in tracing the effects of interventions that have highly distinctive patterns of effects.

The evidence brought to bear in the evaluation of the judicial advocacy campaign was organized and presented as an in-depth case study of the campaign in four sections: (1) the litigation work; (2) the coordinated, targeted state organizing campaigns; (3) the communications and public education strategies; and (4) the overall coalition coordination. The case study involved detailed examination of campaign documents and interviews with 45 people directly involved in and knowledgeable about the campaign and/or the case, including the attorneys who argued both sides of the case before the Supreme Court. Several key people were interviewed more than once. The case also involved examining and analyzing hundreds of documents, including legal briefs, the Court’s opinions, more than 30 other court documents, more than 20 scholarly publications and books about the Supreme Court, media reports on the case, and confidential campaign files and documents, including three binders of media clips from campaign files. The case also drew on reports and documents describing related cases, legislative activity, and policy issues. Group discussions with key campaign strategists and advocates were especially helpful in clarifying important issues in the case.

Given the multifaceted and omnibus nature of the total campaign, a particular value of constructing this kind of in-depth case study is that none of the informants completely knew the full story. And, of course, different informants about the same events and processes had varying perspectives about what occurred and what it meant. A case study, then, involves ongoing comparative analysis—the sorting out, comparing, and reporting of different perspectives.

The full case did not emerge all at once. Indeed, it took time, including follow-up interviews, rereading documents, and continuous fact checking, for the full story to emerge. In a retrospective case study of this kind, we are often talking to people about events that they have “moved beyond” in their busy lives. Documentation is useful in returning to the past, but the critical judgments and perceptions stored in the memories of key players often take time and care to reignite. Developing relationships with key players was critical to this process.

Evaluation Conclusion

Based on a thorough review of the campaign’s activities, interviews with key informants and key knowledgeables, and careful analysis of the Supreme Court decision, we conclude that the coordinated final-push campaign contributed significantly to the Court’s decision.

See Exhibit 8.24 (p. 614) in Module 73 for a graphic depiction of the finding about the factors that contributed to the campaign’s success.

SOURCES: Patton (2008) and Patton and Sherwood (2007).

MODULE 72

New Analysis Directions: Contribution Analysis, Participatory Analysis, and Qualitative Counterfactuals

Look carefully for words and phrases that indicate attributes and various kinds of causal or conditional relations . . . .

Causal relations: “because” and its variants, ’cause, ’cuz, as a result, since, and the like. For example, “Y’know, we always take [highway] 197 there ’cuz it avoids all that traffic at the mall.” But notice the use of the word “since” in the following: “Since he got married, it’s like he forgot his friends.” Text analysis that involves the search for linguistic connectors like these requires very strong skills in the language of the text because you have to be able to pick out very subtle differences in usage.

—Bernard and Ryan (2010, p. 60) Analyzing Qualitative Data
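Bernard and Ryan's advice can be partially mechanized as a first pass. The sketch below, a minimal illustration with a hypothetical transcript, flags sentences containing a few common causal connectors for the analyst to review in context; as they caution, distinguishing causal from merely temporal uses of words like "since" remains a human judgment.

import re

# Minimal sketch: flag sentences containing common causal connectors so
# an analyst can review each hit in context. The connector list is
# illustrative, not exhaustive, and every hit still requires human
# judgment about whether the usage is actually causal.
CAUSAL_MARKERS = [
    r"\bbecause\b",
    r"'cause\b",
    r"'cuz\b",
    r"\bas a result\b",
    r"\bsince\b",
]
PATTERN = re.compile("|".join(CAUSAL_MARKERS), re.IGNORECASE)

def flag_causal_sentences(text):
    """Return (marker, sentence) pairs for analyst review."""
    # Naive sentence split; real transcripts deserve a proper tokenizer.
    sentences = re.split(r"(?<=[.!?])\s+", text)
    return [(m.group(0).lower(), s.strip())
            for s in sentences
            if (m := PATTERN.search(s))]

transcript = ("We always take 197 there 'cuz it avoids all that traffic "
              "at the mall. Since he got married, it's like he forgot "
              "his friends.")
for marker, sentence in flag_causal_sentences(transcript):
    print(f"[{marker}] {sentence}")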

Contribution Analysis

One qualitative critique of traditional causal attribution approaches is that the language and concepts are overly deterministic. The word cause connotes singular, direct, and linear actions leading to clear, precise, and verifiable results: X caused Y. But such direct, singular, linear causation is rare in the complex and dynamic interactions of human beings. More often, there are multiple causal influences and multiple outcomes.

Reframing Explanation From Cause and Effect to Influences and Interactions

Contribution analysis (Mayne, 2007, 2011, 2012; Patton, 2012b) was developed as an approach in program evaluation for examining a causal hypothesis (theory of change) against logic and evidence to determine what factors could explain the findings. Attribution questions are different from contribution questions, as follows.

Traditional evaluation causality questions (attribution)

• Has the program caused the outcome?

• To what extent has the program caused the outcome?

• How much of the outcome is caused by the program?

Contribution questions

• Has the program made a difference? That is, has the program made an important contribution to the observed result? Has the program influenced the observed result?

• How much of a difference has the program made? How much of a contribution?

The result of a contribution analysis is not definitive proof that the intervention or program has made an important contribution but rather evidence and argumentation from which it is reasonable to draw conclusions, within some level of confidence, about the degree and importance of the contribution. The aim is to get plausible association based on a preponderance of evidence, as in the judicial and forensic traditions. The question is whether a reasonable person would agree from the evidence and argument that the program has made an important contribution to the observed result. Contribution analysis can be used in impact evaluations for interventions in complex development situations with multiple actors (Stern et al., 2012).

A contribution analysis produces a contribution story that presents the evidence and other influences on program outcomes. A major part of that story may tell about behavioral changes that intended beneficiaries have made as a result of the intervention.

Attributes of a Credible Contribution Story

A credible statement of contribution would entail the following:

• A well-articulated context of the program, discussing other influencing factors
• A plausible theory of change (no obvious flaws) that is not disproven
• A description of implemented activities and resulting outputs of the program
• A description of the observed results
• The results of contribution analysis:
  - The evidence in support of the assumptions behind the key links in the theory of change
  - Discussion of the roles of the other influencing factors
• A discussion of the quality of the evidence provided, noting weaknesses

Contribution analysis focuses on identifying likely influences. Such causes, which on their own are neither necessary nor sufficient, represent the kind of contributory role that many interventions play. Contribution analysis, like detective work, requires connecting the dots between what was done and what resulted, examining a multitude of interacting variables and factors, and considering alternative explanations and hypotheses, so that in the end we can reach an independent and reasonable judgment based on the cumulative evidence. That is what we did in evaluating the judicial advocacy campaign featured earlier in a sidebar (see p. 595). From a contribution perspective, the question became how much influence the campaign appeared to have had rather than whether the campaign directly and singularly produced the observed results.

Outcome mapping (IDRC, 2010) and outcome harvesting (Wilson-Grau & Britt, 2012) are well-developed frameworks that use contribution analysis for evaluating outcomes in complex dynamic systems characterized by multiple influences, multiple outcomes, and multiple interrelationships.

Collaborative and Participatory Causal Analyses

Collaborative and participatory approaches to qualitative inquiry include working with nonresearchers and nonevaluators not only in collecting data but also in analyzing data. When making judgments about the extent to which the preponderance of evidence supports certain causal conclusions, or about what contributory factors explain the results, the people in the setting studied can serve as the equivalent of an inquiry jury, rendering their own interpretations and judgments about causality. In a major study of “the difficult methodological and theoretical challenges faced by those who wish to evaluate the impacts of international development policies,” aimed at “broadening the range of designs and methods for impact evaluations,” participatory approaches were highlighted as an important design and analysis option.

Impact Evaluation (IE) aims to demonstrate that development programmes lead to development results, that the intervention as cause has an effect. . . . On the basis of literature and practice, a basic classification of potential designs is outlined [with] . . . five design approaches identified—Experimental, Statistical, Theory-based, Case-based and Participatory. (Stern et al., 2012, pp. i–ii)

Participatory approaches to causal inference do not see recipients of aid as passive recipients but rather as active “agents.” Within this understanding, beneficiaries have “agency” and can help “cause” successful outcomes by their own actions and decisions. (Stern et al., 2012, p. 29)

Chapter 4 discussed collaborative and participatory approaches at some length, including Exhibit 4.13: Principles of Fully Participatory and Genuinely Collaborative Inquiry (page 222). Participatory approaches require special facilitation skills to help those involved adopt analytical thinking. Some of the challenges include the following:

• Deciding how much involvement nonresearchers will have, for example, whether they will simply react and respond to the researcher’s analysis or whether they will be involved in the generative phase of analysis (Determining this can be a shared decision. “In participatory research, participants make decisions rather than function as passive subjects” [Reinharz, 1992, p. 185].)

• Creating an environment in which those collaborating feel that their perspective is genuinely valued and respected

• Demystifying research
• Combining training in how to do analysis with the actual work of analysis
• Managing the difficult mechanics of the process, especially where several people are involved
• Developing processes for dealing with conflicts in interpretations (e.g., agreeing to report multiple interpretations)
• Determining how to maintain confidentiality with multiple analysts

A good example of these challenges concerns how to help lay analysts deal with counterintuitive findings and counterfactuals—that is, data that don’t fit primary patterns, negative cases, and data that oppose primary preconceptions or predilections. Morris (2000) found that shared learning, especially the capacity to deal with counterfactuals, was reduced when participants feared judgment by others, especially those in positions of authority.

In analyzing hundreds of open-ended interviews with parents who had participated in early-childhood parent education programs throughout the state of Minnesota, I facilitated a process of analysis that involved some 40 program staff. The staff worked in groups of two and three, each group analyzing 10 paired pre–post interviews at a time. No staff analyzed interviews with parents from their own programs. The analysis included coding interviews with a framework developed at the beginning of the study as well as inductive, generative coding in which the staff could create their own categories. Following the coding, new and larger groups engaged in interpreting the results and extracting central conclusions. Everyone worked together in a large center for three days. I moved among the groups, helping resolve problems.

Not only did we get the data coded, but the process, as is intended in collaborative and participatory research, also proved to be an enormously stimulating and provocative learning experience for the staff participants. It forced them to engage deeply with parents’ perceptions and feedback, as well as with each other’s reactions, biases, and interpretations. In that regard, the process also facilitated communication among diverse staff members from across the state, another intended outcome of the collaborative analysis process. Finally, the process saved thousands of dollars in research and evaluation costs while making a staff and program development contribution. The results were intended primarily for internal program improvement use. As would be expected in such a nonresearcher analysis process, external stakeholders placed less value on the results than did those who participated in the process (Mueller, 1996; Mueller & Fitzpatrick, 1998; Program Evaluation Division, 2001). However, as participatory processes have become better facilitated and understood, external stakeholders are coming to appreciate and value them more.
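The assignment logic in that process can be illustrated with a minimal sketch in Python; the data structures and names are hypothetical assumptions, not the actual procedure used in the Minnesota study. It simply distributes interview pairs to coding teams while excluding each team's own program.

import random

def assign_interviews(interviews, teams, batch_size=10):
    """Assign interview pairs to coding teams, skipping each team's own program.

    interviews: list of dicts like {"id": 3, "program": "Duluth"}
    teams: list of dicts like {"name": "Team A", "program": "Duluth"}
    Returns a dict mapping team name to a list of assigned interview ids.
    """
    assignments = {team["name"]: [] for team in teams}
    pool = list(interviews)
    random.shuffle(pool)
    for team in teams:
        # A team may only code interviews from other programs.
        eligible = [iv for iv in pool if iv["program"] != team["program"]]
        batch = eligible[:batch_size]
        assignments[team["name"]] = [iv["id"] for iv in batch]
        for iv in batch:
            pool.remove(iv)
    return assignments

teams = [{"name": "Team A", "program": "Duluth"},
         {"name": "Team B", "program": "St. Paul"}]
interviews = [{"id": i, "program": p}
              for i, p in enumerate(["Duluth", "St. Paul"] * 10)]
print(assign_interviews(interviews, teams))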


Qualitative Counterfactuals

A central issue in establishing causality is addressing the counterfactual: What would have happened without the cause or intervention? A counterfactual analysis looks at the case that is counter to the facts actually observed. In experimental designs, the control group constitutes the counterfactual, providing evidence of what would happen without the treatment received by the treatment group. Because qualitative designs do not assign people to treatment and control groups, critics of qualitative inquiry assert that causal claims cannot be made. Yet in many cases, it is neither feasible nor ethical to conduct experiments. One qualitative solution is to construct a hypothetical counterfactual case.

SIDEBAR

FRAMEWORK FOR CAUSAL ANALYSIS IN EVALUATION USING PROGRAM THEORY

A systematic approach to causal analysis for program theory evaluation consists of three components: (1) congruence, (2) comparisons, and (3) critical review.

1. Congruence with program theory. Do the observed and documented results match the program theory?
2. Counterfactual comparisons. What would have happened without the intervention? (See the discussion of qualitative and scenario-based counterfactuals, p. 599.)
3. Critical review. Are there plausible explanations for the results?

SOURCE: Funnell and Rogers (2011, pp. 473–499).

Assessing the plausible outcome of a combination of conditions that does not exist and instead must be imagined may seem esoteric. However, this analytic strategy has a long and distinguished tradition in the history of social science. A causal combination that lacks empirical instances and therefore must be imagined is a counterfactual case; evaluating its plausible outcome is counterfactual analysis.

To some, counterfactual analysis is central to case-oriented inquiry because such research typically embraces only a handful of empirical cases. If only a few instances exist (e.g., of social revolution), then researchers must compare empirical cases to hypothetical cases. The affinity between counterfactual analysis and case-oriented research, however, derives not simply from its focus on small Ns, but from its configurational nature. Case-oriented explanations of outcomes are often combinatorial in nature, stressing specific configurations of causal conditions. Counterfactual cases thus often differ from empirical cases by a single causal condition, thus creating a decisive, though partially imaginary, comparison. (Ragin, 2008, p. 150)

An example of a historical counterfactual case is Moore’s (1966) creation of an alternative history of the United States in which the South, rather than the North, won the U.S. Civil War. He used this counterfactual creation to support his theory that a “revolutionary break with the past” is an essential part of becoming a modern democracy.

Such counterfactuals constitute thought experiments. German social science pioneer Max Weber is commonly credited with being the first to advocate the use of thought experiments in social research to gain insight into causal relationships. Ragin (2008) offers an extensive discussion of counterfactuals in his configurational framework of qualitative comparative analysis, discussed earlier. In qualitative comparative analysis, counterfactual cases are created and used as substitutes for matched empirical cases when the real world offers no matching case. The hypothetical matched cases are identified by their configurations of causal conditions to illuminate the comparative analysis and causal patterns. This moves the qualitative analyst from writing fieldwork-based empirical case studies to creating relevant comparison cases through thought experiments and alternative case study scenarios.
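A minimal sketch of this configurational logic, in Python with hypothetical condition names rather than Ragin's own notation: represent a case as a set of binary causal conditions and generate counterfactual configurations that differ from the observed case by exactly one condition.

# Minimal sketch of configurational counterfactuals (hypothetical
# condition names, not Ragin's notation): each case is a configuration
# of binary causal conditions; a counterfactual case differs from the
# observed case by exactly one condition.

observed = {"collaboration": 1, "funding": 1, "leadership": 0}

def single_condition_counterfactuals(case):
    """Yield (flipped_condition, configuration) pairs differing by one condition."""
    for condition in case:
        variant = dict(case)
        variant[condition] = 1 - variant[condition]
        yield condition, variant

for flipped, variant in single_condition_counterfactuals(observed):
    print("flip", flipped, "->", variant)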

Scenario-Based Counterfactuals

One innovative and intriguing approach to constructing counterfactual comparisons is a collaborative and participatory process being applied in program evaluation. Scenario-based counterfactuals are negotiated alternative scenarios developed jointly with decision makers and stakeholders as part of a collaborative analysis process. The constructed scenario-based counterfactual specifies reasonable projected alternative processes and any differences, for example in timing or scale, that would have occurred under that alternative scenario. This amounts to creating a plausible alternative hypothesis and examining its degree of plausibility.

• A community adopts an energy conservation plan built around a collaboration of churches, businesses, schools, nonprofits, and government units. How important was the collaboration to the adoption and implementation of the plan? Knowledgeable key informants construct the alternative scenario as a fabricated case study—no collaboration, business as usual—as a counterfactual.

• Exhibit 7.20 (pp. 511–516) is a case study of Thmaris, a homeless youth who received services and described the difference they made for him. In the case, he asserts that, given the path he was on before receiving services, he would have landed in jail. A full case study alternative scenario could be constructed to fill in the details of a possible counterfactual. The credibility and validity of the counterfactual scenario are negotiated in a participatory process with key stakeholders.

The negotiated scenario-based counterfactual is based on a constructed alternative to what was implemented. The negotiated alternative scenario is a plausible (to decision makers and stakeholders) alternative; it is feasible in the sense that there are no budgetary, timing, or technical reasons why it could not have occurred, and it is legal in the sense that the alternative represents one of the options available within current law or plausible changes to relevant law (Rowe, Colby, Hall, & Niemeyer, in press).

Overview and Summary: Causal Explanation Through Qualitative Analysis

This module has examined a variety of ways of approaching causal explanation in qualitative analysis. Exhibit 8.19 summarizes these different approaches.

I opened this module on causal analysis with observations from two eminent philosophers of science. First, Tom Schwandt (2001), University of Illinois, warned that “exactly how to define a causal relationship is one of the most difficult topics in epistemology and the philosophy of science. There is little agreement on how to establish causation” (p. 23). Then, Michael Scriven (2008), Claremont Graduate School, observed that “the causal wars are still raging, and the amount of collateral damage is increasing” (p. 11); he was referring to the gold standard debate that pits RCTs against all the alternatives. He concluded, as I do, that “there is absolutely nothing imperative, and nothing in general superior, about . . . RCT designs” (p. 23).

The overall conclusion I reach is that developments in qualitative methods and analysis over the past two decades have demonstrated that qualitative analysis can yield causal explanations rigorously and credibly.

That said, I close this module with a cautionary tale and a reminder that some question the whole notion of simple linear causality, doubting both its accuracy and its utility.

EXHIBIT 8.19 Twelve Approaches to Qualitative Causal Analysis


From Linear Causality to Complex Interrelationships


The law of causality, I believe, like much that passes muster among philosophers, is a relic of a bygone age, surviving, like the monarchy, only because it is erroneously supposed to do no harm.

—Philosopher Bertrand Russell (1872–1970) Selected Papers

Simple causal explanations are alluring, even seductive. We seem unable to escape simple linear modeling. We fall back on the linear assumptions of much of quantitative analysis and specify isolated independent and dependent variables that are mechanically linked together out of context. In contrast, the challenge of qualitative inquiry involves portraying a holistic picture of what the phenomenon, setting, or program is like and struggling to understand the fundamental nature of a particular set of activities and people in relationship in a specific context. “Particularization is an important aim, coming to know the particularity of the case,” as qualitative case study expert Bob Stake (1995) admonishes us to remember (p. 39). Simple statements of linear relationships may be more distorting than illuminating. The ongoing challenge of qualitative analysis is constantly moving between the phenomenon of interest and our abstractions of that phenomenon, between the descriptions of what has occurred and our interpretations of those descriptions, between the complexity of reality and our simplifications of those complexities, between the circularities and interdependencies of human activity and our need for linear, ordered statements of cause and effect.

Distinguished social scientist Gregory Bateson traced at least part of the source of our struggle to the ways in which we have been taught to think about things. We are told that a “noun” is the “name of a person, place, or thing.” We are told that a “verb” is an “action word.” These kinds of definitions, Bateson (1977) argues, were the beginning of teaching us that “the way to define something is by what it supposedly is in itself—not by its relations to other things.”

Today all that should be changed. Children could be told a noun is a word having a certain relationship to a predicate. A verb has a certain relationship to a noun, its subject, and so on. Relationship could now be used as a basis for definition, and any child could then see that there is something wrong with the sentence “‘Go’ is a verb.” . . . We could have been told something about the pattern which connects: that all communication necessitates context, and that without context there is no meaning. (p. 13)

Without belaboring this point about the difference between linear causal analysis (x causes y) and a holistic perspective that describes the interdependence and interrelatedness of complex phenomena, I would simply offer the reader a Sufi story. I suggest trying to analyze the data represented by the story in two ways. First, try to isolate specific variables that are important in the story, deciding which are the independent variables and which the dependent variable, and then write a statement of the form “These things caused this thing.” Then, read the story again. For the second analysis, try to distinguish among and label the different meanings of the situation expressed by the characters. Then, write a statement of the form “These things and these things came together to create ______.” Don’t try to decide that one approach is right and the other is wrong; simply try to experience and understand the two approaches.

Walking one evening along a deserted road, Mulla Nasrudin saw a troop of horsemen coming towards him. His imagination started to work; he imagined himself captured and sold as a slave, robbed by the oncoming horsemen, or conscripted into the army. Fearing for his safety, Nasrudin bolted, climbed a wall into a graveyard, and lay down in an open tomb.

Puzzled at this strange behavior, the men—honest travelers—pursued Nasrudin to see if they could help him. They found him stretched out in the grave, tense and quivering.

“What are you doing in that grave? We saw you run away and see that you are in a state of great anxiety and fear. Can we help you?”


Seeing the men up close, Nasrudin realized that they were honest travelers who were genuinely interested in his welfare. He didn’t want to offend them or embarrass himself by telling them how he had misperceived them, so Nasrudin simply sat up in the grave and said, “You ask what I’m doing in this grave. If you must know, I can tell you only this: I am here because of you, and you are here because of me.” (Adapted from Shah, 1972, p. 16)


MODULE

73 Writing Up and Reporting Findings, Including Using Visuals

At one time, one blade of grass is as effective as a sixteen-foot golden statue of Buddha. At another time, a sixteen-foot golden statue of Buddha is as effective as a blade of grass.

—Zen master Wumen Huikai (1183–1260)

Some reports are thin as a blade of grass; others feel 16 feet thick. Size, of course, is not the issue. Quality is. But given the volume of data involved in qualitative inquiry and the challenges of data reduction already discussed, reporting qualitative findings is the final step in data reduction, and size is a real constraint, especially when writing in forms other than research monographs and book-length studies, like journal articles and newsletter summaries. Each step in completing a qualitative project presents quality challenges, but the final step is completing a report so that others can know what you’ve learned and how you learned it. This means finding and writing the story that has emerged from your analysis. It also means dealing with what Lofland (1971) called “the agony of omitting”—deciding what material to leave out of the story.

It can happen that an overall structure that organizes a great deal of material happens also to leave out some of one’s most favorite material and small pieces of analysis. . . . Unless one decides to write a relatively disconnected report, he must face the hard truth that no overall analytic structure is likely to encompass every small piece of analysis and all the empirical material that one has on hand. . . .

The underlying philosophical point, perhaps, is that everything is related to everything else in a flowing, even organic fashion, making coherence and organization a difficult and problematic human task. But in order to have any kind of understanding, we humans require that some sort of order be imposed upon that flux. No order fits perfectly. All order is provisional and partial. Nonetheless, understanding requires order, provisional and partial as it may be. It is with that philosophical view that one can hopefully bring himself to accept the fact that he cannot write about everything that he has seen (or analyzed) and still write something with overall coherence or overall structure. (p. 123)

Purpose Guides Writing and Reporting

This chapter opened with the reminder that purpose guides analysis. Purpose also guides report writing and dissemination of findings. The key to all writing is (a) knowing your audience and (b) knowing what you want to say to them—a form of strategic communication (Weiss, 2001).

Dissertations have their own formats and requirements (Biklen & Casella, 2007; Bloomberg & Volpe, 2012; Heer & Anderson, 2005; Piantanida & Garman, 2009). Scholarly journals in various disciplines and applied research fields have their own standards and norms for what they publish. The best way to learn them is to read and study them, and study specialized qualitative methods journals like Qualitative Inquiry, Qualitative Research, Field Methods, Symbolic Interaction, Journal of Contemporary Ethnography, Grounded Theory Review, Qualitative Health Research, and American Journal of Evaluation. The format for evaluation reports is a matter of negotiation and is usually specified in the contract that commissions the evaluation. In all of these cases, the guiding principles remain (a) know your audience and (b) know what you want to say to them. (For in-depth guidance on writing the results of qualitative analysis, see Goodall, 2008, Writing Qualitative Inquiry; Holliday, 2007, Doing and Writing Qualitative Research; Wolcott, 2009, Writing Up Qualitative Research.)


Reflexivity and Voice

I do not put that note of spontaneity that my critics like into anything but the fifth draft. —Economist John Kenneth Galbraith (1986) (In the interview “The Art of Good Writing”)

In Chapter 2, when presenting the major strategic themes of qualitative inquiry, I included as one of the 12 primary themes that of “Voice, Perspective, and Reflexivity.”

The qualitative analyst owns and is reflective about her or his own voice and perspective; a credible voice conveys authenticity and trustworthiness; the inquirer’s focus becomes balance—understanding and depicting the world authentically in all its complexity while being self-analytical, politically aware, and reflexive in consciousness. This reiterates that the qualitative inquirer is the instrument of inquiry. (See Exhibit 2.1, pp. 46–47.)

SIDEBAR

PRESENTATIONS AND FEEDBACK BEFORE A FINAL REPORT OR PUBLICATION

You may be called on to present your findings to colleagues or at seminars or conferences before you’ve completed a final report or formally written up your results for publication. Such occasions can be helpful in testing out your conclusions and getting feedback about how they are received. Indeed, it is wise to seek such opportunities to help you get outside your own perspective and find out what questions your analysis raises in the minds of others. You can use what you learn to focus and fine-tune your report.

Program Evaluation Feedback

A different but related challenge arises in evaluation when, as is typical, intended users (especially program staff and administrators) want preliminary feedback while fieldwork is still under way or as soon as data collection is over. Offering preliminary feedback provides an opportunity to reaffirm with intended users the final focus of the analysis and to nurture their interest in the findings. Academic social scientists have a tendency to want to withhold their findings until they have polished their presentation. Use of evaluation findings, however, does not necessarily center on the final report, which should be viewed as one element in a total utilization process, sometimes a minor element, especially in formative and developmental evaluation (Patton, 2012a; Torres et al., 1996).

Evaluators who prefer to work diligently in the solitude of their offices until they can spring a final report on a waiting world may find that the world has passed them by. Feedback can inform ongoing thinking about a program instead of serving only as a one-shot information input for a single decision point. However, sessions devoted to reestablishing the focus of the evaluation analysis and providing initial feedback need to be handled with care. The evaluator will need to explain that analysis of qualitative data involves a painstaking process requiring long hours of careful work, going over notes, organizing data, looking for patterns, checking emergent patterns against the data, cross-validating data sources and findings, and making linkages among the various parts of the data and the emergent dimensions of the analysis. Thus, any early discussion of findings can only be preliminary, directed at the most general issues and the most striking, obvious results. If, in the course of conducting the more detailed and complete analysis of the data, the evaluator finds that statements made or feedback given during a preliminary session were inaccurate, evaluation users should be informed about the discrepancy at once.


Analysis and reporting are where reflexivity comes to the fore. As discussed in Chapter 2, the term reflexivity has entered the qualitative lexicon as a way of emphasizing the importance of deep introspection, political consciousness, cultural awareness, and ownership of one’s perspective. Reflexivity calls on us to think about how we think and inquire into our thinking patterns even as we apply thinking to making sense of the patterns we observe around us. Being reflexive involves self-questioning, self-understanding, and

interpretation of interpretation and the launching of a critical self-exploration of one’s own interpretations . . . , a consideration of the perceptual, cognitive, theoretical, linguistic, (inter)textual, political and cultural circumstances that form the backdrop to—as well as impregnate—the interpretations. (Alvesson & Sköldberg, 2009, p. 9)

Reflexivity reminds the qualitative inquirer to be attentive to and conscious of the cultural, political, social, linguistic, and economic origins of one’s own perspective and voice as well as the perspective and voices of those one interviews and those to whom one reports. To be reflexive, then, is to undertake an ongoing examination of what I know and how I know it.

So I repeat—analysis and reporting are where reflexivity comes to the fore. Throughout analysis and reporting, as indeed throughout all of qualitative inquiry, questions of reflexivity and voice must be asked as part of the process of engaging the data and extracting findings. Triangulated reflexive inquiry involves three sets of questions (see Exhibit 2.1 in Chapter 2, pp. 46–47):

1. The self-reflexivity question: What do I know? How do I know what I know? What shapes and has shaped my perspective? How have my perceptions and my background affected the data I have collected and my analysis of those data? How do I perceive those I have studied? With what voice do I share my perspective? What do I do with what I found? These questions challenge the researcher to also be a learner, to reflect on our “personal epistemologies”—the ways we understand knowledge and the construction of knowledge (Rossman & Rallis, 1998, p. 25).

2. Reflexive questions about those studied: How do those studied know what they know? What shapes and has shaped their worldview? How do they perceive me, the inquirer? Why? How do I know?

3. Reflexivity about the audience: How do those who receive my findings make sense of what I give them? What perspectives do they bring to the findings I offer? How do they perceive me? How do I perceive them? How do these perceptions affect what I report and how I report it?

SIDEBAR

REFLEXIVITY WHEN RESEARCHING TRAUMA

Connolly and Reilly (2007) examined the impact of conducting narrative research focused on trauma and healing. They recounted what they found through three voices: (1) the study participants, who experienced the trauma; (2) the researchers, who shared their personal experiences of conducting the inquiry; and (3) “an academic colleague who acted as a reflective echo making sense of and normalizing the researcher’s experience” (p. 522). Issues that emerged included

harmonic resonance between the story of the participant and the life experiences of the researcher; emotional reflexivity; complex researcher roles and identities; acts of reciprocity that redress the balance of power in the research relationship; the need for compassion for the participants; and self-care for the researcher when researching trauma. (p. 522)

Based on their reflexive process, the researchers concluded that when researching trauma, the researcher is a member of both a scholarly community and a human community and that maintaining the stance of a member of the human community is an essential element of conducting trauma research.


Reflexivity in the Methods Section of a Report

Because the qualitative inquirer is the instrument of qualitative inquiry and analysis, especially analysis, the methods section of a qualitative report should include some degree of reflexive discussion to acknowledge the perspective, skills, and experiences the inquirer has brought to the work. Such reflexivity is rare to nonexistent in quantitative reports, but qualitative reporting brings a different voice and perspective to the work. The methods section should reflect that difference. That’s also why qualitative reports more often use the first-person voice. (For a discussion on using the first-person, active voice in reporting versus the third-person, passive voice, see the section on reflexivity meets voice [pp. 70–74 in Chapter 2].)

Self-awareness, even a certain degree of self-analysis, has become a requirement of qualitative inquiry. As the reflexive questions above suggest, attention to voice applies not only to intentionality about the voice of the analyst but also to intentionality and consciousness about whose voices and what messages are represented in the stories and interviews we report. Qualitative data “can be used to relay dominant voices or can be appropriated to ‘give voice’ to otherwise silenced groups and individuals” (Coffey & Atkinson, 1996, p. 78). Eminent qualitative sociologist Howard Becker (1967) posed this classically as the question “Whose side are we on?” Societies, cultures, organizations, programs, and families are stratified. Power, resources, and status are distributed differentially. How we sample in the field, and then sample again during analysis in deciding who and what to quote, involves decisions about whose voices will be heard.

Finally, as we report findings, we need to anticipate how what we report will be heard and understood. We need strategies for thinking about the nature of the reporter–audience interaction, for example, understanding how “six basic tendencies of human behavior come into play in generating a positive response: reciprocation, consistency, social validation, liking, authority, and scarcity” (Cialdini, 2001, p. 76). Some writers eschew this responsibility, claiming that they write only for themselves. But researchers and evaluators have larger social responsibilities to present their findings for peer review and, in the cases of applied research, evaluation, and action research, to present their findings in ways that are understandable and useful.

Triangulated reflexive inquiry provides a framework for sorting through these issues during analysis and report writing—and then including in the methods section of your report how these reflections informed your findings. (For examples of qualitative writings centered on illuminating issues of reflexivity and voice, see Hertz, 1997.) We turn now to the content of writing and reporting.

Balancing Description and Interpretation

One of the major decisions that has to be made in reporting is how much description to include. Rich, detailed description and direct quotations constitute the foundation of qualitative inquiry. Sufficient description and direct quotations should be included to allow the reader to enter into the situation observed and the thoughts of the people represented in the report. Description should stop short, however, of becoming trivial and mundane. The reader does not have to know everything that was done or said. Focus comes from having determined what’s substantively significant (see p. 572) and providing enough detail and evidence to illuminate and make that substantive case.

Yet the description must not be so “thin” as to remove context or meaning. Qualitative analysis, remember, is grounded in “thick description”:

A thick description does more than record what a person is doing. It goes beyond mere fact and surface appearances. It presents detail, context, emotion, and the webs of social relationships that join persons to one another. Thick description evokes emotionality and self-feelings. It inserts history into experience. It establishes the significance of an experience, or the sequence of events, for the person or persons in question. In thick description, the voices, feelings, actions, and meanings of interacting individuals are heard. (Denzin, 1989c, p. 83)

From Thick Description to Thick Interpretation


Thick description sets up and makes possible thick interpretation. By “thick interpretation,” Denzin (1989c) means, in part, connecting individual cases to larger public issues and to the programs that serve as the linkage between individual troubles and public concerns: “The perspectives and experiences of those persons who are served by applied programs must be grasped, interpreted, and understood if solid, effective, applied programs are to be put into place” (p. 105).

Description precedes and is then balanced by analysis and interpretation. Endless description becomes its own muddle. The purpose of analysis is to organize the description so that it is manageable. Description provides the skeletal frame for analysis that leads to interpretation. An interesting and readable report provides sufficient description to allow the reader to understand the basis for an interpretation and sufficient interpretation to allow the reader to appreciate the description.

Details of verification and validation processes (topics of the next chapter) are typically placed in a separate methods section of a report, but parenthetical remarks throughout the text about findings that have been validated can help readers value what they are reading. For example, if I describe some program process and then speculate on the relationship between that process and client outcomes, I may mention that (a) staff and clients agreed with this analysis when they read it, (b) I experienced this linkage personally as a participant-observer in the program, or (c) this connection was independently arrived at by two analysts looking at the data separately.

The report should help readers understand the different degrees of significance of various findings, if these exist. Since qualitative analysis lacks the parsimonious significance tests of statistics, the qualitative analyst must make judgments that provide clues for the reader as to the writer’s belief about variations in the credibility and importance of different findings: When are patterns “clear”? When are they “strongly supported by the data”? When are the patterns “merely suggestive”? Readers will ultimately make their own decisions and judgments about these matters based on the evidence you’ve provided, but your analysis-based opinions and speculations deserve to be reported and are usually of interest to readers given that you’ve struggled with the data and know them better than anyone else.

Exhibit 8.35, at the end of this chapter (pp. 643–649), presents portions of a report describing the effects on participants of their experiences in the wilderness education program. The data come from in-depth, open-ended interviews. This excerpt illustrates the centrality of quotations in supporting and explaining thematic findings.

Communicating With Metaphors and Analogies

All perception of truth is the detection of an analogy. —Henry David Thoreau (1817–1862)

The museum study reported earlier in the discussion of analyst-generated typologies distinguished different kinds of visitors by using metaphors: the “commuter,” the “nomad,” the “cafeteria type,” and the “VIP.” In the dropout study, we relied on metaphors to depict the different roles we observed teachers playing in interacting with truants: the “cop,” the “old-fashioned schoolmaster,” and “the ostrich.” Language not only supports communication but also serves as a form of representation, shaping how we perceive the world (Chatterjee, 2001; Patton, 2000).

Metaphors and analogies can be powerful ways of connecting with readers of qualitative studies. But some analogies offend certain audiences, so they must be selected with some sensitivity to how those being described would feel and how intended audiences will respond. At a meeting of the Midwest Sociological Society, distinguished sociologist Morris Janowitz was asked to participate in a panel on the question “What is the cutting edge of sociology?” Janowitz (1979), having written extensively on the sociology of the military, took offense at the “cutting edge” metaphor. He explained,


Paul Fussell, the humanist, has prepared a powerful and brilliant sociological study of the literary works of the great wars of the 20th century which he entitled The Great War and Modern Memory. It is a work which all sociologists should read. His conclusion is that World War I and World War II, Korea and Vietnam have militarized our language. I agree and therefore do not like the question “Where is the cutting edge of sociology?” “Cutting Edge” is a military term. I am put off by the very term cutting edge. Cutting edge, like the parallel term breakthrough, are slogans which intellectuals have inherited from the managers of violence. Even if they apply to the physical sciences, I do not believe that they apply to the social sciences, especially sociology, which grows by gradual accretion. (p. 591)

Of particular importance, in this regard, is avoiding metaphors with possible racist and sexist connotations, for instance, “It’s black and white.” One external reviewer of this book felt that this point deserves special emphasis, so I yield the floor, so to speak:

A lack of respect for people is conveyed in insensitive metaphors. So let’s avoid metaphors that are insensitive to regional differences, health and mental health differences, sexual orientation, and so on and so forth. No need to list them all (who could?), but at least make the comment about being respectful of people rather than seeming to be specific to just two types of insensitivity [racist and sexist insensitivities]. It is just too easy to inadvertently fall into habits of speech that can be hurtful.

At an Educational Evaluation and Public Policy Conference sponsored by the Far West Laboratory for Educational Research and Development, the women’s caucus expressed concern about the analogies used in evaluation and went on to suggest some alternatives:

To deal with diversity is to look for new metaphors. We need no new weapons of assessment—the violence has already been done! How about brooms to sweep away the attic-y cobwebs of our male/female stereotypes? The tests and assessment techniques we frequently use are full of them. How about knives, forks, and spoons to sample the feast of human diversity in all its richness and color. Where are the techniques that assess the deliciousness of response variety, independence of thought, originality, uniqueness? (And lest you think those are female metaphors, let me do away with that myth—at our house everybody sweeps and everybody eats!) Our workgroup talked about another metaphor—the cafeteria line versus the smorgasbord banquet of styles of teaching/learning/assessing. Many new metaphors are needed as we seek clarity in our search for better ways of evaluating. To deal with diversity is to look for new metaphors. (Hurty, 1976, p. 1)

When employing a metaphor, it is important to make sure that it serves the data and not vice versa. Don’t manipulate the data to fit the metaphor. Moreover, because metaphors carry implicit connotations, it is important to make sure that the data fit the most prominent of those connotations so that what is communicated is what the analyst wants to communicate. Finally, one must avoid reifying metaphors and acting as if the world were really the way the metaphor suggests it is.

The metaphor is chiefly a tool for revealing special properties of an object or event. Frequently, theorists forget this and make their metaphors a real entity in the empirical world. It is legitimate, for example, to say that a social system is like an organism, but this does not mean that a social system is an organism. When metaphors, or concepts, are reified, they lose their explanatory value and become tautologies. A careful line must be followed in the use of metaphors, so that they remain a powerful means of illumination. (Denzin, 1978b, p. 46)

How “real” metaphors are may turn out to be a specific manifestation of Thomas’s theorem: What is perceived as real is real in its consequences. Brain scans are revealing that when we read a detailed description, an evocative metaphor, or an emotional story, the brain is stimulated. A team of brain researchers from Emory University found that

when subjects in their laboratory read a metaphor involving texture, the sensory cortex, responsible for perceiving texture through touch, became active. Metaphors like “The singer had a velvet voice” and “He had leathery hands” roused the sensory cortex, while phrases matched for meaning, like “The singer had a pleasing voice” and “He had strong hands,” did not. (Paul, 2012, p. SR6)

This invites research into the larger question of what happens to the brain when one is analyzing qualitative data or reading a full qualitative case study. Stay tuned.


Creating and Incorporating Visuals

The raw data of qualitative inquiry take the form of words, narratives, recorded observations, documents, and stories. This chapter has discussed how these data are analyzed and interpreted through content, pattern, and theme analysis; constructing case studies and cross-case analyses; creating typologies; and depicting causal connections and interrelationships. At the center of all these analytical approaches have been words. Now, we turn to those things that have legendarily and metaphorically been worth a thousand words: pictures, visuals, and graphics.

No trend is more pronounced in the past decade than the visualization of data and findings. The capability to create meaningful and powerful visuals is a skill that is likely to become increasingly important in our short-attention-span world. Visualization rules. But before we exalt its place in contemporary analysis and reporting, let’s pause and give a nod to the visualization pioneers who couldn’t just go on the Internet and download some impressive photos and graphics. The legendary nurse Florence Nightingale (1820–1910) gathered data over a period of a year showing deaths in the Crimean War (1855–1856) due to battle wounds compared with deaths due to infections and disease (a much larger number). She converted the data into a color-coded visual display that proved hugely influential in changing sanitation practices and paved the way for attention to preventing infections, which saved hundreds of thousands of lives (Magnello, 2012). In 1894, George Waring Jr. was appointed Street Commissioner of the City of New York and began a systematic sanitation program that cleared the streets of New York of shin-deep garbage and animal and human waste. He took and had published in the newspapers before and after photographs showing the visual difference his reforms brought (Waring, 1897; Wells, 2012). In 1896, New York City honored him with a parade of appreciation for his contributions to the city’s quality of life, which included draining the wetlands of Manhattan Island to create Central Park. That is the vision of visualization we build on.

The best way to present qualitative data visualization is not to talk about such visuals but to provide actual illustrations. Exhibit 3.13 (pp. 142–143) in Chapter 3 presented a theory-of-change baseline systems graphic depicting the situation at the beginning of a light rail construction project; the key system actors, structures, and processes; and the lack of integration among these subsystem elements. The purpose of that visual graphic was to provide an example of how qualitative inquiry can be used to depict a system and support the conceptualization and evaluation of system change. Take another look at that graphic from the perspective of visually depicting findings, in that case the results of focus groups with key knowledgeables.

In this section, I’ll present examples of visual displays of qualitative findings with a minimum of accompanying verbiage, inviting you instead to engage with each type of illustration as a way of stimulating your thinking about making visualizations a part of your qualitative reporting (see Exhibits 8.20–8.27).

SIDEBAR

PICTURING DISABILITIES

Qualitative sociologist Robert Bogdan (2012) has assembled, analyzed, and published more than 200 historical photographs of people with disabilities. Beginning in the 1860s, when photography was emerging as a commercial enterprise, up to the 1970s, when the disability rights movement forced change, he shows how people with disabilities were portrayed. In one photo, a young woman with no arms wears a sequined tutu and smiles for the camera as she holds a teacup with her toes. In another, a man holds up two prosthetic legs while his own legs are bared to the knees to show his missing feet. Such photos were used as promotional material for circus sideshows and charity drives and hung in art galleries. They were found on “begging cards” and in family albums.

Bogdan’s (2012) analysis includes an inquiry into the perspective, role, and values of the photographers who took these photos and the contexts within which such photographs were created and people with disabilities were exploited. He examines a wide range of purposes and uses of disability photographs, from sideshow souvenirs to clinical photographs. The photographs are both data and visualization of findings.

EXHIBIT 8.20 Photos Before and After to Illustrate Change

Vietnam Helmet Law

• Road traffic injuries have long been a leading cause of death and disability in Vietnam.
• 60% of fatalities occur in motorcycle riders and passengers.
• Vietnam has had partial motorcycle helmet legislation since 1995; however, implementation and enforcement had been limited.
• On 15 December 2007, Vietnam’s first comprehensive mandatory helmet law came into effect, covering all riders and passengers on all roads nationwide. Penalties increased tenfold, and cohorts of police were mobilized for enforcement.

Photos on street corners the day before the law took effect

Photos on the same street corners the day after the law took effect


Results

The Asia Injury Prevention Foundation reported: “Nearly 100% of Vietnam’s motorbike users left home wearing a helmet. It was an unbelievable sight with a near instantaneous effect. Major hospitals report the number of patients admitted for traumatic brain injuries in the two days after the law’s enactment was much lower than on previous weekends. In Ho Chi Minh City alone, serious traffic accident injuries fell by almost 50 percent compared with pre-helmet weekends.”

SOURCE: McDonnell, Tran, and McCoy (2010).


CHAPTER

8 Qualitative Analysis and Interpretation

In set theory, an empty set is denoted by a pair of empty braces: { }. A set is defined as a collection of things that are brought together because they have something in common. In mathematical set theory, the items in a set must obey a clear, definitive rule, for example, whole numbers that are multiples of 2 (2, 4, 6, 8, 10, . . . ) or words that begin with the letters Qu and have only one syllable (Queen, Queer, Quinn, Quit, . . . ). The rule must be without ambiguity, so that it is clear and uncontested that an item does or does not belong in the set.
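The decisiveness of such a rule can be shown in a minimal, purely illustrative sketch in Python: a program can test membership mechanically, with no judgment involved.

# Minimal sketch (illustrative only): a mathematical set is defined by
# an unambiguous rule, so membership can be tested mechanically.

def in_set(n):
    """Rule: whole numbers that are multiples of 2."""
    return n % 2 == 0

print([n for n in range(1, 11) if in_set(n)])  # [2, 4, 6, 8, 10]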

In qualitative analysis, in contrast, what items belong in a set (e.g., a grouping, a category, a pattern, a theme) is a matter of judgment. Judgments can vary depending on who is doing the judging, with what criteria, and for what purpose. Thus, unlike rules, judgments can be ambiguous. Love and hate may be considered in the same set (strong emotions) or may be judged to belong in different sets (things that bring humans together vs. things that divide us). What constitutes a set in the game of tennis is clear, defined by a rule. What constitutes a set in qualitative analysis must be defined and created anew each time the game, qualitative analysis, is played. Play on.

Chapter Preview

Part 1 of the book provided an overview of qualitative inquiry, with chapters on the nature, niche, and value of qualitative inquiry; strategic themes in qualitative inquiry; a variety of qualitative inquiry frameworks (paradigmatic, philosophical, and theoretical orientations); and practical and actionable qualitative applications. Part 2 covered qualitative designs and data collection, with chapters covering purposeful sampling and design options, fieldwork strategies and observation methods, and qualitative interviewing. Part 3 presents the two final chapters: Chapter 8, “Qualitative Analysis and Interpretation,” followed by Chapter 9, “Enhancing the Quality and Credibility of Qualitative Studies.”

Module 65 in this chapter opens by covering the basics of analysis, with a focus on establishing a strong foundation for qualitative analysis. Module 66 presents the importance of thick description and constructing case studies. Module 67 turns to pattern, theme, and content analysis. Module 68 looks in depth at the intellectual and operational work of analysis. Module 69 presents logical and matrix analyses and explains how to synthesize qualitative studies. Module 70 takes on the critical processes of interpreting findings and determining substantive significance, with special attention to phenomenological and hermeneutic examples. Module 71 examines causal explanation through qualitative analysis. Module 72 opens the window on new analysis directions: contribution analysis, participatory analysis, and qualitative counterfactuals. Module 73 provides advice and examples on writing up and reporting findings, including using visuals. Module 74 addresses special analysis and reporting issues—mixed methods and focused communications—and provides a principles-focused report exemplar. Finally, Module 75 summarizes and concludes the chapter and provides case study exhibits. We begin with basic analysis.


SOURCE: Brazilian cartoonist Claudius Ceccon. Used with permission.


MODULE

65 Establishing a Strong Foundation for Qualitative Analysis: Covering the Basics

Good field methods are necessary, but not sufficient, for good research. You may be a skilled and diligent observer and interviewer and gather “rich data,” but, unless you have good ideas about how to focus the study and analyze those data, your project will yield little of value.

—William Foote Whyte (1984, p. 225) Learning From the Field

The Challenge

Qualitative analysis transforms data into findings. No formula exists for that transformation. Guidance yes, but no recipe. Direction can and will be offered, but the final destination remains unique for each inquirer, known only when—and if—arrived at.

Medieval alchemy aimed to transmute base metals into gold. Modern alchemy aims to transform raw data into knowledge, the coin of the Information Age. Rarity increases value. Fine qualitative analysis remains rare and difficult—and therefore valuable.

Metaphors abound. Analysis begins during a larval stage that, if fully developed, metamorphoses from a caterpillar-like beginning into the splendor of the mature butterfly. Or this: The inquirer acts as a catalyst on raw data, generating an interaction that synthesizes new substance born anew of the catalytic conversion. Or this: Findings emerge like an artistic mural created from collage-like pieces that make sense in new ways when seen and understood as part of a greater whole.

Consider the patterns and themes running through these metaphors: transformation, transmutation, conversion, synthesis, whole from parts, and sense making. Such motifs run through qualitative analysis like golden threads in a royal garment. They decorate the garment and enhance its quality, but they may also distract attention from the basic cloth that gives the garment its strength and shape—the skill, knowledge, experience, creativity, diligence, and work of the garment maker. No abstract processes of analysis, no matter how eloquently named and finely described, can substitute for the skill, knowledge, experience, creativity, diligence, and work of the qualitative analyst. Thus, Stake (1995) writes classically of the art of case study research. Van Maanen (1988) emphasizes the storytelling motifs of qualitative writing in his ethnographic book on telling tales. Golden-Biddle and Locke (2007) make story the central theme in their book Composing Qualitative Research. Corrine Glesne (2010), a researcher and a poet, begins with the story analogy, describing qualitative analysis as “finding your story,” then later represents the process as “improvising a song of the world.” Lawrence-Lightfoot and Davis (1997) evoke “portraits” in naming their form of qualitative analysis The Art and Science of Portraiture. Brady (2000) explores “anthropological poetics.” Janesick (2000) uses the metaphor of dance in “the choreography of qualitative research design,” which suggests that, for warming up, we may need “stretching exercises” (Janesick, 2011). Hunt and Benford (1997) call to mind theatre as they use “dramaturgy” to examine qualitative inquiry. Denzin (2003) and Hamera (2011) call for ethnography to be “performative.” Richardson (2000b) reminds us that qualitative analysis and writing involve us not just in making sense of the world but also in making sense of our relationship to the world and therefore in discovering things about ourselves even as we discover things about some phenomenon of interest. In this complex and multifaceted analytical integration of disciplined science, creative artistry, skillful crafting, rigorous sense making, and personal reflexivity, we mold interviews, observations, documents, and field notes into findings.


The challenge of qualitative analysis lies in making sense of massive amounts of data. This involves reducing the volume of raw information, sifting the trivial from the significant, identifying significant patterns, and constructing a framework for communicating the essence of what the data reveal. In analyzing qualitative data, guidelines exist but no recipes; principles provide direction, but there is no significance test to run that determines whether a finding is worthy of attention. No ways exist of perfectly replicating the researcher’s analytical thought processes. No straightforward tests can be applied for reliability and validity. In short, no absolute rules exist, except perhaps this: Do your very best with your full intellect to fairly represent the data and communicate what the data reveal given the purpose of the study.

SIDEBAR

DISTINGUISHING SIGNAL FROM NOISE

When you try to locate a clear radio station signal through the static noise that fills the airwaves between signals, you are engaged in the process of distinguishing signal from noise. Nate Silver (2012) used that metaphor as the title of his best-selling book on “why so many predictions fail—but some don’t.” Silver found that the best predictions—whether of election outcomes, economic patterns, social trends, spread of disease, winning sports teams, stock market indicators, or any of the many arenas in which humans attempt predictions—are those that use both quantitative and qualitative data and use theory to turn data into a feasible, meaningful, and compelling story. Data are noise, lots and lots of noise. The more data, the more noise. Big data: loud noise. The story that detects, makes sense of, interprets, and explains meaningful patterns in the data is the signal.

But signals are not constant or static. They vary with context and change over time. So the quest to distinguish signal from noise is ongoing.

Frameworks for analyzing qualitative data can be found in abundance (Saldaña, 2011), and studying examples of qualitative analysis can be especially helpful, as in the Miles, Huberman, and Saldaña (2014) qualitative analysis sourcebook. But guidelines, procedural suggestions, and exemplars are not rules. Applying guidelines requires judgment and creativity. Because each qualitative study is unique, the analytical approach used will be unique. Because qualitative inquiry depends, at every stage, on the skills, training, insights, and capabilities of the inquirer, qualitative analysis ultimately depends on the analytical intellect and style of the analyst. The human factor is the great strength and the fundamental weakness of qualitative inquiry and analysis—a scientific two-edged sword.

That said, let’s get on with it. Exhibit 8.1 offers 12 tips for laying a strong foundation for qualitative analysis. These tips are neither exhaustive nor universal. They won’t all apply to everyone, but perhaps they will stimulate you to think of other things you might do to get yourself ready for the challenges of qualitative analysis. I’ll elaborate several of these tips and then delve into alternative ways of conducting qualitative analysis.

Elaboration of Some Tips for Ensuring That a Strong Foundation for Qualitative Analysis Begins During Fieldwork

Field methods have the advantage of flexibility, allowing us to explore the field, to refine or change the initial problem focus, and to adapt the data gathering process to ideas that occur to us even in late stages of our exploration.

—Whyte (1984, p. 225)


Research texts typically make a hard-and-fast distinction between data collection and analysis. For data collection based on surveys, standardized tests, and experimental designs, the lines between data collection and analysis are clear. But the fluid and emergent nature of naturalistic inquiry makes the distinction between data gathering and analysis far less absolute. In the course of fieldwork, ideas about directions for analysis will occur. Patterns take shape. Signals start to emerge from the noise. Possible themes spring to mind. Thinking about implications and explanations deepens the final stage of fieldwork. While earlier stages of fieldwork tend to be generative and emergent, following wherever the data lead, later stages bring closure by moving toward confirmatory data collection—deepening insights into and confirming (or disconfirming) patterns that seem to have appeared. Indeed, sampling confirming and disconfirming cases requires a sense of what there is to be confirmed or disconfirmed.

Ideas for making sense of the data that emerge while still in the field constitute the beginning of analysis; they are part of the record of field notes. Sometimes insights emerge almost serendipitously. When I was interviewing recipients of MacArthur Foundation fellowships (popularly dubbed “Genius Awards”), I happened to interview several people in major professional and personal transitions, followed by several in quite stable situations. This happenstance of how interviews were scheduled suggested a major distinction that became important in the final analysis—distinguishing the impact of the fellowships on recipients in transition from those in stable situations.

Recording and tracking analytical insights that occur during data collection is part of fieldwork and the beginning of qualitative analysis. I’ve heard graduate students being instructed to repress all analytical thoughts while in the field and to concentrate on data collection. Such advice ignores the emergent nature of qualitative designs and the power of field-based analytical insights. Certainly, this can be overdone. Too much focus on analysis while fieldwork is still going on can interfere with the openness of naturalistic inquiry, which is its strength. Rushing to premature conclusions should be avoided. But repressing analytical insights may mean losing them forever, for there’s no guarantee they’ll return. And repressing in-the-field insights removes the opportunity to adapt data collection to test the authenticity of those insights while still in the field and fails to acknowledge the confirmatory possibilities of the closing stages of fieldwork. In the MacArthur Fellowship study, I added transitional cases to the sample near the end of the study to better understand the varieties of transitions the fellows were experiencing—an in-the-field form of emergent, purposeful sampling driven by field-based analysis. Such overlapping of data collection and analysis improves both the quality of the data collected and the quality of the analysis, so long as the fieldworker takes care not to allow these initial interpretations to overly confine analytical possibilities. Indeed, instead of focusing additional data collection entirely on confirming emergent patterns while still in the field, the inquiry should become particularly sensitive to looking for alternative explanations and patterns that would invalidate the initial insights.

EXHIBIT 8.1 Twelve Tips for Ensuring a Strong Foundation for Qualitative Analysis

1. Begin analysis during fieldwork: Note and record emergent patterns and possible themes while still in the field. Add confirming cases to deepen analysis and possible disconfirming cases to test thematic ideas while still in the field.

2. Inventory and organize the data: Make sure you have all the interviews, observations, and documents that constitute the raw data of your qualitative inquiry. Check that the data elements and sources are labeled, dated, and complete.

3. Fill in gaps in the data: As soon as possible, fill in the gaps in the data while connections in the field are fresh. If later interviews turn up issues that need to be checked out with earlier interviewees, do so quickly. If documents are missing, take steps to get them.

4. Protect the data: Back them up. Make sure the data are secure.

5. Express appreciation: Thank those who have provided you with data. Fieldwork creates relationships. Once out of the field, analysis and writing can take a lot of time. Don’t wait until it’s all done to show your appreciation to those who have provided you with data. Follow up with appropriate expressions of appreciation sooner rather than later. Procrastination can too easily lead to never getting it done. Do it!

6. Reaffirm the purpose of your inquiry: Restate the purpose of your inquiry, and therefore the purpose of your analysis. Purpose drives analysis. Design frames and sets the stage for analysis. Be clear about why you’re doing this work, revisit and reengage with the questions and the purposeful sampling strategy that have guided your inquiry, and get clear about the primary product that you are producing that will fulfill your study’s purpose and answer priority questions in accordance with your design.

7. Review exemplars for inspiration and guidance: Reexamining classic works in your field can be a source of inspiration. They are classics for a reason. Keep those exemplars nearby to reinvigorate and motivate you when the drudgery of analysis sets in or doubts about what you’re finding emerge. Those who wrote the classics experienced analysis fatigue and doubts as well. They persevered. So will you.

8. Make qualitative analysis software decisions: If you’re using qualitative data management software, learn how to use it effectively. Locate technical support. Practice data entry and some simple analysis. All qualitative analysis software has a steep learning curve. Leave time and mental space to learn if you’re new to the software. If you’re an old hand, check out new versions and features that might be useful.

9. Schedule intense, dedicated time for analysis: Qualitative analysis requires immersion in the data. It takes time. Make time. Set a realistic schedule. Enlist the support of family, friends, and colleagues to help you stay focused, and give the analysis the dedicated time it deserves.

10. Clarify and determine your initial analysis strategy: Inductive qualitative analysis can follow a number of pathways. Various theoretical traditions (ethnography, phenomenology, constructivism, realism, etc.) provide frameworks and guidance. Data can be organized and reported in different ways: case studies, question-by-question interview analysis, storytelling, elucidating sensitizing concepts or principles, and thematic analysis, among others. Grounded theory provides a highly prescriptive framework for analysis. Decide what strategy fits your purpose, write it down with your rationale, and get started analyzing. This process involves reconnecting with the theoretical and strategic framework that presumably guided design decisions and the formulation of your inquiry questions.

11. Be reflective and reflexive: Monitor your thought processes and decision-making criteria. Be in touch with predispositions, biases, fears, hopes, constraints, blinders, and pressures you’re under. Qualitative analysis is ultimately highly personal and judgmental. You are the analyst. Observe yourself. Learn about yourself and your analysis processes, both cognitively and emotionally.

12. Start and keep an analysis journal: Document analysis decisions, emergent ideas, forks in the road, false starts, dead ends, breakthroughs, eureka moments, what you learn about analysis, what you learn about the focus of the inquiry, and what you learn about yourself. You may think you’ll remember these things. You won’t. Document the analytical process—in depth, systematically, and regularly. That documentation is the foundation of rigor. Qualitative analysis is a new stage of fieldwork in which you must observe and document your own processes even as you are doing the analysis.

See elaboration of these tips in the next sections.
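
Tip 12 also lends itself to very modest tooling. The sketch below illustrates one way a researcher comfortable with scripting might keep a timestamped, append-only analysis journal. It is a minimal illustration in Python; the file name and entry format are assumptions, not a prescribed method.

    from datetime import datetime

    JOURNAL_PATH = "analysis_journal.txt"  # hypothetical location for the journal

    def log_analysis_note(note, path=JOURNAL_PATH):
        """Append a timestamped entry to a plain-text analysis journal.
        Entries are append-only, so false starts and dead ends stay documented."""
        stamp = datetime.now().isoformat(timespec="minutes")
        with open(path, "a", encoding="utf-8") as journal:
            journal.write(f"[{stamp}] {note}\n")

    log_analysis_note("Merged 'isolation' and 'loneliness' codes after rereading interview 7.")

The point is not the tooling but the habit: each analysis decision gets recorded at the moment it is made.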

In essence, when data collection has ended and it is time to begin the formal and focused analysis, the qualitative inquirer has two primary sources to draw from in organizing the analysis: (1) the questions that were generated during the conceptual and design phases of the study, prior to fieldwork, and (2) the analytic insights and interpretations that emerged during data collection.


Even when analysis and writing are under way, fieldwork may not be over. On occasion, gaps or ambiguities found during analysis cry out for more data collection—so, where possible, interviewees may be recontacted to clarify or deepen responses, or new observations are made to enrich descriptions. This is called member checking—verifying data, findings, and interpretations with the participants in the study, especially key informants. While writing a Grand Canyon–based book that describes modern male coming-of-age issues (Patton, 1999), I conducted several follow-up and clarifying interviews with my two key informants and returned to the Grand Canyon four times to deepen my understanding of Canyon geology and add descriptive depth. Each time I thought that, at last, fieldwork was over and I could just concentrate on writing, I came to a point where I simply could not continue without more data collection. Such can be the integrative, iterative, and synergistic processes of data collection and analysis in qualitative inquiry. A final caveat, however: Perfectionism breeds imperfections. Often, additional fieldwork isn’t possible, so gaps and unresolved ambiguities are noted as part of the final report. Dissertation and publication deadlines may also obviate additional confirmatory fieldwork. And no amount of additional fieldwork can, or should, be used to force the vagaries of the real world into hard-and-fast conclusions or categories. Such perfectionist and forced analysis ultimately undermines the authenticity of inductive, qualitative analysis. Finding patterns is one result of analysis. Finding vagaries, uncertainties, and ambiguities is another.

Inventory and Organize the Raw Data for Analysis

It wasn’t curiosity that killed the cat.

It was trying to make sense of all the data curiosity generated. —Halcolm

The data generated by qualitative methods are voluminous. I have found no way of preparing students for the sheer mass of information they will find themselves confronted with when data collection has ended. Sitting down to make sense out of pages of interviews and whole files of field notes can be overwhelming. Organizing and analyzing a mountain of narrative can seem like an impossible task.

How big a mountain? Consider a study of community and scientist perceptions of HIV vaccine trials in the United States done by the Centers for Disease Control. A large, complex, multisite effort called Project LinCS: Linking Communities and Scientists, the study generated more than 10,000 pages of transcribed text from 313 interviews with 238 participants on a range of topics (MacQueen & Milstein, 1999). Now that’s an extreme case, but on average, a one-hour interview will yield 10 to 15 single-spaced pages of text; 10 two-hour interviews will yield roughly 200 to 300 pages of transcripts.
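
For planning purposes, that rule of thumb reduces to simple arithmetic. Here is a minimal sketch in Python using the page-per-hour figures cited above; the function name and its default values are illustrative only.

    def estimate_transcript_pages(num_interviews, hours_each,
                                  pages_per_hour_low=10, pages_per_hour_high=15):
        """Rough range of single-spaced transcript pages implied by a design,
        based on the 10-to-15 pages-per-interview-hour rule of thumb."""
        total_hours = num_interviews * hours_each
        return total_hours * pages_per_hour_low, total_hours * pages_per_hour_high

    # Ten two-hour interviews: roughly 200 to 300 pages of transcripts.
    low, high = estimate_transcript_pages(10, 2)
    print(f"Expect roughly {low} to {high} pages of transcripts.")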

Getting organized for analysis begins with an inventory of what you have. Are the field notes complete? Are there any parts that you put off to write later but never got to doing that need to be finished, even at this late date, before beginning analysis? Are there any glaring holes in the data that can still be filled by collecting additional data before the analysis begins? Are all the data properly labeled with a notation system that will make retrieval manageable (dates, places, interviewee-identifying information, etc.)? Are interview transcriptions complete? Assess the quality of the information you have collected. Get a sense of the whole.
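
Where transcripts and field notes are stored as electronic files, even a short script can support this inventory. The following is a minimal sketch, assuming a hypothetical labeling convention of site, date, and participant ID in the file name and a raw_data folder; both the convention and the folder name are assumptions for illustration, not a standard.

    import re
    from pathlib import Path

    DATA_DIR = Path("raw_data")  # hypothetical folder of transcripts and field notes
    # Assumed convention: site_YYYY-MM-DD_participantID.txt, e.g., siteA_2018-03-14_P07.txt
    NAME_PATTERN = re.compile(r"^[A-Za-z0-9]+_\d{4}-\d{2}-\d{2}_[A-Za-z0-9]+\.txt$")

    def inventory(data_dir=DATA_DIR):
        """Flag files that violate the labeling convention or appear empty,
        so gaps can be identified and filled before analysis begins."""
        problems = []
        for f in sorted(data_dir.glob("*.txt")):
            if not NAME_PATTERN.match(f.name):
                problems.append((f.name, "does not follow the naming convention"))
            elif f.stat().st_size == 0:
                problems.append((f.name, "file is empty"))
        return problems

    for name, issue in inventory():
        print(f"{name}: {issue}")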


[Cartoon ©2002 Michael Quinn Patton and Michael Cochran]

Fill in Gaps in the Data

The problem of incomplete data is illustrated by the experience of a student who had conducted 30 in-depth pre- and post-interviews with participants in a special program. The transcription process took several weeks. She made copies of three transcripts and brought them to our seminar for assistance in doing the analysis. As I read the interviews, I got a terrible sinking feeling in my stomach. While other students were going over the transcriptions, I pulled her aside and asked her what instructions she had given the typist. It was clear from reading just a few pages that she did not have verbatim transcriptions—the essential raw data for qualitative analysis. The language in each interview was the same. The sentence structures were the same. The answers were grammatically correct. People in natural conversations simply do not talk that way. The grammar in natural conversations comes out atrocious when transcribed. Sentences hang incomplete, interrupted by new thoughts before the first sentence is completed. Without the knowledge of this student, and certainly without her permission, the typist had decided to summarize the participants’ responses because “so much of what they said was just rambling on and on about nothing,” the transcriber later explained. All of the interviews had to be transcribed again before analysis could begin.

Earlier, I discussed the transition from fieldwork to analysis. Transcribing offers another point of transition between data collection and analysis as part of data management and preparation. Doing all or some of your own interview transcriptions (instead of having them done by a transcriber), for example, provides an opportunity to get immersed in the data, an experience that usually generates important insights. Typing and organizing handwritten field notes offer another opportunity to immerse yourself in the data, a chance to get a feel of the cumulative data as a whole. Doing your own transcriptions, or at least checking them by listening to the tapes as you read them, can be quite different from just working off transcripts done by someone else.

Protect Your Data

Thomas Carlyle lent the only copy of his handwritten manuscript on the history of the French Revolution, his masterwork, to philosopher John Stuart Mill, who lent it to a Mrs. Taylor. Mrs. Taylor’s illiterate housekeeper thought it was waste paper and burned it. Carlyle reacted with nobility and stoicism and immediately set about rewriting the book. It was published in 1837 to critical acclaim and consolidated Carlyle’s reputation as one of the foremost men of letters of his day. We’ll never know how the acclaimed version compared with the original, or what else Carlyle might have written in the year lost after the fireplace calamity.

So it is prudent to make backup copies of all your data, putting one master copy away someplace secure. Indeed, if data collection has gone on over any long period, it is wise to make copies of the data as they are collected, being certain to put one copy in a safe place where it will not be disturbed and cannot be lost, stolen, or burned. One of my graduate students kept all of his field notes and transcripts in the trunk of his car. The car was vandalized, and he lost everything, with no backup copies. The data you’ve collected are unique and precious. The exact observations you’ve made, the exact words people have spoken in interviews—these can never be recaptured in precisely the same way, even if new observations are undertaken and new interviews are conducted. Moreover, you’ve likely made promises about protecting confidentiality, so you have an obligation to take care of the data. Field notes and interviews should be treated as the valuable material they are. Protect them.

Beyond Thomas Carlyle’s cautionary tale, my advice in this regard comes from two more recent disasters. I was at the University of Wisconsin when antiwar protestors bombed a physics building, destroying the life work of several professors. I also had a psychology doctoral student who carried her dissertation work, including all the raw data, in the back seat of her car. An angry patient from a mental health clinic with whom she was working firebombed her car, destroying all of her work. Tragic stories of lost research, while rare, occur just often enough to remind us about the wisdom of an ounce of prevention.

Once a copy is put away for safekeeping, I like to have one hard copy handy throughout the analysis, one copy for writing on, and one or more copies for cutting and pasting. A great deal of the work of qualitative analysis involves creative cutting and pasting of the data, even if done on a computer, as is now common, rather than by hand. Under no circumstances should one yield to the temptation to begin cutting and pasting the master copy. The master copy or computer file remains a key resource for locating materials and maintaining the context for the raw data.

Qualitative data analysis software (QDAS) facilitates saving data in multiple digital locations, such as external hard drive, server, DVD copy, and flash drive. However, the researcher must be disciplined enough to update backups frequently and not destroy all old copies, especially preserving the original master copy.
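
That discipline can be partly automated. Below is a minimal sketch of dated, never-overwritten snapshots in Python; the folder locations are illustrative assumptions, and a real backup plan would also include at least one off-site copy.

    import shutil
    from datetime import date
    from pathlib import Path

    SOURCE = Path("raw_data")                  # working copy of the data
    BACKUP_ROOT = Path("/media/backup_drive")  # e.g., an external drive; illustrative path

    def snapshot(source=SOURCE, backup_root=BACKUP_ROOT):
        """Copy the whole data folder to a new dated directory.
        Existing snapshots are never overwritten, preserving the master copy."""
        target = backup_root / f"{source.name}_{date.today().isoformat()}"
        if target.exists():
            raise FileExistsError(f"Snapshot {target} already exists; not overwriting.")
        shutil.copytree(source, target)
        return target

    # snapshot()  # run after each fieldwork or transcription session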

Purpose Drives Analysis

“Data” linked to real human social worlds is where human social science, whatever else it does, has to start and has to finish.

—Michael Agar (2013, p. 19), The Lively Science

Purpose drives analysis: This follows from the theme of Chapter 5, that purpose drives design. Design gives a study direction and focus. Chapter 5 presented a typology of inquiry purposes: basic research, applied research, summative evaluation research, formative evaluation, and action research. (See Exhibit 5.1, p. 250.) These distinct purposes inform design decisions and inquiry focus and subsequently undergird analysis because they involve different norms and expectations for generating, validating, presenting, and using findings.

Basic qualitative research is typically reported through a scholarly monograph or published article, with primary attention to the contribution of the research to social science theory. The theoretical framework within which the study is conducted will heavily shape the analysis. As Chapter 3 made clear, the theoretical framework for an ethnographic study will differ from that for ethnomethodology, heuristics, or hermeneutics.

Applied qualitative research may have a more or less scholarly orientation depending on primary audience. If the primary audience is scholars, then applied research will be judged by the standards of basic research, namely, research rigor and contribution to theory. If the primary audience is policymakers, the relevance, clarity, utility, and applicability of the findings will become most important.

For scholarly qualitative research, the published literature on the topic being studied focuses the contribution of a particular study. Scholarship involves an ongoing dialogue with colleagues about particular questions of interest within the scholarly community. The analytical focus, therefore, derives in part from what one has learned that will make a contribution to the literature in a field of inquiry. That literature will likely have contributed to the initial design of the study (implicitly or explicitly), so it is appropriate to revisit that literature to help focus the analysis.

Focus in evaluation research should derive from questions generated at the very beginning of the evaluation process, ideally through interactions with primary intended users of the findings. Too many times, evaluators go through painstaking care, even agony, in the process of working with primary stakeholders to clearly conceptualize and focus evaluation questions before data collection begins. But then, once the data are collected and analysis begins, they never look back over their notes to review and renew their clarity on the central issues in the evaluation. It is not enough to count on remembering what the evaluation questions were. The early negotiations around the purpose of an evaluation usually involve important nuances. To reestablish those nuances for the purpose of helping focus the analysis, it is important to review the notes on decisions that were made during the conceptual part of the evaluation. (This assumes, of course, that the evaluator has treated the conceptual phase of the evaluation as a field experience and has kept detailed notes about the negotiations that went on and the decisions that were made.)

In addition, it may be worth reopening discussions with intended evaluation users to make sure that the original focus of the evaluation remains relevant. This accomplishes two things. First, it allows the evaluator to make sure that the analysis will focus on needed information. Second, it prepares evaluation users for the results. At the point of beginning formal analysis, the evaluator will have a much better perspective on what kinds of questions can be answered with the data that have been collected. It pays to check out which questions should take priority in the final report and to suggest new possibilities that may have emerged during fieldwork.

Summative evaluations will be judged by the extent to which they contribute to making decisions about a program or intervention, usually decisions about overall effectiveness, continuation, expansion, and/or replication in other sites. A full report presenting data, interpretations, and recommendations is required. In contrast, formative evaluations, conducted for program improvement, may not even generate a written report. Findings may be reported primarily orally. Summary observations may be listed in outline form, or an executive summary may be written, but the timelines for formative feedback and the high costs of formal report writing may make a full, written report impractical. Staff and funders often want the insights of an experienced outsider who can interview program participants effectively, observe what goes on in the program, and provide helpful feedback. The methods are qualitative, the purpose is practical, and the analysis is done throughout fieldwork; no written report is expected beyond a final outline of observations and implications. Academic theory takes second place to understanding the program’s theory of action as actually practiced and implemented. In addition, formative feedback to program staff may be ongoing rather than simply at the end of the study. However, in some situations, funders may request a carefully documented, fully developed, and formally written formative report. The nature of formative reporting, then, is dictated by user needs rather than scholarly norms. For qualitative evaluators, a primary purpose of inquiry, analysis, and interaction around findings is to “foster learning”; a qualitative evaluator “serves as an educator, helping program staff and participants understand the evaluation process and ways in which they can use that process for their own learning” (Goodyear et al., in press).

Action research reporting also varies a great deal. In some action research, the process is the product, so no report will be produced for outside consumption. On the other hand, some action research efforts are undertaken to test organizational or community development theory, and therefore, they require fairly scholarly reports and publications. Action research undertaken by a group of people to solve a specific problem may involve the group sharing the analysis process to generate a mutually understood and acceptable solution, with no permanent, written report of findings.

Students writing dissertations will typically be expected to follow very formal and explicit analytical procedures to produce a scholarly monograph with careful attention to methodological rigor. Graduate students will be expected to report in detail on all aspects of methodology, usually in a separate chapter, including a thorough discussion of analytical procedures, problems, and limitations.

The point here is that the process, duration, and procedures of analysis will vary depending on the study’s purpose and audience. Likewise, the reporting format will vary. First and foremost, then, analysis depends on clarity about purpose (as do all other aspects of the study). Knowing what kind (or kinds) of findings are needed and how those results will be reported constitutes the foundation for analysis.

Design Frames Analysis

Purpose drives analysis through purposeful sampling. Design reflects purpose and therefore frames analysis.

You don’t wait until you’ve collected data to figure out your analysis approach. Design decisions (Chapter 5) anticipate what kind of analysis will be done. In particular, the purposeful sampling strategy you’ve followed is based on what kind of results you want to produce. The data you have to analyze are based on your design. What you have sampled determines what you will address in analysis. Exhibit 8.2 shows the connection between purposeful sampling strategy and analysis approach. The purposeful sampling strategies are taken from Exhibit 5.8 in Chapter 5 (pp. 266–272).

Take Guidance and Inspiration From Examples and Exemplars

Since we learn from examples, it pays to carefully select good examples to learn from. —Halcolm

Reexamining classic works in your field can be a source of inspiration. They are classics for a reason. Keep those exemplars nearby to reinvigorate and inspire you when the drudgery of analysis sets in or doubts about what you’re finding emerge. Those who wrote the classics experienced analysis fatigue and doubts as well. They persevered. So will you.

The first chapter presented several examples of important qualitative studies from different fields and disciplines:

• Patterns in women’s ways of knowing (Belenky et al., 1986)

• Eight characteristics of organizational excellence (Peters & Waterman, 1982)

• Seven habits of highly effective people (Covey, 2013)

• Case studies and cross-case analysis illuminating why battered women kill (Browne, 1987)

• Three primary processes that contribute to the development of an interpersonal relationship (Moustakas, 1995)

• Case examples illustrating the diversity of experiences and outcomes in an adult literacy program (Patton & Stockdill, 1987)

• Teachers’ reactions to an oppressive school accountability system (Perrone & Patton, 1976)

EXHIBIT 8.2 Connecting Design and Analysis: Purposeful Sampling and Purpose-Driven Analysis

NOTE: The first and second columns correspond to purposeful sampling strategies presented in Exhibit 5.8, pp. 266–272.

The methods section of a qualitative report can use this same connecting framework (Exhibit 8.2), showing how design informs analysis. An excellent example of explicitly connecting design and analysis in this way is Kaczynski, Salmona, and Smith (2014).

Reviewing these examples of qualitative findings from the first chapter will ground this discussion of analytical processes in samples of the real fruit of qualitative inquiry. And this chapter will add many more examples.

Chapter 3 presented theoretical orientations associated with qualitative inquiry (ethnography, phenomenology, constructivism, etc.). These theoretical frameworks have implications for analysis in that the fundamental premises articulated in a theoretical framework or philosophy are meant to inform how one makes sense of the world. If you have positioned your inquiry within one of those traditions, exemplars will help immerse you in the way in which that tradition guides analysis. Later in this chapter, I’ll examine in more depth two of the major theory-oriented analytical approaches, phenomenology and grounded theory, as examples of how theory informs analysis.

Using Qualitative Analysis Software

Computers and software are tools that assist analysis. Software doesn’t really analyze qualitative data. Qualitative software programs facilitate data storage, coding, retrieval, comparing, and linking—but human beings do the analysis. That said, one reviewer of this book added the following elaboration of how software analysis facilitates engagement with the data.

It is correct to emphasize that Qualitative Data Analysis Software (QDAS) is just a tool which the researcher as instrument must remain in control of. However, the tool is both a data management tool and a qualitative analysis tool. When working with QDAS the researcher is building relationships [with the data] which are a process that is more than just content analysis. The tool helps the researcher build connections which promote further development of complex insights. This ongoing process of meaning construction is analysis. It should be also noted that QDAS can be used with any number of theoretical approaches.

Software has eased significantly the old drudgery of manually locating a particular coded paragraph. Analysis programs speed up the processes of searching for certain words, phrases, and themes; labeling interview passages and field notes for easy retrieval and comparative analysis; locating coded themes; grouping data together in categories; and comparing passages in transcripts or incidents from field notes. But the qualitative analyst doing content analysis must still decide what things go together to form a pattern, what constitutes a theme, what to name it, and what meanings to extract from case studies. The human being, not the software, must decide how to frame a case study, how much and what to include, and how to tell the story. Still, software can play a useful role in managing the volume of qualitative data to facilitate analysis, just as quantitative software does.

Quantitative programs revolutionized that research by making it possible to crunch numbers, more accurately, more quickly, and in more ways. . . . Much of the tedious, boring, mistake-prone data manipulation has been removed. This makes it possible to spend more time investigating the meaning of their data.

In a similar way, QDA [qualitative data analysis] programs improve our work by removing drudgery in managing qualitative data. Copying, highlighting, cross-referencing, cutting and pasting transcripts and field notes, covering floors with index cards, making multiple copies, sorting and resorting card piles, and finding misplaced cards have never been the highlights of qualitative research. It makes at least as much sense for us to use qualitative programs for tedious tasks as it does for those people down the hall to stop hand-calculating gammas. (Durkin, 1997, p. 93)
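
The mechanical step Durkin describes is easy to picture in miniature. The sketch below shows the kind of search-and-retrieve operation that qualitative software automates, assuming transcripts stored as plain-text files with blank lines between paragraphs; it illustrates the general idea, not any particular package’s features.

    from pathlib import Path

    def retrieve_passages(transcript_dir, term):
        """Collect every paragraph containing a term, tagged by source file.
        This is the retrieval step software speeds up; deciding what the
        passages mean remains the analyst's job."""
        hits = []
        for f in sorted(Path(transcript_dir).glob("*.txt")):
            paragraphs = f.read_text(encoding="utf-8").split("\n\n")
            for i, para in enumerate(paragraphs):
                if term.lower() in para.lower():
                    hits.append((f.name, i, para.strip()))
        return hits

    for source, para_num, text in retrieve_passages("transcripts", "leadership"):
        print(f"{source}, paragraph {para_num}: {text[:80]}...")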

The analysis of qualitative data involves creativity, intellectual discipline, analytical rigor, and a great deal of hard work. Computer programs can facilitate the work of analysis, but they can’t provide the creativity and intelligence that make each qualitative analysis unique. Moreover, since new software is being constantly developed and upgraded, this book can do no more than provide some general guidance. Most of this chapter will focus on the human thinking processes involved in analysis rather than the mechanical data management challenges that computers help solve. Reviews of computer software in relation to various theoretical and practical issues in qualitative analysis can help you decide what software fits your needs.

What began as distinct software approaches has become more standardized as the various software packages have converged to offer similar functions, though sometimes with different names for the same functions. They all facilitate marking text, building codebooks, indexing, categorizing, creating memos, and displaying multiple text entries side by side. Import and export capabilities vary. Some support teamwork and multiple users more than others. Graphics and matrix capabilities vary, but they are becoming increasingly sophisticated. All take time to learn to use effectively. The greater the volume of data to be analyzed, the more helpful these software programs are. Moreover, knowing which software program you will use before data collection will help you collect and enter data in the way that works best for that particular program.
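
Although the packages differ in interface, the objects they manage are conceptually simple. As a sketch of those underlying structures only, and not of any product’s internal design, the common functions named above amount to maintaining records like these:

    from dataclasses import dataclass, field

    @dataclass
    class CodedSegment:
        source: str  # which interview or field-note file the passage comes from
        start: int   # character offset where the marked passage begins
        end: int     # character offset where the marked passage ends
        text: str    # the marked passage itself

    @dataclass
    class Code:
        name: str        # e.g., "transition"
        definition: str  # the decision rule for applying the code
        segments: list = field(default_factory=list)
        memos: list = field(default_factory=list)

    transition = Code("transition", "Participant describes a major life change.")
    transition.segments.append(CodedSegment("interview_07.txt", 1042, 1310,
                                            "I'd just left my job when the award came..."))
    transition.memos.append("Consider splitting into career vs. personal transitions.")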

SIDEBAR

QUALITATIVE ANALYSIS TECHNOLOGY REVOLUTION: PAST AND FUTURE

In the early 1980s, as qualitative researchers began to grapple with the promise and challenges of computers, a handful of innovative researchers brought forth the first generation of what would come to be known as CAQDAS (Computer Assisted Qualitative Data Analysis Software), or QDAS (Qualitative Data Analysis Software). . . . These stand-alone software packages were developed, initially, to bring the power of computing to the often labor-intensive work of qualitative research. While limited in scope at the beginning to text retrieval tasks, for instance, these tools quickly expanded to become comprehensive all-in-one packages. . . .

Close to 30 years later, QDAS packages are comprehensive, feature-laden tools of immense value to many in the qualitative research world. However, with the advent of the Internet and the emergence of web-based tools known as Web 2.0, QDAS is now challenged on many fronts as researchers seek out easier-to-learn, more widely available and less expensive, increasingly multimodal, visually attractive, and more socially connected technologies. . . .

Truly, qualitative research and technology is in the midst of a revolution.

QDAS 2.0 offers spectacular possibilities to qualitative researchers. . . . What is emerging: Web 2.0 tools with various capacities. . . . As we move more deeply into the digital age, their use, which was once a private choice, will become a necessity. (Davidson & di Gregorio, 2011, pp. 627, 639)

Qualitative discussion groups on the Internet regularly discuss, rate, compare, and debate the strengths and weaknesses of different software programs. While preferences vary, these discussions usually end with the consensus that any of the major programs will satisfy the needs of most qualitative researchers. Increasingly, distinctions depend on “feel,” “style,” and “ease of use”—matters of individual taste—more than differences in function. Still, differences exist, and new developments can be expected to solve existing limitations. Exhibit 8.3 lists resources for comparing and using qualitative software programs.

In considering whether to use software to assist in analysis, keep in mind that this is partly a matter of individual style, comfort with computers, amount of data to be analyzed, and personal preference. Computer analysis is not necessary and can interfere with the analytic process for those who aren’t comfortable spending long hours in front of a screen. Some self-described “concrete” types like to get a physical feel for the data, which isn’t possible with a computer. Participants on a qualitative listserv posted these responses to a thread on software analysis:

• The best advice I ever received about coding was to read the data I collected over and over and over. The more I interacted with the data, the more patterns and categories began to “jump out” at me. I never even bothered to use the software program I installed on the computer because I found it much easier to code it by hand.

• I found that hand coding was easier and more productive than using a computer program. For me, actually seeing the data in concrete form was vital in recognizing emerging themes. I actually printed multiple copies of data and cut it into individual “chunks,” color coding as I went along, and actually physically manipulating the data by grouping chunks by apparent themes, filing in colored folders, and so on. This technique was especially useful when data seemed to fit more than one theme and facilitated merging my initial and later impressions as themes solidified. Messy, but vital for us concrete people.

So, though software analysis has become common and many swear by it because it can offer leaps in productivity for those adept at it, using software is not a requisite for qualitative analysis. Whether you do or do not use software, the real analytical work takes place in your head.

Being Reflective and Reflexive

Distinguishing signal from noise requires both scientific knowledge and self-knowledge. —Nate Silver (2012, p. 453)

The strategies, guidelines, and ideas for analysis offered in this book are meant to be suggestive and facilitating rather than confining or exhaustive. In actually doing analysis, you will have to adapt what is presented here to fit your specific situation and study. However analysis is done, analysts have an obligation to monitor and report their own analytical procedures and processes as fully and truthfully as possible. This means that qualitative analysis is a new stage of fieldwork in which analysts must observe their own processes even as they are doing the analysis. The final obligation of analysis is to analyze and report on the analytical process as part of the report of actual findings. The extent of such reporting will depend on the purpose of the study. Module 73, later in this chapter, will discuss reflexivity and voice in depth.

EXHIBIT 8.3 Examples of Resources for Computer-Assisted Qualitative Data Analysis Software Decisions, Training, and Technical Assistance

Reviews of qualitative analysis software and web-based programs in relation to significant theoretical and practical issues in qualitative analysis can help you decide what software, if any, fits your needs. Here are some examples of resources that provide information and training.

• The CAQDAS (Computer-Assisted Qualitative Data Analysis Software) Networking Project: The project provides practical support, training, and information in the use of a range of software programs designed to assist qualitative data analysis; platforms for debate concerning the methodological and epistemological issues arising from the use of such software packages; and research into methodological applications of CAQDAS (CAQDAS Networking Project, 2014).

• International Institute for Qualitative Methodology, University of Alberta, Canada: The institute facilitates the development of qualitative research methods across a wide variety of academic disciplines, offering training and networking opportunities through annual conferences and online workshops on qualitative software analysis programs (http://www.iiqm.ualberta.ca/AboutUs.aspx).

• Mobile and Cloud Qualitative Research Apps, The Qualitative Report: http://www.nova.edu/ssss/QR/apps.html

• Qualitative Research, Software & Support Services, University of Massachusetts, Amherst: http://www.umass.edu/qdap/

• Coding Analysis Toolkit (CAT), a free service of the Qualitative Data Analysis Program (QDAP) hosted by the University Center for Social and Urban Research at the University of Pittsburgh: http://cat.ucsur.pitt.edu/

• CAQDAS list of options and links: (a) Open source and free, (b) proprietary, and (c) web based (http://en.wikipedia.org/wiki/Computer-assisted_qualitative_data_analysis_software)

• Learning qualitative data analysis on the web: Comparative reviews of software (http://onlineqda.hud.ac.uk/Intro_CAQDAS/reviews-of-sw.php)

• Make inquiries to the qualitative research listserv, QUALRS-L: [email protected]

SIDEBAR

OBSERVATIONS AND ADVICE FROM A QUALITATIVE SOFTWARE CONSULTANT

Asher E. Beckwitt has built a consulting business advising graduate students and novice researchers on how to engage in qualitative research using software for analysis (www.qualitativeresearch.org). I asked her to share her experiences as a consultant.

Question: What are the most common challenges you encounter in using qualitative methods with clients and in advising graduate students on their qualitative dissertations?

Answer: The most common challenges are as follows:

1. Students do not understand the differences between qualitative and quantitative methods.

2. They do not understand that there are different types of qualitative analysis (e.g., grounded theory, phenomenology, narrative analysis, etc.).

3. They do not understand that there are different methodologists’ approaches within these areas (e.g., Glaser and Strauss approach grounded theory differently than Strauss and Corbin).

4. They do not understand how to code and analyze their data according to their chosen approach.

Question: You work a lot with qualitative software. What do you tell clients and students that qualitative software does well—and doesn’t do?

Answer: Software is an excellent tool for organizing and coding information. It allows you to store all of the collected data and the codes in one place, as well as code multiple sources (papers/interviews/focus group transcripts/field notes). The organizational capacity of software permits you to code in more detail (e.g., you may have hundreds of codes).

Question: What are the most common misunderstandings about qualitative software?

Answer: Most people assume software codes the data for you. This is incorrect, because the software only stores the information. The researcher is still responsible for coding the text and making decisions about what (and how) to code the data. People also assume software is a method. Software is not a method; it is a software program. Clients and students also erroneously assume using software will make their project more rigorous or valid. Software simply stores the information inputted by the researcher and provides a more efficient way to retrieve and query that information.

Question: What are the greatest advantages of qualitative software?

Answer: It stores the information in one place and allows you to code in more detail.

Question: Advice about using software? Cautions?

Answer: Like with any software, it is important to save your work often.

Question: Your wisdom about qualitative software?

Answer: Most of the people I encounter think the software will do the work for them. As mentioned, software does not code the data for you. The researchers must understand the type of qualitative method and methodology they are using and understand how to implement and translate this approach to code their data in software.


MODULE

66 Thick Description and Case Studies: The Bedrock of Qualitative Analysis

Bedrock is a solid, firm, strong, and stable foundation.

We need a bedrock of story and legend in order to live our lives coherently.

—Alan Moore, English writer and graphic comic artist

Thick, rich description provides the foundation for qualitative analysis and reporting. Good description takes the reader into the setting being described. In his classic Street Corner Society, William Foote Whyte (1943) took us to the “slum” neighborhood where he did his fieldwork and introduced us to the characters there, as did Elliot Liebow in Tally’s Corner (1967), a description of the lives of unemployed black men in Washington, D.C., during the 1960s. Constance Curry’s (1995) oral history of school integration in Drew, Mississippi, in the 1960s tells the story of an African American mother, Mae Bertha Carter, and her seven children as they faced day-to-day and night-to-night threats and terror from resistant, angry whites. Through in-depth case study descriptions, Angela Browne (1987) helps us experience and understand the isolation and fear of a battered woman whose life is controlled by a rage-filled, violent man. Through detailed description and rich quotations, Alan Peshkin (1986) showed readers The Total World of a Fundamentalist Christian School, as Erving Goffman (1961) had done earlier for other “total institutions,” closed worlds like prisons, army camps, boarding schools, nursing homes, and mental hospitals. Howard Becker (1953, 1985) described how one learns to become a marijuana user in such detail that you almost get the scent of the smoke from his writing.

SIDEBAR

ANALYZING AND REPORTING HOW PROGRAM DESCRIPTIONS VARY BY PERSPECTIVE

In describing and evaluating different models of youth service programs, Roholt, Hildreth, and Baizerman (2009) began by capturing and reporting different perspectives: (a) official programmatic descriptions, (b) youth programmatic descriptions, and (c) adult descriptions of the programs.

Evaluation is necessary for youth civic engagement (YCE) programs if they are to receive funding and if they want to improve their work. . . . We sought to understand the program from the points of view and experiences of the multiple participants. . . . We wanted to know what they did, how they made sense of it, and what consequences this had for them and others. We did this by asking young people, teachers, youth workers, volunteers, coordinators, principals, parents, and other non-involved young people to teach us as much as possible about the program and about their participation experiences.

Five questions guided the evaluation:

1. What does this project say it is about?

2. How is it organized and carried out?

3. What are young people doing in the program?

4. What meaning do they give to their work?

5. What consequences does this work have for them, others, the larger community?

What was different about our study was that we worked as if we did not understand what anyone was telling us. For example, when a young person said the word “citizenship,” we assumed that we did not understand what he or she meant. This strategy opened the door to a deeper interrogation of the insider’s experience and brought us to the many ways these projects are experienced and understood by different types of participants—young people, adult coaches, school principals, community leaders, teachers, group leaders, and by different individuals. This gave us a look at what was meaningful to them, as well as to what was done in the program and why. (Roholt et al., 2009, pp. 72, 73, 75)

The detailed descriptions were used to compare different program models and the different perspectives on those models of people in diverse roles and in varying relationships with the programs. This comparative analysis was possible because the descriptions were “thick” and rich. The bedrock of the analysis was description from different perspectives.

These classic qualitative studies share the capacity to open up a world to the reader through rich, detailed, and concrete descriptions of people and places—“thick description” (Denzin, 1989c; Geertz, 1973)—in such a way that we can understand the phenomenon studied and draw our own interpretations about meanings and significance.

Description forms the bedrock of all qualitative reporting, whether for scholarly inquiry, as in the examples above, or for program evaluation. For evaluation studies, basic descriptive questions include the following: How do people get into the program? What is the program setting like? What are the primary activities of the program? What happens to people in the program? What are the effects of the program on participants? Thick evaluation descriptions take those who need to use the evaluation findings into the experience and outcomes of the program.

Description Before Interpretation

A basic tenet of research is careful separation of description from interpretation. Interpretation involves explaining the findings, answering “why” questions, attaching significance to particular results, and putting patterns into an analytic framework. It is tempting to rush into the creative work of interpreting the data before doing the detailed, hard work of putting together coherent answers to major descriptive questions. But description comes first.

Alternative Ways of Organizing and Reporting Descriptions

Several options exist for organizing and reporting descriptive findings. Exhibit 8.4 presents various options depending on whether the primary organizing motif centers on telling the story of what occurred, presenting case studies, or illuminating an analytical framework.

These are not mutually exclusive or exhaustive ways of organizing and reporting qualitative data. Different parts of a report may use different reporting approaches. The point is that one must have some initial framework for organizing and managing the voluminous data collected during fieldwork.

Where variations in the experiences of individuals are the primary focus of the study, it is appropriate to begin by writing a case study using all the data for each person. Only then are cross-case analysis and comparative analysis done. For example, if one has studied 10 juvenile delinquents, the analysis would begin by doing a case description of each juvenile before doing cross-case analysis. On the other hand, if the focus is on a criminal justice program serving juveniles, the analysis might begin with description of variations in answers to common questions, for example, what were the patterns of major program experiences, what did they like, what did they dislike, how did they think they had changed, and so forth.


Likewise, in analyzing interviews, the analyst has the option of beginning with case analysis or cross-case analysis. Beginning with case analysis means writing a case study for each person interviewed or each unit studied (e.g., each critical event, each group, or each program location). Beginning with cross-case analysis means grouping together answers from different people to common questions or analyzing different perspectives on central issues. If a standardized open-ended interview has been used, it is fairly easy to do cross-case or cross-interview analysis for each question in the interview. With an interview guide approach, answers from different people can be grouped by topics from the guide, but the relevant data won’t be found in the same place in each interview. An interview guide, if it has been carefully conceived, actually constitutes a descriptive analytical framework for analysis.
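
For readers who organize transcripts with simple scripts, a minimal Python sketch can illustrate the two starting points, assuming interview data reduced to (respondent, question, answer) records; the respondents, questions, and answers below are invented for illustration, not drawn from an actual study.

from collections import defaultdict

# Hypothetical interview records: (respondent, question, answer).
responses = [
    ("Respondent A", "What did you like about the program?", "The mentoring."),
    ("Respondent B", "What did you like about the program?", "Meeting peers."),
    ("Respondent A", "How do you think you have changed?", "I speak up more."),
    ("Respondent B", "How do you think you have changed?", "I plan ahead now."),
]

# Case analysis: group every answer by respondent (one case per person).
by_case = defaultdict(list)
for person, question, answer in responses:
    by_case[person].append((question, answer))

# Cross-case analysis: group answers from different people by question.
by_question = defaultdict(list)
for person, question, answer in responses:
    by_question[question].append((person, answer))

for question, answers in by_question.items():
    print(question)
    for person, answer in answers:
        print(f"  {person}: {answer}")

Grouping by person reproduces the case-first strategy; grouping by question reproduces the cross-case strategy described above.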

A qualitative study will often include both kinds of analysis—individual cases and cross-case analyses—but one has to begin somewhere. Trying to do both individual case studies and cross-case analysis at the same time will likely lead to confusion.

Case Studies

Case study is not a methodological choice but a choice of what is to be studied. . . . We could study it analytically or holistically, entirely by repeated measures or hermeneutically, organically or culturally, and by mixed methods—but we concentrate, at least for the time being, on the case.

—Robert E. Stake, "Case Studies" (2000, p. 435)

Case analysis involves organizing the data by specific cases for in-depth study and comparison. Well-constructed case studies are holistic and context sensitive, two of the primary strategic themes of qualitative inquiry discussed in Chapter 2. Cases can be individuals, groups, neighborhoods, programs, organizations, cultures, regions, or nation-states. “In an ethnographic case study, there is exactly one unit of analysis—the community or village or tribe” (Bernard, 1994, pp. 35–36). Cases can also be critical incidents, stages in the life of a person or of a program, or anything that can be defined as a “specific, unique, bounded system” (Stake, 2000, p. 436). Cases are units of analysis. What constitutes a case, or unit of analysis, is usually determined during the design stage and becomes the basis for purposeful sampling in qualitative inquiry (see Chapter 5 for a discussion of case study designs and purposeful sampling). Sometimes, however, new units of analysis, or cases, emerge during fieldwork or from the analysis after data collection. For example, one might have sampled schools as the unit of analysis, expecting to do case studies of three schools, and then, reviewing the fieldwork, one might decide that classrooms are a more meaningful unit of analysis and shift to case studies of classrooms instead of schools, or add case studies of particular teachers or students. Contrariwise, one could begin by sampling classrooms and end up doing case studies on schools. This illustrates the critical importance of thinking carefully about the question “What is a case?” (Ragin & Becker, 1992).

EXHIBIT 8.4 Options for Organizing and Reporting Qualitative Data


The case study approach to qualitative analysis constitutes a specific way of collecting, organizing, and analyzing data; in that sense, it represents an analysis process. The purpose is to gather comprehensive, systematic, and in-depth information about each case of interest. The analysis process results in a product: a case study. Thus, the term case study can refer to either the process of analysis or the product of analysis, or to both.

Analyzing patterns and identifying themes across multiple case studies has become a significant way of conducting qualitative analysis (Stake, 2006). Case studies may be layered or nested. For example, in evaluation, a single program may be a case study. However, within that single program case (n = 1), one may do case studies of several participants. In such an approach, the analysis would begin with the individual case studies; then, the cross-case pattern analysis of the individual cases might be part of the data for the program case study. Likewise, if a national or state program consists of several project sites, the analysis may consist of three layers of case studies: (1) individual participant case studies at project sites combined to make up project site case studies, (2) project site case studies combined to make up state program case studies, and (3) state programs combined to make up a national program case study. Exhibit 8.5 shows this layered case study approach.

This kind of layering recognizes that you can always build larger case units out of smaller ones—that is, you can always combine studies of individuals into studies of a program—but if you only have program-level data, you can’t disaggregate it to construct individual cases.

Case Study Rule

Remember this rule: No matter what you are studying, always collect data on the lowest level unit of analysis possible.

Collect data about individuals, for example, rather than about households. If you are interested in issues of production and consumption (things that make sense at the household level), you can always package your data about individuals into data about households during analysis. . . . You can always aggregate data collected on individuals, but you can never disaggregate data collected on groups. (Bernard, 1994, p. 37)
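
Bernard's rule can be made concrete with a small sketch in Python, assuming hypothetical individual-level records tagged with a household identifier; the names and numbers are invented for illustration.

from collections import defaultdict

# Hypothetical individual-level records; identifiers and values are invented.
individuals = [
    {"person": "P1", "household": "H1", "weekly_hours_worked": 40},
    {"person": "P2", "household": "H1", "weekly_hours_worked": 25},
    {"person": "P3", "household": "H2", "weekly_hours_worked": 30},
]

# Aggregating upward is always possible: individuals -> households.
household_totals = defaultdict(int)
for record in individuals:
    household_totals[record["household"]] += record["weekly_hours_worked"]
print(dict(household_totals))  # {'H1': 65, 'H2': 30}

# The reverse is impossible: knowing only that household H1 totals 65
# hours says nothing about how P1 and P2 each contributed.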

Though a scholarly or evaluation project may consist of several cases and include cross-case comparisons, the analyst’s first and foremost responsibility consists of doing justice to each individual case. All else depends on that.

Ultimately, we may be interested in a general phenomenon or a population of cases more than in the individual case. And we cannot understand this case without knowing about other cases. But while we are studying it, our meager resources are concentrated on trying to understand its complexities. For the while, we probably will not study comparison cases. We may simultaneously carry on more than one case study, but each case study is a concentrated inquiry into a single case. (Stake, 2000, p. 436)

Case data consist of all the information one has about each case: (a) interview data, (b) observations, (c) the documentary data (e.g., program records or files, newspaper clippings), (d) impressions and statements of others about the case, and (e) contextual information—in effect, all the information one has accumulated about each particular case goes into that case study. These diverse sources make up the raw data for case analysis and can amount to a large accumulation of material. For individual people, case data can include (a) interviews with the person and those who know her or him, (b) clinical records and background and statistical information about the person, (c) a life history profile, (d) things the person has produced (diaries, photos, writings, paintings, etc.), and (e) personality or other test results (yes, quantitative data can be part of a qualitative case study). At the program level, case data can include (a) program documents, (b) statistical profiles, (c) program reports and proposals, (d) interviews with program participants and staff, (e) observations of the program, and (f) program histories.

EXHIBIT 8.5 Case Study: Layers of Possible Analysis


From Data to Case Study

Once the raw case data have been accumulated, the researcher may write a case record. The case record pulls together and organizes the voluminous case data into a comprehensive, primary resource package. The case record includes all the major information that will be used in doing the final case analysis and writing the case study. Information is edited, redundancies are sorted out, parts are fitted together, and the case record is organized for ready access either chronologically or topically. The case record must be complete but manageable; it should include all the information needed for subsequent analysis, but it is organized at a level beyond that of the raw case data.

A case record should make no concessions to the reader in terms of interest or communication. It is a condensation of the case data aspiring to the condition that no interpreter requires to appeal behind it to the raw data to sustain an interpretation. Of course, this criterion cannot be fully met: some case records will be better than others. The case record of a school attempts a portrayal through the organization of data alone, and a portrayal without theoretical aspirations. (Stenhouse, 1977, p. 19)

The case record is used to construct a case study appropriate for sharing with an intended audience, for example, scholars, policymakers, program decision makers, or practitioners. The tone, length, form, structure, and format of the final case presentation depend on audience and study purpose. The final case study is what will be communicated in a publication or report. The full report may include several case studies that are then compared and contrasted, but the basic unit of analysis of such a comparative study remains the distinct cases, and the credibility of the overall findings will depend on the quality of the individual case studies. Exhibit 8.6 shows this sequence of moving from raw case data to the written case study. The second step—converting the raw data to a case record before writing the actual case study—is optional. A case record is only constructed when a great deal of unedited raw data from interviews, observations, and documents must be edited and organized before writing the final case study. In many studies, the analyst will work directly and selectively from raw data to write the final case study.

The case study should take the reader into the case situation and experience—a person’s life, a group’s life, or a program’s life. Each case study in a report stands alone, allowing the reader to understand the case as a unique, holistic entity. At a later point in analysis, it is possible to compare and contrast cases, but initially, each case must be represented and understood as an idiosyncratic manifestation of the phenomenon of interest. A case study should be sufficiently detailed and comprehensive to illuminate the focus of inquiry without becoming boring and laden with trivia. A skillfully crafted case feels like a fine weaving. And that, of course, is the trick. How to do the weaving? How to tell the story? How to decide what stays in the final case presentation and what gets deleted along the way? Elmore Leonard (2001), the author of Glitz and other popular detective thrillers, was once asked how he managed to keep the action in his books moving so quickly. He said, “I leave out the parts that people skip” (p. 7). Not bad advice for writing an engaging case study.

EXHIBIT 8.6 The Process of Constructing Case Studies

Step 1. Assemble the raw case data

These data consist of all the information collected about the person, program, organization, or setting for which a case study is to be written.

Step 2. (optional) Construct a case record

This is a condensation of the raw case data, organized, classified, and edited into a manageable and accessible file.

Step 3. Write a final case study narrative

The case study is a readable, descriptive picture of or story about a person, program, organization, or other unit of analysis, making accessible to the reader all the information necessary to understand the case in all its uniqueness. The case story can be told chronologically or presented thematically (sometimes both).

The case study offers a holistic portrayal, presented with any context necessary for understanding the case.

In doing biographical or life history case studies, Denzin (1989a) has found particular value in identifying what he calls “epiphanies”—“existentially problematic moments in the lives of individuals” (p. 129).

SIDEBAR


HOLISTIC CASE STUDIES

Neuroscientists have long used case studies of victims of traumatic brain injuries to understand how the brain works.

Depending on what part of the brain suffered, strange things might happen. Parents couldn’t recognize their children. Normal people became pathological liars. Some people lost the ability to speak—but could sing just fine. These incidents have become classic case studies, fodder for innumerable textbooks and bull sessions around the lab. The names of these patients—H. M., Tan, Phineas Gage—are deeply woven into the lore of neuroscience. (Kean, 2014, p. SR8)

Science journalist Sam Kean (2014) has reflected on such outlier case studies and concluded that “in the quest for scientific understanding, we end up magnifying patients’ deficits until deficits are all we see. The actual person fades away” (p. SR8). He has concluded that more holistic case studies are needed and are even critical to a fuller understanding.

When we read the full stories of people’s lives . . . , we have to put ourselves into the minds of the characters, even if those minds are damaged. Only then can we see that they want the same things, and endure the same disappointments, as the rest of us. They feel the same joys, and suffer the same bewilderment that life got away from them. Like an optical illusion, we can flip our focus. Tales about bizarre deficits become tales of resiliency and courage. (p. SR8)

It is possible to identify four major structures, or types of existentially problematic moments, or epiphanies, in the lives of individuals. First, there are those moments that are major and touch every fabric of a person’s life. Their effects are immediate and long term. Second, there are those epiphanies that represent eruptions, or reactions, to events that have been going on for a long period of time. Third are those events that are minor yet symbolically representative of major problematic moments in a relationship. Fourth, and finally, are those episodes whose effects are immediate, but their meanings are only given later, in retrospection, and in the reliving of the event. I give the following names to these four structures of problematic experience: (1) the major epiphany, (2) the cumulative epiphany, (3) the illuminative, minor epiphany, and (4) the relived epiphany. (Of course, any epiphany can be relived and given new retrospective meaning.) These four types may, of course, build upon one another. A given event may, at different phases in a person’s or relationship’s life, be first, major, then minor, and then later relived. A cumulative epiphany will, of course, erupt into a major event in a person’s life. (p.129)

Programs, organizations, and communities have parallel types of epiphanies, though they’re usually called critical incidents, crises, transitions, or organizational lessons learned. For a classic example of an organizational development case study in the business school tradition, see the analysis of the Nut Island sewage treatment plant in Quincy, Massachusetts—the complex story of how an outstanding team, highly competent, deeply committed to excellence, focused on the organizational mission, and working hard still ended up in a “catastrophic failure” (Levy, 2001).

Studying such examples is one of the best ways to learn how to write case studies. The section titled “Thick Description,” earlier in this chapter, cited a number of case studies that have become classics in the genre. Chapter 1 presented case vignettes of individuals in an adult literacy program. An example of a full individual case study is presented as Exhibit 8.33, at the end of this chapter (pp. 638–642). Originally prepared for an evaluation report that included several participant case studies, it tells the story of one person’s experiences in a career education program. This case represents an exemplar of how multiple sources of information can be brought together to offer a comprehensive picture of a person’s experience, in this instance, a student’s changing involvement in the program and changing attitudes and behaviors over time. The case data for each student in the evaluation study included the following:

1. Observations of selected students at employer sites three times during the year

2. Interviews three times per year with the students’ employer-instructors at the time of observation

3. Parent interviews once a year

4. In-depth student interviews four times a year

5. Informal discussions with program staff

6. A review of student projects and other documents

7. Twenty-three records from the files of each student (including employer evaluations of students, student products, test scores, and staff progress evaluations of students)

Initial interview guide questions provided a framework for analyzing and reviewing each source. Information from all of these sources was integrated to produce a highly readable narrative that could be used by decision makers and funders to better understand what it was like to be in the program (Owens, Haenn, & Fehrenbacher, 1976). The evaluation staff of the Northwest Regional Educational Laboratory went to great pains to carefully validate the information in the case studies. Different sources of information were used to cross-validate the findings, patterns, and conclusions. Two evaluators reviewed the material in each case study to independently make judgments and interpretations about its content and meaning. In addition, an external evaluator reviewed the raw data to check for biases or unwarranted conclusions. Students were asked to read their own case studies and comment on the accuracy of fact and interpretation in the study. Finally, to guarantee the readability of the case studies, a newspaper journalist was employed to help organize and edit the final versions. Such a rigorous case study approach increases the confidence of readers that the cases are accurate and comprehensive. Both in its content and in the process by which it was constructed, the Northwest Lab case study presented at the end of this chapter (Exhibit 8.33) exemplifies how an individual case study can be prepared and presented.

How one compares and contrasts cases will depend on the purpose of the study and how the cases were sampled. As discussed in Chapter 5, critical cases, extreme cases, typical cases, and heterogeneous cases serve different purposes. Once case studies have been written, the analytic strategies described in the remainder of this chapter can be used to further analyze, compare, and interpret the cases to generate cross-case themes, patterns, and findings. Exhibit 8.7 summarizes the central points I’ve discussed for constructing case studies.

SIDEBAR

DIVERSE CASE STUDY EXEMPLARS

Case Studies in This Book

• The story of Henrietta Lacks. This in-depth case study tells the story of a poor African American tobacco farmer who grew up in the South. In 1951, when she died of cervical cancer, the cells from her tumor were taken for research, without her knowledge or permission, by an oncologist. The cells manifested unique characteristics: They could be cultured, sustained, reproduced, and distributed to other researchers, the first tissue cells discovered with these extraordinary characteristics. How this affected her family and the medical world shows how layers of case studies can be interwoven (Skloot, 2010). (See Chapter 2, Exhibit 2.7, pp. 78–80.)

• Story of Li. This case study presents highlights of a participant case study used to illuminate a Vietnamese woman’s experience in an employment training program; in addition to describing what a job placement meant to her, the case was constructed to illuminate hard-to-measure outcomes such as “understanding the American workplace culture” and “speaking up for oneself,” learnings that can be critical to long-term job success for an immigrant. (See Chapter 4, Exhibit 4.3, pp. 182–183.)

• Thmaris. A case study of a homeless youth and his journey to a more stable life, this case illuminates the challenges and long-term effects of dealing with childhood trauma and failed relationships. His experience in homeless shelters is central to the case study. (See Chapter 7, Exhibit 7.20, pp. 511–516.)


• Mike’s story. This case study tells the story of one person’s experiences in a career education program. This case represents an exemplar of how multiple sources of information can be brought together to offer a comprehensive picture of a person’s experience, in this instance, a student’s changing involvement in the program and changing attitudes and behaviors over time. (See Chapter 8, Exhibit 8.33, pp. 638–642.)

Examples of Excellent Published Case Studies

• Education case studies. Brizuela, Stewart, Carrillo, and Berger (2000), Stake, Bresler, and Mabry (1991), Perrone (1985), and Alkin, Daillak, and White (1979)

• Family case studies. Sussman and Gilgun (1996)

• International development cases. Wood et al. (2011), Salmen (1987), and Searle (1985)

• Government accountability case study. Joyce (2011)

• Case studies of effective antipoverty programs. Schorr (1988)

• Case studies of research influencing policy in developing countries. Carden (2009)

• Philanthropy case studies. Evaluation Roundtable (2014) and Sherwood (2005)

• Public health cases. White (2014)

• Business cases. Collins (2001a, 2009), Collins and Porras (2004), and Collins and Hansen (2011)

EXHIBIT 8.7 Guidelines for Constructing Case Studies

1. Focus first on capturing the uniqueness of each case. The qualitative analyst’s first and foremost responsibility consists of doing justice to each individual case. Don’t jump ahead to formal cross-case analysis until the individual cases are fully constructed.

2. Construct cases for smaller units of analysis first. You can always aggregate data collected on individuals into groups for analysis, but you can never disaggregate data collected only on groups to construct individual case studies.

3. Use multiple sources of data. A case study includes and integrates all the information one has about each case—interview data, observations, and documents.

4. Write the case to tell a core story. Structure the case with a beginning, middle, and end.

5. Make the case coherent for the reader. The case study should take the reader into the case situation and experience—a person’s life experience, a group’s cohesion, a program’s coherence as a program, or a community’s sense of community.

6. Balance detail with relevance. A case study should be sufficiently detailed and comprehensive to illuminate the focus of inquiry without becoming boring and laden with trivia.

7. Readability and coherence check. Have someone read the case and give you feedback about its coherence and readability, and any gaps or ambiguities that need attention.

8. Accuracy check. For individual case studies, have the person whose story you’ve written review the case for accuracy. For other units of analysis (programs, communities, organizations) have a key informant review the case.


MODULE

67 Qualitative Analysis Approaches: Identifying Patterns and Themes

The ability to use thematic analysis appears to involve a number of underlying abilities, or competencies. One competency can be called pattern recognition. It is the ability to see patterns in seemingly random information.

—Boyatzis (1998, p. 7)

This module will present the kinds of findings that result from qualitative analysis. I’ll examine what is meant by content analysis and distinguish patterns from themes. I’ll contrast inductive and deductive analytical approaches and introduce some specific analytical approaches like grounded theory and analytic induction. I’ll discuss sensitizing concepts as a focus for analysis and differentiate indigenous concepts and typologies from analyst-created concepts and typologies. Exhibit 8.10, at the end of this module (pp. 551–552), will summarize the 10 analytical approaches reviewed in this module.

The next module will go into detail about the actual coding and analytical procedures for making sense of qualitative data. I could have started with those procedural processes for analysis, but I think it’s helpful to understand first what kinds of findings can be generated from qualitative analysis before delving very deeply into the mechanics and operational processes. Thus, this module will provide examples of patterns, themes, indigenous concepts and typologies, and analyst-constructed concepts and typologies—the fruit of qualitative analysis, a metaphor that harks back to Chapter 1. The next module will explain how you harvest qualitative fruit once you know more about the variety of fruit that can be harvested.

Content Analysis

No consensus exists about the terminology to apply in differentiating varieties and processes of qualitative analysis. Content analysis sometimes refers to searching text for and counting recurring words or themes. For example, a speech by a politician might be analyzed to see what phrases or concepts predominate, or speeches of two politicians might be compared to see how many times and in what contexts they used a phrase like “global economy” or “family values.” More generally, content analysis usually refers to analyzing text (interview transcripts, diaries, or documents) rather than observation-based field notes. Even more generally, content analysis refers to any qualitative data reduction and sense-making effort that takes a volume of qualitative material and attempts to identify core consistencies and meanings. Case studies, for example, can be content analyzed.
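
In its narrowest sense, counting recurring words or phrases is straightforward to mechanize. The following minimal Python sketch, using an invented snippet in place of an actual speech transcript, shows the counting step only; it is an illustration of that mechanical step, not of interpretation.

import re
from collections import Counter

# Invented text standing in for a transcript of a political speech.
speech = ("We must defend family values in a global economy. "
          "A global economy rewards investment; family values reward commitment.")

# Count occurrences of the phrases of interest (case-insensitive).
phrases = ["global economy", "family values"]
counts = {p: len(re.findall(p, speech, flags=re.IGNORECASE)) for p in phrases}
print(counts)  # {'global economy': 2, 'family values': 2}

# Or count all words to see which terms predominate overall.
words = re.findall(r"[a-z']+", speech.lower())
print(Counter(words).most_common(5))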

Patterns Are the Basis for Themes

The core meanings found through content analysis are patterns and themes. The processes of searching for patterns and themes may be distinguished as pattern analysis and theme analysis, respectively. I’m asked frequently about the difference between a pattern and a theme. The term pattern refers to a descriptive finding, for example, “Almost all participants reported feeling fear when they rappelled down the cliff,” while a theme takes a more categorical or topical form, interpreting the meaning of the pattern: FEAR. Putting these terms together, a report on a wilderness education study might state,

The content analysis revealed a pattern of participants reporting being afraid when rappelling down cliffs and running river rapids; many also initially experienced the group process of sharing personal feelings as evoking some fear. Those patterns make dealing with fear a major theme of the wilderness education program experience.


Inductive and Deductive Qualitative Analyses

Qualitative deductive analysis: Determining the extent to which qualitative data in a particular study support existing general conceptualizations, explanations, results, and/or theories

Qualitative inductive analysis: Generating new concepts, explanations, results, and/or theories from the specific data of a qualitative study

Francis Bacon is known for his emphasis on induction, the use of direct observation to confirm ideas and the linking together of observed facts to form theories or explanations of how natural phenomena work. Bacon correctly never told us how to get ideas or how to accomplish the linkage of empirical facts. Those activities remain essentially humanistic—you think hard. (Bernard, 2000, p. 12)

SIDEBAR

HUMAN PATTERN RECOGNITION

Pattern detection is an evolutionary capacity developed in and passed on from our Stone Age ancestors.

Human beings do not have very many natural defenses. We are not all that fast, and we are not all that strong. We do not have claws or fangs or body armor. We cannot spit venom. We cannot camouflage ourselves. And we cannot fly. Instead, we survive by means of our wits. Our minds are quick. We are wired to detect patterns and respond to opportunities and threats without much hesitation. (Silver, 2012, p. 12)

Both the capacity and the drive to find patterns are much more developed in humans than in other animals, explains Tomaso Poggio, a neuroscientist who studies how human brains process information and make sense of the world. The problem is that these evolutionary instincts sometimes lead us to see patterns when there are none. People do that all the time, Poggio has found—“finding patterns in random noise.” Thus, unless we work actively to become aware of our biases and avoid overconfidence when identifying patterns, we can fail to accurately distinguish signal (pattern) from noise (random occurrences and relationships) (Silver, 2012, p. 12).

So beware of the allure and dangers of apophenia: identifying meaningful patterns in meaningless randomness.


Bacon (1561–1626) is recognized as one of the founders of scientific thinking, but he also has been awarded “the dubious honor of being the first martyr of empiricism.” Still pondering the universe at the age of 65, he got an idea one day while driving his carriage in the snow in a farming area north of London. It occurred to him that cold might delay the biological process of putrefaction, so he stopped, purchased a hen from a farmer, killed it on the spot, and stuffed it in the snow. His idea worked. The snow did delay the rotting process, but he subsequently contracted bronchitis and died a month later (Bernard, 2000, p. 12). As I noted in Chapter 6, fieldwork can be risky. Engaging in analysis, on the other hand, is seldom life threatening, though you do risk being disputed and sometimes ridiculed by those who arrive at contrary conclusions.

Inductive analysis involves discovering patterns, themes, and categories in one’s data. Findings emerge out of the data, through the analyst’s interactions with the data. In contrast, when engaging in deductive analysis, the data are analyzed according to an existing framework. Qualitative analysis is typically inductive in the early stages, especially when developing a codebook for content analysis or figuring out possible categories, patterns, and themes. This is often called “open coding” (Strauss & Corbin, 1998, p. 223), to emphasize the importance of being open to the data. “Grounded theory” (Glaser & Strauss, 1967) emphasizes becoming immersed in the data—being grounded—so that embedded meanings and relationships can emerge. The French would say of such an immersion process, Je m’enracine (“I root myself”). The analyst becomes implanted in the data. The resulting analysis grows out of that groundedness.
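
The interpretive work of open coding cannot be mechanized, but once a preliminary codebook exists, applying it to text segments can be partially automated. The following Python sketch uses an invented codebook and invented interview segments; keyword matching here is only a crude stand-in for the analyst's judgment about whether a code truly applies.

# A toy codebook mapping codes to indicator terms. In real open coding,
# the codes emerge from repeated reading of the data; these are invented.
codebook = {
    "FEAR": ["afraid", "scared", "fear"],
    "GROWTH": ["changed", "learned", "confidence"],
}

# Invented interview segments for illustration.
segments = [
    "I was so afraid on the cliff, but I learned I could do it.",
    "Sharing feelings in the group scared me at first.",
]

# Tag each segment with every code whose indicator terms appear in it.
for segment in segments:
    codes = [code for code, terms in codebook.items()
             if any(term in segment.lower() for term in terms)]
    print(codes, "->", segment)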

From Inductive to Deductive

Once patterns, themes, and/or categories have been established through inductive analysis, the final, confirmatory stage of qualitative analysis may be deductive in testing and affirming the authenticity and appropriateness of the inductive content analysis, including carefully examining deviant cases or data that don’t fit the categories developed. Generating theoretical propositions or formal hypotheses after inductively identifying categories is considered deductive analysis by grounded theorists Strauss and Corbin (1998): “Anytime that a researcher derives hypotheses from data, because it involves interpretation, we consider that to be a deductive process” (p. 22). Grounded theorizing, then, involves both inductive and deductive processes: “At the heart of theorizing lies the interplay of making inductions (deriving concepts, their properties, and dimensions from data) and deductions (hypothesizing about the relationships between concepts)” (Strauss & Corbin, 1998, p. 22).

SIDEBAR


INDUCTIVE PROGRAM THEORY DEVELOPMENT

Inductive [program theory] development involves observing the program in action and deriving the theories that are implicit in people’s actions when implementing the program. The theory in action may differ from the espoused theory: what people do is different from what they say they do, believe they are doing, or believe they should be doing according to policy or some other principle.

The program in action could be observed at the point of service delivery in the field that is closest to the clients of the program. It could include observation of the program in action, including through participant observation. Interviews can be conducted with staff about how they implement the program and about why they undertake some activities that may appear to be at variance with the program design or omit parts of the program design. Program participants can be interviewed about how they experience the program (or have experienced it), if data are gathered through exit interviews; and how they would like to experience it. . . .

Theory in action could also be identified from looking at how program managers have interpreted the program, as indicated by the types of practices they adopt. However, it is important to confirm that inferences drawn are correct. For example, what they consider to be important about the program and how they interpret the program’s intent might be inferred from their choice of particular performance indicators and how they use them. This inference would need to be confirmed with program managers, since the selection of indicators may have been imposed on management and staff as, for example, part of national nonprogram specific monitoring requirements. Or the indicators may have been selected simply because they were available and easy to measure and report, but not necessarily considered by staff to be meaningful.

—Funnell and Rogers (2011, pp. 111–112)

From Deduction to Induction: Analytic Induction

Analytic induction as a distinct qualitative analysis approach begins with an analyst’s deduced propositions or theory-derived hypotheses and “is a procedure for verifying theories and propositions based on qualitative data” (Taylor & Bogdan, 1984, p. 127). Sometimes, as with analytic induction, qualitative analysis is first deductive or quasi-deductive and then inductive, as when, for example, the analyst begins by examining the data in terms of theory-derived sensitizing concepts or applying a theoretical framework developed by someone else (e.g., testing Piaget’s developmental theory on case studies of children). After or alongside this deductive phase of analysis, the researcher strives to look at the data afresh for undiscovered patterns and emergent understandings (inductive analysis). I’ll discuss both grounded theory and analytic induction at greater length later in this chapter.

SIDEBAR

DEDUCTIVE ANALYSIS EXAMPLE: AGENCY RESISTANCE TO OUTCOME MEASUREMENT

Identifying factors that support evaluation use and overcoming resistance to evaluation have been two of the central concerns of the evaluation profession for 40 years (Alkin, 1975; Patton, 1978b). A great volume of research, much of it qualitative case studies of use and nonuse, points to the importance of high-quality stakeholder involvement to enhance use (Brandon & Fukunaga, 2014; Patton, 2008, 2012a). Strickhouser and Wright (2014) contributed to this arena of inquiry by interviewing the directors and staff of eight human service nonprofit agencies and their one common funder in a large southeastern metropolitan area. They found that agencies continue to resist, and in some cases sabotage, evaluation reporting requirements. They also found that, as shown in previous studies, program evaluators often find it difficult to conceptualize and evaluate outcomes. Tensions around outcome measurement make communication between agencies and their funders difficult and frustrating on both sides. This is an example of a primarily deductive qualitative analysis because the questions asked and the analysis conducted draw on issues and concepts that are already well established. The qualitative inquiry tests whether already identified factors continue to be manifest in a specific group not previously studied. The findings are confirmatory and illuminating, but they do not generate any new concepts or factors.

Because, as identified and discussed in Chapter 2, inductive analysis is one of the primary characteristics of qualitative inquiry, we’ll focus on strategies for thinking and working inductively. There are two distinct ways of analyzing qualitative data inductively. First, the analyst can identify, define, and elucidate the categories developed and articulated by the people studied to focus analysis. Second, the analyst may also become aware of categories or patterns for which the people studied did not have labels or terms, and the analyst develops terms to describe these inductively generated categories. Each of these approaches is described below.

Inductive Approaches: Indigenous and Analyst-Constructed Patterns and Themes

Indigenous Concepts and Practices

A good place to begin inductive analysis is to inventory and define key phrases, terms, and practices that are special to the people in the setting studied. What are the indigenous categories that the people interviewed have created to make sense of their world? What are the practices they engage in that can only be understood within their worldview? Anthropologists call this emic analysis and distinguish it from etic analysis, which refers to labels imposed by the researcher. (For more on this distinction and its origins, see Chapter 6, which discusses emic and etic perspectives in fieldwork.) “Identifying the categories and terms used by informants themselves is also called in vivo coding” (Bernard, 1998, p. 608).

Consider the practice among traditional Dani women of amputating a finger joint when a relative dies. The Dani people live in the lush Baliem Valley of Irian Jaya, Indonesia’s most remote province, in the western half of New Guinea. The joint is removed to honor and placate ancestral ghosts. Missionaries have fought against the practice as sinful, and the government has banned it as barbaric, but many traditional women still practice it.

Some women in Dani villages have only four stubs and a thumb on each hand. In tribute to her dead mother and brothers, Soroba, 38, has had the tops of six of her fingers amputated. “The first time was the worst,” she said. “The pain was so bad, I thought I would die. But it’s worth it to honor my family.” (Sims, 2001, p. 6)

Analyzing such an indigenous practice begins with understanding it from the perspective of its practitioners, within the indigenous context, in the words of the local people, in their language, within their worldview.

According to this view, cultural behavior should always be studied and categorized in terms of the inside view—the actors’ definition—of human events. That is, the units of conceptualization in anthropological theories should be “discovered” by analyzing the cognitive processes of the people studied, rather than “imposed” from cross-cultural (hence, ethnocentric) classifications of behavior. (Pelto & Pelto, 1978, p. 54)

Anthropologists, working cross-culturally, have long emphasized the importance of preserving and reporting the indigenous categories of the people studied. Franz Boas (1943) was a major influence in this direction: “If it is our serious purpose to understand the thoughts of a people, the whole analysis of experience must be based on their concepts, not ours” (p. 314).

In an intervention program, certain terms may emerge or be created by participants to capture some essence of the program. In the wilderness education program I evaluated, the idea of “detoxification” became a powerful way for participants to share meaning about what being in the wilderness together meant (Patton, 1999, pp. 49–52). In the Caribbean Extension Project evaluation, the term liming had special meaning for the participants. Not really translatable, it essentially means passing time, hanging out, doing nothing, shooting the breeze—but doing so agreeably, without guilt, stress, or a sense that one ought to be doing something more productive with one’s time. Liming has positive, desirable connotations because of its social group meaning—people just enjoying being together without having to accomplish anything. Given that uniquely Caribbean term, what does it mean when participants describe what happened in a training session or instructional field trip as primarily “liming”? How much “liming” could acceptably be built into training for participant satisfaction and still get something done? How much programmatic liming was acceptable? These became key formative evaluation issues.

In evaluating a leadership training program, we gathered extensive data on what participants and staff meant by the term leadership. Pretraining and posttraining exercises involved having participants write a paragraph on leadership; the writing was part of the program curriculum, not designed for evaluation, but the results provided useful qualitative evaluation data. There were small-group discussions on leadership. The training included lectures and group discussions on leadership, which we observed. We participated in and took notes on informal discussions about leadership. Because the very idea of leadership was central to the program, it was essential to capture variations in what participants meant when they talked about “leadership.” The results showed that the ongoing confusion about what leadership meant was one of the problematic issues in the program. Leadership was an indigenous concept in that staff and participants throughout the training experience used it extensively, but it was also a sensitizing concept since we knew going into the fieldwork that it would be an important notion to study.

Sensitizing Concepts In contrast to purely indigenous concepts, sensitizing concepts refer to categories that the analyst brings to the data. Experienced observers often use sensitizing concepts to orient fieldwork, an approach discussed in Chapter 6 (pp. 357–363). These sensitizing concepts have their origins in social science theory, the research literature, or evaluation issues identified at the beginning of a study. Sensitizing concepts give the analyst “a general sense of reference” and provide “directions along which to look” (Blumer, 1969, p. 148). Using sensitizing concepts involves examining how the concept is manifest and given meaning in a particular setting or among a particular group of people.

Conroy (1987) used the sensitizing concept “victimization” to study police officers. Innocent citizens are frequently thought of as the victims of police brutality or indifference. Conroy turned the idea of victim around and looked at what it would mean to study police officers as victims of the experiences of law enforcement. He found the sensitizing concept of victimization helpful in understanding the isolation, lack of interpersonal affect, cynicism, repressed anger, and sadness observed among police officers. He used the idea of victimization to tie together the following quotes from police officers:

• As a police officer and as an individual I think I have lost the ability to feel and to empathize with people. I had a little girl that was run over by a bus and her mother was there and she had her little book bag. It was really sad at the time but I remember feeling absolutely nothing. It was like a mannequin on the street instead of some little girl. I really wanted to be able to cry about it and I really wanted to have some feelings about it, but I couldn’t. It’s a little frightening for me to be so callous and I have been unable to relax.

• I am paying a price by always being on edge and by being alone. I have become isolated from old friends. We are different. I feel separate from people, different, out of step. It becomes easier to just be with other police officers because they have the same basic understanding of my environment, we speak the same language. The terminology is crude. When I started I didn’t want to get into any words like scumbags and scrotes, but it so aptly describes these people.

• I have become isolated from who I was because I have seen many things I wish I had not seen. It’s frustrating to see things that other people don’t see, won’t see, can’t see. I wish sometimes, I didn’t see the things. I need to be assertive, but don’t like it. I have to put on my police mask to do that. But now it is getting harder and harder to take that mask off. I take my work home with me. I don’t want my work to invade my personal life but I’m finding I need to be alone more and more. I need time to recharge my batteries. I don’t like to be alone, but must. (p. 52)

Two additional points are worth making about these quotations. First, by presenting the actual data on which the analysis is based, the readers are able to make their own determination of whether the concept “victimization” helps in making sense of the data. By presenting respondents in their own words and reporting the actual data that were the basis of his interpretation, Conroy invites readers to make their own analysis and interpretation. The analyst’s constructs should not dominate the analysis, but rather, they should facilitate the reader’s understanding of the world under study.

Second, these three quotations illustrate the power of qualitative data. The point of analysis is not simply to find a concept or label to neatly tie together the data. What is important is understanding the people studied. Concepts are never a substitute for direct experience with the descriptive data. What people actually say and the descriptions of events observed remain the essence of qualitative inquiry. The analytical process is meant to organize and elucidate the story the data tell. Indeed, the skilled analyst is able to get out of the way of the data to let the data tell their own story. The analyst uses concepts to help make sense of and present the data, but not to the point of straining or forcing the analysis. The reader can usually tell when the analyst is more interested in proving the applicability and validity of a concept than in letting the data reveal the perspectives of the people interviewed and the intricacies of the world studied.

Analyst-Created Concepts

Sensitizing concepts are used during fieldwork to guide the inquiry and subsequent analysis. The analysis puts flesh on the bare bones of a sensitizing concept, deepening its meaning and revealing its implications. Concepts can also emerge during analysis that were not yet imagined or conceptualized during fieldwork. At a conference for fathers of teenagers aimed at illuminating strategies for dealing with the challenges of guiding one’s child through adolescence, the term reverse incest anxiety emerged in our analysis to describe some fathers’ fear of expressing physical affection for teenage daughters lest it be perceived as inappropriate.

SIDEBAR

TEMPLATE ANALYSIS

Template analysis is an approach being used in organizational research to organize and make sense of rich, unstructured qualitative data. The analytical framework provides guidance for defining codes, hierarchical coding, and parallel coding. Template analysis involves identifying conceptual themes, clustering them into broader groupings, and, subsequently, identifying “master themes” and subsidiary constituent themes across cases. In organizational research and program evaluation, template analysis “works particularly well when the aim is to compare the perspectives of different groups of staff within a specific context” (King, 2004, p. 257).

SOURCES: King (2012); King and Horrocks (2012); Waring and Wainwright (2008).
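
A coding template of the kind King describes can be sketched as a simple hierarchical data structure. The Python sketch below uses invented theme names; in actual template analysis, the hierarchy is repeatedly revised as coding proceeds.

# A coding template as a nested structure: master themes at the top,
# subsidiary constituent themes and codes beneath. All names invented.
template = {
    "Workplace change": {
        "New technology": ["training burden", "efficiency gains"],
        "Restructuring": ["role ambiguity", "job insecurity"],
    },
    "Staff wellbeing": {
        "Stress": ["workload", "deadlines"],
        "Support": ["peer support", "management support"],
    },
}

def show(node, depth=0):
    """Print higher-order themes before their subsidiary themes."""
    if isinstance(node, dict):
        for theme, children in node.items():
            print("  " * depth + theme)
            show(children, depth + 1)
    else:
        for code in node:
            print("  " * depth + code)

show(template)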

British social scientist Guy Standing (2011) created the term precariat to describe a new class of people in industrialized societies who are in the precarious position of only getting occasional short-term and part-time work and whose quality of life and living standards are made precarious. They have no career path and stability, and they experience multiple forms of economic and social insecurity.

Having suggested how singular concepts can bring focus to inductive analysis, the next level of analysis, constructing typologies, moves us into a somewhat more complex analytical strategy.


Typologies and Continua: Indigenous and Analyst-Constructed Frameworks

There are two kinds of people in the world: those who think there are two kinds of people in the world and those who don’t.

—Humorist Robert Benchley (1889–1945), Law of Distinction

Indigenous Typologies

Typologies are classification systems made up of categories that divide some aspect of the world into parts along a continuum. They differ from taxonomies, which completely classify a phenomenon through mutually exclusive and exhaustive categories, like the biological system for classifying species. Typologies, in contrast, are built on ideal types or illustrative end points rather than a complete and discrete set of categories. Well-known and widely used sociological typologies include Redfield’s folk–urban continuum (gemeinschaft–gesellschaft) and Von Wiese’s and Becker’s sacred–secular continuum (for details, see Vidich & Lyman, 2000, p. 52). Sociologists classically distinguish ascribed from achieved characteristics. Psychologists distinguish degrees of mental illness (neuroses to psychoses). Political scientists classify governmental systems along a democratic–authoritarian continuum. Economists distinguish laissez-faire from centrally planned economic systems. Systems analysts distinguish open from closed systems. In all of these cases, however, the distinctions involve matters of degree and interpretation rather than absolute distinctions. All of these examples have emerged from social science theory and represent theory-based typologies constructed by analysts. We’ll examine that approach in greater depth in a moment. First, however, let’s look at identifying indigenous typologies as a form of qualitative analysis.

Illuminating indigenous typologies requires an analysis of the continua and distinctions used by people in a setting to break up the complexity of reality into distinguishable parts. The language of a group of people reveals what is important to them in that they name something to separate and distinguish it from other things with other names. Once these labels have been identified from an analysis of what people have said during fieldwork, the next step is to identify the attributes or characteristics that distinguish one thing from another. In describing this kind of analysis, Charles Frake (1962) used the example of a hamburger. Hamburgers can vary a great deal in how they are cooked (rare to well done) or what is added to them (pickles, mustard, ketchup, lettuce), and they are still called hamburgers. However, when a piece of cheese is added to the meat, it becomes a cheeseburger. The task for the analyst is to discover what it is that separates a “hamburger” from a “cheeseburger”—that is, to discern and report “how people construe their world of experience from the way they talk about it” (Frake, 1962, p. 74).
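
Frake's hamburger example can be rendered as a toy classifier: a single distinguishing attribute moves an item from one indigenous category to the other, while all other attributes leave the category unchanged. The Python sketch below is purely illustrative; the attribute dictionaries are invented.

# One attribute (cheese) changes the category; doneness and condiments
# do not. This mirrors the analyst's task of discovering which
# attributes separate one indigenous label from another.
def classify(burger: dict) -> str:
    return "cheeseburger" if burger.get("cheese") else "hamburger"

print(classify({"doneness": "rare", "condiments": ["pickles", "mustard"]}))
# -> hamburger
print(classify({"doneness": "well done", "cheese": True}))
# -> cheeseburger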

An analysis example of this kind comes from a formative evaluation aimed at reducing the dropout rate among high school students. In observations and interviews at the targeted high school, it became important to understand the ways in which teachers categorized students. With regard to problems of truancy, absenteeism, tardiness, and skipping class, the teachers had come to label students as either “chronics” or “borderlines.” One teacher described the chronics as “the ones who are out of school all the time, and everything you do to get them in doesn’t work.” Another teacher said, “You can always pick them out, the chronics. They’re usually the same kids.” The borderlines, on the other hand, skip a few classes, waiting for a response, and when it comes they shape up. They’re not so different from your typical junior high student, but when they see the chronics getting away with it, they get more brazen in their actions.

SIDEBAR


THE SIGNAL AND THE NOISE: LIVING WITH, LEARNING FROM, AND HONORING THE NOISE

The metaphor of distinguishing signal from noise is a powerful way to talk about pattern detection in qualitative analysis. In reporting findings, qualitative analysts typically focus on and highlight the patterns and themes found, what they mean, and their implications for theory and/or practice. The signal versus noise distinction can appear to connote that what is valuable is the signal. Of course, one person’s noise can be another person’s signal, so the distinction depends on perspective. But however distinguished, once the signal (pattern, theme, or meaning) is detected, the noise fades into the background. Still, much can be learned from dwelling with and understanding the noise. Describing, characterizing, making sense of, portraying, and understanding the noise can be, in and of itself, a qualitative analysis contribution.

• Evaluation professional Nora Murphy, cofounder of the TerraLuna Collaborative, described to me observing dinner meetings of teachers participating in an innovative initiative and just immersing herself in the predinner chatter and other activities going on—small groups forming and disbanding, people moving around and milling around (taking in the head nodding, head shaking, furled brows, and animated hands; reconnecting hugs and handshakes; and hearing the laughter)—literally experiencing the noise of the interactions to get a sense of how these teachers were coming together with each other.

• Seasoned educator Eleanor Coleman, of the Minnesota Humanities Center, told me how she and her team of district leaders went through an exercise of listing all the new initiatives that had been introduced in the school district over the past five years. The walls were soon covered with a list of more than 100 initiatives that had been introduced, demanding their attention and participation, plus ongoing demands from needy students, concerned parents, paper-pushing administrators, union leaders, and elected officials, while they tried to lead their lives, take care of their families, maintain relationships with friends and neighbors, and feed their spiritual needs. Messy lives. Noisy lives. How, through all that noise, would yet another initiative become an innovative and valued signal, catching the attention of and engendering commitment from teachers? Answering that question—indeed, even beginning to answer that question—meant dwelling more deeply with and understanding the noise.

In The Signal and the Noise, Nate Silver (2012) recounts a conversation with an international terrorism expert in which the expert distinguishes the challenge of finding a proverbial needle in a haystack from the even more daunting challenge of finding one particular needle in a large stack of needles. In both cases, the focus is on finding the needle—the thing you’re looking for, the signal, the pattern, the thing that stands out. And to find that one particular needle you’re looking for, you have to take apart the haystack or the stack of needles. But before doing so, imagine first inquiring into the stack, whether of hay or needles (How did the stack come to be there? What’s the context within which the stack has been stacked? What are the characteristics of the stack? What can be learned about and from the stack?) before destroying it in search of the needle.

A comprehensive, holistic qualitative inquiry will describe, analyze, attend to, and attempt to understand both the signal and the noise. And sometimes, the noise is the signal.

Another teacher said, “Borderlines are gone a lot but not constantly like the chronics.”

Not all teachers used precisely the same criteria to distinguish “chronics” from “borderlines,” but all teachers used these labels in talking about students. To understand the program activities directed at reducing high school dropouts and the differential impact of the program on students, it became important to observe differences in how “borderlines” and “chronics” were treated. Many teachers, for example, refused even to attempt to deal with chronics. They considered it a waste of their time. Students, it turned out, knew what labels were applied to them and how to manipulate these labels to get more or less attention from teachers. Students who wanted to be left alone called themselves “chronics” and reinforced their “chronic” image with teachers. Students who wanted to graduate, even if only barely and with minimal school attendance, cultivated an image as “borderline.”

Another example of an indigenous typology emerged in the wilderness education program I evaluated. As I explained earlier, when I used this example to discuss participant observation, one subgroup started calling themselves the “turtles.” They contrasted themselves to the “truckers.” On the surface, these labels were aimed at distinguishing different styles of hiking and backpacking, one slow and one fast. Beneath the surface, however, the terms came to represent different approaches to the wilderness and different styles of experience in relation to the wilderness and the program.

Groups, cultures, organizations, and families develop their own language systems to emphasize distinctions they consider important. Every program gives rise to special vocabulary that staff and participants use to differentiate types of activities, kinds of participants, styles of participation, and variously valued outcomes. These indigenous typologies provide clues to analysts that the phenomena to which the labels refer are important to the people in the setting and that to fully understand the setting it is necessary to understand those terms and their implications.

SIDEBAR

TAO QUALITIES

When Beauty is recognized in the World
Ugliness has been learned;
When Good is recognized in the World
Evil has been learned.
In this way:
Alive and dead are abstracted from growth;
Difficult and easy are abstracted from progress;
Far and near are abstracted from position;
Strong and weak are abstracted from control,
Song and speech are abstracted from harmony;
After and before are abstracted from sequence.

Comparative Analysis

A newborn is soft and tender,
A crone, hard and stiff.
Plants and animals, in life, are supple and juicy;
In death, brittle and dry.
So softness and tenderness are attributes of life,
And hardness and stiffness, attributes of death.

—Tao Te Ching of Lao Tzu

Analyst-Constructed Typologies Once indigenous concepts, typologies, and themes have been surfaced, understood, and analyzed, the qualitative analysis may move to a different inductive task to further elucidate findings—constructing nonindigenous typologies based on analyst-generated patterns, themes, and concepts. Such constructions must be done with considerable care to avoid creating things that are not really in the data. The advice of biological theorist John Maynard Smith (2000) is informative in this regard: Seek models of the world that make sense and whose consequences can be worked out, for “to replace a world you do not understand by a model of a world you do not understand is no advance” (p. 46).

Constructing ideal types or alternative paradigms is one simple form of presenting qualitative comparisons. Exhibit 8.8 presents my ideal-typical comparison of “coming-of-age paradigms,” which contrasts tribal initiation themes with contemporary coming-of-age themes (Patton, 1999). A series of patterns is distilled into contrasting themes that create alternative ideal types. The notion of “ideal types” makes it explicit that the analyst has constructed and interpreted something that supersedes purely descriptive analysis.

In creating analyst-constructed typologies through inductive analysis, you take on the task of identifying and making explicit patterns that appear to exist but remain unperceived by the people studied. The danger is that analyst-constructed typologies impose a world of meaning on the participants that better reflects the observer’s world than the world under study. One way of testing analyst-constructed typologies is to present them to the people whose world is being analyzed to find out if the constructions make sense to them.

The best and most stringent test of observer constructions is their recognizability to the participants themselves. When participants themselves say, “yes, that is there, I’d simply never noticed it before,” the observer can be reasonably confident that he has tapped into extant patterns of participation. (Lofland, 1971, p. 34)

Exhibit 8.9, using the problem of classifying people’s ancestry, shows what can happen when indigenous and official constructions conflict, a matter of some consequence to those affected.

A good example of an analyst-generated typology comes from an evaluation of the National Museum of Natural History, Smithsonian Institution, done by Robert L. Wolf and Barbara L. Tymitz (1978). This has become a classic in the museum studies field. They conducted a naturalistic inquiry of viewers’ reactions to an exhibit on “Ice Age Mammals and Emergence of Man.” From their observations, they identified four different kinds of visitors to the exhibit.

EXHIBIT 8.8 Coming-of-Age Paradigms

Ideal-typical comparison of “coming-of-age paradigms,” which contrasts indigenous tribal initiation themes with contemporary, analyst-constructed coming-of-age themes

EXHIBIT 8.9 Qualitative Analysis of Ancestry at the U.S. Census

To count different kinds of people—the job of the Census Bureau—you need categories to count them in. The long form of the 2000 census, given to one in six households, asked an open-ended, fill-in-the-blank question about “ancestry.” Analysts then coded the responses into 604 categories, up from 467 in 1980. The government doesn’t ask about religion, so if people respond that they are “Jewish,” they don’t get their ancestry counted. However, those who write in that they are Amish or Mennonite do get counted because those are considered cultural categories.

Ethnic minorities that cross national boundaries, such as French and Spanish Basques, and groups affected by geopolitical change, like Czechs and Slovaks or groups within the former Yugoslavia, are counted in distinct categories. The Census Bureau, following advice from the U.S. State Department, differentiates Taiwanese Americans from Chinese Americans, a matter of political sensitivity.

Can Assyrians and Chaldeans be lumped together? When the Census Bureau announced that they would combine the two in the same “ancestry code,” an Assyrian group sued over the issue but lost the lawsuit. Assyrian Americans trace their roots to a biblical-era empire covering much of what is now Iraq and believe that Chaldeans are a separate religious subgroup. A fieldworker for the Census Bureau did fieldwork on the issue.

“I went into places where there were young people playing games, went into restaurants, and places where older people gathered,” says Ms. McKenney. . . . She paid a visit to Assyrian neighborhoods in Chicago, where a large concentration of Assyrian-Americans lives. At a local community center and later that day at the Assyrian restaurant next door, community leaders presented their case for keeping the ancestry code the same. Over the same period, she visited Detroit to look into the Chaldean matter . . . .

“I found that many of the people, especially the younger people, viewed it as an ethnic group, not a religion,” says Ms. McKenney. She and Mr. Reed (Census Bureau Ancestry research expert) concurred that enough differences existed that the Chaldeans could potentially qualify as a separate ancestry group.

In a conference call between interested parties, a compromise was struck. Assyrians and Chaldeans would remain under a single ancestry code, but the name would no longer be Assyrian, it would be Assyrian/Chaldean/Syriac—Syriac being the name of the Aramaic dialect that Assyrians and Chaldeans speak. “There was a meeting of the minds between all the representatives, and basically it was a unified decision to say that we’re going to go under the same name,” says the Chaldean Federation’s Mr. Yono. (Kulish, 2001, p. 1)
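The coding rules described in this exhibit can be pictured as a simple lookup with exceptions. The sketch below is purely illustrative—the code numbers and category lists are invented, not the Census Bureau’s actual 604-category scheme—but it shows the logic: religious write-ins go uncoded, designated cultural groups are coded despite their religious roots, and everything else is matched against the ancestry list.

```python
# Hypothetical sketch of the kind of rule-based coding described above.
# The code numbers and category lists are invented for illustration.
RELIGIOUS_RESPONSES = {"jewish"}  # religion is not asked about, so not coded
CULTURAL_EXCEPTIONS = {"amish": 101, "mennonite": 102}  # coded as cultural categories
ANCESTRY_CODES = {"basque": 201, "czech": 202, "slovak": 203, "taiwanese": 204}

def code_ancestry(write_in: str):
    """Return an ancestry code for a write-in response, or None if uncodable."""
    response = write_in.strip().lower()
    if response in CULTURAL_EXCEPTIONS:
        return CULTURAL_EXCEPTIONS[response]
    if response in RELIGIOUS_RESPONSES:
        return None  # religious responses do not get an ancestry count
    return ANCESTRY_CODES.get(response)

print(code_ancestry("Amish"))      # 101
print(code_ancestry("Jewish"))     # None
print(code_ancestry("Taiwanese"))  # 204
```

The Assyrian/Chaldean dispute shows why such a scheme is never purely mechanical: the category boundaries themselves are negotiated, and fieldwork may be needed to decide what the categories should be.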

Data Analysis Strategies Scoring Guide

Due Date: End of Unit 8. Percentage of Course Grade: 15%.

Each criterion is rated at one of four levels: Non-performance, Basic, Proficient, or Distinguished.

CRITERION 1 (25% of grade): Describe data analysis methods that are appropriate to the design, allow the research question to be answered, and are supported by appropriate references.
NON-PERFORMANCE: Does not describe data analysis methods.
BASIC: Describes data analysis methods, but methods are not appropriate to the design or will not answer the research question.
PROFICIENT: Describes data analysis methods that are appropriate to the design, allow the research question to be answered, and are supported by appropriate references.
DISTINGUISHED: Describes data analysis methods clearly and in a well-documented manner that are appropriate to the design, allow the research question to be answered, and are supported by appropriate references.

CRITERION 2 (25% of grade): Create interview questions that are relevant to the research question and are free from leading or biased language.
NON-PERFORMANCE: Does not create interview questions that are relevant to the research question and are free from leading or biased language.
BASIC: Creates interview questions that are relevant to the research question, but may not be free from leading or biased language.
PROFICIENT: Creates interview questions that are relevant to the research question and are free from leading or biased language.
DISTINGUISHED: Creates interview questions that are relevant to the research question and are free from leading or biased language, and ensures that questions exemplify the skills of a qualitative researcher.

CRITERION 3 (25% of grade): Describe the role of the researcher.
NON-PERFORMANCE: Does not describe the role of the researcher.
BASIC: Describes the role of the researcher, but the description is incomplete.
PROFICIENT: Describes the role of the researcher.
DISTINGUISHED: Describes the role of the researcher, and provides an insightful analysis of any pre-understandings, preconceptions, and biases about the topic, along with a detailed plan as to how to address them.

CRITERION 4 (25% of grade): Write in a manner that is sufficiently scholarly in tone and contains few editorial or mechanical (grammar, usage, typographical, et cetera) errors.
NON-PERFORMANCE: Writes in a manner that is insufficiently scholarly in tone and contains more than two editorial or mechanical errors per page.
BASIC: Writes in a manner that is sufficiently scholarly in tone and contains fewer than two editorial or mechanical errors per five pages.
PROFICIENT: Writes each item in a manner that is sufficiently scholarly in tone and contains fewer than two editorial or mechanical errors per five pages.
DISTINGUISHED: Writes each item in a manner that is sufficiently scholarly in tone and contains no editorial or mechanical (grammar, usage, typographical, et cetera) errors.
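For readers who want to see how the weights combine, the sketch below works through the arithmetic under one labeled assumption: the scoring guide assigns levels, not points, so the 0–3 point scale here is invented purely to make the 25% criterion weights and the 15% course weight computable.

```python
# Illustrative arithmetic only: how four equally weighted criteria (25% each)
# might roll up into one assignment score, which in turn carries 15% of the
# course grade. The point values per level are invented assumptions.
LEVEL_POINTS = {"non-performance": 0, "basic": 1, "proficient": 2, "distinguished": 3}
WEIGHTS = {  # each criterion is worth 25% of the assignment
    "data analysis methods": 0.25,
    "interview questions": 0.25,
    "role of the researcher": 0.25,
    "scholarly writing": 0.25,
}

def assignment_score(ratings: dict) -> float:
    """Weighted score on a 0-1 scale, given a level for each criterion."""
    max_points = max(LEVEL_POINTS.values())
    return sum(WEIGHTS[c] * LEVEL_POINTS[level] / max_points
               for c, level in ratings.items())

ratings = {
    "data analysis methods": "proficient",
    "interview questions": "distinguished",
    "role of the researcher": "proficient",
    "scholarly writing": "basic",
}
score = assignment_score(ratings)
print(f"assignment: {score:.0%}, contribution to course grade: {score * 0.15:.1%}")
```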

During the latter part of the school year, Mike worked on several projects at once. He worked on a project on basic electricity and took a course on “Beginning Guitar” for project credit.

To improve his communication skills, Mike also worked on an intergroup relations project. This project grew out of an awareness by the staff that Mike liked other students but seemed to lack social interaction with his peers and the staff. Reports at the beginning of the year indicated that he appeared dependent and submissive and was an immature conversationalist. In response to these observations, Mike’s learning manager negotiated project objectives and activities with him that would help improve his communication skills and help him solve some of his interpersonal problems. At the end of the year, Mike noted a positive change related to his communication skills. “I can now speak up in groups,” he said.

Mike’s unfinished project related to his own experience and interests. He had moved to the Portland area from Canada 10 years previously and frequently returned to see relatives. The project was on immigration laws and regulations in the functional citizenship area. At the same time, it would help Mike improve his grammar and spelling. Since students have the option of completing a project started during their junior year when they are seniors, Mike had a chance to finish the project this year. Of the year, Mike said, “It turned out even better than I thought.” Things he liked best about the new experience in EBCE were working at his own speed, going to a job, and having more freedom.

At the end of the year, Mike’s tests showed significant increases in both reading and language skills. In the math and study skill areas, where he was already above average, only slight increases were indicated.

Tests on attitudes, given both at the beginning and at the end of the year, indicated positive gains in self-reliance, understanding of roles in society, tolerance for people whose backgrounds and ideas differ from his, and openness to change.

Aspirations did not change for Mike. He still wants to go into computer programming after finishing college. “When I started the year, I really didn’t know too much about computers. I feel now that I know a lot and want even more to make it my career.”

The description of Mike’s second year in EBCE is omitted. We pick up the case study after his second-year experience.

Mike’s Views of EBCE. Mike reported that his EBCE experiences, especially the learning levels, had improved all of his basic skills. He felt he had the freedom to do the kinds of things he wanted to do while at employer sites. These experiences, according to Mike, have strengthened his vocational choice in the field he wanted to enter and have caused him to look at educational and training requirements plus some other alternatives. For instance, Mike tried to enter the military, figuring it would be a good source of training in the field of computers, but was unable to because of a medical problem.

By going directly to job sites, Mike has gotten a feel for the “real world” of work. He said his work at computer repair–oriented sites furthered his conception of the patience necessary when dealing with customers and the fine degree of precision needed in the repair of equipment. He also discovered how a customer engineer takes a problem, evaluates it, and solves it.

When asked about his work values, Mike replied, “I figure if I get the right job, I’d work at it and try to do my best. . . . In fact, I’m sure that even though I didn’t like the job I’d still do more than I was asked to. . . . I’d work as hard as I could.” Although he has always been a responsible person, he feels that his experiences in EBCE have made him more trustworthy. Mike also feels that he is now treated more like an adult because of his own attitudes. In fact, he feels he understands himself a lot more now.

Mike’s future plans concern trying to get a job in computer programming at an automobile dealership or computer services company. He had previously done some computer work at the automobile dealership in relationship to a project in Explorer Scouts. He also wants more training in computer programming and has discussed these plans with the student coordinator and an EBCE secretary. His attitude toward learning is that it may not be fun but it is important.

When asked in which areas he made less growth than he had hoped to, Mike responded, “I really made a lot of growth in all areas.” He credits the EBCE program for this, finding it more helpful than high school. It gives you the opportunity to “get out and meet more people and get to be able to communicate better with people out in the community.”

Most of Mike’s experiences at the high school were not too personally rewarding. He did start a geometry class there this year but had to drop it as he had started late and could not catch up. Although he got along all right with the staff at the high school, in the past he felt the teachers there had a “barrier between them and the students.” The EBCE staff “treat you on a more individual type circumstance. . . . [They] have the time to talk to you.” In EBCE, you can “work at your own speed. . . . [You] don’t have to be in the classroom.”

Mike recommends the program to most of his friends, although some of his friends had already dropped out of school. He stated, “I would have paid to come into EBCE, I think it’s really that good a program. . . . In fact, I’ve learned more in these two years in EBCE than I have in the last four years at the high school.” He did not even ask for reimbursement for travel expenses because he said he liked the program so much.

Other Perspectives and Data

The Views of His Parents. When Mike first told his parents about the program, they were concerned about what was going to be involved and whether it was a good, educational program. When interviewed in March, they said that EBCE had helped Mike to become more mature and know where he was going.

Mike’s parents said that they were well informed by the EBCE staff in all areas. Mike tended to talk to them about his activities in EBCE, while the only thing he ever talked about at the high school was photography. Mike’s career plans have not really changed since he entered EBCE, and his parents have not tried to influence him, but EBCE has helped him to rule out mechanic and truck driving as possible careers.

Since beginning the EBCE program, his parents have found Mike to be more mature, dependable, and enthusiastic. He also became more reflective and concerned about the future. His writing improved, and he read more.

There are no areas where his parents felt that EBCE did not help him, and they rated the EBCE program highly in all areas.

Test Progress Measures on Mike. Although Mike showed great improvement in almost all areas of the Comprehensive Test of Basic Skills during the first year of participation, his scores declined considerably during the second year. Especially significant were the declines in Mike’s arithmetic applications and study skills scores.

Mike’s attitudinal scores all showed a positive gain over the total two-year period, but they also tended to decline during the second year of participation. On the semantic differential, Mike scored significantly below the EBCE mean at FY 75 posttest on the community resources, adults, learning, and work scales.

Mike showed continued growth over the two-year period on the work, self-reliance, communication, role, and trust scales of the Psychosocial Maturity Scale. He was significantly above the EBCE posttest means on the work, role, and social commitment scales and below average on only the openness to change scale. The openness to change score also showed a significant decline over the year.

The staff rated Mike on seven student behaviors. At the beginning of the year, he was significantly above the EBCE mean on “applies knowledge of his/her own aptitudes, interests, and abilities to potential career interests” and below the mean on “understands another person’s message and feelings.” At posttest time, he was still below the EBCE mean on the latter behavior, as well as on “demonstrates willingness to apply basic skills to work tasks and to vocational interests.”

Over the course of the two years in the EBCE program, Mike’s scores on the Self-Directed Search showed little change in pattern, although the number of interests and competencies did expand. Overall, realistic (R) occupations decreased and enterprising (E) occupations increased as his code changed from RCI (where C is conventional and I is investigative occupations) at pretest FY 74 to ICR at pretest FY 75 (a classification that includes computer operators and equipment repairers) to CEI at posttest FY 75. However, the I was only one point stronger than the R, and the CER classification includes data processing workers. Thus, Mike’s Self-Directed Search codes appeared very representative of his desired occupational future.
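For readers unfamiliar with Holland codes: the Self-Directed Search yields six RIASEC scale scores, and the three-letter summary code is conventionally the three highest-scoring letters in descending order. The sketch below illustrates that derivation with invented scores (Mike’s actual scale scores are not reported), chosen so the result reproduces a CEI code with I only one point above R.

```python
# Sketch of how a three-letter Holland (RIASEC) summary code is conventionally
# derived: rank the six scale scores and take the top three letters. The scores
# below are invented to mirror the CEI-versus-CER situation described above.
scores = {"R": 11, "I": 12, "A": 4, "S": 6, "E": 14, "C": 17}

def holland_code(scores: dict, letters: int = 3) -> str:
    """Top-N scales by score, highest first (ties broken by RIASEC order here)."""
    order = "RIASEC"
    ranked = sorted(scores, key=lambda k: (-scores[k], order.index(k)))
    return "".join(ranked[:letters])

print(holland_code(scores))  # CEI

# A one-point swing between I and R would change the code to CER, which is why
# the evaluators note both classifications.
```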

Evaluators’ Reflections. Mike’s dramatic declines in attitudes and basic skills scores reflect behavior changes that occurred during the second half of his second year on the program and were detected by a number of people. In February, at a student staffing meeting, his learning manager reported of Mike that “no progress is seen in this zone with projects . . . still elusive . . . coasting right now . . . may end up in trouble.” The prescription was to “watch him—make him produce . . . find out where he is.” However, at the end of the next to last zone in mid-May, the report was still “the elusive butterfly! (Mike) needs to get himself in high gear to get everything completed on time!!!” Since the posttesting was completed before this time, Mike probably coasted through the posttesting as well.

Attendance provided further evidence of his lack of concern and involvement during the second half of his senior year: although he missed only two days during the first half of the year, he missed 13 days during the second half.

Mike showed a definite change in some of his personality characteristics over the two years he spent in the EBCE program. In the beginning of the program, he was totally lacking in social skills and self-confidence. By the time he graduated, he had made great strides in his social skills (although there was still much room for improvement). However, his self-confidence had grown to the point of overconfidence. Indeed, the employer-instructor on his last learning level spent a good deal of time trying to get Mike to make a realistic appraisal of his own capabilities.

When interviewed after graduation, Mike was working six evenings a week at a restaurant where he had worked part-time for the past year. He hopes to work there for about a year, working his way up to cook, and then go to a business college for a year to study computers.

SOURCE: Fehrenbacher, Owens, and Haenn (1976, pp. 1, 17–21). Used by permission of Education Northwest.

EXHIBIT 8.34 Excerpts From Codebook for Use by Multiple Coders of Interviews With Decision Makers and Evaluators About Their Utilization of Evaluation Research

This codebook was developed from four sources: (1) the standardized open-ended questions used in interviewing; (2) review of the utilization literature for ideas to be examined and hypotheses to be reviewed; (3) our initial inventory review of the interviews, in which two of us read all the data and added categories for coding; and (4) a few additional categories added during coding when passages didn’t fit well into the available categories.

Every interview was coded twice by two independent coders. Each individual code, including redundancies, was entered into our qualitative analysis database, so that we could retrieve all passages (data) on any subject included in the classification scheme, with brief descriptions of the content of those passages. The analyst could then go directly to the full passages and complete interviews from which these passages were extracted to keep quotations in context. In addition, the computer analysis permitted easy cross-classification and cross-comparison of passages for more complex analyses across interviews.
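In modern terms, the retrieval and cross-classification described here amount to indexing coded passages by code number. The sketch below is an illustrative reconstruction, not the original system: the interview IDs and passage summaries are invented, and only the code numbers (e.g., 1301 Lateness, 1201 program impact, 0207 evaluator’s opinions about role) are taken from the code list that follows.

```python
# Minimal sketch of codebook-based retrieval: store each coded passage with its
# interview ID and code, then pull all passages for a code or cross-classify
# two codes within the same interview. Passages and IDs below are invented.
from collections import defaultdict

passages = [  # (interview_id, code, brief content description)
    ("DM-03", "1301", "report arrived after budget decision"),
    ("DM-03", "1201", "program refunded despite weak findings"),
    ("EV-07", "1301", "data collection delays pushed report date"),
    ("EV-07", "0207", "felt role was purely technical"),
]

by_code = defaultdict(list)
for interview, code, content in passages:
    by_code[code].append((interview, content))

# Retrieve everything coded 1301 (Lateness):
print(by_code["1301"])

# Cross-classification: interviews where lateness (1301) co-occurs with
# comments on program impact (1201).
late = {i for i, _ in by_code["1301"]}
impact = {i for i, _ in by_code["1201"]}
print(late & impact)  # {'DM-03'}
```

The design choice matters less than the discipline it supports: because every passage keeps its interview ID, the analyst can always return from a retrieved excerpt to the full interview to keep quotations in context.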

Characteristics of Program Evaluated

0101 Nature or kind of program
0102 Program relationship to government hierarchy
0103 Funding (source, amount, determination of, etc.)
0104 Purpose of program
0105 History of program (duration, changes, termination, etc.)
0106 Program effectiveness

Evaluator Role in Specific Study

0201 Evaluator’s role in initiation and planning stage
0203 Evaluator’s role in data collection stage
0204 Evaluator’s role in final report and dissemination
0205 Relationship of evaluator to program (internal/external)
0206 Evaluator’s organization (type, size, staff, etc.)
0207 Opinions/feelings about role in specific study
0208 Evaluator’s background
0209 Comments on evaluator and evaluation process

Decision Maker’s Role in Specific Study

0301 Decision maker’s role in initiation and planning stage
0302 Decision maker’s role in data collection stage
0303 Decision maker’s role in final report and dissemination
0304 Relationship of decision maker to program
0305 Relationship of decision maker to other people or units in government
0306 Comments on decision maker and decision-making process (opinions, feelings, facts, knowledge, etc.)

Stakeholder Interactions

0501 Stakeholder characteristics
0502 Interactions during or about initiation of study
0503 Interactions during or about design of study
0504 Interactions during or about data collection
0505 Interactions during or about final report/findings
0506 Interactions during or about dissemination

Planning and Initiation Process of This Study (How and Who started)

0601 Initiator
0602 Interested groups or individuals
0603 Circumstances surrounding initiation

Purpose of Study (Why)

0701 Description of purpose
0702 Changes in purpose

Political Context

0801 Description of political context
0802 Effects on study

Expectations for Utilization

0901 Description of expectations
0902 Holders of expectations
0903 Effect of expectations on study
0904 Relationship of expectations to specific decisions
0905 Reasons for lack of expectations
0906 People mentioned as not having expectations
0907 Effect of lack of expectations on study

Data Collection, Analysis, Methodology

1001 Methodological quality
1002 Methodological appropriateness
1003 Factors affecting data collection and methodology

Findings, Final Report

1101 Description of findings/recommendations
1102 Reception of findings/recommendations
1103 Comments on final report (forms, problems, quality, etc.)
1104 Comments and description of dissemination

Impact of Specific Study

1201 Description of impacts on program
1202 Description of nonprogram impacts
1203 Impact of specific recommendations

Factors and Effects on Utilization

1301 Lateness
1302 Methodological quality
1303 Methodological appropriateness
1304 Positive/negative findings
1305 Surprise findings
1306 Central/peripheral objectives
1307 Point in life of program
1308 Presence/absence of other studies
1309 Political factors
1310 Interaction with evaluators
1311 Resources
1312 Most important factor

EXHIBIT 8.35 Excerpts From an Illustrative Interview Analysis: Reflections on Outcomes From Participants in a Wilderness Education Program

—Jeanne Campbell and Michael Patton

Experiences affect people in different ways. This experiential education truism means that the individual outcomes, impacts, and changes that result from participation in some set of activities are seldom predictable with any certainty. Moreover, the meaning and meaningfulness of such changes as do occur are likely to be highly specific to particular people in particular circumstances. While the individualized nature of learning is a fundamental tenet of experiential education, it is still important to stand back from those individual experiences in order to look at the patterns of change that cut across the specifics of person and circumstances. One of the purposes of the evaluation of the Learninghouse Southwest Field Training Project was to do just that—to document the experiences of individuals and then to look for the patterns that help provide an overview of the project and its impacts.

A major method for accomplishing this kind of reflective evaluation was the conduct of follow-up interviews with the 11 project participants. The first interviews were conducted at the end of October 1977, three weeks following the first field conference in the Gila Wilderness of New Mexico. The second interviews were conducted during the third week of February, three weeks after the wilderness experience in the Kofa Mountains of Arizona. The third and final interviews were conducted in early May, following the San Juan River conference in southern Utah. All interviews were conducted by telephone. An average interview took 20 minutes, with a range from 15 to 35 minutes. Interviews were tape-recorded and transcribed for analysis.

The interviews focused on three central issues: (1) How has your participation in the Learninghouse Project affected you personally? (2) How has your participation in the project affected you professionally? (3) How has your participation in the project affected your institution?

In the pages that follow, participant responses to these questions are presented and analyzed. The major purpose of the analysis was to organize participant responses in such a way that overall patterns would become clear. The emphasis throughout was on letting the participants speak for themselves. The challenge for the evaluators was to present participant responses in a cogent fashion that integrates the great variety of experiences and impacts recorded during the interviews.

Personal Change

“How has your participation in the Learninghouse Project affected you personally? What has been the impact of the project on you as a person?”

Questions about personal change generated more reactions from participants than subsequent questions about professional and institutional change. There is an intensity to these responses about individual change that makes it clear just how significant these experiences were in stimulating personal growth and development. Participants attempted throughout the interviews to indicate that they felt differently about themselves as persons because of their Learninghouse experiences. While such personal changes are often difficult to articulate, the interviews reflect a variety of personal impacts.

Confidence: A Sense of Self

During the three weeks in the wilderness, participants encountered a number of opportunities to test themselves. Can I carry a full pack day after day, uphill and downhill? Can I make it up that mountain? Do I have anything to contribute to the group? As participants encountered and managed stress, they learned things about themselves. The result was often an increase in personal confidence and a greater sense of self.

It’s really hard to say that LH did one thing or another. I think increased self-confidence has helped me do some things that I was thinking about doing. And I think that came, self-confidence came about largely because of the field experiences. I, right after we got back, I had my annual merit evaluation meeting with my boss, and at that I requested that I get a, have a change in title or a different title, and another title really is what it amounts to, and that I be given the chance for some other responsibilities that are outside the area that I work in. I want to get some individual counseling experience, and up to this point I have been kind of hesitant to ask for that, but I feel like I have a better sense of what I need to do for myself and that I have a right to ask for it at least (Cliff, post-Kofas).

I guess something that has been important to me in the last couple of trips and will be important in the next one is just the outdoor piece of it. Doing things that perhaps I’d not been willing to attempt before whatever reason. And finding I’m better at it than expected. Before I was afraid (Charlene, post-Kofas).

The interviews indicate that increased confidence came not only from physical accomplishments but also—and especially—from interpersonal accomplishments.

After the Kofas I achieved several things that I’ve been working on for two years. Basically, the central struggle of the last two years of my life has been to no longer try to please people. No matter what my own feelings and needs are I try to please you. And in the past I had done whatever another person wanted me to do in spite of my own feelings and needs. And to have arrived at a point where I could tend to my own feelings and take care of what I needed to do for me is by far the most important victory I’ve won . . . a major one.

In the Kofas, I amazed myself that I didn’t more than temporarily buy into how . . . I was being described . . . when I didn’t recognize myself yet. And that’s new for me. In the past I’d accept others’ criticisms of me as if they were indeed describing me . . . and get sucked into that. And I felt that was an achievement for me to hold onto my sense of myself in the face of criticisms has long been one of my monsters I’ve been struggling with, so to hold onto me is, especially as I did, was definitely an achievement (Billie, post- Kofas).

I’ve been paying a lot of attention to not looking for validation from other people. Just sticking with whatever kinds of feelings I have and not trying to go outside of myself . . . and lay myself on a platter for approval. I think the project did have a lot to do with that, especially this second trip in the Kofas (Greg, post-Kofas).

I would say the most important thing that happened to me was being able to talk to other people quite honestly about, I think really about their problems more than mine. That’s very interesting in that I think that I had, I think I had an effect upon Billie and Charlene both. As a result of that it gave me a lot more confidence and positive feelings. Do you follow that? Where rather than saying I had this problem and I talked to somebody and they solved it for me, it was more my helping other people to feel good about themselves that made me feel more adequate and better than myself (Rod, post-Gila).

Another element of confidence concerns the extent to which one believes in one’s own ideas—a kind of intellectual confidence.

I think if I take the whole project into consideration. I think that I’ve gained a lot of confidence myself in some of the ideas that I have tried to use, both personally and let’s say professionally. Especially in my teaching aspects, especially teaching at a woman’s college where I think one of our roles is not only to teach women subject matter, but also to teach them to be more assertive. I think that’s a greater component of our mission than normally would have it at most colleges. I think that a lot of the ideas that I had about personal growth and about my own interactions with people were maybe reinforced by the LH experience, so that I felt more confident about them, and as a result they have come out more in my dealings with people. I would say specifically in respect to a sort of a more humanistic approach to things (Rod, post-Kofas).

Increased confidence for participants was often an outcome of learning that they could do something new and difficult. At other times, however, increased confidence emerged as a result of finding new ways to handle old and difficult situations, for example, learning how to recognize and manage stress.

A change I’ve noticed most recently and most strongly is the ability to recognize stress. And also the ability to recognize that I can do a task without needing to make it stressful which is something I didn’t know I did. So what I find I wind up doing, for example, is when I’ve had a number of things happen during the day and I begin to feel myself keying up I find myself very willing to say both to close friends and to people I don’t know very well, I can’t deal with this that you’re bringing me. Can we talk about it tomorrow? This is an issue that really needs a lot of time and a lot of attention. I don’t want to deal with it today, can we talk later . . . etc. So I’m finding myself really able to do that. And I’m absolutely delighted about it.

(Whereas before you just piled it on?)

Exactly. I’d pile it and pile it until I wouldn’t understand why I was going in circles (Charlene, post-Kofas).

Personal Change—Overview

The personal outcomes cited by Learninghouse participants are all difficult to measure. What we have in the interviews are personal perceptions about personal change. The evidence, in total, indicates that participants felt differently and, in many cases, behaved differently as a result of their project participation. Different participants were affected in different ways and to varying extents. One participant reported virtually no personal effects from the experiences.

And as far as the effect it had on me personally, which was the original question, okay, to be honest with you, to a large degree it had very little effect, and that’s not a dig on the program, because at some point in people’s lives I think things start to have smaller effect, but they still have effect. So I think that for me, what it did have an effect on was tolerance. Because there were a lot of things that occurred on the trip that I didn’t agree with. And still don’t agree, but I don’t find myself to be viciously in disagreement any longer, just plainly in disagreement. So it was kind of like before, I didn’t want to listen to the disagreement, or I wanted to listen to it but resolve it. Now, you know, there’s a third option, that I can listen to it, continue to disagree with it and not mind continuing to listen to it (Cory, post–San Juan).

The more common reaction, however, was surprise at just how much personal change occurred.

My expected outcome was increase the number of contacts in the Southwest, and everyone of my expected outcomes were professional. That, you know, much more talk about potential innovations in education and directions to go, and you know, field-based education, what that’s about, and I didn’t expect at all, which may not be realistic on my part, but at least I didn’t expect at all—the personal impact (Charlene, post-Gila).

For others, the year’s participation in Learninghouse was among the most important learning experiences of a lifetime, precisely because the project embraced personal as well as professional growth.

I’ve been involved in institutions and in projects as an educator, let’s say, for 20 years I started out teaching in high school, going to the NSF institutions during the summertime and I’ve gone to a lot of Chautauqua things and a lot of conferences, you know, of various natures. And I really think that this project has by far the greatest. . . . has had by far the greatest impact on me. And I think that the reason is that in all the projects that I’ve had in the past . . . they’ve been all very specifically oriented towards one subject or toward one . . . more of a, I guess, more of a science, more of a subject matter orientation to them. Whereas this having a process orientation has a longer effect. I mean a lot of the things I learn in these instances is out of date by now and you keep up with the literature, for example, and all that and maybe that stimulates you to keep up . . . but in reality as far as a growth thing on my part, I think on the part of other participants, I think that this has been phenomenal. And I just think that this is the kind of thing that we should be looking towards funding on any level, federal, or any level (Rod, post–San Juan).

We come now to a transition point in this report. Having reported participants’ perceptions about personal change, we want to report the professional outcomes of the Learninghouse project. The problem is that in the context of a holistic experience like the Southwest Field Training Project, the personal–professional distinction becomes arbitrary. A major theme running throughout discussions during the conferences was the importance of reducing the personal–professional schism—the desirability of living an integrated life and being an integrated self. This theme is reflected in the interviews, as many participants had difficulty responding separately to questions about personal versus professional change.

Personal–Professional Change

Analytically, there is at least a connotative difference between personal and professional change. For evaluation purposes, we tried to distinguish one from the other as follows: personal changes concern the thoughts, feelings, behaviors, intentions, and knowledge people have about themselves; professional changes concern the skills, competences, ideas, techniques and processes people use in their work. There is, however, a middle ground. How does one categorize changes in thoughts, feelings, and intentions about competences, skills, and processes? There are changes in the person that affect that person’s work. This section is a tribute to the complexity of human beings in defying the neat categories of social scientists and evaluators. This section reports changes that, for lack of a better nomenclature, we have called simply personal/professional impacts.

The most central and most common impact in this regard concerned changes in personal perspective that affected fundamental notions about and approaches to the world of work. The wilderness experiences and accompanying group processes permitted and/or forced many participants to stand back and take a look at themselves in relation to their work. The result was a changed perspective. The following four quotations are from interviews conducted after the first field conference in the Gila, a time when the contrasts provided by the first wilderness experience seemed to be felt most intensely.

The trip came at a real opportune time. I’ve been on this new job about 4–5 weeks and was really getting pretty thoroughly mired in it, kind of overwhelmed by it, and so it came after a particularly hellish week, so in that sense it was just a critical, really helpful time to get away. To feel that I had, to remember that I had some choices, both in terms of whether I stayed here or went elsewhere, get some perspective of what it was I actually wanted to accomplish in higher education rather than just surviving to keep my sanity. And it gave me some, it renewed some of my ability to think of doing what I wanted to do here at the University, or trying to, that there were things that were important for me to do rather than just handling the stuff that poured across my desk (Henry, post-Gila).

I think it’s helped make me become more creative, and just, and that’s kind of tied in with the whole idea of the theory of experiential education. And the way we approached it on these trips. And so for instance I’m talking with my wife the other night, after I got Laura’s paper that she’d given in Colorado, and I said you oughta read this because you can go out and teach history and you know, experientially. Then I gave her an idea of how I would teach frontier history for instance, and I don’t know beans about frontier history. But it was an idea which, then she told another friend about it, and this friend says oh, you can get a grant for that. You know. So that was just a real vivid example, and I feel like, it’s, I’ve been able to apply, or be creative in a number of different situations, I think just because I give myself a certain freedom, I don’t know, I can’t quite pinpoint what brought it about, but I just feel more creative in my work (Cliff, post–San Juan).

You know my biggest problem is I’ve been trying to save the world, and what I’m doing is pulling back. Because, perhaps the way I’ve been going about it has been wrong or whatever, but at least my motives are clearer and I know much more directly what I need and what I don’t need and so I’m more open but less, yeah, as I said, I’ve been in a let’s save the world kind of thing, now I feel more realistic and honest (Charlene, post-Gila).

I’ve been thinking about myself and my relationship to men and my boss, and especially to ideas about fear and risk. . . . I decided that I needed to become a little more visible at the department. After the October experience, I just said I was a bit more ready to become visible at the department level. And I volunteered then to work on developing a department training policy and develop the plan and went down to the department and talked to the assistant about it and put myself in a consulting role while another person was assigned the actual job of doing it. And I think that I was ready to make that decision and act on it after I first of all got clear that I was working on male–female relationships. My department has a man, again, not a terribly easy one to know, so it’s a risk for me to go talk with him and yet I did it. I was relatively comfortable and felt very good and very pleased with myself that I had done that and I think that’s also connected (Billie, post-Kofas).

The connection between personal changes and professional activities was an important theme throughout the Learninghouse Project. The passages reported in this section illustrate how that connection took hold in the minds and lives of project participants. As we turn now to more explicit professional impacts, it is helpful to keep in mind the somewhat artificial and arbitrary nature of the personal–professional distinction.

(Omitted are sections on changed professional knowledge about experiential education, use of journals, group facilitation skills, individual professional skills, personal insights regarding work and professional life, and the specific projects the participants undertook professionally. Also omitted are sections on institutional impacts. We pick up the report in the concluding section.)

Final Reflections

Personal change . . . professional change . . . institutional change. . . . Evaluation categories aim at making sense out of an enormously complex reality. The reflections by participants throughout the interviews make it clear that most of them came away from the Learninghouse program feeling changes in themselves. Something had touched them. Sometimes it meant a change in perspective that would show up in completely unexpected ways.

For one thing, I just finished the purchase of my house. First of all, that’s a new experience for me. I’ve never done it before. I’ve never owned a home and never even wanted to. It seemed odd to me that my desire to “settle down” or make this type of commitment to a place occurred just right after the Gila trip. Just sort of one of those things that I woke up and went, “Wow, I want to stay here. I like this place. I want to buy it.” And I had never in my life lived in a house or a place that I felt that way about. I thought that was kind of strange. And I do see that as a function of personal growth and stability. At least some kind of stability.

Other areas of personal growth: one has been, and this kind of crosses over I think into the professional areas, and that would be an ability to gain perspective. Certainly the trips I think . . . incredibly valuable for gaining perspective on what’s happening in my home situation, my personal life, my professional life . . . the whole thing. And it has allowed me to focus on some priority types of things for me. And deal with some issues that I’ve been kind of dragging on for years and years and not really wanting to face up with them or deal with them. And I have been able to move on and move through those kinds of things in the last 6 or 9 months or so to a much greater extent than ever before (Tom, post–San Juan).

Other participants came away from the wilderness experiences with a more concrete orientation that they could apply to work, play, and life.

The thing that I realized as I was trying to make some connections between the river and raft trip, was that in some ways I can see the parallels of my life being kind of like our raft trip was, and the rapids, or the thrill ride, and they’re a lot of fun, but it’s nice to get out of them for a while and dry off. It’s nice sometimes to be able to just drift along and not worry about things. But a lot of it also is just hard work. A lot of times I wish I could get out of it and go a different way, and that’s been kind of a nice thing for me to think about and kind of a viewpoint to have whenever I see things in a lull or in a real high speed pace, that I can say, “Okay, I’m going to be in this for a while, but I’m going to come out of it and go into something else.” And so that’s kind of a metaphor that I use as somewhat of a philosophy or point of view that’s helpful as I go from day to day (Cliff, post–San Juan).

A common theme that emerged as participants reflected on their year’s involvement with Learninghouse was a new awareness of options, alternatives, and possibilities.

I would say that if I have one overall comment, the effect of the first week overall, is to renew my sense of the broader possibilities in my job and in my life. Opens things to me. I realize that I have a choice to be here and be myself. And since I have a choice, there are responsibilities. Which is a good feeling (Henry, post-Gila).

I guess to me what sticks out overall is that the experience was an opportunity for me to step out of the rest of my life and focus on it and evaluate it, both my personal life and my work, professional life aspect (Michael, post–San Juan).

As participants stood back and examined themselves and their work, they seemed to discover a clarity that had previously been missing—perspective, awareness, clarity—stuff of which personal/professional/institutional change is made.

I think I had a real opportunity to explore some issues of my own worth with a group of people who were willing to allow me to explore those. And it may have come later, but it happened then. On the Learninghouse, through the Learninghouse . . . and I think it speeded up the process of growing for me in that way, accepting my own worth, my own ideas about education, about what I was doing, and in terms of being a teacher it really aided my discussions of people and my interactions. It really gave me a lot of focus on what I was doing. I think I would’ve muddled around a long time with some issues that I was able to, I think, gain some clarity on pretty quickly by talking to people who were sharing their experience and were working towards the same goals, self-directed learning, and experiential education (Greg, post–San Juan).

I think what happened is that for me it served as a catalyst for some personal changes, you know, the personal, institutional, they’re all wound up, bound up together. I think I was really wrestling with jobs and career and so on. For me the whole project was a catalyst, a kind of permission to look at things that I hadn’t looked at before. One of the realizations, one of the insights that I had in the process was, kind of neat on my part, to become concrete, specific in my actions in my life, no matter whether that was writing that I was doing, or if it was in my job, or whatever it was. But to really pay attention to that. I think that’s one of the things that happened to me (Peter, post–San Juan).

These statements from interviews do not represent a final assessment of the impacts of the Learninghouse Southwest Field Training Project. Several participants resisted the request to make summary statements about the effects and outcomes of their participation in the program because they didn’t want to force premature closure.

(Can you summarize the overall significance of participation in the project?)

I do want to make a summary, and I don’t again. . . . It feels like the words aren’t easy and for me being very much a words person, that’s unusual. It’s not necessarily that the impact hasn’t been in the cognitive areas. There have been some. But what they’ve been, where the impact has been absolutely overwhelming is in the affective areas. Appreciation of other people, appreciation of this kind of education. Though I work in it, I haven’t done it before! A real valuing of people, the profession, of my colleagues in a sense that I never had before. . . .

The impact feels like it’s been dramatic, and I’m not sure that I can say exactly how. I’m my whole . . . it all can be summarized perhaps by saying I’m much more in control. In a good kind of sense. In accepting risk and being willing to take it; accepting challenge and being willing to push myself on that; accepting and understanding more about working at the edge of my capabilities . . . what that means to me. Recognizing very comfortably what I can do and feeling good about that confidence, and recognizing that what I haven’t yet done, and feeling okay about trying it. The whole perception of confidence has changed (Charlene, post–San Juan).

The Learninghouse program was many things—the wilderness, a model of experiential education, stress, professional development—but most of all, the project was the people who participated. In response after response participants talked about the importance of the people to everything that happened. Because of the dominance of that motif throughout the interviews, we want to end this report with that highly personal emphasis.

I said before I think that to know some people, that meant a lot to me, people who were also caring. And people who were also involved, very involved in some issues, philosophical and educational, that were pretty basic not only to education, but to living. Knowing these people has been really important to me. It’s given me a kind of continuity and something to hold onto in the midst of a really frustrating, really difficult situation where I didn’t have people where I could get much feedback from, or that I could share much thinking about, talking about, and working with. It’s just kind of basic issues. That kind of continuity is real important to just my feelings, important to myself. Feeling like I have someplace to go. . . . Sometimes I feel funny about placing so much emphasis on the people. . . . But the people have really meant a lot to me as far as putting things together for myself. Being able to have my hands in something that might that really offers me a way to go (Greg, post–San Juan).

SOURCE: Patton (1978a, pp. 7–9).

APPLICATION EXERCISES

1. Practicing cross-case analysis: Three case studies are presented in different chapters of this book. The story of Li presents a Vietnamese woman’s experience in an employment training program (Exhibit 4.3, pp. 182–183). Thmaris is a case study of a homeless youth and his journey to a more stable life (Exhibit 7.20, pp. 511–516). Mike’s story, in this chapter, describes his experiences in a career education program (Exhibit 8.33, pp. 638–642). Conduct a cross-case analysis of these three cases. Identify patterns (descriptive similarities across cases) and themes (the labels you give to what the patterns mean). See page 541 for differentiation of patterns and themes.

2. Conducting a causal analysis: The module on causal analysis ends with a suggested assignment. Here, it is included again as a formal application exercise. In the Sufi story that follows, analyze the data represented by the story in two ways. (1) First, try to isolate specific variables that are important in the story, deciding which are the independent variables and which is the dependent variable, and then write a statement of the form “These things caused this thing.” (2) Read the story again. For the second analysis, try to distinguish among and label the different meanings of the situation expressed by the characters observed in the story; then write a statement of the form “These things and these things came together to create ______.” (3) Discuss the implications of these different interpretative and explanatory approaches. You aren’t being asked to decide that one approach is right and the other is wrong. You are being asked to analyze the implications of each approach for interpreting qualitative case data. Here’s the case data, otherwise known as a story.

Walking one evening along a deserted road, Mulla Nasrudin saw a troop of horsemen coming toward him. His imagination started to work; he imagined himself captured and sold as a slave, robbed by the oncoming horsemen, or conscripted into the army. Fearing for his safety, Nasrudin bolted, climbed a wall into a graveyard, and lay down in an open tomb.


Puzzled at this strange behavior, the men—honest travelers—pursued Nasrudin to see if they could help him. They found him stretched out in the grave, tense and quivering.

“What are you doing in that grave? We saw you run away and see that you are in a state of great anxiety and fear. Can we help you?”

Seeing the men up close, Nasrudin realized that they were honest travelers who were genuinely interested in his welfare. He didn’t want to offend them or embarrass himself by telling them how he had misperceived them, so Nasrudin simply sat up in the grave and said, “You ask what I’m doing in this grave. If you must know, I can tell you only this: I am here because of you, and you are here because of me.” (Adapted from Shah, 1972, p. 16)

3. Exercise on diverse approaches to causal explanation: The Halcolm graphic comic on causality (pp. 635–637) retells an African fable about causality. Exhibit 8.19 (pp. 600–601) presents 12 approaches to qualitative causal analysis. (a) Select three different approaches in Exhibit 8.19, and discuss those approaches as applied to the Halcolm story about why dogs bark. (b) Identify which approaches in Exhibit 8.19 cannot be used with a single case story and why. What is needed to engage in certain kinds of causal explanatory analysis?

4. Exercise on substantive significance: Exhibit 8.35 (pp. 643–649) presents excerpts from an evaluation report. Add your own new section to the end of the report in which you discuss what you believe is substantively significant in the findings. Qualitative inquiry does not have statistical significance tests, so qualitative judgments must be rendered about what is substantively significant. Demonstrate that you can do this and understand what it means by identifying what is substantively significant in the report presented in Exhibit 8.35. Justify your judgments. (See pp. 643–649 for a discussion of substantive significance.)

5. Exercise on reflexivity: Select a topic, issue, or question on which you either have done qualitative inquiry or would like to do a qualitative inquiry. Present a brief outline of your methods, either actual or proposed. Now, write a reflexive analysis section. How does who you are come into play in this inquiry? Discuss yourself as the instrument of qualitative inquiry and analysis. What are your strengths and weaknesses that come into play for this particular actual or proposed inquiry? (See pp. 70–74 for a discussion of reflexivity.)

6. The role of numbers in analysis: MQP Rumination #8 argues that numbers (how many interviewees said what) should not be reported in a qualitative analysis; essentially, qualitative analysis should stay qualitative (see pp. 557–560). Make the opposite argument. Write your own rumination making the case that reporting how many people said what is altogether appropriate and useful.

7. Evaluating qualitative analysis: Locate a qualitative study on a topic of interest to you. How are the analysis process and approach described in the report? What questions do you have about the analysis that are left unanswered, if any? Is reflexivity addressed? Are descriptions clearly separated from interpretations? To what extent are causal explanations offered in the analysis? Are strengths and weaknesses in the data and the analytical process discussed? How is qualitative analysis software discussed, if at all? Use Exhibit 8.32 (pp. 631–632), the qualitative analysis checklist, to review and evaluate a qualitative study of your own choosing.

EXHIBIT 7.19 The Outward Bound Standardized Interview (concluding questions)

a. To what extent, if at all, has the way you have approached new situations since the course been a result of your Outward Bound experience?

8. Have there been any ways in which the Outward Bound course affected you that we haven’t discussed? If yes, how? Would you elaborate on that?

a. What things that you experienced during that week carried over to your life since the course?

b. What plans have you made, if any, to change anything or do anything differently as a result of the course?

9. Suppose you were being asked by a government agency whether or not they should support a course like this. What would you say?

a. Who shouldn’t take a course like this?

10. Okay, you’ve been very helpful. Any other thoughts or feelings you might share with us to help us understand your reactions to the course and how it affected you?

a. Anything at all you’d like to add?

EXHIBIT 7.20 Interview Case Study Example

The Experience of Youth Homelessness: Thmaris Tells His Story

The following case study is one of 14 done as part of a study of youth homelessness in Minnesota (Murphy, 2014).

Thmaris (the name he created for himself for this case study)

Thmaris was born in Chicago, Illinois, in 1990. He has an older and a younger brother, and was raised by his mother. His family moved to Minnesota in 1996 but didn’t have housing when they moved here. As a result, they moved around a lot between extended family members’ homes and family shelters, never staying anywhere for more than a year. His mother was addicted to alcohol and drugs, and it often fell on Thmaris and his older brother to take care of their younger brother. Consequently, Thmaris has been earning money for the family since he turned 12. Sometimes this was through a job—like the construction apprenticeship he had at age 12—and sometimes this was through stealing or dealing weed. Even though he stole and dealt when he had to, he recognized early on that he was happiest when he was working with his hands and providing for his family through a job.

When Thmaris was 13, his mother entered an alcohol treatment program, and he and his brothers went to stay with his uncle. Thmaris remembers this as one of the lowest points in his life. There were six kids in a three-bedroom house, with people coming and going. There was never enough food or clothing, and it didn’t feel safe. When his mother returned from the treatment center to get them from the uncle’s home, Thmaris thought things would get better, but they got much worse. His mother accused Thmaris’s older brother of molesting Thmaris and their little brother, and these accusations tore his family apart. To this day, he doesn’t understand why she did this.

She really went above and beyond to try to prove it and try to accuse him. I know that made him feel like nothing. I know it made him feel like the worse kinds of person. It wasn’t true. There was no truth to it. My bigger brother, he’s super protective and he’s not that type of dude. It just hurt me for her to do something like that and to accuse her own son of something like that. She just accused him of the worst crime type ever. That really stuck with us.

Not only was it painful to watch his mother accuse their brother of something he didn’t do, but also as a result, they lost their older brother, their protector, and the closest thing to a father figure that they had. This placed Thmaris in a tough position, where he felt that he had to align his loyalties either with his older brother or with his mother. If he placed his loyalties with his older brother, it would mean that he couldn’t have a relationship with his younger brother.

He had to place us to a distance. He stepped back from being in our lives a lot. When he went to Chicago to stay with my dad, it was like we never saw him, we never talked to him. It was confusing for me because I’m the middle child. So I have a little brother and I have an older brother. When she sent my brother away, I always felt like she abandoned him, just threw him out of our lives. . . . I would go up to Chicago to see my brother, I’d stay there for a couple of months, just to get that big brother/small brother bond again. Then, I would come back because I didn’t want my little brother growing up like, “Dang. Both of my brothers left me.” It was just always really hard and I just felt like [my mother] just put me in a position to choose, to choose whether I want to be around my little brother or my big brother.

Once again, Thmaris found himself having to grow up quickly. With his older brother gone, he felt that he had become the sole provider for and protector of his younger brother.

His mother continued to cycle in and out of treatment programs, and Thmaris and his younger brother likewise cycled in and out of foster care. Foster care was a time of relative peace and stability for Thmaris and his brother as they developed a relationship with their foster mother that they maintain to this day. He was always drawn back to family, though, and ended up staying with his mother when he was 16. He remembers this year as the longest year of his life. At this time, she wasn’t able to have the boys stay with her because she had Section 8 housing and they weren’t named on her lease. He was angry at the world and fighting with anyone and everyone. He started running with a gang and was getting into a lot of trouble. There were many nights that his mother would lock him out as a consequence of this behavior.

After being locked out several times, Thmaris decided that he’d had it and told his mother that he was moving out. This was the night Thmaris became homeless. When he first moved out, he stayed with his girlfriend. When he couldn’t stay with her, he would couch hop or sleep outside in public places. Thmaris’s girlfriend at the time introduced him to the drop-in centers in Minneapolis and St. Paul. She had a baby and used to visit the drop-in centers to get pampers, wipes, and other baby supplies. Thmaris gravitated toward the drop-in center, which he felt had a smaller, more intimate feel to it. First and foremost, he used it as a place to get off of the streets and be safe. He also appreciated the support they provided for doing things such as writing resumes, applying for jobs, and locating apartments, but he only took advantage of them sporadically.

When asked to recall his first impressions of the drop-in center, Thmaris describes a place that had some clear rules and expectations. While at the drop-in center, Thmaris knew he couldn’t use profanity, fight, smoke, bring in drugs, or be intoxicated. The expectation was that you try your hardest to succeed in whatever you want to do.

They help you with a lot of stuff. I just feel like there’s nothing that you really need that you can’t really get from here. If you really need underwear and socks and t-shirts, they have closets full of stuff like that. If you really need toothpaste and toothbrushes and deodorant and all that stuff, all the hygiene stuff, they have that here. If you have a whole lot of stuff but nowhere to go, they have lockers here where you can leave your stuff here and can’t nobody really get in them. They have showers here that you can use if you really need it. I just feel like they have so many resources for you it’s ridiculous.

During his first few years utilizing the drop-in center, Thmaris continued to be in a gang and was frequently in and out of jail. Being a gang member was his primary source of income, and he was loyal to his fellow gang members even when it got him in trouble. He recalls a time when he got arrested for auto theft.

I was in jail a lot. I remember back in 2009 I was running with this group of guys and they was doing breaking and entering and stuff like that. I knew it could come with jail time or whatever, but I just seen these guys with a lot of money and I was very broke at the time. So I’m like, well, I’m doing it with these guys.

One time we go to this one house, and there’s a car there. The guy I’m with gets the car and he was driving it around. We get to the house, and he’s like, “Well, I forgot to go to the store to get some something.” He’s like, “Could you go to the store?”

I’m like, “Yeah.” So I get the car and I get the keys and I go to the store. And right away I was just surrounded by cops, and I got thrown in jail for auto theft. I could have easily just gave those guys up, but I had always been taught to be a man of responsibility. If I did something, then I should take responsibility for it. So I got thrown in jail; I got a year and a day of jail time over my head with five years’ probation.

After this conviction, Thmaris started using the resources at the drop-in center more consistently and secured a job delivering newspapers. Because he had a job and was making progress, he was accepted into a Transitional Living Program (TLP). Things were moving in a direction that Thmaris felt good about, but he violated the probation related to the auto theft and lost his place in the TLP. He explains that he violated his probation because his probation officer was in Washington County while Thmaris was staying in St. Paul. It was hard to report to a parole officer in St. Cloud when Thmaris didn’t have consistent access to transportation, so Thmaris decided to serve his year and a day instead. He spent time in two correctional facilities before being released after eight months. By the time he was released, he had lost his job and his spot in the TLP.

Turning Point

Going to jail and losing his spot in the TLP was a turning point for Thmaris. When he got out of jail, he recalls saying to himself,

I need to stop with the crimes and committing crimes and just try to find something different to do with my life. . . . So I got back in school. I went to school for welding technology, and that’s been my main focus ever since I got out at that time. It’s been rare that I would go back to jail. If I did, it would be for arguing with my girlfriend or not leaving the skyway when they wanted me to, so it was like trespassing and stuff like that. After I got out of Moose Lake, it just really dawned on me that I needed to change my life and that the life that I was living wasn’t the right path for me.

But he didn’t know how to do this on his own, so when he got out of prison, he went straight to the drop-in center to ask his case manager for help.

Thmaris’s case manager is supported by a Healthy Transitions grant through the Minnesota Department of Health. He works specifically with homeless, unaccompanied youth who spent at least 30 days in foster care between the ages of 16 and 18. Each time a young person visits the drop-in center for the first time, he or she is asked about his or her previous experience with the foster care system. If they report being in foster care between the ages of 16 and 18 for at least 30 days, then they will typically be placed on the caseload of one of the workers supported by the Healthy Transitions grant. Thmaris had two caseworkers before Rahim but was transferred to Rahim when Rahim began working with youth under Healthy Transitions. This was a lucky move for Thmaris because this relationship developed into one that has been deeply meaningful to him.

I just feel that ever since I turned 20 I realized that I’m an adult and that I have to make better choices, not just for me but the people around me. Didn’t nobody help me with that but Rahim. . . . the things that he was able to do, he made sure that he did them. I remember days that I’d come down to Safe Zone, and I’d be like, Rahim, I haven’t eaten in two days or, Rahim, I haven’t changed my underwear in like a week or whatever. He would give me bus cards to get to and from interviews. He would give me Target cards to go take care of my personal hygiene. He would give me Cub cards to go eat. It was like every problem or every obstacle I threw in front of him, he made sure that I would overcome it with him. He was like the greatest mentor I ever had. I’ve never had nobody like that.

Out of all the other caseworkers I had, nobody ever really sat me down and tried to work out a resolution for my problems. They always just gave me pamphlets like, “Well, go call these people and see if they can do something for you.” Or, “You should go to this building because this building has that, what you want.” It was like every time I come to him I don’t have to worry about anything. He’s not going to send me to the next man, put me on to the next person’s caseload. He just always took care of me. If I would have never met Rahim, I would have been in a totally different situation, I would have went a totally different route.

Without the support of Rahim and the resources at the drop-in center, Thmaris is sure he would still be in a gang and dealing weed and would eventually end up in jail again.

And it wasn’t just that Rahim was there for him; it’s also that Rahim has been with the drop-in center for more than five years. This consistency has meant a lot to Thmaris, who shared, “I just seen a lot of case managers come and go. Rahim is the only one that has never went anywhere. So many years have gone past, and Rahim is here.” After his turning point, Rahim recalls Thmaris coming in to the drop-in center with an intense focus on changing his life.

He was here, pretty close to everyday, and he’d never been here that frequently before. He was here all the time, working on job searches. He started to take school really, really seriously, which has been a really positive and strong thing for him. And we kind of sat together and figured out a path that hopefully would help provide for him over time. He got his high school diploma, which was really huge. I think just being successful at something was helpful. He’s a smart kid. I think getting his high school diploma helped convince him of that.

Thmaris recalls this time similarly. “Everything I was doing [with Rahim] was productive. When you get that feeling like you’re accomplishing something and you’re doing good, it’s like a feeling that you can’t describe.”

Going Back to School

When Thmaris first thought about going back to school, he was planning to go just to get a student loan check like many of the homeless youth around him. But Rahim helped him see a different path. Knowing of Thmaris’s past positive experiences with construction and building, Rahim urged Thmaris to consider careers that would allow him to work with his hands and to focus on the big picture. He also made sure that Thmaris had the support of a learning specialist from Saint Paul Public Schools, who helped Thmaris figure out what steps he would need to take to get from where he was then to his dream career of welding.

The first time I ever talked to him about school. I was like, “Yeah, man. I just feel like I should go to school and get a loan.” And he was like, “Well, it’s bigger than that. That loan money is going to be gone like that.”

He didn’t tell me, no, you shouldn’t do that. He just said just think about the future. And I just thought about it. I was thinking and I was thinking. And then I was like, “What if I went to school for something that I want to get a job in?” He was, “Yeah, that’s the best way to go.”

I talked to him about construction, and he told me you already basically have experience in that because he knows all the jobs I had. So he was like you should go into something that’s totally different but that pays a lot of money. Then I researched it and then just knew that I liked working with my hands. So I just put two and two together and was like, well, I want to go to school for welding. Ever since I learned how to weld it’s like . . . I love it!

Thmaris loved welding but was intimidated by the engineering part of the learning, which he called “book work.” Again, it was Rahim’s belief in him that helped him persevere.

I started doing the book work, and I was getting overwhelmed. It was like every time I came down here—even if I just came down here to use the phone—Rahim would be like, “How’s that welding class going?”

He just kept me interested in it. I don’t know how to explain it. I was just thirsty—not even to get a job in it—but to show Rahim what I’d learned and what I accomplished.

He would tell me, “I’m proud of you.” Then I’d show him my book work, and he’d be like, “Man, I don’t even know how to read this stuff.” And it just made me feel like I was actually on the right path. It made me feel like I was doing what I was intended to do with my life. That’s just how he makes me feel. He makes me feel like when I’m doing the right thing, and he makes sure that I know it.

Thmaris finished his welding certificate with a 4.0 grade point average and the high regard of his teachers, and he did this against great odds. He was homeless while completing the program, meaning that he had no consistent place to sleep, to do homework, or even to keep his books. Getting to campus was challenging because bus tokens were hard to come by. During this time, he was also dealing with one of his greatest challenges, unhealthy relationships with older women. In part, these relationships are a survival technique: any time living with these women is time off of the streets. But these relationships are also, in part, a symptom of Thmaris’s desire to be loved, to have a family, and to take care of others.

I just super easily fall in love. I always look for the ones that have been through the most, the ones that have always had a rough time, and I try to make their life better. That’s the biggest thing for me, is trying to stay away from love. I don’t know. I’m just so into love. It’s probably because I wanted my mom and dad to be together so bad it killed me. Every time she brung a new man home it killed me inside. So I just want a family.

The relationship he was in while attending his first semester of welding classes ended badly. The woman threw away or burned everything Thmaris owned and paid people to jump him and “beat the hell out of him.” They broke his hand and wrist so badly that surgery was required, and the healing time was long and painful. Despite all of this, Thmaris persevered. He completed his welding certificate and is now completing his general education requirements.

Leaving the Gang

To choose this different path, Thmaris had to leave his gang. His case manager helped him with this too, by talking through what it was that the gang offered him. Certainly, it offered money and friends, but as Thmaris got older, it also offered more opportunities to serve extended jail time. While Thmaris was in jail for auto theft, no one from the gang came to see him, put money on his books, or checked on his little brother. Thmaris identified this as one of the greatest challenges he’s had to overcome.

I think the first biggest thing I had to do was leave a lot of friends that weren’t on the same level or the same type of mentality that I was on. I had to leave them alone, regardless of if I knew them my whole life or not.

Thmaris’s case manager thinks his commitment to school was critical in helping Thmaris leave his gang. He described leaving a gang as follows.

Leaving a gang is almost like drug addiction. You have to replace it with something. And in Thmaris’s instance, he actually can replace it with this really furious effort towards getting his education, looking for jobs, and trying to do something different with himself. If he had just tried to quit the gang and replaced it with nothing and did nothing all day, it would’ve been a lot harder. But he replaced it with this welding certificate, and that was really good. He got very, very excited about welding. It’s nice because he is naturally good at it.

Becoming a Father

Thmaris has also recently become a father. He currently has a three-month-old son, named after him, who is his pride and joy. But he and the mother have a rocky relationship, and it’s frustrating to Thmaris, who would like to raise his child with his child’s mother. He didn’t grow up with his own father in the picture and would like something different for his own son. He feels that his son would be happier in life waking up each day seeing both of his parents.

Man, I just have so many plans for him. I just . . . I don’t want to put him on a pedestal or anything because I don’t want him to go through having a dad that thinks so highly of him and then it’s so hard for him to meet my goals. I want him to make his goals. I want him to be happy. That’s all I care about. Regardless if he wants to work at Subway instead of going to school . . . if that’s your choice, that’s your choice.

Becoming a father has also helped him give up the gang life and weed. He realizes that if he doesn’t stop selling drugs, then he might not live long enough to see his son grow up.

I don’t want to sell drugs all my life and then when I die and my son be like, “Wow. Dad didn’t leave me nothing or he didn’t teach anything but how to sell drugs.” No, that’s not what you would want for your son. You want your son to know what it is to work for his money. You want your son to know how it feels to come from a long day’s work, tired, like, “Damn. I earned my money though.”

Couch Hopping

Throughout his time visiting drop-in centers, Thmaris has never stayed at a youth or adult shelter. He typically couch hops or stays with women he is dating. Some nights, he walks the skyway all night or sleeps in a stairwell. After staying in family shelters when he was younger, he vowed to himself that he would never stay in another shelter again.

When we first came to Minnesota that’s all we did. We stayed in shelter as a family. It was like traumatizing to me because [in shelters] you see like humans at their weakest point. You see them hungry, dirty. I didn’t like that. I don’t like being around a whole bunch of people that was . . . I’m not saying that I feel like I’m better than anybody because I definitely don’t. But I just felt like it was too much to take in. It was too stressful, and it just made me want to cry. It was crazy. I don’t want to go back to a shelter ever again.

Where He Is Now

Currently, Thmaris isn’t stably housed. He spends some nights at his baby’s mother’s house, some nights with friends, other nights outside, and some nights in a hotel room. But despite this, he feels very positive and hopeful about his life.

The fact that I have my certificate for welding and I’m certified for welding, that just blows me away. I would have never thought in a million years that I would have that. Even though I’ve still got to look for a job and I still got a long ways to go, I just feel proud of myself. . . . I know a lot of people that don’t even have high school diplomas or a GED and they’re struggling to get into college and people going to college just to go get a loan and stuff like that.

I just feel like I’m bettering myself. I’ve learned a lot over these past seven years. I’ve matured a great deal. I honestly feel that I’m bettering myself. I don’t feel like I’m taking any steps back, regardless or not if I have employment or if I have my own house. I just feel like each day I live more and I learn more and I just feel . . . I’m just grateful to be alive, grateful to even go through the things I’m going through.

Thmaris credits having someone believe in him as critically important in helping him learn how to believe in himself. He says the following about his case manager:

He just saw more in me. I didn’t even see it at the time. He saw great potential, and he told me that all the time. “Man, I see great potential in you. I see that. You can just be way much more than what you are.” Just to keep coming down here and having somebody have that much faith in you and believe in you that much, it’s a life changer.

My mom always used to tell me that I wasn’t shit, you know what I’m saying? She was a super alcoholic, and when she gets drunk she always say that. “You ain’t shit, your daddy ain’t shit, you ain’t going to be shit.” She was just always down on me. Just to hear somebody really have an interest in you or want you to better yourself, it just changed my life.

I honestly feel like if I didn’t have Rahim in my corner, I would have been doing a whole bunch of dumb shit. I would have been right back at square one. I probably would have spent more time in jail than I did. I just felt like if it wasn’t for him I probably wouldn’t be here right now talking to you.

Through this relationship, Thmaris was able to learn things that others may take for granted, such as how to create an e-mail account, write a resume, apply for a job online, or use time productively.

To be honest, I never knew what a resume was, I never knew how to create an e-mail account, I never knew how to send a resume online, I never knew how to do an application for a job online. He taught me everything there was about that. He taught me how to look for an apartment, he taught me how to look for a job, he taught me how to dress, he taught me how to talk to a boss, how to talk to a manager, how to get a job. There’s just a lot of stuff like that.

I also learned that there’s always something to do productively instead of wasting your time. So I just thought about making my resume better. And I just thought about sending e-mails to companies that I knew were hiring, and just doing productive stuff. I never knew what “productive” was until Rahim. I just didn’t think about time that was being wasted.

Thmaris hopes to open his own shop someday doing car modifications. He wants to make his son proud, and despite their difficult history and relationship, he wants to make his mother proud. He knows it will take a lot of work to do this but feels motivated and determined to do so.

That’s what I basically learned from being at [the drop-in center]. I understand now the value of doing what you need to do versus what you want. A lot of people say, “Men do what they want and boys do what they can.” But that’s not it. It’s “Men do what they need to do and boys do what they want.” I’m so glad I learned that for real. Because I was always just doing what I wanted to do.

He’s proud that he’s been able to overcome the challenges in his life in order to get a high school diploma and graduate from his welding program. Feeling successful has just fueled Thmaris’s ambition to experience more success.

You don’t know how it felt when I graduated high school. I was like, “Wow, I did this on my own?” And it just felt so good. I’m thirsty again to get another certificate or diploma or whatever just because it’s just the best feeling in the world. It’s better than any drug. It’s like, man, I don’t even know how to explain it. It just felt like you just climbed up to the top of the mountain and just like you made it.

Thmaris feels that without the drop-in center and his case manager he might be in jail right now. He sees other young people “wasting their time” at the drop-in center and wishes he could tell them what he now knows.

If you’re still stuck in that stage where you don’t know what you want to do with your life, then come here and sit down with a case manager. Try to talk to somebody, and they’ll help you better your situation.

For Thmaris, the drop-in center and his case manager were key to helping him quit his addiction, leave his gang, get a high school diploma and welding certificate, and start to build a life for himself that he’s proud of.

APPLICATION EXERCISES

1. Construct an interview that includes (a) a section of standardized open-ended questions and (b) interview guide topics. (See Exhibit 7.4, pp. 437–438.) Interview at least three people. As part of the interview, look for opportunities to add in emergent, conversational questions not planned in advance. After doing the interviews, discuss the differences you experienced among these three interview format approaches. What are the strengths and weaknesses of each?

2. Discuss what in-depth interviewing can and cannot do. What are the strengths and weaknesses of in-depth, open-ended interviewing? As part of your discussion, comment on the following quotation by financial investment advisor Fred Schwed (2014):

Like all of life’s rich emotional experiences, the full flavor of losing important money cannot be conveyed by literature. Art cannot convey to an inexperienced girl what it is truly like to be a wife and mother. There are certain things that cannot be adequately explained to a virgin either by words or pictures. Nor can any description I might offer here even approximate what it feels like to lose a real chunk of money that you used to own.

3. Exhibit 7.7 (p. 445) presents a matrix of question options. To understand how these options are applied in an actual study, review a real interview. The Outward Bound standardized interview, Exhibit 7.19 (pp. 508–511), at the end of this chapter, can be used for this purpose. Identify questions that fit in as many cells as you can. That is, which cell in the matrix (Exhibit 7.7) is represented by each question in the Outward Bound interview protocol?

4. Exhibit 7.12 (pp. 462–463) presents six approaches to interactive, relationship-based interviewing. What do these approaches have in common? How are they different from each other? Why are these approaches controversial from the perspective of traditional social science interviewing? (See Exhibit 7.3, Item 2, p. 433.)

5. Exhibit 7.20 (pp. 511–516), at the end of this chapter, presents a case study of a homeless youth, Thmaris, based on an in-depth interview with him. What is your reaction to the case study? What purposes does it serve? How would you expect it might be used? What makes the case study effective?

6. Examine an oral history data set. Select and compare three oral histories from a collection. Here are examples:

•    The Voices of Feminism Oral History Project, with transcripts available through Smith College: http://www.smith.edu/libraries/libs/ssc/vof/vof-intro.html

•    Oral history project transcripts made available by the University of South Florida: http://guides.lib.usf.edu/ohp

7. Locate a qualitative study in your area of interest, discipline, or profession that made extensive use of interviewing as the primary method of data collection. What approach to interviewing was used? Why? How was the study designed and conducted to ensure high-quality data? What challenges, if any, are reported? How were they handled? What ethical issues, if any, are reported and discussed? Overall, assess how well the study reports on the interviewing methods used to allow you to make a judgment about the quality of the findings. What questions about the interviewing approach and its implications are left unanswered?
