Common misconceptions of critical thinking

SHARON BAILIN, ROLAND CASE, JERROLD R. COOMBS and LEROI B. DANIELS

In this paper, the first of two, we analyse three widely-held conceptions of critical thinking: as one or more skills, as mental processes, and as sets of procedures. Each view is, we contend, wrong-headed, misleading or, at best, unhelpful. Some who write about critical thinking seem to muddle all three views in an unenlightening mélange. Apart from the errors or inadequacies of the conceptions themselves, they promote or abet misconceived practices for teaching critical thinking. Together, they have led to the view that critical thinking is best taught by practising it. We offer alternative proposals for the teaching of critical thinking.

Critical thinking is a subject of considerable current interest, both in terms of theory and pedagogy. A great deal is written about critical thinking, conferences on the subject abound, and educational initiatives aimed at fostering critical thinking proliferate.¹ It is our view that much of the theoretical work and many of the pedagogical endeavours in this area are misdirected because they are based on faulty conceptions of critical thinking. Critical thinking is frequently conceptualized in terms of skills, processes, procedures and practice. Much of the educational literature either refers to cognitive or thinking skills or equates critical thinking with certain mental processes or procedural moves that can be improved through practice. In this paper we attempt to explain the misconceptions inherent in such ways of conceptualizing critical thinking. It is important to note that much of the literature contains a pervasive miasma of overlapping uses of such terms as skill, process, procedure, behaviour, mental operations, etc.

j. curriculum studies, 1999, vol. 31, no. 3, 269-283

Sharon Bailin, a professor in the Faculty of Education, Simon Fraser University, Burnaby, British Columbia, Canada V5A 1S6, is interested in philosophical inquiries into critical thinking, creativity and aesthetic education. Her publications include Reason and Values: New Essays in Philosophy of Education (Calgary, AB: Detselig, 1993), co-edited with John P. Portelli. Roland Case, an associate professor in the Faculty of Education, Simon Fraser University, conducts research in social studies and legal and global education. His most recent book is The Canadian Anthology of Social Studies: Issues and Strategies (Burnaby, BC: Faculty of Education, Simon Fraser University), co-edited with Penney Clark. Jerrold R. Coombs, a professor in the Faculty of Education, University of British Columbia, has published extensively on ethical issues in education and the development of competence in practical reasoning. His publications include Applied Ethics: A Reader (Oxford: Blackwell, 1993), co-edited with Earl R. Winkler. LeRoi B. Daniels, a professor emeritus in the Faculty of Education, University of British Columbia, is interested in philosophy of mind and legal education. He is currently editing (with Roland Case) the 'Critical Challenges Across the Curriculum' series (Burnaby, BC: Faculty of Education, Simon Fraser University).

Journal of Curriculum Studies ISSN 0022-0272 print/ISSN 1366-5839 online © 1999 Taylor & Francis Ltd http://www.tandf.co.uk/JNLS/cus.htm


We thus find similar kinds of error and confusion about critical thinking under superficially different ways of talking. We have tried to focus on plausibly distinct uses of skill, process and procedure in our critiques. Our arguments will lay the groundwork for offering a new conception based on different foundational assumptions in the following paper on this theme.

Critical thinking as skill

Many educators and theorists appear to view the task of teaching critical thinking as primarily a matter of developing thinking skills. Indeed, the discourse on thinking is suffused with skill talk. Courses and conferences focus on the development of thinking skills and references to skills appear in much of the literature.² Even leading theorists in the area of critical thinking conceptualize critical thinking largely in terms of skill. Thus, for example, Siegel (1988: 39, 41) writes of the critical thinker as possessing 'a certain character as well as certain skills', and makes reference to 'a wide variety of reasoning skills'. Similarly, Paul (1984: 5) refers to critical thinking skills and describes them as 'a set of integrated macro-logical skills'. The Delphi Report on critical thinking (Facione 1990), which purports to be based on expert consensus in the field, views critical thinking in terms of cognitive skills in interpretation, analysis, evaluation, inference, explanation and self-regulation.

It is important to note that the term 'skill' can be used in a variety of senses and that, as a consequence, some of the discussion of skills in critical thinking is relatively unproblematic. In some instances 'skill' is used to indicate that an individual is proficient at the task in question. It is used, in this context, in an achievement sense. A skilled reasoner is one who is able to reason well and to meet the relevant criteria for good reasoning. The use of skill in this context focuses attention on students being capable of intelligent performance as opposed to merely having propositional knowledge about intelligent performance. Thus, someone who is thinking critically can do more than cite a definition for ad hominem. He or she will notice inappropriate appeals to an arguer's character in particular argumentative contexts. Clearly, being a critical thinker involves, among other things, having a certain amount of 'know-how'. Such thinkers are skilled, then, in the sense that they must be able to fulfil relevant standards of good thinking. Conceptualizing critical thinking as involving skill in this achievement sense is relatively benign.

However, some of the discussion of skills in the context of critical thinking is more problematic. There is a strong tendency among educators to divide educational goals or objectives into three distinct kinds: knowledge, skills (i.e. abilities), and attitudes (i.e. values), and to assign critical thinking to the category of skills.³ Conceiving of critical thinking as a skill in this sense implies more than simply that an individual is a competent or proficient thinker. It is based on a conception of skill as an identifiable operation which is generic and discrete. There are difficulties with both of these notions. We will begin with the problems entailed in viewing skills as generic, i.e. the view that, once learned, they can be applied in any field of endeavour; the problems involved in viewing skills as discrete will be dealt with later.

Skills as generic

The identification of critical thinking with skill in the tripartite division of educational goals separates critical thinking from the development of knowledge, understanding and attitudes. Critical thinking is seen to involve generic operations that can be learned in themselves, apart from any particular knowledge domains, and then transferred to or applied in different contexts. Thus, for example, Worsham and Stockton (1986: 11, 12) claim that 'there are some skills that are basic and common to most curriculum tasks (for example, gathering information, finding the main idea, determining meaning)'. They further state that:

Most curriculum materials at the high school level require that students analyze, synthesize, and evaluate as well as to [sic] create new 'products', such as original oral and written pieces and artistic creations. Students are expected to apply the appropriate thinking skills to accomplish these tasks.

In a similar vein, Beyer (1987: 163) makes reference to discrete thinking skills and claims that:

To be proficient in a thinking skill or strategy means to be able to use that operation effectively and efficiently on one's own in a variety of appropriate contexts.

The separation of knowledge and critical thinking is fraught with difficulties, however. If the claim that critical thinking skills are generic is taken to mean that these skills can be applied in any context regardless of background knowledge, then the claim seems clearly false. Background knowledge in the particular area is a precondition for critical thinking to take place. A person cannot analyse a particular chemical compound if he or she does not know something about chemistry, and without an understanding of certain historical events a person will be unable to evaluate competing theories regarding the causes of World War I.

Many theorists acknowledge the necessity of background knowledge for critical thinking but still maintain a separation between knowledge and the skill or skills of thinking critically. For example, Nickerson et al. (1985: 49) contend that:

recognizing the interdependence of thinking and knowledge does not deny the reality of the distinction. It is at least conceivable that people possessing the same knowledge might differ significantly in how skillfully they apply what they know.

We argue, however, that the distinction is itself untenable. Skilled performance at thinking tasks cannot be separated from knowledge. The kinds of acts, such as predicting and interpreting, which are put forth as generic skills will, in fact, vary greatly depending on the context, and this difference is connected with the different kinds of knowledge and understanding necessary for successful completion of the particular task. Interpreting a graph is a very different sort of enterprise from interpreting a play. The former involves coming to an understanding of the relationships among the plotted entities based on understanding certain geometric conventions; the latter involves constructing a plausible meaning for the play based on textual evidence. Both of these differ again from the case of interpreting someone's motives, which involves imputing certain beliefs or attitudes to an individual based on reading verbal and bodily cues as well as on past knowledge of the person. Similarly, predicting how a story will end calls upon very different understanding than does predicting the weather. It makes little sense, then, to think in terms of generic skills, which are simply applied or transferred to different domains of knowledge.

Becoming proficient at critical thinking itself involves, among other things, the acquisition of certain sorts of knowledge. For example, the knowledge of certain critical concepts which enable one to make distinctions is central to critical thinking. Understanding the difference between a necessary and a sufficient condition is not just background knowledge but is very much a part of what is involved in thinking critically.

Similarly, proficiency in critical thinking involves an understanding of the various principles which govern good thinking in particular areas, and many of these are domain specific, as McPeck (1981) has pointed out. Barrow (1991: 12) makes the point in this way:

What is clear, what is contradictory, what is logical, and so forth, depends upon the particular context. . . . To be logical in discussion about art is not a matter of combining logical ability with information about art. It is a matter of understanding the logic of art, of being on the inside of aesthetic concepts and aesthetic theory. The capacity to be critical about art is inextricably intertwined with understanding aesthetic discourse.

Facione (1990: 10) sums up well this general point:

This domain-specific knowledge includes understanding methodological principles and competence to engage in norm-regulated practices that are at the core of reasonable judgements in those specific contexts. . . . Too much of value is lost if CT [critical thinking] is conceived of simply as a list of logical operations and domain-specific knowledge is conceived of simply as an aggregation of information.

An additional difficulty with the identification of critical thinking solely with skills, to the exclusion of knowledge and attitudes, is that it fails to recognize the central role played by attitudes in thinking critically. Critical thinking involves more than the ability to engage in good thinking. It also involves the willingness or disposition to do so. Siegel (1988) refers to this aspect of critical thinking as the critical spirit and sees it as of equal importance to the reason-assessment component. Ennis (1987) includes a list of dispositions in his conception of critical thinking, and dispositions, values and traits of character are central to Paul's (1982) notion of a 'strong sense' of critical thinking.


Skills as discrete

Another major difficulty with the equation of critical thinking with skill is that it assumes the existence of certain discrete processes, procedures or operations. It is assumed that acquiring a skill involves becoming proficient at these processes. Thus, Chuska (1986: 25) distinguishes between the 'ways of thinking (the processes involved)' and 'thinking skills (the proficiency a person demonstrates in using the processes)'. In some cases these processes are thought to involve certain mental processes or operations, and in others these processes are conceived of in terms of procedures or steps. The difficulties with both these conceptualizations are dealt with below.

Critical thinking as mental processes

It is a common assumption in discourse about critical thinking that being good at critical thinking is basically a matter of being proficient at certain mental processes.⁴ These processes are generally thought to include such things as classifying, inferring, observing, evaluating, synthesizing and hypothesizing. Kirby and Kuykendall (1991: 7, 11), for example, hold that 'thinking is a holistic process in which different mental operations work in concert' and allude to 'intellectual skills training'. It is our view that a purely 'processes' conception of critical thinking is logically misleading and pedagogically mischievous.⁵

In medicine, talking about processes as outcomes makes some sense. An obstetrician may give a newborn infant an appropriately sound smack to start up certain vital processes. May we not suggest that teachers should seek to do something analogous? If we do, we are presumably not suggesting that they should seek the occurrence of physical processes such as synapse-firing in the brain, but that they should seek the occurrence of such mental processes as analysing or translating. Should they not, then, seek to invoke mental processes?

Talk about mental processes has a logic very different from the logic of talk about physical processes. Physical processes, such as baking or synapse-firing, can, at least in principle, be observed and identified independently of any product they may have. Mental processes can be identified only via their products; observing them directly is a logical impossibility. For example, we suppose that a translating 'process' has occurred in some person only because the person has succeeded in producing a translation.

Descriptions of translating and classifying 'behaviours' are not descriptions of behaviours at all, but descriptions of upshots or accomplishments such as converting poetry to prose. When someone succeeds in such a conversion there is no doubt that something must have gone on 'in' that person which enabled him or her to succeed. To identify this 'something' as a particular mental process is to assume that the same sort of thing goes on within a person in every case in which he or she translates something. There is no reason to suppose this is the case. The so-called 'processes' are hypothesized, and then reified after the fact of these upshots.


Mental processes are differentiated from one another not by observing features of the processes, but by distinguishing among kinds of upshots or accomplishments. The number of different kinds of processes we identify depends upon how we decide to differentiate upshots. For some purposes we may wish to lump them all together. For instance, we may lump together all of the upshots that represent successful application of conventional meaning rules and standards, and then we might talk of 'the process' of translation that all have in common. We may, on the other hand, want to subdivide student successes on the basis of the different kinds of meaning conventions they fulfil. In either case, we will be less inclined to reify and confound categories if we talk about enabling students to fulfil the conventions and standards rather than about their exercising mysterious processes presumed to lie behind such accomplishments. No useful pedagogical aim is served by postulating such processes.

Regardless of the conceptual hazards, people interested in critical thinking, and in education in general, are prone to talk about processes: the thinking process, the reading process, the creative process. What makes this way of characterizing teaching and learning so attractive? In part, the attraction may arise from the ambiguity of the term 'process'. In part, it may also occur because it seems to offer a promising answer to the question, 'Are critical thinking abilities transferable?'

Broadly speaking, a process may be any course of events that has an upshot or a result of some sort. However, there are at least three distinct ways that courses of events relate to their upshots. In the first instance, they may relate as that course of events people now call 'natural selection' relates to its upshot, the evolution of a species. In the second, they may relate as running a race relates to finishing the race. In the third, they may relate as facing an object relates to noticing it. We may characterize these, for the sake of convenience, as: (1) process-product, (2) task-achievement, and (3) orient-reception relations. Process-product pairs are used to pick out situations in which a series of changes or a particular relation produces an identifiable upshot. Task-achievement pairs are used to talk about what people do to bring about upshots. Tasks differ from other 'processes' in that tasks are things people do on purpose in an effort to succeed at something. There are doubtless thousands of task words in most natural languages. Words like 'look', 'search', 'race' and 'teach' can all be used as task words. Their use in this way reflects the fact that many things people seek to accomplish are difficult to bring off. They can try and fail.

Ambiguity in the term 'process' lends a spurious sort of plausibility to the processes conception of critical thinking because it makes it plausible to suppose that all upshots of human activity have the same relation to the activity as products of combustion have to the process of combustion. Because processes are routinely named after their products, it is natural to suppose that achievements and receptions must also have corresponding processes. The result, of course, is unwarranted reification: reading back from outcomes to mysterious antecedent processes.

The process conception is also bolstered by the fact that the same happening may be spoken of as both a process and a task. When one bakes a loaf of bread the changes in the loaf may be seen either as a natural function of heating and of the chemistry of its constituents, or as what the cook does: heating the oven to the proper temperature and so on. The same happenings are, thus, characterized differently. Baking, the chemical process, is a causal occurrence; baking, the task, is a procedure (or an art) intended to bring about the chemical process in proper degree, so that the result is not pasty, or charred, or leaden. Because such words as 'baking' may be ambiguous, it is easy to neglect the difference between the process and the task.

Such reception verbs as 'see', 'notice' and 'realize' refer to upshots of a special kind. First, they involve either (or both) our literal perception apparatuses (eyes, ears, etc.) or our mental abilities. Secondly, although there are tasks we can carry out to position ourselves to see (e.g. sit where we can watch the horizon) or prepare ourselves conceptually (e.g. acquire the concepts of truth and validity), these tasks cannot guarantee that we will have the desired upshot. As White (1967: 69) puts it:

We can ask someone how he [sic] 'would' discover or cure, but not how he 'would' notice, although it is as legitimate to ask how he 'did' notice as it is to ask how he 'did' discover or cure. For the former 'how' question asks for the method, but the latter for the opportunity. Although appropriate schooling and practice can put us in a condition to notice what we used to miss, people cannot be taught nor can they learn how to notice, as they can be taught or can learn how to detect. Noticing, unlike solving, is not the exercise of a skill.

For those interested in teaching students to become better at critical thinking, the moral is clear. We cannot teach students the process of noticing fallacies, for we have no grounds for believing there is such a process. The most we can do is orient them, and this, it seems, we do in at least three ways.

• We teach the person certain concepts, for instance the concept of a valid argument. This enables them to notice fallacies they would otherwise have overlooked, but does not, of course, guarantee that they will notice them.

• We motivate the person to care that arguments are valid and to be on the lookout for invalid arguments.

• We teach procedures that enable the person to orient himself or herself where certain kinds of reception are sought.

The second reason why people become advocates of critical thinking processes is that they want schools to provide curricula such that students learn to do certain things across the curriculum, and into their non-school lives: abstract, analyse, classify, evaluate, sequence, synthesize, translate, etc. These 'processes' are believed to be common to all critical thinking situations and to a range of activities beyond. To educators this means that in teaching them they can economize on instruction because there will be transfer of training. Someone who learns the forehand smash in tennis is likely to learn the forehand smash in squash with less difficulty than a person novice to both. Are we then to suggest that someone who learns, for example, to abstract in the writing of a précis will be able, because of that prior learning, to abstract in depicting a house, or that one who is able to evaluate cars will thereby be able to evaluate hypotheses? What else can we make of talk of processes as general abilities? Critical thinking situations may well have common features, but speaking of processes is of no value; it is, indeed, either otiose or misleading, and we almost certainly risk losing more than we gain. We risk falling into a monochromatic and wholly misleading view of the teaching of critical thinking.

Critical thinking as procedures

Another common misconception of critical thinking sees it as basically a matter of following a general procedure, described usually in terms of a set of steps, stages or phases. We contend that developing students' competence in thinking is not, at heart, dependent on teaching them steps or procedures to follow. We begin by clarifying what we believe is implied by those who characterize critical thinking as following step-by-step procedures. Next, we compare this view with an account of thinking as the exercise of judgement.

Thinking as procedure

Although there is no consensus about the general procedures that constitute thinking, the three most frequently discussed are inquiry (i.e. 'the scientific method'), problem solving, and decision making (Wright 1993). Some writers refer to critical thinking and creative thinking as separate procedures (Marzano et al. 1988: 32, Overgaard 1989: 9). By some accounts, there are as many as eight general thinking procedures: concept formation, principle formation, comprehension, problem solving, decision making, research, composition, and oral discourse (Marzano et al. 1988: 32-33). Each of these is distinguished by the type of conclusion or result produced (e.g. clarification of a concept, a decision about what course of action to take). Proponents of thinking as procedure, by definition, believe that procedures are at the heart of promoting thinking.

An important variable in this view of thinking is the formality of the sequence of steps involved in these general procedures. There is a range of opinion on this matter, spanning what we will call the algorithmic and the heuristic views of thinking as procedure. According to Nickerson et al. (1985: 74), algorithms and heuristics are two types of procedures: an algorithm is a step-by-step prescription that is guaranteed to accomplish a particular goal; an heuristic is a procedure that is merely reasonably likely to yield a solution. Proponents of an algorithmic view of thinking as procedure hold that: (1) there is a manageable number of highly reliable procedures that, taken as a whole, can address the range of situations that students need to resolve, (2) the steps in these procedures form a fixed order, and (3) mastery of these steps is the central challenge in learning to think. Supporters of the heuristic view hold a less stringent set of assumptions: (1) there is a potentially large number of procedures helpful across the range of situations that students need to resolve, (2) the order of the steps in these is not fixed, and (3) mastery of these steps is a pre-eminent, but not necessarily the only, challenge in learning to think.

Although it is difficult to find much support for the algorithmic view of critical thinking, many academics, particularly psychologists, appear to accept the heuristic view. Thus, after reviewing a representative range of programmes to promote thinking, Glaser (1984: 96) notes that 'most of these programs place emphasis on the teaching of general processes, general heuristics and rules for reasoning and problem solving, that might be acquired as transferable habits of thinking'. Marzano et al. (1988: 34) suggest that the procedures should not be taught as 'prescribed procedures' but rather as 'repertoires or arrays of alternatives' that are 'semi-ordered' or are 'working hypotheses about the best way to accomplish a goal, general procedures to be used flexibly by teachers and adapted by students'. For others, however, the sequence of steps to be followed is more significant (e.g. Beach 1987: 146-147).

It is intuitively appealing to describe critical thinking in terms of how an individual is to go about it. The procedure approach, by reducing critical thinking to steps, seeks to provide operational or task descriptions of the building blocks of such thinking. Consider the following example, the 'Decide Model' by E. Daniel Eckberg.⁶ This conception holds or assumes that critical thinking comprises a set of steps characterized as follows:

D. Define the dilemma
   What's the problem? Why does it concern me? What's the basic issue?

E. Examine electives
   What are all sorts of possible ways of solving the problem? What choices do we have? What are our alternative courses of action? What hypothesis can we make?

C. Consider consequences
   What happens if we try each choice? If we do this, then what? How will things change if I choose this one? What data can I collect and consider in considering these consequences?

I. Investigate importance
   What principles are important to me here? What things do I most value? How will these values influence my choice? What am I assuming to be true? What are my preferences and biases?

D. Decide direction
   In the light of the data, what's my choice? Which choice should now be chosen? Which hypothesis seems to be the best? Based on the evidence, what course of action should I take?


E. Evaluate ends
   How can I test my hypothesis? Was my course of action correct? What are the consequences of my choice? Has a tentative hypothesis been proven or disproved? What are my conclusions?

As one can see, the model attempts to characterize critical thinking as a set of procedures to be carried out. None of the steps directly raises the underlying normative questions. Even in asking, 'Was my course of action correct?', the schema refers to what has been completed: a reflection back. Thus, the fundamentally normative and ongoing nature of critical thinking is ignored or masked. Critical thinking is not simply a retrospective undertaking.

It might be suggested that a more appropriate description of the 'decide direction' step is 'make an informed, fair-minded decision'. We agree, but this no longer describes a procedure to be performed; rather, it identifies norms to be fulfilled. As such, it is not characteristic of the procedure view. Although some educators may use the term 'step' to refer to achievement of standards, the focus is overwhelmingly on strategies and heuristics. We do not wish to quibble over conceptual territory; rather, we draw attention to the dominant (possibly, paradigmatic) use of the term 'step' so as to expose the inadequacies of this view of critical thinking as following general procedures.

Concerns with 'thinking as general procedures'

Although we believe that heuristics serve a useful role in learning to think critically, we do not regard them as the central feature of good thinking: there are two basic reasons why the general procedures view is an inadequate way of conceiving of critical thinking. We believe it misrepresents the major obstacle to good thinking, and grossly understates the significance of contextual factors in deciding how to proceed in any particular case of critical thinking.

On the general procedures view, the performance of certain tasks is seen to be a highly reliable means of achieving the desired results of thinking. The educational challenge is, therefore, to equip students with repertoires of procedures they can employ across the range of thinking situations. In our view, the mere performance of certain procedures identified in descriptive terms is insufficient to ensure that what has happened counts as critical thinking.

The performance of tasks such as thinking of reasons for and against a position, or of brainstorming alternatives, does not guarantee that an individual is thinking critically. The pro and con reasons that the individual comes up with may address only the most trivial aspects of the issue; so, too, the brainstorming of alternatives may miss the most sensible alternatives. Learning to engage in such activities has little educational merit unless these things are done in such a way as to fulfil relevant standards of adequacy. Students have, after all, performed these sorts of tasks for much of their lives. The educational goal must be to teach them to do such tasks well by increasing their capacity and inclination to make judgements by reference to criteria and standards that distinguish thoughtful evaluations from sloppy ones, fruitful classification schemes from trivial ones, and so on. A general procedures approach that does not teach standards of good thinking is unlikely to sharpen students' critical judgement. It is for this reason we have suggested that critical thinking should be characterized not in terms of procedures to be carried out, but in terms of the standards a performance must fulfil to count as successful.

Critical thinking is a polymorphous or multi-form enterprise; there are numerous activities that may be helpful in solving a problem or reaching a decision. What steps are appropriate is determined both by the nature of the problem and its context. They are context-bound. For example, in deciding whether any particular government should support international military intervention in 'civil' wars, it is hard to imagine how one set of steps, or any limited set of procedures, could be appropriate for all such circumstances. Nor could the same sequence of problem-solving steps usefully be applied both to fixing a failing relationship and to fixing a civil war. Identifying both these situations as 'problems' masks the very different factors that need to be considered in deciding what should be done in each case.⁷ Given the diversity of problems and problem contexts, we believe that any account of the steps involved in problem solving or decision making will either be so vague as to be largely unhelpful, or so specific as to have little generalizability beyond a specific class of problems or decisions.

To a considerable extent, what we should do in solving a problem is determined by the standards that must be met for the solution in the particular case to be successful. In the case of a failing relationship, it may be lack of honesty with oneself that is the problem. Deciding whether a government should participate in an international intervention may also involve honesty, but it often involves considering the effect on the lives of many innocents, and very large economic effects. Following the decision-making model listed above may simply be an occasion to rationalize the self-deception that gave rise to the personal problem, or the international problem, in the first place. Nurturing open-mindedness may be the only 'step' needed to repair this situation.

We are not claiming that teaching about general procedures is a completely inappropriate way to promote critical thinking. Rather, we emphasize that the effectiveness of any procedure depends on its efficacy in helping students meet the relevant standards for good thinking: there are no inherent or highly reliable connections between learning to think well and performing particular operations. Put another way, what drives increased competence in thinking is greater mastery of the standards for judging an appropriate tack to take in a particular context, not learning pre-programmed, supposedly generalizable, procedures.


Critical thinking and the pedagogy of practice

We have reviewed three conceptions of critical thinking: skills, processes, and procedures. All three have been used to promote the idea that competence in thinking critically is gained primarily through practice. Thus, although we will focus in this section on the skills conception as a source of the pedagogy of practice, we could just as well focus on either the process or the procedures view. Nickerson et al. (1985) discuss learning thinking skills as analogous to two ways of learning physical skills: one when a person practises a particular skill to strengthen it; the other where, by appropriately directing intellectual energy, teachers replace the novice's inefficient movements with more efficient ones. Practice is seen as exercising the skills of critical thinking so that improvement will take place. Students may, for example, be given frequent opportunities to make comparisons in a variety of domains so that the 'skill of comparing' will be exercised, and this aspect of critical thinking improved. We contend, however, that critical thinking is not promoted simply through the repetition of 'skills' of thinking, but rather by developing the relevant knowledge, commitments and strategies and, above all, by coming to understand what criteria and standards are relevant. Repetition does indeed have some role to play, but only if it takes place in the context of the development of such knowledge, criteria, commitments and strategies.

The main assumption underpinning the practice view is that critical thinking consists of a variety of discrete skills that can be improved through repetition. On this view critical thinking skills are analogous to skills in an athletic endeavour such as soccer, where it is possible to practise kicking, heading the ball, passing, etc., and to develop skill at each of these constituent activities independently of ever playing a football game. One repeats the skill until it has become routinized and one no longer needs to apply conscious attention to its execution.

However, this is not an appropriate model for what is involved in becoming better at critical thinking. Unlike athletic skill, skill in critical thinking cannot be separated from understanding the nature and purpose of the task one is attempting to accomplish.⁸ Becoming better at comparing, for example, involves learning to make comparisons according to relevant criteria, making comparisons which are appropriate to the particular circumstances, comparing with a view to the reason the comparison is being made, and so on.

We argued earlier that critical thinking cannot be characterized in terms of specific mental processes, and that there are no good grounds for supposing that terms like comparing, classifying and inferring denote generic mental processes which one can improve through repetition. Here, we emphasize that all aspects of critical thinking centrally involve judgement, and judgement cannot be made routine. Scheffler (1965: 103) makes this point with reference to chess:

critical skills call for strategic judgement and cannot be rendered automatic. To construe the learning of chess as a matter of drill would thus be quite wrong-headed in suggesting that the same game be played over and over again, or intimating that going through the motions of playing repeatedly somehow improves one's game. What is rather supposed, at least in the case of chess, is that improvement comes about through development of strategic judgement, which requires that such judgement be allowed opportunity to guide choices in a wide variety of games, with maximal opportunity for evaluating relevant outcomes and reflecting upon alternative principles and strategy in the light of such evaluation.

An examination of those areas where practice is helpful (for example, artistic performance) makes evident that useful practice involves far more than mere repetition. Practising the piano is not simply a matter of continually repeating a piece in the same manner, but rather of being alert to and attempting to correct errors and continually striving for improvement according to the standards of quality performance. Dewey (1964: 201) makes the point that simply sawing a bow across violin strings will not make a violinist.

It is a certain quality of practice, not mere practice, which produces the expert and the artist. Unless the practice is based upon rational principles, upon insights into facts and their meaning, 'experience' simply fixes incorrect acts into wrong habits.

Howard (1982: 161, 162) also maintains that practice is not mere repetition, but claims that it is, rather, repetition which is 'guided by specific aims such as solving various kinds of problems' or 'improving acquired skills', and 'in accord with some . . . criteria of performance' which enable one to judge the level of mastery of the activity. Thus, he states:

Rather than mechanically duplicating a passage, one strives for particular goals, say, of fluency, contrast, or balance. Successive repeats reflect a drive toward such goals rather than passive absorption of a sequence of motor acts.

The question arises at this point as to how critical thinking can best be developed and what role practice plays in this development. We have argued that what characterizes thinking which is critical is the quality of the reasoning. Thus, in order to become a (more) critical thinker one must understand what constitutes quality reasoning, and have the commitments relevant to employing and seeking quality reasoning. The knowledge necessary for such understanding includes background knowledge relevant to the context in question, knowledge of the principles and standards of argumentation and inquiry, both in general and in specialized areas, knowledge of critical concepts, and knowledge of relevant strategies and heuristics. The kinds of habits of mind, commitments or sensitivities necessary for being a critical thinker include such things as open-mindedness, fair-mindedness, the desire for truth, an inquiring attitude and a respect for high-quality products and performances. Thus, fostering critical thinking would involve the development of such knowledge and commitments.

A variety of means may be employed to promote such development, including direct instruction, teacher modelling, creation of an educational environment where critical inquiry is valued and nurtured, and provision for students of frequent opportunities to think critically about meaningful challenges with appropriate feedback. Practice may also have a role to play, but it must be understood that it is not practice in the sense of a simple repetition of a skill, process or procedure. Rather, such practice presupposes the kind of knowledge outlined above, and involves the development of critical judgement through applying this knowledge in a variety of contexts. It also involves attempts on the part of the learner to improve according to specific criteria of performance, and frequent feedback and evaluation with respect to the quality of thinking demonstrated.

Notes

1. See, for example, Presseisen (1986).
2. Some examples are Worsham and Stockton (1986) and Beyer (1991).
3. One fairly recent example of the use of this tripartite division of goals is to be found in British Columbia Ministry of Education (1991a, b).
4. It is, of course, a category mistake to talk about 'doing' processes; processes happen; people do not do them.
5. One which comes close to this is found in a document produced by a Canadian Ministry of Education (British Columbia Ministry of Education 1991b: 15) which refers to 'thirteen thinking operations: observation, comparing, classifying, making hypotheses, imagining . . .'.
6. The 'Decide Model' is used in an introductory text on economic reasoning (described in Mackey 1977: 410).
7. According to Mackey (1977: 408) problem solving is 'the application of an organized method of reasoning to a difficult, perplexing or bewildering situation'.
8. This is not to deny that many activities, such as football, deeply involve critical thinking in addition to skills.

References

BARROW, R. (1991) The generic fallacy. Educational Philosophy and Theory, 23 (1), 7-17.
BEACH, R. (1987) Strategic teaching in literature. In B. F. Jones, A. S. Palincsar, D. S. Ogle and E. G. Carr (eds), Strategic Teaching and Learning: Cognitive Instruction in the Content Areas (Alexandria, VA: Association for Supervision and Curriculum Development), 135-159.
BEYER, B. K. (1987) Practical Strategies for the Teaching of Thinking (Boston: Allyn & Bacon).
BEYER, B. K. (1991) Teaching Thinking Skills: A Handbook for Elementary School Teachers (Boston: Allyn & Bacon).
BRITISH COLUMBIA MINISTRY OF EDUCATION (1991a) Thinking in the Classroom (Resources for Teachers), Volume One: The Context for Thoughtful Learning (Victoria, BC: Assessment, Examinations, and Reporting Branch, Ministry of Education and Ministry Responsible for Multiculturalism and Human Rights).
BRITISH COLUMBIA MINISTRY OF EDUCATION (1991b) Thinking in the Classroom (Resources for Teachers), Volume Two: Experiences that Enhance Thoughtful Learning (Victoria, BC: Assessment, Examinations, and Reporting Branch, Ministry of Education and Ministry Responsible for Multiculturalism and Human Rights).
CHUSKA, K. R. (1986) Teaching the Process of Thinking, K-12, Fastback 244 (Bloomington, IN: Phi Delta Kappa Educational Foundation).
DEWEY, J. (1964) What psychology can do for the teacher. In R. D. Archambault (ed.), John Dewey on Education: Selected Writings (Chicago: University of Chicago Press), 195-211.
ENNIS, R. H. (1987) A taxonomy of critical thinking dispositions and abilities. In J. B. Baron and R. J. Sternberg (eds), Teaching Thinking Skills: Theory and Practice (New York: Freeman), 9-26.
FACIONE, P. A. (1990) Critical thinking: a statement of expert consensus for purposes of educational assessment and instruction: research findings and recommendations (The Delphi Report). Prepared for the Committee on Pre-College Philosophy of the American Philosophical Association. ERIC ED 315 423.
GLASER, R. (1984) Education and thinking: the role of knowledge. American Psychologist, 39 (2), 93-104.
HOWARD, V. A. (1982) Artistry: The Work of Artists (Indianapolis, IN: Hackett).
KIRBY, D. and KUYKENDALL, C. (1991) Mind Matters: Teaching for Thinking (Portsmouth, NH: Boynton/Cook).
MACKEY, J. (1977) Three problem-solving models for the elementary classroom. Social Education, 41 (5), 408-410.
MARZANO, R. J., BRANDT, R. S., HUGHES, C. S., JONES, B. F., PRESSEISEN, B. Z., RANKIN, C. S. and SUHOR, C. (1988) Dimensions of Thinking: A Framework for Curriculum and Instruction (Alexandria, VA: Association for Supervision and Curriculum Development).
MCPECK, J. E. (1981) Critical Thinking and Education (Oxford: Martin Robertson).
NICKERSON, R. S., PERKINS, D. N. and SMITH, E. E. (1985) The Teaching of Thinking (Hillsdale, NJ: Erlbaum).
OVERGAARD, V. (1989) Focus on thinking: towards developing a common understanding. In R. W. Marx (ed.), Curriculum: Towards Developing a Common Understanding: A Report to the British Columbia Ministry of Education (Vancouver, BC: Vancouver School District), 5-34.
PAUL, R. W. (1982) Teaching critical thinking in the strong sense: a focus on self-deception, world views, and dialectical mode of analysis. Informal Logic, 4 (2), 2-7.
PAUL, R. W. (1984) Critical thinking: fundamental to education for a free society. Educational Leadership, 42 (1), 4-14.
PRESSEISEN, B. Z. (1986) Critical Thinking and Thinking Skills: State-of-the-Art Definitions and Practice in Public Schools (Philadelphia: Research for Better Schools).
SCHEFFLER, I. (1965) Conditions of Knowledge: An Introduction to Epistemology and Education (Glenview, IL: Scott, Foresman).
SIEGEL, H. (1988) Educating Reason: Rationality, Critical Thinking, and Education (New York: Routledge).
WHITE, A. R. (1967) The Philosophy of Mind (New York: Random House).
WORSHAM, A. M. and STOCKTON, A. J. (1986) A Model for Teaching Thinking Skills: The Inclusion Process, Fastback 236 (Bloomington, IN: Phi Delta Kappa).
WRIGHT, I. (1993) Inquiry, problem-solving, and decision making in elementary social studies methods textbooks. Journal of Social Studies Research, 16-17 (1), 26-32.


The Nichols Case

MADI AND DEVON NICHOLS

BACKGROUND

Madi and Devon Nichols, both age 63, have been married for 40 years, are both in good health, and are citizens and residents of Louisiana, a community property state. They expect to work until age 66 to 70. They have the following children and grandchildren:

Elizabeth, an estate planning attorney, is married, healthy, and happy. Madi and Devon adore Elizabeth's husband, Scott, and their four children.

James, a high net worth investment consultant, was recently divorced, and his ex-wife, Catherine, has custody of their three children. Madi and Devon never quite cared for Catherine, as she always seemed quite snooty. Since the divorce, the relationship between Madi and Catherine has been very strained. Since his divorce, James has had somewhat of a mid-life crisis. He recently rented a penthouse apartment and bought a new Jaguar. James has also been dating Natalie, a 21-year-old swimsuit model. While Madi and Devon are confident that this is only a passing phase, they are concerned about giving any gifts outright to James or his children.

Lynn, Madi and Devon's third child, was a bit of a wild child. Lynn died in a tragic motorcycle accident in her senior year of college while on her way home to tell her parents about a big secret she had been keeping. The summer before, Lynn had given birth to a baby girl named Marie. At the time, Lynn gave the baby to the baby's father, an older married man, although no official adoption was ever transacted. Madi and Devon still do not know about Marie.

Madi and Devon own Fresh Veggies, a popular organic health food store located in Louisiana, in a general partnership with another couple. Scott, Elizabeth's husband, has worked at the store since he was a kid. Scott is now one of the store managers and, along with the other partners, continues to direct the company as an executive. Madi and Devon would like to reward Scott for all of his hard work by giving Scott and Elizabeth 3/4 of their interest in the business and giving the remaining 1/4 of their interest in the business to James. They do not want James to have any control over the business, just an income interest.

Elizabeth's youngest child, Andrew, was born with a serious physical disability. To provide additional support for Andrew, Devon created an irrevocable trust 5 years ago with Andrew as the sole beneficiary, funded with an $8,015,000 transfer of separate property. The trust meets the requirements of Section 2503(c). Assume for any calculation of GSTT that the annual exclusion was $15,000 and the lifetime exemption was $11,580,000. Also assume that the GSTT and gift tax rates were 40% for determining the GSTT, even though the taxes were paid 5 years prior.

Children     Age        Grandchildren
Elizabeth    40         4 children (ages 15, 14, 13 & 12)
James        35         3 children (ages 5, 3 & 1)
Lynn         Deceased   1 child
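Before the additional lifetime transfers are listed, it may help to lay out the raw GSTT arithmetic that the background sets up for the trust Devon funded for Andrew (a grandchild, and so a skip person). The Python sketch below only arranges the stated figures; whether the annual exclusion actually applies to the trust and how much exemption is allocated are precisely the judgement calls the questions later in the case test.

```python
# Illustrative GSTT arithmetic for the $8,015,000 trust for Andrew.
# Figures are from the case facts; the two treatments shown are
# assumptions for comparison, not conclusions.
transfer = 8_015_000
annual_exclusion = 15_000       # assumed to apply (a Section 2503(c) trust may qualify)
gst_exemption = 11_580_000      # lifetime exemption per the stated assumptions
rate = 0.40

taxable_amount = transfer - annual_exclusion                                 # 8,000,000
gstt_if_exemption_allocated = rate * max(0, taxable_amount - gst_exemption)  # 0
gstt_if_no_exemption = rate * taxable_amount                                 # 3,200,000

print(taxable_amount, gstt_if_exemption_allocated, gstt_if_no_exemption)
```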

Devon and Madi made the following additional lifetime transfers:

• Four years ago, Devon gave Elizabeth, James, and their spouses $100,000 each of community property (assume the annual exclusion at the time was $11,000).

• Two years ago, Devon gave Elizabeth, James, and their spouses $200,000 each of his separate property. Devon paid gift tax of $347,760 on these gifts.

Madi and Devon have never elected to split gifts of separate property. Devon and Madi estimate the following at each of their deaths:
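Because the $100,000 gifts four years ago were community property, each spouse is automatically treated as the donor of one-half of each gift; no gift-splitting election is involved. A minimal sketch of that arithmetic, using the figures given above (the variable names are ours):

```python
# Community property gift: each spouse is deemed to give half to each donee.
gift_per_donee = 100_000
donees = 4                   # Elizabeth, James, and their spouses
annual_exclusion = 11_000    # exclusion in effect four years ago, per the facts

deemed_gift_per_spouse = gift_per_donee / 2                    # $50,000 per donee
taxable_per_donee = deemed_gift_per_spouse - annual_exclusion  # $39,000
taxable_per_spouse = donees * taxable_per_donee                # $156,000 of taxable gifts each
print(taxable_per_spouse)
```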

• The last illness and funeral expenses are expected to be $100,000 per person.

• Estate administration expenses are estimated at $250,000 per person.

WILLS

Madi does not have a will. Devon has an outdated will leaving most of his probate assets to Madi.

Clauses from Devon's Statutory Last Will and Testament

I, Devon, being of sound mind and wishing to make proper disposition of my property in the event of my death, do declare this to be my Last Will and Testament. I revoke all of my prior wills and codicils.

1. I have been married but once, and only to Madi with whom I am presently living. Out of my marriage to Madi, three children were born, namely Elizabeth, James and Lynn. I have adopted no one nor has anyone adopted me.

2. I leave my Vintage Mustang and House Boat to my son, James.
3. I leave the life insurance proceeds on my life to my daughter, Elizabeth.
4. I leave Vacation Home 1 to my daughter, Lynn.
5. I leave Auto 1 to the Methodist Church, a qualified charity.
6. I give the residual of my estate to Madi, my wife.
7. In the event that Madi predeceases me or fails to survive me for more than six (6) months from the date of my death, I leave any interest of my estate determined to be payable to her to my children, Elizabeth, James and Lynn, in equal 1/3 shares.
8. In the event that any of the named legatees should predecease me, die within six months from the date of my death, disclaim, or otherwise fail to accept any property bequeathed to him or her, then such interest will pass to the said legatee's descendants; otherwise, his or her share of all of my property of which I die possessed shall be paid equally among the surviving named legatees.

9. I name my best friend Keith to serve as the executor of my succession with full seizin and without bond.

10. I direct that the expenses of my last illness, funeral, and the administration of my estate shall be paid by my executor as soon as practicable after my death and allocated against the residual estate.

11. Since I have made numerous lifetime gifts to my children, all inheritance, estate, succession, transfer, and other taxes (including interest and penalties thereon) payable by reason of my death shall be allocated to the children’s share, regardless of whether my spouse survives me.

Statement of Financial Position (Devon and Madi Nichols)

1. Assets are stated at fair market value (rounded to even dollars).
2. Liabilities are stated at principal only (rounded to even dollars).
3. The adjusted basis of the primary residence is $600,000.
4. Madi received Vacation Home 2 from her grandmother, Lois. Madi and Lois were always very close, and Lois gave her the home when Elizabeth was first born so Madi could enjoy motherhood as much as Lois had. Lois purchased the vacation home for $30,000, and the FMV of the home at the date of transfer was $200,000. The FMV when Lois died was $250,000.

5. The life insurance policy has Madi listed as the designated beneficiary. The Investment account is a Transfer on Death account with Elizabeth and James as the listed beneficiaries of both Devon and Madi’s shares.

6. The Yacht was purchased by Devon after his House Boat was destroyed by a hurricane.
7. Property ownership:
   • CP - Community Property
   • H - Husband separate
   • W - Wife separate
8. The insurance face value (death benefit) and the cash value of $1,000,000 are the same.

ASSETS

Cash & Cash Equivalents
  CP  Cash                           $150,000
  Total Cash / Cash Equivalents      $150,000

Invested Assets
  CP  Fresh Veggies                  $4,000,000
  CP  Investment Portfolio           $13,000,000
  H   Life Insurance on Devon        $1,000,000
  CP  Rental Property                $500,000
  Total Investments                  $18,500,000

Personal Use Assets
  CP  Primary Residence              $1,500,000
  H   Vacation Home 1                $950,000
  W   Vacation Home 2                $500,000
  CP  Personal Property              $900,000
  H   Auto 1                         $70,000
  W   Auto 2                         $60,000
  H   Vintage Mustang                $80,000
  H   Yacht                          $900,000
  Total Personal Use                 $4,960,000

Total Assets                         $23,610,000

LIABILITIES AND NET WORTH

Current Liabilities
  CP  Credit Card 1                  $16,000
  CP  Credit Card 2                  $5,000
  Total Current Liabilities          $21,000

Long-Term Liabilities
  CP  Mortgage - Primary Residence   $750,000
  CP  Rental Property                $300,000
  W   Auto 2                         $70,000
  Total Long-Term Liabilities        $1,120,000

Total Liabilities                    $1,141,000

Net Worth                            $22,469,000

Total Liabilities and Net Worth      $23,610,000

Annual Estate Planning Limits (2018-2021)

                                                   2018         2019         2020         2021
Annual Gift Tax Exclusion                          $15,000      $15,000      $15,000      $15,000
Annual Gift Tax Exclusion to a Noncitizen Spouse   $152,000     $155,000     $157,000     $159,000
Applicable Exclusion Amount:
  Gift Tax                                         $11,180,000  $11,400,000  $11,580,000  $11,700,000
  Estate Tax                                       $11,180,000  $11,400,000  $11,580,000  $11,700,000
Applicable Credit Amount:
  Gift Tax Credit Equivalent                       $4,417,800   $4,505,800   $4,577,800   $4,625,800
  Estate Tax Credit Equivalent                     $4,417,800   $4,505,800   $4,577,800   $4,625,800
Maximum Estate and Gift Tax Rate                   40%          40%          40%          40%
GSTT Exclusion Amount                              $11,180,000  $11,400,000  $11,580,000  $11,700,000
Estate Installments (Section 6166)                 $1,520,000   $1,550,000   $1,570,000   $1,590,000
Special Use Valuation (Section 2032A)              $1,140,000   $1,160,000   $1,180,000   $1,190,000

Income Tax Rate Schedule for Estates and Trusts (2021)

If taxable income is:               The tax is:
Not over $2,650                     10% of taxable income
Over $2,650 but not over $9,550     $265 plus 24% of the amount over $2,650
Over $9,550 but not over $13,050    $1,921 plus 35% of the amount over $9,550
Over $13,050                        $3,146 plus 37% of the amount over $13,050
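The schedule lends itself to a short calculator. The sketch below simply encodes the four brackets as printed; the function name and the example input are ours, not part of the case materials.

```python
def trust_tax_2021(taxable_income: float) -> float:
    """2021 income tax for estates and trusts, per the schedule above."""
    if taxable_income <= 2_650:
        return 0.10 * taxable_income
    if taxable_income <= 9_550:
        return 265 + 0.24 * (taxable_income - 2_650)
    if taxable_income <= 13_050:
        return 1_921 + 0.35 * (taxable_income - 9_550)
    return 3_146 + 0.37 * (taxable_income - 13_050)

# Example: $20,000 of trust taxable income -> 3,146 + 0.37 * 6,950 = $5,717.50
print(trust_tax_2021(20_000))
```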


ATRA 2012 & Current Limits

Made 2010 legislation permanent, including portability.

2020 & 2021 annual exclusion $15,000.

2020 lifetime exemption $11,580,000.

2021 lifetime exemption $11,700,000.

Above $1,000,000, the estate, gift, and generation-skipping tax rate is 40%.

2018 changes: instead of the $5M base amount from 2010 being indexed for inflation, the law treats the base as $10M indexed for inflation. Congress included a sunset provision, so the exemption reverts to 2017 limits, indexed for inflation, in 2026.


ESTATE PLAN

Answer the following questions. Assume the facts given in the fact pattern and that the 2021 estate and gift tax rates and annual exclusion apply to all transfers in the current and previous years.

1. Which of the following transfer mechanisms would be appropriate for the transfer of Fresh Veggies to James and Elizabeth assuming Madi and Devon did not want to make an outright gift of the company to them?

1. Private Annuity.

2. SCIN.

3. Family Limited Partnership.

4. QPRT.

A. 1 only.

B. 3 only.

C. 1 and 2.

D. 1, 2, 3, and 4.

2. If Devon died today, which of the following statements is true regarding the transfers made in his will?

Madi will receive Devon’s interest in the Investment Portfolio.

Elizabeth will receive the proceeds of the life insurance policy.

James will receive the yacht in place of the houseboat.

Marie may potentially receive Vacation Home 1 as Lynn’s rightful heir.

3. Assuming Devon died today, calculate his gross estate.

4. Assuming Devon died today, calculate his probate estate.

5. Assuming Devon died today, calculate the marital deduction available for transfers to Madi (remember this is a net amount).

6. Ignoring the data above, assume that Devon died today, the estate tax due was $702,591, and Keith was appointed executor. Unfortunately, Keith did not file the Estate Tax Return (Form 706) or pay the estate tax due until 45 days after the return’s due date. How much is the failure-to-file penalty?
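For reference, the general failure-to-file rule (background law, not stated in the fact pattern) is 5% of the unpaid tax per month or part of a month the return is late, capped at 25%; when the failure-to-pay penalty runs for the same months, the failure-to-file portion is reduced by that 0.5%. A sketch of the gross failure-to-file computation under those assumptions:

```python
import math

def failure_to_file_penalty(tax_due: float, days_late: int) -> float:
    """Gross failure-to-file penalty: 5% of the unpaid tax per month
    or part of a month late, capped at 25% (general rule; assumed
    here, since the fact pattern states no penalty rates)."""
    months = min(math.ceil(days_late / 30), 5)  # 25% cap = 5 months
    return 0.05 * months * tax_due

# 45 days late counts as one full month plus part of a second month.
print(failure_to_file_penalty(702_591, 45))  # about $70,259 (2 months x 5%)
```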

7. Assume Madi and Devon wanted to establish college funds for each of the grandchildren. Which of the following statements would be true?

An irrevocable trust would be a completed gift for gift tax and estate tax purposes.

The transfers would not be subject to GSTT, regardless of to whom the money is paid, because the payments were made for education.

If Devon and Madi found out about Marie and wanted to establish a fund for her as well, then the transfer to Marie would be subject to GSTT.

An appropriate planning technique for both Elizabeth and James’ family would be to place the assets into 2 family trusts, 1 for Elizabeth’s children (with Elizabeth and Scott as joint trustees) and 1 for James’ children (with Catherine as the trustee).

8. Which of the following statements regarding the transfer to Andrew 5 years ago is correct?

Andrew is a skip person because he is more than 37½ years younger than Devon; thus, the transfer results in a taxable termination.

The transfer will qualify for the GSTT annual exclusion.

Assuming 2020 rates apply to this transfer, the GSTT will be 40% of $8,000,000.

The only tax consequence for this transfer will be GSTT due.

9. Assuming Madi died today, which of the following statements is true?

Kathi’s assets would avoid probate.

All of Madi’s community property assets would transfer to Devon because of the implied right of survivorship.

State intestacy law would dictate who received Madi’s assets.

Madi’s gross estate would include the life insurance policy on Devon.

10. Assuming Devon died today, which of the following statements regarding a valid disclaimer is correct?

Assume Devon left Elizabeth his interest in the Yacht. If Elizabeth disclaims her property, then the transfer will be subject to GSTT.

If James disclaimed his property, the property would transfer to Kathi.

If Madi wanted to disclaim a portion of the property, she must do so by the due date of the estate tax return plus extensions.

In order for the disclaimer to be valid it must be in writing or witnessed by 3 nonrelated individuals if the disclaimer is oral.

11. Assume Madi died today and left Vacation Home 2 to Elizabeth. What would Elizabeth’s adjusted basis be in Vacation Home 2?

12. Assume Madi died today and left her share of the personal residence to Devon. What would Devon’s adjusted basis be in the personal residence?
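Questions 11 and 12 turn on the basis rules at death: inherited property generally takes a basis equal to fair market value at death, and for community property both halves are stepped up when the first spouse dies (general rules under IRC sec. 1014, supplied here as background rather than by the fact pattern). A sketch, assuming the statement values above reflect FMV today:

```python
def basis_inherited(fmv_at_death: float) -> float:
    """Separate property left to an heir: basis steps to FMV at death."""
    return fmv_at_death

def basis_surviving_spouse_cp(fmv_at_death: float) -> float:
    """Community property: both the decedent's half and the survivor's
    half step up to FMV at death (the 'double step-up')."""
    return fmv_at_death / 2 + fmv_at_death / 2  # i.e., full FMV

# Values from the statement of financial position above:
print(basis_inherited(500_000))              # Vacation Home 2 to Elizabeth
print(basis_surviving_spouse_cp(1_500_000))  # primary residence to Devon
```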

13. Madi and Devon are considering making a charitable contribution to the Boys and Girls Club of America and want the grandchildren to receive income from the property for an extended period. Which of the following charitable devices may be appropriate to meet their objectives?

1. Charitable Remainder Annuity Trust

2. Charitable Remainder Unitrust Trust

3. Pooled Income Fund

4. Charitable Lead Trust

A. 1 only

B. 1, 2, and 3

C. 2, 3, and 4

D. 1, 2, 3, and 4

14. Which of the following types of clauses appear in Devon’s will?

1. Specific Bequests

2. Survivorship Clause

3. No-contest clause

4. Simultaneous Clause

A. 1 only

B. 2 and 3

C. 1 and 2

D. 1, 2, 3, and 4

15. Assume Devon died today, and Keith is appointed as executor. Of the following, which is not an available election Keith can make before he files Devon’s estate tax return?

Electing the QTIP election on property passing to Kathi.

Utilizing the annual exclusion against testamentary transfers.

Selection of the income tax year end for Devon.

Deducting the expenses of administering Devon’s estate on the estate tax return (Form 706).

16. Assume Devon died in 2020. Which filing status can Madi use on her 2020 income tax return?

Single

Head of Household.

Married Filing Jointly

Qualifying Widow.

17. Assume Devon died in 2020. Which filing status can Madi use on her 2021 income tax return?

Single

Head of Household.

Married Filing Jointly.

Qualifying Widow.

18. Assume Devon died today, Keith is appointed executor, and the estate does not have sufficient cash to pay the required taxes or expenses. Of the following statements, which is not true regarding selling an estate’s assets to generate cash?

The estate may have income tax consequences.

The assets may not be sold at full, realizable fair market value.

Any losses on the sale of the assets are deductible as losses on the estate tax return.

Any selling expenses are deductible on the estate tax return.

19. Assume Devon dies today, and Keith is appointed executor. Keith is considering electing the alternate valuation date. Which of the following statements does not correctly reflect the rules applicable to the alternate valuation date?

The general rule is the election covers all assets included in the gross estate and cannot be applied to only a portion of the property.

Assets disposed of within 6 months of decedent’s death must be valued on the date of disposition.

The election can be made even though an estate tax return does not have to be filed.

The election must decrease the value of the gross estate and decrease the estate tax liability.

20. Assume Devon transfers ownership of the life insurance policy on his life to an Irrevocable Life Insurance Trust (ILIT) and retains the right to borrow against the policy. Assume Devon dies 5 years later. Which of the following is correct regarding the treatment of the proceeds of the life insurance policy?

The proceeds will be included in Devon’s federal gross estate if he has any outstanding loans against the life insurance policy.

The proceeds will be included in Devon’s federal gross estate if he continued paying the policy premiums after the life insurance policy was transferred to the ILIT.

The proceeds will never be included in Devon’s federal gross estate.

The proceeds will always be included in Devon’s federal gross estate.

American Library Association

Library Technology Reports

Expert Guides to Library Systems and Services

alatechsource.org

Combating Fake News in the Digital Age

Joanna M. Burkhardt


Abstract

The issue of fake news has become very prominent in recent months. Its power to mislead and misinform has been made evident around the world. While fake news is not a new phenomenon, the means by which it is spread has changed in both speed and magnitude. Social media platforms like Facebook, Twitter, and Instagram are fertile ground for the spread of fake news. Algorithms known as bots are increasingly being deployed to manipulate information, to disrupt social media communication, and to gain user attention. While technological tools to identify fake news are beginning to appear, they are in their infancy. It will take time for programmers to create software that can recognize and tag fake news without human intervention. And even if technology can help identify fake news in the future, those who create and provide fake news will keep devising means to evade detection, a loop in which those who want to avoid fake news are always playing catch-up.

Individuals have the responsibility to protect themselves from fake news. It is essential to teach ourselves, our students, and our patrons to be critical consumers of news. This issue of Library Technology Reports (vol. 53, no. 8), “Combating Fake News in the Digital Age,” is for librarians who serve all age levels and who can help by teaching students both that they need to be aware of fake news and how to be aware of it. Library instruction in how to avoid fake news, how to identify fake news, and how to stop fake news will be essential.


Copyright © 2017 Joanna M. Burkhardt All Rights Reserved.


Volume 53, Number 8

Combating Fake News in the Digital Age ISBN: 978-0-8389-5991-6


About the Author

Joanna M. Burkhardt is Full Professor/Librarian at the University of Rhode Island Libraries. She is Director of the branch libraries in Providence and Narragansett and the URI Libraries Collection Development Manager. She earned an MA in anthropology from the University of Wisconsin–Madison in 1981 and an MLS from the University of Rhode Island in 1986. She has taught information literacy to both students and teachers since 1999. She has given workshops, presentations, podcasts, keynote addresses, and panel discussions about information literacy. She is coauthor or author of four books about information literacy. She addressed the topic of fake news at the ALA Annual Conference in 2017 and designed a poster and bookmark on that topic for ALA Graphics.


Contents

Chapter 1—History of Fake News
  Pre–Printing Press Era
  Post–Printing Press Era
  Mass Media Era
  Internet Era
  Global Reach of Fake News
  Notes

Chapter 2—How Fake News Spreads
  Word of Mouth
  Written Word
  Printed Media
  Internet
  Social Media
  Notes

Chapter 3—Can Technology Save Us?
  Technology of Fake News
  Big Data
  Bots
  Experiments in Fake News Detection
  Experiments in Bot and Botnet Detection
  Google and Facebook Anti–Fake News Efforts
  Notes

Chapter 4—Can We Save Ourselves?
  Learn about Search Engine Ranking
  Be Careful about Who You “Friend”
  ID Bots
  Read before Sharing
  Fact-Check
  Evaluate Information
  Seek Information beyond Your Filter Bubble
  Be Skeptical
  Use Verification and Educational Tools
  Notes

Chapter 5—How Can We Help Our Students?
  Teach Information or Media Literacy
  Make Students Aware of Psychological Processes
  Tie Information Literacy to Workplace Applications
  Teach Students to Evaluate Information
  Teach Information Literacy Skills and Concepts
  Teach the Teachers
  Conclusion
  Notes


Chapter 1

History of Fake News

“Massive digital misinformation is becoming pervasive in online social media to the extent that it has been listed by the World Economic Forum (WEF) as one of the main threats to our society.”1

Fake news is nothing new. While fake news was in the headlines frequently in the 2016 US election cycle, the origins of fake news date back to before the printing press. Rumor and false stories have probably been around as long as humans have lived in groups where power matters. Until the printing press was invented, news was usually transferred from person to person via word of mouth. The ability to have an impact on what people know is an asset that has been prized for many centuries.

Pre–Printing Press Era

Forms of writing inscribed on materials like stone, clay, and papyrus appeared several thousand years ago. The information in these writings was usually limited to the leaders of the group (emperors, pharaohs, Incas, religious and military leaders, and so on). Controlling information gave some people power over others and has probably contributed to the creation of most of the hierarchical cultures we know today. Knowledge is power. Those controlling knowledge, information, and the means to disseminate information became group leaders, with privileges that others in the group did not have. In many early state societies, remnants of the perks of leadership remain: pyramids, castles, lavish household goods, and more.

Some of the information that has survived, carved in stone or baked on tablets or drawn in pictograms, extolled the wonder and power of the leaders. Often these messages were reminders to the common people that the leader controlled their lives. Others were created to ensure that an individual leader would be remembered for his great prowess, his success in battle, or his great leadership skills. Without means to verify the claims, it’s hard to know whether the information was true or fake news.

In the sixth century AD, Procopius of Caesarea (500–ca. 554 AD), the principal historian of Byzantium, used fake news to smear the Emperor Justinian.2 While Procopius supported Justinian during his lifetime, after the emperor’s death Procopius released a treatise called Secret History that discredited the emperor and his wife. As the emperor was dead, there could be no retaliation, questioning, or investigations. Since the new emperor did not favor Justinian, it is possible the author had a motivation to distance himself from Justinian’s court, using the stories (often wild and unverifiable) to do so.

Post–Printing Press Era

The invention of the printing press and the concurrent spread of literacy made it possible to spread information more widely. Those who were literate could easily use that ability to manipulate the information given to those who were not. As more people became literate, it became more difficult to mislead by misrepresenting what was written.

As literacy rates increased, it eventually became economically feasible to print and sell information. This made the ability to write convincingly and authoritatively on a topic a powerful skill. Leaders have always sought to have talented writers in their employ and to control what information was produced.


Printed information became available in different formats and from different sources. Books, newspapers, broadsides, and cartoons were often created by writers who had a monetary incentive. Some were paid by a publisher to provide real news. Others, it seems, were paid to write information for the benefit of their employer.

In 1522, Italian author and satirist Pietro Aretino wrote wicked sonnets, pamphlets, and plays. He self-published his correspondence with the nobility of Italy, using their letters to blackmail former friends and patrons. If those individuals failed to provide the money he required, their indiscretions became public. He took the Roman style of pasquino, anonymous lampooning, to a new level of satire and parody. While his writings were satirical (not unlike today’s Saturday Night Live satire), they planted the seeds of doubt in the minds of their readers about the people in power in Italy and helped to shape the complex political reality of the time.3

Aretino’s pasquinos were followed by a French variety of fake news known as the canard. The French word canard can be used to mean an unfounded rumor or story. Canards were rife during the seventeenth century in France. One canard reported that a monster, captured in Chile, was being shipped to France. This report included an engraving of a dragon-like creature. During the French Revolution the face of Marie Antoinette was superimposed onto the dragon. The revised image was used to disparage the queen.4 The resulting surge in unpopularity for the queen may have contributed to her harsh treatment during the revolution.

Jonathan Swift complained about political fake news in 1710 in his essay “The Art of Political Lying.” He spoke about the damage that lies can do, whether ascribed to a particular author or anonymous: “Falsehood flies, and truth comes limping after it, so that when men come to be undeceived, it is too late; the jest is over, and the tale hath had its effect.”5 Swift’s descriptions of fake news in politics in 1710 are remarkably similar to those of writers of the twenty-first century.

American writer Edgar Allan Poe in 1844 wrote a hoax newspaper article claiming that a balloonist had crossed the Atlantic in a hot air balloon in only three days.6 His attention to scientific details and the plausibility of the idea caused many people to believe the account until reporters failed to find the balloon or the balloonist. The story was retracted four days after publication. Poe is credited with writing at least six stories that turned out to be fake news.7

Mass Media Era

Father Ronald Arbuthnott Knox did a fake news broadcast in January 1926 called “Broadcasting the Barricades” on BBC radio.8 During this broadcast Knox implied that London was being attacked by Communists, Parliament was under siege, and the Savoy Hotel and Big Ben had been blown up. Those who tuned in late did not hear the disclaimer that the broadcast was a spoof and not an actual news broadcast. This dramatic presentation, coming only a few months after the General Strike in England, caused a minor panic until the story could be explained.

This fake news report was famously followed by Orson Welles’s War of the Worlds broadcast in 1938. The War of the Worlds was published as a book in 1898, but those who did not read science fiction were unfamiliar with the story. The presentation of the story as a radio broadcast again caused a minor panic, this time in the United States, as there were few clues to indicate that reports of a Martian invasion were fictional. While this broadcast was not meant to be fake news, those who missed the introduction didn’t know that.9

On November 3, 1948, the Chicago Daily Tribune editors were so certain of the outcome of the previous day’s presidential election that they published the paper with a headline stating, “Dewey Defeats Truman.” An iconic picture shows President Truman holding up the newspaper with the erroneous headline. The caption for the picture quotes Truman as saying, “That ain’t the way I heard it.”10 The paper, of course, retracted the statement and reprinted the paper with the correct news later in the day. This incident is one reason that journalists at reputable news outlets are required to verify information a number of times before publication.

It is easy to see that fake news has existed for a long time. From the few examples described above, the effects of fake news have ranged widely, from amusement to death. Some authors of fake news probably had benign motivations for producing it. Others appear to have intended to harm individuals, families, or governments. The intended and unintended consequences of fake news of the pre-internet era were profound and far-reaching for the time. As the means of spreading fake news increased, the consequences became increasingly serious.

Internet Era

In the late twentieth century, the internet provided new means for disseminating fake news on a vastly increased scale. When the internet was made publicly available, it was possible for anyone who had a computer to access it. At the same time, innovations in computers made them affordable to the average person. Making information available on the internet became a new way to promote products as well as to make information available to everyone almost instantly.


Some fake websites were created in the early years of generalized web use. Some of these hoax websites were satire. Others were meant to mislead or deliberately spread biased or fake news. Early library instruction classes used these types of website as cautionary examples of what an internet user needed to look for. Using a checklist of criteria to identify fake news websites was relatively easy. A few hoax website favorites are:

• DHMO.org. This website claims that the compound DHMO (Dihydrogen Monoxide), a component of just about everything, has been linked to terrible problems such as cancer, acid rain, and global warming. While everything suggested on the website is true, it is not until one’s high school chemistry kicks in that the joke is revealed: DHMO and H2O are the same thing.

• Feline Reactions to Bearded Men. Another popular piece of fake news is a “research study” regarding the reactions of cats to bearded men. This study is reported as if it had been published in a scientific journal. It includes a literature review, a description of the experiment, the raw data resulting from the experiment, and the conclusions reached by the researchers as a result. It is not until the reader gets to the bibliography of the article that the experiment is revealed to be a hoax. Included in the bibliography are articles supposedly written by Madonna Louise Ciccone (Madonna the singer), A. Schwartzenegger (Arnold, perhaps?), and Doctor Seuss and published in journals such as the Western Musicology Journal, Tonsological Proceedings, and the Journal of Feline Forensic Studies.

• city-mankato.us. One of the first websites to make use of website technology to mislead and misdirect was a fake site for the city of Mankato, Minnesota. This website describes the climate as temperate to tropical, claiming that a geological anomaly allows the Mankato Valley to enjoy a year-round temperature of no less than 70 degrees Fahrenheit, while providing snow year-round at nearby Mount Kroto. It reported that one could watch the summer migration of whales up the Minnesota River. An insert shows a picture of a beach, with a second insert showing the current temperature, both tropical. The website proudly announces that it is a Yahoo “Pick of the Week” site and has been featured by the New York Times and the Minneapolis Star Tribune. Needless to say, no geological anomaly of this type exists in Minnesota. Whales do not migrate up (or down) the Minnesota River at any time, and the pictures of the beaches and the thermometer are actually showing beaches and temperatures from places very far south of Mankato. It is true that Yahoo, the New York Times, and the Minneapolis Star Tribune featured this website, but not for the reasons you might think. When fake news could still be amusing, this website proved both clever and ironic.

• MartinLutherKing.org. This website was created by Stormfront, a white supremacist group, to try to mislead readers about the Civil Rights activist by discrediting his work, his writing, and his personal life.11 The fact that the website used the .org domain extension convinced a number of people that it was unbiased because the domain extension was usually associated with nonprofit organizations working for good. The authors of the website did not reveal themselves, nor did they state their affiliations. Using Martin Luther King’s name for the website ensured that people looking for information about King could easily arrive at this fake news website. This website is no longer active.

HOAX Websites

DHMO.org: www.dhmo.org

“Feline Reactions to Bearded Men”: www.improbable.com/airchives/classical/cat/cat.html

“Mankato, Minnesota”: http://city-mankato.us

“Martin Luther King, Jr.”: www.martinlutherking.org

Global Reach of Fake News

Initial forays into the world of fake news fall into the category of entertainment, satire, and parody. They are meant to amuse or to instruct the unwary. Canards and other news that fall into the category of misinformation and misdirection, like the Martin Luther King website, often have more sinister and serious motives. In generations past, newspaper readers were warned that just because something was printed in the newspaper did not mean that it was true. In the twenty-first century, the same could be said about the internet. People of today create fake news for many of the same reasons that people of the past did. A number of new twists help to drive the creation and spread of fake news that did not exist until recently.

Twenty-first-century economic incentives have increased the motivation to supply the public with fake news. The internet is now funded by advertisers rather than by the government.


Advertisers are in business to get information about their products to as many people as possible. Advertisers will pay a website owner to allow their advertising to be shown, just as they might pay a newspaper publisher to print advertisements in the paper. How do advertisers decide in which websites to place their ads? Using computing power to collect the data, it is possible to count the number of visits and visitors to individual sites. Popular websites attract large numbers of people who visit those sites, making them attractive to advertisers. The more people who are exposed to the products advertisers want to sell, the more sales are possible. The fee paid to the website owners by the advertisers rewards website owners for publishing popular information and provides an incentive to create more content that will attract more people to the site.

People are attracted to gossip, rumor, scandal, innuendo, and the unlikely. Access Hollywood on TV and the National Enquirer at the newsstand have used human nature to make their products popular. That popularity attracts advertisers. In a Los Angeles Times op-ed, Matthew A. Baum and David Lazer report “Another thing we know is that shocking claims stick in your memory. A long-standing body of research shows that people are more likely to attend to and later recall a sensational or negative headline, even if a fact checker flags it as suspect.”12

In the past several years, people have created websites that capitalize on those nonintellectual aspects of human nature. Advertisers are interested in how many people will potentially be exposed to their products, rather than the truth or falsity of the content of the page on which the advertising appears. Unfortunately, sites with sensational headlines or suggestive content tend to be very popular, generating large numbers of visits to those sites and creating an advertising opportunity. Some advertisers will capitalize on this human propensity for sensation by paying writers of popular content without regard for the actual content at the site. The website can report anything it likes, as long as it attracts a large number of people. This is how fake news is monetized, providing incentives for writers to concentrate on the sensational rather than the truthful.

The problem with most sensational information is that it is not always based on fact, or those facts are twisted in some way to make the story seem like something it is not. It is sometimes based on no information at all. For example:

Creators of fake news found that they could capture so much interest that they could make money off fake news through automated advertising that rewards high traffic to their sites. A man running a string of fake news sites from the Los Angeles suburbs told NPR he made between $10,000 and $30,000 a month. A computer science student in the former Soviet republic of Georgia told the New York Times that creating a new website and filling it with both real stories and fake news that flattered Trump was a “gold mine.”13

Technological advances have increased the spread of information and democratized its consumption globally. There are obvious benefits associated with instantaneous access to information. The dissemination of information allows ideas to be shared and formerly inaccessible regions to be connected. It makes choices available and provides a platform for many points of view.

However, in a largely unregulated medium, supported and driven by advertising, the incentive for good is often outweighed by the incentive to make money, and this has a major impact on how the medium develops over time. Proliferation of fake news is one outcome. While the existence of fake news is not new, the speed at which it travels and the global reach of the technology that can spread it are unprecedented. Fake news exists in the same context as real news on the internet. The problem seems to be distinguishing between what is fake and what is real.

Notes

1. Michela Del Vicario, Alessandro Bessi, Fabiana Zollo, Fabio Petroni, Antonio Scala, Guido Caldarelli, H. Eugene Stanley, and Walter Quattrociocchi, “The Spreading of Misinformation Online,” Proceedings of the National Academy of Sciences of the United States of America 113, no. 3 (January 19, 2016): 534, https://doi.org/10.1073/pnas.1517441113.

2. Procopius, Secret History, trans. Richard Atwater (New York: Covici Friede; Chicago: P. Covici, 1927; repr. Ann Arbor: University of Michigan Press, 1961), https://sourcebooks.fordham.edu/basis/procop-anec.asp.

3. “Pietro Aretino,” Wikipedia, last updated August 7, 2017, https://en.wikipedia.org/wiki/Pietro_Aretino.

4. Robert Darnton, “The True History of Fake News,” NYR Daily (blog), New York Review of Books, February 13, 2017, http://www.nybooks.com/daily/2017/02/13/the-true-history-of-fake-news/.

5. Jonathan Swift, “The Art of Political Lying,” Examiner, no. 14 (November 9, 1710), para. 9, repr. in Richard Nordquist, “The Art of Political Lying, by Jonathan Swift,” ThoughtCo., last updated March 20, 2016, https://www.thoughtco.com/art-of-political-lying-by-swift-1690138.

6. Edgar Allan Poe, “The Balloon Hoax,” published 1844, reprinted in PoeStories.com, accessed September 6, 2017, https://poestories.com/read/balloonhoax.

7. Gilbert Arevalo, “The Six Hoaxes of Edgar Allan Poe,” HubPages, last updated March 30, 2017, https://hubpages.com/literature/The-Six-Hoaxes-of-Edgar-Allan-Poe.

8. A. Brad Schwartz, “Broadcasting the Barricades,” A. Brad Schwartz website, January 16, 2015, https://abradschwartz.com/2015/01/16/broadcasting-the-barricades/.


9. “The War of the Worlds (radio drama),” Wikipedia, last updated August 24, 2017, https://en.wikipedia.org/wiki/The_War_of_the_Worlds_(radio_drama).

10. Tim Jones, “Dewey Defeats Truman,” Chicago Tribune website, accessed September 6, 2017, www.chicagotribune.com/news/nationworld/politics/chi-chicagodays-deweydefeats-story-story.html.

11. Keith Thomson, “White Supremacist Site MartinLutherKing.org Marks 12th Anniversary,” The Blog, HuffPost, last updated May 26, 2011, www.huffingtonpost.com/entry/white-supremacist-site-ma_b_809755.html.

12. Matthew A. Baum and David Lazer, “Google and Facebook Aren’t Fighting Fake News with the Right Weapons,” op-ed, Los Angeles Times, May 8, 2017, www.latimes.com/opinion/op-ed/la-oe-baum-lazer-how-to-fight-fake-news-20170508-story.html.

13. Angie Drobnic Holan, “2016 Lie of the Year: Fake News,” PolitiFact, December 13, 2016, www.politifact.com/truth-o-meter/article/2016/dec/13/2016-lie-year-fake-news/.


Chapter 2

How Fake News Spreads

Word of Mouth

News has always been disseminated by word of mouth. Early humans lived in small groups, moving from place to place as needs required. As the human population grew, there was greater need for communication. Contact between groups became more common, and the connections between groups became more complex.1 News was still spread by word of mouth, but there was more to tell. There were, of course, subsistence details to convey, but there was also family news to share, gossip to pass on, fashion trends to consider, and theological questions to answer. There were few means to verify news that came from outside the local group. If a traveler arrived from a distance and said that the people in the next large town were wearing silk rather than skins, there was no way to verify this information without visiting the distant place in person.

Presumably as people came to view local resources as belonging to the group, there might have been incentive to mislead outsiders about the size of the population protecting those resources or to understate the quality or amount of resources. If a resource was scarce or valuable, there might be reason to provide misinformation. However, because news was oral, there is no record. We can’t know exactly what was said.

Written Word

Groups began to create tools that would allow them to tell a story, keep track of numbers, give directions, and so on about the same time as populations became sedentary and began to grow. In the Middle East, farmers, landowners, politicians, and family historians began to invent the means to keep track of, remember, and convey information.2 Some groups used pictures, some used counting devices, and eventually systems of writing were born. Written information posed its own set of problems.

First, there is the problem of writing material. Some people used stone for a writing surface.3 Marking stone takes a lot of time and effort. The result is permanent, but it is hard to carry around. Some groups used clay as a writing surface.4 This is a terrific material to use if you want to make your information permanent. Mark the clay, fire it, and the information is available for a long period of time. The downside of clay is that it is relatively heavy, it takes up a lot of room, and it breaks easily. This makes it somewhat difficult to transport. The Egyptians used papyrus (labor intensive and expensive).5 Native Americans used tree bark (delicate and easily damaged).6 People with herds of animals used animal skins to make parchment and vellum (not always available when required, lots of preparation needed).7 The Incas used knotted cords called quipus that acted as mnemonic devices as well as counting devices.8

Second, not everyone knew the secret of how to interpret the writing between groups or even inside a group. If knowledge is power, knowing how to read allowed people to assume the reins of power and to limit access to information, thus controlling what people did or did not know. This control made people dependent on those who knew the secret. As we saw above, some people did not hesitate to offer fake news to serve their own purposes and to manipulate or influence those who could not read.

While the elite used systems of writing, the nonliterate members of the group would have continued to use word-of-mouth transmission of information.


Information was conveyed from those in power by proclamation. A representative of the leader would be sent to read out a message to those who could not read but who had a need to know. Again there was no guarantee that the information being read was written truthfully, nor that it was read accurately to the nonliterate public. What people knew in the early stages of literacy was controlled by the literate.

Different writing systems required translators to convey information between groups. Here again, the honesty and/or accuracy of the translation had a large effect on the exact information that people received. The same is true today. We often see articles that essentially “translate” information from highly technical and specialized fields into information most people can understand. The translator’s motives can influence what is reported and what language is used to report it. In the Wild West of the internet world, it’s hard to know what a translator’s motives are without spending an inordinate amount of time checking out the author’s credentials.

Printed Media

As more people became literate, it became harder to control information. More information appeared in printed form. More kinds of information were shared.9 Printed information was carried from place to place, and as new and faster means of transportation became available, people got news faster and more often. As means of spreading news widely and quickly, without intervention or translation, became more common, it was harder to control the messages people saw and heard. Newspapers, magazines, telegraph, and eventually radio, television, and the internet provided multiple avenues to transmit information without necessarily getting permission from the state or other power holder. As new media inventions became viable, they were used to share the news and other information, creating a wide range of options for news seekers.

Internet

With the birth and spread of the internet, it was thought that a truly democratic and honest means of sharing information had arrived. Control of the content accessible via the internet is difficult (but not impossible), making former information power holders less powerful. Anyone with access and a desire to share their thoughts could use the internet to do so. At first the technological requirements for creating a web page were beyond most individuals, but companies who saw a market built software that allowed “non-programmers” to create a web page without any knowledge of the computer code that was actually responsible for transmitting the message.

Information can now come from anywhere and at any time. Literally billions of actors can participate in the spread of information. The rate of flow of information and the sheer volume of information are overwhelming and exhausting. The democratization of information allows everyone and anyone to participate, and it includes information from bad actors, biased viewpoints, and ignorant or uninformed opinion, all coming at internet users with the velocity of a fire hose. The glut of information is akin to having no information at all, as true information looks exactly like untrue, biased, and satirical information.

Added to the overwhelming amount of information available today is the impossibility for anyone to know something about everything. The details about how things work or what makes them function are beyond most individuals. What makes a cellphone work? What happens when you store something “in the cloud”? How does a hybrid car engine know which part of the engine to use when? What is the statistical margin of error, and how does it affect polls? Are vaccines harmful? Did the Holocaust really happen? Arthur C. Clarke’s Third Law states, “Any sufficiently advanced technology is indistinguishable from magic.”10 What this means in terms of fake news is that people are vulnerable to being misinformed because, in a world where all things seem possible, they have little or no basis for separating truth from fiction. It’s hard to find a trusted source, so all sources must be trustworthy or all must be suspect.

When the internet was made available to the general public in the 1990s, it was seen as a means of democratizing access to information. The amount of information that became available began as a trickle and turned into a Niagara, fed by a roaring river of new content. It became wearisome and then almost impossible to find a single piece of information in the torrent. Search engines were developed that used both human and computer power to sort, categorize, and contain much of the content on the internet. Eventually Google became the go-to means for both access to and control of the flood of information available, becoming so common that Google became a verb.

Computerization of information has a number of benefits. Large amounts of information can be stored in increasingly small spaces. Records of many kinds have become public because they can be conveyed electronically. With the advent of the internet, people can benefit from the combination of computerization and access, allowing information to be sent and received when and where it is needed. New devices have been invented to supply the fast and furious appetite for information. New types of information and new avenues for communication have become commonplace in the last decade.


More and newer versions of devices and platforms appear with increasing frequency. Originally this explosion of information available to the public was viewed as the democratization of power for the benefit of everyone, but this view didn’t last long.11

This utopian view of the benefits of the computerization of information began to be overshadowed almost immediately. The concept of free information for the masses required that someone other than the consumers of that information pay for it. To make paying for the internet attractive, data was needed. Automatic software programs were developed to perform repetitive tasks that gathered data. These programs were known as bots, short for robots. What they collected became a commodity. Data collected by bots showed what sites were being used and what products were being purchased, by whom, and how often. This information could be used to convince advertisers to pay to place their advertisements on websites. The data could also be offered for sale to prospective clients to use for their own purposes. Through using bots, it became possible to harvest a wide variety of information that could be sold. Once bots were successfully programmed to collect and send information, that ability was expanded for uses far beyond simple advertising.

Social Media

The advent of social media presented another opportunity for advertising to specific and targeted groups of people. On social media sites such as Facebook and Twitter, information is often personal. These platforms are used to find like-minded people, to stay in touch with family and friends, to report the news of the day, and to create networks among people. These platforms provide an easy way to share information and to make connections. Social media networks provide a shorthand method of communication using icons to indicate approval and various emotions. This allows people to respond to items posted on their pages without actually having to write something themselves. If they enjoy something, the push of a button allows that message to be conveyed. If they wish to share the information with friends and followers, a single click can accomplish that task. It is possible for bots to be programmed to count those clicks and respond to them.

News outlets, advertisers, political parties, and many others have created web pages that can be directed to the accounts and networks of social media users using programmed algorithms called bots. The bots can be programmed to search for information on the internet that is similar to what a social media user has already clicked on, liked, or shared. They can then inject that new information into what the user sees.12 So, for example, rather than seeing stories from hundreds of news outlets, a bot will find news outlets that are similar to those already being viewed. Bots provide users with easy access to information about things they already like. By following links between accounts, bots can push information to the friends of a user as well. This means that friends begin to see the same array of information. Eventually one user and the friends and followers of that individual are seeing only information they agree with. This creates an information bubble that makes it appear that the likes of the group inside the bubble represent the likes of the majority of people (because the group inside the bubble never sees anything contrary to its preferences).
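The loop just described (click, then similar content, then a narrower feed) can be seen in a deliberately naive ranking sketch. Everything below is invented for illustration; it is not any platform's actual algorithm:

```python
from collections import Counter

def next_feed(candidates, click_history, size=10):
    """Rank candidate stories by topic overlap with what the user
    already clicked; the more a topic is clicked, the more of it
    surfaces. (Illustrative only; not any platform's real ranker.)"""
    topic_weight = Counter(story["topic"] for story in click_history)
    ranked = sorted(candidates,
                    key=lambda s: topic_weight[s["topic"]],
                    reverse=True)
    return ranked[:size]

# After a few rounds, stories outside the user's favored topics
# stop surfacing at all: the information bubble closes.
history = [{"topic": "celebrity"}] * 8 + [{"topic": "science"}] * 2
pool = [{"topic": t, "id": i} for i, t in
        enumerate(["celebrity", "science", "politics", "health"] * 5)]
print([s["topic"] for s in next_feed(pool, history)])
# -> five 'celebrity' and five 'science'; 'politics' and 'health'
#    never surface in the ten-item feed
```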

In Imperva Incapsula’s 2015 annual report on impersonator bot and bad bot traffic trends, Igal Zeifman states, “The extent of this threat is such that, on any given day, over 90 percent of all security events on our network are the result of bad bot activity.”13 Social and political bots have been used for the purposes of collecting and sharing information. In the last decade, there has been a concerted effort to design bots and bot practices that work to steer populations in general toward a particular way of thinking; to prevent people from organizing around a specific cause; and to misdirect, misinform, or propagandize about people and issues.14 The bots work much faster than humans can and work 24/7 to carry out their programming.

Humans assist bots in their work by liking and sharing information the bots push at them, often without reading the information they are sending along. Tony Haile, CEO of Chartbeat, studied “two billion visits across the web over the course of a month and found that most people who click don’t read. In fact, a stunning 55% spent fewer than 15 seconds actively on a page. . . . We looked at 10,000 socially-shared articles and found that there is no relationship whatsoever between the amount a piece of content is shared and the amount of attention an average reader will give that content.”15 This means that once a message has reached a critical number of people via bots, those people will assist in the spread of that information even though more than half of them will not have read it. The manipulation of computer code for social media sites allows fake news to proliferate and affects what people believe, often without ever having been read beyond the headline or caption.

Notes

1. “History of Communication,” Wikipedia, last updated August 28, 2017, https://en.wikipedia.org/wiki/History_of_communication.

2. Joshua J. Mark, “Writing,” Ancient History Encyclopedia, April 28, 2011, www.ancient.eu/writing/.

3. “Stone Carving,” Wikipedia, last updated August 30, 2017, https://en.wikipedia.org/wiki/Stone_carving.


4. “Clay Tablet,” Wikipedia, last updated August 25, 2017, https://en.wikipedia.org/wiki/Clay_tablet.

5. Joshua J. Mark, “Egyptian Papyrus,” Ancient History Encyclopedia, November 8, 2016, www.ancient.eu/Egyptian_Papyrus/.

6. “Uses for Birchbark,” NativeTech: Native American Technology and Art, accessed September 6, 2017, www.nativetech.org/brchbark/brchbark.htm.

7. “Differences between Parchment, Vellum and Paper,” National Archives website, US National Archives and Records Administration, accessed September 6, 2017, https://www.archives.gov/preservation/formats/paper-vellum.html.

8. Mark Cartwright, “Quipu,” Ancient History Encyclopedia, May 8, 2014, www.ancient.eu/Quipu/.

9. Winstone Arradaza, “The Evolution of Print Media,” Prezi presentation, November 11, 2013, https://prezi.com/qpmlecfqibmh/the-evolution-of-print-media/; “A Short History of Radio with an Inside Focus on Mobile Radio,” Federal Communications Commission, Winter 2003–2004, https://transition.fcc.gov/omd/history/radio/documents/short_history.pdf; “Morse Code and the Telegraph,” History.com, accessed September 6, 2017, www.history.com/topics/inventions/telegraph; Andrew Anthony, “A History of the Television, the Technology That Seduced the World—and Me,” Guardian, September 7, 2013, https://www.theguardian.com/tv-and-radio/2013/sep/07/history-television-seduced-the-world.

10. Arthur C. Clarke, Profiles of the Future: An Inquiry into the Limits of the Possible (London: V. Gollancz, 1973), 39.

11. Peter Ferdinand, “The Internet, Democracy and Democratization,” Democratization 7, no. 1 (2000): 1–17, https://doi.org/10.1080/13510340008403642.

12. Tarleton Gillespie, “The Relevance of Algorithms,” in Media Technologies: Essays on Communication, Materiality and Society, ed. Tarleton Gillespie, Pablo J. Boczkowski, and Kirsten A. Foot (Cambridge, MA: MIT Press, 2014), 167–94; Alessandro Bessi and Emilio Ferrara, “Social Bots Distort the 2016 U.S. Presidential Election Online Discussion,” First Monday 21, no. 11 (November 7, 2016), http://journals.uic.edu/ojs/index.php/fm/rt/printerFriendly/7090/5653; Tim Hwang, Ian Pearce, and Max Nanis, “Socialbots: Voices from the Fronts,” Interactions, March/April 2012: 38–45; Emilio Ferrara, Onur Varol, Clayton Davis, Filippo Menczer, and Alessandro Flammini, “The Rise of Social Bots,” Communications of the ACM 59, no. 7 (July 2016): 96–104.

13. Igal Zeifman, “2015 Bot Traffic Report: Humans Take Back the Web, Bad Bots Not Giving Any Ground,” Imperva Incapsula Blog, December 9, 2015, https://www.incapsula.com/blog/bot-traffic-report-2015.html.

14. Samuel C. Woolley, “Automating Power: Social Bot Interference in Global Politics,” First Monday 21, no. 4 (April 4, 2016), http://firstmonday.org/ojs/index.php/fm/article/view/6161/5300; Peter Pomerantsev and Michael Weiss, The Menace of Unreality: How the Kremlin Weaponizes Information, Culture and Money (Institute of Modern Russia and The Interpreter, 2014), www.interpretermag.com/wp-content/uploads/2015/07/PW-31.pdf; Bence Kollanyi, Philip N. Howard, and Samuel C. Woolley, Bots and Automation over Twitter during the U.S. Election, Data Memo 2016.4 (Oxford, UK: Project on Computational Propaganda, November 2016), http://comprop.oii.ox.ac.uk/wp-content/uploads/sites/89/2016/11/Data-Memo-US-Election.pdf; Paul Roderick Gregory, “Inside Putin’s Campaign of Social Media Trolling and Fake Ukrainian Crimes,” Forbes, May 11, 2014, https://www.forbes.com/sites/paulroderickgregory/2014/05/11/inside-putins-campaign-of-social-media-trolling-and-faked-ukrainian-crimes/; Brian T. Gaines, James H. Kuklinski, Paul J. Quirk, Buddy Peyton, and Jay Verkuilen, “Same Facts, Different Interpretations: Partisan Motivation and Opinion on Iraq,” Journal of Politics 69, no. 4 (November 2007): 957–74; Sara El-Khalili, “Social Media as a Government Propaganda Tool in Post-revolutionary Egypt,” First Monday 18, no. 3 (March 4, 2013), http://firstmonday.org/ojs/index.php/fm/rt/printerFriendly/4620/3423.

15. Tony Haile, “What You Think You Know about the Web Is Wrong,” Time.com, March 9, 2014, http://time.com/12933/what-you-think-you-know-about-the-web-is-wrong/.


Chapter 3

Can Technology Save Us?

Technology of Fake News

Fake news sites target the filter bubbles of groups most aligned with that news, and they use the power of social media to do so. Initially, fake news of the social media era was relatively easy to spot. The claims of early social media fake news purveyors were often meant as entertainment, and language, fonts, and links were often indicators that could be used to determine veracity. It took only a short time for fake news to become more insidious, more plentiful, more subtle, and subverted for the manipulation of information and public opinion. Fake news has many new social media outlets where it can appear, and it can spread quickly via both human and nonhuman actors. During the 2016 presidential election cycle, for example, fake news appeared often.1 Determining what news was to be believed and what news was to be ignored became more a matter of party affiliation than of good sense.

Fake news sites and stories are shared for many different reasons. Some readers find the stories amusing. Some find them alarming. Others find them affirming of their beliefs. Many people share fake news without ever having read the content of the article.2 Sharing fake news, whether because it is amusing or because people think it is real, only exacerbates the problem. Did Pope Francis endorse candidate Donald Trump? No, but that didn’t stop the story from appearing on social media and spreading widely.3 Did Hillary Clinton run a child sex ring out of a Washington, DC, pizza shop? No, but that didn’t stop a man with a gun from going there to exact vengeance.4

In the early days of the internet, fake news was not a big problem. There were some websites that sought to spoof, mislead, or hoax, but mostly it was all in good fun. While some websites sought to spread misinformation, their numbers were limited, and the authority to shut down malicious websites seemed to be invoked more often. Creating a website on the early internet took time, effort, and computer programming skills, which limited the number of people who could create fake news sites.

During the last decade, as an offshoot of the stream of information provided by the internet, social media platforms such as Facebook and MySpace were created so that individuals could connect with others on the internet: to point them to websites, share comments, describe events, and so on.

Following that came the invention of another type of social media—Twitter—which allows people to send very brief messages, usually about current events, to others who choose to receive those messages. One could choose to “follow” former President Barack Obama’s Twitter postings—to know where he is going, what is on his agenda, or what is happening at an event. This kind of information can be very useful for getting on-site information as it happens. It has proved useful in emergency situations as well. For example, during the Arab Spring uprisings, Twitter communications provided information in real time as events unfolded.5 During Hurricane Sandy, people were able to get localized and specific information about the storm as it happened.6 Twitter is also a convenient means of socializing, of getting directions, and of keeping up-to-date on the activities of friends and family.

The power of the various tools that draw on the internet and the information supplied there is enormous. The spread of the technology required to make use of these tools has been rapid and global. As with most tools, the power of the internet can be used for both good and evil. In the last decade, the use of the internet to manipulate, manage, and mislead has seen a massive upswing.

Big Data

The collection of massive amounts of data using bots has generated a new field of study known as “big data.”7 Some big data research applies to the activities of people who use the internet and social media. By gathering and analyzing large amounts of data about how people use the internet, how they use social media, what items they like and share, and how many people overall click on a link, advertisers, web developers, and schemers can identify what appear to be big trends. Researchers are concerned that big data can hide biases that are not necessarily evident in the data collected, and the trends identified may or may not be accurate.8 The use of big data about social media and internet use can result in faulty assumptions and create false impressions about what groups or people do or do not like. Manipulators can use the big data they have collected to “nudge” people toward particular actions.9 They can also use it to create bots designed to influence populations.10

Bots

The information-collecting capabilities made possible by harnessing computer power to gather and analyze massive amounts of data are used by institutions, advertisers, pollsters, and politicians. The bots that collect the information are essentially pieces of computer code that respond automatically when given the right stimulus. For example, a bot can be programmed to search the internet for particular words or groups of words. When the bot finds the word or words it is looking for, its programming makes note of their location and does something with them. Using bots speeds up the process of finding and collecting sites that have the required information. The use of bots to collect data and to send data to specific places allows research to progress in many fields. They automate tedious and time-consuming processes, freeing researchers to work on other tasks.
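To make this concrete, here is a minimal sketch, in Python, of the find-and-note step described above. The pages, URLs, and keywords are invented for illustration; a real bot would fetch live pages rather than read a hard-coded dictionary.

```python
# A minimal sketch of the keyword-scanning "bot" behavior described above.
# The documents and keywords here are made-up examples, not from the report.

SAMPLE_PAGES = {
    "https://example.com/a": "City council votes on new budget tonight",
    "https://example.com/b": "Celebrity spotted at local cafe",
    "https://example.com/c": "Budget shortfall sparks protest downtown",
}

KEYWORDS = {"budget", "protest"}

def scan(pages, keywords):
    """Record where any keyword appears, mimicking a crawler's find-and-note step."""
    hits = []
    for url, text in pages.items():
        found = {w for w in keywords if w in text.lower()}
        if found:
            hits.append((url, sorted(found)))  # "make note of the location"
    return hits

for url, words in scan(SAMPLE_PAGES, KEYWORDS):
    print(f"{url} mentions {', '.join(words)}")
```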

Automated programming does good things for technology. “Good” bots do four main jobs:

1. They crawl the web and find website content to send to mobile and web applications and display to users.

2. They search for information that allows ranking decisions to be made by search engines.

3. Where use of data has been authorized, bot “crawlers” collect that data to supply information to marketers.

4. Monitoring bots follow website availability and check the proper functioning of online features.

This kind of data collection is useful to those who want to know how many people have looked at the information they have provided. “In 1994, a former direct mail marketer called Ken McCarthy came up with the clickthrough as the measure of ad performance on the web. The click’s natural dominance built huge companies like Google and promised a whole new world for advertising where ads could be directly tied to consumer action.”11 Counting clicks is a relatively easy way to assess how many people have visited a website. However, counting clicks has become one of the features of social media that determines how popular or important a topic is. Featuring and repeating topics based solely on click counts is one reason that bots are able to manipulate what is perceived as popular or important. Bots can disseminate information to large numbers of people. Human interaction with any piece of information is usually very brief before a person passes that information along to others. The number of shares results in large numbers of clicks, which pushes the bot-supplied information into the “trending” category even if the information is untrue or inaccurate. Information that is trending is considered important.
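The following toy calculation, with invented numbers, illustrates why ranking by raw click counts alone cannot distinguish organic interest from bot amplification.

```python
# Hypothetical numbers illustrating the click-count "trending" problem above:
# a click-count-only ranking cannot tell organic readers from a botnet.

from collections import Counter

clicks = Counter()

def record_click(story_id):
    clicks[story_id] += 1

# Organic readers click a real story a few hundred times...
for _ in range(300):
    record_click("real-story")

# ...while a small botnet clicks a false story far more often.
for _ in range(50):          # 50 bots
    for _ in range(40):      # 40 clicks each
        record_click("false-story")

# Ranking by raw clicks puts the bot-driven story on top of the "trending" list.
for story, n in clicks.most_common():
    print(story, n)
```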

Good bots coexist in the technical world with “bad” bots. Bad bots are not used for benign purposes, but rather to spam, to mine users’ data, or to manipulate public opinion. This makes it possible for bots to harm, misinform, and extort. The Imperva Incapsula “2016 Bot Traffic Report” states that approximately 30 percent of traffic on the internet is from bad bots. Further, out of the 100,000 domains that were studied for the report, 94.2 percent experienced at least one bot attack over the ninety-day period of the study.12 Why are bad bots designed, programmed, and set in motion? “There exist entities with both strong motivation and technical means to abuse online social networks—from individuals aiming to artificially boost their popularity, to organizations with an agenda to influence public opinion. It is not difficult to automatically target particular user groups and promote specific content or views. Reliance on social media may therefore make us vulnerable to manipulation.”13

In social media, bots are used to collect information that might be of interest to a user. The bot crawls the internet for information that is similar to what an individual has seen before. That information can then be disseminated to the user who might be interested. By using keywords and hashtags, a website can attract bots searching for specific information. Unfortunately, the bot is not interested in the truth or falsehood of the information itself.

Some social bots are computer algorithms that “automatically produce content and interact with humans on social media, trying to emulate and possibly alter their behavior. Social bots can use spam, malware, misinformation, slander or even just noise” to influence and annoy.14 Political bots are social bots with political motivations. They have been used to artificially inflate support for a candidate by sending out information that promotes a particular candidate or disparages the candidate of the opposite party. They have been used to spread conspiracy theories, propaganda, and false information. Astroturfing is a practice whereby bots create the impression of a grassroots movement supporting or opposing something where none exists. Smoke screening occurs when a bot or botnet sends irrelevant links to a specific hashtag so that followers are inundated with irrelevant information.

When disguised as people, bots propagate negative messages that may seem to come from friends, family or people in your crypto-clan. Bots distort issues or push negative images of political candidates in order to influence public opinion. They go beyond the ethical boundaries of political polling by bombarding voters with distorted or even false statements in an effort to manufacture negative attitudes. By definition, political actors do advocacy and canvassing of some kind or other. But this should not be misrepresented to the public as engagement and conversation. Bots are this century’s version of push polling, and may be even worse for society.15

Social bots have become increasingly sophisticated, such that it is difficult to distinguish a bot from a human. In 2014, Twitter revealed in an SEC filing that approximately 8.5 percent of all its users were bots, and that number may have increased to as much as 15 percent in 2017.16 Humans who don’t know that the entity sending them information is a bot may easily be supplied with false information.

Experiments in Fake News Detection

Researchers have studied how well humans can detect lies. Bond and DePaulo analyzed the results of more than 200 lie detection experiments and found that humans can detect lies in text only slightly better than random chance.17 This means that if a bot supplies a social media user with false information, that person has just a little better than a 50 percent chance of identifying the information as false. In addition, because some bots have presented themselves and been accepted by humans as “friends,” they become trusted sources, making the detection of a lie even more difficult.

To improve the odds of identifying false information, computer experts have been working on multiple approaches to the computerized automatic recognition of true and false information.18

Written Text

Written text presents a unique set of problems for the detection of lies. While structured text, like that on insurance claim forms, uses limited and mostly known language, unstructured text, like that found on the web, has an almost unlimited language domain that can be used in a wide variety of contexts. This presents a challenge when looking for ways to automate lie detection. Two approaches have been used recently to identify fake news in unstructured text. Linguistic approaches look at word patterns and word choices, and network approaches look at network information, such as the location from which the message was sent, the speed of response, and so on.19

Linguistic Approaches to the Identification of Fake News

The following four linguistic approaches are being tested by researchers:

In the Bag of Words approach, each word in a sentence, paragraph, or article is considered as a separate unit with equal importance compared to every other word. Frequencies of individual words and identified multiword phrases are counted and analyzed. Part of speech, location-based words, and counts of the use of pronouns, conjunctions, and negative emotion words are all considered. The analysis can reveal patterns of word use, and certain patterns can reliably indicate that information is untrue. For example, deceptive writers tend to use verbs and personal pronouns more often, and truthful writers tend to use more nouns, adjectives, and prepositions.20
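A minimal sketch of the kind of features a Bag of Words pass might count follows; the word lists and the sample sentence are illustrative assumptions, not a validated deception model.

```python
# A toy bag-of-words pass over one text, in the spirit described above.
# The word lists and the example text are invented for illustration.

from collections import Counter
import re

PRONOUNS = {"i", "you", "he", "she", "we", "they", "it"}
NEGATIVE_EMOTION = {"hate", "fear", "awful", "terrible"}

def bag_of_words_features(text):
    """Count word frequencies and a few simple style signals."""
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(words)
    return {
        "total_words": len(words),
        "pronouns": sum(freq[w] for w in PRONOUNS),
        "negative_emotion": sum(freq[w] for w in NEGATIVE_EMOTION),
        "top_terms": freq.most_common(3),
    }

print(bag_of_words_features(
    "They said it was terrible, but we know they hate the truth."))
```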

In the Deep Syntax approach, language structure is analyzed by using a set of rules to rewrite sentences as descriptions of their syntax structures. For example, noun and verb phrases are identified in the rewritten sentences. Comparing the number of syntactic structures of each kind against syntax patterns known to accompany lies can lead to a probability rating for veracity.21
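The sketch below illustrates the rule-counting idea behind this approach under a simplifying assumption: instead of parsing sentences, it starts from a tiny hand-built parse tree that stands in for real parser output.

```python
# A sketch of counting syntactic rewrite rules, as the deep syntax approach does.
# The hand-built tree below is an invented stand-in for real parser output.

from collections import Counter

# (label, children...); leaves are plain strings.
TREE = ("S",
        ("NP", ("PRP", "they")),
        ("VP", ("VBD", "claimed"),
               ("NP", ("DT", "the"), ("NN", "result"))))

def productions(tree, counts):
    """Count rewrite rules such as S -> NP VP to build a syntax profile."""
    label, *children = tree
    kids = [c[0] if isinstance(c, tuple) else c for c in children]
    if any(isinstance(c, tuple) for c in children):
        counts[f"{label} -> {' '.join(kids)}"] += 1
        for c in children:
            if isinstance(c, tuple):
                productions(c, counts)
    return counts

profile = productions(TREE, Counter())
print(profile)  # compare against rule frequencies known to accompany lies
```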

In the Semantic Analysis approach, actual experience of something is compared with something written about the same topic. Comparing written text from a number of authors about an event or experience and creating a compatibility score from the comparison can show anomalies that indicate falsehood. If one writer says the room was painted blue while three others say it was painted green, there is a chance that the first writer is providing false information.22
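A minimal sketch of the compatibility-score idea: one writer’s claims about an event are compared with other accounts of the same event. All of the claims below are invented examples.

```python
# A sketch of a compatibility score: how often does a new account agree with
# the majority of other accounts of the same event? Claims are invented.

ACCOUNTS = [
    {"room_color": "green", "attendees": "dozens"},
    {"room_color": "green", "attendees": "dozens"},
    {"room_color": "green", "attendees": "hundreds"},
]

NEW_ACCOUNT = {"room_color": "blue", "attendees": "dozens"}

def compatibility(new, others):
    """Fraction of the new account's claims that agree with the majority view."""
    agree = 0
    for key, value in new.items():
        majority = max({o[key] for o in others},
                       key=lambda v: sum(o[key] == v for o in others))
        agree += (value == majority)
    return agree / len(new)

print(f"compatibility: {compatibility(NEW_ACCOUNT, ACCOUNTS):.2f}")  # 0.50
```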

In the Rhetorical Structure (RST) approach, an analytic framework identifies the relationships between the linguistic elements of a text. Those relationships can then be plotted in a vector space model (VSM) showing how close to the truth they fall.23
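A small sketch of the vector space step, assuming texts have already been reduced to counts of rhetorical relations; the counts and the cosine comparison below are illustrative, not the published method.

```python
# A sketch of vector space modeling: texts become vectors of rhetorical-relation
# counts and are compared by cosine similarity. The counts are invented.

import math

# counts of RST-style relations (evidence, contrast, elaboration) per text
truthful_centroid = [12, 3, 9]
new_text = [2, 8, 1]

def cosine(a, b):
    """Cosine similarity between two equal-length count vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# low similarity to the truthful centroid would flag the text for review
print(f"{cosine(truthful_centroid, new_text):.2f}")
```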


Networks

In approaches that use network information, human classifiers identify instances of words or phrases that are indicators of deception. Known instances of words used to deceive are compiled to create a database. Databases of known facts are also created from various trusted sources.24 Examples from a constructed database of deceptive words or verified facts can be compared to new writing. Emotion-laden content can also be measured, helping to separate feeling from fact. By linking these databases, existing knowledge networks can be compared to information offered in new text. Disagreements between established knowledge and new writing can point to deception.25
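A minimal sketch of checking new statements against a database of trusted facts; the (subject, predicate, object) facts and claims below are invented examples.

```python
# A sketch of comparing new claims to an established knowledge base of
# (subject, predicate, object) facts; disagreements can point to deception.
# All facts and claims here are invented examples.

TRUSTED_FACTS = {
    ("paris", "capital_of", "france"),
    ("water", "boils_at_celsius", "100"),
}

NEW_CLAIMS = [
    ("paris", "capital_of", "france"),
    ("water", "boils_at_celsius", "50"),
]

def check(claims, facts):
    known_pairs = {(s, p) for s, p, _ in facts}
    for s, p, o in claims:
        if (s, p, o) in facts:
            print(f"supported: {s} {p} {o}")
        elif (s, p) in known_pairs:
            print(f"CONTRADICTED: {s} {p} {o}")  # disagrees with established knowledge
        else:
            print(f"unknown: {s} {p} {o}")

check(NEW_CLAIMS, TRUSTED_FACTS)
```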

Analysis of social network behavior, using multiple reference points, can also help social media platform owners identify fake news.26 Authorship can be verified from internet metadata.27 Location coordination for messages can be used to indicate personal knowledge of an event. The inclusion or exclusion of hyperlinks is also demonstrative of trustworthy or untrustworthy sources. (For example, TweetCred, available as a browser plugin, is software that assigns a credibility score to tweets in real time, based on characteristics of a tweet such as its content, characteristics of the author, and external URLs.28) The presence or absence of images, the total number of images from multiple sources, and their relationships and relevance to the text of a message can also be compared with known norms and are an indicator of the truth of the message. Ironically, all of this information can be collected by bots.
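In the spirit of tools like TweetCred, here is a sketch of feature-based credibility scoring; the features, weights, and example tweet are assumptions for illustration, not TweetCred’s actual model.

```python
# A hedged sketch of feature-based credibility scoring, loosely inspired by the
# kinds of signals named above. Features and weights are invented assumptions.

def credibility_score(tweet):
    """Combine simple trust signals into a 0.0 (low) to 1.0 (high) score."""
    score = 0.0
    score += 0.3 if tweet["has_external_link"] else 0.0
    score += 0.2 if tweet["author_verified"] else 0.0
    score += 0.2 if tweet["account_age_days"] > 365 else 0.0
    score += 0.2 if tweet["followers"] > tweet["following"] else 0.0
    score += 0.1 if tweet["images"] > 0 else 0.0
    return score

example = {
    "has_external_link": True,
    "author_verified": False,
    "account_age_days": 90,
    "followers": 40,
    "following": 800,
    "images": 1,
}
print(credibility_score(example))  # 0.4
```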

Experiments in Bot and Botnet Detection

A variety of experiments have been conducted using multiple processes to create a score for information credibility.29 Research groups are prepared to supply researchers with data harvested from social media sites. Indiana University has launched a project called Truthy.30 As part of that project, researchers have developed an “Observatory on Social Media.” They have captured data about millions of Twitter messages and make that information available, along with their analytical tools, to those who wish to do research. Their system compares Twitter accounts with dozens of known characteristics of bots collected in the Truthy database to help identify bots.

Truthy
http://truthy.indiana.edu/about/

DARPA, the Defense Advanced Research Projects Agency, is a part of the US Department of Defense. It is responsible for the development of emerging technologies that can be used by the US military. In early 2015, DARPA sponsored a competition whose goal was to identify bots known as influence bots. These bots are “realistic, automated identities that illicitly shape discussions on social media sites like Twitter and Facebook, posing a risk to freedom of expression.”31 If a means of identifying these bots could be discovered, it would be possible to disable them. The outcome of the challenge was that a semi-automated process combining inconsistency detection and behavioral modeling, text analysis, network analysis, and machine learning would be the most effective means of identifying influence bots. Human judgment added to the computer processes provided the best results.
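A sketch of that winning recipe under assumed weights and thresholds: several automated signals are combined, and borderline accounts are routed to a human analyst.

```python
# A sketch of a semi-automated pipeline in the spirit of the challenge outcome:
# combine automated detectors, then route borderline cases to a human.
# The detector scores, weights, and thresholds are invented assumptions.

def combined_bot_score(account):
    detectors = {
        "behavior": account["behavior_score"],  # posting-rhythm anomalies
        "text":     account["text_score"],      # generic, repetitive language
        "network":  account["network_score"],   # follower-graph oddities
    }
    weights = {"behavior": 0.4, "text": 0.3, "network": 0.3}
    return sum(weights[k] * v for k, v in detectors.items())

def triage(account):
    score = combined_bot_score(account)
    if score > 0.8:
        return "flag as bot"
    if score > 0.5:
        return "send to human analyst"  # human judgment improves results
    return "treat as human"

print(triage({"behavior_score": 0.9, "text_score": 0.6, "network_score": 0.5}))
```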

Many other experiments in the identification of bots have been reported in the computer science literature.32 Bots and botnets often have a specific task to complete. Once that task is completed, their accounts are eliminated. Detecting bots and botnets before they can do harm is critical to shutting them down. Unfortunately, the means for detecting and shutting down bots are in their infancy. There are too many bot-driven accounts and too few means of eliminating them.

What happens to the information that bots collect is one part of the story of fake news. During the 2016 US presidential campaign, the internet was used to advertise for political candidates. Official campaign information was created by members of each politician’s election team. News media reported on candidates’ appearances, rallies, and debates, creating more information. Individuals who attended events used social media to share information with their friends and followers. Some reports were factual and without bias. However, because political campaigns involve many people who prefer one candidate over another, some information presented a bias in favor of one candidate or against another.

Because it is possible for anyone to launch a website and publish a story, some information about the political candidates was not created by any official of the campaigns. In fact, many stories appeared about candidates that were biased, taken out of context, or outright false. Some stories were meant as spoofs or satire; others were meant to mislead and misinform. One story reported that the pope had endorsed presidential candidate Donald Trump. In any other context, the reader would likely have no trouble realizing that this story was not true.

Enter the bots. There have been some alarming changes in how, where, and for what purposes bots have been used in the past ten years. Bots are being programmed to collect information from social media accounts and to push information to accounts that meet certain criteria.


Social networks allow “atoms” of propaganda to be directly targeted at users who are more likely to accept and share a particular message. Once they inadvertently share a misleading or fabricated article, image, video or meme, the next person who sees it in their social feed probably trusts the original poster, and goes on to share it themselves. These “atoms” then rocket through the information ecosystem at high speed powered by trusted peer-to-peer networks.33

Political bots have been central to the spread of political disinformation. According to Woolley and Guilbeault, the political bots used in the 2016 US elec- tions were primarily used to create manufactured consensus:

Social media bots manufacture consensus by artificially amplifying traffic around a political candidate or issue. Armies of bots built to follow, retweet, or like a candidate’s content make that candidate seem more legitimate, more widely supported, than they actually are. Since bots are indistinguishable from real people to the average Twitter or Facebook user, any number of bots can be counted as supporters of candidates or ideas. This theoretically has the effect of galvanizing political support where this might not previously have happened. To put it simply: the illusion of online support for a candidate can spur actual support through a bandwagon effect.34

The Computational Propaganda Research Project has studied the use of political bots in nine countries around the world. In Woolley and Guilbeault’s report on the United States, the authors state, “Bots infiltrated the core of the political discussion over Twitter, where they were capable of disseminating propaganda at mass-scale. Bots also reached positions of high betweenness centrality, where they played a powerful role in determining the flow of information among users.”35

Social bots can affect the social identity people create for themselves online. Bots can persuade and influence in ways that mold human identity.36 Guilbeault argues that online platforms are the best place to make changes that can help users form and maintain their online identity without input from nonhuman actors. To do that, researchers must identify and modify features that weaken user security. He identifies four areas where bots infiltrate social media:

1. Users create profiles to identify themselves on a social media platform. It is easy for bots to be programmed to provide false information to create a profile. In addition, the information in the profiles of other social media users is readily accessible and relatively easy to use to target specific populations.

2. In person, humans rely on a wide range of signals to help determine whether or not they want to trust someone. Online users have more limited options, making it much easier for bots to pretend to be real people. For platforms like Twitter, it is significantly easier to imitate a human because the text length is short and misspellings, bad grammar, and poor syntax are not unusual. Guilbeault indicates that popularity scores are problematic. He suggests, for example, “making popularity scores optional, private, or even nonexistent may significantly strengthen user resistance to bot attacks.”37

3. People pay attention to their popularity in social media. A large number of friends or followers is often considered to be a mark of popularity. That can lead to indiscriminate acceptance of friend requests from unknown individuals, providing a place for social bots to gain a foothold. Bots send out friend requests to large numbers of people, collect a large following, and, as a result, become influential and credible in their friend group.

4. The use of tools such as emoticons and like buttons helps to boost the influence of any posting. Bots can use the collection of likes and emoticons to spread to other groups of users. This process can eventually influence the topics that are trending on Twitter, creating a false impression of what topics people are most interested in at a given time. This can, of course, deflect interest from other topics.38

While Guilbeault has identified practices on social media platforms where improvements or changes could be made to better protect users, those changes have yet to be made. A groundswell of opinion is needed to get the attention of social media platform makers, and the will to remove or change a popular feature such as popularity ratings doesn’t seem likely to appear in the near future. In fact, while research to combat the automated spread of fake or malicious news is being done in earnest, it is mostly experimental in nature.39 Possible solutions are being tested, but most automatic fake news identification software is in its infancy. The results are promising in some cases, but wide application across social media platforms is nowhere in sight. The research that exists is mostly based on identifying and eliminating accounts that can be shown to be bots. However, by the time that has been accomplished, whatever the bot has been programmed to do has already been done. There are very few means of automatically identifying bots and botnets and disabling them before they complete a malicious task.

Google and Facebook Anti–Fake News Efforts

The social media platforms and search engines themselves have made some efforts to help detect and flag fake news. Facebook created an “immune system” to help protect itself from infection by bots.40 Google announced that it will increase its regulation of advertising and linked-to websites.41 Facebook has turned over the verification of information to five leading fact-checking organizations.42 Facebook has also initiated a feature in parts of Europe called Related Articles, which provides readers with access to the results of fact-checking of original stories.43 The Google Digital News Initiative is creating programs, with Factmata, to help users verify information themselves. Overall, these attempts are reactive at best. The sheer volume of potential misinformation and the difficulty of identifying and shutting down bot accounts make these attempts seem feeble.

Factmata
http://factmata.com/

It seems that the battle of the computer programmers will continue indefinitely. When one side develops a new means of manipulating information to mislead, misinform, or unduly influence people, the other side finds a way to counter it, or at least to slow the ability to make use of the new idea. This cycle continues in a seemingly endless loop. Using technology to identify and stop fake news is a defensive game. There does not appear to be a proactive means of eliminating fake news at this time. Money, power, and political influence motivate different groups to create computer-driven means of human control.

Notes

1. Andrew Zaleski, “How Bots, Twitter, and Hackers Pushed Trump to the Finish Line,” Backchannel, Wired, November 10, 2016, https://www.wired.com/2016/11/how-bots-twitter-and-hackers-pushed-trump-to-the-finish-line/; Alessandro Bessi and Emilio Ferrara, “Social Bots Distort the 2016 U.S. Presidential Election Online Discussion,” First Monday 21, no. 11 (November 7, 2016), http://journals.uic.edu/ojs/index.php/fm/rt/printerFriendly/7090/5653.

2. Tony Haile, “What You Think You Know about the Web Is Wrong,” Time.com, March 9, 2014, http://time.com/12933/what-you-think-you-know-about-the-web-is-wrong/.

3. Don Evon, “Nope Francis,” Snopes, July 24, 2016, www.snopes.com/pope-francis-donald-trump-endorsement/.

4. Marc Fisher, John Woodrow Cox, and Peter Hermann, “Pizzagate: From Rumor, to Hashtag, to Gunfire in D.C.,” Washington Post, December 6, 2016, https://www.washingtonpost.com/local/pizzagate-from-rumor-to-hashtag-to-gunfire-in-dc/2016/12/06/4c7def50-bbd4-11e6-94ac-3d324840106c_story.html.

5. D. Parvaz, “The Arab Spring, Chronicled Tweet by Tweet,” Al Jazeera English, November 6, 2011, www.aljazeera.com/indepth/features/2011/11/2011113123416203161.html; Sara El-Khalili, “Social Media as a Government Propaganda Tool in Post-revolutionary Egypt,” First Monday 18, no. 3 (March 4, 2013), http://firstmonday.org/ojs/index.php/fm/rt/printerFriendly/4620/3423.

6. “Twitter Served as a Lifeline of Information During Hurricane Sandy,” Pew Research Center, FactTank, October 28, 2013, www.pewresearch.org/fact-tank/2013/10/28/twitter-served-as-a-lifeline-of-information-during-hurricane-sandy/.

7. David Turner, Michael Schroeck, and Rebecca Shockley, Analytics: The Real-World Use of Big Data in Financial Services, executive report (Somers, NY: IBM Global Services, 2013).

8. Kate Crawford, “The Hidden Biases in Big Data,” Harvard Business Review, April 1, 2013, https://hbr.org/2013/04/the-hidden-biases-in-big-data.

9. Dirk Helbing, Bruno S. Frey, Gerd Gigerenzer, Ernst Hafen, Michael Hagner, Yvonne Hofstetter, Jeroen van den Hoven, Roberto V. Zicari, and Andrej Zwitter, “Will Democracy Survive Big Data and Artificial Intelligence?” Scientific American, February 25, 2017, https://www.scientificamerican.com/article/will-democracy-survive-big-data-and-artificial-intelligence/; previously published in Scientific American’s sister publication Spektrum der Wissenschaft as “Digitale Demokratie statt Datendiktatur.”

10. Steven J. Frenda, Rebecca M. Nichols, and Elizabeth F. Loftus, “Current Issues and Advances in Misinformation Research,” Current Directions in Psychological Science 20, no. 1 (2011): 20–23.

11. Haile, “What You Think You Know.”

12. Igal Zeifman, “Bot Traffic Report 2016,” Imperva Incapsula Blog, January 24, 2017, https://www.incapsula.com/blog/bot-traffic-report-2016.html.

13. Onur Varol, Emilio Ferrara, Clayton A. Davis, Filippo Menczer, and Alessandro Flammini, “Online Human-Bot Interactions: Detection, Estimation and Characterization,” in Proceedings of the Eleventh International AAAI Conference on Web and Social Media (ICWSM 2017) (Palo Alto, CA: AAAI Press, 2017), 280.

14. Emilio Ferrara, Onur Varol, Clayton Davis, Filippo Menczer, and Alessandro Flammini, “The Rise of Social Bots,” Communications of the ACM 59, no. 7 (July 2016): 96.

15. Philip N. Howard, Pax Technica: How the Internet of Things May Set Us Free or Lock Us Up (New Haven, CT: Yale, 2015), 211.

16. Twitter, Inc., Form 10-Q, Report for the Quarterly Period Ended June 30, 2014, US Securities and Exchange Commission file number 001-36164, www.sec.gov/Archives/edgar/data/1418091/000156459014003474/twtr-10q_20140630.htm; Varol et al., “Online Human-Bot Interactions.”

17. Charles F. Bond and Bella M. DePaulo, “Accuracy of Deception Judgments,” Personality and Social Psychology Review 10, no. 3 (2006): 214–34.

18. Niall J. Conroy, Victoria L. Rubin, and Yimin Chen, “Automatic Deception Detection: Methods for Finding Fake News,” Proceedings of the Association for Information Science and Technology 52, no. 1 (2015), https://doi.org/10.1002/pra2.2015.145052010082.

19. Jeffrey Hancock, Michael T. Woodworth, and Stephen Porter, “Hungry like the Wolf: A Word-Pattern Analysis of the Languages of Psychopaths,” Legal and Criminological Psychology 18 (2013): 102–14; David M. Markowitz and Jeffrey T. Hancock, “Linguistic Traces of a Scientific Fraud: The Case of Diederik Stapel,” PLOS ONE 9, no. 8 (2014), https://doi.org/10.1371/journal.pone.0105937; Rada Mihalcea and Carlo Strapparava, “The Lie Detector: Explorations in the Automatic Recognition of Deceptive Language” (short paper, Joint Conference of the 47th Annual Meeting of the Association for Computational Linguistics and 4th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, Singapore, August 2–7, 2009).

20. Momchil Hardalov, Ivan Koychev, and Preslav Nakov, “In Search of Credible News,” in Artificial Intelligence: Methodology, Systems, and Applications: 17th International Conference, AIMSA 2016, Varna, Bulgaria, September 7–10, 2016, Proceedings, ed. C. Dichev and G. Agre (London: Springer, 2016), 172–80; Markowitz and Hancock, “Linguistic Traces,” E105937; Mihalcea and Strapparava, “The Lie Detector.”

21. Song Feng, Ritwik Banerjee, and Yejin Choi, “Syntactic Stylometry for Deception Detection,” in Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (New York: Association for Computational Linguistics, 2012), 171–75, www.aclweb.org/anthology/P12-2034.

22. Victoria L. Rubin and Tatiana Lukoianova, “Truth and Deception at the Rhetorical Structure Level,” Journal of the Association for Information Science and Technology 66, no. 5 (2015): 905–17.

23. Jacob Ratkiewicz, Michael Conover, Mark Meiss, Bruno Goncalves, Snehal Patil, Alessandro Flammini, and Filippo Menczer, “Truthy: Mapping the Spread of Astroturf in Microblog Streams,” in WWW ’11: Proceedings of the 20th International Conference Companion on World Wide Web (New York: Association for Computing Machinery, 2011), 249–52, http://doi.org/10.1145/1963192.1963301; Zhiwei Jin, Juan Cao, Yongdong Zhang, Jianshe Zhou, and Qi Tian, “Novel Visual and Statistical Image Features for Microblogs News Verification,” IEEE Transactions on Multimedia 19, no. 3 (March 2017): 598–608.

24. Victoria L. Rubin, Yimin Chen, and Niall J. Conroy, “Deception Detection for News: Three Types of Fakes,” in ASIST 2015: Proceedings of the 78th ASIS&T Annual Meeting, ed. Andrew Grove (Silver Spring, MD: Association for Information Science and Technology, 2015); Myle Ott, Claire Cardie, and Jeffrey T. Hancock, “Negative Deceptive Opinion Spam,” in The 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Proceedings of the Main Conference (Stroudsburg, PA: Association for Computational Linguistics, 2013), 497–501; Xin Luna Dong, Evgeniy Gabrilovich, Kevin Murphy, Van Dang, Wilko Horn, Camillo Lugaresi, Shaohua Sun, and Wei Zhang, “Knowledge-Based Trust: Estimating the Trustworthiness of Web Sources,” Proceedings of the VLDB Endowment, arXiv:1502.03519v1 [cs.DB], February 12, 2015.

25. Giovanni Luca Ciampaglia, Prashant Shiralkar, Luis M. Rocha, Johan Bollen, Filippo Menczer, and Alessandro Flammini, “Computational Fact Checking from Knowledge Networks,” PLOS ONE 10, no. 6 (2015), https://doi.org/10.1371/journal.pone.0128193.

26. Hamdi Yahuaoui Al-Mutairi and Hazem Raafat, “Lattice-Based Ranking for Service Trust Behaviors,” Knowledge-Based Systems 102 (2016): 20–38; Carlos Castillo, Marcelo Mendoza, and Barbara Poblete, “Predicting Information Credibility in Time-Sensitive Social Media,” Internet Research 23, no. 5 (2013): 560–88.

27. Benjamin Paul Chamberlain, Clive Humby, and Marc Peter Deisenroth, “Probabilistic Inference of Twitter Users’ Age Based on What They Follow,” Association for the Advancement of Artificial Intelligence, arXiv:1601.04621v2 [cs.SI], February 24, 2017.

28. Aditi Gupta, Ponnurangam Kumaraguru, Carlos Castillo, and Patrick Meier, “TweetCred: Real-Time Credibility Assessment of Content on Twitter,” in Social Informatics: SocInfo 2014, ed. L. M. Aiello and D. McFarland (London: Springer, 2014), 228–43.

29. Zhao Liang, Ting Hua, Chang-Tien Lu, and Ing-Ray Chen, “A Topic-Focused Trust Model for Twitter,” Computer Communications 76 (2016): 1–11; Victoria L. Rubin, Niall J. Conroy, and Yimin Chen, “Towards News Verification: Deception Detection Methods for News Discourse” (paper, Hawaii International Conference on System Sciences [HICSS48] Symposium on Rapid Screening Technologies, Deception Detection and Credibility Assessment Symposium, Kauai, HI, January 2015), http://works.bepress.com/victoriarubin/6/; Rubin, Chen, and Conroy, “Deception Detection for News”; Diego Saez-Trumper, “Fake Tweet Buster: A Webtool to Identify Users Promoting Fake News on Twitter,” in HT ’14: Proceedings of the 25th ACM Conference on Hypertext and Social Media (New York: Association for Computing Machinery, 2014), 316–17, https://doi.org/10.1145/2631775.2631786; Yimin Chen, Niall J. Conroy, and Victoria L. Rubin, “News in an Online World: The Need for an ‘Automatic Crap Detector,’” Proceedings of the Association for Information Science and Technology 52, no. 1 (2015), https://doi.org/10.1002/pra2.2015.145052010081.

30. Clayton A. Davis, Giovanni Luca Ciampaglia, Luca Maria Aiello, Keychul Chung, Michael D. Conover, Emilio Ferrara, Alessandro Flammini, et al., “OSoMe: The IUNI Observatory on Social Media,” preprint, PeerJ Preprints, accepted April 29, 2016, https://doi.org/10.7287/peerj.preprints.2008v1.

31. V. S. Subrahmanian, Amos Azaria, Skylar Durst, Vadim Kagan, Aram Galstyan, Kristina Lerman, Linhong Zhu, et al., “The DARPA Twitter Bot Challenge,” Computer, June 2016, 38.

32. Norah Abokhodair, Daisy Yoo, and David W. McDonald, “Dissecting a Social Botnet: Growth, Content and Influence in Twitter,” in Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work and Social Computing (New York: Association for Computing Machinery, 2015), 839–51, https://doi.org/10.1145/2675133.2675208; Lorenzo Alvisi, Allen Clement, Alessandro Epasto, Silvio Lattanzi, and Alessandro Panconesi, “SoK: The Evolution of Sybil Defense via Social Networks,” in Proceedings of the 2013 IEEE Symposium on Security and Privacy (Piscataway, NJ: Institute of Electrical and Electronics Engineers, 2013), 382–96, https://doi.org/10.1109/SP.2013.33; Yazan Boshmaf, Ildar Muslukhov, Konstantin Beznosov, and Matei Ripeanu, “The Socialbot Network: When Bots Socialize for Fame and Money” (paper, 27th annual Computer Security Applications Conference, ACSAC 2011, Orlando, FL, December 5–9, 2011); Qiang Cao, Xiaowei Yang, Jieqi Yu, and Christopher Palow, “Uncovering Large Groups of Active Malicious Accounts in Online Social Networks,” in CCS ’14: Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security (New York: ACM, 2014), 477–88, https://doi.org/10.1145/2660267.2660269; Clayton Allen Davis, Onur Varol, Emilio Ferrara, Alessandro Flammini, and Filippo Menczer, “BotOrNot: A System to Evaluate Social Bots,” in WWW ’16 Companion: Proceedings of the 25th International Conference Companion on World Wide Web, 273–74, https://doi.org/10.1145/2872518.2889302; Chad Edwards, Autumn Edwards, Patric R. Spence, and Ashleigh K. Shelton, “Is That a Bot Running the Social Media Feed?” Computers in Human Behavior 33 (2014): 372–76; Aviad Elyashar, Michael Fire, Dima Kagan, and Yuval Elovici, “Homing Social Bots: Intrusion on a Specific Organization’s Employee Using Socialbots,” in ASONAM ’13: Proceedings of the 2013 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (New York: ACM, 2013), 1358–65, https://doi.org/10.1145/2492517.2500225; Carlos Freitas, Fabricio Benevenuto, Saptarshi Ghosh, and Adriano Veloso, “Reverse Engineering Socialbot Infiltration Strategies in Twitter,” arXiv:1405.4927 [cs.SI], May 20, 2014; Russell Frank, “Caveat Lector: Fake News as Folklore,” Journal of American Folklore 128, no. 509 (Summer 2015): 315–32; Varol et al., “Online Human-Bot Interactions”; Claudia Wagner, Silvia Mitter, Christian Körner, and Markus Strohmaier, “When Social Bots Attack: Modeling Susceptibility of Users in Online Social Networks,” in Making Sense of Microposts: Proceedings of the WWW ’12 Workshop on “Making Sense of Microposts,” ed. Matthew Rowe, Milan Stankovic, and Aba-Sah Dadzie (CEUR Workshop Proceedings, 2012), http://ceur-ws.org/Vol-838.

33. Claire Wardle, “Fake News: It’s Complicated,” First Draft News, February 16, 2017, https://medium.com/1st-draft/fake-news-its-complicated-d0f773766c79.

34. Samuel C. Woolley and Douglas R. Guilbeault, Computational Propaganda in the United States of America: Manufacturing Consensus Online, Computational Propaganda Research Project, Working Paper 2017.5 (Oxford, UK: Project on Computational Propaganda, 2017), 8, http://comprop.oii.ox.ac.uk/2017/06/19/computational-propaganda-in-the-united-states-of-america-manufacturing-consensus-online/.

35. Woolley and Guilbeault, Computational Propaganda, 22.

36. Douglas Guilbeault, “Growing Bot Security: An Ecological View of Bot Agency,” International Journal of Communication 10 (2016): 5012.

37. Guilbeault, “Growing Bot Security,” 5012.

38. Guilbeault, “Growing Bot Security,” 5003–21.

39. Samuel C. Woolley and Philip N. Howard, “Political Communication, Computational Propaganda, and Autonomous Agents,” International Journal of Communication 10 (2016): 4882–90; Sam Woolley and Phil Howard, “Bad News Bots: How Civil Society Can Combat Automated Online Propaganda,” TechPresident, December 10, 2014, http://techpresident.com/news/25374/bad-news-bots-how-civil-society-can-combat-automated-online-propaganda; Jonathan Stray, “Defense against the Dark Arts: Networked Propaganda and Counter-propaganda,” Jonathan Stray website, February 24, 2017, http://jonathanstray.com/networked-propaganda-and-counter-propaganda.

40. Tao Stein, Erdong Chen, and Karan Mangla, “Facebook Immune System,” in Proceedings of the 4th Workshop on Social Network Systems (SNS 2011), Salzburg, Austria, April 10, 2011, article 8, http://www.cse.iitd.ac.in/~siy107537/sil765/readings/a10-stein.pdf.

41. Charles Warner, “Google Increases Regulation of False Ads and Fake News,” Forbes, January 25, 2017, https://www.forbes.com/sites/charleswarner/2017/01/25/google-increases-regulation-of-false-ads-and-fake-news/.

42. Emily Bell, “Facebook Drains the Fake News Swamp with New, Experimental Partnerships,” Little Green Footballs, December 15, 2016, http://littlegreenfootballs.com/page/322423_Facebook_Drains_the_Fake_News_.

43. Kathleen Chaykowski, “Facebook Expands Fight against Fake News with Automatic, Related Articles,” Forbes, August 3, 2017, https://www.forbes.com/sites/kathleenchaykowski/2017/08/03/facebook-expands-fight-against-fake-news-with-automatic-related-articles/.


Chapter 4

Can We Save Ourselves?

“Most people have no clue how the technology that envelops them works or what physical principles underlie its operation. . . . Thus, the ‘limits of plausibility’ have vanished, and the ‘knowledge of the audience’ is constructed from Facebook feeds, personal experience, and anecdote.”1 Notwithstanding, there are some things individuals can do and tools that can be used to mitigate the spread of fake news. While we might not be able to stop the creation of fake news, individuals can take steps to help themselves and others.

Learn about Search Engine Ranking

A first strategy for foiling the purveyors of fake news is to educate ourselves about how fake news is created and how it spreads. For example, when people search for information, they often use a search engine. The amount of information that is retrieved is always overwhelming. The vast majority of searchers do not look at links beyond the first page of results, and most people never get beyond the second link on the first page.2 This makes the placement of information on the page of results very important. The criteria that drive the placement of information are complex and often opaque to the general public. The result is that search engine users accept whatever information appears at the top of the search results, which makes them very vulnerable to receiving and accepting misleading or even fake information. Learning how the ranking of websites is accomplished can at least forewarn users about what to look for.3

Be Careful about Who You “Friend”

In the world of social media, information is brought directly to us, rather than requiring us to search for it. That information is often shared and commented on with friends and followers. One reason fake news can spread is because we are not as careful as we should be about accepting friend requests. It is great to be popular, and one way of measuring popularity is to have a long list of friends and followers. It makes us feel good about ourselves. Because those friends and followers generally agree with what we already believe, having a lot of friends feeds our confirmation bias, which also makes us feel good about ourselves.

If and when friend requests are accepted, we make a psychological transition from thinking about the requestor as a stranger to thinking about the requestor as a friend. A certain amount of trust accompanies the change in status from stranger to friend. That new friend becomes privy to the inner circle of information in our lives and is also connected to our other friends and followers. We trust those friends to “do no harm” in our lives. We can unfriend or block someone if we change our minds, but that often happens only after something bad occurs.

The friends list can be great when everybody on it is a human. However, it is possible for social media friends to be bots. These bots are, at best, programmed to gather and provide information that is similar to what we like. Unfortunately, bots are sometimes programmed to gather and spread misinformation or disinformation. “A recent study estimated that 61.5% of total web traffic comes from bots. One recent study of Twitter revealed that bots make for 32% of the Twitter posts generated by the most active account.”4 About 30 percent of the bot accounts are “bad” bots.5

If we accept a bot as a friend, we have unknowingly made the psychological shift to trust this bot-friend, making any mis- or disinformation it shares more plausible. After all, friends don’t steer friends wrong. If an individual likes a posting from a bot, it sends a message to the individual’s other friends that the bot-posted information is trustworthy. “A large-scale social bot infiltration of Facebook showed that over 20% of legitimate users accept friendship requests indiscriminately and over 60% accept requests from accounts with at least one contact in common. On other platforms like Twitter and Tumblr, connecting and interacting with strangers is one of the main features.”6 People with large numbers of friends or followers are more likely to accept friend requests from “people” they don’t know. This makes it easy for bots to infiltrate a network of social media users.

It is very difficult to identify a friend or follower that is actually a bot. Even Facebook and Twitter have a hard time identifying bots. Bots are programmed to act like humans. For example, they can be programmed to send brief, generic messages along with the links they share. That makes them seem human. They can be programmed to do that sharing at appropriate times of day. If they don’t post anything for an eight-hour span, it makes them look like a human who is getting a good night’s sleep. They can also mimic human use of social media by limiting the amount of sharing or liking on their account. If they share thousands of links in a short period of time, they seem like machines. If the number of items shared by each bot is limited, they seem more like humans. Bots can even be programmed to mimic words and phrases we commonly use and can shape messages using those words and phrases. This makes their messages look and feel familiar, and they are, therefore, more believable.

If we friend a bot, that bot gets access to a wide variety of networked social media accounts and can spread fake news to our list of friends and followers. Those people can then share the fake news in an ever-widening circle. This means bots can influence a large number of people in a short period of time. Bots can also be linked into networks called botnets, increasing their ability to reshape a conversation, inflate the number of people who appear to be supporting a cause, or direct the information that humans receive.

ID Bots

It is possible to watch for bots, and we should make it a habit to do so before accepting friend requests. Some things we can do to protect ourselves from bots follow; a small rule-of-thumb sketch in code appears after the list:

1. Accounts that lack a profile picture, have confused or misspelled handles, have low numbers of tweets or shares, and follow more accounts than they have followers are likely to be bots. “If an account directly replies to your Tweet within a second of a post, it is likely automatically programmed.”7 Look for these signs before accepting a friend request.

2. Should a possible bot be identified, it should be reported. Everyone can learn how to report a suspected bot. Social media sites provide links to report misuse and propaganda.

3. Using a wide variety of hashtags and changing them on a regular basis, rather than relying on a single hashtag, can keep bots from smoke screening (disrupting) those hashtags.

4. If accounts you follow gain large numbers of followers overnight, that is probably an indication that bots are involved. Check the number of followers for new friends.

5. For those with the skills to do so, building bots that can counter the bad bots can be effective.8
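Here is the promised rule-of-thumb sketch, encoding the signs above; the thresholds and the example profile are illustrative assumptions, not platform rules.

```python
# A rule-of-thumb checker for the warning signs listed above.
# Thresholds and the example profile are invented assumptions.

def bot_warning_signs(profile):
    """Return a list of human-readable warnings for a suspicious account."""
    signs = []
    if not profile["has_photo"]:
        signs.append("no profile picture")
    if profile["tweets"] < 10:
        signs.append("very few posts")
    if profile["following"] > 2 * profile["followers"]:
        signs.append("follows far more accounts than follow back")
    if profile["reply_delay_seconds"] < 1:
        signs.append("replies within a second (likely automated)")
    return signs

suspect = {"has_photo": False, "tweets": 4, "following": 1500,
           "followers": 20, "reply_delay_seconds": 0.4}

for sign in bot_warning_signs(suspect):
    print("warning:", sign)
```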

Read before Sharing

Another reason fake news spreads and “goes viral” is that people (and bots) click Share without having read beyond the headline or without thinking about the content of the message. A headline may be misleading or may be unrelated to the story it is attached to. Headlines are meant to capture attention, and they are often written to provoke a strong reaction. It is easy to provoke an emotional response with a sensational headline. Sharing the link with others without looking at the story attached can result in the spread of fake news. Read the content of a link before sharing it.

In 2015, Allen B. West posted a picture of US Muslims who were serving in the US military attending a regular prayer time. The caption for the picture was “Look at what our troops are being FORCED to do.” This caption implied that all US servicemen and -women were being required to participate in Muslim prayer services during the month of Ramadan. The picture was widely shared until it was revealed to be “fake news.”9

The idea that the US government would require its military personnel to participate in any religious observance is provocative. It elicits an emotional response, which often leads us to share both the story and our outrage with others—to spread the word. That knee-jerk reaction often causes us to react rather than take the time to consider how plausible the story really is.

A strong emotional response to a picture, caption, or headline should act as a warning to slow down, think, and ask questions. The US military is part of the US government. A strict separation of religion and government is guaranteed by the Constitution of the United States. The contradiction between the picture caption and what we know about how the US is governed should cause us to question the information. Yes, soldiers must follow orders, but why would soldiers be ordered to participate in a religious ceremony of any kind? Such orders would violate a fundamental principle on which the country was founded. If the information were true, that would mean that the democracy had failed and all those sworn to uphold its rules had been deposed. If that had happened, we would probably have heard about it from other sources. This brief thought process should bring the veracity of the posting into question. From there it takes just a minute to find out that the picture shows a regular Muslim prayer service in which US servicemen who are Muslims were participating—voluntarily. Invoking that brief moment of skepticism can prevent the spread of fake news.

Fact-Check

There are a growing number of fact-checking sites that make it their business to find out whether a story, caption, or headline is true or false. Instead of sharing a fake story with others, it is a good practice to check with a fact-checking site first to see what it has to say about the story. It’s a good idea to keep a list of fact-checking sites handy for that purpose. Snopes maintains a list of known fake news websites. FactCheck’s Viral Spiral page shows its findings about the information most often questioned, and it lists all questions and answers at its site as well.10

Some Fact-Checking Sites

Snopes (specializes in political fact checking) www.snopes.com/

PolitiFact www.politifact.com/

Hoax-Slayer (email and social media hoaxes) www.hoax-slayer.com/

StopFake (fighting false information about events in Ukraine) www.stopfake.org

FactCheck www.factcheck.org

Factmata (fact checking using AI) http://factmata.com/

LazyTruth (fact checks chain email) www.lazytruth.com

SciCheck (fact checking for science-based claims) www.scicheck.com
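For classroom or reference-desk use, a list like the one above can also be kept in machine-readable form so that links are screened automatically. The short Python sketch below is a minimal illustration of that idea; the flagged domains are invented placeholders rather than a real blocklist, and the helper name is hypothetical. In practice the set would be loaded from a curated source such as the list of fake news sites Snopes maintains.

    from urllib.parse import urlparse

    # Invented placeholders; a real list would come from a curated source.
    FLAGGED_DOMAINS = {"example-fake-news.com", "totally-real-wire.net"}

    def is_flagged(url: str) -> bool:
        """True if the link's host matches, or is a subdomain of, a flagged domain."""
        host = urlparse(url).netloc.lower()
        host = host.removeprefix("www.")  # requires Python 3.9+
        return any(host == d or host.endswith("." + d) for d in FLAGGED_DOMAINS)

    print(is_flagged("http://www.example-fake-news.com/story/123"))               # True
    print(is_flagged("https://www.factcheck.org/2016/11/how-to-spot-fake-news/"))  # False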

Twitter and Facebook are attempting to make use of fact-checking organizations so they can more readily identify fake news and, perhaps, identify the bots that spread it. Making regular use of fact-checking sites before sharing information with others on social media can help stop the spread of fake news. We can also engage with social media sites to encourage changes that will benefit users. For example, instead of counting clicks to determine popularity, metrics rating the amount of time spent at a site or page might be a better measure of interest. Moving away from the current popularity ratings based on click counting could help limit the spread of fake news. If enough users made it known that the current popularity ratings are not adequate, it might be possible to influence the social media makers to count something more meaningful.
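To see why the choice of metric matters, consider the toy comparison below, written in Python. The same four stories are ranked first by raw clicks and then by average time spent on the page; the titles and figures are invented for illustration only.

    # Invented data: (title, clicks, average seconds spent on the page).
    stories = [
        ("Sensational headline, thin story", 9000, 9),
        ("Investigative report", 1200, 240),
        ("Celebrity clickbait", 7500, 6),
        ("Local council coverage", 900, 180),
    ]

    by_clicks = sorted(stories, key=lambda s: s[1], reverse=True)
    by_dwell = sorted(stories, key=lambda s: s[2], reverse=True)

    print("By clicks:", [title for title, _, _ in by_clicks])     # clickbait leads
    print("By dwell time:", [title for title, _, _ in by_dwell])  # reporting leads

Under a click-count metric the thin, sensational items rise to the top; under a dwell-time metric the substantive reporting does.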

Evaluate Information

We can help ourselves and our students by understanding how to evaluate sources and by routinely applying that knowledge to the sources we use.11 What is a source? Which sources can be relied on to be accurate? What signs help to identify a trustworthy source?

The word source can mean several things, even in the context of information literacy and fake news. A source can be the person who supplied information. A source can be the person who wrote a news article, report, or other piece. A source can be an organization that puts its name and reputation behind a piece of writing. There are also anonymous sources of two kinds: the first is the person who does not want his or her name revealed as the one who supplied the information to a reporter; the second is a person who hides his or her identity or affiliations while publishing his or her own information.

According to Dr. K. Anders Ericsson and colleagues, it takes 10,000 hours of practice to become an expert in something.12 Whether it is playing baseball, playing the violin, or reporting the news, at least 10,000 hours of practice is required. That means that an expert will usually have at least 10,000 hours more experience than a novice. While some controversy exists about the exact number of hours required, the nub of the argument is that it requires substantial experience and knowledge of a subject to make one an expert. Experts always know more about their subject than nonexperts do.

It is important to remember that experts are usually experts in one or two specific things. No one is an expert in everything. If we are looking for expertise in the history of the Civil War, we would not seek out an expert in open heart surgery. Information seekers should make it a habit to look for biographical information about authors to get some idea of how much experience an author has with the subject being written about. Education, years on the job, applied experience, prizes won—all these serve as credentials to help verify an author’s level of expertise. It is relatively easy to check the veracity of biographical information using the internet.

Because the internet is available to everyone, anyone can write and post what they like, whether or not they have any expertise or experience with the subject. A teenager in Macedonia invented news stories about Donald Trump for months before the US presidential election in 2016.13 Those stories appeared alongside stories written by reputable journalists working for trusted news sources. The algorithms that make stories from legitimate and fake news sources appear on a social media newsfeed are based on what people have responded to (clicked on, liked, commented on, or shared) previously. That means that if a social media user clicks on an article written by the Macedonian teenager, that user is much more likely to see more of the same, rather than articles from real news sources. It is unlikely that a teenager in Macedonia would know more about a US political figure than a seasoned political journalist from the United States. Checking the credentials of an author is another way of avoiding fake news.
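The feedback loop described here can be caricatured in a few lines of code. The Python sketch below is a teaching toy built on stated assumptions, not any platform’s actual ranking algorithm: each click raises a per-source affinity weight, and the weights decide which sources lead the next feed.

    from collections import defaultdict

    # Every source starts with the same neutral weight.
    weights = defaultdict(lambda: 1.0)

    def record_click(source: str) -> None:
        weights[source] *= 1.5  # engagement raises the source's future rank

    def rank_feed(items):
        """items: list of (source, headline) pairs; strongest affinity first."""
        return sorted(items, key=lambda item: weights[item[0]], reverse=True)

    feed = [
        ("reputable-wire", "Budget vote delayed"),
        ("content-farm", "You won't BELIEVE this"),
        ("reputable-wire", "Drought update"),
    ]

    record_click("content-farm")              # one careless click...
    for source, headline in rank_feed(feed):  # ...and that source now leads
        print(source, "|", headline)

One click is enough to reorder the feed, which is exactly the amplification the paragraph above describes.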

Experience and education do not always result in unbiased reporting. The reputation of the organization that supports (employs) a reporter also serves as a means of evaluating a source. Publishers that have been in the news business for a while get a reputation based on the accuracy, reliability, and slant of the stories they publish. The New York Times, Wall Street Journal, Fox News, and CNN have built their reputations by selecting the reporters who write the stories and then by selecting the stories those reporters produce. The publishers act as gatekeepers for the news. For publishers with a track record of accurate reporting, that reputation can serve as a credential that extends to their reporters.

It is true that reporters with valid credentials who write for reputable news outlets sometimes mislead or misinform. The monetization of internet-based news is responsible for at least some misinformation. The relentless 24/7 flow of news also puts pressure on reporters and publishers to release information quickly, sometimes before the facts have been completely verified. The need for speed can also cause one news outlet to simply repost a report from another news outlet, even if the facts have not been verified.

Producers of On the Media have provided informational sheets in their “Breaking News Consumer’s Handbook.” Several of the points they list speak to the pressure on legitimate news sources to release information quickly. They offer pointers about the language reporters use and what specific phrases mean regarding the reliability of the information they supply.14 On the Media also suggests that part of the verification process for news stories should be geographic. Sources geographically close to the incident being reported are more likely to have reporters at the site and will therefore be closest to the unfolding event. Checking the geographic location of a story can help to evaluate its authenticity.

It is good practice to follow any links or citations given in a story. Fake news writers often include links and citations to make their posts seem more credible. However, those links may not connect to any information that is relevant to the original post. A FactCheck report posted on November 18, 2016, found the following:

Another viral claim we checked a year ago was a graphic purporting to show crime statistics on the percentage of whites killed by blacks and other murder statistics by race. Then-presidential candidate Donald Trump retweeted it, telling Fox News commentator Bill O’Reilly that it came “from sources that are very credible.” But almost every figure in the image was wrong—FBI crime data is publicly available—and the supposed source given for the data, “Crime Statistics Bureau—San Francisco,” doesn’t exist.15

A quick and easy check for the veracity of a source that seems questionable is to go to the homepage of the news source and look at what other articles are being posted. While one story may sound plausible, there may be others that are less so. By looking at the site in the aggregate, it is sometimes possible to determine the purpose and tone that will help identify the site as legitimate or bogus.

Some fake news sites will reuse older information retrieved from other sites to mislead by association. For example, President Donald Trump credited himself with convincing Ford Motor Company, after his election, to move the production of one of its vehicles from Mexico to Ohio. However, the original publication date of the announcement by Ford was August 2015, long before Mr. Trump was elected. Similarly, in 2015, then-candidate Trump suggested that he had influenced Ford to move its plant, citing a story on Prntly.com. In fact, the original story came from CNN in March 2014 and referred to moving some assembly work to Ohio; the plant planned for Mexico was still being built in Mexico.16


Seek Information beyond Your Filter Bubble

We can avoid fake news by leaving our filter bubbles and seeking out opinions that do not agree with our own. Comparing sources is always a good idea. Comparing sources that illustrate different points of view can often give some context to the interpretation of the information being offered. If CNN says one thing about a news story, it is likely that Fox will also cover the same story. The differences between the two stories will often mark out the “middle ground” where the truth frequently lies.

We can subscribe to publications that specifically provide information opposite to what we would get on social media. Escape Your Bubble is an online publication that gathers information about your political preferences and then provides you with information from sources outside your political bubble. Its goal is to help people understand each other better. There are reasons why Republicans champion certain causes or hold certain opinions. They often do not agree with Democrats about the reasons a problem exists or how to fix it. It’s good to get input from both sides in order to understand why people do what they do. Getting the facts from different perspectives can help to identify fake news.

Escape Your Bubble https://www.escapeyourbubble.com/

We all have biases and preferences. It is important to acknowledge those biases and to keep them in mind, especially when confronted with information that does not support what that bias tells us. We must work hard to overcome confirmation bias because, without effort, we tend to dismiss information that does not agree with what we already believe is true. By at least considering information that disagrees, we can make a more informed decision or form a reasonable opinion. This is something we need to remember and consider in this era of fake news.

Be Skeptical

Approach news with skepticism. The psychology literature shows that in order to process information, we must initially accept or believe it. Just to make sense of something, the default is for the brain to believe it. It takes an additional (and more difficult) step to reject the information as false. As time passes, we tend to remember as true the first information we heard, read, or saw, even if it was not true and even if we know it was not true. The more times we hear something, the better we remember it.17 So if we read, see, or hear fake news from a number of friends, followers, or bots, that information sticks in our memories, even if it is not true and even if we know it is not true. Finally, if some information contradicts a dearly held belief, the normal reaction is to reject that information and to believe what we already believe even more firmly. This psychological tendency allows humans to process information, but it also makes us vulnerable to those who manipulate information. Remaining skeptical is one way to combat the biases and psychological preferences built into our brains, at least long enough to consider alternatives.

Use Verification and Educational Tools

A wide variety of reliable news agencies provide information and tips to both their reporters and their readers for avoiding fake news. Several projects are underway to increase levels of trust in the legitimate media. The Trust Project at Santa Clara University in California is working to “develop digital tools and strategies to signal trustworthiness and ethical standards in reporting.”18 The Trust Project brings together news reporters and editors with the goal of restoring trust in the news media. It has identified indicators for journalism, including a series of checks that can be applied to news stories to show that the information has been vetted for honesty, reliability, ethical treatment, and so on. Articles carry indicators showing that facts have been verified, that ethical standards have been observed, that conflicts of interest have been disclosed, and that opinion and sponsored content are distinguished from straight reporting. Over seventy news organizations are collaborating on this project.

The Trust Project http://thetrustproject.org/

The National Institute for Computer-Assisted Reporting (NICAR) is part of the 4,500-member association Investigative Reporters and Editors. NICAR provides the ability to combine information from varied digital sources, allowing reporters to verify information and to extract facts and data more easily. New tools help reporters with the analysis, visualization, and presentation of structured data: Google Refine, ManyEyes (IBM), TimeFlow (Duke University), Jigsaw (Georgia Tech), the Sphinx Project (CMU), DocumentCloud, and ProPublica. All of these groups are working to help legitimate news sources provide readers with accurate and reliable content.19

National Institute for Computer-Assisted Reporting https://ire.org/nicar/database-library/

Investigative Reporters and Editors Association https://www.ire.org/

DocumentCloud https://www.documentcloud.org

The Public Data Lab publishes A Field Guide to Fake News.20 This guide describes “digital methods to trace production, circulation and reception of fake news online.”21 The publication was prepared for release at the International Journalism Festival in Perugia in April 2017. Its goal is to investigate fake news in context, including where it appears and how it circulates online.

A number of educational institutions have created classroom curricula to help students learn to be smart consumers of information, especially news.22 The Stanford History Education Group has created a classroom curriculum that includes a bank of assessments to test the ability to judge the credibility of news reports.

Stanford History Education Group https://sheg.stanford.edu/

The News Literacy Project is a nonpartisan national educational program that aims to teach middle and high school students how to read and evaluate news stories. It has developed an online modular curriculum called Checkology that walks students, middle school through college, through the process of reporting the news, from on-site reporting to publication. Students can also learn how to create their own news stories, giving them practice in creating fair and unbiased reports, which, in turn, helps them to evaluate news stories from others.

News Literacy Project www.thenewsliteracyproject.org

Consistent and persistent use of a handful of simple practices could help to identify fake news and to stop its spread. Putting those practices to use could remove, or at least reduce, the incentives that drive the creators of fake news. There are tools and techniques available to help people become informed and savvy news consumers. Legitimate news media sources are creating criteria and tagging to help people identify and select “real” news. There are easy means to escape our information bubbles and echo chambers. In the end, it is up to all individuals to do what they can to educate themselves about fake news and the technology that brings fake news to their doorstep. While we educate ourselves, we can help to educate our students and patrons.

Notes
1. David J. Helfand, “Surviving the Misinformation Age,” Skeptical Inquirer 41, no. 3 (May/June 2017): 2.
2. Shannon Greenwood, Andrew Perrin, and Maeve Duggan, “Social Media Update 2016,” Pew Research Center: Internet and Technology, November 11, 2016, www.pewinternet.org/2016/11/11/social-media-update-2016/.
3. Evan Bailyn, “Your Guide to Google’s Algorithm in 2017: All Ranking Factors, Updates and Changes,” SEO Blog, FirstPageSage, November 21, 2016, https://firstpagesage.com/seo-blog/2017-google-algorithm-ranking-factors/.
4. Igal Zelfman, “Bot Traffic Report 2016,” Imperva Incapsula Blog, January 24, 2017, https://www.incapsula.com/blog/bot-traffic-report-2016.html.
5. Norah Abokhodair, Daisy Yoo, and David W. McDonald, “Dissecting a Social Botnet: Growth, Content and Influence in Twitter,” Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work and Social Computing (New York: Association for Computing Machinery, 2015), 839–51, https://doi.org/10.1145/2675133.2675208.
6. Emilio Ferrara, Onur Varol, Clayton Davis, Filippo Menczer, and Alessandro Flammini, “The Rise of Social Bots,” Communications of the ACM 59, no. 7 (July 2016): 100.
7. Sam Woolley and Phil Howard, “Bad News Bots: How Civil Society Can Combat Automated Online Propaganda,” TechPresident, December 10, 2014, http://techpresident.com/news/25374/bad-news-bots-how-civil-society-can-combat-automated-online-propaganda.
8. Eryn Carlson, with reporting by Tama Wilner, “Flagging Fake News: A Look at Some Potential Tools and Strategies for Identifying Misinformation,” NiemanReports, April 14, 2017, http://niemanreports.org/articles/flagging-fake-news/.
9. Tom Nichols, The Death of Expertise: The Campaign against Established Knowledge and Why It Matters (New York: Oxford University Press, 2017), 114.
10. Glenn Kessler, “The Fact Checker’s Guide for Detecting Fake News,” Washington Post, November 22, 2016, https://www.washingtonpost.com/news/fact-checker/wp/2016/11/22/the-fact-checkers-guide-for-detecting-fake-news/?utm_term=.920d6de37499.
11. Sam Wineburg, Sarah McGrew, Joel Breakstone, and Teresa Ortega, “Evaluating Information: The Cornerstone of Civic Online Reasoning,” Stanford History Education Group, Stanford Digital Repository, November 22, 2016, https://purl.stanford.edu/fv751yt5934.
12. K. Anders Ericsson, Ralf Th. Krampe, and Clemens Tesch-Römer, “The Role of Deliberate Practice in the Acquisition of Expert Performance,” Psychological Review 100, no. 3 (1993): 393–94.
13. Samanth Subramanian, “Inside the Macedonian Fake-News Complex,” Wired, February 15, 2017, https://www.wired.com/2017/02/veles-macedonia-fake-news.
14. “Breaking News Consumer’s Handbook,” On the Media, WNYC, www.wnyc.org/series/breaking-news-consumers-handbook.
15. Eugene Kiely and Lori Robertson, “How to Spot Fake News,” FactCheck.org, November 18, 2016, www.factcheck.org/2016/11/how-to-spot-fake-news/.
16. Tony Haile, “What You Think You Know about the Web Is Wrong,” Time.com, March 9, 2014, http://time.com/12933/what-you-think-you-know-about-the-web-is-wrong/.
17. Daniel T. Gilbert, “How Mental Systems Believe,” American Psychologist 46, no. 2 (1991): 107–19.
18. Carlson, with Wilner, “Flagging Fake News.”
19. Sarah Cohen, “Computational Journalism,” Communications of the ACM 54, no. 10 (October 2011): 66–71.
20. Liliana Bounegru, Jonathan Gray, Tommaso Venturini, and Michele Mauri, A Field Guide to Fake News: A Collection of Recipes for Those Who Love to Cook with Digital Methods, research report, first draft (Public Data Lab, April 7, 2017), http://apo.org.au/node/76218.
21. Bounegru et al., Field Guide, 2.
22. John Dyer, “Can News Literacy Be Taught?” NiemanReports, April 14, 2017, http://niemanreports.org/articles/can-news-literacy-be-taught/.


Chapter 5

How Can We Help Our Students?

Teach Information or Media Literacy

Students today have never lived in a world without computers and cellphones. They have always been immersed in technology and bombarded with information. This is normal for them. They use technology easily and accept new technology readily. They are willing to experiment and are quick to discard anything that is not entertaining or that takes too long to complete. They live in a world of 3-D, virtual reality, and predictive searching. They have a preference for visual rather than written material. They skim the surface of the information they receive, rather than doing a deep dive to thoroughly research a topic. They expect technology to work for them, at lightning speed, without the need for instruction or intervention.

Most people are confident that they know more than they do. Experiments conducted by David Dunning and Justin Kruger in 1999 showed that people who know relatively little about a subject are overconfident about their level of expertise in it.1 The “Dunning-Kruger effect” describes how students and others overestimate what they know, despite lacking experience or knowledge of the subject. People in general tend to trust their social media friends, and students in particular tend to rely on social media for their information. The sources of information they trust are the ones their friends share with them. The expertise of the author, the possible bias of the producer, the geographic location of the creator, the facts that back up an assertion or claim—all take a back seat to the credibility of their friend network. This makes them particularly susceptible to manipulation. If they happen to have unknowingly friended a bot that feeds them misinformation, they are likely to believe that information.

Helping individuals learn to be information- or media-literate is one of the most important things we can do. Information literacy translates into the ability to understand, control, and apply information. To combat fake news, the first step should be to start teaching students early in their education. By the time students get to high school, which is typically the first place they encounter “information literacy” today, their learning habits are ingrained. We need to teach basic information literacy skills much earlier in life, and we need to repeat those lessons throughout a student’s education.

Psychologically, the first thing we see or hear about a topic is what we remember as true. The more times we hear something repeated, the more likely it is that we will remember it, even if it is not true.2 To start students on the road to information or media literacy, we need to begin teaching those skills in elementary school so that critical thinking and questioning become ingrained and habitual. We need to capitalize on children’s propensity to ask questions and encourage them to do so. We also need to help them learn how to find answers to their questions. A scaffolded curriculum of information literacy across the K–12 system would build a foundation that students could use to approach adult problems after graduation.

Students need guidance, as they often lack life experience. Teaching students to seek out experts and to value those who have expertise in a subject will provide them with a key to avoiding fake news. With the democratization of access to information via the internet, it is easy to find information, but it is not always easy to determine whether that information came from an expert and trustworthy source.3 Students should understand that information coming from an expert source will be more reliable than information coming from an unknown source. Teachers should provide guidelines for students to use in identifying and selecting information supplied by experts.

As students reach high school, their tendency is to rely less on the expertise of their teachers and more on their friends. This is problematic in terms of fake news because many students get their news only from their social media newsfeed. Teens often share news they have received via social media because a headline or a picture, rather than the actual content of an article, has caught their attention. They are often unaware that they are receiving information from bots driven by algorithms based on the likes, shares, and clicks at their social media pages. They are often unaware that the information they see can be influenced by nonhuman actors. Students often do not seek out alternate sources of information, nor do they compare information to see how details might differ. We need to encourage them to do so and show them how. Technological interventions that are entertaining as well as instructive can help to get information across to teens.

Make Students Aware of Psychological Processes

Knowledge is power. When we are aware that we are psychologically programmed to believe information first and reject it only later, it becomes easier to insert skepticism into our analysis of news: we can reject a fake story more readily if we accept from the outset that it might be fake, and we can dismiss initial misinformation more easily if we know our brains tend to hold onto it. Explaining the psychological tendencies that could cause students to believe fake news, and reminding them of those tendencies periodically, can give them a means of examining news more critically. Making students aware of how their brains work can improve their performance.4

In college, students are often psychologically ready for a fresh start, or at least exhibit a willingness to consider new ideas. At this critical juncture, it is important to provide the reasoning and the instruction that will help them apply their critical-thinking skills to their new environment. The freshman experience with information literacy can be very important: if successful, it creates the basis for the rest of their college work. It is important to introduce academically related information-literacy concepts and skills at a time when they can be applied immediately to an assignment or problem. Skills concerning fake news can be taught at any time, as fake news is a “hot topic” in the nonacademic world, and students will have the opportunity to apply what they learn immediately in their personal lives. Workshops, tutorials, YouTube videos, and games can be created around the topic of fake news. The information-literacy skills conveyed in exercises about fake news can be applied immediately, but they can also be transferred to academic issues at the appropriate time.

Tie Information Literacy to Workplace Applications

Building a curriculum to serve college students is critical to producing the workforce practices employers are looking for. It is equally important to tie information literacy to the world outside academia and beyond college. Students need to know how important information literacy skills will be to their future success in the working world.5 Most students will not have access to the research databases available at the university level once they move into the working world. Students are usually familiar with common platforms such as Google and Facebook. Lessons involving Google and social media platforms can provide a focus for instruction using sources students will have available to them as workers and that they will certainly use in their everyday lives. Tips, shortcuts, and cautions can center on the issue of fake news to make a class or workshop relevant while teaching valuable skills.

The information literacy skills and concepts students are taught need to be offered in memorable ways, across the curriculum. Offer students instruction in as many media as possible. Remember that students today are, for the most part, visual people. They don’t read deeply, and they tend to reject anything that has no entertainment value. A YouTube video can have more impact than an in-class demonstration. A comic book about information literacy problem solving can be more memorable than a checklist handout. Make sure the tools you make available are easily accessible electronically. A problem-solving online game can be effective as well as entertaining. Having students create information literacy projects centered on issues they feel are important could offer them an opportunity for deeper understanding of the subject and provide valuable insight. Get input from students about which teaching tools they find most effective and compelling.

Collaborate with a film studies class, an art class, or a computer engineering class to address information literacy topics in new and interesting ways. Partner with other instructors as often as possible to allow students to get information literacy training in more than one setting while they are learning another subject. This will allow students to understand the applicability of information literacy to other subjects.

Have students work on hands-on exercises that demonstrate the need for care in selecting sources. Memory studies have shown that people remember better if they have done something themselves.6 Rather than telling or showing students how to find a source or check for factuality, plan instruction so that the students do the work, guided by the teacher. Go the next step and have students apply what they learn in one setting to a problem in another setting. It has also been shown that students benefit from working in groups. Allowing instruction to take place in small groups, with input as necessary from a roaming instructor, will help students to learn from one another and to better remember what they learned.

Teach Students to Evaluate Information

Teach students about author credentials and how to evaluate them. Credential is a term librarians often use, but many students do not know exactly what it means. What is a credential? What credentials are legitimate indicators of expertise? Acceptable credentials vary from subject to subject, so the definition is hard to pin down. Academic researchers often try to use sources with peer-review processes in place to do the vetting of authors for them. Unfortunately, in daily life those academic sources do not always serve. They require extra steps to access, and they often require affiliation with an organization that supplies the sources. Most people receiving news from social media are not likely to check that news against an academic database or other reliable source in any case. It can be time consuming to discover an author’s credentials. Students will benefit from instruction in what constitutes a credential, where to find evidence of credentials, and why it’s worth the time it takes to discover an author’s credentials.

In the same way, students should be encouraged to think about bias. Everyone has biases that shape their worldview, and that worldview has an impact on the interpretation of events. In reporting on a controversial situation, a journalist should strive for objectivity, but bias can color the representation of the event. It can affect what an eyewitness sees, and it can affect the words a reporter chooses when writing a story. Knowing the point of view of the author will help students to identify bias. Biographical information about the author can be helpful in this regard, as is knowing the viewpoint and reputation of the organization the reporter works for. Have students consider, for example, how a reporter working for the NRA might present information about a school shooting. That same school shooting will probably be reported differently by a reporter writing for an anti-gun group. When confronting controversial subjects, students should be given instruction that will help them find information from both sides of the story. Once students understand why the credentials of authors are important and how those credentials inform the reader of possible bias, have a discussion to help them understand why they should not rely on anonymous sources of information.

Teach Information Literacy Skills and Concepts

Concentrate on information literacy concepts and skills, rather than teaching students how to use a particular tool. Use those general concepts and skills in concert with exercises that allow students to explore a variety of research tools. Instructors will never have enough time to demonstrate every database for students. It is more efficient to explain to students how databases work in general and then have them use a variety of databases to experience how they differ from one another. Students have been using computer databases most of their lives—Google, Facebook, Twitter—and they frequently learn how to use them by trial and error rather than by reading a help page or following step-by-step instruction sheets. Have them spend their time applying searching and evaluation skills to content rather than learning how to use a particular database.

Make fact-checking sites known and available (see gray box). If students are taught to be skeptical about information, they should have questions about the truth of the news they access. To verify news as real or fake, students should be given the tools necessary to do so. Rather than relying on their network of friends or the popularity rating of a post, students should be directed to fact-checking sites, and information about those sites should be readily available in multiple locations—websites, social media pages, printable lists, and so on.

Snopes www.snopes.com

PolitiFact www.politifact.com

FactCheck www.factcheck.org

Show students the importance of following up on citations and links. Information literacy instructors have used an article called “Feline Reactions to Bearded Men” to demonstrate the importance of considering all aspects of an article. The article appears to be reporting the results of a research experiment and is formatted to look like a legitimate research article. It is only when one examines the bibliography that things begin to look suspicious. There are articles listed in the bibliography supposedly authored by Madonna and Dr. Seuss, for example. Nonexistent journals are cited as well.7 An unwary or novice researcher might be led to believe that the article was reporting on serious research. In the same way, fake news may contain links and citations to articles and other information simply to give the story the look of serious research and reporting. In fact, the links may lead to information that is false, biased, or completely unrelated to the subject. It is important to follow links and citations to verify that they support the claims made in the original piece.

Show students how easy it is to create a fake website using a URL that looks very similar to that of a legitimate website. Many fake news sites use web addresses that closely resemble the addresses of legitimate news agencies. It is easy to assume that the news being displayed is true if one is convinced that the source is legitimate. Unusual add-ons after the domain name, replacement of a capital letter with a lowercase letter, or swapping a 1 (numeral one) for an l (lowercase letter L), or vice versa, are all tiny details that can make the difference between getting real news and getting fake news.
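These substitutions are mechanical enough that students can experiment with detecting them. The Python sketch below is a deliberately simplified classroom exercise, not production-grade spoof detection; the trusted-domain list and the example URLs are assumptions chosen for illustration.

    from urllib.parse import urlparse

    TRUSTED = {"politifact.com", "snopes.com", "factcheck.org"}
    # Map lookalike digits to the letters they imitate.
    SWAPS = str.maketrans({"1": "l", "0": "o"})

    def looks_like_spoof(url: str) -> bool:
        raw = urlparse(url).netloc.lower().removeprefix("www.")
        if raw in TRUSTED:
            return False  # the genuine site
        normalized = raw.translate(SWAPS)
        if normalized in TRUSTED:
            return True   # e.g., a numeral one standing in for a letter L
        # e.g., a bogus add-on after the real name, like politifact.com.co
        return any(t in normalized and normalized != t for t in TRUSTED)

    print(looks_like_spoof("http://po1itifact.com/article"))     # True
    print(looks_like_spoof("http://politifact.com.co/article"))  # True
    print(looks_like_spoof("https://www.politifact.com/"))       # False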

Teach students to use critical-thinking skills to evaluate a post before they send it on to friends or followers. This could mean training that examines the psychology of memory, explains algorithms and other computer-related processes, or examines author credentials. Since librarians typically have a very limited amount of time in which to convey their message, the information must be stripped to the bare essentials for classroom use. This would be a good place to make creative use of technology to create lessons that get the message out electronically, making them available at any time. Lessons online can be assigned as homework or preparation for a class, rather than delivered in a face-to-face class. Make a series of TED-style talks about critical thinking, for example, and post them on the library web page or Facebook page.

Teach students about privacy issues. Students are fairly cavalier about providing personal information online in order to accomplish something. They are often unaware of what happens to the information they supply. Revealing basic information to set up a profile or gain access to a website doesn’t seem invasive. However, many groups that ask for basic information sell that information to others.8 There are groups that buy information from multiple sources and, using the power of computing, put an individual’s profiles from multiple sites into one file, which may reveal more than one might wish. Individually, the profiles are not necessarily useful, but in the aggregate, they can reveal private information without the knowledge of the individual.
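The aggregation step is simple enough to demonstrate in class. The Python sketch below merges three innocuous-looking records on a shared email address; all of the names, sites, and details are invented for the exercise.

    from collections import defaultdict

    # Three invented records, each from a different site.
    profiles = [
        {"email": "pat@example.com", "site": "shopping", "zip": "02881"},
        {"email": "pat@example.com", "site": "fitness", "birth_year": 1979},
        {"email": "pat@example.com", "site": "forum", "employer": "City Hospital"},
    ]

    merged = defaultdict(dict)
    sources = defaultdict(list)
    for p in profiles:
        record = dict(p)
        sources[record["email"]].append(record.pop("site"))
        merged[record["email"]].update(record)

    # Each record alone reveals little; the merged file sketches a person.
    print(merged["pat@example.com"])
    print("assembled from:", sources["pat@example.com"])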

Teach students to slow down. Research shows that the average time spent on a web page is less than fifteen seconds.9 While this might be enough time to grasp the content of a headline, it is not enough time to examine the meaning of the content or to determine where the information came from. Allowing sufficient time to absorb the content of a page is critical to understanding the message. Taking the time to think about the content of a web page before passing it on to someone else will help to stop the spread of fake news.

Teach the Teachers

Teach the teachers. While librarians have been immersed in information literacy for decades, other teachers have not necessarily had information literacy at the forefront of their curricular objectives. As the automated provision of information has become unavoidable, and the manipulation of that information for good or evil is now in the hands of anyone with sufficient coding skills to accomplish it, teachers at all levels in all subject areas are ready to benefit from the decades-old expertise of librarians. Librarians should make their information literacy instruction materials readily available and advertise their location. Offer workshops and instruction to faculty and others who influence students. Giving workshops for teachers in the late summer or early fall will help them understand the problems associated with fake news and prepare them to help their students. This is also the time to act as a liaison with writing and tutoring centers of all levels and kinds to share information literacy lessons with them. By teaching the teachers we can expand our reach beyond the fifty-minute one-shot session. Cooperation and collaboration with instructors in every subject area will help students to solidify their skills in information literacy and to avoid fake news.

Conclusion

The creation and spread of fake news is a problem that seems ingrained in human nature. It has existed for millennia and has been used to sway public opinion, smear reputations, and mislead the unwary. In the digital age, information travels much more widely and much faster than it ever has before. Computer power makes it easy to manipulate huge amounts of data, aggregate data from past and present research, and democratize access to information. Computer power also makes it easy for those who know how to “game the system” to do so for their own purposes. Fake news online is difficult to identify, its source is difficult to trace, and the means of making it stop are not yet known.

Information literacy focusing on social media and fake news appears to be the best option for helping students, teachers, and the general public avoid being taken in by those who create fake news. In the past, people were told, “Don’t believe everything you read in the newspaper.” More recently, people have been told, “Don’t believe everything you see on television.” Today the warning must be, “Don’t believe everything you see, hear, or read on social media.” Healthy skepticism and rigorous evaluation of sources—authors, publishers, and content—are key to avoiding fake news.

Notes
1. Justin Kruger and David Dunning, “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments,” Journal of Personality and Social Psychology 77, no. 6 (1999): 1121–34.
2. Daniel T. Gilbert, “How Mental Systems Believe,” American Psychologist 46, no. 2 (1991): 107–19.
3. Tom Nichols, The Death of Expertise: The Campaign against Established Knowledge and Why It Matters (New York: Oxford University Press, 2017).
4. Michael S. Ayers and Lynne M. Reder, “A Theoretical Review of the Misinformation Effect: Predictions from an Activation-Based Memory Model,” Psychonomic Bulletin and Review 5, no. 1 (1998): 1–21; Meital Balmas, “When Fake News Becomes Real: Combined Exposure to Multiple News Sources and Political Attitudes of Inefficacy, Alienation and Cynicism,” Communication Research 41, no. 3 (2014): 430–54; André Blais, Elisabeth Gidengil, Patrick Fournier, and Jiyoon Kim, “Political Judgments, Perceptions of Facts, and Partisan Effects,” Electoral Studies 29 (2010): 1–12; Prashant Bordia and Nicholas DiFonzo, “Psychological Motivations in Rumor Spread,” in Rumor Mills: The Social Impact of Rumor and Legend, ed. Gary Alan Fine, Veronique Campion-Vincent, and Chip Heath (Piscataway, NJ: Aldine Transaction, 2005), 87–101; R. Kelly Garrett, “Echo Chambers Online? Politically Motivated Selective Exposure among Internet News Users,” Journal of Computer-Mediated Communication 14 (2009): 265–85; Stephan Lewandowsky, Ullrich K. H. Ecker, Colleen M. Seifert, Norbert Schwarz, and John Cook, “Misinformation and Its Correction: Continued Influence and Successful Debiasing,” Psychological Science in the Public Interest 13, no. 3 (2012): 106–31; Michelle L. Meade and Henry L. Roediger III, “Explorations in the Social Contagion of Memory,” Memory and Cognition 30, no. 7 (2002): 995–1009; Danielle C. Polage, “Making Up History: False Memories of Fake News Stories,” Europe’s Journal of Psychology 8, no. 2 (2012): 245–50; Betsy Sparrow and Ljubica Chatman, “Social Cognition in the Internet Age: Same as It Ever Was?” Psychological Inquiry 24 (2013): 273–92; Adrian F. Ward, “Supernormal: How the Internet Is Changing Our Memories and Our Minds,” Psychological Inquiry 24 (2013): 341–48.
5. Tyler Omoth, “The Top 5 Job Skills That Employers Are Looking for in 2017,” TopResume, accessed September 7, 2017, https://www.topresume.com/career-advice/the-top-5-job-skills-that-employers-are-looking-for-in-2017; Susan Adams, “The 10 Skills Employers Most Want in 20-Something Employees,” Forbes, October 11, 2013, https://www.forbes.com/sites/susanadams/2013/10/11/the-10-skills-employers-most-want-in-20-something-employees/#4a06d13a6330.
6. Gilbert, “How Mental Systems Believe.”
7. Catherine Maloney, Sarah J. Lichtblau, Nadya Karpook, Carolyn Chou, and Anthony Arena-DeRosa, “Feline Reactions to Bearded Men,” Improbable Research (blog), Annals of Improbable Research, accessed September 6, 2017, www.improbable.com/airchives/classical/cat/cat.html.
8. David Auerbach, “You Are What You Click: On Microtargeting,” Nation, February 13, 2013, https://www.thenation.com/article/you-are-what-you-click-microtargeting/; Nicholas Diakopoulos, “Rage against the Algorithms,” Atlantic, October 3, 2013, https://www.theatlantic.com/technology/archive/2013/10/rage-against-the-algorithms/280255/; Tarleton Gillespie, “The Relevance of Algorithms,” in Media Technologies: Essays on Communication, Materiality and Society, ed. Tarleton Gillespie, Pablo J. Boczkowski, and Kirsten A. Foot (Cambridge, MA: MIT Press, 2014), 167–94.
9. Tony Haile, “What You Think You Know about the Web Is Wrong,” Time.com, March 9, 2014, http://time.com/12933/what-you-think-you-know-about-the-web-is-wrong/.




Teaching and Learning in a Post-Truth World

It’s time for schools to upgrade and reinvest in media literacy lessons.

Renee Hobbs

In the summer of 2016, I found a startling announcement in my Facebook feed from WTOE 5 News, saying, “Pope Francis Shocks World, Endorses Donald Trump for President, Issues Statement.” It looked so real that I was tempted to share it with my friends. But before I did that, I did some research to confirm the statement, and that’s how I learned that WTOE 5 was not a real news outlet. Pope Francis did not endorse any American presidential candidate.


But in those heated days before the 2016 election, nearly one million people did share that particular story, making it one of the top so-called “fake news” stories of 2016 (Ritchie, 2016). And of course, there were hundreds of other examples of false and misleading information circulating online as the fake news phenomenon spread like wildfire, not just here in the United States, but in Germany, Italy, and around the world.

Since then, there’s been a lot of talk among educators about the importance of teaching students to critically analyze news and information. The public is gaining awareness of our vulnerability to media manipulation. Researchers have found that most adults can’t accurately judge the truth or falsity of an online news story because they assume that content that aligns with their existing beliefs is automatically true (Goodfellow, 2017).

So-called “fake news” is rising in visibility and influence due to the attention economy, a concept first developed by Herbert A. Simon in 1971. Many choices are available to us as both consumers and creators of media, and, sadly, it seems as if people have adopted a problematic post-truth attitude: If it’s entertaining or meshes with their own views, who really cares if it’s true? This makes it easy for creators of “fake news” in a world where digital content is cheap to produce. These sites use sensationalism (sex, violence, children, animals, and the mysterious unknown) to profit from viral sharing, where more clicks equals more revenue. And when articles include emotionally inflamed or intense words or images, they spread quickly and reach a larger audience.

Not only are we seeing more emotionally manipulative online content, but it is also more challenging to find and validate the source of the information we consume. Because most Americans get their news from social media, we experience content as unbundled snippets, without source information or context clues to assist in interpretation. These are all good reasons to implement media literacy education in middle and high schools.

New evidence reported in the American Educational Research Journal by Joseph Kahne and his colleagues shows that teens and young adults who have had some exposure to media literacy and civic education in school are better able to analyze news content for accuracy and bias, even when the story is in line with their existing political beliefs. Based on an online experiment with a nationally representative survey of young people, this study is the first of its kind to demonstrate that civic media literacy education can improve the degree to which students can distinguish between evidence-based and inaccurate online political claims (Kahne & Bowyer, 2017).

Teachers must take up the cause and help students analyze and evaluate the information they receive each day. In a post-truth world, media literacy matters. The future of our democracy depends on it.

Sorting Fact from Fiction

As many commentators have observed, the use of the term fake news conceals more than it reveals. Although I’m happy that many K–12 educators have increased interest in teaching students how to critically analyze media, I recommend that they resist using this particular term.

Learners are far better served by a more precise set of definitions and concepts, including terms like propaganda, disinformation, clickbait, hoaxes and satire, pseudoscience, sponsored content, and partisanship. These more precise terms need to be a fundamental part of English and social studies education in all American secondary schools.

Fortunately, educators around the world are banding together to develop resources to help educators at the secondary level teach media literacy. For example, the European Association for Viewers’ Interests has developed a chart (https://eavi.eu/beyond-fake-news-10-types-misleading-info/) that helps people analyze and evaluate online content simply through the process of trying to identify the genre of the message (EAVI, 2017).

Tools like this chart can help students learn to critically evaluate media messages. Students can first review the 10 definitions on the chart, looking carefully at the legend to understand the terms. They can then go online to find relevant examples, checking a variety of sources, including content posted to social media platforms. For instance, when I scroll through my newsfeed on Facebook, I can find an example of sponsored content (such as a video that features Adult Swim’s animated TV show, “Rick and Morty”). I notice a clickbait story urging me to learn more about a foolproof method for reducing wrinkles. There’s also propaganda in the form of a dog rescue video shared with me by an old high school friend.

After identifying examples like this on their own social media accounts, students can work collaboratively to make educated guesses about the authors’ motivation for any particular example. Who created this message? Were they creating this message to make money? To inform (or misinform)? As a form of political or social power? As a joke or a form of humor? Or because they truly are passionate about the issue?

Contemporary Propaganda

Of all the types of misleading news listed in the EAVI chart, propaganda is perhaps the most difficult for students to understand. Propaganda, which is generally defined as strategic communication designed to activate strong emotions, bypass critical thinking, and shape attitudes and behaviors, has long been an important form of social power. But for too many American students, the term is only associated with historical examples from the middle of the 20th century. As a result of biases and omissions in classroom instruction, some high school and college students wrongly think propaganda only happened in Nazi Germany!

Today, propaganda is everywhere, and it takes new digital forms that blur the lines between entertainment, information, and persuasion. Propaganda can be found on YouTube videos, websites, and TV news, and in movies, music, and video games. And it doesn’t have to be solely negative; some forms of propaganda are actually beneficial. Think of the public service messages that remind you not to text and drive, for example. Well-designed propaganda activates strong feelings that motivate people to take action.

Teachers, librarians, and school leaders are using the pedagogy of media literacy education to teach about the different types of disinformation, including propaganda. For example, Susan Vernon, a high school English teacher from North Polk High School in Iowa, used Mind Over Media: Analyzing Contemporary Propaganda in working with her students. Mind Over Media (www.mindovermedia.tv) is an online resource developed at the University of Rhode Island’s Media Education Lab, which I direct. The website includes more than 1,000 current examples of contemporary propaganda from across the United States and around the world, on topics including politics and current events; food, nutrition, and health; immigration; environmental science; national and international affairs; crime, law, and justice; health and public policy; media and technology regulation; animal rights; and more. It also offers free lesson plans on exploring new forms of propaganda like viral media and sponsored content.

Students in Ms. Vernon’s class first learned the definition of propaganda and reviewed four common techniques used in constructing it:

• Evoking strong emotions.
• Simplifying information and ideas.
• Appealing to audience needs.
• Attacking opponents.

The students then selected examples of propaganda from the Mind Over Media website and evaluated them on a scale that runs from “harmful” to “beneficial.” When evaluating each item, students had to make explicit their judgments and interpretations through classroom dialogue and by making comments using the online platform. After students evaluated a particular example, such as a meme related to genetically modified foods, the website’s database showed them how others interpreted that example in similar (and different) ways, which created an opportunity for rich classroom discussion.

Becoming media literate means gaining awareness of how and why we choose to accept some information as truthful and other information as false. Making judgments about the potential benefits and harms of online propaganda gives people structured opportunities to practice the art of interpreting and evaluating media. We get to see interpretations that are sometimes more diverse than those we find in our local communities.

“Young people are exposed to so much information that it is a struggle for them to form their own opinions about major topics that impact their world,” says Steve Keim, a high school English teacher at Southern Huntingdon County High School/Middle School in Pennsylvania, who recently explored the topic of contemporary propaganda with his students through the Mind Over Media website. Keim believes that being able to identify propaganda and to filter information for quality is a vital skill for these students to learn so they can avoid being manipulated by news outlets that crop up on their social media pages.

The Thanksgiving Meme

One fascinating artifact on the Mind Over Media website is a meme featuring a reproduction of a classic painting depicting the Puritans and American Indians in the first Thanksgiving celebration. The painting is accompanied by the phrase: “The irony of refusing aid and assistance to refugees/migrants while preparing to celebrate a holiday about receiving aid and assistance as refugees/migrants.” Figure 1 shows the meme, which was shared widely online in 2016.

As the data graph under the photo shows, this meme has been interpreted very differently by users of the Mind Over Media website. Thirty-five percent of website users see it as beneficial, and twenty percent see it as harmful. The polarized results create a natural starting point for dialogue and reflection in the classroom: Why do some people think this meme is beneficial? Why do some see it as harmful?

[FIGURE 1. The Thanksgiving Meme. Source: Media Education Lab, Mind Over Media (www.mindovermedia.tv). Used with permission.]

To answer these questions, multi-perspectival thinking is required. High school students interpret such propaganda in many different ways. One participant interpreted the Thanksgiving meme as somewhat harmful, noting, “This image calls upon old traditions in order to garner sympathy for the refugees from the Middle East war zones. This appeal to tradition is harmful and ignorant of the changing times. The Pilgrims were colonists while the migrants today are refugees under very different circumstances. It should be noted that the colonists caused mass genocide as well, so comparing the migrants of today to the colonists of the past does not exactly paint a pretty picture.”

Another student saw the Thanksgiving meme as somewhat beneficial, writing, “The pilgrims were once migrants searching for a home, and now the world is faced by a new challenge of greater proportion but the same ethical question. The creator of this piece wants the viewers to compare the two situations at hand and have the [United States] apply the same hospitality to the refugees from the Middle East. Yet, that same hospitality didn’t work out all that well for the Native Americans that showed kindness to the pilgrims.”

Although their interpretations differ widely, both students are considering the potential intentions and motives of the author, wondering, “Who is the author, and what is his or her purpose?” Two major theoretical ideas of media literacy education that have been articulated by the National Association for Media Literacy Education (2009) are activated in this lesson: (1) all media messages are constructed, and (2) people interpret messages differently based on their background, life experience, and culture. When students recognize the constructed nature of information, they begin to identify the different points of view that are embodied in the choices authors make. They recognize that meanings are in people, not in texts. Through classroom dialogue and discussion, students learn to appreciate the many different ways that media messages can be interpreted. This helps them activate critical thinking skills and cultivate respect for diverse interpretations.

Updating the Tradition

The critical examination of propaganda is not new. As far back as 1938, high school teachers were using instructional strategies to help build critical thinking about the propaganda of the time, which was disseminated through radio, newspapers, newsreels, and popular movies. The Institute for Propaganda Analysis (1937–1942) developed curriculum resources and activities that demonstrated how high school students could take a close look at the content of a media message and search for evidence, verification, and the communicator’s motives (Hobbs & McGee, 2014).

Now it’s time to update the tradition of propaganda education for the 21st century. With social media sites and news outlets making it easy to “select” our exposure and create echo chambers and filter bubbles, people today may actually get less access to diverse points of view than in previous eras. Often, the true funder of fake news or propaganda is disguised or hidden, as in the use of sock puppets (organizations that deliver messages without revealing the funding sources that support them) or bots and trolls (accounts that amplify messages by means of computer programs or multiple user identities).

The quality of civic education and civic learning in public education must be continually responsive to the lived experience of the students we serve. If schools are to fulfill their social purpose of preparing students for life in a democratic society, education leaders will need to get creative about how to ensure that students are thoughtful and intelligent about the information they consume and that, in the face of increasing polarization, they can tell the fake from the facts. EL

Critically Analyzing Media

Here’s what students should ask every time they engage with contemporary propaganda.

Message: What key information and ideas are being expressed?

Techniques: What symbols and rhetorical strategies are used to attract attention and activate an emotional response? What makes them effective?

Means of communication and format: How does the message reach people, and what form does it take?

Representation: How does this message portray people and events? What points of view and values are activated?

Audience receptivity: How may people think and feel about the message? How free are they to accept or reject it?

References

European Association for Viewers Interests. (2017). Beyond fake news: Ten types of misleading information. https://eavi.eu/beyond-fake-news-10-types-misleading-info

Goodfellow, J. (2017, February 6). Only 4% of people can distinguish fake news from the truth, Channel 4 study finds. The Drum. Retrieved from www.thedrum.com/news/2017/02/06/only-4-people-can-distinguish-fake-news-truth-channel-4-study-finds

Hobbs, R., & McGee, S. (2014). Teaching about propaganda: An examination of the historical roots of media literacy. Journal of Media Literacy Education, 6(2), 56–67.

Kahne, J., & Bowyer, B. (2017). Education for democracy in a partisan age: Confronting the challenges of motivated reasoning and misinformation. American Educational Research Journal, 54(1), 3–34.

National Association for Media Literacy Education. (2009). Core principles of media literacy education. Retrieved from https://namle.net/2009/06/02/the-core-principles-of-media-literacy-education

Ritchie, H. (2016, December 30). Read all about it: The biggest fake news stories of 2016. CNBC. Retrieved from www.cnbc.com/2016/12/30/read-all-about-it-the-biggest-fake-news-stories-of-2016.html

Simon, H. (1971). Designing organizations for an information rich world. In M. Greenberger (Ed.), Computers, communications, and the public interest (pp. 37–72). Baltimore, MD: The Johns Hopkins University Press.

Renee Hobbs ([email protected]) is a professor of communication studies and director of the Media Education Lab (www.mediaeducationlab.com) at the Harrington School of Communication and Media at the University of Rhode Island, where she co-directs the Graduate Certificate Program in Digital Literacy. Her latest book is Create to Learn: Introduction to Digital Literacy (Wiley, 2017). Follow her on Twitter @reneehobbs.

EL Online: Worried your students might develop a total distrust of the media? Read Erik Palmer’s tips in the online article “The Real Problem with Fake News” at www.ascd.org/el1117palmer.

