Norms and Concepts: A Response to Watts and Mosurinjohn

[Note: this could be kinder, longer, more detailed, and better written, but I’m super busy and have to move on to other projects on my summer to-do list. Sorry, Galen, that my response couldn’t have been more constructive.]

In “Can Critical Religion Play by Its Own Rules?,” Galen Watts and Sharday Mosurinjohn offer a number of criticisms of the “methodological school” they call “Critical Religion” (CR). In their narrative, the central claim of CR scholars is that “religion” is a problematic concept that we should view with a great deal of skepticism; in addition, they argue that this has become a hegemonic “common sense among most, if not all, scholars of religion.” Watts and Mosurinjohn’s criticisms of this “school” are threefold: (1) CR scholars are inconsistent when historicizing (they historicize some concepts but reify others), (2) they are crypto-normative (they pretend to be value-free or value-neutral yet their projects clearly reflect normative investments), and (3) their arguments for abandoning “religion” as an analytic term are ultimately arbitrary. Their primary targets appear to be Russell McCutcheon, Tim Fitzgerald, and myself, although they point to many others along the way.

It is difficult to know where to begin in response to the essay. As was immediately apparent on social media, some of those targeted in the essay felt they had been caricatured; some claimed that there is no such thing as a “methodological school” here, since what the scholars named share is perhaps their conclusions—the concept of “religion” is deeply problematic—more than their methods; and some complained that the essay appeared to ignore a good bit of the already existing literature criticizing this type of scholarship. More than a few people observed that the journal—purportedly the “flagship” journal of our field of study—appears to have a history of publishing weakly argued articles that criticize this type of scholarship (see, for instance, the essay by Atalia Omer, “Can a Critic Be a Caretaker too?” [2011], which was terrible). It seems as if the journal’s editors are hostile to the type of scholarship produced by McCutcheon, Fitzgerald, Martin, et al., and as long as an essay is on the right side of the fault line, apparently the quality of the arguments is beside the point. (Of course, I’m not privy to the editors’ private thoughts or the review processes this and other articles have undergone, so this is merely an impression based on what I’ve seen them publish and what they apparently won’t publish.)

I find myself largely in agreement with these initial responses. In my opinion, the authors resorted to caricaturing their opponents, they problematically reified “critical religion” as a “methodological school,” and they did not engage with many of the existing conversations about the body of scholarship that finds “religion” to be a problematic concept. Most importantly, as I will note below, although I was targeted in this essay, I don’t think my scholarship is guilty of most of their accusations. In addition, I have never identified as a “critical religion” scholar, and there are many tensions and disagreements among those scholars grouped together under the abbreviation “CR.” (I suspect that many of those targeted similarly found their work poorly represented in the presentation of “critical religion,” not just me. I’ll leave it to others to point out where in particular their own work was caricatured.)

Some have objected that “we’re not a school” is a lazy objection designed to dodge the substance of the arguments. As a friend of mine put it, even if we’re not a “school,” we “do in fact function like a type of school despite some of [our] differences.” Perhaps. But, even if true, that does not give critics the right to paint with a broad brush and riff on their general impressions. I recently wrote a piece on the phenomenon of “postcritique,” in which I discussed the work of Eve Kosofsky Sedgwick, Bruno Latour, and Rita Felski, whose criticisms of “critique” overlap to some extent. Nevertheless, in my essay I addressed each of them individually, because what is true of one might not be true of another—the overlap is only ever partial, and we must be sensitive to that. Anything else is intellectual laziness.

In any case, let me turn to the substance of the accusations in the essay. Why do I think my scholarship is largely not guilty of the charges laid out here? First, I’ve devoted myself to historicizing not only “religion” but many other discourses as well—some of which I continue to use, some of which I have abandoned—such as the following: the discourses of liberalism and the idea of a “private sphere”; the concept of “things-in-themselves”; discourses related to ideology and domination, particularly the language of false consciousness and internalized oppression; subjective definitions of social domination; the language of autonomy and liberal discourses on subjectivity, freedom, and repression; and more. It is hardly the case that I focus only on the problems with the concept of “religion.”

Second, I’ve written at length about what is problematic or objectionable about the concept of religion—by no means is my recommendation to use other terms entirely arbitrary. Unfortunately, Watts and Mosurinjohn do not address any of my arguments, other than to point out that I allege the concept has a great deal of normative baggage. Nowhere do they consider my arguments about what that baggage is—outlined in Masking Hegemony and Capitalizing Religion—and nowhere do they make an argument that the concept is worth saving despite that particular baggage. How can I accept their assertion that my arguments for avoiding the use of “religion” are arbitrary when they never actually address any of my arguments?

Notably, the fact that Watts and Mosurinjohn lump together a wide variety of scholars under the category “CR” is most problematic on this point. They insist that the arguments against “religion” used by some so-called CR scholars contradict the arguments of other so-called CR scholars, and conclude that the arguments against “religion” are ultimately arbitrary. That point would make sense only if those “CR” scholars claimed to represent a unified school of thought and if offering multiple arguments for a position made the position arbitrary. If this group does not identify as a coherent school of thought—and they don’t—then pointing out that there is disagreement as to exactly why “religion” is problematic isn’t particularly interesting—it just means that people disagree. Bizarrely, Watts and Mosurinjohn claim that these scholars are probably correct that the concept of religion carries a great deal of “unduly normative baggage,” but they drop the point and never return to whether that is a good reason to abandon its use, other than to point out that all discourses are normative in some way or another. However, the fact that all discourses are in some way normative is probably not a reason to retain any particular concept. That all discourses are implicitly normative is not a reason to continue using the concept of, for instance, “primitive savages”; whether a normative concept is worth retaining likely depends on the historical specifics rather than on the abstract point that “all discourses are normative.” The authors offer no defense of why “religion,” despite its apparent baggage, still warrants use.

There is one point this particular essay makes that, arguably, is worth taking seriously: none of these so-called “critical religion” scholars have produced a systematic account of the nature of normativity or fully accounted for what kinds of norms might be appropriate or inappropriate in scholarship (they do cite a short blog post I wrote more than half a decade ago, which I’ve never fleshed out in detail in print—an unfinished project, to be sure). As Watts and Mosurinjohn rightly note, the claim that “colonialist” discourses reified “religion” in ways that supported colonialism is implicitly normative: the assertion appears to be relevant only if one has some sort of moral objection to colonialism. As a result, scholars who make such claims may very well inadequately reflect upon the extent to which their own work is normative even as they use “normativity” as a weapon against other scholars’ work. Scholars like myself could do a better job of accounting for why we depend upon some norms in our work even as we criticize the use of different norms in the scholarship of others.

The authors imagine this to be a truly damning immanent critique because it catches their opponents in an apparent contradiction: CR scholars are implicitly normative even as they claim to “refrain[] from normative evaluation.” That is, they pretend to be value-neutral while their scholarship is value-laden. However, while some scholars they criticize have written or do write as if their purported goal is to be value-neutral in some fundamental way, this criticism falls wide of the mark as concerns my own scholarship. In my first monograph, Masking Hegemony: A Genealogy of Liberalism, Religion, and the Private Sphere, I explicitly noted that what drove my criticisms of liberal discourses is that I thought they did a poor job of securing gay rights and the right of women to have abortions—something that troubled me because I support gay rights and the right of women to have abortions. In Capitalizing Religion: Ideology and the Opiate of the Bourgeoisie, my second monograph, I criticized capitalist discourses because I hate capitalism and the harm I’ve seen it do to others, and I explicitly said so in the introduction. In my introductory textbook, A Critical Introduction to the Study of Religion, and my latest book, Discourse and Ideology: A Critique of the Study of Culture, I explicitly cite my normative opposition to social domination as a motivating impulse behind my desire to produce sophisticated methods for studying discourse and domination. Consequently, pointing out that my scholarship is value-laden comes as no surprise to me or my readers; it merely repeats what I’ve explicitly said in print. I’ve never pretended to be value-neutral or non-normative, something that should have been evident to Watts and Mosurinjohn had they read any of my books.

At one point the authors deride me, William Arnal, and Russell McCutcheon for criticizing “religion” while simultaneously deploying the concept of “colonialism.” They note that I once said in print that “our role as ‘critics’ is precisely not to ‘fall back on those folk classification schemes that have been naturalized for us,’” but then point out that Arnal and McCutcheon appear to fall back on the folk classification scheme that talks of colonialism:

Needless to say, establishing what deserves the label colonialism, much like determining what merits the label religion, is no neutral act. In fact, far from it. So, we might ask: which definition of colonialism are Arnal and McCutcheon working with? What necessary and sufficient criteria are they applying? And from whence do these criteria emerge? They nowhere actually define it, so we are forced to hazard a guess. One hypothesis we think plausible is that their criteria emerge from the folk classification schemas that have been naturalized for them—schemas that their readers (us) probably share. Indeed, Arnal and McCutcheon know full well that their academic readers likely share their folk understanding of colonialism (negative connotations and all) and therefore take advantage of the semantic shortcut to save space in their writing. In other words, by reproducing this schema in their historical analysis, they do exactly what they have repeatedly criticized others for doing: they “authorize the specific local as the universal.”

Of course the authors are correct that the label “colonialism” is not neutral. What is surprising to me is their suggestion that this point would be lost on Arnal and McCutcheon who, I’m certain, know quite well that “colonialism” is a discourse that is neither neutral nor universal, but rather arose at a particular place in history—in particular, their use of the term likely arises less out of the discourses that legitimated colonialism than out of the postcolonial discourses that delegitimated colonialism by pointing out how it functioned to reinforce European domination over others. In addition, it’s unlikely that Arnal and McCutcheon are here falling back on a “folk” understanding of colonialism; rather, it seems clear that only those exposed to quite sophisticated postcolonial scholarly discourses would fully understand the nature of their arguments here. Finally, while Arnal and McCutcheon were not engaged in historicizing discourses on colonialism, it seems clear to me that they would likely be open to such a project. Ultimately, no one can historicize their entire vocabulary at once, and criticizing a scholar for historicizing x, y, and z but not a, b, and c would be like criticizing a sociologist for not focusing on psychology, or criticizing a climate scientist for not focusing on the latest string theory.

Watts and Mosurinjohn’s essay spends a lot of time “in the weeds,” so to speak—there is a lot of emphasis on details about which CR scholar said this and which said that, why that contradicts something they or another CR scholar said, etc. That made it difficult for me to read, as I would agree with some of these folks but not others; as a result, after reading through all the details I’m left a bit confused as to exactly which of the criticisms are supposed to stick to me and which are not.

Ultimately, at the end of the essay, I am left with the following two questions, which address considerations that are front and center in their criticisms.

  1. What is their argument for which kinds of scholarly normativity are appropriate and which are not? I assume they don’t think it is okay for scholars to say “Jews are evil,” but where do they draw the line?
  2. If we are to retain the concept of religion (and other concepts we might otherwise historicize rather than deploy) as an analytical tool, what is their justification? What is it useful for? What does it pick out from the world, and how does it pick it out in a way that is more useful than other vocabularies? Scientists no longer refer to the “ether,” not because no use could be made of it or because nothing could be said about it, but rather because it was superfluous for describing the world in a useful manner. Where do they draw the line between useful and less useful concepts, and what are their criteria?

As I reflect on the essay, it seems like they object to how so-called CR scholars answer those sorts of questions, but fail to offer answers of their own. As such, it reads (to me, although I’m sure others read it differently) as a series of “gotcha claims” (gotcha! you contradicted yourself!) but without adding anything substantive to the general matters being addressed. Given time, I could similarly pick apart the claims of any scholar like this (even my favorite ones—I could do this with Foucault or Derrida, for instance), but if I’m not contributing to the substance of the discussion, are there any gains other than a petty sense of intellectual superiority? I suppose they “got me” by pointing out that I once implied in print that our scholarship should perhaps refrain from “engaging in a social or political field in which we or others have something to gain or lose from our application of this contested term,” while my own scholarship is to some extent engaged in social or political fields. I’ll have to be more precise in the future, and thus I guess I’m grateful for their criticism there. But while I believe I have offered many reasons to avoid the concept of religion—not least of which is the fact that in some political contexts its use appears to enfranchise dominant groups and disenfranchise minority groups—Watts and Mosurinjohn have offered absolutely nothing of substance about the nature of scholarly norms or why we should or shouldn’t use any concept.

They imply at one point that they want to continue to use the word religion because religions “have significant consequences for people’s lives,” consequences that they want to insist may be “good or bad, just or unjust.” Unfortunately, they offer nothing to support this claim, do not defend their definitions of “bad” or “unjust,” and do not show that their social critique crucially depends upon the concept of religion—rather than upon less problematic concepts—to make such judgments.

Anyone can take shots at other scholars by focusing on minor details while ignoring the larger issues. If Watts and Mosurinjohn have anything of substance to add on those larger issues, none of that substance appears here.


Evidence for Racial Disparities in the US

Pew Research Center (2016)

  • Median household income for blacks was 55% that of whites in 1967; the number rose to only 60% by 2014.

Ira Katznelson, When Affirmative Action Was White (2005)

  • When Social Security was signed into law by Franklin Delano Roosevelt in 1935, the available benefits were contingent upon “prior wages, which, for blacks, often had been derisory” (42). In addition, farmworkers and domestic workers—i.e., the sorts of jobs African Americans were more likely to hold—were excluded from the program altogether. “Across the nation, fully 65 percent of African Americans fell outside the reach of the new program; between 80 and 90 percent in different parts of the South” (43).
  • These exclusions were remedied by legislation passed in 1954, but “even then, African Americans were not able to catch up since the program required at least five years of contributions before benefits could be received” (43). Consequently, “for the first quarter century of its existence, Social Security was characterized by a form of policy apartheid” (43).
  • New Deal legislation under Roosevelt was also racially disproportionate in the protections offered to laborers. The new laws were designed in part to improve the “health, efficiency, and well-being of workers,” in part through the establishment, for instance, of a minimum wage and a 40-hour work week (55). However, once again, predominantly black farmworkers and domestics were excluded from these protections. The decision to exclude farmworkers and domestics appears to have been motivated by explicitly racist concerns. One southern lawmaker noted, for instance, that an equal minimum wage for whites and blacks “might work in some sections of the United States, but those of us who know the true situation know that it just will not work in the South. You cannot put the Negro and the white man on the same basis and get away with it” (60).

Andrea Flynn et al., The Hidden Rules of Race (2017)

  • Another particularly relevant past practice with still-lingering consequences was “redlining.” One part of Roosevelt’s New Deal involved the creation of the Federal Housing Administration (FHA) in 1934. Once formed, the FHA developed policies according to which home loans could be offered to prospective home buyers. It favored offering loans for homes in predominantly segregated, homogeneous, white parts of town surrounded by a “green line” on FHA maps, and avoided offering loans for homes in predominantly black or desegregated parts of town surrounded by a “red line,” on the grounds that homes within the red line were of lower quality and value and less likely to appreciate in value. “[B]ecause of the way the administrative rules were set up, the growth in housing was channeled into [predominantly white] suburbs at the expense of central cities” (69).
  • Studies have demonstrated that light-skinned and dark-skinned blacks are treated differently on the job market. If we control for “level of schooling, high school performance, work experience, health status, self-esteem, age, marital status, number of dependents, workplace characteristics, and parental socioeconomic status and neighborhood characteristics at age sixteen, [scholars] found lighter-complexioned black males experienced treatment in U.S. labor markets little different from white males. On the other hand, using the same controls, black males with medium and dark skin tones incurred significant discriminatory penalties relative to white males” (Flynn et al. 2017, 42). Consequently, “[g]reater proximity to white-identified norms of appearance and attractiveness carries benefits” (42).
  • Although some Americans would prefer to attribute the wealth gap to minorities’ laziness or their “victim mentality,” it is clear that the racial gap persists even among people who hold the same educational level. “At every level of education, earnings for black men and women lag behind those of their similarly skilled white counterparts” (78). That is to say, median income for hard-working blacks with college degrees, master’s degrees, or further, advanced degrees (such as doctorates) is lower than the median income for hard-working whites who’ve accomplished the same level of education and hold the same skills (79).
  • Segregation in the workplace appears to economically benefit whites. One study revealed that “a $10,000 increase in the average annual wage of an occupation is associated with a seven percentage[-point] decrease in the proportion of black men in the occupation” (86). That is to say, the fewer black men there are in an occupation, the more money the remaining members of that occupation make. Crucially, once again, this holds independently of education and skill level, and thus cannot be attributed to laziness or a “victim mentality.” “The relationship between wages and racial make-up of an occupation is true across all skill levels, which tells us that wage disparities cannot be explained away by education or training differentials” (86).
  • Studies show that employers sometimes evaluate applicants on the basis of shifting criteria, depending on the race of the applicant. “[E]mployers willingly overlook[] missing qualifications in white job applicants and weigh[] qualifications differently depending on the applicants’ race” (87). In particular, one study showed that “deficiencies of skill or experience appear to be more disqualifying for minority job seekers” (87).
  • One study showed that white men with a felony record were more likely to be called back for job interviews than black men without a criminal record but holding the same skills; that is to say, it appears that we have statistical evidence that felony records are more likely to be overlooked if one is white, so much so that whites with felonies are apparently seen as more qualified than blacks without, despite all other qualifications being equal (87).
  • Another study showed that, in large and racially diverse cities like Boston or Chicago, applicants who submitted resumes with white-sounding names (like Emily or Greg) were 50 percent more likely to be called for an interview than applicants with identical resumes but with black-sounding names (like Lakisha or Jamal; Bertrand and Mullainathan 2004, 998). What about when the resumes differ? For instance, what happens when resumes show more or fewer skills or years of work experience? “In summary, employers simply seem to pay less attention or discount more the characteristics listed on the resumes with African-American sounding names. Taken at face value, these results suggest that African-Americans may face relatively lower individual incentives to invest in higher skills” (cited in Flynn et al.).
  • The darkness of African-Americans’ skin also correlates with “greater odds of harsher sentences if convicted of comparable crimes, including greater odds of receiving the death penalty for similar capital crimes” (42).
  • Powder cocaine is disproportionately more likely to be used by whites, while crack cocaine is more likely to be used by blacks. Although the two drugs are “virtually identical” (powder cocaine offers longer highs while crack offers more intense but shorter highs), “sentences for using crack cocaine [were] one hundred times longer than for powder cocaine” (119). In 2010, federal laws were passed that reduced the discrepancy “from 100:1 to 18:1” (119). These sentencing laws therefore produce a rather significant disproportionate impact on blacks.
  • In addition, “[r]esearch has shown that more than 80 percent of defendants sentenced for crack offenses are African American, despite the fact that more than 66 percent of users are white or Hispanic” (119). That is to say, while blacks make up 34 percent or less of crack users, they are more than 80 percent of those sentenced for crack offenses. This statistic is likely conditioned by a number of possible causes, such as implicit bias or racial profiling on the part of the officers investigating and arresting black users, or the fact that whites or Hispanics (more likely whites) have access to wealth that permits them to hire more expensive lawyers who are more capable of preventing conviction or reducing sentences for those who are convicted. Whatever the reason, the application of crack cocaine sentencing laws disproportionately impacts African-Americans.
  • Apart from crack, studies show that, in general, “African Americans comprise only 15 percent of the country’s drug users, yet they make up 37 percent of those arrested for drug violations, 59 percent of those who are convicted, and 74 percent of those sentenced to prison for a drug offense” (119). That is to say, they make up a minority of drug users but a majority of those convicted and sentenced for drug offenses. This is another considerable disproportionate impact of an apparently colorblind set of laws.

Michelle Alexander, The New Jim Crow (2012)

  • Studies show rates of marijuana use are relatively uniform across race, but studies also show that “white students use cocaine at seven times the rate of black students, use crack cocaine at eight times the rate of black students, and use heroin at seven times the rate of black students” (99). In addition, “white youth have about three times the number of drug-related emergency room visits as their African American counterparts” (99). However, despite the fact that we’ve little empirical evidence that blacks use drugs at rates higher than whites, “in seven states, African Americans constitute 80 to 90 percent of all drug offenders sent to prison” (98). In addition, “[i]n at least fifteen states, blacks are admitted to prison on drug charges at a rate from twenty to fifty-seven times greater than that of white men” (98).
  • As a result of the fact that warriors in the so-called “war on drugs” tended to target black neighborhoods, “1 in every 14 black men was behind bars in 2006, compared with 1 in 106 white men. For young black men, the statistics are even worse. One in 9 black men between the ages of twenty and thirty-five was behind bars in 2006” (100). Again, however, based on the statistics cited above, “[t]hese gross racial disparities simply cannot be explained by rates of illegal drug use activity among African Americans” (100). Nor can these disparities be explained by incarceration due to violent crimes, as “[t]oday violent crime rates are at historically low levels, yet incarceration rates continue to climb” (101).
  • Racial profiling in policing has consistently been found acceptable by the Supreme Court and, as a result, it persists across the US, despite studies that have demonstrated, for instance, that “in New Jersey, whites were almost twice as likely to be found with illegal drugs or contraband as African Americans, and five times as likely to be found with contraband as Latinos” (133). However, because of racial profiling, “in New Jersey, the data showed that only 15 percent of all drivers on the New Jersey Turnpike were racial minorities, yet 42 percent of all stops and 73 percent of all arrests were of black motorists—despite the fact that blacks and whites violated traffic laws at almost exactly the same rate” (133).
  • “Maryland studies produced similar results: African Americans comprised only 17 percent of drivers along a stretch of I-95 outside of Baltimore, yet they were 70 percent of those who were stopped and searched. Only 21 percent of all drivers along that stretch of highway were racial minorities (Latinos, Asians, and African Americans), yet those groups comprised nearly 80 percent of those pulled over and searched” (133).
  • “In Volusia County, Florida, … [o]nly 5 percent of the drivers on the road were African Americans or Latinos, but more than 80 percent of the people stopped and searched were minorities” (134).
  • “In Illinois, … [w]hile Latinos comprised less than 8 percent of the Illinois population and took fewer than 3 percent of the personal vehicle trips in Illinois, they comprised approximately 30 percent of the motorists stopped by drug interdiction officers. … Latinos, however, were significantly less likely than whites to have illegal contraband in their vehicles” (134).
  • “A racial profiling study in Oakland, California, in 2001 showed that African Americans were approximately twice as likely as whites to be stopped, and three times as likely to be searched” (134).
  • According to a study “commissioned by the attorney general of New York, … African Americans were stopped six times more frequently than whites, and … stops of African Americans were less likely to result in arrests than stops of whites—presumably because blacks were less likely to be found with drugs or other contraband” (135).

Afterword: Consequences for the Modern University


I’m presently completing a book project I’ve been working on for several years, tentatively titled Discourse and Ideology: A Critique of the Study of Culture. As I’m wrapping up the project, I’ve been thinking about the consequences of the project for my particular social location: the modern university. Here are some of the thoughts I’m trying out for the “Afterword.”

*****

Throughout this book I’ve defended a poststructuralist approach to discourse analysis and ideology critique. This is, to no one’s surprise, the approach I use as an instructor in the college classroom. Much of what I do as a teacher involves showing students how, in particular contexts, people have to say X—i.e., to utilize locally authoritative discourses—to serve their interests, whether X is “the Bible says,” “it’s the letter of the law,” “this is bullying; you’re bullying me,” or “this assessment demonstrates I met the outcomes I set for myself.” In many cases, it doesn’t matter if X is true, or if X is something they actually believe—it’s just what they have to say to influence the behavior of other persons in this context. While some may view this as overly cynical, it seems to me to be one of the most useful lessons one can learn in college: what counts as persuasive is always context-dependent.

For those in institutional settings like mine, examples are easy to identify. As Chair of the Faculty Senate at my college, I am charged with acting in ways that serve the interests of the Faculty. In order to serve the interests of the Faculty, I had to learn what sorts of discourses are persuasive to different parts of the institution. What is persuasive to the Board of Trustees may not be persuasive to the President, what is persuasive to the President may not be persuasive to the Provost, and what is persuasive to the Provost may not be persuasive to the Registrar’s office, the office for student services, or the athletics director—despite the fact that we all work in the same institution and are, in principle if not in fact, working toward the same social ends. What serves the interests of occupants of some positions in the social structure is often different from what serves the interests of those in other positions.

In addition, what counts as relevant empirical evidence sufficient to move them varies: while pointing out that some rule in the dorms appears to unfairly punish students who are already disadvantaged might move the Provost to initiate a revision process for the rule, that evidence is unlikely to move members of the security staff, who, if confronted by that information, might very well say “that might be true, but this is the rule and I’m obligated to do my job and write up this student for their infraction.” The fact that a student is often rude to her professors might be something of interest to a student’s advisor attempting to help that student improve her academic performance, but if a student complains to the Dean that I graded her unfairly, the fact that she was rude to me will be entirely beside the point to the Dean if there is actual evidence for the assignment of an unfair grade. Navigating any complex social institution requires attending to which discourses, ideologies, or empirical evidences are relevant in which context, and those who fail to understand this fact will likely be unable to serve their own interests or the interests of those they represent.

With the rising costs of a college education, and increasing concerns over whether a college education is a worthwhile investment, the question of domination often presses itself upon me. As I regularly point out to my students, when I am acting as a professor and they are acting as students, we are in a relationship of domination; by all accounts, I gain more privilege and material capital from our relationship than they do. If they succeed in completing their degree (leaving aside those students who pay a great deal in tuition but are set up to fail because they were admitted without the requisite qualifications or were not sufficiently served by the academic resources available to weak students), they gain a credential that will likely—although not always—bear significant value across their lifetime; the costs they incur in gaining that credential, however, are enormous. By contrast, for most professors the benefits are greater—and more directly assured, insofar as we command a salary whose value is not contingent in the way the value of a student’s credential is—and the costs fewer.

For that reason, by helping students identify what is in their interests and how to manipulate discourses so as to serve their interests, I hope to reduce the asymmetry of costs and benefits built into our institutional relationship of domination. I never feel justified in telling them what discourse they ought to adopt, or what sympathies they ought to feel. However, I do think their interests are served when I show them how to switch from one register to another and produce varying empirical evidences as needed. Which discourses will be useful to them will always of course be contingent on their particular interests and sympathies. If they are attempting to cure cancer, biological discourses will be more useful for identifying which variables in the world they can manipulate to serve their interests than the discourses they learn in a sociology class. By contrast, if they are attempting to reduce the racial economic gap in the US, sociological discourses will likely be more useful for intervening in relevant variables than biological discourses. Ideally, each part of the curriculum teaches them forms of knowledge that assist them in achieving their interests; in English classes they learn how to use clear writing to persuade, in graphic design classes they learn how to use the visual arrangement of words and shapes to persuade, and in statistics classes they learn how to use math to persuade.

This seems to me to be the best justification—at least given the discourses I accept as authoritative—for the ongoing relevance of the liberal arts: we train students to use varying discourses, show them how those discourses are more or less useful depending on their immediate needs or interests, and show them how to switch from one discourse to another as their needs, interests, or contexts shift. When we are successful, we equip our students with the skills they need to serve their interests for the remainder of their lifetime. This does not eliminate my relationship of domination with them, but it attenuates it.

In addition, some of my students—like me—have sympathies for other, disadvantaged subjects in our world. By equipping such students with discourses that allow them to bring into relief forms of social domination in our world, as well as means of manipulating the social order so as to reduce those forms of domination, I’m not only serving their interests but also working to advance a social agenda that conforms with my sympathies.

In many ways, this distinguishes me from faculty who view themselves as “social justice warriors,” insofar as I am less prepared than some of them to insist that students ought to share my sympathies. I fear that their approach contributes to the stereotype that college professors brainwash students with their liberal ideologies. While I wish more students did share my sympathies, I also maintain that Hume was right in insisting that no “ought” ever follows from any “is.” It might be a fact that racism saturates American culture, but whether or not one is moved by this is contingent upon one’s sympathies, which are never universal. Were I to naturalize my sympathies, or were I to present them as if they were universal, I would be mystifying the world rather than showing students how to understand it. We cannot independently verify—by appealing to empirical evidence—claims like “you ought to care about racial minorities” in the way that we can verify claims such as “if you have sympathies for minorities, you can serve the interests of those minorities by acting on the social order in these particular ways.”

This is as “objective” as I think we can be if we accept the premise of poststructuralism that knowledge is always conditioned by human interests and historically contingent upon regnant discourses. But this is objective enough for me, and, I hope, helps to authorize what I see as uniquely valuable in the modern liberal arts university—namely that, unlike most other public discourses, what we teach is in some way empirically verifiable independently of any particular students’ interests or sympathies, and that the method of inquiry we model will be useful for them throughout their lifetime, even if their interests diverge from ours.


The Social Functions of Obligatory Denunciations


In preparation for a new course I’m teaching this fall, I’ve been reading a great deal on Islam. I’ve surveyed both scholarly and popular narratives on Islam, particularly as I hope to compare and contrast such narratives in my course. One thing that has struck me is the near-universal and apparently obligatory denunciation of “extremist Muslims,” “Islamic fundamentalists,” or “Islamic terrorism,” and of course Al-Qaeda in particular. In addition, the condemnations are presented as if obvious or common sense. It’s apparently “obvious” that the September 11 attacks on New York and Washington, D.C. were “terrible” or “evil.” Interestingly, these denunciations appear even when—or perhaps because—the prose that follows goes on to historicize or contextualize the form of violence under consideration. Apparently, if one is going to offer reasons for which a group might perpetrate violence, one opens oneself to the charge that one is excusing that violence—hence the obligatory qualifications of the following sort: “before getting to the reasons behind 9/11, I want to make it clear that Al-Qaeda’s actions were evil and unforgivable.” Such denunciations, it is worth noting, appear in both scholarly and popular literature.

For all of the reasons outlined by Ferdinand de Saussure and Jacques Derrida, signifiers signify only in relation to their differences from other signifiers. As such, condemnations of “illegitimate” violence are meaningful only in relation to their other: “legitimate” violence. For a phrase like “illegitimate violence” to be meaningful, there must be a contrast—implicit or explicit—with “legitimate violence.”

Consequently, I would argue that these obligatory denunciations of illegitimate violence have a dual social function (and here I play off of the double [and opposite] meanings given to the word “sanction”): such denunciations negatively sanction—by decrying—illegitimate violence, but simultaneously positively sanction—by implicitly condoning, absolving, or excusing—legitimate violence. Every such denunciation is simultaneously a signal of approval.

This is why the one-sided or unidirectional nature of these obligatory denunciations is so revealing: in all of the literature I’ve been reading, I’ve not seen a single obligatory and obvious denunciation of, e.g., the violences perpetrated by the United States. Even when criticized, the actions of the United States are, at worst, complicated, lamentable, unfortunate, but never obviously terrible or evil.

So, as I head back to the classroom this fall, I’m going to think before I qualify my lectures by delivering “obvious” and obligatory condemnations of the forms of violence we’ll necessarily cover. Such verbal sanctions—especially when unidirectional—function implicitly to legitimate other forms of violence.



On Neo-Perennialism

Last year I commented on Facebook that I thought there were structural similarities between classical perennialism in religious studies and the arguments in three recent monographs I had read, specifically Stephen Bush’s Visions of Religion: Experience, Meaning, and Power, Jason Blum’s Zen and the Unspeakable God: Comparative Interpretations of Mystical Experience, and Kevin Schilbrack’s Philosophy and the Study of Religions: A Manifesto.

In summary, my claim was that all three worked with an implicit, universalistic schema whereby all religions throughout history are ordered according to an experience–belief–community sequence: humans have exceptional experiences, form beliefs on the basis of those experiences, and then form religious communities around those experiences and beliefs. Unlike classical perennialists, all three of these authors intend to be unreservedly historicist in their approach—they all explicitly denounce classical perennialism—and yet, in my opinion, this ahistorical experience–belief–community schema haunts their work. Aaron Hughes—editor of Method and Theory in the Study of Religion (alongside Steven Ramey)—asked me to put my argument on paper, and this resulted in an essay titled “‘Yes, … but …’: The Neo-Perennialists.” The abstract reads:

This essay argues that despite their opposition to perennialism, a number of recent scholars inadvertently repeat some of the problematic gestures of perennialism. These scholars are attempting to push the field forward after poststructuralist critiques of religious studies, particularly regarding the varieties of essentialism that have plagued the field. However, their account of “religion” ends up looking, at least in some respects, little different from the pre-critical, essentialist, and ahistorical accounts of religion that were regnant prior to the wave of poststructuralist critiques of religious studies. To some extent we appear to be back to where we started.

MTSR sent the piece out for comment to all three of the authors I criticized. Thus a symposium or conversation of sorts began, resulting in my article, their three responses, and my rejoinder. (All five pieces are now pre-published electronically and can be found on MTSR’s website [links below].) All three authors accused me of misinterpreting them, insisting that there is no such experience–belief–community schema to be found in their work; I’ll leave it to others to decide if my claim is a total fiction or if there is indeed some truth to it.

Despite our disagreement and whatever our differences, I do believe that the exchange was useful insofar as it allowed all four of us to clarify some of the similarities and differences between our approaches; thus I’m greatly in debt to Bush, Blum, and Schilbrack for their willingness to participate in the conversation (many thanks to all three of you!). I hope others find our exchange to be of use for thinking about the nature of and appropriate limits to scholarship in our field.

 


A Canadian Myth of Origin

The following is an excerpt from a chapter I’m writing for a book on mythmaking and identity formation at public tourist attractions, edited by Erin Roberts and Jennifer Eyl. I’d like to thank them for allowing me to share this prior to the book’s publication.


In January of 2017 I visited Ottawa, Canada’s capital city. At that time the downtown area was saturated with banners and signs marking “Canada 150,” the year-long celebration of the 150th anniversary of the creation of Canada as a self-governing dominion or confederation, independent of Britain (established via the British North America Act, 1867). The Canada 150 logo could be seen on just about every street:

[Image: the Canada 150 logo]

As the Canada 150 website claims,

[t]he logo is composed of a series of diamonds, or “celebratory gems,” arranged in the shape of the iconic maple leaf. The four diamonds at the base represent the four original provinces that formed Confederation in 1867: Ontario, Quebec, New Brunswick and Nova Scotia. Additional diamonds extend out from the base to create nine more points—in total representing the 13 provinces and territories.

The Canada 150 logo is an evocative symbol and will become an enduring reminder of one of Canada’s proudest moments. The maple leaf motif is recognized at home and abroad as distinctively Canadian, and it fosters feelings of pride, unity and celebration.

Although the four diamonds are said to represent the “original” provinces, just what exactly constitutes the “origin” of Canada is, in fact, a deeply contested matter. The website claims the maple leaf “fosters unity,” but other cities—such as Vancouver—have launched a “Canada 150+” campaign in order to note that there were aboriginals in North America long before the formation of the confederation in 1867, and that those aboriginals are perhaps a part of a “Canada” that existed prior to that particular point in time. Tensions between aboriginals and the descendants of the French and British colonists have been present since the settlers first arrived, and the status of the First Peoples is to this day the subject of ongoing legal battles.

I found a particularly interesting site for the complex discursive construction of “Canada” at the Canadian War Museum in Ottawa, particularly in the “Early Wars in Canada” permanent exhibit. According to the museum’s website, this exhibit focuses on “The wars of First Peoples, the French and the British [which] shaped Canada and Canadians.” What is ambiguous here, of course, is just what the referent of “Canada” is in “Early Wars in Canada.” Since most of the exhibit concerns a time period before the confederation of 1867—the exhibit begins by noting that it will cover “earliest times to 1885”—to what does the term Canada refer? The exhibit claims to depict “Wars on Our Soil,” but who constitutes the “we” behind the “our” in “our soil”?

One of the first messages in the exhibit claims that “War has shaped Canada and Canadians for at least 5,000 years.” The excavation of 11 bodies with “fractured skulls and smashed facial bones and teeth” at an archaeological site at Namu, British Columbia—dated to four or five millennia ago—is cited as evidence. Notably, such a claim implies that the region in North America that eventually became the state of Canada was always Canada, and that the people who lived there were always Canadians—or, if not always, at least from approximately 3000 BCE. In this way, Canadian-ness is anachronistically—yet strategically—projected backwards in time from the present, making the present a teleological end-goal of the last 5,000 years.

Notably, such an anachronistic projection of the present into the past could be done at any moment in history. For instance, imagine that in a thousand years what we now consider the nation of Canada becomes annexed to Mexico; at that point, the narrative could be altered such that, “War has shaped Mexico and Mexicans for at least 6,000 years.” Nothing ensures that the retroactive identification of the past with the present will be stable; the past, then, can continually be rewritten. Revisionist history is, perhaps, the only type of history possible.

The survey of particular wars that have taken place across “Canada” begins with inter-tribal battles between First Peoples. Citing a narrative from the Odawa tribe, the exhibit notes that hunters who went beyond the respected boundaries of their tribe risked death at the hands of neighboring tribes; as more deaths occurred, “several states were obliged to declare open hostilities against each other …. From this time they were engaged in constant warfare.” The inclusion of the Odawa tribe and the First Peoples generally in an exhibit within the Canadian War Museum implies that these peoples were Canadians, even if they did not identify as such. Much as many Christians co-opt ancient Israelite traditions for Christian purposes, here it seems contemporary Canadians claim ancient First Peoples as their own, for contemporary purposes.

Further into the exhibit, following displays of material evidence of the means of war between such tribes (i.e., spears, bows and arrows, etc.), a more cautious note appears: “In Iroquoian communities in what is now southern Ontario, every man and woman had a military role” (emphasis added). This is notable insofar as it attempts to avoid the anachronism seen in the previous parts of the exhibits. However, it also rhetorically distances the First Peoples from contemporary Canadian identity. Although the Iroquois may have lived on the land now known as Ontario, perhaps they were not Ontarians. (Arguably, the creators of the exhibit want to have their cake and eat it too: staking out Canada’s ancient authority or authenticity by including First Peoples at one point, but excluding First Peoples when it comes to contemporary political authority.)

Mere presence upon what came to be known as Canadian soil is apparently insufficient to make one a Canadian, as museum-goers next learn when the exhibit comes to the Vikings, who are described as “alien invaders.” Although they “established an outpost” at what came to be Newfoundland, they were enemies of the First Peoples and were eventually defeated and forced to leave the continent. From this it appears that the First Peoples were Canadians, but the Vikings—despite their stay—were not.

By contrast, when the exhibit gets to the arrival of the French, the French are not characterized as “alien invaders.” On the contrary, they are said to have “settled” and to have “founded” Quebec. In addition, they built forts “for defence against European rivals.” The status of the French is thus, at this point, ambiguous. Although they have “settled” in Canada, they are also “European” and have “European rivals.” Perhaps at that point the French occupied a liminal space between France and Canada? Perhaps their parturition from France and the birth of Canada was not yet complete? Either way, it is clear that their identity is here, at this point in the narrative, individuated primarily by the fact that they arrived from France, insofar as they are consistently referred to as “the French.” That individuation seems to take priority over their other possible identities.

The Europeans brought guns with them, and the exhibit notes that as the First Peoples adopted their use, it changed the way they engaged in war. “Algonkians and Hurons acquired matchlock muskets through trade. When they realized that wooden armour provided no protection against lead bullets, First Peoples stopped wearing armour and fighting battles in the open.” Here the First Peoples’ identities are individuated through their tribal names—Algonkian and Huron—but insofar as the header above this text claims that “Firearms changed First Peoples warfare in Canada” (emphasis added), perhaps as First Peoples they are nevertheless still Canadians. However, the exhibit then turns to note that as the Algonkians and Hurons allied with the French, they collectively warred against “the Iroquois League and the British.” Is Canada a land divided at this point? If the Iroquois are part of Canada, is this civil war?

The “Post-Contact Wars” between the Iroquois and the “Algonkian-French-Huron alliance” had the effect of “militarizing” Canada:

Every man became a soldier, every parish had its own militia, and every town had a garrison, fortifications, and a military commander. The Governor-General, who served as commander-in-chief, could mobilize Canada’s entire armed strength within days.

The use of the word “entire” is instructive here; if the governor can mobilize all of Canada’s military against the Iroquois League, then it follows that the Iroquois are, apparently, not part of Canada. Later the display claims that “Canada faced defeat by the Iroquois League,” further implying that the Iroquois were not included among the Canadians. The exhibit goes on to say that, “[b]eginning in 1669, Canadian men aged 16 to 60 received military training and served in the militia …. They joined First Peoples warriors on raids against the Iroquois League and the British.” Here Canadian men joined First Peoples, in which case First Peoples are apparently not Canadian; here “Canadian” appears to refer only to the French forces.

Later in the exhibit, museum-goers learn that the relations between the First Peoples and the Europeans involved both tension and accommodation. “First Peoples found themselves accommodating to or resisting the European presence, while working to preserve their own culture and heritage.” What is remarkable about this statement is, in part, that First Peoples’ “own culture” seems, implicitly, to be something pre-Canadian. They had their own culture, which they tried to preserve from (corruption by?) European influence. Are they, then, pre-Canadian Canadians? The same paragraph continues: “This accommodation and resistance continues today.” This last claim implies that perhaps there is still something un-Canadian about both First Peoples and the “European presence” in Canada. Perhaps, then, we are dealing not with civil war but a war between foreign nations on Canadian soil?

This conclusion seems to be confirmed when the narrative goes on to emphasize ongoing conflict between the French and the British, culminating in the “Seven Years’ War.” The “local clash” between the French and the British

quickly escalated into a world war. Beginning in 1755, Britain and France sent thousands of professional soldiers to North America. A year later, hostilities spread to Europe and both nations formally declared war. By 1759, war raged in Africa, Asia, Europe, North America, and the Caribbean, and Quebec was under attack by a British fleet and army.

Here, it seems, there are foreign nations—France and Britain—clashing on Canadian soil. The one discursive exception is in the last clause: if Quebec is part of Canada, then perhaps the French there were Canadian, as opposed to the British foreigners at their door. This is confirmed when the exhibit goes on to say that Louisbourg, “a Canadian city” founded by the French, was destroyed by the British. Apparently the French in Louisbourg were Canadians, although the British exiled them to France after the defeat.

By the time museum-goers get to the end of the eighteenth century and the beginning of the nineteenth, we learn that the partition of the United States from Britain helped to constitute Canada as a nation. British-American colonists who rebelled against the British homeland also attacked Canadian territory, forcing Canadians to collectively defend it. The display says that during the War of 1812, “British regulars, Canadian militia, and First Peoples warriors smashed a major American invasion at Queenston Heights.” What’s interesting here is that apparently First Peoples are not part of Canada, as they must be mentioned in addition to the Canadian militia. The exhibit insists that this alliance repelled similar American attacks from 1812 to 1814, “and saved Canada from annexation.” Are the British and First Peoples part of this Canada that resisted annexation, or are they merely allied with the Canadians? The narrative draws attention to one “Canadian civilian” who alerted “Mohawk and Ojibwa warriors” to important intelligence regarding the American armies; if she was Canadian but they were Mohawk and Ojibwa, perhaps they were not Canadian?

Later we’re told that in 1885 “a small Canadian army suppressed Métis and Cree resistance” to Ottawa’s administration of the province. The narrative assures museum-goers, however, that “both societies survived as viable communities, which continue to work to protect their rights and heritage.” Here, it seems, not only were the Métis and Cree not part of Ottawa or Canada, but they continue to be distinct communities. Eventually “Canadians” took the Prairies away from “First Peoples”: “In 1870, First Peoples controlled the Prairies. By 1880, Canadian settlers dominated the region.” Here it is quite clear that First Peoples are not Canadian, especially as “First Peoples resented the Canadian settlers.” The French settlers have, at this point, become Canadian settlers.

We have, then, an inconsistent and contradictory message. Despite the inclusion of First Peoples as part of Canada at the beginning of the exhibit, the overwhelming message throughout is that the French settlers are the real Canadians. The French are the only group consistently identified as Canadian, and First Peoples are largely depicted as either allied with or against these authentic Canadians, rather than as Canadian themselves.

Analysis

The matrix of individuation applied in this origin myth involves all of the following identities: Algonkian, American, British, Canadian, Canadian militia, Cree, European, First Peoples, French, Huron, Iroquois, Iroquois League, Louisbourg, Métis, Mohawk, Odawa, Ojibwa, Ontarian, Québécois, and Viking. Mere presence on “Canadian soil” (at least as drawn at the time of the exhibit’s creation) does not make one “Canadian,” as many of those groups present on that “soil” are depicted as invaders, interlopers, outsiders, or allies. In the case of most of the groups mentioned, their initial identity or primary individuation appears to be based on their European country of origin or their tribal name. That is, at first French Canadians appear to be French first and Canadian second; Canadian-ness thus seems to be a second-order individuation built upon other, previously existing identifications. By the end, however, the French Canadians become the true Canadians.

As noted above, at first the exhibit seems to want to include the First Peoples’ tribes as part of Canada—hence the claim that Canadian wars go back 5,000 years. However, by the time we get to the end of the nineteenth century, it appears that those individuated as First Peoples are not part of Canada and, in some cases, perhaps at odds with or at war with Canada. By contrast, the Vikings and the British-Americans mentioned, despite their residency on “Canadian soil,” are consistently treated as alien interlopers, clearly outside Canada proper.

If a group’s presence on Canadian soil does not qualify it as Canadian, what does? What criterion underlies the determination of Canadian-ness? No such criterion is made explicit in the exhibit, although those first identified as French (and a few of the British) eventually became Canadian. Arguably, there could be no objective or publicly available criterion by which some are identified as Canadian and others not—ultimately, Canadian-ness is accomplished by fiat via the recitation of these very sorts of discourses. The discourses cannot appeal to something outside themselves to justify their boundary-drawing, as the Canada they point to is the performative result of their recitation rather than their precondition.

The discourses that individuate Canada in this exhibit clearly have no legal authority—to some extent it’s merely a museum discourse. Border control agents cannot appeal to it in order to determine who may enter the country. However, that does not mean the discourses at such sites are meaningless, purposeless, or completely without social consequences. On the contrary, insofar as the functions of discourse include the ranking, normalization, and valuation of distributed identities, subjects who identify as Canadian may develop sentiments of affinity or estrangement—or sympathies and antipathies—toward the various groups individuated in the discourse at hand. In Discourse and the Construction of Society, Bruce Lincoln rightly argues that mythic narratives are “one of the chief instruments through which [groups] maintain themselves separate from, hostile toward, and convinced of their moral … superiority to their … neighbors.” French Canadians may, for instance, develop sentiments of estrangement toward those who identify with their aboriginal ancestry; they may perceive First Peoples as a “them” apart from an “us.” So although the discourses found in a museum may not have an official, legal status in Canada, they may indirectly shape the voting choices of citizens or the judiciary’s interpretation of the law. These discourses can interpellate subjects, teaching them who or what they are, but also telling them who they are not.


Special thanks go to Naomi Goldenberg, Cameron Montgomery, and Stacie Swain at the University of Ottawa for hosting my trip to the war museum, and especially to Stacie for helping me interpret those parts of the display that depended on background knowledge about Canada that I did not have; I couldn’t have written this without her help.


Disambiguating Normativity

I’ve grown increasingly frustrated with a certain type of argument about the use of norms in academic study. It usually goes something like this: “If we accept poststructuralist critiques of the field, everything is imbricated with values and power relations—these are, a priori, inescapable. As such, there are no grounds for excluding value-laden approaches from the field. On the contrary, constructive normative or theological approaches should be as acceptable as critique.”

To offer just one example, consider Thomas A. Lewis’ recent book, Why Philosophy Matters for the Study of Religion—and Vice Versa (Oxford University Press, 2015; I pick out Lewis merely because his book was the last I read that made this sort of claim). He writes,

[t]oo often, we distinguish those who are doing normative work—ethicists, theologians, and philosophers of religion, for instance—from those who are doing more descriptive work—such as historians—as engaged in fundamentally different activities. … [However,] all are making normative judgments; much of what distinguishes them is that the first category are more likely to be reflecting explicitly on the justification for their normative claims, whereas the second are more likely to focus their energy elsewhere. … Nonetheless, both are making normative claims. (53)

Lewis’ conclusion? “Normative claims are inevitable in the study of religion (as in most if not all disciplines). What is important is not to try somehow to exclude normative claims but rather be willing to offer justification for the norms that we invoke” (45-6).

The problem with this, from my perspective, is that it collapses together what I would prefer to separate out as different types of normativity. That is, this argument strikes me as a bit ham-fisted, and I think we need to disambiguate further.

As a poststructuralist, I’ve long since given up on the dream of objectivity or objective truth, and thus I’m completely in agreement with the basic premise of this type of argument. However, I still find it both important and useful to appeal to intersubjective verification (of the sort we see in the work of the American pragmatists)—and it is this concern for intersubjective verification that drives me to seek to disambiguate different types of normativity. Let’s consider four different types of cases.

  1. Scholarly investigation may be motivated by normative concerns or sympathies, e.g., an individual or collective desire for social equality between men and women, or a desire for economic equality.
  2. Scholarship may employ a grid of classification that incorporates normative standards, e.g., “advanced” vs “primitive” societies.
  3. Similarly, scholarship may use evaluative concepts, either praiseworthy or pejorative, e.g., the first generation of sociologists talked about “healthy” and “sick” societies.
  4. Scholars may explicitly make normative recommendations or “should” statements regarding the relationship between the academy and the world at large, e.g., we “should” promote social justice or “should” foster certain values or virtues in our students.

What does intersubjective verification have to do with these examples? I would argue that even when sympathies or antipathies are divided, we can potentially still have intersubjective verification in the first type of case, but not in the latter cases.

Many feminist scholars of religion are motivated by normative concerns about patriarchy. Thus a feminist historical-critical reading of the Torah might demonstrate that the text depicts women as subordinate to men in a variety of ways, and that interpretations of this authoritative text have been used to reinforce patriarchal social relations in Jewish and Christian communities over the last couple of millennia. Provided we’re clear about our stipulative definitions of “Torah,” “women,” “men,” “patriarchy,” etc., even someone who shares no feminist sympathies could potentially agree with the historical-critical analysis. Although normative feminist concerns may have driven the analysis in the first place, the conclusions are, in principle, intersubjectively verifiable even by those who feel antipathy toward feminism.

The same would not be true of the other three cases provided above. As concerns the second and third cases, someone with competing sympathies would likely object to the implicit normative standards set up in the discourse at hand. Of course I can understand why, from Durkheim’s European perspective, he depicted “advanced” societies as superior to “primitive” societies. However, insofar as I don’t share his social or political sympathies or his normative assumptions about which kinds of societies are better or worse, I would wholly reject these normative evaluations embedded in his grid of classification. It would be intersubjectively verifiable—independently of one’s sympathies—that Durkheim made these valuations. But those of us who aren’t sympathetic to the devaluation of kinship communities could not verify the truth of Durkheim’s claims, precisely because we’ve rejected his grid of classification in advance. Those of us with different sympathies could no more verify Durkheim’s claims than a modern doctor could verify a claim about the balance of the four humors in a human body.

Similarly, the fourth case involves a type of normativity that would not be intersubjectively verifiable by those with competing sympathies. “We should promote equality between men and women” could only be agreed upon by those who share feminist sympathies. Individuals or groups who hold patriarchal norms cannot intersubjectively verify the truth of this “should” claim.

As a poststructuralist, I accept that everything we do is imbricated with norms and relations of power—whether we like it or not, our work is motivated by social concerns and can advance or retard varying social interests. As Foucault claims, knowledge is a weapon of war. However, because not all forms of normativity are equally intersubjectively verifiable, I still draw the line at negative critique (even as I recognize that critique may be motivated by norms or sympathies), and I think we should attempt to avoid using praiseworthy or pejorative evaluative terms, as well as “should” statements about our objects of study.

Of course, this normative conclusion could be intersubjectively valid only for those scholars who, like me, value intersubjective verification.


Self-Radicalized?

Whenever there is a “terrorist” attack by anyone who identifies as Muslim, the first tendency of the press is to blame some reified, monolithic “Islam” for the event.

By contrast, when there is a mass shooting by a white man in the US, the first tendency of the press is to isolate the individual from American culture, usually by appeals to the discourse of “mental illness.” White men shoot not because of any cultural influences; they only do it because they, as individuals, are sick. Nothing in American culture (e.g., sexism, racism, libertarian paranoia, etc.) has anything to do with the actions of these mentally ill loners.

Thus it was a shock to see this headline after the recent shooting at a gay night club in Orlando, Florida (a case in which the perpetrator self-identified as Muslim): “Attacker appeared to be self-radicalized.”

Apparently we are now including Muslims in the number of folks whose actions we refuse to historicize. Apparently no one perpetrates illegal gun deaths (as opposed to legal gun deaths, which are justified with a totally different, problematic set of discourses) for any reason whatsoever, other than because their inner unmoved mover randomly flips a switch.

So, we’ll proceed to ignore any possible role of Islamic homophobia, gun culture, American Islamophobia, etc. in motivating action. Once again, we’re all individuals.


No One Misunderstands Their Own Religion

The claim that “this person/group does not understand their own religion” should be eliminated from academic prose. If we think someone misunderstands their religion, it’s we who misunderstand.

Of course it’s clear that many Christians don’t know the history of Christianity, and many Muslims don’t know the history of Islam, just as many Americans don’t know much about the history of the US.

However, imagine someone saying that the 2nd amendment protects the right to bear arms, including semi-automatic assault rifles. While that’s clearly not what the 2nd amendment meant for the framers of the constitution, it would be stupid of us to say that this individual doesn’t understand their own politics. They understand their politics quite well.

When we say that someone misunderstands their own tradition, what we’re doing is constructing an authentic history and placing this person outside of it.

I often hear my students accuse their nominally Catholic peers of not understanding Catholicism. By contrast, I’d say most Catholics are nominally Catholic, and that they therefore represent the majority or the center (at least in the northeastern United States). And my nominally Catholic students understand nominal Catholicism expertly.


When Your Theory of Religion Is Part of the Problem

Yesterday the New York Times ran a story about a “decorated Army Reserve officer” and veteran of the war in Iraq who “left bacon at a mosque and brandished a handgun while threatening to kill Muslims.” One of the men at the mosque reported that Russell Thomas Langford “told me to go back to my country. I said, ‘Which country do you want me to go to? Give me the ticket and I will fly.’ He said, ‘No, I will not give you a ticket. I will kill you and bury your body right there.’” According to the Times, Langford “was charged with ethnic intimidation, assault with a deadly weapon, going armed to the terror of the public, communicating threats, stalking and disorderly conduct,” and it appears that a spokesman for the Army denounced the alleged behavior as “totally contradictory to Army values.”

Stories such as this are appearing more and more often, particularly with presidential hopeful Donald Trump publicly fanning the flames of communalism. For most of us on the political left, such incidents are immediately denounced as offensive and appalling. And the Times article, although written in neutral prose, gives pride of place to the voices of the Muslim men at the close of the article, in a way that appears designed to elicit sympathy for those at the mosque over Langford and his actions.

From the perspective of ideology critique, identity formation, and social categorization, Langford’s actions are easily explained, and thus the story isn’t all that provocative or interesting to those of us who do critical theory, except insofar as it’s an opportunity to express collective disgust.

However, much more interesting is the way the Times explains Langford’s use of bacon. According to the story,

Advocacy groups say pork is often used to insult Muslims, whose religion does not allow them to eat it. The Washington-based Council on American-Islamic Relations said the act constituted a desecration of the place of worship.

Although just one phrase, the “religion does not allow them” line does a great deal of explanatory heavy lifting for the article. This is an incredibly abbreviated version of the theory of religion according to which “religion” is fundamentally about “beliefs” that are based on “doctrines” or “sacred texts”—and these “beliefs” directly guide the behavior of the practitioners in the “religion.” It’s a simple formula, and a common one:

Belief → Behavior

While I’ve little doubt that the editors at the Times find Langford’s actions reprehensible, I would argue that this theory of religion is part of what directs these sorts of actions in the first place. Why is Langford so hostile to Muslims? Likely because he thinks their religion is something that makes them do things like not eat pork, or, perhaps, fly planes into buildings. Disseminating or reinforcing this theory of religion—one that assumes everyone in the tradition partakes of the same central beliefs that universally drive their behavior in the same ways—is an excellent way to provide the conditions under which it is rhetorically effortless to demonize the group as a whole.

In addition, presenting “religion” as a form of culture that directly drives behavior is consistent with a sui generis perspective that treats “religion” as fundamentally different from other forms of culture. For instance, I have a hard time believing that the Times would ever publish a story about how French secularism forces the French to oppress religious practitioners. Beliefs that force practitioners to behave a certain way seem to be a feature unique to those forms of culture we label as “religious.” The other forms of culture, we presume, are more complicated than the Belief → Behavior formula we use for “religion.”

While I doubt the Times wants to reinforce Langford’s “us vs. them” ideology, that is exactly what it does by presenting “Muslims” as forced by their “religion” to behave in ways that differ from those of their neighbors.
