Two new cases of cyber racism – one in Moscow, the other in Denver – are both making news for the way that they highlight new forms of racism and for the way that they challenge our ideas about free speech.
In Moscow recently, a 21-year-old received a one-year suspended sentence for forming a racist group on the popular VKontakte social network. In Russia, forming a racist group on its equivalent of Facebook is illegal because it violates the country’s anti-extremist laws. This kind of action on a social networking site is not viewed there as “free speech” worthy of protection. So, what about in the U.S.?
Joel Pousson, 46, a former clerk at the City and County of Denver’s planning department, was arrested in August after authorities traced a racist, hate-filled email to a computer in his Littleton home. Pousson, who is white, allegedly sent the email to an African American woman who works as a human resources manager, on the same day he was notified that he was being terminated. In the email, Pousson repeatedly called the HR manager a “n***” and suggested that she was now being targeted by the KKK (a very brief excerpt: “Because now the Klan has your name and address. And there are plenty of Klan members needing stroke with the klan. … Call it an initiation And the sheet-wearing ghost that takes you out, he gets a lot of rank.”) The reason Pousson’s email was not considered “free speech” is that both the State of Colorado and the City of Denver have laws against “Ethnic Intimidation/Threats,” and his email was prosecuted under these laws.
Whether it’s a group organizing online or an individual sending email, the promotion of racism in the public domain threatens the sense of safety and security for those who are the targets of such cyber racism. Sometimes, it can also be the precursor to racially motivated violence. But, even if there’s no explicit threat of violence, racial hatred promoted online runs counter to the ideals of racial equality liberals say they value.
Yet, in the U.S. there’s very little dialogue about cyber racism, in part, I think, because of the liberal tenet that hate speech is just an unfortunate consequence of free speech, even though that view doesn’t hold in Europe and other western democracies. And free speech, according to most of the leading intellectuals writing about the Internet, is considered the highest ideal, as in a post today by Tim Wu at The Chronicle of Higher Ed. In the piece (which previews a new book he has coming from Knopf), Wu deftly connects the early crusades against Hollywood movies by Catholics to current efforts to limit speech on the Internet by pressuring technology firms:
These firms are already under strong pressure to censor from powerful governments, religious groups, political parties, and essentially any outfit with a reason to want information suppressed. The Turkish government, for example, demands that Google take down mockery of the nation’s founder, not just in Turkey, but everywhere. The Church of Scientology has never stopped demanding of anyone who will listen to remove criticism of its practices from the Internet, usually claiming copyright infringement.
Wu’s assessment of the importance of free speech, like that of Mike Godwin and other cyberlibertarians, is flawed because it rests on an analysis of information as existing apart from political and social context. In such an analysis, “information” on the Internet is context-free and should all be treated the same. I do agree with Wu, however, when he writes about what managing speech looks like today:
This is what speech management looks like in 2010. No one elected Facebook or YouTube, and neither one is beholden to the First Amendment. Nonetheless, it is their decisions that dictate, effectively, who gets heard. What’s the answer? There is no easy answer. Monopolies like Google, Facebook, and Hollywood have certain advantages: That’s why they tend to come into existence. That means the American public needs to be aware of the dangers that private censors can pose to free speech. The American Constitution was written to control abuses of power, but it didn’t account for the heavy concentration of private power that we see today. And in the end, power is power, whether in private or public hands.
While Wu evokes “power” at the end of this passage, he doesn’t go quite far enough in his analysis of how power shapes what constitutes information. Here, Wu is trapped within the same larger (white) frame as other scholars writing about the Internet without considering race. Within this frame, all “information” is the same, and there is no mechanism for evaluating claims for racial or social justice against the protection of free speech. Such a supposedly value-neutral frame, which treats free speech as separate from its social and political context, systematically disadvantages some members of society while it privileges others. To go back to the two cases of cyber racism I discussed at the top of this post, seeing all speech as neutral “information” would mean that the racist group organizing online in Moscow and the racist email sender in Denver were both entitled to have their speech protected in the name of free speech.
Taking a stand against cyber racism isn’t a threat to the future of free speech. I don’t think we have to defend racist groups online in order to value free speech. And, I don’t think we have to defend the actions of people like the guy in Denver who sent the racist emails in order to value free speech either.
Outside the U.S., other democratic nations have taken seriously Article 4 of the International Convention on the Elimination of All Forms of Racial Discrimination. This article requires countries that are parties to the Convention, such as Australia, New Zealand, the UK and Canada, to “declare an offence punishable by law all dissemination of ideas based on racial superiority or hatred…and also the provision of any assistance to racist activities.” However, complying with this article in the global, digital era is no easy task.
Writing about the Australian context in today’s JWire, Peter Wertheim has a smart column in which he notes the difficulties of battling cyber racism across national boundaries when the U.S. acts as a haven and Internet companies (ISPs) are recalcitrant about, even proud of, hosting racist content. Wertheim writes:
ISP’s lack the knowledge and insight into racism to enable them to make an informed decision about whether a particular publication has crossed the line into racial vilification or harassment. More to the point, web-sites often generate advertising revenue for their owners, and the owners pay the ISPs. In social media platforms, the more viewers and discussion, the more advertising revenue can be created, and this advertising revenue usually goes directly to the platform provider. ISP’s and platform providers have a clear commercial interest against any form of regulation, and in being as permissive as possible. The final decision about whether or not to allow an allegedly racist publication to remain on the net should not rest with them.
Ultimately, even though the law is not the whole answer to cyber racism, it must be a critical part of the answer. Without the ultimate sanction of the law, the scourge of cyber racism will continue to grow unchecked. Like other contemporary scourges, such as terrorism and environmental degradation, cyber racism operates across national boundaries and governments acting individually cannot deal with it effectively.
Wertheim’s observation that Internet companies “lack the knowledge and insight into racism” to know what to do when faced with racist content is an astute one. I’ve worked in the Internet industry, and I don’t think that the people there are evil, but many have never learned to think critically about race or racism. Contrary to that MCI commercial from the 1990s, the advent of the Internet has not meant “here – there is no race.” In fact, the advent of the Internet means that we need to be smarter about new forms of racial hatred – like cyber racism – rather than dismissing them as just the price we pay for free speech.
Wertheim’s point that the law can’t be the whole answer but “must be a critical part of the answer” is spot on, I think. And, as Wu notes, these decisions are already being made by those at the helm of Facebook and YouTube.
Cyber racism is a real problem of the Internet era but we shouldn’t confuse taking action against it as a threat to the future of free speech. In fact, it’s quite possible to balance free speech and concerns about cyber racism. Indeed, we must in this global, digital era.
Updated 11/16/10 @ 5:18PM ET: Just saw this on Twitter via @hopenothate: The BBC reports that in the UK, a man has been jailed for 15 months for uploading to YouTube racist video clips calling for a “racial holy war.” Local law enforcement officials are quoted in the piece as saying: “Publishing something that is abusive and insulting and that is likely to stir racial hatred is against the law and [law enforcement] will work with the police to prosecute robustly anyone who does so.” This is not a threat to free speech; rather, it recognizes that free speech has to be weighed in the balance with protecting the rights of those who are targeted by racist speech.
Cyber racism, and panic about its threats, spread through a high school in Louisiana last week. Facebook messages threatening violence against black students at Assumption High School in Napoleonville, Louisiana led to increased campus security, hundreds of parents taking children out of school early, and concerns that the situation could strain race relations among the school’s students. The threats, which contained racial slurs, references to lynchings and the names of some potential targets, were posted on a Facebook page belonging to “Colins John,” according to reports from students. Word quickly spread among students, parents, school administrators and authorities late Tuesday night about the posts, made under a profile whose picture featured a person in a Ku Klux Klan robe and hood. The threats caused about half of the school’s 1,200 students to leave before the end of the school day Wednesday; another 200 did not go to school at all.
But the racist threats were not posted by any member of the KKK, nor by any member of a white supremacist organization. The next day, a 17-year-old student at the high school, who is also black, confessed to creating the threatening Facebook page. The student is now charged with terrorizing, cyber stalking, hate crime and theft of utility service. He is being held in jail without bond.
Individual Acts of Cyber Racism. This is not the first time that an individual young person, not affiliated with any kind of hate group, has engaged in an individual act of cyber racism. In my book, I talk about the case of Richard Machado, then a student at UC-Irvine, who used the student directory’s pull-down menu of names to select the email addresses of students he designated as having “Asian-sounding” names. He then sent an email to that list of students saying that he was going to kill all of them. Machado’s crime was newsworthy both because he used the Internet to send threatening hate messages and because of the unique technological features of the crime. And there are lessons from the Machado case for the Louisiana case.
The fact that the student accused in the Louisiana case is African American, and that Machado was a recently naturalized American citizen from El Salvador, suggests some important things about how race and racial identity figure into cyber racism. Machado was not, according to published accounts, involved in an organized white supremacist group, nor was he known to have visited white supremacist sites online. Similarly, the young student in Louisiana was not a member of any organized hate group. Yet the language of Machado’s email and of the high school student’s Facebook page clearly contained explicitly worded hate speech.
White Racial Frame. One explanation for this type of action is that both these young men, no less than most other people in the U.S., have adopted the dominant white racial frame. Part of what’s useful about this theoretical framework is that it situates individual racist actions, like these, within a larger system of racial oppression rather than in either individual identity (not only whites adopt the white racial frame) or an individual pathology of racial prejudice tied to a personality disorder. Neither of these young men needed to be white to engage in individual acts of white supremacy online. Nor did either need to be mentally ill to engage in such acts, and there is no indication from the published accounts that either is mentally unstable. Instead, they merely needed to grow up in the U.S. and adapt to the dominant culture’s white racial frame.
Emails, and Facebook Pages, that Wound. Placing the victims’ story at the center of an analysis of hate speech via email or Facebook, as critical race theorists suggest, is difficult because of the way this story and others like it are reported in mainstream news accounts. Press accounts mainly leave out the perspective of those who are the targets of hate speech. In the Louisiana case, we get some limited reports that students (and their parents) were frightened and left school (or didn’t attend), but there are no interviews with any of these students. In the Machado case, the UC-Irvine students included on his list of recipients for the hate-filled email messages appear nowhere in the public record of reporting about the story. Mainstream press accounts are thus also written from within the white racial frame, and so leave out the systemic pattern of virulent racism that might offer more context and understanding about the impact of such online speech. In California, Asian students on UC campuses were targets of virulent anti-Asian telephone calls, graffiti and email at the time of Machado’s attacks. In Louisiana, anti-black racism has a long history, much of it interwoven with Klan history, and that history might be enough for some parents to keep their children home from school upon hearing about KKK-themed threats on Facebook.
The Myth of Online Anonymity. Many people believe that when you’re online, you’re completely anonymous. There’s a rather famous (in computer-geeky circles) New Yorker cartoon from the early Internet era that shows a dog sitting at a computer keyboard, with the caption, “On the Internet, nobody knows you’re a dog.” In many ways, that notion of anonymity on the Internet – that “nobody knows you’re a dog” – is a myth. And it’s a myth that fuels these sorts of individual acts of cyber racism, because people think they can’t be identified when they’re online. In fact, nothing could be further from the truth. The casual Internet user is completely trackable online. Covering your digital footprints takes fairly high-level skills that most of us don’t possess.
The high school student in Louisiana confessed to creating the hate-filled Facebook page, but not before law enforcement found him. They did this through a coordinated effort. The local sheriff’s office in this case worked with the state Attorney General’s Office and the Louisiana State Police during the investigation. They requested information from Facebook’s corporate offices, as well as from Yahoo and Charter Communications (an Internet Service Provider) to determine the identity of the Facebook poster and make an arrest. So, just as this form of hate speech can be facilitated through the Internet, it can also be countered through the same technologies.
The way that Machado was ultimately caught also reflects some of the possibilities of the Internet for addressing cyber racism. Upon receiving the racist hate email, several students responded with email of their own to the Office of Academic Computing (OAC). The staff at the OAC were able to identify Machado as the sender by tracing the SMTP (Simple Mail Transfer Protocol) records of the emails he sent. They then identified the lab and located the individual computer from which the messages were being sent. When staffers went to this machine, they found Machado still sitting at that particular computer in the lab, and asked him to leave. Surveillance cameras in the computer lab later confirmed that Machado was in fact the person responsible for the threatening email messages. Part of what this technological hate-crime-busting story suggests is that there are ways to address such individual acts of cyber racism, if there is a will and an effort to do so. Mostly, in the U.S., there isn’t a will to do anything about such acts.
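The kind of tracing the OAC staff performed can be illustrated with a short sketch. Every SMTP relay that handles a message stamps a `Received:` header onto it, so the originating machine is ordinarily recoverable by reading the relay chain from the bottom up. The message below is entirely hypothetical (it is not from the actual case); the hostnames and addresses are invented for illustration:

```python
import email
import re

# A hypothetical raw email; real messages carry one "Received:" line per relay.
raw_message = """\
Received: from mail.example.edu (mail.example.edu [198.51.100.7])
\tby inbox.example.edu with SMTP; Mon, 30 Sep 1996 14:02:11 -0800
Received: from lab-workstation-12 (lab-workstation-12.example.edu [192.0.2.45])
\tby mail.example.edu with SMTP; Mon, 30 Sep 1996 14:02:09 -0800
From: sender@example.edu
To: recipient@example.edu
Subject: example

(message body)
"""

msg = email.message_from_string(raw_message)

# Pull every IPv4 address recorded in the relay chain, newest hop first.
ip_pattern = re.compile(r"\[(\d{1,3}(?:\.\d{1,3}){3})\]")
hops = [ip for header in msg.get_all("Received", [])
        for ip in ip_pattern.findall(header)]

print(hops)      # all relay addresses in the chain
print(hops[-1])  # the earliest hop: the machine the mail originated from
```

The last hop points to the individual workstation, which is how investigators narrow a message down to one computer in one lab, as happened with Machado. (Headers can be forged by a sophisticated sender, which is part of why covering digital footprints is possible but, as noted above, beyond most users.)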
The Usual Suspects. Machado was the first person convicted of a federal hate crime via the Internet in the United States. The fact that Machado was convicted of a hate crime involving the Internet reveals some features of the law and the Internet in the U.S. Within the U.S., the only time speech online loses its First Amendment (“speech”) protection is when it is joined with conduct that threatens, harasses, or incites illegality. Yet, this case suggests that the law does not appear to be consistently applied to all people in the U.S. The fact that prosecutors vigorously pursued the Machado case, and seem to be pursuing the Louisiana high school student, is consistent with the rest of the criminal justice system in the U.S. in which minority men are viewed as inherently suspect and differentially arrested, prosecuted, and incarcerated. So, even when it comes to cyber racism, it’s black and brown men who are regarded as the usual suspects.
Massively multiplayer online role-playing games (MMORPGs), like World of Warcraft (WoW) and Modern Warfare 2, are becoming more popular than ever before. Accurate statistics are difficult to come by, but the top game (World of Warcraft) reportedly has over 10 million subscribers. These numbers reflect a global ‘audience’ of participants, because players are located not just in the U.S. but throughout the world. While fantasy and the disappearance of race in online gaming have been the focus of some scholarship, until now no one has taken up the practice of “griefing.” Griefing – or pranking – is the practice of disrupting online games, and it dominates MMORPGs. In another context, like a basketball court or a baseball field, this is known as “trash talking.” It’s a way to distract other players and disrupt the game to one’s own advantage. What seems to be unique about online games is the way that griefing has become thoroughly racialized.
Here, in a talk delivered recently at the Berkman Center at Harvard University, Lisa Nakamura recaps the history of racist griefing online and links the current crisis in racial discourse in the U.S. with this practice, exploring the implications for digital games as a public sphere. Nakamura is the Director of the Asian American Studies Program, Professor in the Institute of Communication Research and Media Studies Program, and Professor of Asian American Studies at the University of Illinois, Urbana-Champaign, and the author of the book Digitizing Race.
A few words of context are necessary for this video, especially if you’re new to, or unfamiliar with, Internet culture. In the first part of the talk, Nakamura spends some time referring to ‘ROFLcon,’ a biennial convention of Internet memes that takes place at MIT. Throughout, she also refers to 4chan. 4chan users have been responsible for the formation or popularization of Internet memes such as lolcats (and the endless variety of ‘I can haz cheezeburger’ images) and Rickrolling, in which a link was hijacked for a prank so that video of Rick Astley singing “Never Gonna Give You Up” appeared instead of the page the user was looking for. She spends all this time talking about these humorous Internet memes because two of her main points in this discussion are that “racism is a meme” and that “being funny is the real currency of the popular Internet.” Here’s the talk, which is on the long side (1:10), but worth it:
To explain racist griefing, Nakamura proposes the concept of “enlightened racism.” For this concept, she draws on Susan Douglas’s notion of enlightened sexism, which Douglas defines as a response, deliberate or not, to the perceived threat of a new gender regime: it insists that full equality has now been achieved, so it’s now okay, even amusing, to resurrect sexist stereotypes of girls and women. Quoting Douglas, Nakamura says, “Enlightened sexism takes the gains of the women’s movement as a given and uses them as permission to resurrect retrograde images of girls and women as sex objects, bimbos and hoochie mamas still defined by their appearance and biological destiny.”
Similarly, then, she argues that “Enlightened racism is a form of racist behavior and speech only available to those who are known, or assumed known, not to be racist.” In many ways, I think this concept is part of what is going on with the “ghetto parties” at college campuses and the young people painting themselves with blackface for Facebook photos, that we’ve talked about here, and the “hipster” racism some have mentioned elsewhere online. The racist griefing that she is addressing in online gaming often makes explicit use of racist epithets, which she explains this way: “The n-word is funny because it is so extreme that no one could really mean it. And humor is all about ‘not meaning it.’ If you take humor and the n-word, you get enlightened racism online and attention.”
Nakamura goes on to argue that, paradoxically, “the worse the racism and sexism are, the more extreme and cartoonish it is, the harder it is to take seriously, and the harder it is to call it out.” She points out, quite astutely I think, that for those within gaming culture, calling out racism in this context signals you as someone “not of the gaming culture” and thus as someone who takes racism “too seriously” and doesn’t have a good sense of humor. Yet this sort of humor is a “confusing discursive mode for young people,” she observes, because they are “unable to separate enlightened racism from regular racism.” And, indeed, I think this is a real problem. As Nakamura notes, the image of the “humorless feminist” is now joined by the image of a “humorless” old(er) person who takes race too seriously.
As usual, I find Nakamura’s work compelling and provocative, although I do have a couple of points of criticism. While I realize this was just a luncheon talk, and as such is work in a formative stage, I was surprised that she didn’t mention the work of Doug Thomas, who has written on racism in MMORPGs. Of course, I’m also not convinced that everyone in our society has moved on to “enlightened” racism, as I point out at some length in Cyber Racism. But I get the appeal of studying this form of racism and acknowledge that it is certainly the more popularized form.
I’m also a bit surprised that she would use the term ‘enlightened racism’ and not make reference to the book of the same name by Sut Jhally and Justin Lewis (about audiences watching The Cosby Show). Although Jhally & Lewis’s work is 18 years old now, I think there are some relevant insights from it that might inform our understanding of racist expressions in a supposedly post-racial era. Much as people today look to the election of President Barack Obama as a marker of the ‘end of racism,’ so too did many people take the success of the Huxtables, the fictional family on The Cosby Show, as evidence of racial progress. In their research, Jhally & Lewis interviewed racially diverse audiences to find out how they viewed and interpreted The Cosby Show. Part of their purpose was to see if watching the show diminished racist attitudes, which was an explicit goal of the show’s producers. Instead, they found that the show actually confirmed people’s racist attitudes, because viewers took the Huxtables’ success as evidence that there were no barriers to blacks’ success in this society, so any failing must be due to individual characteristics. While what constitutes ‘an audience’ is certainly changing in the digital era, I think this kind of research with people who are actually involved in MMORPGs would be a useful way to explore the latest iteration of ‘enlightened racism.’
In case you’ve missed it, there’s a lot of discussion whirling around the web these days about an HP-designed webcam that seems to read the faces of white people but not the faces of black folks. Some are accusing HP of racism. Is this a case of cyber racism? It all got started by this rather funny video (2:16):
In the last day or two, an “unknown political group” has created a video called “I’m a Racist” (and uploaded it to YouTube), and it’s been getting a lot of attention. The short description posted with the video states: ‘We believe the health care system needs to be fixed. However, government intervention is not the answer, nor should we be called racist for not agreeing with Obama’s health plan!’ Fortunately, Rachel Maddow and Melissa Harris-Lacewell provide a thorough critique in this clip (8:01):
Harris-Lacewell makes an excellent point here when she points out the way the ad reinforces an individualized notion of racism, as a personal trait, rather than an understanding that racism is systemic.
This “Guess I’m a Racist” meme jumped to Twitter, where people began posting updates with the hashtag #youmightbearacist. (Using hashtags (#) on Twitter is just a way for people to have a conversation around a theme; on an evening when the BET Awards are on, for example, people might use #BET as a hashtag to talk about the awards. But the racism prompted by that hashtag is another story.)
Some of the updates to Twitter with the #youmightbearacist hashtag were meant to be funny and to skewer racism; some were not so funny, just deeply racist. Almost all reinforced the point that Harris-Lacewell makes about the anti-health-care ad: they assume that racism resides in an individual rather than operating systemically.
There are a couple of things about all this that are interesting to me. First, the video opposing health care reform is a fairly slick political ad, yet it’s created by an “unknown” political group. In this way, it’s similar to the cloaked sites that I’ve written about here (and in my recent book, Cyber Racism), in which people disguise the authorship of websites in order to conceal a political agenda. This ad is slightly different because the group is being fairly overt about part of its political agenda (opposing health care reform), but because the identity of the group that created the ad is hidden, we don’t know how its stance on this one issue may (or may not) be part of a larger political agenda.
What intrigues me further about this is the convergence and overlap of media. So, the unknown political group releases a video on YouTube exclusively, and the video quickly goes viral and becomes one of the most viewed videos on YouTube. They do not buy air time on television to get their message out, but they don’t have to, because the video gets picked up by Maddow’s show and she airs the video. Then, the meme travels to Twitter, where people both reinforce and resist (sort of) the notion of what it means to be “a racist.” The political battle over race, and the meaning of racism, has moved into the digital era.
In the last few days, there have been two stories in the news which highlight the very different approaches to hate online in the U.S. and in the U.K. The story from here in the U.S. involves a racist image of Michelle Obama (drawn to look like an ape). The image first appeared online because someone posted it on their blog (it has since been removed). Once the image was online, it quickly appeared at the top of Google’s results whenever anyone did a Google image search for “Michelle Obama.” Whether this was the result of a “Google bomb” (an intentional manipulation of Google’s algorithm) or just a fluke remains the subject of some debate. Those on the right in the U.S., such as FoxNews, are pointing out that this Google bomb was quickly defused, unlike Bush’s Google bomb. For its part, Google (the leading search engine company, based in California) bought ads warning users about “offensive results” and apologized, yet still claims no responsibility for the images appearing in its search results.
Mostly, though, opinion in the U.S. about this incident follow along the line of this piece in the AtlanticOnline (a mainstream to left publication). Derek Thompson writes:
The Internet is an unwieldy boundlessness of content, some of which is utterly depraved. But that’s to be expected when you’re talking about the sum of all knowledge and information in the world. Racist images aren’t illegal. And researching examples of racism online isn’t only legal, it can also be useful for journalists, social academics and anybody trying to piece together fragments of the zeitgeist. Google isn’t the editor in chief of the internet, it’s a curator. Its job is to organize, and I hope it doesn’t delete or de-index content just because it’s offensive — and especially not because it’s offensive to important people.
And Thompson is correct in his assessment of the U.S. landscape around these issues. The bind, of course, is in that one line: racist images aren’t illegal here in the U.S. This one fact makes taking other sorts of action difficult, but not impossible. The reason these images are not illegal in the U.S. is that many people here want to argue that the First Amendment, which is designed to protect dissent against the government, protects all manner of racist speech. Or, in the line of reasoning above, the Internet simply contains too much information for it ever to be possible to regulate it. But the right to free speech and being indexed by the search engine Google are two different things. As one of the commenters on that piece at the AtlanticOnline points out, no one has a constitutionally protected right to have their online content indexed by Google.
Let’s take a look at another example, from the U.K. Two men were convicted for publishing racist hate speech, including “Tales of the Holohoax.” These postings of online hate were reported to the police in 2004 after concerned citizens saw them. This action is possible in the U.K. because it is against the law there to incite racial hatred, either in print or online. The two men were sentenced under U.K. law to four years and two years, respectively, at Leeds Crown Court in July 2009. The story is back in the news now because the two men are appealing their convictions, arguing that the websites, which were hosted on servers in the U.S., would be “entirely lawful” here. And they’re right, which effectively points out that the U.S. functions as a haven for hate online.
What’s still unclear is how the courts will rule in this case.
Two social psychologists from Northwestern University conducted one of the first experimental field studies in a virtual, online world and found racial biases operate in much the same ways that they do in the material, offline world. The study’s co-investigators are Northwestern’s Paul W. Eastwick, a doctoral student in psychology, and Wendi L. Gardner, associate professor of psychology and member of Northwestern’s Center for Technology and Social Behavior. The study was conducted in There.com, which is similar to Second Life, and offers users a relatively unstructured online virtual world where people choose avatars – or human-looking graphics – to navigate and interact.
This next bit gets a little technical, so bear with me.
The experimental study design is referred to as “door in the face” (DITF), and it works like this: the experimenter (in this case an avatar) first makes an unreasonably large request, to which the responder is expected to say no, followed by a more moderate request. In the past, researchers have found that people are more likely to comply with the moderate request when it is preceded by the large request than when the moderate request is presented alone, and this held true in the virtual world as well. In the virtual world, the experiment’s moderate request was: “Would you teleport to Duda Beach with me and let me take a screenshot of you?” In the DITF condition, that request was preceded by a request to have screenshots of the avatar taken in 50 different locations — requiring about two hours of teleporting and traveling.
Still reading? Good. What these researchers then did was to vary the skin tone of the avatar making the request, like this:
What’s interesting to note is the way that the skin tone change altered the responses:
In one of the most striking findings, the effect of the DITF technique was significantly reduced when the requesting avatar was dark-toned. The white avatars in the DITF experiment received about a 20 percent increase in compliance with the moderate request; the increase for the dark-toned avatars was 8 percent.
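For readers curious about the mechanics behind a finding like this, a gap between two compliance rates is commonly evaluated with a two-proportion z-test. The sketch below is illustrative only: the counts are invented for the example (the study reports percentage increases, not raw counts), so these numbers are assumptions, not the study’s data.

```python
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    """z statistic for the difference between two compliance rates."""
    p_a = successes_a / n_a
    p_b = successes_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    p_pool = (successes_a + successes_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts, invented for illustration: compliance with the
# moderate request in the DITF condition, light-toned vs. dark-toned avatars.
z = two_proportion_z(60, 100, 48, 100)
print(round(z, 2))
```

A z statistic beyond roughly ±1.96 would indicate a difference unlikely to arise by chance at the conventional 5 percent level; the published study would have used its actual cell counts rather than these stand-ins.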
While it may not be surprising to learn that people take their racism with them into these (supposedly) new virtual worlds, this research is still noteworthy both for its innovative methodology and because it challenges the conventional wisdom on two fronts: one that we are living in a post-racial society and that the Internet is an inherently liberatory technology that offers an escape from old hierarchies of oppression.
The good folks at Contexts asked me to do an interview for their podcast series a few weeks back about my new book, Cyber Racism, and it’s now available online, here. From their website, the description of the podcast:
Cyber Racism is about white supremacist groups online, and Daniels tells us how white supremacy online is important for how we think about education, free speech and multiculturalism.
If you’ve missed any of the discussion I (or Joe) have posted here about cyber racism, this provides a good introduction. There’s a little bit at the end about the work Joe and I do here on the blog. One small correction, the scholar I refer to in the piece who developed the phrase “translocal whiteness” is Les Back (I mangled his name).
In Cyber Racism, I examine the many ways racism is being translated into the digital era from the print-only era of newsletters (such as those I explored in my earlier book, White Lies). I also devote part of the new book to exploring ways of fighting cyber racism (see Chapter 9). A recent example illustrates both the pernicious threat of cyber racism and an effective strategy for combating it.
Allen McDuffee is a NYC-based freelance journalist whose writing has appeared in The Nation, Mother Jones, DailyKos and HuffingtonPost, as well as on his own site, Governmentality. Here’s McDuffee’s account of how this incident began (from July 15, 2009):
Last night as I looked at the results from my statistical gathering software program, I was disgusted to learn that an individual had posted and linked to some content from my blog. Most writers and bloggers work hard to get their work linked to, but when I saw the content of this individual’s blog, I literally became sick to my stomach. A white supremacist, with a screen id and blog called Kalki666, found a post I had written critical of Israel and decided to repurpose it for his anti-Semitic agenda. He also used me as his research assistant for the main part of that same post when he found this post on my blog from May 21 and just re-posted it yesterday. And then there are the swiped images, too. Not only had he posted my content and linked to me on his blog, he further linked on white supremacist discussion boards. In no way, shape or form will I allow him to attribute his agenda to my reporting and blogging. I fully condemn Kalki666’s actions and everything that he, his blog and his community stand for. Yes, I am critical of Israeli policies. I am also critical of the Palestinian Authority and Hamas. But beyond that, it needs to be clear that being critical of Israel does not make one anti-Semitic.
This kind of “re-purposing” of content intended for a white supremacist agenda is one of the characteristics of cyber racism. In the book, I talk about the way other white supremacists have used this same strategy to re-frame material from the Library of Congress archive of WPA recordings with freed, former slaves to make their argument that slavery was “sanitary and humane” rather than the brutal and de-humanizing institution it was, in fact. Lifted out of context and re-posted on a white supremacist website, the oral history of slavery becomes part of an arsenal of web savvy white supremacists. In McDuffee’s case, text he authored critical of Israel – but not intended as antisemitic – ends up re-posted on a white supremacist forum to further their antisemitic agenda. On the web, as in print publishing, context and authorship matter; but, unlike printed-media, the copy/paste technology of the web makes the migration of ideas from one context and author to another several orders of magnitude easier.
Then, McDuffee’s story gets even more interesting. He writes:
Now, upon further research, I learned that Kalki666 was surfing and posting from an IP address registered to Wheaton College (IL)–a conservative, Evangelical Christian college. [And...] I’m writing to Dr. Duane Liftin, the President of Wheaton College. He should be made aware of the types of activities that are occurring on the Wheaton College IP address. If it’s an employee, I’m sure this violates the usage policy of the College. If it’s a student, well I suppose this opens a whole host of other issues.
I’m also going to bring it to the attention of WordPress, where the blog is hosted. While the post that I’ve described here probably does not violate their usage policy, I’m certain that I saw several others that do–ones that, in my mind anyway, provoke violence. To me, this is the difference between free speech and injuring speech that ought be censored. As a journalist, I take this issue very seriously and, again, I think this deserves its own post where I will elaborate in the next few days.
So, while the form of this digital-era white supremacy is thoroughly web-based, so is the response. First, McDuffee identifies the IP address (the unique identifier for each computer) and locates it geographically and institutionally to a suburban Chicago college. He then uses email to contact the president of the college and the company that hosts the blog. McDuffee smartly invokes the “usage policy” (sometimes called “TOS,” for “Terms of Service”) in place at the college. Indeed, most institutions, software platforms, and Internet Service Providers (the company that provides your Internet service) have some sort of TOS that prohibits explicitly racist / antisemitic language that incites hatred or violence. I’m often asked if fighting cyber racism isn’t “impossible” because of “free speech protection” – and the answer is no, it’s not impossible. This sort of hate speech over the Internet is a “TOS” issue, not a free speech issue. However, enforcement of these policies is almost entirely left up to individuals – like McDuffee – to pursue the issue and demand action.
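As a rough sketch of the first step McDuffee took: before doing a reverse-DNS or WHOIS lookup (which need network access and are omitted here), one can classify an address with Python’s standard-library `ipaddress` module. The addresses below are illustrative, not the ones from this incident.

```python
import ipaddress

def describe_ip(ip_string):
    """Classify an IP address: only globally routable addresses can be
    traced via WHOIS to a registered organization (such as a college)."""
    addr = ipaddress.ip_address(ip_string)
    return {
        "version": addr.version,
        "is_private": addr.is_private,  # e.g. RFC 1918 home/office ranges
        "is_global": addr.is_global,    # publicly routable, hence traceable
    }

# Illustrative addresses (not from this incident):
print(describe_ip("192.168.1.10"))  # private: not directly traceable
print(describe_ip("8.8.8.8"))       # global: WHOIS reveals the registrant
```

From there, a reverse-DNS lookup (`socket.gethostbyaddr`) or a WHOIS query against the regional registry maps a global address to a hostname or registered organization, which is how an address resolves to a specific institution.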
Furthermore, McDuffee deftly uses his blog to document and post the responses from the college president, the blogging platform, and the white supremacist in question. McDuffee was understandably horrified by this turn of events, and he was tenacious in his quest for a just resolution. And his efforts paid off. Within about two days of the initial discovery, McDuffee posted this:
UPDATE #9: Wheaton College President Duane Litfin emails me (July 17 1:44pm)
The culprit has been found and escorted off campus. More details to follow shortly.
As it turned out, the culprit was neither a student nor an employee of the college, but an interloper who had accessed one of several free-to-the-public computers in the college library. He was identified as Merrill Sech, 38, of Westmont, IL. When campus police and local Wheaton police confronted him to escort him off campus and issue a do-not-return letter for violating the college’s computing policy, he assaulted the officers. So Sech was arrested. According to McDuffee’s FOIA request, Sech also has a history of other criminal offenses and is currently in DuPage County Jail. For more info, there’s also this podcast about the incident. According to McDuffee, the story is still unfolding in various ways, so you’ll want to check his Governmentality blog (or follow him on Twitter @allen_mcduffee) to catch all the updates.
For my purposes here, I want to highlight that in order to effectively fight cyber racism, you need people who are 1) committed to the value of racial equality, 2) web-savvy and 3) willing to take action. McDuffee embodies all these qualities as an individual. On what might be called the structural side, you need laws and policies in place that regard hate speech as unacceptable (as the college did in this case), and officials that are willing to take action against these sorts of violations (as the college president, campus and local police did).
McDuffee’s encounter with this white supremacist illustrates several of the points that I make in Cyber Racism, chiefly that the threat from white supremacy online is less a threat of “recruiting” and more a threat to ideas and values of racial equality. McDuffee’s encounter also illustrates that the political struggle for racial equality is one that requires us to be committed, web-savvy and willing to take action and demand a response from institutions and organizations that may be unwitting perpetrators of white supremacy.
In today’s New York Times “Room for Debate” series, The Editors host an online forum about the “Internet angle” on the recent acts of domestic terrorism (photo credit: pasukaru76). In both recent cases – the murder of Dr. Tiller and the attack on the Holocaust Museum – The Editors write that “the suspect arrested was well-known among fringe “communities” on the Web” (the quotes around “communities” are in the original from The Editors). I’m going to leave the Tiller case for now and focus on an examination of the Internet angle in the von Brunn case; I’ll return to the Tiller case at the end of this post.
After von Brunn was released from prison, he went to work for a Southern California bookstore affiliated with the Institute for Historical Review (IHR), a Holocaust-denial group.
I refer to the IHR site (and others) as “cloaked” sites because they intentionally disguise their authorship and agenda in order to fool the unsuspecting web user about their purpose. As I’ve written about here before and in the book, cloaked sites draw millions of readers each year.
Following that, von Brunn created his own virulently anti-Semitic website called Holy Western Empire (link not provided). If you’re curious about his web presence, several writers at TPM have posted screen shots of von Brunn’s overtly racist and antisemitic website and other online postings here, here and here. Von Brunn’s sites appear to be “brochure” sites – that is, one-way transfers of information (rather than interactive sites where users can add content).
I’ve spent more than ten years researching hate and white supremacy online and in my new book, Cyber Racism, I discuss both kinds of websites: the “cloaked” sites like those of the Institute for Historical Review and the overtly racist and antisemitic websites like von Brunn’s Holy Western Empire.
There is no denying that white supremacy has entered the digital era. And the overtly racist and antisemitic sites have proven even more popular in the Age of Obama.
Avowed white supremacist extremists, such as James von Brunn (and David Duke), were early adopters of Internet technologies. White supremacists were among the first to create, publish and maintain web pages on the Internet. The reality that von Brunn and other white supremacists were early adopters of the Internet runs counter to two prevailing notions we have: 1) that white supremacists are gap-toothed, ignorant, unsophisticated and uneducated; and, 2) that the Internet is a place without “race.”
In fact, neither of these notions is accurate or supported by empirical evidence. There’s plenty of data to show that some white supremacists are smart, as well as Internet savvy. And, the Internet is very much a ‘place’ where race and racism exist.
So, what’s at stake here? What’s the harm in white supremacy online?
I argue that there are a number of ways in which white supremacy online is a cause for concern, namely: 1) easy access and global linkages, 2) harm in real life, and 3) the challenge to cultural values such as racial equality.
With the Internet, avowed white supremacists have easy access to others who share their views and at least the potential to connect globally, across national boundaries, with like-minded people. I highlight potential because so far there hasn’t been any sign of transnational border crossing to carry out white supremacist terrorist acts, although there is a great deal of border crossing happening online.
There is also a real danger that ‘mere words’ on extremist websites can harm others in real life (e.g., Tsesis, Destructive Messages: How Hate Speech Paves the Way for Harmful Social Movements, NYU Press, 2002). And, for this reason, I’m in favor of a stronger stance on removing hate speech from the web and prosecuting those who publish it for inciting racial hatred and violence. In my view, websites such as von Brunn’s constitute a burning cross in the digital era and there is legal precedent to extinguish such symbols of hate while still valuing free speech (see Chapter 9 in Cyber Racism for an extensive discussion of efforts to battle white supremacy online transnationally). There is, however, lots of ‘room for debate’ on this subject and that’s the focus of the NYTimes forum today.
It’s important to highlight the cloaked websites I mentioned earlier. The emergence of cloaked sites illustrates a central feature of propaganda and cyber racism in the digital era: the use of difficult-to-detect authorship and hidden agendas intended to accomplish political goals, including white supremacy.
The danger in the cloaked sites is much more insidious than the overt sites, and here’s why: even if we could muster the political will in the U.S. to make overt racist hate speech illegal – admittedly a long shot – such legislation would do nothing to address the lies contained in cloaked sites.
The goal of cloaked sites is to undermine agreed upon facts – such as the fact that six million Jews were murdered in the Holocaust – and to challenge cultural values such as racial equality and tolerance. And, these sites are the ones that are likely to fool a casual web user who may stumble upon them and be unable to decipher fact from propaganda.
I’ll give you one other example of a cloaked site and connect this back to the Tiller case. A student of mine a couple of years ago made an in-class presentation in which she included the website Teen Breaks to illustrate the concept of “post-abortion syndrome.” Now, as savvy readers and those involved in pro-choice politics know, there is no medically recognized “post-abortion syndrome.” This is a rhetorical strategy of the anti-abortion movement used to terrify women and keep them from having abortions. This pro-life propaganda is effectively disguised by the cloaked site Teen Breaks which appears to be one of many sites on the web that offer reproductive health information for teens.
This cloaked site takes a very different strategy from the “hit list” websites that publish the names, home addresses, and daily routines of abortion providers. Whereas the “hit list” not-so-subtly advocates murder, the cloaked sites undermine the very agreed upon facts about the health risks of abortion. These are two very different, but both very chilling, assaults on women’s ability to make meaningful choices about their reproductive lives.
Similarly, the holocaust-denial sites and the overt racist and antisemitic websites are two very different, and both chillingly effective, assaults on racial equality.