Participatory online media, often referred to as Web 2.0, is technology – like this blog – that allows anyone to contribute content to a website. In the early days of Web 2.0, there was a lot of talk about how new media technology would facilitate democracy. And yet, the reality of Web 2.0 at all sorts of sites where people are encouraged to post their reactions to current events suggests that the technology is allowing white people to vent racism that might otherwise remain hidden. Here’s just one recent example, in which MediaTakeOut.com points out the way (competitor) TMZ.com approves racist comments on its site, allegedly to increase traffic. (I say “allegedly” because we don’t know the motives of the people approving the comments at TMZ.com.) What seems clear is that all sorts of places online have become venues for whites to display racism that is usually reserved for the whites-only backstage, as I noted some months ago about Facebook.
This sort of cyber racism seems particularly pronounced at YouTube, the popular video-sharing site, as this blogger notes. In a recent sociology honors thesis submitted for his degree at UC-Berkeley, undergraduate Albert Wang conducted a content analysis of hate speech on YouTube. Wang screened 10,579 English-language comments posted to 45 videos; 599 of those comments met his criteria for hate speech and were included in his sample. Of these 599, most dealt with blacks, who were targeted by 209 hate comments, and women, who were the targets of 190. Sixty-one more comments targeted Muslims, forty-eight targeted whites and twenty-nine targeted Jews. The remaining biased comments targeted Hispanics, homosexuals, men and a variety of ethnicities and nationalities, such as the Japanese and the Turks; some also attacked non-whites in general. In all, 271 comments targeted race, 197 gender, 93 religion, 22 nationality, 9 sexuality and 7 ethnicity.
Wang also attempted to interview people who posted hate speech on YouTube. Based on the larger quantitative analysis and his small sample of interviews (7 of the 34 posters he contacted responded), he concludes that hate speech online exists in a variety of forms (the most frequent targets are Blacks, women, Jews and Muslims), and that the reasons for it range from resentment of what is viewed as the liberal establishment to anger at racial tension. Wang identified two important patterns. First, hate comments are an inherently online form of discourse, as people take advantage of the Internet’s anonymity and the relative ease of posting comments online. Second, online hate speech does not appear to present any new ideas but instead rehashes long-standing stereotypes and malice.
Wang’s research is consistent with my own research in my new book, Cyber Racism, which looks at the way white supremacists have translated their rhetoric into the digital era. Central to my argument in this book is that white supremacy has entered the digital era and that old forms of racism are being adapted to new technologies. Avowed white supremacist extremists, such as David Duke, were early adopters of digital media technologies; they were among the first to create, publish and maintain web pages on the Internet. The reality that David Duke and other white supremacists were early adopters of digital media runs counter to two prevailing notions: one about who white supremacists are, and the other about the Internet. The first is that white supremacists are gap-toothed, ignorant, unsophisticated and uneducated; the second is that the Internet is a place without “race.” In fact, neither of these notions is supported by the empirical evidence. White supremacists have customized Internet technologies in ways that are innovative, sophisticated and cunning. And the Internet is an increasingly important site for political struggle where meanings of race, racism and civil rights are contested. The emergence of cloaked websites illustrates a central feature of propaganda and cyber racism in the digital era: the use of difficult-to-detect authorship and hidden agendas intended to accomplish political goals, including white supremacy.
Of course, there is also important, progressive organizing that simultaneously happens online. Democratic movements, organized at the grassroots by people of goodwill with Internet-enabled mobile phones, have transformed elections. Cyberactivists organized the march of nearly 10,000 people against a white supremacist judicial system in Jena, Louisiana through email, blogs, Facebook, MySpace, and YouTube. And, almost ten years earlier, Black women excluded by the white-dominated mainstream media and the male-dominated African American press took advantage of the participatory quality of Internet technologies to organize the Million Woman March. Yet some take these encouraging signs about the use of the Internet to mean that the technology itself is inherently democratizing. Still others see the presence of white supremacy online as evidence that the Internet is an inherently dangerous place. What we need is a more nuanced analysis that enables us to rethink our ways of knowing about racial equality and civil rights in the digital era.