The Dangerous Speech Project: a foolish, damaging, and flawed attempt to autocensor.

This is from the Dangerous Speech Project, who miss the point. The best correction for bad speech is more speech, and the correct response to inflammatory speech is satire, mockery and fisking.

The correct response to violence is the law: the systematic use of aggression and violence to keep the peace. This is not for some unelected organization to police. This is a matter for the crown, for parliament, for constitutions.

For what one person calls inflammatory another calls freedom. Or theology. Or both.

Inflammatory public speech rises steadily before outbreaks of mass violence, suggesting that it is a precursor of, or even a prerequisite for violence. In many cases, a few influential figures turn their own people against another group, using speech that has a special capacity to inspire violence: Dangerous Speech. Found in myriad languages, cultures, and religions, Dangerous Speech is uncannily similar across them. For example, it often refers to people as insects, vermin, aliens, threats, or pollution.

Violence may be prevented by diminishing such speech, or by making it less compelling to its audiences – without harming freedom of expression. The Dangerous Speech Project works to find the best ways to do this.

What these people do not consider is that mass violence is a reaction. It may be driven by a demagogue stoking hatred, or by a group doing hateful things. If one cannot damn the hated things, if one cannot speak up because it is deemed inappropriate and might “lead to hatred”, then people go silent. And angry.

And let us be clear: the Dangerous Speech Project is no friend of free speech. Its goal is censorship. It is funded by the Canadians, who devalue free speech and have human rights kangaroo courts that hound those who do not follow their narrative.

Our intent is for this work to have at least three points of impact. First, we aim to deliver a classification scheme for counterspeech online which can be used to design programs and policies to diminish expressions of hatred and extremism online. Diminishing such expressions should accomplish two separate objectives, in turn: to change the minds of the original authors of hateful and dangerous speech, and to expose others to less dangerous speech.

Second, we aim for the proposed work to set the stage for a more ambitious intervention-based study in which we generate counterspeech of several types, to test the comparative advantages of each. While this experimental phase is beyond the scope of the current project, the present work will lay the necessary groundwork for its undertaking.

Finally, as part of this project, we will develop a set of computational tools for detecting spontaneous expressions of hate on Twitter and responses that attempt counterspeech. We anticipate that these software tools will be valuable resources for both this project as well as many other projects (both academic and practical in nature) focused on understanding and curtailing hate speech in online environments.

This effort is funded by Public Safety Canada as part of its Kanishka Project, a five-year initiative investing in research on terrorism and counter-terrorism, including preventing and countering violent extremism. The Kanishka Project is named after the Air India Flight 182 plane that was bombed on June 23, 1985, killing 329 innocent people, most of them Canadians, in the worst act of terrorism in Canadian history.

These people are aligned with twitter. They are setting up algorithms to modify conversations. I expect twitter to fold soon: it is time to leave. This will limit my screenshots to a certain extent (though tumblr, the best place in existence to capture fools, is still ripe material). But the algorithms have three flaws.

The first is moral. At the risk of repeating what has been said since Milton (who was probably repeating others), the best defence against hateful speech is more speech. Not limitations, but freedom. If someone is offensive, they had better be prepared to take what is coming: one of the things one notes about the modern censors is that they cannot stand laughter.

The second is practical. The classification of hatred, as a construct, is difficult. Getting universal criteria is hard. And then carefully and accurately distinguishing hate from non-hate, the conversation that may hurt some from that which advocates the death of many, is very, very difficult. Such instruments need careful construction, verification, and peer review. I have done enough survey development to know how long this takes: it is measured in months to years. Twitter and the internet work in days to weeks, and consider censorship something to route around. Their instruments will never be robust and defensible.
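To make the practical problem concrete, here is a minimal sketch, in Python, of the sort of keyword filter that can be built in days. The term list and example tweets are invented for illustration; this is not the project's classifier, just a demonstration of why context-free classification misfires.

    # Naive keyword "hate" classifier and why it misfires.
    # The term list and example tweets are invented for illustration only.
    HATE_TERMS = {"vermin", "cockroaches", "exterminate"}

    def naive_hate_flag(tweet: str) -> bool:
        """Flag a tweet if it contains any term from the keyword list."""
        words = {w.strip(".,!?'\"").lower() for w in tweet.split()}
        return bool(words & HATE_TERMS)

    tweets = [
        "They are vermin and should be driven out.",                    # genuine dangerous speech
        "Calling people 'vermin' is how genocides start. Stop it.",     # counterspeech quoting the slur
        "My kitchen has cockroaches again, calling the exterminator.",  # entirely innocent
    ]

    for t in tweets:
        print(naive_hate_flag(t), "-", t)
    # All three are flagged: a keyword test cannot separate advocacy from
    # quotation, satire, or literal use without context, and supplying that
    # context is exactly the slow instrument-building work described above.

Even this toy shows the gap between flagging words and measuring hatred: telling the first tweet from the second requires the careful construction, verification, and peer review that take months to years.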

The final issue is that they are using proxy measures. Hateful speech is not violence. Violence is violence. Many terrorists will not use hate speech because they consider it useless: they will be polite until they blow up a plane or a building. This is the proxy measure argument: classifying speech is hard enough, but predicting rare events from it is virtually impossible, as anyone who has used Bayes’ Theorem to work out post-test probabilities will attest.
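The post-test probability point can be made with a few lines of arithmetic. The numbers below are invented for illustration, and deliberately generous to the classifier; they are not the project's figures.

    # Bayes' Theorem applied to a rare event, with invented, generous numbers.
    # Suppose 1 in 100,000 people whose speech is examined will ever commit
    # an act of mass violence, and the classifier is 99% sensitive and
    # 99% specific, far better than any real text classifier.
    prevalence  = 1 / 100_000     # prior (pre-test) probability of violence
    sensitivity = 0.99            # P(flagged | violent)
    specificity = 0.99            # P(not flagged | not violent)

    p_flagged = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
    post_test = sensitivity * prevalence / p_flagged   # P(violent | flagged)

    print(f"Post-test probability: {post_test:.4%}")    # about 0.1%

Even with an implausibly good classifier, roughly 999 of every 1,000 people it flags will never commit violence: the rarity of the event swamps the accuracy of the test.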

However, we can safely say this.

This project has no moral foundation. It has a flawed methodology. It will fail. But it will cause damage, commencing with the devaluation of those social media sites, such as twitter, that employ them.

1 Comment

  1. Hi Chris! Thanks for your thoughtful review and critique of our work. I’d like to respond to a couple of the points you raise here.

    – We are very much committed to freedom of expression – our project director spent a decade as a journalist before going into human rights law, and I own a significant stake in a media startup in my hometown. Part of the reason we created the Dangerous Speech framework is to help governments, platforms, etc. resist the urge to ban “hate speech,” which is a poorly defined category that often includes anything the regulator doesn’t want to hear.

    – We received a small one-time grant from Public Safety Canada to support the specific research project described in the section you quote from our website, much of which is subgranted to our research partner at McGill University. The grant only covers direct costs of that project and a small amount of administrative overhead; it does not contribute to any of our general operating funds.

    – The algorithms our research partners have developed do not do anything to Twitter – they merely collect information (and have been developed independently of Twitter, in spite of their restrictions on third party access). We do not want computers or Twitter to modify people’s conversations. What we’re trying to do is sift through the mass of content on Twitter to figure out where and how people are changing each other’s perspectives and behavior, so we can better understand how the core interpersonal action of talking with each other and changing each other’s minds persists and changes in the context of a space like Twitter. We are also quite fond of Milton, and we hope that our research will help people be more effective when responding to hateful speech with more speech.

    – We’re quite aware that establishing universal criteria is a hard, long-term process. We’ve been working on this for about five years, and we’re in the process of expanding our research capacity to enhance the reach and reproducibility of our efforts. The Internet is a fast-moving place to be certain, but we’re committed to doing this work the right way, drawing on experts from multiple disciplines and countries to continually refine our guidelines and practices.

    – Proxy measures are indeed difficult to work with. There are bits and pieces of quantitative data linking speech acts to mass violence, but we’re careful not to draw a line of causation between the two (unlike many governmental and international bodies). We’ll never be able to say that our research or the tools we created have definitively prevented violence from occurring, but we think they could help.

    Let me know if you have any questions – happy to discuss any of this further.

    Tonei Glavinic
    Program Manager
    Dangerous Speech Project

    February 12, 2016
