If you want total control, your subjects must enforce your rules upon themselves. You can achieve this condition by being inconsistent in your enforcement, which motivates them to use wide margins when editing their own behavior, at which point arbitrary enforcement can keep them in a state of sublimated panic, unwilling even to mention hot-button issues.
Our Community Guidelines prohibit hate speech that either promotes violence or has the primary purpose of inciting hatred against individuals or groups based on certain attributes. YouTube also prohibits content intended to recruit for terrorist organizations, incite violence, celebrate terrorist attacks, or otherwise promote acts of terrorism. Some borderline videos, such as those containing inflammatory religious or supremacist content without a direct call to violence or a primary purpose of inciting hatred, may not cross these lines for removal.
So what is “hate speech”? Their definition of “hate speech” is as nebulous as their agenda:
Hate speech refers to content that promotes violence against or has the primary purpose of inciting hatred against individuals or groups based on certain attributes, such as:
- race or ethnic origin
- veteran status
- sexual orientation/gender identity
There is a fine line between what is and what is not considered to be hate speech. For instance, it is generally okay to criticize a nation-state, but if the primary purpose of the content is to incite hatred against a group of people solely based on their ethnicity, or if the content promotes violence based on any of these core attributes, like religion, it violates our policy.
Then notice the insertion of more vague terms in their vague content policy:
Our products are platforms for free expression. But we don’t support content that promotes or condones violence against individuals or groups based on race or ethnic origin, religion, disability, gender, age, nationality, veteran status, or sexual orientation/gender identity, or whose primary purpose is inciting hatred on the basis of these core characteristics. This can be a delicate balancing act, but if the primary purpose is to attack a protected group, the content crosses the line.
The first question we might ask is, “What is hatred?” Does this mean simple dislike or criticism, or is this limited to the type of Protocols of the Learned Elders of Dublin type paranoid conspiracy thinking that leaves no possible conclusion for the reader except that a final solution to the Fenian Problem must be undertaken?
Next, we need to ask, “What is inciting?” This is another weasel term, in that encouraging people to think critically about a topic could be seen as inciting them on some level. It also could be retroactive; if someone does something rash after reading something reasonable, it can easily be inferred that this material was inciting something-or-other.
Finally, and this is because we are dealing with Leftists, “What is violence?” We all know about encouraging assault, genocide and rape, but is it violence when a group is relocated? Since some will resist, it might be. To condone violence might be as simple as saying, “I think the races should be separated in order to avoid further violence.”
Despite all of the friendly language about free speech and fine lines, these rules are deliberately vague and designed to allow Google maximum leeway in removing material that its censorship team — probably not highly paid, and probably career Leftists — finds offensive. So far, they have used it to remove a number of music videos from their search results, merely for having bad associations or bad topics.
If you look at these rules by their terms alone, it becomes clear that any criticism of these protected groups — races, ethnic groups, sexes, and the mentally disabled among them — is now on the chopping block, or at least will be whenever Google decides it is, according to its interpretation of these fuzzy boundaries.
The real agenda behind this latest move is to allow Google to hide anything conservative-tinged from its search and, by demonetizing such videos, to discourage their production. For some time there has been a cottage industry in making realist videos and receiving the "monetization," or share of the advertising revenue, from those videos. That is Google's real target.
Part of the problem here is that “hate speech” is such a vague concept that it almost cannot be defined. This broad brush includes both deranged threats of violence, and principled and reasonable criticism of Leftist policies. In order to protect the latter, Google has designed rules potentially designating all non-Leftist thought as the former.