Twitch’s updated hateful conduct and harassment policy, announced on Dec. 9, 2020, is now in effect. Here’s what’s new.
With this update, Twitch has created new sections defining what it, as a platform, views as hateful conduct, sexual harassment and harassment. Its original policy offered only vague guidelines around these behaviors and lumped them all under "Harassment." The new policy expands on this considerably, providing a definition for each and many examples of what Twitch views these behaviors to be.
Ultimately, the new policy lays out what Twitch sees as harm and escalates its response in some cases, but it still expects creators and users to bear most of the burden and responsibility of rooting these behaviors out. The new policy leans much more heavily on "context" than on immediate action.
Under hateful conduct, Twitch defines the behavior as "discrimination, denigration, harassment, or violence" directed at protected characteristics and groups. The policy now adds "color, caste and immigration status" to its protected groups. It omits age from this particular statement, but age is treated as a protected characteristic in the examples Twitch provides.
The original policy only stated, “Twitch will consider a number of factors to determine the intent and context of any reported hateful conduct. Hateful conduct is a zero-tolerance violation. We will take action on all accounts associated with such conduct with a range of enforcement actions, including and up to indefinite suspension.”
The new policy outlines these behaviors, regardless of intent, as:
Exceptions are made for movies, TV, developer-generated content and non-prohibited video games that are not directly discriminatory.
Twitch also makes a point of stating that it will not share its slur list publicly, as it does not want to enable evasion tactics such as masking slurs with special characters. Twitch claims it will "take context into account when evaluating whether use of a slur violates our policies."
In addition to this breakdown, Twitch has added entirely new sections dedicated to sexual harassment and harassing behavior, going so far as to give a specific definition of sexual harassment that the original policy did not include.
The new section reads, “Sexual harassment makes users feel uncomfortable, unsafe, and deters them from participating in online communities. This abuse can take the form of unwelcome sexual advances and solicitations, sexual objectification, or degrading attacks relating to a person’s perceived sexual practices, regardless of their gender.”
The behaviors the updated policy lists are as follows:
Twitch does not list any exceptions within these examples. It says it will take into account the number of times an account has been timed out, reported, banned and so on, even when the behavior itself is unwanted but isn't clearly derogatory.
Twitch now describes harassment as a deterrent to community growth that "creates a gateway for more severe forms of harm and abuse." It goes on to define harassment as personal attacks, promotion of harm and malicious brigading.
The examples it includes are:
Twitch's original policy states that it will suspend accounts that violate the harassment policy, and that accounts that do so could be indefinitely banned on the first violation. The update doesn't say this. Instead, it emphasizes that it wants to "enable users to express themselves naturally with their friends and communities without fear that these interactions could be misidentified as harassment."
In this section, the updated policy puts more of the responsibility for curbing bad behavior on the creator and community members, noting that "context" is important and that Twitch "may require that individuals who feel targeted by abuse indicate that these actions were not consensual banter before we will intervene."
As in the original policy, creators are still expected to be responsible for moderating their communities. The two policies differ, however, in that the original only expects creators to make a "good-faith effort" to moderate prohibited behaviors and leaves it at that.
The new policy expands on this, noting that creators who incite others to harass and who do not use the moderation tools Twitch provides can be suspended for behavior on or off the platform.
Both policies outline the resources available to creators to moderate their communities.
A new section in the updated policy defines the differences between hateful conduct and harassment, reading, “Harassment becomes hateful conduct when the behavior is targeted at an individual(s) on the basis of protected characteristic(s). As Twitch does not tolerate any abuse that is motivated by hatred, prejudice or intolerance, the penalty for such behavior is more severe. Instances of Hateful Conduct will always lead to enforcement action, even if the report is submitted by a 3rd-party that wasn’t targeted by this behavior.”
The updated policy also expands the Q&A section, offering more clarity on intent, on whether a creator must speak up first about unwanted behavior, and more.