We break down what Twitch's new hateful conduct and harassment policy includes, changes, and omits compared with its original policy.
Twitch’s updated hateful conduct and harassment policy, announced on Dec. 9, 2020, is now in effect. Here’s what’s new.
With this updated policy, Twitch has created two new sections to define what it, as a platform, views as hateful conduct, sexual harassment and harassment. Its original policy had only vague guidelines around these behaviors and lumped them all under “Harassment.” The new policy expands on this considerably, providing a definition for each behavior and many examples of what Twitch considers it to be.
Ultimately, the new policy lays out what it sees as harm and escalates its response in some cases, but still expects its creators and users to bear most of the burden and responsibility of rooting these behaviors out. The new policy leans much more into “context” than immediate action.
Under hateful conduct, Twitch defines the behavior as “discrimination, denigration, harassment, or violence” toward protected characteristics and groups. The policy now adds “color, caste and immigration status” to its protected groups. It omits age in this particular statement, but age is treated as a protected group in the examples Twitch provides.
The original policy stated only, “Twitch will consider a number of factors to determine the intent and context of any reported hateful conduct. Hateful conduct is a zero-tolerance violation. We will take action on all accounts associated with such conduct with a range of enforcement actions, including and up to indefinite suspension.”
The new policy outlines these behaviors--regardless of intent--as:
- The promotion, glorification, threatening and advocacy of violence, physical harm and death
- Hateful slurs, whether targeted or undirected. Exceptions are made for reclaimed words demonstrably used with that intent, and for slurs appearing in music that is not otherwise hateful or combined with discriminatory behavior.
- The posting, uploading and sharing of hateful images and symbols
- Speech, imagery and emote combinations that dehumanize and perpetuate stereotypes or memes.
- Content related to ableism and slut-shaming.
- Any calls made for subjugation or segregation of protected characteristics. The exception being discussions about immigration policy, voting rights and more--as long as the discussion isn’t discriminatory toward those protected characteristics.
- Content that supports political or economic dominance of any race, ethnicity or religion. This does include white supremacy and nationalism, but not self-determination movements, such as independence movements.
- The mocking of events and victims of documented hate crimes, or denial of documented acts of mass murder and genocide.
- Content including “unfounded” claims that blame protected groups or encourage fearmongering toward a protected group.
- Encouraging the use of conversion therapy.
- Abusive usernames.
- Membership, support and promotion of hate groups, and sharing of hate group propaganda.
Exceptions are made for movies, TV, developer-generated content and non-prohibited video games that are not directly discriminatory.
Twitch also makes a point of stating it will not share its slur list publicly, as it does not want to enable evasion, such as censoring with special characters. Twitch claims it will “take context into account when evaluating whether use of a slur violates our policies.”
In addition to this breakdown, Twitch has added entirely new sections dedicated to sexual harassment and harassing behavior, going so far as to give a specific definition of sexual harassment that the original policy did not include.
The new section reads, “Sexual harassment makes users feel uncomfortable, unsafe, and deters them from participating in online communities. This abuse can take the form of unwelcome sexual advances and solicitations, sexual objectification, or degrading attacks relating to a person’s perceived sexual practices, regardless of their gender.”
The behaviors the updated policy lists are as follows:
- Unsolicited sexual advances.
- Objectifying statements about sexual body parts or practices.
- Repeated comments about “perceived attractiveness” after it has been made clear it is unwanted.
- Derogatory comments about sexual practices and morality, i.e., slut-shaming.
- Coercing others into sexual content or favors by using bribes, threats, etc.
- Sharing unwanted and unsolicited links to pornographic images and videos.
- Sharing and threatening to share private sexually suggestive/explicit media without that person’s consent.
Twitch does not list any exceptions within these examples. It says it will take into account the number of times an account has been timed out, reported, banned, etc., even when the behavior in question is unwanted but not clearly derogatory.
Twitch now describes harassment as a deterrent to community growth, one that “creates a gateway for more severe forms of harm and abuse.” It goes on to define harassment as personal attacks, promotion of harm and malicious brigading.
The examples it includes are:
- Wishing harm or death on others.
- Glorifying, endorsing or condoning someone’s past or current trauma.
- Making implied threats--explicit threats are handled under Twitch’s Violence and Threats policy.
- Targeting severe and repeated personal attacks.
- Sharing negative doctored or artistic content that is used to degrade and harm another.
- Inciting others to harass, harm and abuse someone on or off of the platform.
- Making malicious contact with businesses and private persons.
- Using another person’s stream to gain an advantage in multiplayer games (i.e., stream sniping).
- Stalking or ignoring personal boundaries.
- Abusive usernames.
Twitch’s original policy detailed that it would suspend accounts that violate the harassment policy, and that accounts could be indefinitely banned on the first violation. The update drops this language. Instead, it emphasizes that Twitch wants to “enable users to express themselves naturally with their friends and communities without fear that these interactions could be misidentified as harassment.”
The updated policy puts more of the responsibility to cull bad behavior on the creator and community members in this section, noting that “context” is important and “may require that individuals who feel targeted by abuse indicate that these actions were not consensual banter before we will intervene.”
As in the original policy, creators are still expected to be responsible for moderating their communities. The two policies differ, however, in that the original only states it expects creators to make a “good-faith effort” to moderate prohibited behaviors, and leaves it at that.
The new policy expands on this, noting that creators who incite others to harass, or who fail to use the tools Twitch provides to moderate their communities, can be suspended for conduct on or off the platform.
Both policies outline the resources available to creators to moderate their communities.
A new section in the updated policy defines the differences between hateful conduct and harassment, reading, “Harassment becomes hateful conduct when the behavior is targeted at an individual(s) on the basis of protected characteristic(s). As Twitch does not tolerate any abuse that is motivated by hatred, prejudice or intolerance, the penalty for such behavior is more severe. Instances of Hateful Conduct will always lead to enforcement action, even if the report is submitted by a 3rd-party that wasn’t targeted by this behavior.”
The updated policy also expands the Q&A section, clarifying more about intent, whether or not the creator must speak up first about unwanted behavior, and more.
For more stories like this one delivered straight to your inbox, please subscribe to the GameDailyBiz Digest!