How do we define incivility?
Incivility has been studied extensively by academics. Here we use the general term incivility to mean any type of content that others may perceive as offensive. The key part of the definition is reception: that others will or may find the content offensive. Uncivil behavior is toxic in that it provokes negative responses from others.
Here we use incivility to describe behaviors on social media that are undesirable in two key ways: unprofessionalism and aggression.
First, we focus on unprofessional behaviors: vulgarity and inflammatory language. These behaviors send a bad signal to potential employers and the broader social media sphere.
Next, we focus on behaviors that show aggression. These behaviors tend to quell or discourage healthy discussion and debate on social media, and they are usually designed to hurt others. These measures include hateful language, insults, and threats.
Both sets of behaviors are problematic and can carry serious consequences. HR representatives often use the presence of these behaviors to screen job candidates, and professionals with long-standing careers often suffer lasting PR damage from messages they sent years ago.
Measures of Unprofessionalism
Our first measure is what people often think of when they hear the word incivility: vulgar, crude, or profane language. Think of words that are “off limits” in polite discussion. These words often serve as heuristics that HR representatives use to flag unprofessionalism.
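A measure like this can be operationalized, in its simplest form, as lexicon matching. The sketch below is illustrative only: the term list and function name are assumptions for demonstration, not the actual vulgarity classifier described here, which would rely on a curated profanity lexicon or a trained model.

```python
import re

# Hypothetical mini-lexicon of "off limits" words (placeholders, not a
# real profanity list). A deployed measure would use a curated lexicon.
VULGAR_TERMS = {"damn", "hell", "crap"}

# One word-boundary pattern so substrings (e.g., "hell" in "seashells")
# do not trigger a false match.
VULGAR_RE = re.compile(
    r"\b(" + "|".join(re.escape(w) for w in VULGAR_TERMS) + r")\b",
    re.IGNORECASE,
)

def is_vulgar(text: str) -> bool:
    """Flag a post if it contains any term from the lexicon."""
    return VULGAR_RE.search(text) is not None
```

The word-boundary anchors matter in practice: without them, innocuous words containing a flagged substring would be marked vulgar.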
Inflammatory language tends to provoke others by saying things that may prompt responses marked by negative sentiment and high activation (e.g., outrage or disgust).
Measures of Aggression
Hateful language goes beyond vulgarity. This measure detects negative stereotypes and disparagement on the basis of race, ethnicity, and sexual orientation. Beyond this, hateful language covers other forms of language that negatively categorize others.
Here we define insults as language that intentionally disrespects others. This includes name-calling, verbally abusive language, invectives, and ridicule. Insults differ slightly from hateful language in that they can be rooted in things other than race, gender, or sexual orientation. We include disparaging comments about intelligence, appearance, and character as part of our broader insult category.
Finally, and most dramatically, we have a special algorithm built just to detect threats. Threats are written statements that explicitly express the desire to do harm to another person. Harm can be physical, such as violence, or online, such as trolling.
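Since each measure above produces its own flag, scoring a post amounts to running every measure and collecting the results. The sketch below shows that structure with stand-in keyword patterns; the pattern lists and the `score_incivility` name are illustrative assumptions, and the real measures would be separate trained classifiers rather than regexes.

```python
import re

# Hypothetical stand-in patterns, one per measure described above.
# Real measures would be trained classifiers, not keyword lists.
MEASURES = {
    "vulgarity": re.compile(r"\b(damn|crap)\b", re.IGNORECASE),
    "insult": re.compile(r"\b(idiot|moron|loser)\b", re.IGNORECASE),
    "threat": re.compile(r"\byou will pay\b", re.IGNORECASE),
}

def score_incivility(text: str) -> dict:
    """Return one boolean flag per incivility measure for a single post."""
    return {name: bool(pattern.search(text)) for name, pattern in MEASURES.items()}
```

Keeping the measures separate, rather than collapsing them into one score, preserves the distinction the text draws between unprofessionalism and aggression.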