The ways in which different people view and approach the task of moderation vary, and each person will have their own ideas about how best to manage a community.
An academic study interviewed dozens of moderators across multiple platforms and grouped moderation approaches into five categories: nurturing and supporting communities, overseeing and facilitating communities, fighting for communities, governing and regulating communities, and managing communities.
Being able to understand the perspective behind each of these approaches and apply them to your own community as needed is a powerful skill for a community manager. This article will discuss in more detail what these five categories mean and how you can apply them within your own communities.
Moderators that nurture and support communities (nurturing-type moderators) focus on shaping the community and conversations that occur in the server among members to match their vision. The foundation for their moderation actions stems from their desire to keep the community positive and welcoming for everyone, not just long-time members. They seek to create a community with a good understanding of the rules that can then develop itself in a positive way over time.
These types of moderators may pre-screen members or content by implementing a verification gate or using an automoderator to filter out low-quality members or content, curating the conversations of the server to better suit their vision.
Although this passive, behind-the-scenes guidance is one hallmark of the nurturing approach, these types of moderators also often actively engage with the community as a “regular member.” For nurturing-type moderators, this engagement isn’t meant specifically to provide an example of rule-following behavior, but rather to encourage high-quality conversations on the server where members will naturally enjoy engaging with each other and the moderators as equals. They are leading by example.
While nurturing- and supporting-type moderators operate based upon their long-term vision for a community, moderators that are focused on overseeing and facilitating communities focus on short-term needs and the day-to-day interactions of community members. They are often involved in handling difficult scenarios and fostering a healthy community.
For example, these types of moderators will step in when there is conflict within the community and attempt to mediate between parties to resolve any misunderstandings and restore friendliness to the server. Depending on the issue, they may also refer to specific rules or community knowledge to assign validity to one viewpoint or to respectfully discredit the behavior of another. In both situations, moderators will attempt to elicit agreement from those involved about their judgment and resolve the conflict to earn the respect of their community members and restore order to the server.
Those in the overseeing and facilitating communities category may also take less involved approaches towards maintaining healthy day-to-day interaction among members, such as quickly making decisions to mute, kick, or ban someone that is causing an excessive amount of trouble rather than attempting to talk them down. They may also watch for bad behavior and report it to other moderators to step in and handle, or allow the community to self-regulate when possible rather than attempting to directly influence the conversation.
*Unless you are using the channel description for verification instructions rather than an automatic greeter message.
If you want to use the remove unverified role method, you will need a bot that can automatically assign a role to a user when they join.
Verification Actions
Once you decide whether you want to add or remove a role, you need to decide how you want that action to take place. Generally, this is done by typing a bot command in a channel, typing a bot command in a DM, or clicking on a reaction. The differences between these methods are shown below.
In order to use the command in channel method, you will need to instruct your users to remove the Unverified role or to add the Verified role to themselves.
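As an illustration of the command in channel method, here is a minimal sketch using discord.py; the role name "Unverified", the "!verify" command, and the token placeholder are assumptions for the example rather than required names.

```python
# Minimal sketch: auto-assign an "Unverified" role on join and remove it when
# the member types !verify in a channel. Role name, prefix, and token are
# assumptions for illustration.
import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.members = True          # required to receive member join events
intents.message_content = True  # required to read the !verify command

bot = commands.Bot(command_prefix="!", intents=intents)

@bot.event
async def on_member_join(member: discord.Member):
    # Automatically give new members the Unverified role.
    role = discord.utils.get(member.guild.roles, name="Unverified")
    if role is not None:
        await member.add_roles(role, reason="Awaiting verification")

@bot.command()
async def verify(ctx: commands.Context):
    # "Command in channel" method: remove the Unverified role from the author.
    role = discord.utils.get(ctx.guild.roles, name="Unverified")
    if role is not None and role in ctx.author.roles:
        await ctx.author.remove_roles(role, reason="Member verified")

bot.run("YOUR_BOT_TOKEN")
```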
Where overseeing and facilitating community moderators emphasize interactive and communicative approaches to solving situations with community members, moderators who see themselves as fighting for communities heavily emphasize taking action and content removal rather than moderating via a two-way interaction. They may see advocating for their community members as part of their job and want to defend the community from those who would try to harm it. Oftentimes, the moderators themselves may have been on the receiving end of the problematic behavior in the past and desire to keep others in their community from having to deal with the same thing. This attitude is often the driver behind their no-nonsense approach to moderation while strictly enforcing the community’s rules and values, quickly working to remove hateful content and users acting in bad faith.
Moderators in this category are similar to the subset of moderators in the overseeing and facilitating communities category, specifically those that quickly remove users who are causing trouble. However, whereas that perspective tends to view misbehavior as stemming from immaturity, moderators that fight for communities focus more on the content being posted in the community than on the intent behind it. In contrast to moderators in the overseeing and facilitating communities category, these moderators take a firmer stance in their moderation style and do not worry about complaints from users who have broken rules. Instead, they accept that pushback on the difficult decisions they make is part of the moderation process.
Markdown is also supported in an embed. Here is an image to showcase an example of these properties:
Example image to showcase the elements of an embed
An important thing to note is that embeds also have their limitations, which are set by the API. The most important ones to know are the character limits on individual elements (such as the title and description), the cap on the number of fields, and the limit on the embed as a whole.
If you feel like experimenting even further, you should take a look at the full list of limitations provided by Discord here.
It’s very important to keep in mind that when you are writing an embed, it should be in JSON format. Some bots even provide an embed visualizer within their dashboards. You can also use this embed visualizer tool which provides visualization for bot and webhook embeds.
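For reference, here is a minimal sketch of what that JSON structure can look like, posted through a webhook with Python's requests library; the webhook URL is a placeholder and the field values are arbitrary examples.

```python
# Minimal sketch of an embed payload in the JSON structure Discord expects,
# sent through a webhook with the requests library. The webhook URL is a
# placeholder; values are arbitrary examples.
import requests

embed = {
    "title": "Example embed",
    "description": "Most text fields support **Markdown**.",
    "color": 0x5865F2,  # an integer, not a hex string
    "fields": [
        {"name": "Field name", "value": "Field value", "inline": True},
    ],
    "footer": {"text": "Footer text"},
}

requests.post(
    "https://discord.com/api/webhooks/ID/TOKEN",  # placeholder URL
    json={"embeds": [embed]},
)
```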
Those that see themselves as governing and regulating communities see the moderation team as a form of governance and place great emphasis on the appropriate and desirable application of the community rules, often seeing the process for making moderation decisions as similar to a court system making decisions based on a set of community “laws.” They may also see themselves as representatives of the community or the moderation team and emphasize the need to create policies or enforce rules that benefit the community as a whole.
Moderators in this category may consciously run the community according to specific governance principles, such as having a vote on community changes. However, they may also achieve consensus within the team about changes to the server without involving the community at large, or even have one moderator make the final determination about community changes. This “final decision” power is usually exercised by vetoing a proposed policy or issuing a ruling on an issue that is particularly contentious within the mod team or community. Very rarely would this form of decision-making power be granted to anyone beyond very specific members of the team hierarchy, such as the server owner or administrative lead. Even so, moderators in this category find following procedure to be important and tend to involve others to some extent in making decisions about the community rather than acting on their own. You can learn more about different community governance structures here.
This tendency is also seen in the way that they approach rule enforcement. Moderators that see themselves as governing and regulating communities view the rules as if they were the laws of a country. They meticulously review situations that involve moderator intervention to determine which rule was broken and how it was broken while referring to similar past cases to see how those were handled. These moderators also tend to interpret the rules more strictly, according to the “letter of the law,” and attempt to leave no room for argument while building their “case” against potential offending users.
Moderators that see themselves as managing communities view moderation as a second job to be approached in a professional way. They pay particular attention to the way they interact with other members of the community moderation team as well as the moderation teams of other communities, and strive to represent the team positively to their community members. This type of moderator may appear more often as communities become very large and the need arises for clearer, standard processes and a division of responsibility between moderators in order to handle the workload.
Though this metaphor focuses more on moderator team dynamics than relationships between moderators and users, it can also shape the way moderators approach interactions with users. Managing-type moderators are more likely to be able to point users toward written rules, guidelines, or processes when they have questions. Managing-type moderators are also much less likely to make “on-the-fly” decisions about new issues that come up. Instead, they will document the issue and post about it in the proper place, such as a private moderator channel, so it can be discussed and a new process can be created if needed. This approach also makes it easier to be transparent with users about decision making. When there are established, consistent processes in place for handling issues, users are less likely to feel that decisions are random or arbitrary.
Another strength of this approach is evident in efficient onboarding processes. When a community has clear processes for documenting, discussing, and handling different situations, adding new moderators to the team is much easier because there is already a set of written instructions for how they should do their job. This professional approach to moderation can also help moderators when they are attempting to form partnerships or make connections with other servers. An organized moderation team is much more likely to make a good impression on potential partners. If you want to learn more about managing moderation teams, click here.
Even though this comparison is important for a better understanding of both bots and webhooks, it does not mean you should limit yourself to only picking one or the other. Sometimes, bots and webhooks work best when used together. It’s not uncommon for bots to use webhooks for logging purposes or to distinguish notable messages with a custom avatar and name for that message. Both tools are essential for a server to function properly and make for a powerful combination.
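As one example of that combination, a bot can post its log messages through a webhook and override the display name and avatar per message. The sketch below is a rough illustration; the webhook URL, avatar URL, and helper function name are placeholders.

```python
# Rough sketch: a bot logging through a webhook with a per-message name and
# avatar override. The webhook URL and avatar URL are placeholders.
import requests

def log_event(text: str) -> None:
    payload = {
        "content": text,
        "username": "Mod Log",                        # overrides the webhook's default name
        "avatar_url": "https://example.com/log.png",  # overrides the default avatar
    }
    requests.post("https://discord.com/api/webhooks/ID/TOKEN", json=payload)

log_event("Deleted an invite link posted in #general.")
```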
*Unconfigurable filters; these will catch all instances of the trigger, regardless of whether they’re spammed or a single instance
**Gaius also offers an additional NSFW filter as well as standard image spam filtering
***YAGPDB offers link verification via Google; anything flagged as unsafe can be removed
****Giselle combines Fast Messages and Repeated Text into one filter
Anti-spam is integral to running a large private server or a public server. Spam, by definition, is irrelevant or unsolicited messages. This covers a wide range of behavior on Discord, and there are multiple types of spam a user can engage in. The common forms are listed in the table above. The most common forms of spam are also very typical of raids, those being Fast Messages and Repeated Text. The nature of spam can vary greatly, but the vast majority of instances involve a user or users sending lots of messages with the same contents with the intent of disrupting your server.
There are subsets of this spam that many anti-spam filters will be able to catch. If any of the following (Mentions, Links, Invites, Emoji, or Newline Text) are spammed repeatedly in one message or across several messages, they will trigger most Repeated Text and Fast Messages filters. Subset filters are still a good thing for your anti-spam filter to contain, as you may wish to punish more or less harshly depending on the type of spam. Namely, Emoji and Links may warrant separate punishments; spamming 10 links in a single message is inherently worse than having 10 emoji in a message.
Anti-spam will only act on these things contextually, usually in an X in Y fashion where if a user sends, for example, 10 links in 5 seconds, they will be punished to some degree. This could be 10 links in one message, or 1 link in 10 messages. In this respect, some anti-spam filters can act simultaneously as Fast Messages and Repeated Text filters.
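As a rough illustration of the X in Y pattern, the sketch below counts links per user inside a sliding time window; the thresholds are arbitrary assumptions, not recommended values.

```python
# Rough sketch of an "X in Y" anti-spam check: flag a user who posts more
# than MAX_LINKS links within WINDOW seconds, whether in one message or many.
# Thresholds are arbitrary assumptions.
import time
from collections import defaultdict, deque

MAX_LINKS = 10   # X
WINDOW = 5.0     # Y, in seconds

recent_links = defaultdict(deque)  # user_id -> timestamps of recent links

def register_links(user_id: int, link_count: int) -> bool:
    """Record links from a message; return True if the user should be punished."""
    now = time.monotonic()
    timestamps = recent_links[user_id]
    timestamps.extend([now] * link_count)
    # Drop timestamps that have fallen outside the window.
    while timestamps and now - timestamps[0] > WINDOW:
        timestamps.popleft()
    return len(timestamps) > MAX_LINKS
```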
Sometimes, spam may happen too quickly for a bot to keep up. Discord’s rate limits, which exist to stop bots from harming servers, can prevent a bot from deleting individual messages if those messages are being sent too quickly. This can often happen in raids. As such, Fast Messages filters should prevent offenders from sending further messages; this can be done via a mute, kick, or ban. If you want to protect your server from raids, please read on to the Anti-Raid section of this article.
Text Filters
Text filters allow you to control the types of words and/or links that people are allowed to put in your server. Different bots will provide various ways to filter these things, keeping your chat nice and clean.
*Defaults to banning ALL links
**YAGPDB offers link verification via Google; anything flagged as unsafe can be removed
***Setting a catch-all filter with Carl will prevent link-specific spam detection
A text filter is integral to a well moderated server. It’s strongly recommended that you use a bot that can filter text based on a blacklist. A banned words filter can catch links and invites provided http:// and https:// are added to the word blacklist (to block all links), or specific full site URLs to block individual websites. In addition, discord.gg can be added to a blacklist to block ALL Discord invites.
A banned words filter is integral to running a public server, especially if it’s a Partnered, Community or Verified server, as this level of auto moderation is highly recommended for the server to adhere to the additional guidelines attached to it. Before configuring a filter, it’s a good idea to work out what is and isn’t ok to say in your server, regardless of context. For example, racial slurs are generally unacceptable in almost all servers, regardless of context. Banned word filters with an explicit blacklist often won’t account for context. For this reason, it’s important that a robust filter also contains whitelisting options. For example, if you add the slur ‘nig’ to your filter and someone mentions the country ‘Nigeria’, they could get in trouble for using an otherwise acceptable word.
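A simple sketch of this blacklist-plus-whitelist idea is shown below; the word lists are illustrative only, and in practice the filter would usually come from the bot you choose rather than custom code.

```python
# Simple sketch of a banned-words filter with a whitelist, so that a blocked
# substring (e.g. a slur) doesn't also flag acceptable words such as "Nigeria".
# Word lists here are illustrative only.
BLACKLIST = {"http://", "https://", "discord.gg", "nig"}
WHITELIST = {"nigeria", "nigerian"}

def message_is_blocked(message: str) -> bool:
    text = message.lower()
    # Remove whitelisted words first so they can't trigger blacklist substrings.
    for allowed in WHITELIST:
        text = text.replace(allowed, "")
    return any(banned in text for banned in BLACKLIST)

print(message_is_blocked("I visited Nigeria last year"))     # False
print(message_is_blocked("join https://discord.gg/abc123"))  # True
```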
Filter immunity may also be important to your server, as there may be individuals who need to discuss the use of banned words, namely members of a moderation team. There may also be channels that allow the usage of otherwise banned words. For example, a serious channel dedicated to discussion of real world issues may require discussions about slurs or other demeaning language; for this exception, channel-based immunity is integral to allowing those conversations.
Link filtering is important to servers where sharing links in ‘general’ chats isn’t allowed, or where there are specific channels for sharing such things. This can allow a server to remove links with an appropriate reprimand without treating a transgression with the same severity as they would a user sending a racial slur.
Whitelisting/blacklisting and templates for links are also a good idea to have. While many servers will use catch-all filters to make sure links stay in specific channels, some links will always be malicious. As such, being able to filter specific links is a good feature, with preset filters (like the Google filter provided by YAGPDB) coming in very handy for protecting your user base without intricate setup. However, it is recommended that you also configure a custom filter to ensure specific slurs, words, etc. that break the rules of your server aren’t being said.
Invite filtering is equally important in large or public servers, where users will attempt to raid, scam, or otherwise assault your server with links intended to manipulate your user base into joining other servers, or where unsolicited self-promotion is potentially fruitful. Filtering allows these invites to be recognized and dealt with more harshly. Some bots may also allow per-server whitelisting/blacklisting, letting you control which servers are ok to share invites to and which aren’t. A good example of invite filtering usage would be something like a partners channel, where invites to other, closely linked servers are shared. These servers should be added to an invite whitelist to prevent their deletion.
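For invite filtering specifically, a rough sketch of per-server whitelisting might look like the following; the regular expression and the allowed invite codes are assumptions for illustration.

```python
# Rough sketch of invite filtering with a whitelist: extract invite codes from
# a message and report any that aren't on the allowed list. The allowed codes
# are hypothetical examples.
import re

INVITE_PATTERN = re.compile(
    r"(?:discord\.gg|discord(?:app)?\.com/invite)/([A-Za-z0-9-]+)",
    re.IGNORECASE,
)
ALLOWED_INVITE_CODES = {"partner-server"}  # invites from closely linked servers

def find_disallowed_invites(message: str) -> list[str]:
    return [
        code
        for code in INVITE_PATTERN.findall(message)
        if code not in ALLOWED_INVITE_CODES
    ]

print(find_disallowed_invites("come join https://discord.gg/abc123"))  # ['abc123']
```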
Anti-Raid
Raids, as defined earlier in this article, are mass-joins of users (often selfbots) with the intent of damaging your server. There are a few methods available to protect your community from this behavior. One method involves gating your server with verification appropriately, as discussed in DMA 301. You can also supplement or supplant the need for verification by using a bot that can detect and/or prevent damage from raids.
*Unconfigurable, triggers raid prevention based on user joins & damage prevention based on humanly impossible user activity. Will not automatically trigger on the free version of the bot.
Raid detection means a bot can detect the large number of users joining that’s typical of a raid, usually in an X in Y format. This feature is usually chained with Raid Prevention or Damage Prevention to prevent the detected raid from being effective, wherein raiding users will typically spam channels with unsavoury messages.
Raid-user detection is a system designed to detect users who are likely to be participating in a raid, independently of the quantity or frequency of new user joins. These systems typically look for users that were created recently or have no profile picture, among other triggers depending on how elaborate the system is.
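As a rough sketch of such heuristics with discord.py, the check below flags members whose accounts are very new or have no custom avatar; the 24-hour threshold is an assumption, not a recommendation.

```python
# Rough sketch of raid-user detection heuristics: flag accounts that were
# created very recently or have no custom profile picture. The 24-hour
# threshold is an arbitrary assumption.
import datetime
import discord

MAX_ACCOUNT_AGE = datetime.timedelta(hours=24)

def looks_like_raid_account(member: discord.Member) -> bool:
    account_age = discord.utils.utcnow() - member.created_at
    has_default_avatar = member.avatar is None  # no custom profile picture set
    return account_age < MAX_ACCOUNT_AGE or has_default_avatar
```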
Raid prevention stops a raid from happening, triggered by either Raid detection or Raid-user detection. These countermeasures stop participants of a raid specifically from harming your server by preventing raiding users from accessing your server in the first place, such as through kicks, bans, or mutes of the users that triggered the detection.
Damage prevention stops raiding users from causing any disruption via spam to your server by closing off certain aspects of it either from all new users, or from everyone. These functions usually prevent messages from being sent or read in public channels that new users will have access to. This differs from Raid Prevention as it doesn’t specifically target or remove new users on the server.
Raid anti-spam is an anti spam system robust enough to prevent raiding users’ messages from disrupting channels via the typical spam found in a raid. For an anti-spam system to fit this dynamic, it should be able to prevent Fast Messages and Repeated Text. This is a subset of Damage Prevention.
Raid cleanup commands are typically mass-message removal commands to clean up channels affected by spam as part of a raid, often aliased to ‘Purge’ or ‘Prune’ (a minimal sketch of one appears below).

It should be noted that Discord features built-in raid and user bot detection, which is rather effective at preventing raids as or before they happen. If you are logging member joins and leaves, you can infer that Discord has taken action against shady accounts if the time difference between the join and the leave times is extremely small (such as between 0-5 seconds). However, you shouldn’t rely solely on these systems if you run a large or public server.
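A minimal sketch of such a cleanup command with discord.py is shown below; the command name, default limit, and permission check are assumptions for the example.

```python
# Minimal sketch of a raid cleanup ("purge") command with discord.py, bulk
# deleting the most recent messages in the channel where it is invoked.
# Command name, default limit, and permission check are assumptions.
import discord
from discord.ext import commands

intents = discord.Intents.default()
intents.message_content = True
bot = commands.Bot(command_prefix="!", intents=intents)

@bot.command()
@commands.has_permissions(manage_messages=True)
async def purge(ctx: commands.Context, amount: int = 100):
    # Bulk-delete the most recent messages in this channel.
    deleted = await ctx.channel.purge(limit=amount)
    await ctx.send(f"Removed {len(deleted)} messages.", delete_after=5)

bot.run("YOUR_BOT_TOKEN")
```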
User Filters
Messages aren’t the only way potential evildoers can present unsavoury content to your server. They can also manipulate their Discord username or Nickname to cause trouble. There are a few different ways a username can be abusive and different bots offer different filters to prevent this.
*Gaius can apply the same blacklist/whitelist to names as messages, or only filter based on items in the blacklist tagged %name
**YAGPDB can use configured word-list filters OR a regex filter
Username filtering is less important than other forms of auto moderation. When choosing which bot(s) to use for your auto moderation needs, this should typically be considered last, since users with unsavory usernames can simply be nicknamed in order to hide their actual username.
As you read through this article, you may have found that some moderation category descriptions resonated with you more than others. The more experience you have moderating, the wider the variety of moderation approaches you’ll implement. Rather than trying to find a single “best” approach from among these categories, it’s better to consider your overall balance in using them and how often you consider moderation issues from each perspective. For example, you can nurture and support a community by controlling how members arrive at your server and curating the content of your informational channels to guide conversation, while also managing and overseeing the interactions of honest, well-intentioned community members and quickly banning those who seek to actively harm your community.
It’s perfectly natural that each person on your moderation team will have an approach that comes easier to them than the others and no category is superior to another. Making sure all moderation categories are represented in your moderation team helps to ensure a well-rounded staff that values differing opinions. Even just understanding each of these frameworks is an important component of maintaining a successful community. Now that you understand these different approaches, you can consciously apply them as needed so that your community can continue to thrive!
One additional component not included in the table is the effects of implementing a verification gate. The ramifications of a verification gate are difficult to quantify and not easily summarized. Verification gates make it harder for people to join in the conversation of your server, but in exchange help protect your community from trolls, spam bots, those unable to read your server’s language, or other low intent users. This can make administration and moderation of your server much easier. You’ll also see that the percent of people that visit more than 3 channels increases as they explore the server and follow verification instructions, and that percent talked may increase if people need to type a verification command.
However, in exchange you can expect to see server leaves increase. In addition, total engagement on your other channels may grow at a slower pace. User retention will decrease as well. Furthermore, this will complicate the interpretation of your welcome screen metrics, as the welcome screen will need to be used to help people primarily follow the verification process as opposed to visiting many channels in your server. There is also no guarantee that people who send a message after clicking to read the verification instructions successfully verified. In order to measure the efficacy of your verification system, you may need to use a custom solution to measure the proportion of people that pass or fail verification.