Bot Sentinel is a community-funded project, and we have a frustratingly limited budget. If you use Bot Sentinel and want to help us improve the platform, please consider contributing!
In 2018, Christopher Bouzy (@cbouzy) launched Bot Sentinel to help fight disinformation and targeted harassment. We believe Twitter users should be able to engage in healthy online discourse without inauthentic accounts, toxic trolls, foreign countries, and organized groups manipulating the conversation.
We designed Bot Sentinel to be simple to use and as informative as possible. We publicly display detailed information about the Twitter accounts the platform is tracking, giving visitors to this website a better understanding of how nefarious accounts spread disinformation and target other accounts. We try to be as transparent as possible and provide as many data points as we can.
Bot Sentinel is a non-partisan platform; we track all accounts. The platform uses machine learning and artificial intelligence to classify Twitter accounts and add the accounts to a publicly available database that anyone can browse.
We trained Bot Sentinel's machine learning model on thousands of accounts and millions of tweets, and the system classifies accounts with 95% accuracy. Unlike other machine learning tools designed to detect “bots,” we focus on specific behaviors and activities that Twitter's rules deem inappropriate. We analyze hundreds of tweets per account to classify it accurately and provide an easy-to-understand report.
Researchers rarely agree on what activity constitutes toxic trolling or harmful inauthentic activity, so we took a novel approach when training our machine learning model. Instead of building a model on our own interpretation of what an inauthentic account or toxic troll is, we used Twitter's rules as a guide when selecting the accounts used to train the model. We searched for accounts that repeatedly violated Twitter's rules, and we trained our model to classify accounts similar to those we identified as “problematic.” Note: Ideology, political affiliation, religious beliefs, geographic location, and frequency of tweets are not factors in how an account is classified.
We rate accounts on a scale from 0% to 100%; the higher the score, the more likely the account engages in targeted harassment, toxic trolling, or deceptive tactics engineered to cause division and chaos. We analyze several hundred tweets per account, and the more an account behaves like known problematic accounts, the higher its Bot Sentinel score. Because problematic accounts are likely violating Twitter's rules, we believe most Twitter users would want to report and avoid them, as they add little value to meaningful public discourse.
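The 0%–100% scale can be pictured as a simple threshold mapping from a numeric score to a report label. The cutoff values and label names in this sketch are illustrative assumptions for explanation only, not Bot Sentinel's published categories or actual scoring logic:

```python
# Illustrative sketch: mapping a 0-100 score to a human-readable label.
# The thresholds and labels below are hypothetical assumptions, not
# Bot Sentinel's real categories or internal implementation.

def rating_label(score: float) -> str:
    """Map a trollbot-style score (0-100) to a report label."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score < 25:
        return "normal"        # little behavior consistent with problematic accounts
    elif score < 50:
        return "satisfactory"  # occasional questionable activity
    elif score < 75:
        return "disruptive"    # frequent behavior resembling problematic accounts
    else:
        return "problematic"   # consistently behaves like rule-violating accounts

print(rating_label(12))  # a low score maps to the lowest-risk label
print(rating_label(88))  # a high score maps to the highest-risk label
```

The key idea the sketch illustrates is that the score is continuous, so the report can communicate degrees of problematic behavior rather than a binary bot/not-bot verdict.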
Most inauthentic accounts and toxic trolls are not part of a large conspiracy to influence American policies or elections. However, some inauthentic accounts do engage in deceptive tactics, and there is a correlation between problematic behavior and inauthentic accounts that are part of an influence campaign. An inauthentic account that is actively trying to sow division and discord will behave in a manner consistent with someone who receives a high Bot Sentinel score.