In its relentless quest to purge the Twitter-verse of “abusive” accounts, Twitter has suspended 100,000 accounts that created a new account during their suspension period, a 45% increase over the previous year. In its continuing pursuit of making users “feel safe” on the platform, Twitter is proactively targeting accounts with AI. Flagging terms that involve threats, encouragement of self-harm, and abusive or violent language has enabled Twitter to ferret out offending tweets without having to rely on users’ reports. In fact, 38% of the tweets deemed abusive were found via flagging.
Do you feel unsafe on Twitter? Is it really that difficult to simply report the tweet and block the user? Twitter is a platform for adults; do we really need to be protected proactively with AI? Who determines the search criteria? Will Twitter share the search criteria?
Twitter’s historical bias against Trump supporters’ accounts is well established. It continues to shadow ban and throttle accounts, which then mysteriously unfollow other users, find their likes and retweets undone, and lose hundreds of followers seemingly overnight.
While The Good Fight can tweet a shopping list of items that includes the assassination of a sitting president and retain its good standing and coveted blue check mark, I was briefly suspended for jokingly suggesting that another user “learn to code.” As a former coder myself, I find it very difficult to believe that something as innocuous as the hashtag #LearnToCode is considered hate speech, but this is not:
As the PC police continue to censor us “for our own good,” how far do we allow them to stifle our free speech? The First Amendment was created precisely to protect speech that is offensive; inoffensive speech doesn’t NEED protection! @jack and company will be releasing their newest rules update later this week. I wonder how he’ll be protecting us next.