Instagram has revealed additional tools to prevent abusive messages from being sent during “sudden spikes.” Its new “limits” function automatically suppresses comments and messages from users who do not follow – or who have recently started following – those who turn it on.
It was created to prevent abuse from huge groups of people “who simply pile on in the moment,” according to Instagram. One example the company cited was the racist abuse that followed the men’s Euro 2020 football final.
Following England’s penalty shootout loss, black players were exposed to a barrage of racist abuse, particularly on social media platforms.
The severity of the abuse prompted the prime minister and others to urge social media companies to do more to prevent it, and led to several arrests. Instagram, which is owned by Facebook, admitted to BBC News in July that it had made “mistakes” in moderating the abuse and pledged to investigate further.
Instagram said the new capabilities were created to safeguard users from “an unexpected barrage” of critical feedback. It noted in its announcement that “creators and prominent figures occasionally suffer abrupt spikes in comments and DM [direct-message] requests from people they don’t know.”
“This is often an outpouring of support, such as when they go viral after winning an Olympic medal.”
“However, it might sometimes result in an avalanche of unwelcome remarks or communications.” Rather than forcing well-known users to block all comments and messages, the new tool lets anyone effectively silence those who are not “long-standing followers.” Instagram said the feature can be turned on or off at any time.
The company also stated that its previously disclosed Hidden Words technology would be made available to everyone in the world.
Instagram also stated it had “extended” the list of phrases, hashtags, and emojis that the system automatically blocks to filter out abusive communications, which users can customise.
It also changed the wording of the pop-ups that appear when users try to publish “a potentially objectionable comment,” warning that their accounts could be deleted if they do so again.
The move comes on the heels of Twitter’s announcement on Wednesday that, in the aftermath of the Euro 2020 final, it had removed almost 2,000 tweets aimed at England’s footballers. Twitter said that by far the largest share of the racist abuse originated in the United Kingdom, that 99% of the accounts it had permanently suspended were identifiable, and that it was testing a feature that “temporarily autoblocks accounts that use bad language.”