If at first you don’t succeed… 

Twitter announced Wednesday an update to its ongoing effort to pare back the volume of what it deems to be toxic replies sloshing around the social media platform. Specifically, starting May 5 on the Twitter iOS app and shortly thereafter on the Android app, English-language users may be shown “improved prompts” asking them to rethink their typed-but-not-yet-sent replies in a new — and presumably more nuanced — set of circumstances. 

Wednesday’s announcement signals an evolution of an experiment first announced in May 2020. Distinct from, but related in spirit to, Twitter’s “humanization prompts” test, the experiment rested on the idea, as Twitter explained at the time, that people sometimes benefit from taking a deep breath before tweeting. 

“When things get heated, you may say things you don’t mean,” explained the company at the time. “To let you rethink a reply, we’re running a limited experiment on iOS with a prompt that gives you the option to revise your reply before it’s published if it uses language that could be harmful.”

Notably, prompted users could still tweet whatever nonsense they wanted — they just had to deal with an additional step thrown in the mix by Twitter first.  

At the time, the system was called out by some for being perhaps a bit too blunt in its deployment of gentle scolding.

Now, Twitter says it’s learned from those early days. 

“In early tests, people were sometimes prompted unnecessarily because the algorithms powering the prompts struggled to capture the nuance in many conversations and often didn’t differentiate between potentially offensive language, sarcasm, and friendly banter,” read the press release in part. 

Be nicer. (Image: Twitter)

Or not. (Image: Twitter)

As for what Wednesday’s announcement means in practice? Well, a few things. 

Twitter says the updated system now takes into consideration the relationship between the person writing the reply and the account at which it’s directed. In other words, replies between two accounts that have long exchanged friendly missives might be treated differently than a first-time reply directed at an account the user doesn’t follow.  

The company also claims its systems can now more accurately detect profanity, and can distinguish — at least to some extent — context. Twitter, for example, lists “Adjustments to our technology to better account for situations in which language may be reclaimed by underrepresented communities and used in non-harmful ways” as one of the ways in which its prompts system has been improved since the initial rollout of the test last year. 

And while this all sounds a bit Sisyphean, Twitter insists its past prompting efforts have actually shown tangible results. 

“If prompted, 34% of people revised their initial reply or decided to not send their reply at all,” claims the company’s press release. “After being prompted once, people composed, on average, 11% fewer offensive replies in the future.”

Twitter, in other words, says these prompts work. Whether or not its oft-harassed users will agree is another thing altogether. 
