Twitter has launched a new initiative called “Responsible ML” that will investigate the harms caused by the platform’s algorithms.
The company said on Wednesday that it will use the findings to improve the experience on Twitter:
This may result in changing our product, such as removing an algorithm and giving people more control over the images they Tweet, or in new standards into how we design and build policies when they have an outsized impact on one particular community.
The move comes amid mounting concerns around social media algorithms amplifying biases and spreading conspiracy theories.
A recent example of this on Twitter involved an image cropping algorithm that automatically prioritized white faces over Black ones.
Trying a horrible experiment…
Which will the Twitter algorithm pick: Mitch McConnell or Barack Obama? pic.twitter.com/bR1GRyCkia
— Tony “Abolish ICE” Arcieri 🦀 (@bascule) September 19, 2020
Twitter said the image-cropping algorithm will be analyzed by the Responsible ML team.
They’ll also conduct a fairness assessment of Twitter’s timeline recommendations across racial subgroups, and study content recommendations for different political ideologies in seven countries.
Cautious optimism
Tech firms are often accused of using responsible AI initiatives to divert criticism and regulatory intervention. But Twitter’s new project has attracted praise from AI ethicists.
Margaret Mitchell, who co-led Google’s ethical AI team before her controversial firing in February, commended the initiative’s approach.
Cool ideas here, unique to the Twitter approach:
-community-driven ML, agency and choice
-studying effects over time
-in-depth assessment of harms
-public feedback
Excited about where this work could head. Congrats to @ruchowdh @quicola @williams_jutta! https://t.co/45dUMvlsXn
— MMitchell (@mmitchell_ai) April 14, 2021
Twitter’s recent hiring of Rumman Chowdhury has also given the project some credibility.
Chowdhury, a world-renowned expert in AI ethics, was appointed director of ML Ethics, Transparency & Accountability (META) at Twitter in February.
In a blog post, she said Twitter will share the learnings and best practices from the initiative:
This may come in the form of peer-reviewed research, data-insights, high-level descriptions of our findings or approaches, and even some of our unsuccessful attempts to address these emerging challenges. We’ll continue to work closely with third party academic researchers to identify ways we can improve our work and encourage their feedback.
She added that her team is building explainable ML solutions to show how the algorithms work. They’re also exploring ways to give users more control over how ML shapes their experience.
Not all the work will translate into product changes, but it will hopefully at least provide some transparency into how Twitter’s algorithms work.