Twitter users will soon be able to crop their own image previews and have greater control over how photos appear in the main feed.
This change comes amid criticism that Twitter’s image cropping algorithm is biased in what it chooses to crop out of photos.
Currently, users have no control over how an uploaded photo is previewed in-stream.
Twitter says it’s working on ways to give users more visibility and control over what images will look like when a tweet is published.
How an image is previewed in the tweet composer is not how it will show up in users’ feeds.
Twitter intends to correct that:
“We are prioritizing work to decrease our reliance on ML-based image cropping by giving people more visibility and control over what their images will look like in a Tweet.
Going forward, we are committed to following the ‘what you see is what you get’ principles of design, meaning quite simply: the photo you see in the Tweet composer is what it will look like in the Tweet.”
Why did this suddenly become a priority for Twitter?
Here’s some background on the criticism that led to this change.
Twitter Image Cropping Controversy
Right now, when an image is uploaded to a tweet and published, it gets cropped to 600 pixels by 335 pixels.
This is standard practice regardless of the image’s original dimensions. What’s not standard is which section of an image gets cropped.
Cropping is done algorithmically, which means Twitter may crop near the top, bottom, or middle of an image.
Depending on the image’s original size, a considerable amount of detail could be cropped out. This is especially true for images that are taller than they are wide.
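To get a rough sense of how much detail a fixed 600×335 preview window can discard, here’s a minimal sketch. The 600×335 target comes from the article; the center-crop "cover" logic is an assumption for illustration only, since Twitter’s actual crop position is chosen by its ML model.

```python
def visible_fraction(width: int, height: int,
                     target_w: int = 600, target_h: int = 335) -> float:
    """Fraction of an image's area that fits in the preview window
    after scaling the image to cover the target and cropping the rest.

    Assumes a simple "scale to cover, then crop overflow" model
    (similar to CSS object-fit: cover) -- not Twitter's actual algorithm.
    """
    # Scale so the image fully covers the 600x335 window
    scale = max(target_w / width, target_h / height)
    scaled_w, scaled_h = width * scale, height * scale
    # Anything outside the window is cropped away
    return (target_w * target_h) / (scaled_w * scaled_h)

# A tall 1080x1920 portrait photo keeps only ~31% of its area,
# while a 1200x675 landscape photo fits almost entirely.
print(round(visible_fraction(1080, 1920), 2))  # 0.31
print(round(visible_fraction(1200, 675), 2))   # 0.99
```

Under this model, the taller the image relative to its width, the larger the vertical band the algorithm must choose to discard, which is why the choice of *where* to crop matters so much for portrait photos.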
Of course, images are displayed in full when users click on the cropped preview. But image previews in tweets, like links, are not always clicked on.
As it happens, a series of repeatable tests appeared to show a bias in what Twitter’s algorithm prefers to display in an image preview.
To put it simply, Twitter’s image preview algorithm seems to focus on white faces more than black faces.
A number of tweets demonstrating the apparent bias went viral last month.
There were examples with people in stock photos:
Testing this to see if it's real. pic.twitter.com/rINjaNvXaj
— Jef Caine (@JefCaine) September 19, 2020
There were examples with fictional characters:
I wonder if Twitter does this to fictional characters too.
Lenny Carl pic.twitter.com/fmJMWkkYEf
— Jordan Simonovski (@_jsimonovski) September 20, 2020
And there were even examples with dogs:
I tried it with dogs. Let's see. pic.twitter.com/xktmrNPtid
— – M A R K – (@MarkEMarkAU) September 20, 2020
Twitter admits it could have done better when designing its image preview algorithm:
“While our analyses to date haven’t shown racial or gender bias, we recognize that the way we automatically crop photos means there is a potential for harm. We should’ve done a better job of anticipating this possibility when we were first designing and building this product.”
Twitter intends to implement changes to reduce the apparent bias shown in the above examples:
“We’re aware of our responsibility, and want to work towards making it easier for everyone to understand how our systems work. While no system can be completely free of bias, we’ll continue to minimize bias through deliberate and thorough analysis, and share updates as we progress in this space.”
The exact changes Twitter will roll out, and when they’ll be rolled out, are not known at this time.
The company is currently in the process of developing a solution. “There’s lots of work to do,” Twitter says.
That’s likely a better approach than rushing out an update and potentially making the situation worse.
Twitter says it will share additional updates as they become available.