What just happened? A graduate student at EPFL's security and privacy engineering lab this week submitted the results of a study suggesting that when Twitter crops images on its website or official app, it favors human faces that are slimmer, younger, lighter-skinned, and more stereotypically feminine. Twitter awarded the student the grand prize in its algorithmic bias bounty challenge. This comes after a similar bias in Twitter's algorithm was reported last year.

The study, from Bogdan Kulynych (via The Guardian), tested artificially generated human faces against Twitter's image crop algorithm. The artificial faces could be subtly altered along individual dimensions such as skin tone, age, or weight while other features were held constant. It found that the algorithm preferred lighter skin tones in 37 percent of cases, what the study calls "stereotypically feminine features" about a quarter of the time, and younger-looking or slimmer faces 18 percent of the time. Readers can look at the full methodology and results on GitHub.
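At a high level, that kind of test compares each edited face with its unedited counterpart under the crop model's scoring and counts how often the edit wins. The snippet below is a simplified sketch of such a pairwise comparison in Python; the `saliency_score` function and the image paths are placeholders for illustration, not part of the published code.

```python
from typing import Callable, Iterable, Tuple

def preference_rate(pairs: Iterable[Tuple[str, str]],
                    saliency_score: Callable[[str], float]) -> float:
    """Fraction of (original, edited) image pairs in which the edited face
    receives a strictly higher saliency score than the original.

    `saliency_score` stands in for the crop model's scoring function and is
    a placeholder, as are the image paths passed in `pairs`.
    """
    wins = 0
    total = 0
    for original, edited in pairs:
        total += 1
        if saliency_score(edited) > saliency_score(original):
            wins += 1
    return wins / total if total else 0.0

# Hypothetical usage: a result of 0.37 would mean the edited version was
# preferred in 37 percent of cases.
# rate = preference_rate([("face_01.png", "face_01_lighter.png")], model_score)
```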

Sidenote – Why does Twitter crop photos?

The popular social network lets users upload photos, and according to Twitter, millions are shared every day in all sorts of shapes and sizes. To keep the browsing experience consistent, photos in the Twitter timeline are cropped so that more tweets can be seen at a glance. This debate is about how the algorithm decides which parts of an image get priority when cropping, and whether that decision carries bias.
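For context, saliency-based cropping of this kind generally works by scoring each region of an image for visual interest and then centering a fixed-size window on the highest-scoring point. The sketch below illustrates that idea in Python with a hypothetical saliency map; it is not Twitter's actual implementation.

```python
import numpy as np

def crop_around_peak(image: np.ndarray, saliency: np.ndarray,
                     crop_h: int, crop_w: int) -> np.ndarray:
    """Crop a fixed-size window centered on the most salient pixel.

    `image` is an H x W x C array and `saliency` an H x W map produced by
    some saliency model (assumed here); the real pipeline is more involved,
    but the core idea is the same.
    """
    h, w = saliency.shape
    peak_y, peak_x = np.unravel_index(np.argmax(saliency), saliency.shape)

    # Clamp the window so it stays inside the image bounds.
    top = min(max(peak_y - crop_h // 2, 0), h - crop_h)
    left = min(max(peak_x - crop_w // 2, 0), w - crop_w)
    return image[top:top + crop_h, left:left + crop_w]
```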

Kulynych admits the study has limitations, however. It couldn't identify every factor the Twitter algorithm might prefer, or which ones it prefers most strongly. The pictures used in the study also had different backgrounds, so it couldn't determine whether those backgrounds were influencing the algorithm's choices. And because the coding was done by a single author, the study acknowledges that the author's own notions of traits like femininity and age factored into how the algorithm's preferences were interpreted.

Twitter held the algorithmic bias bounty challenge from July 30th through August 6th as part of the 2021 Def Con AI challenge. The grand prize awarded to this study was $3,500.

Last year, Twitter came under fire when some users discovered its image crop algorithm appeared to prefer the faces of lighter-skinned people over those of darker-skinned people, which prompted Twitter to look into the issue itself. This May, it published the results of its own study, which showed a preference for women eight percent of the time, white people four percent of the time, white women seven percent of the time, and white men two percent of the time.