Twitter's Scrambling to Figure Out Why Its Photo Preview Algorithm Seems Racist

Photo: Leon Neal (Getty Images)

The neural network Twitter uses to generate photo previews is a mysterious beast. When it debuted the smart cropping tool back in 2018, Twitter said the algorithm determines the most “salient” part of the picture, i.e. what your eyes are drawn to first, to use as a preview image, but what exactly that entails has been the subject of frequent speculation.
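
For a sense of how saliency cropping works in general, here's a minimal sketch using OpenCV's classic spectral-residual saliency detector. To be clear, this is an illustrative stand-in for the idea, not Twitter's actual neural network, and the crop dimensions are arbitrary:

```python
# Illustrative sketch of saliency-based cropping using OpenCV's
# spectral-residual detector (requires opencv-contrib-python).
# This is a stand-in heuristic, NOT Twitter's actual model.
import cv2
import numpy as np

def saliency_crop(image_path, crop_h=256, crop_w=512):
    image = cv2.imread(image_path)
    saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, sal_map = saliency.computeSaliency(image)  # per-pixel scores in [0, 1]
    if not ok:
        raise RuntimeError("saliency computation failed")
    # Center the crop window on the single most salient pixel,
    # clamping so the window stays inside the image bounds.
    y, x = np.unravel_index(np.argmax(sal_map), sal_map.shape)
    h, w = image.shape[:2]
    top = min(max(y - crop_h // 2, 0), max(h - crop_h, 0))
    left = min(max(x - crop_w // 2, 0), max(w - crop_w, 0))
    return image[top:top + crop_h, left:left + crop_w]
```

Twitter's version replaces a hand-built heuristic like this with a neural network that predicts saliency directly, which is exactly why its behavior is so hard to reason about from the outside.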

Faces are an obvious answer, of course, but what about smiling versus non-smiling faces? Or dimly lit versus brightly lit faces? I’ve seen plenty of informal experiments on my timeline where people try to figure out Twitter’s secret sauce, and some have even leveraged the algorithm into an unwitting system for delivering punchlines. But the latest viral experiment exposes a very real problem: Twitter’s auto-crop tool appears to favor white faces over Black faces far too frequently.

Several Twitter users demonstrated as much over the weekend with images containing both a white person’s face and a Black person’s face. White faces showed up far more often as previews, even when the pictures were controlled for size, background color, and other variables that could possibly be influencing the algorithm. One particularly viral Twitter thread used a picture of former President Barack Obama and Sen. Mitch McConnell (already the subject of plenty of bad press for his callous response to the death of Justice Ruth Bader Ginsburg) as an example. When the two were shown together in the same image, Twitter’s algorithm showed a preview of that dopey turtle grin time and time again, effectively saying that McConnell was the most “salient” part of the picture.

(Click the embedded tweet below and click on his face to see what I mean).

The trend began after a user tried to tweet about a problem with Zoom’s face-detecting algorithm on Friday. Zoom’s systems weren’t detecting his Black colleague’s head, and when he uploaded screenshots of the issue to Twitter, he found that Twitter’s auto-cropping tool also defaulted to his face rather than his coworker’s in preview images.

This issue was apparently news to Twitter as well. In a response to the Zoom thread, chief design officer Dantley Davis conducted some informal experiments of his own on Friday with mixed results, tweeting, “I’m as irritated about this as everyone else.” The platform’s chief technology officer, Parag Agrawal, also addressed the issue via tweet, adding that, while Twitter’s algorithm was tested, it still needed “continuous improvement” and he was “eager to learn” from users’ rigorous testing.

“Our team did test for bias before shipping the model and did not find evidence of racial or gender bias in our testing. But it’s clear from these examples that we’ve got more analysis to do,” Twitter spokesperson Liz Kelley told Gizmodo. “We’ll open source our work so others can review and replicate.”

When reached by email, she could not comment on a timeline for Twitter’s planned review. On Sunday, Kelley also tweeted about the issue, thanking users who brought it to Twitter’s attention.

Vinay Prabhu, a chief scientist with Carnegie Mellon University, also conducted an independent analysis of Twitter’s auto-cropping tendencies and tweeted his findings on Sunday. You can read more about his methodology here, but in essence he tested the theory by tweeting a series of pictures from the Chicago Faces Database, a public repository of standardized photos of male and female faces that were controlled for several factors, including face position, lighting, and expression.

Surprisingly, the experiment showed Twitter’s algorithm slightly favored darker skin in its previews, cropping to Black faces in 52 of the 92 images he posted. Of course, given the sheer volume of evidence to the contrary found through more informal experiments, Twitter obviously still has some tweaking to do with its auto-crop tool. However, Prabhu’s findings should prove useful in helping Twitter’s team isolate the problem.
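
For scale, a simple two-sided binomial test (my own back-of-envelope check, not part of Prabhu’s analysis) suggests that 52 out of 92 is statistically indistinguishable from a coin flip:

```python
# Back-of-envelope check: is cropping to the darker face in 52 of 92
# trials distinguishable from 50/50 chance? Illustrative only; this
# is not Prabhu's own analysis.
from scipy.stats import binomtest

result = binomtest(52, n=92, p=0.5)
print(f"two-sided p-value: {result.pvalue:.2f}")  # roughly 0.25 -- consistent with chance
```

In other words, 92 images is too small a sample to read much into a 52–40 split either way, which is all the more reason for the open-sourced review Twitter has promised.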

It should be noted that when it comes to machine learning and AI, predictive algorithms don’t have to be explicitly designed to be racist to be racist. Facial recognition tech has a long and frustrating history of unexpected racial bias, and commercial facial recognition software has repeatedly proven that it’s less accurate on people with darker skin. That’s because no system exists in a vacuum. Intentionally or unintentionally, technology reflects the biases of whoever builds it, so much so that experts have a term for the phenomenon: algorithmic bias.

Which is precisely why facial recognition tech needs to undergo further vetting before institutions dealing with civil rights issues on a daily basis incorporate it into their arsenals. Mountains of evidence show that it disproportionately discriminates against people of color. Granted, Twitter’s biased auto-cropping is a pretty innocuous issue (that should still be swiftly addressed, don’t get me wrong). What has civil rights advocates justifiably worried is when a cop relies on an AI to track down a suspect or a hospital uses an automated system to triage patients—that’s when algorithmic bias could potentially result in a life-or-death decision.
