Mozilla wants to understand your weird YouTube recommendations

From cute cat videos to sourdough bread recipes: sometimes, it feels like the algorithm behind YouTube's "Up Next" section knows the user better than the user knows themselves.

Often, that same algorithm leads the viewer down a rabbit hole. How many times have you spent hours clicking through suggested videos, each time promising yourself that this one would be the last?

The scenario gets thorny when the system somehow steers the user towards conspiracy theory videos and other forms of extreme content, as some have complained.

SEE: Managing AI and ML in the enterprise 2020: Tech leaders increase project development and implementation (TechRepublic Premium)

To get an idea of how often this happens, and how, the non-profit Mozilla Foundation has launched a new browser extension that lets users take action when YouTube recommends videos they then wish they hadn't watched.

Dubbed the RegretsReporter extension, it provides a tool to report what Mozilla calls "YouTube Regrets" – the one recommended video that derails a viewing session and leads the viewer down a bizarre path.

Mozilla has been collecting examples of users' YouTube Regrets for a year now, in an attempt to shed light on the consequences that the platform's recommendation algorithm can have. 

YouTube's recommendation AI is one of the most powerful curators on the internet, according to Mozilla. YouTube is the second most visited website in the world, and its AI-enabled recommendation engine drives 70% of total viewing time on the site. "It's no exaggeration to say that YouTube significantly shapes the public's awareness and understanding of key issues across the globe," Mozilla said. And yet, the organization noted, people have for years raised the alarm about YouTube recommending conspiracy theories, misinformation, and other harmful content.

Mozilla fellow Guillaume Chaslot was among the first people to draw attention to the issue. The software engineer's research during the 2016 presidential election in the US concluded that YouTube's algorithm was effectively pushing users to watch ever-more radical videos. This prompted him to create AlgoTransparency, a website that attempts to find out which videos are most likely to be promoted on YouTube when fed certain terms.

"We'll be able to put findings from both the RegretsReporter and AlgoTransparency in the same space, so they complement each other," Chaslot tells ZDNet. "They are not perfect tools, but they will give some degree of transparency."

With the 2020 US election around the corner, and conspiracy theories surrounding the COVID-19 pandemic proliferating, Mozilla hopes that the RegretsReporter extension will provide data that builds a better understanding of YouTube's recommendation algorithm.

"We're recruiting YouTube users to become YouTube watchdogs," said Mozilla's VP of engagement and advocacy in a blog post announcing the new tool. The idea is to help uncover information about the type of recommended videos that lead to racist, violent or conspirational content, and to spot patterns in YouTube usage that might lead to harmful content being recommended.

Users can report a YouTube Regret via RegretsReporter and explain how they arrived at the video. The extension will also send data about YouTube browsing time, to help estimate how often viewers are directed to inappropriate content.
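To make that concrete, here is a minimal, hypothetical sketch in TypeScript of what such a report payload could look like. The type names, fields and example values are illustrative assumptions only; they are not taken from the real RegretsReporter extension, whose data model the article does not describe.

```typescript
// Hypothetical sketch of the kind of data a "regret" report might bundle:
// the flagged video, the chain of recommendations that led to it, a
// free-text explanation, and a coarse browsing-time signal.

interface WatchedVideo {
  videoId: string;                                   // YouTube video ID
  title: string;
  reachedVia: "search" | "recommendation" | "direct"; // how the viewer got there
  watchSeconds: number;                              // time spent on the video
}

interface RegretReport {
  regrettedVideo: WatchedVideo;        // the video the user wishes they hadn't watched
  recommendationTrail: WatchedVideo[]; // videos leading up to it, oldest first
  userComment: string;                 // "how I got here" explanation
  totalSessionSeconds: number;         // rough measure of browsing time
  reportedAt: string;                  // ISO 8601 timestamp
}

// Build a report from an in-memory log of the viewing session.
function buildReport(
  trail: WatchedVideo[],
  regretted: WatchedVideo,
  userComment: string
): RegretReport {
  const totalSessionSeconds = [...trail, regretted].reduce(
    (sum, v) => sum + v.watchSeconds,
    0
  );
  return {
    regrettedVideo: regretted,
    recommendationTrail: trail,
    userComment,
    totalSessionSeconds,
    reportedAt: new Date().toISOString(),
  };
}

// Example: two innocuous videos followed by one the viewer regrets.
const report = buildReport(
  [
    { videoId: "abc123", title: "Cute cat compilation", reachedVia: "search", watchSeconds: 240 },
    { videoId: "def456", title: "Sourdough starter basics", reachedVia: "recommendation", watchSeconds: 600 },
  ],
  { videoId: "xyz789", title: "They don't want you to know this", reachedVia: "recommendation", watchSeconds: 900 },
  "Started from a baking video and ended up three clicks deep in conspiracy content."
);

console.log(JSON.stringify(report, null, 2));
```

Keeping the recommendation trail ordered from oldest to newest is one plausible way to let researchers spot the point at which a session drifted towards harmful content.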

YouTube has already acknowledged issues with its recommendation algorithm in the past. The platform is able to delete videos that violate its policies, but problems arise when it comes to managing so-called "borderline" content: videos that brush up against YouTube's policies, but don't quite cross the line. 

Last year, YouTube promised changes: "We'll begin reducing recommendations of borderline content and content that could misinform users in harmful ways – such as videos promoting a phony miracle cure for a serious illness, claiming the earth is flat, or making blatantly false claims about historic events like 9/11," said the company.

As part of the effort, YouTube launched more than 30 policy changes to reduce recommendations of borderline content. For example, the company is working with external evaluators to assess the quality of videos, and to avoid recommending, or effectively giving free promotion to, content that spreads harmful misinformation.

According to the platform, those updates to the system have shown a 70% average drop in watch time for videos deemed borderline.

Chaslot is skeptical. "The algorithm is still the same," he says. "It's just the type of content that is considered harmful that changed. We still have no transparency on what the algorithm is actually doing. So this is still a problem – we have no idea what gets recommended."

In other words, how borderline content spreads on YouTube is still a mystery, and part of the answer lies in the inner workings of the company's recommendation algorithm – which YouTube is keeping a closely guarded secret.

For the past few years, the Mozilla Foundation has asked YouTube to open up its recommendation algorithm to public scrutiny, without success.

The organization has called for YouTube to provide independent researchers with access to meaningful data, such as the number of times a video is recommended, the number of views that result from recommendations, or the number of shares. Mozilla has also asked the platform to build simulation tools for researchers, allowing them to mimic user pathways through the recommendation algorithm.

Those requests were not met. Now, with RegretsReporter, Mozilla seems to have decided that if YouTube won't share the data, it will be collected directly from YouTube's users.

SEE: New map reveals how much every country's top YouTuber earns

Of course, RegretsReporter has limitations: there is no way to prevent users from actively seeking out harmful videos to skew the data, for example. Nor is it possible to get insights from people who are unaware of the recommendation algorithm's impact in the first place.

Until YouTube releases relevant data, however, there aren't many other ways to understand the platform's recommendation algorithm based on real users' experiences. For Chaslot, this is why legislation should be drawn up to force transparency on companies that use this type of technology.

"YouTube is used by a lot of kids and teenagers who are completely unaware of these problems," says Chaslot. "It's okay that YouTube promote what they want, but viewers should at least know exactly what the algorithm is doing."

Mozilla will be sharing findings from the research publicly, and is encouraging researchers, journalists and policymakers to use the information to improve future products.

A YouTube spokesperson said: "The goal of our recommendation system is to connect users with content they love and on any given day, more than 200 million videos are recommended on the homepage alone. We update our recommendations systems on an ongoing basis, to improve the experience for users."



