
Tuesday, August 10, 2021

Opinion | Social media algorithms determine what we see. But we can't see them. - The Washington Post

Parents, professors and plenty of politicians disapprove of the content that YouTube serves up to its billions of users every day. Who else disapproves, according to a study published in July? YouTube itself.

A crowdsourced report by the Mozilla Foundation catalogues content on the platform that volunteer users considered “regrettable.” The takeaways: Seventy-one percent of the videos flagged were recommended to the users by the platform’s algorithmic system, and some of those videos violated YouTube’s own policies or came close to violating them. Sometimes, the troubling material wasn’t even related to the previous videos a user was watching — of special concern amid anecdotes of viewers following these recommendations all the way down the so-called YouTube rabbit hole of radicalization.

YouTube, unsurprisingly, takes issue with these findings. A second study released in August looked at viewership trends and found no evidence that recommendations were driving users to ever more radical content. Instead, people seemed mostly to have found their way to far-right videos from far-right websites they already frequented. And the term “regrettable,” on which the study relies, is fuzzy: One person’s regret is another’s niche interest, and while the researchers point to misinformation, racism and a sexualized “Toy Story” parody, YouTube itself notes that the flagged material also included videos as innocuous as DIY crafts and pottery-making tutorials. YouTube also argues that the paper’s determinations about rule violations are based only on the researchers’ interpretation of its rules, rather than the company’s.

So who’s right? The inability to answer that question is at the core of the problem. YouTube boasts that its efforts to reduce the recommendation of “borderline content” have resulted in a 70 percent decrease in watch-time of those videos that skirt the terms of service — an implicit acknowledgment that the engagement incentives of the recommender algorithm clash with the safety incentives of the content moderation algorithm that seeks to stamp out harmful material before users see it. What exactly borderline content is, however, remains unclear to the general public, as well as to those researchers who decided, to the platform’s consternation, to guess. The lack of transparency surrounding what the algorithm does recommend, to whom it recommends it and why also means that surveys like this report are one of the few ways even to attempt to understand the workings of a powerful tool of influence.
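To make that tension concrete, here is a minimal, purely illustrative sketch in Python. It is not YouTube's actual system; the video names, the predicted-watch-time scores, the "borderline" scores and the demotion threshold are all hypothetical. It simply shows how a ranker that maximizes engagement and a moderation step that demotes borderline material can order the same catalog very differently.

# Toy sketch of the editorial's point: an engagement-maximizing ranker
# versus a safety step that demotes "borderline" videos. All names,
# scores and thresholds below are invented for illustration.

from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_minutes: float  # the engagement signal the ranker maximizes
    borderline_score: float         # 0.0 = clearly fine, 1.0 = clearly violating

def rank_by_engagement(videos):
    # Pure engagement ranking: surfaces whatever keeps people watching longest.
    return sorted(videos, key=lambda v: v.predicted_watch_minutes, reverse=True)

def rank_with_safety_demotion(videos, threshold=0.7, penalty=0.1):
    # Same ranking, but sharply down-weights videos flagged as borderline.
    def score(v):
        if v.borderline_score >= threshold:
            return v.predicted_watch_minutes * penalty
        return v.predicted_watch_minutes
    return sorted(videos, key=score, reverse=True)

catalog = [
    Video("Pottery tutorial", 12.0, 0.05),
    Video("Outrage-bait conspiracy clip", 30.0, 0.85),
    Video("DIY crafts video", 9.0, 0.02),
]

print([v.title for v in rank_by_engagement(catalog)])         # conspiracy clip ranked first
print([v.title for v in rank_with_safety_demotion(catalog)])  # conspiracy clip ranked last

The two rankings disagree precisely on the video that is both the most engaging and the most borderline, which is the conflict of incentives the editorial describes.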

Lawmakers already are considering regulations to prompt platforms to open up the black boxes of their algorithms to outside scrutiny — or at least to provide aggregated data sets about the outcomes those algorithms produce. These latest studies, however, drive home a critical truth: Users themselves deserve to understand better how platforms curate their personal libraries of information, and they deserve more control to curate for themselves.




"Opinion" - Google News
August 10, 2021 at 04:41AM
https://ift.tt/3yAiLPR

