Facebook: fighting fascism is bad for business

There’s a damning section in this NYT piece about Facebook’s ongoing refusal to deal with misinformation and hate speech.

The company had surveyed users about whether certain posts they had seen were “good for the world” or “bad for the world.” They found that high-reach posts — posts seen by many users — were more likely to be considered “bad for the world,” a finding that some employees said alarmed them.

So the team trained a machine-learning algorithm to predict posts that users would consider “bad for the world” and demote them in news feeds. In early tests, the new algorithm successfully reduced the visibility of objectionable content. But it also lowered the number of times users opened Facebook, an internal metric known as “sessions” that executives monitor closely.

“The results were good except that it led to a decrease in sessions, which motivated us to try a different approach,” according to a summary of the results, which was posted to Facebook’s internal network and reviewed by The Times.

Facebook chose to use a weaker algorithm. While that left more objectionable posts in users’ feeds, it did not reduce their sessions or time spent.
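
To make the trade-off concrete, here’s a minimal sketch of what that kind of demotion looks like. Every name in it (`Post`, `bftw_probability`, `DEMOTION_STRENGTH`) is invented for illustration — this is not Facebook’s actual code, just the general shape the NYT describes: a classifier scores each post for “bad for the world,” the ranker scales the post’s score down accordingly, and the single knob controlling how hard it scales is exactly what gets turned down when sessions dip.

```python
from dataclasses import dataclass

# How aggressively to demote. The article implies Facebook shipped a
# "weaker" version, i.e. a smaller value than the one that hurt sessions.
DEMOTION_STRENGTH = 0.5

@dataclass
class Post:
    post_id: str
    engagement_score: float   # the ranker's usual relevance/engagement score
    bftw_probability: float   # classifier's P("bad for the world"), 0.0 to 1.0

def demoted_score(post: Post) -> float:
    """Scale the ranking score down in proportion to the classifier's
    confidence that the post is 'bad for the world'."""
    return post.engagement_score * (1.0 - DEMOTION_STRENGTH * post.bftw_probability)

def rank_feed(posts: list[Post]) -> list[Post]:
    """Order the feed by demoted score, highest first."""
    return sorted(posts, key=demoted_score, reverse=True)

if __name__ == "__main__":
    feed = [
        Post("benign", engagement_score=0.7, bftw_probability=0.05),
        Post("viral-and-toxic", engagement_score=0.9, bftw_probability=0.8),
    ]
    for p in rank_feed(feed):
        print(p.post_id, round(demoted_score(p), 3))
```

At full strength the toxic viral post (raw score 0.9) drops below the benign one (0.7); dial `DEMOTION_STRENGTH` down far enough and it climbs back to the top. That one parameter is the whole story of the internal debate.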

The problem has never been that Facebook can’t police hate speech and dangerous misinformation. It’s that it won’t. Big tech is increasingly looking like Big Tobacco, profiting from a product it knows is doing great damage.
