There’s a damning section in this NYT piece about Facebook’s ongoing refusal to deal with misinformation and hate speech.
The company had surveyed users about whether certain posts they had seen were “good for the world” or “bad for the world.” They found that high-reach posts — posts seen by many users — were more likely to be considered “bad for the world,” a finding that some employees said alarmed them.
So the team trained a machine-learning algorithm to predict posts that users would consider “bad for the world” and demote them in news feeds. In early tests, the new algorithm successfully reduced the visibility of objectionable content. But it also lowered the number of times users opened Facebook, an internal metric known as “sessions” that executives monitor closely.
“The results were good except that it led to a decrease in sessions, which motivated us to try a different approach,” according to a summary of the results, which was posted to Facebook’s internal network and reviewed by The Times.
Facebook chose to use a weaker algorithm.
While that left more objectionable posts in users’ feeds, it did not reduce their sessions or time spent.
The problem has never been that Facebook can’t police hate speech and dangerous misinformation. It’s that it won’t. Big Tech is increasingly looking like Big Tobacco: profiting from a product it knows is doing great damage.