The terror was far more contagious than the virus itself, and it had the perfect network through which to propagate — a digital ecosystem built to spread fear far and wide.
This is another part of the negative social media equation, alongside unscrupulous data gathering, that leads to the hostile climate we’re currently dealing with.
Outrage leads to engagement, engagement leads to strong algorithmic performance, which leads to greater visibility of outrageous content.
The platforms — most notably Facebook — want to get that content in front of the people most likely to engage with it, so the content gets thrown into the echo chamber of a particular user segment. Then the outrage grows and grows and grows until it reaches fever pitch.
Platform critics see this behaviour as evidence of a broken system.
Apologists see this behaviour as evidence of unintended consequences.
Supporters see this behaviour as evidence of society’s flaws. (#notabug)
I’m not sure where I sit. If anything, I think it’s evidence that platforms can’t rely on algorithms alone.
If social networks are going to reap the benefits of acting as media businesses, they need to take a stance and enforce content and moderation policies grounded in a set of core values. Or they need to step back and actually act as platforms, leaving others to build on top of their utilities.
Right now they’re trying to be everything, and absolve themselves of responsibility by using their in-house AI as the neutral third party.
So maybe that’s what I fundamentally disagree with — the idea that you can just leave it to AI and machine learning, and automate the moderation of civil discourse.