Harmful content such as fake news, hate speech and cyberbullying is rampant on various social media platforms. What balance should the companies in question strike between censorship and the promotion of freedom of speech, and what questions underpin this dilemma?
Political Science Student at UCLA
One of the many wonders that social media presents us with is the ability to exercise our free speech in various forms and be heard by a much wider audience than ever before. It is a great time to be alive – when everyone is connected to each other and information is there for you to find within seconds. But this also gives rise to problems. Free speech and easy access to a worldwide platform bring not only constructive criticism and appreciation but also hate, racism, misogyny and abuse of the platform itself.
Companies like Facebook, Twitter and Instagram are struggling to draw the line between speech that is harmful to people and everything else. The main problem lies in the question: how much content should they censor to give users a safe environment where they can connect and share, without limiting their freedom excessively?
This puts a lot of weight on the shoulders of these companies. Facebook, right now, holds great power in deciding what sort of content its more than one billion users see. This power matters most when it comes to fake news. Recently, Facebook received a great deal of backlash for not filtering out fake content, especially during the American election. The problem is that fake information – or “other types of truths” – will always be out there, and it is not necessarily Mark Zuckerberg’s responsibility to monitor this content. As Zuckerberg has stated:
“The problems here are complex, both technically and philosophically. We believe in giving people a voice, which means erring on the side of letting people share what they want whenever possible. We need to be careful not to discourage sharing of opinions or to mistakenly restrict accurate content. We do not want to be arbiters of truth ourselves.”
Instagram, on the other hand, tired of hate speech in its comments, has recently introduced mechanisms to tackle this problem: automated systems that detect malicious words. Yet, as critics have pointed out, this may result in Instagram seeming “too polished and controlled”, or in its algorithms getting it wrong and censoring playful teasing or political discussion. Machines are not humans; they must be programmed to work in a certain way, and so it remains very difficult for them to differentiate between what is humorous and what is harmful.
This critique highlights the real problem with censorship: ultimately, it makes everyone the same. When artificial systems filter out our political opinions and our peculiar sarcastic humor, we lose the ability to express ourselves on a platform that we could previously use almost completely freely. It also allows only one sort of opinion to be heard. By eliminating some content, and deciding for themselves what that content should be, the companies betray the very thing they claim to stand for: freedom of speech.
Consider John Stuart Mill’s harm principle. He states that as long as people do not hurt others or limit their freedom, they are free to exercise their liberties (although he stresses physical rather than psychological harm). The problem is that a man who believes women should not have equal rights thinks he is just as right as the man who thinks women and men are equals. Each holds his own truth, although the first obviously harms women’s liberties while the latter does not. It therefore seems sensible to limit the first man’s liberty to speak freely on this subject. Yet Mill also insists that, no matter what, freedom of speech should never be limited. He believes that the only way societies progress is when people communicate, argue and debate their beliefs, opinions and truths – because truths can always be revised.
Zuckerberg was right when he argued that the problem is complex not only technically but also philosophically. Other people do not always share our mindset or our opinions. No matter which end of the spectrum people’s thoughts fall on, if we do not allow genuine communication, connection and freedom, then we risk setting a single standard for everyone and expecting – and creating – submission to that standard. That possibility sounds an awful lot like a dystopia.