By now, you’ve probably heard that the company formerly known as Facebook has rebranded. It will now be called “Meta Platforms, Inc.” in light of its new focus on some vague “metaverse.” The announcement, however, was overshadowed by the recent controversy surrounding Frances Haugen, a whistleblower from within the company. In a 60 Minutes interview, Haugen said that while she was working for the company, she noticed that “there were conflicts of interest between what was good for the public and what was good for Facebook. And Facebook, over and over again, chose to optimize for its own interests.” To understand this conflict of interest, we first have to understand how Facebook and its business work.
For a long time, there has been an idea of a “free marketplace of ideas,” where any and all ideas can be discussed. In an ideal marketplace, the good ideas thrive, while the bad ones are drowned out and rejected. One of the tenets of social media (and especially Facebook) is to connect people and allow them to share their ideas with anyone in the world freely. Facebook has nearly two billion active users, making it the largest, freest marketplace of ideas in human history. These connections will cause smarter and better ideas to thrive, right?
Well, to put it lightly, no. The fatal flaw in the free marketplace of ideas is that it does not actually select the best, truest ideas, but rather the ones that garner the most engagement. On a website like Facebook, the number of likes that you get—and the success of your idea—is not dependent on how correct your idea is, but instead on how many people engage with it. This means that false information will always succeed in a communication medium that prioritizes engagement because misinformation is inherently engaging. It creates a disagreement over the validity of the idea, which provokes engagement, both from people who believe the message and from those attempting to disprove it.
This is an extremely dangerous trend. Combine that with online echo chambers where dissenting opinions are almost never heard, and the result is communities radicalized by misinformation. And make no mistake: America is currently in a pandemic of misinformation, and I mean that almost literally. Misinformation about the COVID-19 vaccine has caused a level of vaccine hesitancy that has kept America from reaching the herd immunity threshold. People are dying from COVID-19 every day, and a substantial number of those deaths can be attributed to vaccine hesitancy.
Furthermore, misinformation is causing extreme harm to American politics. Anyone can draw a direct line from the danger our democracy faces to the online misinformation that leads people to believe in falsehoods such as QAnon, a baseless conspiracy theory alleging that Satan-worshipping pedophiles rank among powerful American politicians.
However, if the harm done by misinformation is so obvious and pressing, why hasn’t Facebook stopped it? Surely a multi-billion-dollar company with complete control over its own platform could do something, right? Well, until recently, Facebook had the benefit of the doubt: maybe it wasn’t aware of the extent of the misinformation, or maybe preventing it was much harder than we all assumed. But Frances Haugen’s recent public statements have revealed the truth about Facebook’s business model.
You see, Facebook makes its money off engagement. More likes on a post mean more people watching ads, which means more money in Mark Zuckerberg’s pocket. This means that the increased engagement generated by misinformation is actually good for Facebook’s bottom line, which is why Haugen’s statement about Facebook optimizing for its own interests makes sense. Facebook has repeatedly proven over the years that it values its profits more than the world’s welfare. And, in case you think that’s an exaggeration, Facebook has contributed to misinformation-based unrest around the world; consider, for instance, its role in the military coup in Myanmar.
The rebranding of the company as Meta is not just lame and pretentious; it is appalling. It shows the world that instead of fixing its current problems, the company will try to sweep them under the rug and develop a metaverse instead. An expansion of social media into augmented reality will not make the problems of misinformation go away; it will make them much worse.
As any professional experienced with public safety will tell you, making a leap forward that affects a great many people without fixing glaring structural problems is how people die.
Facebook certainly has the power to fix these problems and stop the harm it inflicts. In her interview, Haugen also said that Facebook has algorithms that can regulate false speech on the platform, but that “as soon as the election was over, they turned them back off or they changed the settings back to what they were before to prioritize growth over safety.” So clearly, the solution is for the government to mandate that Facebook regulate speech on its platform.
Now, before I go any further, let me clear up a common misconception about online censorship. Being banned or censored by a social media platform because of your speech is not a violation of your First Amendment rights. The First Amendment restrains only action taken by the United States government; it does not apply to private companies. In practice, a social media platform will allow you to speak on its platform only as long as the ad revenue your speech brings in outweighs the profit lost to any trouble your words cause.
Even the government can constitutionally restrict speech in some cases, such as when the speech in question demonstrably harms the rights of others. Private platforms already draw similar lines: Facebook’s terms of service state that a user “may not use our Products to do or share anything … that infringes or violates someone else’s rights, including their intellectual property rights.”
With that said, it’s time for a paradigm shift on misinformation. Falsehoods, unsubstantiated rumors and conspiracy theories need to be suppressed wherever they appear on the internet. Social media companies need to massively ramp up their efforts to stem the flow of misinformation. It is no longer feasible to wait for these companies to do so of their own volition; instead, the push to promote truth on the internet has to be backed by government regulation. There would certainly be drawbacks; there are many ways in which increased regulation of speech could cause serious problems. But our democracy cannot survive in a political landscape where politicians make decisions on behalf of constituents who no longer base their political beliefs on the truth.