Okay, we've had an interesting discussion on my previous post on this subject, so now I'd like to finish up my thoughts. The man of La Mancha linked to a NYT article in the comments, which I do recommend, but since it's paywalled I'll give you this discussion by Dan Patterson at CBS News for those who need a link. It's not as cogent but it makes some good points.
There have always been people seduced by cults with bizarre beliefs, even with fairly primitive technology. A popular brand in the U.S. has been the End Times, from the Millerites to Harold Camping, who worked mostly through radio. Lyndon LaRouche managed to make a lot of people insane, including some smart college students, with nothing more than a cheaply produced newspaper. Many of these cults have been harmful only to their own followers, although there are exceptions such as the anti-vaxxers.
By consensus, our free speech regime seemed important enough that it was necessary to tolerate this amount of disinformation. The ACLU even defended Nazis. But the Internet and so-called "social media" have created a fundamentally different situation. Consumption of newspapers, radio and television is discretionary: the content is available, and you pick what to consume. The result is that most of us get exposed to a range of information sources, ideas and purported facts, and we can make reasonable judgments. I'm aware of the QAnon cult, but I have plenty of information from other sources to know that it's crazy, as is the Jewish space laser hypothesis.
The problem now is that people's consumption of information on these "social media" platforms is controlled by conscienceless algorithms that are scientifically designed to commandeer attention. Just as the processed food manufacturers have learned how to use fat, salt and sugar to addict eaters, the tech companies have learned how to use outrage, fear, novelty and bias to suck people down tunnels of falsehood, anger and hatred. They do this to make a profit, and it works very well.
So well, in fact, that one of the two major parties in the U.S. has been captured by lunatics. As I said before, I don't have a pat answer for this. But regulating the algorithms that manipulate the nature of the information people see does not seem to me a violation of freedom of speech. The crazytown stuff is still there, and you can still find it; it just isn't being pushed selectively at people who are vulnerable to it. We can't have a functioning society, let alone a democracy, if people don't for the most part inhabit the same universe.
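To make concrete what "regulating the algorithm" means, here's a toy sketch in Python. Every field name and weight here is hypothetical, not any platform's actual code. The same posts exist in both cases; the only difference is the ordering function that decides what gets pushed at you:

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime
    shares: int = 0             # hypothetical engagement signals
    outrage_score: float = 0.0  # e.g., inferred from angry reactions

def engagement_feed(posts):
    # What the platforms do now: rank by predicted engagement,
    # so outrage and virality float to the top.
    return sorted(posts, key=lambda p: 2.0 * p.outrage_score + p.shares,
                  reverse=True)

def chronological_feed(posts, subscriptions):
    # One regulatory alternative: show only sources the user
    # explicitly chose, newest first. Nothing is removed;
    # nothing is pushed.
    chosen = [p for p in posts if p.author in subscriptions]
    return sorted(chosen, key=lambda p: p.posted_at, reverse=True)

Both feeds contain exactly the same speech; only the curation differs, which is why regulating the curation isn't censorship.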
Update: I found this good discussion in, of all places, Harvard Business Review, by Yaël Eisenstat. HBR gives you two free reads. Her recommendation is similar to mine:
And we can discuss how to regulate in an appropriate manner, focusing on requiring transparency and regulatory oversight of the tools such as recommendation engines, targeting tools, and algorithmic amplification rather than the non-starter of regulating actual speech.
By insisting on real transparency around what these recommendation engines are doing, how the curation, amplification, and targeting are happening, we could separate the idea that Facebook shouldn’t be responsible for what a user posts from their responsibility for how their own tools treat that content. I want us to hold the companies accountable not for the fact that someone posts misinformation or extreme rhetoric, but for how their recommendation engines spread it, how their algorithms steer people towards it, and how their tools are used to target people with it.
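To illustrate what that kind of accountability could look like in practice, here's a minimal sketch. The record format is entirely made up, just one way a transparency mandate might be satisfied: a recommendation engine that logs why it amplified each item.

import json
from datetime import datetime, timezone

def recommend_with_audit(posts, user_id, audit_log, top_n=10):
    # Rank by whatever engagement score the platform computes...
    ranked = sorted(posts, key=lambda p: p["predicted_engagement"],
                    reverse=True)[:top_n]
    # ...but record, for each recommendation, what drove the decision.
    # A regulator auditing this log sees how the engine treated the
    # content, separate from any question of who posted it.
    for rank, post in enumerate(ranked, start=1):
        audit_log.write(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user_id,
            "post_id": post["id"],
            "rank": rank,
            "predicted_engagement": post["predicted_engagement"],
        }) + "\n")
    return ranked

The point of the design is Eisenstat's separation: the posts themselves are untouched, but the amplification decision leaves a trail the company can be held to.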
Update 2: Since we're always trying to teach critical thinking here, it appears I need to point out that there is a difference between opinions you don't agree with and actual, literal falsehoods. Start with that.
3 comments:
Okay, if that's the problem then the answer is stupid easy: ban algorithmic feeds. No interest groupings, no "people you should know," no tailored news. It's all there but, as before, you have to go find it. That would kill their business model and crush their market cap, but there are few things I care about less.
A simpler solution than banning algorithmic amplification would be to remove Sec. 230 protection for content amplified by the platform.
That, Daniel, would actually be both more difficult to manage (for them) and vastly less effective. Conspiracy theories are generally protected speech, and American libel law is very narrow. And I'm going further than algorithmic amplification: I would ban curated feeds of literally every kind. If anything appears in your window, it's because you specifically selected it.