Since the 2016 Brexit referendum and the US presidential election, there has been an explosion of books and articles seeking to explain the decline of liberal democracy. One common explanation centers on social media algorithms. The priorities of the social media behemoths are shaped not by a sense of collective civic responsibility or a regard for democratic principles but by an insatiable appetite for financial growth and market domination. The platforms’ astronomical revenues rest on a business model that Shoshana Zuboff likens to a manufacturing process. Surveillance capitalism, she writes, “unilaterally claims human experience as free raw material for translation into behavioral data [which] are declared as a proprietary behavioral surplus, fed into advanced manufacturing processes known as ‘machine intelligence’, and fabricated into prediction products that anticipate what you will do now, soon, and later.” To extract behavioral surplus, user attention must remain fixed on content, and nothing holds attention like fear and loathing. Users are thus drawn progressively deeper into a digital world of conspiracy theories and hate.

Livingston argues that this explanation of democratic decline is accurate, but only up to a point: it overemphasizes technology as a sole causal factor. Much of the “computational propaganda” literature to appear since 2016 is ahistorical and inattentive to alternative explanations, failing to consider the social, political, and economic context that shapes the reach of algorithmically amplified extremist content. Yet policymakers and lawmakers in the United States and Europe tend to focus on platform regulation, on the technologies themselves, when searching for solutions. The concern is that by diagnosing the problem in purely technical terms, the resulting solutions will fall short.
Furthermore, “content regulation” by Western democracies might offer more authoritarian states a way to justify far more draconian regulatory controls.