Myths and evidence on the algorithms that shape the Internet

Susanna Carta
7 min read · Jun 25, 2021

It has been said that social media algorithms trap us in ‘filter bubbles’ and that these technologies can manipulate our thoughts and threaten democracy. But how much of this is supported by evidence?


Common narratives have attributed many contemporary issues to social media and the algorithms they use to distribute the vast amount of information that circulates online. In his seminal 2011 book The Filter Bubble: What the Internet Is Hiding from You, Eli Pariser suggested that social media algorithms create ‘filter bubbles’ that segregate users into ideological enclaves, preventing them from encountering information that would disprove their existing convictions. Today, this line of criticism has expanded: social media companies have been blamed as drivers of ideological segregation, polarisation, election manipulation, radicalisation, depression, and other social ills. A striking example of this argument is the 2020 Netflix documentary The Social Dilemma, which depicts social media designers as puppet masters who manipulate people’s thoughts and influence their behaviour. These narratives are not confined to the media. In a speech marking the inauguration of the new President of the United States, European Commission President Ursula von der Leyen said that the insurrection of Trump supporters at the US Capitol was ‘what happens when hate speech and fake news spread like wildfire through digital media. They become a danger to democracy.’

These concerns are not unwarranted. Scholars from a wide range of disciplines have been studying the ways social media algorithms restrict the information environment, tailoring recommendations to users’ preferences and facilitating the circulation of disinformation and extremist content. Yet, according to Lee Vinsel, a researcher at Virginia Tech, criticism of emerging technologies is often ‘hype-filled’, conferring an almost magical power on digital media firms. Vinsel suggests that the genre of ‘criticism hype’ — of which The Social Dilemma is a prime example — tends to exaggerate the ability of social media to manipulate our thoughts and behaviours while providing very little evidence for it. This type of narrative can obscure the nuance of social problems. For example, research from the University of Oxford’s Reuters Institute shows that during the coronavirus pandemic, politicians have been the main drivers of misinformation, challenging claims that digital media are the only (or even most important) culprits.

In an essay on social media criticism published in Another Gaze, Rebecca Liu writes: ‘attempts at coherence can lapse into concessions of defeat as we slip from being self-possessed critics of a conceptually identifiable entity to lackeys of omniscient gods.’ How, then, can the real and potential problems of algorithmically curated feeds be understood without conferring all-powerful qualities on digital platforms?

The first step is understanding what algorithms are and how they are used to sort and order information online. But this is not an easy endeavour. Tech companies are notorious for not releasing information about the decision mechanisms of their recommendation algorithms. This is why we might feel that an algorithm knows us when we are presented with eerily accurate recommendations, but we can’t pinpoint how it is able to do so. This lack of understanding about the inner workings of algorithms feeds the narrative that algorithms are alive, even sentient. As explained by Virginia Dignum, associate professor at Delft University of Technology, ‘self-learning algorithms are sometimes called a black box because people don’t fully understand how they make their predictions. I would partly agree with this. The underlying process is simple mathematics — regressions — and we understand these.’

Animation by Boris Borisov

In basic terms, an algorithm is a set of rules to follow in order to solve a problem. Even mundane tasks like doing laundry, baking a cake, or giving directions can be understood as algorithms. A prime example is the Rubik’s Cube. To solve a standard Rubik’s Cube, you start with a six-faced cube whose faces are covered in stickers of different colours, and you perform a set of rotations until each face shows only one colour. Each sequence of moves used to solve part of the cube is effectively an algorithm. A Rubik’s Cube has approximately 43 quintillion possible configurations, but following a defined set of instructions (algorithms) will solve it every time. Recommendation algorithms are used to tackle a similarly hefty problem: choosing which content to show to millions of individual users out of a vast pool of online information.
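To make this concrete, here is one of the oldest recorded algorithms, Euclid’s method for finding the greatest common divisor of two numbers, written as a short Python sketch. Like the cube-solving sequences, it is nothing more than a fixed rule applied over and over until the problem is solved.

```python
def gcd(a: int, b: int) -> int:
    # Euclid's algorithm: keep applying one fixed rule,
    # replacing (a, b) with (b, a mod b), until b reaches zero.
    while b:
        a, b = b, a % b
    return a

print(gcd(48, 36))  # prints 12, and gives the same answer every time
```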

While a Rubik’s Cube has only one solved state, what constitutes a good recommendation depends on the objectives and values held by the designers of the algorithm. The main objective of designers is usually personalisation. Companies using recommender systems want to cater to users’ preferences in order to optimise metrics such as clicks, purchases or time spent on the website, with the aim of maximising certain commercial goals. A recommender system might show you a news article because it is liked by people who enjoy similar pieces to you. Or it might show you news stories based on topics, authors, and sources you have liked in the past. Machine learning algorithms are programmed to use this logic to predict which content matches your tastes and show it to you. The more information is available about you, the more accurate these predictions should be. Yet, on a basic level, machines will keep following the instructions they were designed to follow. As Dignum puts it, ‘[Algorithms] are artefacts. You don’t need to be afraid of hammers either. What you should be concerned about are the people who use hammers to do bad things (…) There’s always a human strategy behind using algorithms.’
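As a rough, purely illustrative sketch of that logic, a content-based recommender can be pictured as ranking items by how much they overlap with a user’s past interests. Everything in the snippet below is invented for illustration; real systems rely on far richer signals and learned models, but the underlying idea of matching content to past behaviour is the same.

```python
# Toy content-based recommender: score each article by how many of its
# topics overlap with topics the user has engaged with before, then
# return the highest-scoring items. All data here is made up.

user_history = {"politics", "climate", "technology"}

articles = [
    {"title": "New climate bill passes",   "topics": {"politics", "climate"}},
    {"title": "Smartphone review roundup", "topics": {"technology"}},
    {"title": "Local bakery wins award",   "topics": {"food", "local"}},
    {"title": "AI and the future of work", "topics": {"technology", "economy"}},
]

def score(article: dict, history: set) -> int:
    # The "personalisation" rule: more overlap with past interests = higher score.
    return len(article["topics"] & history)

recommended = sorted(articles, key=lambda a: score(a, user_history), reverse=True)
for a in recommended[:2]:
    print(a["title"])
```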

Susanna Carta

If recommendation algorithms keep showing us more of what we like, are we being shielded from intellectually challenging information? Existing research on news personalisation suggests that this is mostly not the case. For example, a study of people who used search engines to find news found that they were actually being exposed to diverse and challenging news. Another study found that Facebook use was associated with an increase in counter-attitudinal news exposure, which led to less polarised opinions. It has also been shown that people tend to use different media for news consumption, meaning that their encounters with personalisation algorithms are only a small part of their overall media diet. Researchers have argued that people’s own decisions may play a bigger role than algorithmic recommendations in shaping which news they encounter online. Namely, the people and organisations that we surround ourselves with on social media may be more influential than algorithms in either diversifying or restricting the variety of information we see online.

This doesn’t mean that recommendation algorithms are not potentially harmful. For example, a Wall Street Journal investigation found that the YouTube algorithm led users who had shown a political bias to encounter more niche, extreme political content. While these claims are contested by several studies, researchers have lamented that the lack of transparency about how the YouTube algorithm works limits scholars’ ability to research it. There is also evidence that being shielded from challenging information can lead people to develop more extreme views, or to be less trusting of opposing viewpoints. Yet, because there is little proof that algorithmic personalisation creates filter bubbles, it would be inaccurate to state that societal polarisation happens as a result of social media algorithms.

Overall, evidence suggests that digital media might not be as powerful as often portrayed. ‘Hype involves unrealistic near-future projections around technology — such projections can be positive-utopian or negative-dystopian. Either way you are misleading the public about what is happening in the world and what is likely in the near future,’ says Lee Vinsel, the Virginia Tech researcher. ‘You find people like Shoshana Zuboff and Tristan Harris [talking heads in the Netflix documentary The Social Dilemma, Ed.] claiming that social media companies can control our thoughts and actions, even though there’s very weak evidence that’s true. Among other things, this distracts us from the active roles that communities and groups of like-minded individuals play in sharing information.’

According to Vinsel, it is not that technological change isn’t happening, but ‘it’s not as bright, shiny, or dramatic as boosters claim.’ Evidence on the effects of algorithmic personalisation on our lives or on the state of democratic societies remains scarce, though those effects may grow as the technology advances. Scholars have been calling for more algorithmic transparency, and for a revision of the values that recommendation algorithms are designed to serve. For example, news recommendation algorithms could be explicitly designed to favour exposure to politically cross-cutting news or to give users more autonomy to customise their own recommendations.
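To picture what such a revision could look like, the sketch below re-ranks a hypothetical feed by blending each item’s predicted relevance with a bonus for political perspectives the user has seen less of. The field names and the diversity weight are invented for illustration and do not describe any deployed system.

```python
# Hypothetical re-ranking sketch: blend a relevance score with a bonus for
# political leanings that are under-represented in what the user has
# already seen. All fields and weights are made up for illustration.

from collections import Counter

def rerank(articles, seen_leanings, diversity_weight=0.5):
    counts = Counter(seen_leanings)      # how often each leaning was already shown
    total = sum(counts.values()) or 1

    def adjusted(article):
        share_seen = counts[article["leaning"]] / total
        diversity_bonus = 1.0 - share_seen   # bigger bonus for rarely seen leanings
        return article["relevance"] + diversity_weight * diversity_bonus

    return sorted(articles, key=adjusted, reverse=True)

articles = [
    {"title": "Tax plan analysis",   "leaning": "left",   "relevance": 0.9},
    {"title": "Border policy op-ed", "leaning": "right",  "relevance": 0.7},
    {"title": "Budget explainer",    "leaning": "centre", "relevance": 0.6},
]
# The under-represented "right" perspective is boosted to the top of the feed.
print([a["title"] for a in rerank(articles, seen_leanings=["left", "left", "centre"])])
```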

The algorithms that shape the information we encounter on the Internet are worth monitoring, but as of now they might not be as transformative as critics suggest. In the future, it might be worth focussing on more than just new technologies as the causes of social problems, so as to implement more nuanced, holistic strategies to tackle them.
