Everyone agrees with me, no one is arguing against my beliefs, every news article supports my stance – is this what reality looks like? For an increasing number of internet users, it does. Society rarely agrees, and differing opinions are a prerequisite for a democratic debate culture. However, since the 2016 US presidential election, we've realized that the picture social media gives us is incomplete and that some opinions are even omitted. Whether we like it or not, we live in a filter bubble.

When Donald Trump won the election and became the 45th President of the United States in 2016, many people around the globe were shocked: nothing had indicated that this political outsider would be backed by a majority. At least that's how Trump's opponents saw it. For his supporters, on the other hand, it had seemed clear all along that everyone was against Clinton. Both sides had made their arguments online and expressed their opinions, but the other side hadn't heard them. Each had only read and commented within its own filter bubble.

How is a filter bubble created?

Our society has shifted a large part of everyday life to the internet. For many people, communication and information gathering happen exclusively online. Facebook's news feed acts as a news magazine, Google as an encyclopedia, and messengers like WhatsApp or Skype serve as forums for sharing information with friends, colleagues, and family. We can now find almost everything we want to know online. The platform operators know this too: Google, Facebook, Netflix, and Instagram understand how important their role in society is. They therefore constantly refine their algorithms in the name of user-friendliness: they show us only the information that is supposedly relevant to us.

This is nothing new: popular internet services collect data about user behavior on their platforms and promise to adapt the user experience ever more closely to users' needs – often without users having to do anything. In the past, this data collection has been criticized by many experts, but primarily under the (very important) aspect of data protection. The term 'data leech' describes how comprehensively Google, Facebook, and others collect and analyze users' personal data: How much time does someone spend online? Where do they live? What are their hobbies?

Of course, these companies also use this information for their own purposes: Google and Facebook, for example, earn a large share of their revenue from personalized advertising. But the information is also meant to help tailor offers to each user. In other words, not only the advertising is personalized, but the content as well.

As a result, these services show us only the news, information, and opinions that match our user profile. At first this may seem like a good thing: feeds are no longer stuffed with articles that don't interest you, popular posts are no longer cluttered with comments you don't want to read, and you no longer have to wade through arguments that lead nowhere. But in the long term this creates problems that only come to light when you question the filtering mechanisms of social media.

Criticism of Facebook bubbles and Google bubbles

The concept of the filter bubble goes back to the activist Eli Pariser, who, in his book The Filter Bubble: What the Internet Is Hiding from You, criticizes the extent to which information is personalized on the internet. He observed that different users – depending on their political attitudes, for example – get different results for the same search terms. The Google bubble is not an isolated case, however: other services on the web also use personalization algorithms, and similar bubbles occur on Facebook. The resulting problems are not only individual in nature but also affect society as a whole.

Discourse is important in a functioning democracy: exchanging different perspectives matters not only between politicians from different parties, but across society as a whole. Only then is it possible to see things from different angles and broaden one's own horizons. Those who live in a filter bubble, however, will rarely encounter arguments that go against their own views – they are far more likely to find support. Since many internet users do not yet have a sufficiently critical awareness of how to handle new media (known as 'media literacy'), their perception inside the bubble is projected onto the entire world outside.

Instead of presenting one's own opinion as just one of many, the filter bubble makes it seem as if this opinion is the only one that exists. This explains phenomena such as Trump's surprise victory: within the filter bubble of liberal individuals, there was no sign that enough people shared the Republican candidate's views. Journalists who live in such a filter bubble themselves act as multipliers and spread this preconceived picture through other media.

The formation of filter bubbles contradicts two basic ideas associated with the spread of the internet as a mass medium. First, the internet stands for connecting the most diverse people across the globe – but anyone living within a homogeneous group no longer benefits from this advantage. Second, the internet has been praised as a virtual place where information is freely accessible and cannot be censored. This made it an antithesis to traditional media, whose content is filtered by editorial offices. Now this filtering can be found on the internet as well – except that instead of an editorial team, an algorithm selects what users get to see.

Who’s to blame for the filter bubble?

It is quite easy to blame large corporations and their algorithms: Facebook, Yahoo!, and Google educate their users insufficiently, if at all, about how and why certain information is filtered, and give them no opportunity to change or disable the filtering. In general, however, all users share responsibility for the content they receive. Facebook, for example, shows us less news from users whose links we don't click on. By ignoring reports that contradict our opinion, we signal disinterest in them. The algorithm picks up on this and presents only the information that appears to interest us.
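This feedback loop can be illustrated with a toy scoring function. Real platform ranking systems are proprietary and vastly more complex, so every name and number below is an illustrative assumption, not any platform's actual algorithm:

```python
# Toy sketch of engagement-based feed filtering: topics the user never
# clicks sink in the ranking and eventually vanish from view.
from collections import Counter

class ToyFeed:
    def __init__(self):
        self.clicks = Counter()  # how often the user clicked each topic
        self.shown = Counter()   # how often each topic was shown

    def record_impression(self, topic):
        self.shown[topic] += 1

    def record_click(self, topic):
        self.clicks[topic] += 1

    def affinity(self, topic):
        # Click-through rate as a crude "interest" signal;
        # unseen topics get a neutral prior of 0.5.
        shown = self.shown[topic]
        return self.clicks[topic] / shown if shown else 0.5

    def rank(self, topics):
        # Highest-affinity topics first -- the start of a filter bubble.
        return sorted(topics, key=self.affinity, reverse=True)

feed = ToyFeed()
for _ in range(10):
    feed.record_impression("politics_A")
    feed.record_impression("politics_B")
for _ in range(8):
    feed.record_click("politics_A")  # the user only ever clicks one side

ranked = feed.rank(["politics_B", "politics_A"])
print(ranked)  # politics_A now outranks politics_B
```

After only a handful of one-sided clicks, the ignored topic drops to the bottom; a real feed would then stop showing it altogether.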

Eli Pariser assumes that everyone carries this contradiction within themselves and compares it to healthy and unhealthy food: we know we should eat food that is good for us, yet we happily choose products that satisfy us in the moment. Pariser argues that the solution should be a mixture of the two: information that matches our profile, but also information that challenges us. Algorithms, he suggests, should be structured the same way.
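Pariser's proposed mixture could be sketched as a ranker that reserves a share of each feed for low-affinity items. The 80/20 split and all names below are assumptions made for illustration; Pariser does not prescribe a specific ratio:

```python
# Sketch of a "balanced" ranker in the spirit of Pariser's proposal:
# mostly items matching the user's profile, plus a deliberate share of
# challenging ones. Purely illustrative, not any platform's actual code.

def mixed_feed(items, affinity, size, challenge_share=0.2):
    """Return `size` items: high-affinity picks plus some low-affinity ones."""
    by_interest = sorted(items, key=affinity, reverse=True)
    n_challenge = max(1, int(size * challenge_share))  # always challenge a little
    matching = by_interest[:size - n_challenge]
    # Fill the rest with the least-matching items not already chosen.
    challenging = [i for i in reversed(by_interest) if i not in matching]
    return matching + challenging[:n_challenge]

# Hypothetical affinity scores for five topics:
scores = {"local": 0.9, "sports": 0.8, "culture": 0.7,
          "opposing_party": 0.1, "foreign_policy": 0.05}
feed = mixed_feed(list(scores), scores.get, size=4)
print(feed)  # three familiar topics plus one challenging one
```

Even this trivial version guarantees that at least one item the user would normally never see reaches the feed.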

No single party can take full responsibility for filter bubbles: they seem to be a mix of social and technical phenomena. On the one hand, every person tends to seek confirmation of their own opinion. On the other, the technologies used on the web are designed to make browsing as pleasant as possible – not to create intellectual challenges. The fact is that no single person can possibly see all the messages that appear on the internet every day. There is therefore nothing fundamentally wrong with algorithmic filtering as such, but its excesses must be examined critically.

Echo chambers and fake news: filter bubble excess

Two other terms often appear in connection with the filter bubble: 'echo chamber' and 'fake news'. An echo chamber is literally a room that produces a strong echo. In the figurative sense, it refers to a virtual space in which opinions only intensify and mitigating influences no longer exist. Echo chambers form within a filter bubble because a fed-in opinion (e.g. in the form of a Facebook post) is merely amplified by the echo of the other members of the bubble and is no longer put into perspective by a different point of view.

Among other things, this explains the success of so-called fake news. These supposed factual reports either distort the facts or are entirely made up. Agitators feed such fictional stories into a filter bubble, where they spread unchallenged. The result is a perception of the world determined more by opinions than by facts – one that leads to conflict instead of discussion.

The filter bubble: is it that bad?

Some voices criticize the filter bubble theory when it comes to diversity of opinion. It is questionable how great the influence of a filter bubble really is and to what extent the internet and its algorithms amplify it. According to one study, the majority of Americans in 2016 still got their news from television: 57% named TV as their news source, compared with 38% who named online sources, 25% radio, and 20% print. Only 18% said they got their news from social media – which is where filter bubbles are at their strongest. Since few people rely on social media as their only news source, is the filter bubble being criticized unfairly?

Before you say yes, consider this: around half of Americans use a search engine to access news online, so the Google bubble could significantly influence what information users receive. In addition, journalists have social media accounts and use Google for research, which means filter bubbles also influence media beyond the internet. Opinions on the Google bubble are contradictory, however: Eli Pariser provides clear evidence of how personalization influences Google's search results, but these observations date back to 2011, and Google makes regular changes to its search engine.

Nor should you forget that filter bubbles existed long before the internet: well before the development of the World Wide Web, people in organizations, within circles of friends, and at get-togethers argued their opinions and tried to win others over without any need for Google or Facebook bubbles. If anything, the internet has enabled more viewpoints to be heard and therefore greater pluralization.
But if the internet is to remain a free space in the future, where diverse people with differing points of view meet on the same level, the danger of filter bubbles should not be underestimated.

Ways out of the filter bubble

If you want to free yourself from your filter bubble, you have several options. The first step should be to question your own surfing behavior: if you actively look for conflicting views, you will find them despite the Facebook bubble and Google's (supposed) personalization. Social media algorithms can be consciously influenced and trained in this way – for example, if you 'like' the pages of several political parties, you should receive a wider range of information from across the political spectrum in the future. In this way, everyone can create their own diversity.

In addition, there are tools that at least free your searches from personalized results. Every internet user is free to use search engines other than Google. The German search engine Unbubble, for example, states that it neither collects nor evaluates information about its users' behavior. This means no personalized search takes place, and therefore no filter bubble can form. There are also browser add-ons that help prevent your surfing behavior from being tracked. What is good for privacy also helps against filter bubbles: if companies cannot collect personal data, they cannot personalize results or advertisements. You should also be careful about revealing personal information on social media. If you don't want to give up Facebook, you can at least be more restrictive with the data you enter.

Assuming that the filter bubble also extends into traditional media, it makes sense to use as many different media and sources as possible to obtain information and news. Online, this can be done with balanced news aggregators such as Feedly or News360, which let you take in other perspectives and broaden your horizons despite the looming filter bubble.
