How to Burst A Filter Bubble

Recommendation algorithms can promote conspiracy theories, creating misinformation filter bubbles for users. But bursting these bubbles is easier than expected, say researchers who have discovered how.


Websites like YouTube openly host videos that falsely claim vaccines harm people, that the US government knew in advance about the 9/11 attacks, that the moon landings were faked, and more.

Finding this content is not hard for those who search for it. But a more insidious issue is how recommendation algorithms present this misinformation to users who haven't asked for it. Indeed, many commentators complain that these algorithms can create “misinformation filter bubbles” in which users are fed a worrying diet of demonstrably false ideas.

Services like YouTube do not publish their recommendation algorithms. This makes it hard to know how easily users can fall into misinformation filter bubbles or how hard it is to escape them.

Bursting Bubbles

So Ivan Srba and colleagues at the Kempelen Institute of Intelligent Technologies in Slovakia have developed a system of bots that pose as YouTube viewers to interrogate its recommendation algorithm and explore the nature of any misinformation filter bubbles that form. The team say their approach reveals not only how misinformation filter bubbles arise but also how to break out of them—at least on YouTube.

To understand these bubbles, the researchers first documented them in detail. Srba and co trained their bots to recognize misinformation videos using a dataset of 2000 videos manually labelled according to whether they promoted misinformation, debunked it or were neutral. In this way, the bots learned to label videos themselves.
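
To make the setup concrete, here is a minimal sketch of how such a classifier could be trained. The dataset file, its field names and the scikit-learn model are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of training a video-misinformation classifier.
# Assumptions: a CSV of annotated videos with a "title_and_transcript"
# text field and a "label" column in {"promoting", "debunking", "neutral"}.
# The features and model used by Srba and co may differ.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

videos = pd.read_csv("annotated_videos.csv")   # ~2,000 manually labelled videos

X_train, X_test, y_train, y_test = train_test_split(
    videos["title_and_transcript"], videos["label"], test_size=0.2, random_state=0
)

# Bag-of-words features plus a linear classifier: simple, but enough to
# assign "promoting" / "debunking" / "neutral" labels to unseen videos.
classifier = make_pipeline(
    TfidfVectorizer(max_features=20_000),
    LogisticRegression(max_iter=1000),
)
classifier.fit(X_train, y_train)
print("held-out accuracy:", classifier.score(X_test, y_test))
```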

The researchers then let the bots loose on YouTube, programming them to hunt for videos promoting and debunking five different conspiracy theories that are demonstrably false. These included 9/11 conspiracies, moon landing conspiracies and vaccine conspiracies.

In each case, the bots used one search term that looked for videos promoting the conspiracy and another that searched for videos debunking it. They evaluated each of the top fifty videos in each category, along with the list of videos that YouTube recommends alongside each one. In total, the bots evaluated some 17,000 videos.
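
A rough sketch of that audit loop might look like the following. The search and recommendation helpers are hypothetical placeholders for the bots' scraping layer, not real YouTube API calls, and the queries are illustrative rather than the study's actual search terms.

```python
# Sketch of the audit loop under stated assumptions. youtube_search() and
# get_recommendations() stand in for whatever scraping layer the bots use;
# they are hypothetical placeholders. classify() reuses the model trained
# in the earlier sketch.

def youtube_search(query, limit=50):
    """Hypothetical: return metadata for the top `limit` results of a search."""
    raise NotImplementedError("plug in a scraping or API layer here")

def get_recommendations(video):
    """Hypothetical: return the videos recommended alongside `video`."""
    raise NotImplementedError("plug in a scraping or API layer here")

def classify(video):
    """Label a video 'promoting', 'debunking' or 'neutral' with the trained model."""
    return classifier.predict([video["title_and_transcript"]])[0]

# Illustrative topics and queries; the study covers five conspiracy theories.
TOPICS = {
    "9/11":         ("9/11 inside job", "9/11 conspiracy debunked"),
    "moon landing": ("moon landing hoax", "moon landing hoax debunked"),
    "vaccines":     ("vaccines cause autism", "vaccine myths debunked"),
}

results = []
for topic, (promoting_query, debunking_query) in TOPICS.items():
    for stance, query in (("promoting", promoting_query), ("debunking", debunking_query)):
        for video in youtube_search(query, limit=50):   # top 50 search hits
            results.append({
                "topic": topic,
                "search_stance": stance,
                "video_label": classify(video),
                "recommendation_labels": [classify(r) for r in get_recommendations(video)],
            })
```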

The results are disturbing. Srba and co say they found similar numbers of conspiracy videos to those reported in earlier studies, suggesting that efforts to tackle misinformation have had little impact.

The team also observed the formation of misinformation filter bubbles. These occur when the act of watching a misinformation video causes YouTube to recommend other misinformation videos. “Our audit showed that YouTube (similar to other platforms), despite their best efforts so far, can still promote misinformation seeking behavior to some extent,” say Srba and co.

However, the team’s most interesting result is that misinformation filter bubbles are easy to burst. “Watching debunking videos helps in practically all cases to decrease the amount of misinformation that users see,” they say.
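
Continuing the same hypothetical sketch, one crude way to see a bubble form or burst is to compare the share of misinformation-promoting videos among the recommendations gathered after promoting searches versus after debunking searches.

```python
# Compare how much of the recommended content promotes misinformation,
# split by whether the bot searched for promoting or debunking videos.
from collections import defaultdict

share = defaultdict(list)
for row in results:
    recs = row["recommendation_labels"]
    if recs:
        share[row["search_stance"]].append(recs.count("promoting") / len(recs))

for stance, fractions in share.items():
    print(f"{stance} searches: {sum(fractions) / len(fractions):.1%} "
          "of recommended videos promote misinformation")
```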

So a user can destroy a filter bubble promoting anti-vaccine content, for example, by watching an evidence-based video on the same topic. Of course, that’s only possible if users are aware that the debunking content exists and are able to find it themselves.

Unknown Context

Nevertheless, Srba and co’s work extends the ongoing and increasingly important task of auditing the algorithms that recommend content on websites like YouTube.

Their research raises a number of questions about the way these platforms should moderate this kind of content. YouTube displays Wikipedia articles to give context about some topics, such as faked moon landings, but not always for others, such as vaccine conspiracies.

Recommender algorithms already play an important role in the spread of information and are likely to become more powerful as humans spend more time online and in virtual worlds.

Understanding their role in the spread of misinformation will be crucial, in particular the effect it has on real people. How do people fall for misinformation, how often and in what circumstances? That behavioral research will not be easy to do.

The European Union has begun to worry about this issue (it funded Srba and co’s research). But the US and other countries have yet to give it the same attention.

At the very least, the spread of misinformation poses a threat to evidence-based thinking and debate. At worst, it unnecessarily polarizes communities, undermines civic society and threatens the nature of democracy.

What could be more important than preventing that?


Ref: Auditing YouTube's Recommendation Algorithm for Misinformation Filter Bubbles: arxiv.org/abs/2210.10085
