Artificial intelligence has proven invaluable in tailoring online experiences, but it can have unintended results, as is the case on YouTube, where conspiracy theorists and climate deniers see very different content than other users. AI technology holds a somewhat unique place in the news, for reasons ranging from its growing role in entertainment to the racial bias found in software designed to reconstruct faces from pixelated images.
Like any tool, a piece of software's limitations have a lot to do with its design. However effectively machines are taught to recreate facial structures or support student education, they can miss the point of these tasks altogether. Take, for example, a human and a machine sitting down together in a room and setting upon the same task: read one article, then produce another like it. The human can draw on concepts of subject, wording, and analogy to create a new work based on a lifetime of experience with their language. A machine given the same single article as a sample could read it hundreds of times faster, but if the machine lacks the human's linguistic knowledge and receives no further training, it might only write the same exact article back.
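The thought experiment above is, loosely, the problem of learning from too little data. A toy sketch (the function names and the sample sentence are illustrative, not drawn from the article): a simple bigram text generator trained on a single "article" in which every word transition occurs exactly once can do nothing but regurgitate its input verbatim.

```python
import random

def train_bigrams(text):
    # Map each word to the list of words that follow it in the training text.
    words = text.split()
    model = {}
    for a, b in zip(words, words[1:]):
        model.setdefault(a, []).append(b)
    return model

def generate(model, start, length=12):
    # Walk the model from a starting word, picking a learned successor each step.
    out = [start]
    for _ in range(length - 1):
        choices = model.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

# A one-sentence "article": every transition is unique, so the model
# has exactly one choice at each step and can only reproduce its input.
article = "machines learn patterns from training data"
model = train_bigrams(article)
print(generate(model, "machines"))  # → "machines learn patterns from training data"
```

With a lifetime's worth of varied text, the same trivial algorithm would produce novel word sequences; with a single sample, it is locked into one path.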
A similar situation is demonstrated by TheirTube, which showcases YouTube accounts with the same view history as real users and lets the same AI recommend new videos, offering a look into the way YouTube can encapsulate users in a bubble of similar content. On one hand, accounts showing heavy interest in conspiracy theories are exposed to headlines about vanishing flights, people with superpowers, and unsolved mysteries. For climate deniers, however, the recommendations take on a political element, populating the feed with videos whose headlines praise the oil industry and criticize climate models and left-wing political theory.
Regardless of ideology or political leaning, people have their own preferences about the kind of content they want to see, and AI is intended to facilitate that for the user. But just as machine-learning AI is trained by being shown the same kinds of examples over and over, implications arise when people are placed in an ideological echo chamber. One might begin surfing the site because unexplained mysteries are exciting, never expecting to then be recommended documentaries alleging worldwide conspiracy; but if users enjoy that kind of content and the AI caters to that demand, harmful ideas can stay at the forefront of AI-powered content direction.
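The echo-chamber drift described above can be sketched with a toy content-based recommender (the catalog, tags, and function names here are all hypothetical, not YouTube's actual system): each step recommends the unseen video whose tags best match everything watched so far. A viewer who starts with one "mystery" video is steered through adjacent conspiracy content, while the unrelated science video is never surfaced.

```python
from collections import Counter

# Hypothetical catalog: each video is described by a set of topic tags.
videos = {
    "mystery_1":    {"mystery", "unexplained"},
    "mystery_2":    {"mystery", "paranormal"},
    "conspiracy_1": {"mystery", "conspiracy"},
    "conspiracy_2": {"conspiracy", "paranormal"},
    "science_1":    {"science", "climate"},
}

def recommend(history, catalog):
    # Build a tag profile from everything the user has watched,
    # then pick the unseen video with the highest tag overlap.
    profile = Counter(tag for vid in history for tag in catalog[vid])
    unseen = [v for v in catalog if v not in history]
    return max(unseen, key=lambda v: sum(profile[t] for t in catalog[v]))

history = ["mystery_1"]
for _ in range(3):
    history.append(recommend(history, videos))

print(history)
# → ['mystery_1', 'mystery_2', 'conspiracy_1', 'conspiracy_2']
# "science_1" shares no tags with the history, so it never appears.
```

The greedy "most similar next" rule is exactly what makes the loop self-reinforcing: every recommendation strengthens the profile that produced it, and content outside the bubble scores zero.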
Neural networks and the machine-learning algorithms powering them aren't to blame; they're only tools, used in this case to bring enjoyable media to users, and the same artificial intelligence could help prevent the spread of misinformation by flagging individual instances of it. The fact remains that the YouTube algorithm can place well-meaning people in a content space predisposed to harmful messages, without equally easy access to videos challenging those ideas, and it isn't a reach to argue that the media people consume helps inform their worldview. Complications aside, a demonstration of how people of differing ideologies are informed by content is valuable precisely because the accessibility of that media is handled algorithmically by YouTube, and because the procedural generation of recommended listings has been enormously successful at collecting views.
More: YouTube, Facebook, & Twitter COVID-19 Viral Video Removal Explained