Machine learning is biased. But certain types of bias are more dangerous than others.
It began as science fiction often does: in a lab where engineers were hard at work. The developers at the Facebook AI Research Lab (FAIR) were accustomed to pushing the boundaries of artificial intelligence, but on this particular day, something extraordinary occurred. Developers noticed that a chatbot had created a unique language, completely independent of the standard script. More extraordinary still: the developers could not understand this new language.
In a move unsurprising to anyone acquainted with HAL from 2001: A Space Odyssey, the developers chose to shut down the chatbot. Regardless of whether the chatbot could ever have advanced enough to cause real harm, everyone was pretty creeped out. Fear of the singularity—the idea that as AI becomes more powerful, it will teach itself into an endless cycle of increasing intelligence—has grown in the last few years, with some forecasting doom and others anticipating a less grim outlook.
However, there are issues within the AI field that are even more sinister than a robot takeover. There’s no shortage of horror stories about bots spewing homophobic and racist remarks. These instances point to a glaring issue within the field.
MIT Research Scientist Rahul Bhargava describes machine learning as “the process of training a computer to make decisions that you want help making.” Many such products are already on the market, directing us toward favorable courses of action. Something as simple as Alexa advising you to bring an umbrella based on weather reports shows that current AI devices can synthesize information and give you advice on how to proceed. However, problems arise when the “process of training” does not adjust for biases within a given data set. These issues are many and varied, but among the most disheartening is the recent increase in what has been termed “emergent bias.”
According to the study Bias in Computer Systems, emergent bias occurs “as a result of changing societal knowledge, population, or cultural values.” Of course, these changing values are best reflected today in the world of social media. Facebook is a prime example. Users can share articles and news clips with their network and, in turn, view what their friends are sharing.
The issue is that the more you share, the better Facebook’s algorithms are able to determine what content interests you. For example, if I “like” a page for coffee lovers, the Facebook algorithm may then recommend an article telling me that drinking three cups of coffee a day has many health benefits. If I were to then share that article, the Facebook bots would pick up on it and recommend more articles in a similar vein. Perhaps I would see and click on a follow-up article that references a similar study with like findings. And then maybe I would see a reaction piece from someone who attributes their overall wellness to drinking three cups of coffee a day. This is where emergent bias takes hold.
Because I “liked” a page for coffee lovers, all of the articles in my feed affirm the benefits of drinking three cups of coffee a day. But what the algorithm fails to show me is the counter study that states the negative effects of excessive caffeine consumption. Facebook’s ability to detect my interests and appeal to them creates a bubble that allows for very little counter-argument.
As the Facebook algorithm becomes accustomed to your preferences, it recommends more content based on your history. This can create what Kristian Hammond of TechCrunch refers to as “bias bubbles” in which the only content being seen is that which aligns with content liked or shared in the past. “The result,” Hammond notes, “is a flow of information that is skewed toward a user’s existing belief set.”
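The feedback loop Hammond describes can be sketched in a few lines of code. This is a deliberately naive simulation, not Facebook's actual algorithm: articles are reduced to a single pro- or anti-coffee stance, and the hypothetical `recommend` function simply favors whatever matches the user's current preference score.

```python
# Toy simulation of a "bias bubble": a recommender that matches a user's
# current preference, where every click reinforces that preference.

# Hypothetical articles, each with a stance on coffee: +1 (pro) or -1 (anti).
articles = [+1] * 10 + [-1] * 10

def recommend(user_score, articles):
    """Pick the article whose stance best matches the user's current
    preference score (a naive similarity heuristic)."""
    return max(articles, key=lambda stance: stance * user_score)

# A slight initial pro-coffee signal: the user "liked" a coffee-lovers page.
user_score = 0.1
history = []
for _ in range(5):
    stance = recommend(user_score, articles)
    history.append(stance)
    # Each click nudges the preference further toward what was just shown.
    user_score += 0.5 * stance

print(history)     # every recommendation shares the same stance
print(user_score)  # the preference score only grows more extreme
```

Even though anti-coffee articles exist in the pool, the user never sees one: a tiny initial tilt is amplified into an ever-stronger preference, and the feed converges on a single point of view.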
It’s not difficult to see why skewed information can be dangerous. With so-called “fake news” flourishing, emergent bias allows for a news cycle that is less about presenting the objective truth and more about appealing to our own ideologies. When there’s a bubble around issues like racism or sexism, the constant affirmation of a single point of view inhibits an individual’s ability to expand their knowledge beyond the bounds of their own network.
In a world where 67% of Americans are getting at least some of their news from social media, emergent bias has the power to influence the ways in which we receive—and perceive—current events. And the further we delve into our own bubbles of belief, the less likely we are to be exposed to other points of view—let alone to consider them. Thus, we must continue to acknowledge the bias within our machines so that we can recognize the bias within ourselves. Then, perhaps, we can create a more objective world.