
Are social media algorithms perpetuating biases online?

Tuesday, December 15, 2020

By RUTH OMONDI

A few days ago, my colleague Brian Kagoro, a renowned pan-Africanist, and his fellow Zimbabwean political activists had their Twitter accounts arbitrarily suspended. This set off frantic efforts to seek an explanation from Twitter, but none was forthcoming. Instead, their accounts were restored as arbitrarily as they had been suspended.

Kagoro is a fiery human rights defender, pro-democracy activist and believer in open society values. Using the hashtag "#AfricanLivesMatter", he posts on Twitter and Facebook on a range of issues, from police brutality and authoritarianism to corruption and human rights violations.

The flagging and suspension of Twitter accounts highlights the issues surrounding the use of algorithms and the biases inherent in them. We are perhaps reaching the point that some scholars in 2012 called "automation bias running rampant". While social media platforms like Facebook, YouTube, and Twitter increasingly rely on artificial intelligence to flag and stop the spread of hate speech, disinformation and other abusive or offensive content, studies show that the algorithms flagging hate speech and disinformation online are biased against certain categories of people. Instead of filtering out disinformation and hate speech, the algorithms trained to identify them may instead amplify the biases.

Researchers have also warned that, algorithms "being products of complex processes, their decisions are not automatically equitable and just, and the procedural consistency of algorithms is not equivalent to objectivity".

They further note that while the application of algorithms may enhance efficiency, especially in the use of big data, their opacity makes it very difficult to assess how correct or fair they are when applied in social settings with varied contextual realities. The hope has been that algorithms are near-infallible and therefore harmless to use. In reality, algorithms are only probabilistically accurate, and systematic errors are inherent to them. Learning algorithms, for instance, have been found to be vulnerable to the characteristics and errors of their training data.

Further evidence has demonstrated that how artificial agents such as algorithms and artificial intelligence systems behave is determined by human specifications, and that the results can therefore be incorrect, inequitable or dangerous.


Tests for racial bias, conducted by training models on existing data sets, have also found substantial racial biases and concluded that the quality of the data fed into the models is critical. Even if the algorithmic systems themselves are neutral, the data that goes into them is shaped by humans who decide, on the basis of their own biases, what counts as hate speech or offensive content. The data thus becomes biased, and the flagging of content as hate speech or offensive can return flawed results that reflect those inherent biases.
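
To illustrate how this happens in practice, consider a minimal, hypothetical sketch in Python (using the scikit-learn library): a toy classifier is trained on posts labelled by a biased annotator, and it faithfully learns the annotator's bias rather than any real notion of offensiveness. The posts, labels and model choice below are illustrative assumptions, not a description of any platform's actual system.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training posts; none of them are actually abusive.
posts = [
    "police brutality must end",
    "corruption is destroying this country",
    "what a lovely sunny day",
    "enjoying coffee with friends",
    "stop the looting of public funds",
    "my cat is asleep again",
]

# Biased human labels: the hypothetical annotator marks activist language
# as "offensive" (1) and everything else as acceptable (0).
labels = [1, 1, 0, 0, 1, 0]

# The model learns whatever pattern the labels encode, bias included.
vectoriser = CountVectorizer()
model = LogisticRegression()
model.fit(vectoriser.fit_transform(posts), labels)

# A new, clearly legitimate post about human rights is likely to be flagged,
# because the model has learned the annotator's bias, not abusiveness.
new_post = ["a peaceful protest against police brutality"]
print(model.predict(vectoriser.transform(new_post)))  # likely prints [1]

The point of the sketch is that nothing in the model itself is prejudiced; the bias enters entirely through the labels that humans supplied.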

Yet others have argued that addressing the biases inherent in algorithms is not just a matter of sorting out data issues, but also of tackling the structural inequalities that shape what is fed into the system. The lack of diversity in decision-making further compounds the biases that are inherent in society. Analysts have described this as a "diversity disaster" that has contributed to flawed systems which perpetuate biases.

These biases in systems built by the artificial intelligence industry have been attributed largely to the lack of diversity within the field itself. What this has meant is that the nuances characteristic of certain contexts, such as those in Africa, do not find their way into these systems.

In essence, therefore, assessing which content is offensive and needs flagging, or which accounts need to be suspended, requires far more care and nuance that takes the contextual realities into account. What is considered offensive often depends on the social context and the meaning ascribed to the content in a particular setting.

However, algorithms, and the content moderators who grade the data that teach algorithms how to detect such content, often fail to take these contextual nuances into account. In fact, evidence has shown that they can amplify the biases already inherent in humans, indicating that the data that feed algorithms carry in-built biases from the start. To address these biases and ensure fairness in the application of algorithms, a diversity of voices needs to be at the table when the data that teach algorithms are determined.

This will bring to the table the diverse nuances inherent in different contexts. In addition, the moderators who grade the data that teach algorithms must take deliberate steps to understand the contextual realities: it has been demonstrated that when moderators are aware of the social context in which content is generated, they are significantly less likely to label it as offensive or abusive. More importantly, improving people's literacy about how algorithms work, and about the biases inherent in them, would help them recognise some of these biases and keep demanding fairer and more open artificial agents.

It is therefore apparent that the suspension of Brian Kagoro's and his fellow pan-Africanists' Twitter accounts was a case of algorithmic bias that unfairly flags tweets by certain groups, in this case from Africa, as offensive or abusive without taking into account the social, political, linguistic and cultural nuances.

Ruth Omondi is a communications specialist and PhD student in communications and information at the University of Nairobi, with research interests in artificial agents, big data, disinformation, and democracy.
