In this paper I argue that the problem of algorithmic bias should not be treated as a merely technical issue. There is no such thing as a bias-free AI model or system. What we have are sets of desirable and undesirable biases, and populating these sets should be done through public debate rather than behind-closed-doors "technical" decisions.
Abstract
Artificial intelligence (AI) techniques are used to model human activities and predict behavior. Such systems have shown race, gender, and other kinds of bias, which are typically understood as technical problems. Here we try to show that: 1) to get rid of such biases, we need a system that can understand the structure of human activities; and 2) to create such a system, we need to solve foundational problems of AI, such as the common-sense problem. Additionally, when informational platforms use these models to mediate interactions with their users, which is commonplace nowadays, there is an illusion of progress, for what is an increasingly greater influence over our own behavior is mistaken for increasingly higher predictive accuracy. Given this, we argue that the bias problem is deeply connected to non-technical issues that must be discussed in public spaces.
Language
Portuguese
DOI
https://doi.org/10.26512/rfmc.v8i3.34363