Decolonising bias in organisational systems: a machine learning approach to equity, power, and algorithmic justice

Ibitoye, Ayodeji Olusegun ORCID: https://orcid.org/0000-0002-5631-8507 and Kolade, Oluwaseun (2026) Decolonising bias in organisational systems: a machine learning approach to equity, power, and algorithmic justice. In: Olatunji, David Adekoya, Ajonbadi, Hakeem Adeniyi, Ciesielska, Malgorzata, Kolade, Oluwaseun and Mordi, Chima, (eds.) Decolonising the Organisation: Emerging Frontiers and New Perspectives. Palgrave Macmillan - Springer Nature, Cham, Switzerland, pp. 289-317. ISBN 978-3032148506 (doi:10.1007/978-3-032-14851-3_13)

PDF (Accepted Book Chapter)
52598 IBITOYE_ Decolonising_Bias_In_Organisational_Systems_A_Machine_Learning_Approach_(BOOK CHAPTER AAM)_2026.pdf - Accepted Version
Restricted to Repository staff only until 17 February 2027.

Download (378kB)

Abstract

This chapter examines how algorithmic systems can reproduce and exacerbate structural inequities along gender and racial lines, using the Adult Income dataset as a testbed for comparative analysis across four models: Logistic Regression, XGBoost, Explainable Boosting Machines (EBM), and Adversarial Debiasing Networks. Empirical evaluation reveals substantial disparities in unmitigated models, with disparate impact ratios falling as low as 0.68 for women and non-white individuals. Crucially, this study embeds technical findings within a decolonial theoretical framework, arguing that fairness cannot be reduced to statistical parity. Instead, it must be understood as a historically situated, epistemically accountable, and relationally constructed concept. The research challenges dominant narratives of algorithmic neutrality by foregrounding the colonial legacies and institutional hierarchies that inform both data practices and model design. By bridging machine learning evaluation with critical social theory, this research advances a reflexive, justice-oriented approach to algorithmic governance in organisations. It offers a framework for rethinking fairness not simply as a computational objective, but as a moral and organisational commitment grounded in equity, participatory design, and the inclusion of marginalised knowledges.
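For readers unfamiliar with the metric cited above, the following is a minimal sketch of how a disparate impact ratio such as 0.68 is conventionally computed: the rate of favourable predictions for an unprivileged group divided by the rate for a privileged group. The function name, group labels, and toy values here are illustrative assumptions, not drawn from the chapter's experiments.

```python
# Illustrative sketch (not the chapter's code): computing a disparate
# impact (DI) ratio for binary predictions and a binary protected attribute.

def disparate_impact(preds, groups, unprivileged, privileged):
    """DI = P(y_hat=1 | unprivileged group) / P(y_hat=1 | privileged group)."""
    def favourable_rate(g):
        selected = [p for p, grp in zip(preds, groups) if grp == g]
        return sum(selected) / len(selected)
    return favourable_rate(unprivileged) / favourable_rate(privileged)

# Toy predictions (1 = model predicts income > $50K), grouped by gender
preds  = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]
groups = ["F", "F", "F", "F", "F", "M", "M", "M", "M", "M"]

di = disparate_impact(preds, groups, unprivileged="F", privileged="M")
print(round(di, 2))  # favourable rate 0.4 for F vs 0.8 for M, so DI = 0.5
```

A ratio below 0.8 is commonly flagged as adverse impact under the US EEOC "four-fifths" guideline, which is why values like 0.68 are read as evidence of substantial disparity.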

Item Type: Book Section
Uncontrolled Keywords: AI ethics, algorithmic bias, decolonial theory, machine learning fairness, organisational justice
Subjects: H Social Sciences > H Social Sciences (General)
Q Science > Q Science (General)
Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Faculty / School / Research Centre / Research Group: Faculty of Engineering & Science
Faculty of Engineering & Science > School of Computing & Mathematical Sciences (CMS)
Last Modified: 05 Mar 2026 16:12
URI: https://gala.gre.ac.uk/id/eprint/52598

