Md Fahim Sikder is currently pursuing his Ph.D. under the guidance of Professor Fredrik Heintz and Assistant Professor Daniel de Leng at the Reasoning and Learning (ReaL) Lab, IDA, Linköping University, Sweden. His research focuses on creating Generative Models for Time-Series and Fair Data Generation. Before this, Fahim served as a Lecturer in the Computer Science and Engineering department at the Institute of Science, Trade, and Technology (ISTT). He also took on the roles of Coordinator of the HEAP Programming Club and Coach of the ACM ICPC team at ISTT.
Fahim’s research interests include Artificial Intelligence, Generative Models, and Trustworthy AI.
Md Fahim Sikder has conducted several workshops and seminars, including a workshop on LaTeX and a week-long training course on Python (beginner to advanced, including Machine Learning). He has participated in several national and international contests and won a number of titles, including Champion at the International Contest on Programming and System Development (ICPSD), 2014, and Champion at the NASA SPACE APPS CHALLENGE 2016 in the Rajshahi Region, Bangladesh.
Ph.D. in Computer Science, Ongoing
Linköping University, Sweden
Master of Science in Computer Science, 2018
Jahangirnagar University, Bangladesh
Bachelor of Science (Engineering) in Computer Science & Engineering, 2016
Gopalganj Science and Technology University, Bangladesh (Formerly- Bangabandhu Sheikh Mujibur Rahman Science and Technology University)
Higher Secondary Certificate Examination, 2012
Khilgaon Government High School, Bangladesh
Secondary School Certificate Examination, 2010
Khilgaon Government High School, Bangladesh
Department of Computer and Information Science (IDA)
Responsibilities include:
Department of Computer Science & Engineering (CSE)
Responsibilities include:
As Artificial Intelligence-driven decision-making systems become increasingly popular, ensuring fairness in their outcomes has emerged as a critical and urgent challenge. AI models, often trained on open-source datasets embedded with human and systemic biases, risk producing decisions that disadvantage certain demographics. This challenge intensifies when multiple sensitive attributes interact, leading to intersectional bias, a compounded and uniquely complex form of unfairness. Over the years, various methods have been proposed to address bias at the data and model levels. However, mitigating intersectional bias in decision-making remains an under-explored challenge. Motivated by this gap, we propose a novel framework that leverages knowledge distillation to promote intersectional fairness. Our approach proceeds in two stages: first, a teacher model is trained solely to maximize predictive accuracy; then a student model inherits the teacher’s representational knowledge while incorporating intersectional fairness constraints. The student model integrates tailored loss functions that enforce parity in false positive rates and demographic distributions across intersectional groups, alongside an adversarial objective that minimizes protected-attribute information within the learned representation. Empirical evaluation across multiple benchmark datasets demonstrates that our approach achieves a 52% increase in accuracy for multi-class classification and a 61% reduction in average false positive rate across intersectional groups, outperforming state-of-the-art models. This distillation-based methodology provides a more stable optimization path than direct fairness approaches, resulting in substantially fairer representations, particularly for multiple sensitive attributes and underrepresented demographic intersections.
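One of the fairness criteria mentioned above, parity in false positive rates across intersectional groups, can be illustrated with a small sketch. The function below is not the paper's implementation; it is a minimal stand-in that shows how such a gap could be measured: each unique combination of sensitive-attribute values (e.g. race × gender) defines one intersectional group, and the gap is the spread between the highest and lowest per-group false positive rates. The function name and toy data are illustrative assumptions.

```python
def intersectional_fpr_gap(y_true, y_pred, attrs):
    """Spread of false positive rates across intersectional groups.

    y_true, y_pred: binary labels and predictions (0 or 1).
    attrs: per-sample tuples of sensitive attributes; each distinct
    tuple (e.g. (race, gender)) is one intersectional group.
    Groups with no true negatives are skipped, since FPR is
    undefined for them.
    """
    counts = {}  # group -> (false positives, true negatives)
    for yt, yp, group in zip(y_true, y_pred, attrs):
        if yt == 0:  # FPR is computed over true negatives only
            fp, n = counts.get(group, (0, 0))
            counts[group] = (fp + (yp == 1), n + 1)
    rates = [fp / n for fp, n in counts.values()]
    return max(rates) - min(rates)

# Toy example: two binary attributes -> up to four intersectional groups.
y_true = [0, 0, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 0, 0, 1, 1, 1, 1]
attrs = [(0, 0), (0, 0), (0, 1), (0, 1),
         (1, 0), (1, 0), (1, 1), (1, 1)]
gap = intersectional_fpr_gap(y_true, y_pred, attrs)
# Group (0,0) has FPR 0.5, group (0,1) has FPR 0.0,
# group (1,1) has FPR 1.0, so the gap is 1.0.
```

A differentiable analogue of this quantity (using soft prediction probabilities instead of hard labels) is the kind of term that could be added to the student model's loss to penalize disparity across groups.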