Md Fahim Sikder is currently pursuing his Ph.D. under the guidance of Professor Fredrik Heintz at the Reasoning and Learning (ReaL) Lab, IDA, Linköping University, Sweden. His research focuses on developing explainable synthetic time-series data generation techniques that also ensure privacy and fairness. Before this, Fahim served as a Lecturer in the Department of Computer Science and Engineering at the Institute of Science, Trade, and Technology (ISTT), where he also coordinated the HEAP Programming Club and coached the ACM ICPC team.
Fahim’s research interests include Deep Learning, Generative Models, Time-Series Generation, and Data Privacy.
Md Fahim Sikder has conducted several workshops and seminars, including a workshop on LaTeX and a week-long training course on Python (beginner to advanced, including machine learning). He has participated in numerous national and international contests and won several titles, including Champion at the International Contest on Programming and System Development (ICPSD), 2014, and Champion at the NASA Space Apps Challenge 2016 in the Rajshahi Region, Bangladesh.
Ph.D. in Computer Science, Ongoing
Linköping University, Sweden
Master of Science in Computer Science, 2018
Jahangirnagar University, Bangladesh
Bachelor of Science (Engineering) in Computer Science & Engineering, 2016
Bangabandhu Sheikh Mujibur Rahman Science & Technology University, Gopalganj, Bangladesh
Higher Secondary Certificate Examination, 2012
Khilgaon Government High School, Bangladesh
Secondary School Certificate Examination, 2010
Khilgaon Government High School, Bangladesh
Department of Computer and Information Science (IDA)
Responsibilities include:
Department of Computer Science & Engineering (CSE)
Responsibilities include:
Deep Generative Models (DGMs) for generating synthetic data with properties such as quality, diversity, fidelity, and privacy are an important research topic. Fairness is one particular aspect that has not received the attention it deserves. One difficulty is training DGMs with an in-process fairness objective, which can disturb the global convergence characteristics. To address this, we propose Fair Latent Deep Generative Models (FLDGMs) as enablers for more flexible and stable training of fair DGMs, by first learning a syntax-agnostic, model-agnostic, low-dimensional fair latent representation of the data. This separates the fairness optimization and data generation processes, thereby boosting stability and optimization performance. Moreover, data generation in the low-dimensional space enhances the accessibility of models by reducing computational demands. We conduct extensive experiments on image and tabular domains using Generative Adversarial Networks (GANs) and Diffusion Models (DMs) and compare them with the state of the art in terms of fairness and utility. Our proposed FLDGMs achieve superior performance in generating high-quality, high-fidelity, and high-diversity fair synthetic data compared to state-of-the-art fair generative models.
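The core idea of the abstract, separating fairness optimization (learning a fair latent representation) from data generation (sampling in that latent space), can be illustrated with a deliberately simple toy sketch. This is not the FLDGM method itself: it stands in PCA for the learned encoder, a linear projection for the fairness objective, and a Gaussian fit for the GAN/diffusion generator, purely to show the two-stage structure. All variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 500 samples, 10 features, binary sensitive attribute s.
# The features deliberately "leak" s, which a fair latent should remove.
n, d, k = 500, 10, 3                      # samples, features, latent dim
s = rng.integers(0, 2, n).astype(float)
X = rng.normal(size=(n, d)) + s[:, None] * 0.8

# Stage 1 (stand-in for the fair latent model): learn a low-dimensional
# representation (PCA via SVD), then project out the component that is
# correlated with the sensitive attribute.
Xc = X - X.mean(0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:k].T                         # k-dimensional latent codes

sc = s - s.mean()
sc /= np.linalg.norm(sc)
Z_fair = Z - np.outer(sc, sc @ Z)         # remove s-correlated direction

# Stage 2 (stand-in for the generator): model and sample the latent space
# only; the generation step never touches the fairness objective.
mu, cov = Z_fair.mean(0), np.cov(Z_fair, rowvar=False)
Z_new = rng.multivariate_normal(mu, cov, size=200)
X_new = Z_new @ Vt[:k] + X.mean(0)        # decode latents to data space

# Empirically, the fair latents are uncorrelated with s.
corr = abs(np.corrcoef(s, Z_fair[:, 0])[0, 1])
```

Because the fairness constraint is applied once, to the low-dimensional representation, the downstream generator can be trained (or swapped out) without re-optimizing the fairness term, which is the stability and flexibility argument the abstract makes.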