The Algorithmic Fairness (AlFa) group is a research initiative dedicated to understanding, measuring, and mitigating bias in network science and data science. It was founded by Dr. Akrati Saxena and is hosted at LIACS, Leiden University, The Netherlands. We investigate how societal inequalities, such as those based on gender, ethnicity, socioeconomic status, or geography, become embedded in data and algorithms. These biases arise because digital systems often mirror the social and structural imbalances of the real world, capturing unequal patterns of participation, visibility, and representation. When algorithms learn from such biased data without accounting for these disparities, they risk amplifying unfairness and disproportionately harming minority or underrepresented groups. Ensuring fairness is therefore not just a technical challenge but a societal imperative, critical to building AI systems that are fair, explainable, and trustworthy. By developing transparent and equitable computational frameworks, the group aims to ensure that algorithmic decision-making contributes to inclusion, accountability, and social good.
Through these interconnected research themes, the AlFa group aims to advance the development of fair, ethical, and socially responsible AI, contributing to a more inclusive and equitable society.