Socially Acceptable AI and Fairness Trade-offs in Predictive Analytics
Type
applied research project
Start Date
01 June 2020
End Date
31 May 2024
Acronym
AI, Big Data
Status
ongoing
Keywords
Machine learning
Participatory design
Ethics
ML based decision making
Artificial Intelligence
Fairness
Human Resource Management
Description
Fairness and non-discrimination are basic requirements for socially acceptable implementations of AI, as they are fundamental values of our society. However, the relationship between statistical fairness concepts, the fairness perceptions of human stakeholders, and the principles discussed in philosophical ethics is not well understood. The objective of our project is to develop a methodology that facilitates fairness-by-design approaches for AI-based decision-making systems. The core of this methodology is the “Fairness Lab”, an IT environment for understanding, explaining, and visualizing the fairness implications of an ML-based decision system. It will help companies build socially accepted and ethically justifiable AI applications, teach students and developers about fairness, and support informed political decisions on regulating AI-based decision making. Conceptually, we integrate statistical approaches from computer science and philosophical theories of justice and discrimination into interdisciplinary theories of predictive fairness. Through empirical research, we study the fairness perceptions of different stakeholders in order to align the theoretical approach with them. The utility of the Fairness Lab as a tool for creating “fairer” applications will be assessed in the context of participatory design. As application areas, we focus on employment and education. Our project makes a significant contribution to the understanding of fairness in the digital transformation and promotes improved conditions for the deployment of fair and socially accepted AI.
Leader contributor(s)
Funder(s)
Notes
Machine Learning; Participatory Design; Ethics; ML-based Decision-Making; Artificial Intelligence; Fairness; Human Resource Management
Range
HSG + other universities + partners
Range (De)
HSG + andere Unis + Partner
Division(s)
Eprints ID
247991
Reference Number
187473
Publication
Algorithmic Management: Its Implications for Information Systems Research (ACM, 2023)
Authors: Cameron, Lindsey; Lamers, Laura; Meijerink, Jeroen; Möhlmann, Mareike
Abstract: In recent years, the topic of algorithmic management has received increasing attention in information systems (IS) research and beyond. As both emerging platform businesses and established companies rely on artificial intelligence and sophisticated software to automate tasks previously done by managers, important organizational, social, and ethical questions emerge. However, a cross-disciplinary approach to algorithmic management that brings together IS perspectives with other (sub-)disciplines such as macro- and micro-organizational behavior, business ethics, and digital sociology is missing, despite its usefulness for IS research. This article engages in cross-disciplinary agenda setting through an in-depth report of a professional development workshop (PDW) entitled “Algorithmic Management: Toward a Cross-Disciplinary Research Agenda” delivered at the 2021 Academy of Management Annual Meeting. Three leading experts (Mareike Möhlmann, Lindsey Cameron, and Laura Lamers) on the topic provide their insights on the current status of algorithmic management research, how their work contributes to this area, where the field is heading in the future, and what important questions should be answered going forward. These accounts are followed up by insights from the breakout group discussions at the PDW that provided further input. Overall, the experts and workshop participants highlighted that future research should examine both the desirable and undesirable outcomes of algorithmic management and should not shy away from posing ethical and normative questions.
Type: journal article
Journal: Communications of the Association for Information Systems (CAIS)
Volume: 52