The Human Error Project
Type
fundamental research project
Acronym
HErr
Status
ongoing
Keywords
Algorithmic profiling
AI technologies
Data justice
Human rights
Description
We are living in a historical moment when every little detail of our lived experience is turned into a data point that AI systems and algorithms use to profile us, judge us and make decisions about us. These technologies are used everywhere. Health and education practitioners use them to ‘track risk factors’ or find ‘personalized solutions’. Employers, banks and insurers use them to judge clients or potential candidates. Even governments, the police and immigration officials use these technologies to decide key issues about individual lives, from one’s right to asylum to one’s likelihood to commit a crime. The COVID-19 pandemic has only intensified these practices of technological surveillance and profiling.
AI systems and predictive analytics are often used to make data-driven decision-making more efficient and to ‘avoid human error’. Yet paradoxically, as recent research has shown, these technologies are defined by intrinsic ‘errors’, ‘biases’ and ‘inaccuracies’ when it comes to reading humans, which can lead to a variety of real-life harms and human rights abuses.
The Human Error Project: AI, Human Rights, and the Conflict over Algorithmic Profiling combines anthropological theory with critical data and AI research. It aims to investigate the fallacy of algorithms when it comes to reading humans by focusing on three distinct, albeit interconnected, dimensions of human error in algorithms:
· Algorithmic Bias – Algorithms and AI systems are human-made and will always be shaped by the cultural values and beliefs of the humans and societies that created them.
· Algorithmic Inaccuracy – Algorithms process data. Yet the data processed by algorithms is often the product of everyday human practices, which are messy, contradictory and taken out of context; as a result, algorithmic predictions are filled with inaccuracies, partial truths and misrepresentations.
· Algorithmic Unaccountability – Algorithms produce specific predictions that are often unexplainable. The fact that most of the algorithms used for algorithmic profiling cannot be explained also makes them unaccountable: how can we trust their decisions if we cannot explain them?
Our team will be working on different interconnected research projects. Prof. Barassi will lead a two-year qualitative investigation – based on critical discourse analysis and in-depth interviews – into the conflicts over algorithmic profiling in Europe, funded by the HSG Basic Research Fund. Dr. Antje Scharenberg will work on a postdoctoral research project investigating the challenges of algorithmic profiling for human agency. Ms. Marie Poux-Berthe will work on a three-year PhD research project on digital media and technologies and the misconstruction of old age and aging, and Ms. Rahi Patra will focus her PhD research on health surveillance technologies, algorithmic bias and their implications for human rights and privacy.
We believe that understanding human error in algorithms has become a top priority of our times, because these errors shed light on the fact that the race for AI innovation is often shaped by stereotypical and reductionist understandings of human nature, and by newly emerging conflicts over what it means to be human.
Leader contributor(s)
Prof. Barassi
Member contributor(s)
Dr. Antje Scharenberg
Ms. Marie Poux-Berthe
Ms. Rahi Patra
Funder
HSG Basic Research Fund
Notes
Project overview: https://mcm.unisg.ch/en/forschung/forschungsprojekte/forschungsprojekte-media-and-culture/human-error-project
Project website: https://thehumanerrorproject.ch/
Project research report 1: https://thehumanerrorproject.ch/ai-errors-mapping-debate-european-media-report/
Division(s)
Eprints ID
247943