Fahse, Tobias; Blohm, Ivo; van Giffen, Benjamin
2023-04-13; 2022-12-09
https://www.alexandria.unisg.ch/handle/20.500.14171/107993
Abstract: Algorithmic forecasts outperform human forecasts by 10% on average, and state-of-the-art machine learning (ML) algorithms have further widened this gap. Sales forecasting is critical to a company's profitability because a variety of other activities depend on it. However, individuals are hesitant to use ML forecasts. Explainable artificial intelligence (XAI) can help overcome this algorithm aversion by making ML systems more comprehensible through explanations. Yet current XAI techniques are incomprehensible to laymen because they impose too much cognitive load. We address this research gap by investigating the effectiveness, in terms of forecast accuracy, of two example-based explanation approaches. We conduct an online experiment with a two-by-two between-subjects design using factual and counterfactual examples as experimental factors; a control group has access to ML predictions but not to explanations. We report the results of this study: while factual explanations significantly improved participants' decision quality, counterfactual explanations did not.
Language: en
Title: Effectiveness of Example-Based Explanations to Improve Human Decision Quality in Machine Learning Forecasting Systems
Type: conference paper