The growing use of Cyber-Physical Systems (CPS) to automate complicated physical processes increases the complexity of CPS and their behavior, creating the need to make them explainable. The popular Explainable Artificial Intelligence (XAI) methodologies employed to explain the behavior of CPS usually overlook the impact of physical and virtual context on the outputs of decision-making software models, although such context is essential for explaining CPS behavior to stakeholders. Hence, in this article, we survey the most relevant XAI methods to identify their shortcomings and their applicability to explaining the behavior of CPS. Our main findings are: (i) several papers emphasize the relevance of context in describing CPS, yet existing approaches for explaining CPS fall short of being context-aware; (ii) the explanation delivery mechanisms rely on low-level visualization tools that render the explanations unintelligible; and (iii) these unintelligible explanations lack actionability. We therefore propose to enrich explanations with contextual information using Semantic Technologies, user feedback, and enhanced explanation visualization techniques to improve their understandability. To that end, context-aware explanation and better explanation presentation based on knowledge graphs may be a promising research direction for explainable CPS.