Kimberly Garcia
Last Name: Garcia
First Name: Kimberly
Email: kimberly.garcia@unisg.ch
Now showing 1-10 of 14
Publication: Gaze-enabled activity recognition for augmented reality feedback (2024-03-16)
Authors: Andrew Duchowski; Krzysztof Krejtz
Head-mounted Augmented Reality (AR) displays overlay digital information on physical objects. Through eye tracking, they provide insights into user attention, intentions, and activities, and allow novel interaction methods based on this information. However, in physical environments, the implications of using gaze-enabled AR for human activity recognition have not been explored in detail. In an experimental study with the Microsoft HoloLens 2, we collected gaze data from 20 users while they performed three activities: Reading a text, Inspecting a device, and Searching for an object. We trained machine learning models (SVM, Random Forest, Extremely Randomized Trees) with extracted features and achieved up to 89.6% activity-recognition accuracy. Based on the recognized activity, our system, GEAR, then provides users with relevant AR feedback. Due to the sensitivity of the personal (gaze) data GEAR collects, the system further incorporates a novel solution based on the Solid specification that gives users fine-grained control over the sharing of their data. The provided code and anonymized datasets may be used to reproduce and extend our findings, and as teaching material.
Type: journal article. Journal: Computers & Graphics. Volume: 119. Issue: Special Section on Eye Gaze VISA
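The classification stage this abstract describes maps naturally onto scikit-learn, which provides all three named model families. The sketch below is a minimal illustration under stated assumptions: the feature matrix is a random placeholder standing in for extracted gaze features, and the hyperparameters are generic defaults rather than the paper's actual configuration.

```python
# Minimal sketch, assuming gaze features (e.g., fixation durations, saccade
# amplitudes) were already extracted into a matrix X with activity labels y.
# X and y below are random placeholders, not the paper's data.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X = rng.normal(size=(600, 12))       # placeholder gaze-feature windows
y = rng.integers(0, 3, size=600)     # 0=Reading, 1=Inspecting, 2=Searching

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "Random Forest": RandomForestClassifier(n_estimators=200),
    "Extremely Randomized Trees": ExtraTreesClassifier(n_estimators=200),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5)  # 5-fold accuracy estimate
    print(f"{name}: {acc.mean():.3f} +/- {acc.std():.3f}")
```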
Publication: MR Object Identification and Interaction: Fusing Object Situation Information from Heterogeneous Sources (ACM, 2023-09-28)
Authors: Khakim Akhunov; Federico Carbone; Kasim Sinan Yildirim
The increasing number of objects in ubiquitous computing environments creates a need for effective object detection and identification mechanisms that permit users to intuitively initiate interactions with these objects. While multiple approaches to such object detection are available (visual object detection, fiducial markers, relative localization, or absolute spatial referencing), each suffers from drawbacks that limit its applicability. In this paper, we propose ODIF, an architecture that permits the fusion of object situation information from such heterogeneous sources and that remains vertically and horizontally modular, allowing systems that are constructed accordingly to be extended and upgraded. We furthermore present BLEARVIS, a prototype system that builds on the proposed architecture and integrates computer-vision (CV) based object detection with radio-frequency (RF) angle-of-arrival (AoA) estimation to identify BLE-tagged objects. In our system, the front camera of a Mixed Reality (MR) head-mounted display (HMD) provides a live image stream to a vision-based object detection module, while an antenna array mounted on the HMD collects AoA information from ambient devices. In this way, BLEARVIS is able to differentiate between visually identical objects in the same environment and can provide an MR overlay of information (data and controls) that relates to them. We include experimental evaluations of both the CV-based object detection and the RF-based AoA estimation, and discuss the applicability of the combined RF and CV pipelines in different ubiquitous computing scenarios. This research can form a starting point for integrating diverse object detection, identification, and interaction approaches that function across the electromagnetic spectrum, and beyond.
Type: journal article. Journal: Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT). Volume: 7. Issue: 3. DOI: 10.1145/3610879
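The core fusion idea (matching a visually detected object to the BLE tag whose angle of arrival best agrees with the object's bearing in the camera image) can be sketched as follows. The camera resolution, field of view, and matching threshold are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of matching CV detections to BLE AoA readings, assuming a
# pinhole-camera model: the horizontal bearing of a bounding box follows from
# its pixel offset to the image center and is compared to each tag's AoA.
import math

IMAGE_WIDTH_PX = 1280       # assumed camera resolution
HORIZONTAL_FOV_DEG = 64.7   # illustrative HMD front-camera field of view

def bearing_of_bbox(cx_px: float) -> float:
    """Horizontal bearing (deg) of a bounding-box center; 0 = straight ahead."""
    focal_px = (IMAGE_WIDTH_PX / 2) / math.tan(math.radians(HORIZONTAL_FOV_DEG / 2))
    return math.degrees(math.atan((cx_px - IMAGE_WIDTH_PX / 2) / focal_px))

def match_tag(bbox_center_x: float, aoa_by_tag: dict[str, float],
              max_diff_deg: float = 10.0) -> str | None:
    """Return the BLE tag whose AoA is closest to the detection's bearing."""
    bearing = bearing_of_bbox(bbox_center_x)
    tag, aoa = min(aoa_by_tag.items(), key=lambda kv: abs(kv[1] - bearing))
    return tag if abs(aoa - bearing) <= max_diff_deg else None

# Two visually identical lamps; the AoA readings disambiguate them.
print(match_tag(900.0, {"lamp-A": -20.0, "lamp-B": 17.5}))  # -> "lamp-B"
```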
Publication: GEAR: Gaze-enabled augmented reality for human activity recognition (ACM, 2023-05-30)
Authors: Hermann, Jonas; Jenss, Kay Erik; Soler, Marc Elias
Head-mounted Augmented Reality (AR) displays overlay digital information on physical objects. Through eye tracking, they allow novel interaction methods and provide insights into user attention, intentions, and activities. However, only a few studies have used gaze-enabled AR displays for human activity recognition (HAR). In an experimental study, we collected gaze data from 10 users on a HoloLens 2 (HL2) while they performed three activities (i.e., read, inspect, search). We trained machine learning models (SVM, Random Forest, Extremely Randomized Trees) with extracted features and achieved up to 98.7% activity-recognition accuracy. On the HL2, we provided users with AR feedback relevant to their current activity. We present the components of our system (GEAR), including a novel solution to enable the controlled sharing of collected data. We provide the scripts and anonymized datasets, which can be used as teaching material in graduate courses or for reproducing our findings.
Type: conference paper
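As a rough illustration of the feature-extraction step that precedes classification here and in the journal version above, the sketch below computes a few common gaze features over one window of samples using a simple velocity-threshold (I-VT) saccade rule; the threshold and feature set are assumptions, not the paper's actual pipeline.

```python
# Minimal sketch of windowed gaze-feature extraction, assuming gaze samples
# as (timestamp in seconds, x in degrees, y in degrees).
import numpy as np

SACCADE_VELOCITY_DEG_S = 30.0  # common I-VT threshold (an assumption here)

def window_features(t: np.ndarray, x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Small feature vector for one window of gaze samples."""
    velocity = np.hypot(np.diff(x), np.diff(y)) / np.diff(t)  # deg/s
    is_saccade = velocity > SACCADE_VELOCITY_DEG_S
    return np.array([
        velocity.mean(),    # mean gaze velocity
        velocity.max(),     # peak gaze velocity
        is_saccade.mean(),  # fraction of saccadic samples
        x.std(),            # horizontal dispersion
        y.std(),            # vertical dispersion
    ])

rng = np.random.default_rng(7)
t = np.arange(0, 1.0, 1 / 60)                  # one 1 s window at 60 Hz
x = np.cumsum(rng.normal(scale=0.1, size=60))  # synthetic gaze trace
y = np.cumsum(rng.normal(scale=0.1, size=60))
print(window_features(t, x, y))
```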
Publication: Actionable Contextual Explanations for Cyber-Physical Systems
Over the past two decades, Cyber-Physical Systems (CPS) have emerged as critical components in various industries, integrating digital and physical elements to improve efficiency and automation, from smart manufacturing and autonomous vehicles to advanced healthcare devices. However, the increasing complexity of CPS and their deployment in highly dynamic contexts undermine user trust. This motivates the investigation of methods capable of generating explanations about the behavior of CPS. To this end, Explainable Artificial Intelligence (XAI) methodologies show potential. However, these approaches do not consider contextual variables that a CPS may be subjected to (e.g., temperature, humidity), and the provided explanations are typically not actionable. In this article, we propose an Actionable Contextual Explanation System (ACES) that considers such contextual influences. Based on a user query about a behavioral attribute of a CPS (for example, vibrations or speed), ACES creates contextual explanations for the behavior of that CPS considering its context. To generate contextual explanations, ACES uses a context model to discover sensors and actuators in the physical environment of a CPS and obtains time-series data from these devices. It then cross-correlates these time-series logs with the user-specified behavioral attribute of the CPS. Finally, ACES employs a counterfactual explanation method and takes user feedback to identify causal relationships between the contextual variables and the behavior of the CPS. We demonstrate our approach with a synthetic use case; the favorable results obtained motivate the future deployment of ACES in real-world scenarios.
Type: conference paper
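The cross-correlation step that ACES applies to contextual time series can be illustrated with a short sketch; the signals below are synthetic placeholders, and the normalized-correlation recipe is a generic one rather than the paper's exact method.

```python
# Minimal sketch: correlate a contextual time series (e.g., temperature) with
# a CPS behavioral attribute (e.g., vibration) to find the lag of strongest
# association. Both signals are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
temperature = rng.normal(size=500)
# Vibration responds to temperature with a 12-sample delay plus noise.
vibration = np.roll(temperature, 12) + 0.3 * rng.normal(size=500)

def normalized_xcorr(a: np.ndarray, b: np.ndarray) -> tuple[int, float]:
    """Return (lag, coefficient) at the peak of the normalized correlation."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    corr = np.correlate(b, a, mode="full") / len(a)
    lags = np.arange(-len(a) + 1, len(a))
    k = np.abs(corr).argmax()
    return int(lags[k]), float(corr[k])

lag, coeff = normalized_xcorr(temperature, vibration)
print(f"peak correlation {coeff:.2f} at lag {lag} samples")  # lag 12
```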
Publication: SOCRAR: Semantic OCR through Augmented Reality (ACM, 2022-11-11)
To enable people to interact more efficiently with virtual and physical services in their surroundings, it would be beneficial if information could be passed more fluently across digital and non-digital spaces. To this end, we propose to combine semantic technologies with Optical Character Recognition on an Augmented Reality (AR) interface to enable the semantic integration of (written) information located in our everyday environments with Internet of Things devices. We hence present SOCRAR, a system that is able to detect written information from a user's physical environment while contextualizing this data through a semantic backend. The SOCRAR system enables in-band semantic translation on an AR interface, permits semantic filtering and selection of appropriate device interfaces, and provides cognitive offloading by enabling users to store information for later use. We demonstrate the feasibility of SOCRAR through the implementation of three concrete scenarios.
Type: conference paper. Journal: Proceedings of the 12th International Conference on the Internet of Things
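A minimal sketch of the semantic-lookup idea: text recognized by OCR on the AR interface is matched against an RDF model of nearby devices to retrieve an interaction endpoint. The ex: vocabulary, device data, and matching rule are illustrative assumptions; SOCRAR's actual backend and ontology are not reproduced here.

```python
# Minimal sketch; a real system would OCR camera frames rather than take a
# string, and would use a richer ontology than this made-up ex: vocabulary.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDFS

EX = Namespace("http://example.org/iot#")
g = Graph()
g.add((EX.printer1, RDFS.label, Literal("Meeting Room Printer")))
g.add((EX.printer1, EX.controlEndpoint, URIRef("http://example.org/api/printer1")))

def resolve(ocr_text: str):
    """Yield devices whose label occurs in text recognized by the AR OCR."""
    for device, _, label in g.triples((None, RDFS.label, None)):
        if str(label).lower() in ocr_text.lower():
            yield device, g.value(device, EX.controlEndpoint)

print(list(resolve("Out of paper - Meeting Room Printer")))
```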
Publication: Semantic Knowledge for Autonomous Smart Farming (2022-09-14)
Authors: Burattini, Samuele
Type: book section
Publication: Towards Privacy-Friendly Smart Products (IEEE, 2021-12-13)
Authors: Zihlmann, Zaïra; Tamo-Larrieux, Aurelia; Hooss, Johannes
Type: book section
Publication: The Right to Customization: Conceptualizing the Right to Repair for Informational Privacy (2021)
Authors: Tamo-Larrieux, Aurelia; Zihlmann, Zaïra
Type: book section
Publication:
Type: book section. Journal: 2020 IEEE 6th International Conference on Collaboration and Internet Computing (CIC)
Publication: Gaze-based Opportunistic Privacy-preserving Human-Agent Collaboration (ACM, 2024-05-11)
This paper introduces a novel system to enhance the spatiotemporal alignment of human abilities in agent-based workflows. This optimization is realized through the application of Linked Data and Semantic Web technologies, and the system makes use of gaze data and contextual information. The showcased prototype demonstrates the feasibility of implementing such a system; we specifically emphasize the system's ability to constrain the dissemination of privacy-relevant information.
Type: conference contribution
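The kind of sharing constraint mentioned here (and in the GEAR article above, which builds on the Solid specification) can be expressed with the W3C Web Access Control (WAC) vocabulary used in the Solid ecosystem. The sketch below grants a single agent read-only access to a gaze-data resource; the resource and agent URIs are illustrative assumptions, not artifacts from the paper.

```python
# Minimal sketch using the W3C Web Access Control (WAC) vocabulary; the
# URIs below are made up for illustration.
from rdflib import Graph, Namespace, URIRef
from rdflib.namespace import RDF

ACL = Namespace("http://www.w3.org/ns/auth/acl#")
g = Graph()
g.bind("acl", ACL)

auth = URIRef("http://example.org/acl#gazeReadOnly")
g.add((auth, RDF.type, ACL.Authorization))
g.add((auth, ACL.agent, URIRef("https://agent.example/profile#me")))     # one trusted agent
g.add((auth, ACL.accessTo, URIRef("http://example.org/data/gaze.ttl")))  # the gaze log
g.add((auth, ACL.mode, ACL.Read))  # read-only: no acl:Write or acl:Control

print(g.serialize(format="turtle"))
```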