  • Publication
    MRTranslate: Bridging Language Barriers in the Physical World Using a Mixed Reality Point-and-Translate System
    Language barriers pose significant challenges in our increasingly globalized world, hindering effective communication and access to information. Existing translation tools often disrupt the current activity flow and fail to provide seamless user experiences. In this paper, we contribute the design, implementation, and evaluation of MRTranslate, an assistive Mixed Reality (MR) prototype that enables seamless translations of real-world text. We instructed 12 participants to translate items on a food menu using MRTranslate, which we compared to state-of-the-art translation apps, including Google Translate and Google Lens. Findings from our user study reveal that with a fully functional implementation of MRTranslate, participants succeeded in up to 91.67% of their translations whilst also enjoying the visual translation of the unfamiliar text. Although the current translation apps were well perceived, participants particularly appreciated that MRTranslate spared them from grabbing a smartphone and manually entering the text for translation. We believe that MRTranslate, along with the empirical insights we have gained, presents a valuable step towards a future where MR transforms language translation and holds the potential to assist individuals in various day-to-day experiences.
  • Publication
    Designing Grocery Shopping Experiences for Virtual Reality
    Online grocery shopping offers time-saving efficiency and convenience, yet many people still prefer physical shopping for trust in food freshness and other sensory experiences. While online stores are evolving to offer new user experiences, such as supporting eco-friendly or ethical shopping, desktop and mobile platforms limit the engagement of such experiences. Virtual Reality (VR) presents an opportunity to create immersive and rich grocery shopping experiences, closing the gap between the convenience of online shopping and the sensory experience of physical shopping. Yet, designing VR grocery stores remains relatively unexplored. In this paper, we investigate the long-term potential of VR grocery stores, focusing on meeting individual needs. Through a co-design workshop, participants brainstormed designs for product displays, in-shop navigation, shopping carts, and social shopping, among other features. Based on our findings, we provide design recommendations for future VR grocery shopping to develop meaningful alternatives to existing shopping experiences for groceries.
  • Publication
    Being in the Zone: Investigating the Effectiveness of In-Vehicle Multi-Sensory Affective Displays
    (ACM, 2024-06)
    Previous research highlights the benefits of affective interfaces in improving drivers’ emotional state and performance. This poster investigates the impact of a multisensory affective display encompassing visual, auditory, and olfactory stimuli. A preliminary evaluation with eight participants in easy driving conditions informed adjustments to scent intensity and placement. In a subsequent study under more challenging driving scenarios, the majority of participants (14 out of 20) reported minimal awareness of the system, while drivers who self-reported lower skill noticed the system and experienced distraction and cognitive overload. Our findings suggest that the effects of multisensory affective displays on drivers vary and are influenced by driving difficulty, driver skill, and mental state, such as the state of flow. Future research is encouraged to explore novel interface designs that consider drivers' skills and cognitive states to adapt accordingly.
  • Publication
    Gaze into Fintech: Assessing the Influence of Financial Literacy on Interaction Behaviour Using Eyetracking
    (ACM, 2024-06)
    Andrin Benz
    Financial technology (fintech) has become more accessible to diverse users, including finance practitioners and non-experts. We conducted a usability test with six participants, comprising finance practitioners and non-finance experts, evaluating a fintech app using eye-tracking technology. Our analysis revealed that participants with limited financial knowledge had difficulty understanding financial information and navigating the software, indicated by a higher revisitation rate on financial labels. We also observed greater cognitive load or uncertainty among non-practitioners, evidenced by gaze-click discrepancies. These insights aim to inform the design of more accessible fintech applications.