Blind FLM: An Enhanced Keystroke-Level Model for Visually Impaired Smartphone Interaction
Abstract
The Keystroke-Level Model (KLM) is a predictive model that estimates how long an expert user takes to accomplish a task. KLM has been used successfully to model conventional desktop interactions; however, it does not fully capture smartphone touch interactions or accessible interfaces (e.g., screen readers). The Fingerstroke-Level Model (FLM), on the other hand, extends KLM to describe and assess mobile game applications, which makes it a candidate model for predicting smartphone touch interactions. This paper further extends FLM for visually impaired smartphone users. An initial user study identified the basic elements of blind users' interactions, which were used to extend FLM; the new model is called "Blind FLM". An additional user study was then conducted to determine whether the new model can describe blind users' touch interactions with a smartphone and to measure its accuracy. The evaluation showed that Blind FLM can predict blind users' performance with an average error of 2.36%.
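To make the prediction mechanism concrete, the sketch below illustrates how KLM/FLM-style models estimate task time: a task is encoded as a sequence of operators, and the predicted time is the sum of their unit times. The operator symbols and unit times shown are the classic desktop KLM values (Card, Moran, and Newell) used purely as placeholders; the Blind FLM operator set and its empirically derived times are defined in the paper itself, and the example task and error calculation are hypothetical.

```python
# Illustrative KLM/FLM-style prediction: task time is the sum of unit times
# for the operators in an encoded task sequence. The values below are the
# classic desktop KLM operator times, used only as placeholders; Blind FLM
# defines its own operators and empirically measured unit times.

KLM_OPERATOR_TIMES = {
    "K": 0.20,   # keystroke / button press
    "P": 1.10,   # point at a target
    "H": 0.40,   # home hands between devices
    "M": 1.35,   # mental preparation
    "R": 0.0,    # system response time (task-specific; 0 as a placeholder)
}

def predict_task_time(sequence, operator_times=KLM_OPERATOR_TIMES):
    """Predicted expert completion time (seconds) for a task encoded as a
    sequence of operator symbols, e.g. ["M", "P", "K"]."""
    return sum(operator_times[op] for op in sequence)

def average_percentage_error(predicted, observed):
    """Mean absolute percentage error between predicted and observed times,
    the kind of summary statistic reported for Blind FLM (2.36% on average)."""
    pairs = list(zip(predicted, observed))
    return 100.0 * sum(abs(p - o) / o for p, o in pairs) / len(pairs)

if __name__ == "__main__":
    # Hypothetical task: mentally prepare, point at a target, then tap it.
    print(predict_task_time(["M", "P", "K"]))          # 2.65 seconds
    print(average_percentage_error([2.65], [2.70]))    # ~1.85 %
```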