Published Research · HCI · IJHCI 2024

From Parking Meters
to Vending Machines

A mixed-methods usability study of self-service technologies in the urban built environment, surfacing the pervasive design failures that frustrate everyday interactions.

7 Self-Service Technologies · 30 Participants · 9 Global Themes · Published IJHCI 2024

The Problem

The machines we use every day are badly designed. Nobody was studying them.

Self-service technologies are everywhere. Parking meters, ATMs, ticket machines, grocery checkouts - they are the invisible infrastructure of urban life. And they are consistently, frustratingly hard to use. Despite their ubiquity, the HCI community had largely left them unstudied as a category. Most research focused on single devices or specific redesigns, leaving the systemic, cross-device issues unexamined.

This study set out to change that. We studied seven diverse SSTs in their real, operational environments to understand not just where things went wrong, but why - and what kinds of design failures showed up across the whole class of technology.

My contributions: research design, thematic analysis, and data analysis design.

Research Design
Contributed to the design of the mixed-methods study protocol - shaping how participants interacted with each SST and how data was structured for analysis.
Thematic Analysis
Participated in the multi-phase inductive thematic analysis - independently coding transcripts in NVivo, then collaborating across a three-day workshop to build and refine the global theme structure.
Data Analysis Design
Contributed to the design of the data analysis approach - how qualitative themes and quantitative SUS scores were triangulated to surface coherent, cross-device insights.

Methodology

Mixed methods, real environments, real people.

Thirty participants walked a set urban route and interacted with seven SSTs in their natural settings. Sessions combined task-based think-alouds, System Usability Scale evaluations, semi-structured interviews, and researcher field notes - all audio and video recorded for analysis.

Phase 1
Field Sessions
Participants completed pre-set tasks with each SST while thinking aloud. GoPro recordings captured both the interface and participants' physical responses - body language, confusion, frustration.
Phase 2
SUS + Interviews
After each SST, participants completed a System Usability Scale survey and a semi-structured interview. This paired quantitative usability scores with rich qualitative reasoning about the experience.
Phase 3
Collaborative Analysis
Three researchers independently coded transcripts in NVivo, then collaborated across a three-day workshop to confirm, consolidate, and elevate 33 SST-specific themes into 9 global patterns.

Not one SST scored as "good." Not one.

The System Usability Scale treats anything below 68 as below average, and 80+ as good. Every single SST in our study scored below 60. 84% of individual user responses fell into D or F grade ranges.
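For readers unfamiliar with how these scores arise: a SUS score is computed from ten alternating Likert items (1-5), where odd-numbered (positively worded) items contribute (response − 1), even-numbered (negatively worded) items contribute (5 − response), and the sum is scaled by 2.5 onto a 0-100 range. A minimal sketch of the standard calculation:

```python
def sus_score(responses):
    """Compute a 0-100 System Usability Scale score.

    responses: ten Likert ratings (1-5), alternating positively
    worded (odd items) and negatively worded (even items).
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten ratings between 1 and 5")
    total = 0
    for i, r in enumerate(responses, start=1):
        # Odd items: higher agreement is better; even items: reversed.
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

# A fully neutral response (all 3s) yields 50 - well below the
# 68-point average benchmark.
print(sus_score([3] * 10))  # -> 50.0
```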

Train Ticket Machine (TTM) · Best performer · Mean SUS 56.57 · Grade D
Shopping Centre Directory (SCD) · Mean SUS 56.23 · Grade D
Fast-Food Kiosk (FFK) · Mean SUS 55.52 · Grade D
Grocery Store Checkout (GSCK) · Mean SUS 53.62 · Grade D
Automatic Teller Machine (ATM) · Mean SUS 49.91 · Grade F
Drinks Vending Machine (DVM) · Mean SUS 42.37 · Grade F
Parking Meter (PM) · Worst performer · Mean SUS 29.25 · Grade F

84% of user responses fell in D or F grade ranges.
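The letter grades attached to mean SUS scores follow a curved grading scale. Assuming the commonly used Sauro-Lewis cutoffs (F below 51.7, D below 62.7, C below 72.6, B below 78.9 - these thresholds are an assumption about the grading convention, not figures stated in the study), the mapping can be sketched as:

```python
# Assumed Sauro-Lewis curved grade bands (upper-exclusive cutoffs);
# not taken from the paper itself.
GRADE_BANDS = [(51.7, "F"), (62.7, "D"), (72.6, "C"), (78.9, "B")]

def sus_grade(score):
    """Map a 0-100 mean SUS score to a letter grade."""
    for cutoff, grade in GRADE_BANDS:
        if score < cutoff:
            return grade
    return "A"

# With these cutoffs, the best performer (TTM, 56.57) still only
# earns a D, and the parking meter (29.25) is deep in F territory.
print(sus_grade(56.57), sus_grade(29.25))
```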

Thematic Analysis

33 SST-specific themes. 9 patterns that crossed every device.

Independent coding of interview transcripts and field notes surfaced 33 device-level themes. Through collaborative workshop analysis, these were consolidated into nine global themes that described the recurring design failures cutting across all seven SSTs.

Theme 01
Clarity of Guidance
SSTs failed to give users timely, meaningful instruction. Verbose text was ignored; icons were misread. Users were left to figure it out by trial and error.
Theme 02
Confidence and Trust
Poor feedback eroded users' trust that the machine was doing what they intended. When money was involved, even minor ambiguity felt threatening.
Theme 03
Interface Cohesion and Capability
Fragmented input and output mechanisms - buttons on one component, feedback on another - left users unsure where to look and what had happened.
Theme 04
Efficiency and Legibility
Low-contrast text, environmental glare, small font sizes, and cumbersome input mechanisms made interactions slow and physically difficult.
Theme 05
Feedback
SSTs routinely failed to confirm successful actions. Users completed steps without knowing if they had worked - and errors provided little guidance for recovery.
Theme 06
Recoverability
When things went wrong, most SSTs provided no path to self-recovery. The need for human assistance was frequent, and often embarrassing in public.
Theme 07
Social Pressure
Being watched - by a queue, passersby, or cameras - amplified every usability issue. Participants became more anxious and more error-prone under observation.
Theme 08
Assumed Knowledge
Several SSTs presumed familiarity with the technology or surrounding context - like knowing the difference between light and heavy rail - that many users simply didn't have.
Theme 09
Comprehensibility and Accessibility
Poor information hierarchy, absent wayfinding, and a near-total lack of non-visual affordances disadvantaged users with varying physical and cognitive abilities.
Participant Quote
"It was really unclear, and they have made no attempt to make it clear. I feel like I have been tricked."
Participant P29, on the Drinks Vending Machine

Two root causes behind all of it.

The nine themes pointed to two underlying design failures: systems that overwhelmed users' cognitive bandwidth with too much or too little information at the wrong time, and interactions that systematically destroyed the trust users needed to complete their task.

Cognitive load
The lowest-scoring SSTs had the most fragmented interfaces. When input and feedback were separated across physical space, users' attention couldn't keep up. Progressive disclosure - revealing information only as needed - was identified as the key design remedy.
Fragile trust
Trust between a user and an SST is earned slowly and broken instantly. When machines failed to confirm actions, gave ambiguous feedback, or involved money, users felt unsafe and lost confidence rapidly. Psychological safety must be at the center of every SST interaction.
Design neglect
The pervasive poor usability of SSTs is not an accident - it reflects decades of design disengagement with this class of technology. SSTs are not a transitional technology on the way to automation; they are an interaction modality with millions of daily users that deserves serious design attention.

Impact

Published in the International Journal of Human-Computer Interaction, 2024.

The first broad usability study to examine self-service technologies as a category, rather than as isolated devices. The findings provide a foundation for sector-wide design standards and evidence that SST usability is a solvable problem - with the right design investment.

4,785 Article views · 12 Citing articles · IJHCI Vol. 40, No. 16 · Open Access publication
Full Citation
Henderson, H., Grace, K., Gulbransen-Diaz, N., Klaassens, B., Leong, T. W., & Tomitsch, M. (2024). From Parking Meters to Vending Machines: A Study of Usability Issues in Self-Service Technologies. International Journal of Human-Computer Interaction, 40(16), 4365-4379. doi.org/10.1080/10447318.2023.2212228