Leon Paternoster

Kiosk testing: Different users, different results

We don’t read enough about the results of kiosk testing, maybe because people don’t kiosk test enough. But Thomas wrote an excellent post on some last-minute tests on the MoMu website, sharing his methodology and five findings.

I can’t recommend this testing enough. It’ll snag major usability problems, challenge your assumptions and help you get into users’ minds. It’s also relatively easy to set up, so you can use it to test any website change, not just a whole new release.

You’ll want to test your website’s “typical” users rather than just anyone (although testing anyone is still useful for identifying problems that will trip everyone up), or, worse still, whoever’s paying for the site. We all want to build websites usable by anyone, and your marketing department will no doubt have a new audience to target (which is probably younger than your current audience). But if you run a website, you at least need to be aware of who is using it at the moment, and whether a change is going to confuse them.

Which is why I found Thomas’s first finding, that hiding navigation is totally OK, interesting: it contradicts lots of testing I’ve done. Interpreting kiosk test results can take some thought. On the MoMu website the navigation menu toggle button is very clearly styled with a nice big drop shadow, and uses a label rather than an icon. Perhaps the results would have been different if they’d used a standard hamburger icon.

[Image: top of the MoMu website with a ‘Menu’ button in the top right-hand corner]

The MoMu navigation menu toggle

Nonetheless, I suspect the Suffolk Libraries audience may have made a difference. It’s older, and perhaps less comfortable with toggles and switches, especially ones that haven’t been styled clearly. We therefore use toggles very sparingly on our site: off the top of my head, there’s just a search icon at narrow widths and accordions in event listings.

The point is you’re testing something in context. How well is it designed? Who’s using it? Does everyone experience it in the same way? Change any of these factors, and you’ll likely get a different result – even if some findings are more relevant to all users than others.