We’re pretty fortunate. Even as many places shut down transport services, we partnered with Toyota Mobility Foundation (TMF) to launch new transport services for healthcare workers in three new markets — all in a span of 2–6 weeks.
Bangkok, Manila, Jakarta — three of the toughest but most lucrative mega-cities in Southeast Asia. It was our privilege to work on such a meaningful project, but also a nail-biting experience. I felt like we were flying blind.
Before Covid-19 struck, we typically took 2–3 months to pull the trigger on launching in a new city. We would carefully work with our clients to study the current transportation landscape, fly down to observe existing transport services, and conduct user research to understand behaviours and expectations in each market. This would be followed by a series of road tests to calibrate our speed maps and refine our service parameters (I’ll write about that another time).
With the TMF deployments, we trained the bus operators and drivers remotely, and trusted our partners on the ground to get things up and running and ensure that eligible users found out about the service.
We’re planning some new features and changes to the booking flow for our Just in Time product line and wanted to test them with our users in Manila.
Since we can’t travel right now, we decided to try remote usability testing with Maze. Maze allows you to set up a prototype and send it to testers, who can then follow the instructions on screen and complete a series of tasks and/or answer questions entirely remotely.
Maze lets you set up “Missions”, which track task completion against an ideal path you define. It also lets you pair questions with screens to test user understanding.
At the end of the testing, Maze generates a report with heat maps, screen view time and other statistics from which you can gather insight into the overall usability of various flows, points of friction, and areas where users got tripped up.
For missions, it provides statistics on task completion (“Success”), splitting this into “Direct Success” and “Indirect Success” — the former indicates that users followed your ideal path and the latter indicates that they deviated (“Off-Path”) but managed to complete the task nonetheless.
I was quite worried that no one would bother tapping on our notification and going through the 10 missions/questions which we’d set, so I was thrilled when we got 51 click-throughs and 39 individuals who spent enough time to go through all of the 5 missions and 5 questions.
We currently have 785 registered users in Manila, but only about 300 of them regularly use our service (the rest fall outside our service area, unfortunately), so depending on how you calculate it, we had either a 6% or a 16% response rate. To put this in context, the typical response rate for B2C surveys is 13%–16%, while mobile surveys typically only get a 3%–5% response.
I think our shameless click-bait title helped.
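For the curious, here is a rough sketch of where a 6%-to-16%-ish range can come from, depending on which numerator (click-throughs vs. full completions) and which denominator (registered vs. regularly active users) you pick. The pairing of numbers below is my assumption, not something the post spells out, and the post’s own rounding may differ slightly:

```python
# Assumed figures, all taken from the post itself.
clicks = 51        # users who tapped the notification
completes = 39     # users who finished all 5 missions and 5 questions
registered = 785   # registered users in Manila
active = 300       # users who regularly use the service

def rate(numerator: int, denominator: int) -> float:
    """Response rate as a percentage, rounded to one decimal place."""
    return round(100 * numerator / denominator, 1)

print(rate(clicks, registered))     # click-throughs over all registered users
print(rate(clicks, active))         # click-throughs over regularly active users
print(rate(completes, registered))  # full completions over registered users
print(rate(completes, active))      # full completions over active users
```

Whichever pairing you prefer, the takeaway is the same: against typical survey benchmarks, the turnout was strong.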
We got some good insight into which of our new flows had higher friction, and the heat maps were really useful in helping us understand where users were getting confused — or what they expected to do to complete the task.
The other interesting insight was that users were learning through the repeated tasks. We had two similar booking tasks, one near the start and one near the end of the missions section — average time on task decreased by over 30% for the second task compared to the first. We also saw a decline in wrong taps, with only 6 across all users on the second task.
I was really impressed overall with the Maze platform. I find it extremely relevant to our times and useful for remote usability testing at scale.
Ironically though, it made me feel the loss of in-person usability testing even more. When users didn’t behave as we’d expected, we weren’t there listening to them talk through what they were doing, and there was no way to probe more deeply into why they did what they did.
Post-Covid, I can still see us using remote testing to home in on the areas we need to study more deeply through face-to-face testing or video calls. Alternatively, we might use it for a final round of larger-scale validation before we start development.
If you’re on the fence regarding remote usability testing, I’d say it’s definitely worth a shot. Best of all, our first test fit comfortably within the free tier for Maze so we got to try it out without even having to commit to purchasing it (it’s $25/seat for the first paid tier).