Introduction

For the past 15 years, customer satisfaction has been measured in largely the same way. Surveying consumers with an NPS (Net Promoter Score) question has proven effective and reliable. However, in a more complex world, it has become necessary to look further and deeper. In addition, predicting customer behavior has become an increasingly important need for companies, and surveys can be a valuable source of data for it.

Using a bike-sharing service as an example, I illustrate how traditional customer satisfaction metrics can be improved, and I validate the proposed solutions through user tests.

Problem Statement

Speaking about the NPS survey, let us imagine we received 1,000 customer surveys and a score of 35: 50% promoters, 15% detractors and 35% passives. How can passives be turned into promoters? What can be done about the detractors? How can the score, and the customer experience behind it, be improved?
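
For clarity, the arithmetic behind that score looks like this; the snippet below simply restates the illustrative numbers above in Python.

```python
# NPS = % promoters - % detractors; passives dilute the score but do not count.
responses = 1000
promoters = 500   # rated 9-10 (50%)
passives = 350    # rated 7-8 (35%)
detractors = 150  # rated 0-6 (15%)

nps = (promoters - detractors) / responses * 100
print(nps)  # 35.0
```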

NPS is about learning and improving, and a number alone will not provide the information necessary to improve the customer experience. One needs to go deeper to understand the reasons behind a score. One needs to have a conversation. The question being asked can vary depending on the customer's previous answers: if they are unhappy with delivery, the survey should automatically route them to relevant follow-up questions. Not only does this enable better analysis, it also shows respondents that the company listens to their feedback.
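
As a minimal sketch of what such conditional routing could look like, here is a hypothetical Python router; the question texts and complaint topics are my own placeholders, not taken from any real survey.

```python
# Hypothetical conversational-survey routing: the next question depends on
# the previous answers instead of following a fixed questionnaire.
FOLLOW_UPS = {
    "bike": "What was wrong with the bike itself?",
    "app": "Where did the app get in your way?",
    "delivery": "What went wrong with the delivery?",
}

def next_question(score: int, complaint_topic: str = "") -> str:
    if score >= 9:
        return "Great! What did you like the most?"
    if complaint_topic in FOLLOW_UPS:
        return FOLLOW_UPS[complaint_topic]
    return "Could you tell us a bit more about what we could improve?"

print(next_question(4, "delivery"))  # routes straight to the delivery follow-up
```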

Brainstorming

Before designing a customer survey for a bike-sharing company, I conducted detailed research on the topic. It covered common problem areas of bike-sharing from the user perspective and the reasons why users might not enjoy their rides. To get started quickly, I organized a brainstorming session with peers who use bike-sharing daily and posed the following question: 'What problems can you anticipate, or have you experienced, before renting a bike, during your ride, or after it?' After 30 minutes, I collected written responses from the group and grouped the user problems into five categories: 1) bike, 2) external conditions, 3) app, 4) service, 5) branding & emotions.

Low fidelity Wireframe

My work focuses on mobile surveys; thus, all wireframes are for mobile devices. First, I created wireframes for testing purposes. I compiled a list of screens to cover all possible scenarios, namely the problem areas of a bike-sharing service discovered at the workshop. I began with grayscale wireframes to detail the flows, and I also made a low-fidelity prototype to test the idea with users and fix potential problems at an early stage. The main purpose of these wireframes was to create multiple flows and explore different ideas. Before moving on to high-fidelity visual designs and implementation, I created five prototypes to understand how the new solutions work. These prototypes were used to run the first round of user tests. I decided to run the tests with users who cycle daily and are familiar with NPS and other customer surveys, so that I could check which flow performs better. Some of the flows use a 0–10 scale as in NPS, others a 0–5 scale or stars. The overall goal was to understand what works better for users.
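
To keep the variants comparable, the five flows can be described as small configurations. The breakdown below is only a hypothetical sketch: the real prototypes mixed these scales, but the exact mapping of scale to flow shown here is assumed.

```python
# Hypothetical summary of the five prototype flows and their rating scales.
flows = [
    {"name": "flow_1", "scale": "0-10"},   # NPS-style scale
    {"name": "flow_2", "scale": "0-10"},
    {"name": "flow_3", "scale": "0-5"},
    {"name": "flow_4", "scale": "stars"},  # 1-5 stars
    {"name": "flow_5", "scale": "stars"},
]
```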

To measure the success of these improvements, I tracked the following (a rough way to compute them is sketched after the list):

- how well each flow attracts the user's attention;
- how many users finish each flow, and how many get bored and drop out;
- how much information users provide in the survey.
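
The sketch below shows one way these metrics could be computed from raw survey sessions; the field names and sample records are assumptions for illustration, not an existing analytics schema.

```python
# Hypothetical survey-session records; field names are illustrative only.
sessions = [
    {"flow": "A", "opened": True, "completed": True,  "answers": 6},
    {"flow": "A", "opened": True, "completed": False, "answers": 2},
    {"flow": "B", "opened": True, "completed": True,  "answers": 4},
]

def flow_metrics(flow: str) -> dict:
    subset = [s for s in sessions if s["flow"] == flow]
    opened = [s for s in subset if s["opened"]]
    completed = [s for s in opened if s["completed"]]
    return {
        "open_rate": len(opened) / len(subset),            # proxy for attention
        "completion_rate": len(completed) / len(opened),   # 1 - drop-out rate
        "avg_answers": sum(s["answers"] for s in opened) / len(opened),
    }

print(flow_metrics("A"))  # {'open_rate': 1.0, 'completion_rate': 0.5, 'avg_answers': 4.0}
```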

High fidelity Prototype



A/B Testing

For my tests, I needed to find respondents who use bike-sharing. The survey should appear after the user locks their bike, so I decided to spend some time asking people who were returning their bikes in the city center of Berlin. Before asking any questions, I explained my motivation to the respondents so that they would engage. I also conducted some user tests remotely, with the help of my Russian friends, who submitted the surveys after returning their shared bikes; this let me collect opinions from respondents in different countries. The task was formulated as follows: 'You are returning your bike. Please fill in a survey about your ride. Feel free to answer the questions you want. If you do not wish to answer any questions, please close the survey.' Ultimately, I ran the test with 32 users, eight of whom participated in both surveys. Upon completion of the survey, participants were also asked about their visual satisfaction with the questionnaire and whether they enjoyed its simplicity. Another question was whether users would complete the survey in the real world if it appeared in the app after returning a bike, and how often it could appear before it started to annoy them.

Bubble

I want to build a prototype of a service that specializes in creating this different kind of survey. I named it «Bubble».

Sketches

I usually start the design process with low-fidelity sketches; this is how I explore the more technical aspects of the design. I sketched a draft of the service on paper, with the elements and screens necessary for users' goals, to see whether the idea works. I concentrated only on the desktop version of the service.



High fidelity Prototype

Conclusion

Traditional online surveys are losing relevance, but that does not mean customer feedback is no longer valuable. With the example of a manually created conversational survey for a bike-sharing company, I showed that it is possible to offer a more engaging and personalized survey experience, which leads to higher completion rates and better insight into user needs. Conversational surveys can be used to track and improve customer satisfaction by measuring customer emotion. They allow us to go beyond the metrics and gain a deeper understanding of user responses.