Quantifying the UX of Customer Communications
Utilising quantitative experience metrics to evaluate improvements
Project Overview
- Client: Superannuation Company
A superannuation company wanted to improve the communications experience of their customer care service. We used quantitative experience metrics to evaluate whether our re-designed communications were improving across two rounds of user testing.
My Role
- As associate director, I played an oversight role which included research methods planning, project management and stakeholder management. I mentored a colleague through research execution, research analysis and writing the final technical report.
Objective
Due to the volume and complexity of customer inquiries, the superannuation customer care team uses approximately 108 email templates. The majority are text-heavy and use language that presumes recipients are financially literate.
The efficacy of email responses is not formally tracked, so it is not known how the emails affect the customer experience.
We set out to understand how customers engaged with the emails and to form a set of communication principles to inform future written communications. We also used the principles to design new email communications and tested them against the old emails using experiential metrics.
Research Overview
The following represents an overview of the program:
- Facilitated a stakeholder workshop to align our understanding of the problem space.
- Conducted user testing research with existing members to evaluate current email responses.
- Developed a set of customer communication principles based on research.
- Re-designed email responses based on new customer communication principles and research.
- Conducted a second round of user testing research to validate whether the re-designed email responses improved the customer experience.
Round 1 User Testing
Round 1 user testing consisted of 20 participants evaluating 3 sample current email responses. For each email, we used scenario-based testing to approximate an authentic customer experience. Each participant was given a sample query and asked to imagine that they had submitted it through the online form. We captured their expectations of the email response before showing the customer care reply on screen in an email client, as if it had arrived in their own inbox. The queries and email responses were real scenarios sourced from the customer care centre’s database.
We evaluated the experience of each email response using both quantitative and qualitative methods.
Quantifying the User Experience
Evaluating the experience of written communications is an unusual task and is not well documented in UX practice. I had to determine an appropriate method to measure the experience of receiving customer care emails. I evaluated candidate methods on their scientific validity and settled on the AttrakDiff questionnaire, which is grounded in academic research by Marc Hassenzahl. AttrakDiff measures the pragmatic, hedonic and attractive qualities of a product. The questionnaire presents participants with 28 pairs of opposite adjectives, each juxtaposed on a Likert-type scale. For each pair of words, participants marked the point on the scale that best described their experience.
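To make the scoring mechanics concrete, the sketch below shows one way AttrakDiff-style responses could be aggregated into dimension scores. The 7-point recoding, word pairs and dimension groupings are illustrative assumptions rather than the actual questionnaire items.

```python
# Illustrative sketch: aggregating semantic-differential responses into
# AttrakDiff-style dimension scores. Assumes a 7-point scale recoded to
# -3..+3 (negative = negative adjective, positive = positive adjective).
# Word pairs and dimension groupings are hypothetical, not the real items.
from statistics import mean

# One dict per participant: word pair -> score in [-3, 3]
responses = [
    {"confusing_clear": 1, "complicated_simple": 2,
     "dull_captivating": -1, "unpleasant_pleasant": 0},
    {"confusing_clear": 2, "complicated_simple": 1,
     "dull_captivating": 0, "unpleasant_pleasant": 1},
]

# Hypothetical mapping of word pairs to AttrakDiff dimensions
dimensions = {
    "pragmatic": ["confusing_clear", "complicated_simple"],
    "hedonic": ["dull_captivating"],
    "attractiveness": ["unpleasant_pleasant"],
}

for name, items in dimensions.items():
    scores = [r[item] for r in responses for item in items]
    print(f"{name}: mean = {mean(scores):+.2f}")
```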
We also asked participants to rate their satisfaction with the email response, so that we could cross-reference their satisfaction with the AttrakDiff experiential qualities.
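As a sketch of that cross-referencing step, the snippet below correlates satisfaction ratings with per-participant AttrakDiff dimension means using a Pearson correlation; all numbers are invented for illustration.

```python
# Sketch: cross-referencing satisfaction ratings with an AttrakDiff
# dimension. All data below is invented for illustration.
from scipy.stats import pearsonr

satisfaction = [3, 4, 2, 5, 4, 3, 5, 2]  # 1-5 satisfaction ratings
pragmatic = [0.5, 1.2, -0.8, 2.1, 1.0, 0.2, 1.8, -0.5]  # per-participant means

r, p = pearsonr(satisfaction, pragmatic)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```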
To further investigate the content, participants were given a printout of the email response and asked to highlight items they liked and disliked, so that we could evaluate how well the email served them. The activity not only gave the researcher a visual prompt for probing participants’ thoughts about specific content in the email, but also provided a visual summary of where the email supported the customer and where it fell short.
I overlaid all participant highlighter activities for a particular email in Photoshop to visually demonstrate the combined strengths and weaknesses of the email.
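That overlay step could equally be scripted; the snippet below is a rough Pillow equivalent, assuming each participant's highlights were scanned to same-sized RGBA images with semi-transparent marks (the file names are hypothetical).

```python
# Rough scripted equivalent of the Photoshop overlay, using Pillow.
# Assumes each participant's highlighted email was exported as a
# same-sized RGBA image with semi-transparent highlight marks.
from PIL import Image

# Hypothetical file names: one highlight layer per participant
paths = [f"highlights_p{i:02d}.png" for i in range(1, 21)]

first = Image.open(paths[0]).convert("RGBA")
combined = Image.new("RGBA", first.size, (255, 255, 255, 255))  # white canvas

for path in paths:
    layer = Image.open(path).convert("RGBA")
    # Semi-transparent marks stack, so areas highlighted by many
    # participants come out more saturated in the combined image
    combined = Image.alpha_composite(combined, layer)

combined.save("combined_highlights.png")
```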
Analysis
Data recorded during the 20 user testing sessions was grouped into recurring observations, which informed insights. These insights were further synthesised into major themes and, along with supporting quantitative data, used to generate 9 Customer Communication Principles.
The Principles were also informed by the widely recognised Usability Heuristics for User Interface Design, developed by Jakob Nielsen and Rolf Molich and adapted here to apply to email communications.
New Email Prototypes
Based on the user testing insights and the Communication Principles, we re-designed the customer query email responses to evaluate in a second round of user testing.
We collaborated with the client to ensure the redesigned emails’ content was realistic before they were tested with the participants.
Round 2 User Testing
The newly designed emails were evaluated in a second round of user testing. We conducted the same research activities that were employed in the first round to compare the customer experience between the two different designs. The second round of user testing consisted of:
- Placing the participant in scenario-based tasks.
- Asking participants to rate their satisfaction with the customer care email response they received in the scenario.
- Using the AttrakDiff questionnaire to evaluate dimensions of the experience.
- Using the highlighter activity to prompt discussion.
Analysis
With two rounds of user testing complete, we were able to evaluate whether our newly designed emails were better serving customers’ needs.
Satisfaction Compared
The graph to the right provides a comparison of the satisfaction ratings participants gave for the original emails in Round 1 and the re-designed emails in Round 2.
Comparing the satisfaction scores with a 2-sample t-test shows that the re-designed emails were rated significantly higher than the original emails at the 98% confidence level.
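For reference, the snippet below is a minimal sketch of that test using scipy; the two sets of ratings are invented stand-ins for the real data.

```python
# Sketch of the 2-sample t-test comparing satisfaction ratings between
# rounds. Ratings below are invented stand-ins for the real data.
from scipy.stats import ttest_ind

round1 = [2, 3, 3, 2, 4, 3, 2, 3, 4, 2, 3, 3, 2, 4, 3, 3, 2, 3, 4, 3]
round2 = [4, 4, 5, 3, 4, 5, 4, 3, 5, 4, 4, 5, 3, 4, 4, 5, 4, 4, 3, 5]

t_stat, p_value = ttest_ind(round2, round1)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A p-value below 0.02 corresponds to significance at the 98% level
if p_value < 0.02:
    print("Difference is significant at the 98% confidence level")
```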
Experience Compared
The graph illustrates the AttrakDiff results comparing perceptions of the original emails to the re-designed emails.
Data points closer to the left of the graph represent an association with negative adjectives, whereas points to the right represent an association with positive adjectives.
The overall positive shift from Round 1 to Round 2 shows that participants described the re-designed emails with more positive adjectives than the original emails.
The graph illustrates 90% confidence intervals for each of the qualities.
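As a sketch of how such an interval can be derived, the snippet below computes a 90% confidence interval around the mean score for a single adjective pair using a t-distribution; the scores are invented.

```python
# Sketch: 90% confidence interval around the mean score for one
# adjective pair, using a t-distribution. Scores are invented.
import numpy as np
from scipy import stats

scores = np.array([1, 2, 0, 1, 2, 1, 3, 1, 2, 0, 1, 2, 1, 1, 2, 0, 2, 1, 1, 2])

mean = scores.mean()
sem = stats.sem(scores)  # standard error of the mean
low, high = stats.t.interval(0.90, df=len(scores) - 1, loc=mean, scale=sem)
print(f"mean = {mean:.2f}, 90% CI = [{low:.2f}, {high:.2f}]")
```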
Outcomes
During Round 2 testing, participant data indicated that the re-designed emails upheld the Customer Communication Principles.
The AttrakDiff and satisfaction scores from Round 2 user testing can be used as a benchmark to monitor how the customer experience is impacted as additional changes to communications are implemented.
The Customer Communication Principles will be used to formulate new email template responses to replace the former templates. Managers can now apply quality assurance by checking that their staff’s email responses adhere to the Principles.
Finally, this project marked the first application of experience metrics to evaluate customer care communications. Since this project, our company has focused on developing more scientific ways to quantify the customer experience for a diverse range of products.