Common Pitfalls and Considerations When Evaluating Experiences

With the ever-increasing focus on Customer Experience, we are seeing a rise in the number of companies incorporating some form of experience evaluation for their concepts and prototypes during the creative process, in an effort to iterate rapidly and support better decision-making.

Capturing and using target-audience insight is an essential part of the design process. From input on strategic direction and pain points to feedback on concepts and adoption barriers, this kind of data plays a crucial role across the whole lifecycle.

When conducted well, evaluations and the data they provide can be incredibly powerful to an organization, affording it the ability to:

  • Gather insights from the target audience so that designers and developers can see points of view they may not have considered.
  • Provide a way to separate leadership or team-member ego and opinion from what needs to be done for success.
  • Enable a focus on where to spend effort, short-term and long-term, for better prioritization and optimization.
  • Stand as a metric that drives common, aligned cross-functional behavior towards a shared goal.

From expert reviews and user testing to design analytics tools, there are multiple evaluation techniques that can be utilized. The most important point, however, is to build into your test design exactly how you will review the experience you are providing, depending on its purpose and your constraints, such as time or code-freeze dates.

Here are some things to think about when you are incorporating testing with the target audience for your concepts or prototypes:

Avoid these common pitfalls

While these may seem obvious, we unfortunately see these three mistakes come up time and time again with teams charged with user testing. If you are new to user testing, or have just been handed the task, make sure you:

Do not just ask participants if they agree or disagree

Telling the participant what you think and then asking whether they agree or disagree makes it less likely that you will capture their open thoughts. You want to know what is inside their head, not what they think you want to hear! Keep your language as neutral as possible. More importantly, notice what they do or do not do, as well as their body language, to gain deep insight into how they really feel about your experience.

Do not ask participants to rate it out of 5 as your metric of success in small, rapid user tests:

Often, management or leadership do not look at the in-depth report from such sessions; they are more interested in the "easy to digest" overview. You know -- some metric that they can use to make a decision, plus a few bullet points. That metric should not be the number of participants who rated your concept 4 or 5 out of 5. While this can prove to be a quick and easy-to-digest metric, there are too many nuances to consider -- and bad data leads to bad decisions.

It is your job, as the person in charge of bringing the voice of the user to the decision table, to formulate valid data in an easy-to-digest manner. One nuance we often see is people using the 5-point scale when showing the participant a concept for something they do not have today. If a user does not have this solution today, and there is some form of benefit no matter how small, they could easily just say 4 or 5 -- why not? They may also say it simply so as not to hurt your feelings, especially if you designed the concept yourself. Your measure is better off when it is defined to align with what success means to the target audience for that particular experience. Let's all take a moment to remember Walmart's $1.85 billion mistake of basing decisions on the wrong questions.
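To make that nuance concrete, here is a minimal sketch in Python, using entirely hypothetical session data and success criteria, of how the same eight sessions can tell two very different stories depending on whether you report a top-box rating or a measure aligned with what success means to the user:

```python
# Hypothetical illustration: the same eight sessions summarized two ways.
# "Top-box" counts 4-or-5 ratings; the success-aligned metric counts only
# sessions where the participant completed the core task and showed real
# intent to adopt -- criteria defined before the test, not after.

sessions = [
    # (rating_out_of_5, completed_core_task, would_adopt)
    (5, True, True),
    (4, False, True),   # liked the idea, but could not complete the task
    (5, True, False),   # polite rating; no real intent to adopt
    (4, True, True),
    (5, False, True),
    (3, True, True),
    (4, True, False),
    (5, True, True),
]

top_box = sum(1 for rating, _, _ in sessions if rating >= 4) / len(sessions)
success = sum(1 for _, done, adopt in sessions if done and adopt) / len(sessions)

print(f"Top-box (rated 4 or 5): {top_box:.0%}")  # 88% -- looks nearly done
print(f"Success-aligned:        {success:.0%}")  # 50% -- the real story
```

The rating-based summary suggests the concept is almost finished; the success-aligned one shows that half the participants would not actually succeed with, or adopt, the solution.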

Do not write down only what you think about what they are saying:

Write down, or if you can, record everything. Do not write down only what you have analyzed in your own head -- leave the analysis for later. There are two reasons why people who do not write down what the participant is actually saying have a negative effect on results:

1. They are prone to missing the next statement or reaction.
2. They end up working from their own thoughts and delivering those to the team as the user's perspective.

This is not to say that you shouldn't use shorthand; just do not interpret your participant's meaning in the moment.

Some things to think about if you are planning a user test with a concept

Think about what you are measuring:

Perception is a user's reality. Experience is the perception of their interactions with the solution or brand. So, think about how you will understand their perception and whether you are meeting their expectations, without leading them. Also, make sure you first understand what the success factors are for the experience you are evaluating, and then define how and what you measure based on them.

Use observation to look at what they do, not just what they say:

Make sure you are observing the participant: what they are doing, how they are reacting, and their body language. Sometimes it is helpful to have multiple video angles (especially for experiences that combine physical space and digital) or another researcher present who can focus solely on observation. When the purpose aligns, we often utilize Experience Labs for this.

Drive to action -- take concepts and feedback through to development:

The whole point of your effort is to improve the solution, so make sure you get the most value by driving the results to actions. First, understand that improvements sit on a scale linked to effort and resources. So, when recommending, think about what would be a good-enough fix versus the best fix, and make these levels clear to decision makers so that they can make the best trade-offs. Also, make the recommendations clear and easy to digest for development teams. Spelling out each change -- what it is, and why it is needed based on the user insights you gathered -- helps them better understand the user and embrace the changes.

Evaluating experiences

We have seen a need arise for user testing that provides both an easy-to-digest measurement to track and trend, and deep insights that actually enable our clients to make the right improvements and act on their results. We identified some recurring issues our clients were facing:

  • Deep insights can be hard for leadership or development teams to digest.
  • Efforts were seen as taking too long, both in running the tests and in analyzing the results.
  • When a metric was used to communicate with management and leadership, it often painted an invalid picture.

We used our approach to measuring experience to address these problems. The Experience Review methodology we designed for applications was recently featured in an IDC PeerScape that brings together best-practice knowledge from around the industry.

Our approach consists of creating an Experience Index for the solution we are evaluating. This is based on first understanding what a user would be doing with the application, and then using these user goals as the core flows and the basis for the testing.

During the sessions we use a variety of techniques to gather insights around:

  • Expectations for each of the core flows -- speed, ease and usability -- and whether they are met.
  • The user's overall feeling -- measured across a range of variables to indicate their overall perception of the interaction (which is the core of experience).
  • UI and UX flow -- a review of the UI elements and the flow, and whether they have been optimized.

The deep insights gathered are then used to create the metric-based overview presented to management. This keeps the report aligned with the actual results, as opposed to a separate scale-based question, where true meaning can easily be lost.

Because we are able to capture deep insights into any issues with the application being analyzed, as well as a way to map them back to the measure, the Experience Index allows:

1. Easy handover to the development team.
2. Smart prioritization and trade-offs between effort (time and cost) and value by the product team and leadership.

We have found that having an Experience Index also enables our clients to demonstrate measurable improvements and aids objectivity across functions, as well as across strategic decisions, trade-offs and investments.
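As a rough sketch of how such an index can be assembled (the flows, dimensions, weights and scores below are all hypothetical, not our actual methodology), each core user flow is scored on the dimensions evaluated in session, and the results are rolled up into one trackable number:

```python
# Hypothetical sketch of rolling per-flow findings up into a single index.
# Flows, dimensions, scores, and weights are illustrative, not a real study.

# Each core user flow is scored 0-100 on the dimensions evaluated in session.
flow_scores = {
    "sign_up":      {"expectations_met": 80, "overall_feeling": 70, "ui_ux_flow": 90},
    "find_product": {"expectations_met": 55, "overall_feeling": 60, "ui_ux_flow": 65},
    "checkout":     {"expectations_met": 40, "overall_feeling": 45, "ui_ux_flow": 50},
}

# Dimension weights, chosen (hypothetically) to reflect what success means
# to the target audience for this experience; they sum to 1.
weights = {"expectations_met": 0.5, "overall_feeling": 0.3, "ui_ux_flow": 0.2}

def flow_index(scores):
    """Weighted score for a single flow."""
    return sum(scores[dim] * weight for dim, weight in weights.items())

per_flow = {name: flow_index(scores) for name, scores in flow_scores.items()}
experience_index = sum(per_flow.values()) / len(per_flow)

# Lowest-scoring flows first: a quick view of where to focus effort.
for name, score in sorted(per_flow.items(), key=lambda item: item[1]):
    print(f"{name:<16} {score:5.1f}")
print(f"{'Experience Index':<16} {experience_index:5.1f}")
```

Because every number maps back to specific session findings, the overview stays tied to the underlying insights, and sorting flows by score immediately shows where the effort should go.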

So remember, when evaluating your experiences:

  • First, understand what success is for the experience, and then design what to capture and how.
  • Make sure you also look at what participants are doing and their body language, not just what they are saying.
  • Make sure your results are both easy to digest and easy to act on.

--------

Need an experience review? Ask us today!

[Screenshot: Experience Index dashboard overview]

  • Enables measurable improvements in the Experience Score.
  • The overview gives a rapid view of which areas need work.
  • An interactive dashboard enables drill-downs into analysis and recommendations, increasing speed to action.
