Often we present data without thinking about it too much.

We might actively think about the metrics we are computing (and avoid rookie analysis mistakes).

But it is rare that we, "Web Analysts", actually think, I mean think, about the story we are telling.

I think that's because that is not our job (I mean that in all seriousness).

Our job is to report data. On good days it is to understand and segment and morph and present analysis.

But we don't think about the implications of the data in a grander context and we don't think about the role we can play in connecting with the Business, the Marketers and be so bold as to try and change behavior of decision makers. Change company cultures.

This blog post is a short story about my small attempt at changing the culture and setting a higher bar for everyone. Using data.

The Use Case:

The data in question was survey data. This one was specifically about a day long conference / training / marketing event for current and prospective customers.

On a five point scale for each Presenter the Attendees were asked to rate "how satisfied were you with the presentation and content".

Quite straightforward.

Here are the results:

[Image: customer satisfaction survey results]

But you can also imagine getting this kind of data from your free website survey, like 4Q from iPerceptions ["Based on today's visit, how would you rate your site experience overall?"].

Or if you use free page level surveys from Get Satisfaction or Kampyle ["Please share your ratings for this page."]

In those cases you would analyze performance of content or the website.

The Data Analysis:

On the surface this is not that difficult a problem to analyze.

Here is a common path I have seen people take in reporting this data, JAI! Just Average It! :)

[Image: averaged satisfaction survey results]

The actual formula is to take the average of the last three columns (satisfied through extremely satisfied).
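As a minimal sketch (using the one set of counts the post gives us, Jonny's: 6 Satisfied, 12 Very Satisfied, 0 Extremely Satisfied), "Just Average It" is nothing more than:

```python
# "Just Average It": mean of the response counts in the top three
# rating columns (Satisfied, Very Satisfied, Extremely Satisfied).
# Jonny's counts from the survey table: 6, 12, 0.
satisfied, very_satisfied, extremely_satisfied = 6, 12, 0

jai = (satisfied + very_satisfied + extremely_satisfied) / 3
print(jai)  # 6.0 -- the small, decimal-laden number discussed below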

This is ok I suppose.

I find people have a hard time with small numbers, and once you throw in the decimals you might as well call it quits.

Your boss, Bruce, eyeballs this and says: "Looks like everyone performed well today, let's uncork the champagne."


Those of you who are a bit more experienced know this, and rather than averaging you would use the more traditional Satisfaction computation.

[Image: customer satisfaction survey analysis]

The formula is to add the three positive ratings (satisfied through extremely satisfied) and divide that by the total number of responses. For Jonny: (6+12+0)/18 = 100%.
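That computation, sketched for Jonny's 18 responses, looks like this:

```python
# Traditional Satisfaction %: positive responses over all responses.
# Jonny's counts: 6 Satisfied, 12 Very Satisfied, 0 Extremely Satisfied,
# out of 18 total responses.
satisfied, very_satisfied, extremely_satisfied = 6, 12, 0
total_responses = 18

positives = satisfied + very_satisfied + extremely_satisfied
satisfaction_pct = positives / total_responses * 100
print(satisfaction_pct)  # 100.0
```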

A bit better from a communication standpoint.

6.0 is a number hanging in the air naked, without context, and hence is hard to truly "get".

100% on the other hand has some context (100 is max!) and so a simple-minded highly paid executive can "get" it. Jonny, Chris and Apple did spectacularly well. Will and Brian get a hug, Guy was great (come on, 89% is not bad!!).

The Problem.

Well two really. One minor and one major.

The minor problem is (as you saw in Guy's case immediately above) that percentages have a nasty habit of making things look better than they are, at least from a perception perspective.

My hypothesis is that in general human beings think anything over 75% is great.

So maybe we should not use percentages.

My major problem is that this kind of analysis:

  1. rewards meeting expectations
  2. does not penalize mediocrity

Both are a disservice when you are trying to make the business great. I have to admit they are signs of a business-as-usual, let's-just-collect-our-paycheck attitude.

Think of mediocrity. Why in the name of all that is holy and pure should we let anyone off the hook for earning a dissatisfied rating? So suboptimal!!

Consider "meeting expectations". I was upset that our company was not shooting higher. Accepting a rating of Satisfied essentially translates to: "as long as we don't suck, let's accept that as success".

What a low bar.


I believe that every business should try to be great. Every interaction should aim to create delight. It won't always be the case, but it's what we should shoot for.

And it's what we should measure and reward.


Because our way of life should be to create "brand evangelists" through customer interactions that create delight.

You like us so much, because we worked so hard, because we set ourselves such a high bar, that you will go out and tell others. Be our Brand Evangelist.


So that we don't have to do that marketing ourselves.

The Solution.

Now it is very important to point out that worrying about all of the above was not in my job description. As the Manager of a small team of Analysts (or as an Analyst) I am supposed to supply what's asked for (sure with some analysis).

But I made two major changes to the calculation, and one minor.

  1. Partly inspired by the Net Promoter concept, I decided to discard the Satisfied rating.

    When we spend money on Marketing (or Sales / Teaching / Advocating), I am aiming for delight.

  2. I decided to penalize us for any negative rating (even a slightly negative one).
  3. I indexed the results for optimal communication impact.

I call the new metric the Brand Evangelists Index. (Ok, so it's a bit wordy.) BEI.

The actual formula applied was:

{ [ (Very Sat + Ext Sat) - (Not Sat + Not At All Sat) ] / # Responses } *100
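Here is the same formula as a sketch, again using Jonny's counts (since 6 + 12 + 0 accounts for all 18 of his responses, his negative counts must have been zero):

```python
# Brand Evangelists Index: only "delight" ratings count as positive,
# any negative rating counts against you, and Satisfied is discarded.
# Jonny's counts: 0 Not At All Sat, 0 Not Sat, 6 Sat, 12 Very Sat, 0 Ext Sat.
not_at_all_sat, not_sat, satisfied, very_sat, ext_sat = 0, 0, 6, 12, 0
total_responses = 18

bei = ((very_sat + ext_sat) - (not_sat + not_at_all_sat)) / total_responses * 100
print(round(bei, 1))  # 66.7
```

Note how Jonny's perfect-looking 100% Satisfaction collapses to a far humbler number once the Satisfied column stops counting in his favor.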

The Results.

Here's what the success measurement looked like:

[Image: brand evangelists index results]

The result was a radically different understanding of quality and impact of each Presenter.

Not obvious?

Check out all three measures next to each other:

[Image: comparison of the three satisfaction formulas]


You can see how the Brand Evangelists Index separates the wheat from the chaff so well.

Compare Jonny's scores for example. Pretty solid before, now a bit less stellar.

In fact Will, who initially scored worse than Jonny, is now 11 points (!!) higher than Jonny.

That's because the BEI rewards Will's ability to give a "delight" experience to a lot more people (as should be the case).

Compare the unique case of Guy Berryman.

In the other computations Guy was dead last, but there was not much difference between him and, say, Will and Brian. Just a few points.

But the Brand Evangelists Index shows that Guy was not just a little bad, he was badly bad.

Sure he got a couple bad ratings but Guy failed miserably at creating delight.

He failed at creating Brand Evangelists.

And if we invest money, in these times or in good times, we demand more. This Guy won't do; he has to do a lot better.

Note that Apple could also use some mentoring and evolution.

[Image: winner outcome]

The Outcome.

Initially a few people said, "What the freak!" Some stones were thrown.

But I took the concept of the Brand Evangelists Index two levels higher and presented it to the VP and the CMO.

They adored it.

The reasons were that the Brand Evangelists Index:

  1. demanded a higher return on investment
  2. set a higher bar for performance, and
  3. was truly customer centric

The BEI became the standard way of scoring performance in the company.

[In case it inspires you: That year I received the annual Marketer of the Year award (for the above work and other things like that). Imagine that. An Analyst getting the highest Marketer award!]

The Punch Line.

When you present data, think not just about the data you are presenting but about what you are really measuring and how you can lift up your company.

You have the data. You have immense power.

Now your turn.

What do you think of the Brand Evangelists Index? How would you have done it better? Got your own heroic stories to share? I would love to know how you used data to alter a company's culture.

