Recognition programs have been around forever. Sales managers like using prizes to motivate people, it feels right to give people a memento for long service, and restaurants see value in naming a good worker “employee of the month.”
Most organizations view these programs the same way they view refreshing the office with new furniture: it is a nice thing to do, and companies believe it makes business sense. The payoff, however, is uncertain.
Since the payoff of recognition programs is uncertain, they are often designed without a lot of rigour. The boss has an idea for how the program should run, so that is how it is done. But what would happen if we applied analytics to recognition so that we could accurately measure what works and the size of the impact? Could recognition be a poster child for evidence-based management?
Emotion before data
Before you start to gather data on the impact—or lack of impact—of recognition programs you need to face up to the fact that this is an emotional, even ideological, topic. People often fall into one of two camps. The first camp believes in celebrations, prizes and bringing fun into the workplace. They are the cheerleaders of life. The other group believes people should put their shoulder to the wheel without fanfare and that the people who constantly need a pat on the back to do their work should seek employment elsewhere.
We need to step back from the emotion or we will end up with a heated argument instead of a reasoned decision. In working with managers, HR should recognize that managers may hold strong views. It is best to set up the issue as a matter of inquiry: start with a hypothesis that for a certain group of workers a certain recognition scheme will improve outcomes. That hypothesis might be true or false, but rather than let managers argue about it, simply say “Let’s find out.”
Finding out if recognition works
A large bank had a recognition program for front-line staff, and the question was whether this program was a waste of money or whether it improved sales. The interesting backstory is that, since it was a big organization, the recognition program was run from a web platform, so every instance of recognition was recorded. Similarly, sales were tracked by employee. The data needed to see if recognition worked was already there, sitting on the company’s servers.
This, by the way, is a good example of “Big Data”. The essential characteristic of Big Data is that it is typically collected in the everyday course of running our operations. In the past we may have been happy to rely on the instincts of managers as to whether recognition worked; however, now that all the relevant data has already been collected, we can do a much more rigorous assessment.
The actual analysis required the skills of someone with deep training in statistical analysis. Gordon Green, Chief Strategy Officer of Rideau Recognition, sought assistance from Dr. Charles Scherbaum of the City University of New York. Essentially, Scherbaum looked at different slices of time to see if acts of recognition were followed by periods of increased performance. Doing this is not a trivial exercise; however, it is well within the skills of people trained for this kind of analysis.
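The article does not disclose Scherbaum’s actual method, so the sketch below is only a rough illustration of the general idea of a time-slice comparison: for each recorded act of recognition, compare average sales in the weeks just after the event with the weeks just before. The data, the function name `event_window_lift`, and the two-week window are all invented for illustration, not details from the bank’s study.

```python
from statistics import mean

# Hypothetical weekly sales for one employee (index = week number)
# and the weeks in which that employee received recognition.
weekly_sales = [10, 11, 9, 10, 14, 15, 13, 10, 9, 16, 15, 14]
recognition_weeks = [4, 9]

def event_window_lift(sales, events, window=2):
    """Compare mean sales in the weeks after each recognition event
    with mean sales in the weeks just before it. A positive result
    means performance rose after recognition."""
    before, after = [], []
    for week in events:
        before.extend(sales[max(0, week - window):week])
        after.extend(sales[week:week + window])
    return mean(after) - mean(before)

print(event_window_lift(weekly_sales, recognition_weeks))
```

A real analysis would of course control for seasonality, pool across thousands of employees, and test the lift for statistical significance; this toy version only shows the shape of the comparison.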
At last the organization could find out with a high degree of confidence if their recognition program had an impact on sales and whether that impact was sufficient to justify the cost.
I know readers will be keen to know the results, but hold off for just a moment. The real lesson here is that in an era when we are gathering lots of data and have access to experts with a high degree of analytical skill we can bring far more rigour to decision making than in the past. The point in this case was whether recognition worked. The greater point, in general, is that we can use evidence instead of emotion to make decisions.
The evidence in this case was unambiguous: the recognition program had a significant positive effect on performance, and the size of the effect easily covered the cost of the program.
Having embraced an evidence-based approach, the bank went on to do experiments to fine-tune its program. For example, it gave one group of managers coaching in how to give recognition and compared their results to a group that had not been coached. The coaching paid off. The specific finding was that specific recognition (“You did this thing well”) was better than general recognition (“Good job”).
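The mechanics of such a two-group comparison can be sketched simply. The figures, group sizes, and function names below are invented for illustration; the bank’s actual analysis is not described in the article.

```python
from statistics import mean, stdev
from math import sqrt

# Hypothetical post-coaching sales figures for the two manager groups.
coached = [105, 112, 98, 110, 107, 103]
uncoached = [96, 101, 94, 99, 97, 100]

def mean_difference(a, b):
    """Raw difference between the two group averages."""
    return mean(a) - mean(b)

def cohens_d(a, b):
    """Standardized effect size: the mean difference divided by the
    pooled standard deviation (assuming equal group sizes)."""
    pooled = sqrt((stdev(a) ** 2 + stdev(b) ** 2) / 2)
    return mean_difference(a, b) / pooled

print(round(mean_difference(coached, uncoached), 2))
print(round(cohens_d(coached, uncoached), 2))
```

Reporting an effect size alongside the raw difference matters here, because the business question is not just “did coaching help?” but “was the effect big enough to justify the cost?”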
The work did not stop there. They were also able to look at what types of recognition had the biggest impact (social recognition worked particularly well) and whether doubling the amount of rewards would make recognition even more effective (it didn’t).
Generalizing the findings
Does this prove that recognition will work in your firm? No, it does not prove that, but let’s reverse the reasoning: is there evidence that removing a recognition program will save the company money? Based on this case, the evidence suggests that removing recognition would hurt sales—especially if we are giving managers training in how to give recognition.
We need to be careful that the cheerleaders of this world do not use this finding as a hammer in the ideological battle against the taskmasters. The point is to make good decisions, not to win an argument. Let us create an HR function that can guide managers past assumptions and entrenched views of human nature. If we can get people to agree on a common goal, and to investigate the evidence on how best to achieve that goal, then we are on the road to better-run organizations.