We hear more and more about the virtues of HR analytics. But how much of it is hype? And is a focus on numbers preventing us from focusing on people?
Hence the neologism *banalytics*: data that appears at first sight to be useful but in practice diverts attention from the real issue. Of course, there is a place for numerical data – for example, in identifying trends such as career progression for minority employees, or differences in the level of psychological safety between departments.
Some examples of banalytics include:
- Generic 360-degree feedback. Not only is the data unreliable (poor managers can receive higher scores than good ones), but the person rated is put under pressure to improve on things that may not be fully within their control and may not reflect the developmental priorities identified by other means.
- Bench strength – often looks much stronger than it really is, once you strip out duplicates (people assigned to more than one succession option) and account for whether people would actually take the role. It is also very difficult to align the roles being benchmarked with dynamic changes in which roles matter most to the organisation.
- Employee engagement surveys usually don’t provide actionable data. They do not tell the team leader what is causing their direct reports’ dissatisfaction, nor what to do about it. They also don’t distinguish very well between general dissatisfaction and specific behaviours by a team leader.
The point of analytic data is to inform policy and decision-making. The following questions can be helpful in assessing the utility of an analytic:
- Are the frequency of data collection and the rate of change in what is being measured aligned?
- Does the data lead directly to decision-making?
- If it does, what is the structural gap between where the decision is made and where it is implemented? (The bigger the gap, the greater the chances of unexpected consequences.)
- How closely does the measure reflect strategic business priorities?
- Who determines the utility and appropriateness of the measure? (For example, are neurodivergent employees involved in the design and implementation of measurements relating to neurodivergence in the workplace?)
- What is the balance and relationship between quantitative and qualitative data? (Over-reliance on either alone can distort perceptions.)
- Does the analytic enhance or supplant real conversations with people?
- To what extent are responses dependent on shifting context? Is this sufficiently taken into account?
- Are the constructs being measured sufficiently distinct?
- What’s the evidence base behind a diagnostic? (Many of the most popular instruments have very poor validity!)
- If the analytic is intended to be predictive, how accurate have its predictions been to date?
- Do key people believe the data when they see it?
In short, it’s time for the honeymoon period for HR analytics to give way to more pragmatic, more evidence-based, more human approaches.
© David Clutterbuck, 2024