The proliferation of business and human rights indicators seems unstoppable.
Human rights metrics have been included in almost all sustainability and Corporate Social Responsibility tools, from reporting frameworks like the Global Reporting Initiative's G4 Sustainability Reporting Guidelines to ethical ratings like the FTSE4Good Index Series. The objective of numerous new initiatives, like Ranking Digital Rights and the Corporate Human Rights Benchmark, is specifically to assess whether companies respect human rights or not.
"Measuring" business and human rights has become so important that even the UN Working Group on business and human rights recently identified it as a priority for its mandate.
Against this background, it is important to highlight four key challenges when measuring corporate respect for human rights.
1) The "normative" challenge
In 2011, the UN Human Rights Council unanimously endorsed the UN Guiding Principles on Business and Human Rights as the most authoritative global standard for preventing and addressing the risk of adverse human rights impacts linked to business activity.
The UN Guiding Principles offer significant guidance on what companies should do to respect human rights. Yet, they also leave many questions unanswered.
In practice, this means that the production of numerous business and human rights indicators is not a merely technical exercise, but an implicit normative process creating new standards.
The "multi-dimensionality" of human rights abuses offers a good example of this challenge. Adverse human rights impacts have multiple dimensions, for instance severity (killing is different from wounding), frequency (wounding ten is different from wounding one) and range (wounding three saboteurs and their seven children is different from wounding ten saboteurs).
Unfortunately, no consensus exists yet about how many dimensions human rights metrics should take into consideration, or about how these dimensions should be weighed against each other. For instance, how does one compare the killing of two union leaders with the wounding of twenty indigenous women?
2) The methodological challenge
The production of metrics is often susceptible to significant distortions. For instance, indicators often force collected information into a limited number of categories, such as a scale from 1 to 5. This inevitably places different items in the same category.
Taking an example from governments, a comparison between Canada and Somalia on a 1-to-5 human rights scale would not detect any difference between Somalia in 2008, which might have been a promising year by Somali standards, and Somalia in 2012. Both years would still warrant a score of 5 next to Canada's 1.
Similarly, what happens in a 1-to-5-scale indicator of corporate impact on indigenous rights when a company moves from displacing 200 people one year to 1,000 the following year? What if 200 displaced people had already warranted a score of 5?
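This ceiling effect can be illustrated with a minimal sketch. The score bands below are invented for illustration only; no real rating methodology is being reproduced here:

```python
def score_displacement(people_displaced):
    """Map a raw displacement count onto a hypothetical 1-5 severity scale.

    The thresholds are assumptions made for illustration; any real
    methodology would define its own bands.
    """
    lower_bounds = [0, 10, 50, 100, 200]  # lower bound for scores 1..5
    score = 1
    for s, lower in enumerate(lower_bounds, start=1):
        if people_displaced >= lower:
            score = s
    return score

# Once the top band is reached, a fivefold worsening is invisible:
print(score_displacement(200))    # 5
print(score_displacement(1000))   # 5
```

Whatever thresholds are chosen, any bounded scale has a top band, so a company already at the maximum score can deteriorate dramatically without the indicator registering any change.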
A solution could be to employ wider scales (from 0 to 100, for instance). However, this strategy faces another problem: false precision. Is it meaningful to say that two companies score 78 and 85 on freedom of association? What does it mean that there is "three times" more freedom to unionize in one company than in another?
3) The practical challenge
Any good measurement relies on good data. Unfortunately, corporate self-reporting and third-party documentation (the main sources of information on business and human rights issues) are both problematic.
The flaws of corporate self-reporting concern scope and reliability. First, companies often report on their policies, yet they disclose little information on due diligence procedures and almost nothing on adverse human rights impacts (with a few exceptions, such as employee fatalities). Second, self-reported data is often contested. A company may argue that it performed extensive stakeholder consultations, but others may point to the exclusion of vulnerable communities.
Third-party reports clearly provide invaluable assessments of corporate behaviour. However, these documents are frequently expressed in narrative or anecdotal form, which is difficult to aggregate and standardize for comparative purposes. Furthermore, these sources never cover all corporate operations. Their findings may therefore reflect the exception (an extraordinarily rosy or gloomy picture) rather than the rule.
4) The political challenge
I have already highlighted that those who control the production of business and human rights indicators inevitably make important normative decisions, even if they are sold as merely objective and technical ones. The "political" consequence of this reality is that the producers of indicators (such as sustainability data providers and/or Western-based "experts") may surreptitiously become the winners in the struggle over the creation and acceptance of new corporate responsibility standards. More legitimate bodies, such as the Human Rights Council or national parliaments, lose out because of their inability to come up with more precise definitions.
In addition, the mere "language" of indicators may play against accountability.
To begin with, using indicators introduces a risk of condoning a low level of human rights abuses. From a human rights perspective, every adverse human rights impact is one too many. What business and human rights indicators often do, in contrast, is give the false impression that a "good" score (for instance, a 2 on a scale from 1 to 5) equates to "good enough" behaviour.
Business and human rights metrics are also hard to challenge, mainly because of their aura of objectivity. Contesting misleading indicators usually requires detailed (but not media-friendly) proof of inaccuracies in the data input and/or the methodology used.
My argument is not that we should avoid measuring corporate respect for human rights. The development of human-rights based indicators triggers much-needed processes, such as clarification of responsibilities and disclosure of information.
However, the business and human rights community should proceed with care. In particular, it should not fall victim to the erroneous "article of faith" that some data are always better than no data. As a former president of the American Statistical Association clarified, an indicator is only a tool, not an end in itself. It can be seen "as a crutch, indispensable, but still a crutch . . . if it is not proportioned to the needs of the user, it can hinder as well as help."