Aragon Research

Defining Effective Metrics Based on Hybrid Causation & Correlation Data

by Betsy Burton

I recently spoke with a client who was frustrated because his management kept creating more metrics and drawing conclusions from metrics that, in his view, had little to do with actual performance.

The problem is that the organization’s leadership had fallen into the trap of confusing correlation with causation, and it was issuing policy mandates based on loosely defined correlated metrics.

It is critical that business and technology leaders understand the difference between correlation and causation when making decisions. Leaders must also recognize that the vast majority of business and technology decisions are based on correlated data points.

Correlation Shows Relatedness

Correlation is about measuring and understanding how two things are related. In terms of performance metrics, it is about understanding how two or more metrics may be related, and how they may help illustrate a business need or issue.

Business and IT leaders must not assume that related metrics cause a business change; they may just be related.

Causation Illustrates Direct Cause

Causation is about demonstrating a direct, provable cause between a specific measurable action or event and a specific outcome. The challenge is that causation metrics require scientifically derived evidence.

They are significantly harder to define and prove, but they do show direct cause and effect. For this reason, most organizations are unable to fund this type of research.

Understanding how to work with correlation and causation metrics and understanding the difference between them is vital.

Best Practice Is to Use a Combination of Metrics

The best practice is for business and technology leaders to use a combination of metrics:

1) Leverage causation data from well-known scientific publications, such as Scientific American; research organizations, such as Pew Research; and university publications, such as Harvard Business Review.

2) Use correlation data, such as surveys, that is statistically valid (e.g., a large number of survey responses, valid non-leading questions, and double-blind survey designs).

3) Use empirical data with an awareness of its inherent bias.
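One standard check of statistical validity mentioned in point 2, the sample size, can be sketched with the textbook margin-of-error formula for a survey proportion. This is a generic illustration under simple-random-sampling assumptions, not a method from the Research Note:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the 95% confidence interval for a proportion
    estimated from n survey responses (z = 1.96 at 95% confidence).
    Assumes simple random sampling; p = 0.5 is the worst case."""
    return z * math.sqrt(p * (1 - p) / n)

# A larger sample shrinks the margin of error, one test of validity.
print(f"n=100:  +/-{margin_of_error(100):.1%}")
print(f"n=1000: +/-{margin_of_error(1000):.1%}")
```

Running this shows roughly a ±9.8% margin at 100 responses versus about ±3.1% at 1,000, which is why a "large number of survey results" matters before correlated survey data is trusted.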

Metrics weighted toward causation data and statistically significant survey results can be used to inform policies. Metrics weighted toward informal surveys and empirical data should not be used to set policies.

Bottom Line

It is critical not to confuse causation and correlation metrics, and to use each for the appropriate level of guidance, investment, and policymaking. Misuse of data can lead to bad investment decisions, employee frustration, and cultural damage.

For more information on this topic, see the newly published Research Note, “Performance Metrics: Differentiate Correlation and Causation.”
