Category: Laws
Type: Management Law
Origin: Economics, 1975, Charles Goodhart
Also known as: The Goodhart Effect
Quick Answer — Goodhart’s Law states that when a measure becomes a target, it ceases to be a good measure. First articulated by British economist Charles Goodhart in 1975, this principle reveals how well-intentioned metrics can backfire when people optimize for the measure rather than the underlying goal. Understanding this helps you design better performance systems and recognize when quantitative goals are being gamed.

What is Goodhart’s Law?

Goodhart’s Law describes a fundamental problem with using metrics to drive behavior: when any metric becomes a target, people find ways to “game” it, making the metric stop reflecting what it originally measured. The law emerged from observations about monetary policy, but it applies everywhere organizations use numbers to evaluate performance.
When a measure becomes a target, it ceases to be a good measure.
The core insight is that any metric that is being actively optimized will be subject to gaming. Once a number becomes the goal, the system shifts to maximizing that number—often at the expense of the underlying objective. This happens because people are creative, motivated, and often face pressure to hit targets regardless of how the numbers are achieved.
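
To make the dynamic concrete, here is a minimal, dependency-free Python sketch. It is a toy model with invented numbers, not data from any real workplace: a proxy metric tracks true quality as long as it is merely observed, but once workers optimize the proxy directly, the number inflates and its correlation with quality collapses.

    import random

    random.seed(42)

    def correlation(xs, ys):
        """Pearson correlation, computed by hand to avoid dependencies."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        var_x = sum((x - mx) ** 2 for x in xs)
        var_y = sum((y - my) ** 2 for y in ys)
        return cov / (var_x * var_y) ** 0.5

    def simulate(metric_is_target: bool, n_workers: int = 10_000) -> float:
        """Correlation between a proxy metric and the true quality we care about."""
        quality, proxy = [], []
        for _ in range(n_workers):
            effort = random.gauss(50, 10)
            quality.append(effort + random.gauss(0, 5))  # true quality, driven by real effort
            metric = effort + random.gauss(0, 5)          # proxy: tickets closed, calls handled
            if metric_is_target:
                # Gaming: cheap actions that raise the metric without raising quality,
                # e.g. closing tickets without fixing the underlying problem.
                metric += random.gauss(30, 15)
            proxy.append(metric)
        return correlation(quality, proxy)

    print(f"metric merely observed: r = {simulate(False):.2f}")  # ~0.80
    print(f"metric made a target:   r = {simulate(True):.2f}")   # drops to ~0.48

The gaming term moves only the metric, never the quality, which is exactly the divergence the law describes.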

Goodhart’s Law in 3 Depths

  • Beginner: Recognize that any metric you publish will be gamed. If you measure something publicly, expect people to optimize for it—even if that undermines the original purpose.
  • Practitioner: Use multiple metrics rather than single targets. Avoid publishing exact formulas or weights, as this gives people a blueprint for gaming.
  • Advanced: Design metrics that are hard to game by making them lagging indicators or composite measures, or by rotating them periodically to prevent optimization (see the sketch after this list).
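
To illustrate the Advanced tactics, here is a hedged Python sketch of a composite score whose weights are re-derived every quarter. The sub-metric names and the weighting scheme are invented for illustration; the design point is simply that a rotating blend leaves no single stable number to optimize.

    import hashlib
    import random

    # Hypothetical sub-metrics for a support team; the names are invented.
    SUBMETRICS = ["resolution_time", "reopen_rate", "csat", "escalations"]

    def quarterly_weights(quarter: str) -> dict[str, float]:
        """Derive normalized weights from the quarter label.

        Seeding the generator from the period makes the blend change every
        quarter in a reproducible way, so no sub-metric is a stable target.
        """
        seed = int(hashlib.sha256(quarter.encode()).hexdigest(), 16)
        rng = random.Random(seed)
        raw = {m: rng.uniform(0.5, 1.5) for m in SUBMETRICS}
        total = sum(raw.values())
        return {m: w / total for m, w in raw.items()}

    def composite_score(scores: dict[str, float], quarter: str) -> float:
        """Weighted blend of sub-metric scores already normalized to 0..1."""
        weights = quarterly_weights(quarter)
        return sum(weights[m] * scores[m] for m in SUBMETRICS)

    # The same raw performance lands differently from quarter to quarter.
    perf = {"resolution_time": 0.9, "reopen_rate": 0.4, "csat": 0.7, "escalations": 0.8}
    for q in ("2024-Q1", "2024-Q2"):
        print(q, round(composite_score(perf, q), 3))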

Origin

Charles Goodhart (born 1936) is a British economist who served for many years as a monetary adviser at the Bank of England and later as a founding external member of its Monetary Policy Committee. His law emerged from his work on monetary policy in the 1970s. Goodhart observed that when central banks targeted specific monetary aggregates (like M3 money supply), those aggregates stopped behaving in the predictable ways they had when they were simply measured but not targeted. In a 1975 conference paper, Goodhart wrote: “Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.” The pithy modern phrasing, “When a measure becomes a target, it ceases to be a good measure”, is a later paraphrase popularized by anthropologist Marilyn Strathern in 1997. The insight quickly generalized beyond economics to become one of the most cited principles in management, organizational behavior, and policy design. Goodhart’s work influenced how governments and organizations think about performance measurement, prompting more careful consideration of unintended consequences when setting numeric goals.

Key Points

1. Metrics drive behavior, even when unintended. When you measure something and tie consequences to it, people change their behavior to optimize the metric—regardless of whether that improves the actual outcome you care about.

2. Gaming is rational given incentive structures. Given pressure to hit targets, people will find the path of least resistance to the number. This isn’t moral failure—it’s a predictable response to incentives.

3. The target becomes the goal. Once a metric is announced as a target, the underlying goal gets forgotten. People focus on the measurable rather than the important.

4. Multiple metrics resist gaming. Using a portfolio of measures makes gaming harder because there’s no single number to optimize. Composite or lagging indicators are harder to manipulate.

Applications

Performance Management

Avoid single-KPI cultures. Use balanced scorecards with changing metrics, and never publish exact formulas that could be gamed.

Policy Design

When creating public policies with numeric targets, anticipate gaming. Build in safeguards, use multiple measures, and plan for periodic metric revision.

Education Assessment

Standardized test scores became targets in education, leading to teaching to the test. Goodhart’s Law explains why this was predictable.

Software Development

Rewarding lines of code produces bloated software; rewarding low bug counts encourages hiding bugs rather than fixing them. Use outcome-based metrics instead, as in the sketch below.
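
As one sketch of what outcome-based measurement might look like (the Release fields and figures below are invented), the snippet computes two outcome metrics, change failure rate and defect escape rate, that are harder to game than raw activity counts:

    from dataclasses import dataclass

    @dataclass
    class Release:
        deploys: int
        failed_deploys: int        # deploys that needed a rollback or hotfix
        bugs_found_in_qa: int
        bugs_found_in_prod: int

    def change_failure_rate(r: Release) -> float:
        """Share of deploys that caused a failure: an outcome, not an activity."""
        return r.failed_deploys / r.deploys

    def defect_escape_rate(r: Release) -> float:
        """Share of known defects that reached users. Unlike a raw bug count,
        hiding bugs found in QA shrinks the denominator and makes this rate
        look worse, not better."""
        total = r.bugs_found_in_qa + r.bugs_found_in_prod
        return r.bugs_found_in_prod / total if total else 0.0

    quarter = Release(deploys=40, failed_deploys=3, bugs_found_in_qa=25, bugs_found_in_prod=5)
    print(f"change failure rate: {change_failure_rate(quarter):.1%}")  # 7.5%
    print(f"defect escape rate:  {defect_escape_rate(quarter):.1%}")   # 16.7%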

Case Study

The UK National Health Service Waiting Time Targets

In the early 2000s, the UK NHS set a target that 98% of patients (a threshold later relaxed to 95%) should wait no more than four hours in emergency departments. This well-intentioned target aimed to reduce dangerous overcrowding. However, hospitals began gaming the metric in ways that undermined patient safety.

Some hospitals started “boarding” patients in ambulances outside the ER—keeping them technically outside the hospital so the four-hour clock wouldn’t start. Others diverted ambulances to neighboring hospitals, moving the problem rather than solving it. Some patients were admitted through alternative pathways that excluded them from the target.

By 2015, investigations revealed that the target had created perverse incentives: patient safety did not improve in proportion to the metric improvements. The threshold was relaxed further in 2022 after years of controversy. The case illustrates how a clear, measurable goal can produce exactly the wrong behavior when it becomes the target rather than a signal of underlying performance.

Boundaries and Failure Modes

When the principle doesn’t apply:
  • Private, non-competitive metrics: When metrics aren’t tied to rewards or penalties and aren’t visible to those being measured, gaming pressure is lower.
  • Intrinsic motivation contexts: When people genuinely care about the outcome and metrics are seen as feedback rather than targets, Goodhart effects are weaker.
  • Novel metrics: Newly introduced metrics that people haven’t yet figured out how to game may work temporarily.
Common misuses:
  • Justifying metric removal: Using Goodhart’s Law to argue against all measurement. The law doesn’t say metrics are useless—it says they must be designed carefully.
  • Ignoring that some metrics still work: Not all metrics are equally vulnerable to gaming. Lagging indicators and outcomes are harder to manipulate than activities.
  • Assuming all gaming is malicious: People gaming metrics are often responding rationally to incentive structures, not trying to deceive.

Common Misconceptions

“Goodhart’s Law means you shouldn’t measure anything.” Wrong. Goodhart’s Law doesn’t forbid measurement—it warns against treating metrics as targets. Measurement without targets can still provide useful feedback.

“Only dishonest people game metrics.” Wrong. Goodhart’s Law assumes rational responses to incentives. Given target pressure, even honest people will find ways to “hit the number” without violating rules.

“The fix is to keep metrics secret.” Wrong. Transparency has benefits. The solution is to design metrics that are harder to game: composite measures, outcomes, lagging indicators, or frequently changing metrics.
Related Concepts

Goodhart’s Law connects to other important ideas about measurement, incentives, and organizational behavior.

Peter Principle

Like Goodhart’s Law, the Peter Principle shows how a well-intentioned organizational practice (promoting people for performance in their current role) produces unintended failures.

Campbell's Law

Similar to Goodhart’s Law, Campbell’s Law states that the more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort the social processes it is intended to monitor.

McNamara Fallacy

The fallacy of measuring only what is easily quantifiable, ignoring qualitative factors that may be more important.

Cobra Effect

An incentive scheme that produces the opposite of its intended effect, named for a colonial-era bounty on cobras in Delhi that reportedly led people to breed cobras for the reward. This is exactly the pattern Goodhart’s Law predicts.

Survivorship Bias

When we measure only what succeeds (because failures never enter the data), the resulting metrics are distorted from the start and all the easier to game.

Gresham's Law

Just as Gresham’s Law says bad money drives out good, Goodhart’s Law describes how gamed numbers drive out honest measurement once a metric becomes a target.

One-Line Takeaway

Never make a metric a target—use metrics as signals for diagnosis, but design incentive systems around outcomes, not numbers.