Category: Laws
Type: Social Science Law
Origin: Social psychology, 1979, Donald Campbell
Also known as: Campbell’s Law
Quick Answer — Campbell’s Law states that the more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort the social processes it is intended to monitor. First articulated by American social psychologist Donald Campbell in 1979, this law predicts that any metric used to make important decisions will eventually become corrupted. Understanding this helps you recognize the fragility of quantitative governance and design more resilient measurement systems.

What is Campbell’s Law?

Campbell’s Law, named after American social psychologist Donald Campbell, describes how quantitative measures lose their validity when used as the primary basis for making important decisions. While similar to Goodhart’s Law, Campbell’s Law specifically addresses social metrics—measures of human behavior, education, health, and societal outcomes.
The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.
The key insight is that when numbers become the currency of decision-making, people and organizations will optimize for those numbers. This creates a selection pressure that distorts the very phenomenon being measured. The metric becomes “good” at being measured but no longer “good” at representing the underlying reality.

Campbell’s Law in 3 Depths

  • Beginner: Any metric tied to important decisions will be gamed. The higher the stakes, the more intense the gaming.
  • Practitioner: Use multiple measures and rotate them periodically. Never let a single number become the primary decision criterion.
  • Advanced: Design metrics that measure outcomes rather than activities, use composite indices, and build in verification mechanisms.
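The practitioner advice above (use multiple measures and rotate them periodically) can be sketched in code. This is a minimal illustration with hypothetical metric names, readings, and weights, not a production scoring system:

```python
# Hypothetical metric readings for one school, per review period.
# In a real system these would come from assessments, audits, surveys, etc.
METRICS = ["test_scores", "attendance", "graduation_rate", "teacher_retention"]

def composite_score(readings: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average over several measures, so no single number dominates."""
    total_weight = sum(weights[m] for m in readings)
    return sum(readings[m] * weights[m] for m in readings) / total_weight

def rotated_weights(period: int) -> dict[str, float]:
    """Rotate which metric carries extra weight each period, making it
    harder to game any one indicator for long."""
    emphasized = METRICS[period % len(METRICS)]
    return {m: (2.0 if m == emphasized else 1.0) for m in METRICS}

readings = {"test_scores": 0.72, "attendance": 0.91,
            "graduation_rate": 0.85, "teacher_retention": 0.64}

for period in range(4):
    weights = rotated_weights(period)
    print(f"period {period}: composite = {composite_score(readings, weights):.3f}")
```

Because the emphasized metric changes each period, optimizing hard for any single number yields only a temporary advantage; sustained improvement requires moving several measures at once.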

Origin

Donald Campbell (1916–1996) was an American social psychologist known for his work on experimental and quasi-experimental design. He articulated this law in 1979 in a paper discussing the limitations of social indicators in policy-making.

Campbell observed that social statistics—like crime rates, educational test scores, or economic indicators—were originally developed to describe society. However, when policymakers began using these numbers to make decisions about resource allocation, evaluations, and punishments, the numbers changed their nature. They stopped being purely descriptive and became targets to be hit. His law became foundational in policy circles, influencing how governments think about accountability, performance measurement, and the unintended consequences of quantified governance.

Key Points

1. High stakes accelerate corruption

The more consequential a metric becomes (tied to funding, promotions, or penalties), the more aggressively it will be manipulated. Low-stakes metrics remain more valid.
2. Metrics change behavior in predictable ways

When you measure something and link it to decisions, people optimize for the measurement. This is not corruption in a moral sense—it’s a rational response to incentives.
3. Collusion emerges as people game the system

Those being measured tend to collude with those doing the measuring. Together, they can create the appearance of success without actual improvement.
4. The indicator and the thing become disconnected

Over time, the metric and what it originally measured drift apart. The indicator becomes a self-contained game with its own internal logic.
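The drift described in points 2 through 4 can be made concrete with a toy simulation: a rational actor splits effort between genuinely improving an outcome and inflating the indicator, and the split shifts toward gaming as stakes rise. The response function and payoffs below are illustrative assumptions, not empirical estimates:

```python
# Toy model (illustrative only): as stakes rise, a rational actor shifts
# effort from genuinely improving the outcome to inflating the indicator.

def gaming_share(stakes: float) -> float:
    """Fraction of effort spent on gaming; grows with stakes, capped below 1.
    This response curve is a hypothetical assumption for illustration."""
    return stakes / (stakes + 1.0)

def simulate(stakes: float, effort: float = 10.0) -> tuple[float, float]:
    g = gaming_share(stakes)
    true_outcome = effort * (1.0 - g)                 # only real effort moves reality
    indicator = effort * (1.0 - g) + effort * g * 2.0  # gaming inflates the number cheaply
    return indicator, true_outcome

for stakes in [0.0, 1.0, 4.0, 9.0]:
    indicator, truth = simulate(stakes)
    print(f"stakes={stakes:>4}: indicator={indicator:5.1f}, true outcome={truth:5.1f}")
```

At zero stakes the indicator and the true outcome coincide; as stakes climb, the indicator rises while the underlying reality deteriorates, which is exactly the disconnection the key points describe.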

Applications

Education Policy

Standardized test scores became the primary measure of school quality, leading to teaching to the test and to school choice systems that skim off the strongest students.

Criminal Justice

Metrics like crime rates and arrest numbers drive policing decisions, sometimes leading to manipulations that don’t reduce actual crime.

Healthcare

Hospital readmission rates and patient satisfaction scores are used to evaluate quality, but can incentivize avoiding difficult patients or gaming how readmissions are counted.

Economic Policy

GDP, unemployment figures, and inflation targets drive major policy decisions, creating incentives for statistical manipulation or policy choices that optimize numbers over welfare.

Case Study

The US No Child Left Behind Testing Regime

The No Child Left Behind Act (2001) made standardized test scores the primary measure of school and student success. Schools that failed to meet “adequate yearly progress” faced increasingly severe consequences: loss of funding, mandatory restructuring, and staff replacement.

The results were predictable from Campbell’s Law. Rather than improving education, the regime produced widespread gaming: schools focused resources on “bubble students” likely to pass, neglected non-tested subjects such as arts and physical education, and in some cases resorted to outright cheating. Studies showed that while test scores rose, actual learning outcomes as measured by other assessments did not improve proportionally.

By 2015, even the Department of Education acknowledged the unintended consequences. The Every Student Succeeds Act replaced many of the most punitive elements, reflecting a growing recognition that high-stakes testing had created exactly the corruption Campbell predicted.

Boundaries and Failure Modes

When the principle doesn’t apply:
  • Metrics with built-in verification: Some systems can cross-validate metrics against each other, making gaming more difficult.
  • Private metrics: When organizations measure themselves for internal improvement without external consequences, corruption is less likely.
  • Multiple competing measures: When several different metrics are used, each with different biases, gaming becomes harder.
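The first boundary case (metrics with built-in verification) can be sketched as a simple cross-check: compare a self-reported metric against an independently audited one and flag large gaps. The data and tolerance below are hypothetical:

```python
# Sketch of a cross-validation check (assumed setup): two metrics that
# should move together, e.g. reported test scores vs. an independently
# administered audit test, are compared, and large gaps are flagged.

def flag_divergence(reported: list[float], audited: list[float],
                    tolerance: float = 0.15) -> list[int]:
    """Return indices where the reported metric exceeds the audit by more
    than `tolerance`, a crude signal that the reported number may be gamed."""
    return [i for i, (r, a) in enumerate(zip(reported, audited))
            if r - a > tolerance]

reported = [0.70, 0.88, 0.95, 0.74]
audited  = [0.68, 0.71, 0.70, 0.73]
print(flag_divergence(reported, audited))  # → [1, 2]
```

A check like this does not prevent gaming by itself, but it raises the cost: inflating the reported number without also moving the audited one now produces a visible discrepancy.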
Common misuses:
  • Using Campbell’s Law to oppose all measurement: The law warns about high-stakes, single-metric systems—not about measurement itself.
  • Assuming all gaming is conscious manipulation: Much gaming emerges from rational responses to incentives without malicious intent.
  • Ignoring that some metrics still provide value: Campbell’s Law is probabilistic, not absolute. Some metrics remain useful despite corruption pressures.

Common Misconceptions

“Metrics should be abandoned in favor of subjective judgment.” Wrong. Subjective judgments are also subject to bias and manipulation. The solution is better metric design, not abandoning measurement.
“Campbell’s Law only applies to government.” Wrong. Any organization using metrics for important decisions—corporations, nonprofits, schools—faces the same dynamics.
“Adding more metrics fixes the problem.” Wrong. Adding more metrics just creates more targets to be gamed. The solution is designing metrics that are harder to corrupt.
Related Principles

Campbell’s Law intersects with other important principles about measurement and governance.

Goodhart’s Law

Goodhart’s Law is closely related but applies more broadly to any metric, while Campbell’s Law specifically addresses social indicators used in policy.

Peter Principle

Both laws describe how well-intentioned organizational systems produce failures when metrics become targets.

McNamara Fallacy

The fallacy of measuring only what’s quantifiable while ignoring qualitative factors that matter most.

Cobra Effect

When incentives produce the opposite of their intended outcome, as Campbell’s Law predicts.

Survivorship Bias

When metrics only capture successes (because failures aren’t counted), they become distorted representations of reality.

Gresham’s Law

Like Gresham’s Law, Campbell’s Law describes how good measures get driven out when they’re subject to corruption pressures.

One-Line Takeaway

Any metric used for important decisions will eventually become corrupted—design systems with multiple measures, rotating indicators, and outcome-based metrics to maintain validity.