Category: Fallacies
Type: Causal & Inductive Fallacy
Origin: Scientific method and legal reasoning norms on total-evidence assessment
Also known as: Selective evidence
Quick Answer — Cherry Picking is a reasoning error that creates a false conclusion by showing only supportive data and hiding disconfirming evidence. The fix is simple: define the full evidence set first, then evaluate both confirming and opposing signals with the same standard.
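The "define the full set first, then apply one standard" advice can be sketched in code. This is a minimal illustrative sketch, not a method from the article; the `support_ratio` function and the numeric data are hypothetical.

```python
# Hypothetical sketch: evaluate a claim against the FULL evidence set,
# applying the same threshold to supporting and opposing observations.

def support_ratio(observations, supports):
    """Return the fraction of ALL observations that support the claim.

    observations: the complete evidence set (never a hand-picked subset)
    supports: predicate deciding whether one observation supports the claim
    """
    if not observations:
        raise ValueError("empty evidence set: nothing to evaluate")
    favorable = sum(1 for o in observations if supports(o))
    return favorable / len(observations)

# A cherry-picked subset can report 100% support...
picked = [0.9, 0.8, 0.95]
# ...while the full evidence set tells a different story.
full = [0.9, 0.8, 0.95, 0.1, 0.2, 0.15, 0.05]

is_support = lambda x: x >= 0.5
print(support_ratio(picked, is_support))            # 1.0
print(round(support_ratio(full, is_support), 2))    # 0.43
```

The point of the sketch is that the denominator is fixed before scoring: the same predicate is applied to every observation, so favorable and unfavorable evidence cannot receive different standards.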
What is Cherry Picking?
Cherry Picking is the practice of selecting a non-representative subset of evidence and presenting it as if it were the whole picture. If the sample is chosen after seeing outcomes, confidence in the conclusion should drop immediately. The tactic appears in media, business dashboards, policy debate, and personal decision-making because selective visibility can look like strong proof.
Cherry Picking in 3 Depths
- Beginner: Notice when someone shows only wins, best cases, or short time windows.
- Practitioner: Ask what was excluded, why it was excluded, and whether exclusion rules were set before results.
- Advanced: Audit selection mechanisms, base rates, and publication bias together; treat missing evidence as part of the evidence.
Origin
The idea is old in logic, but modern practice was shaped by statistics, evidence-based medicine, and scientific reproducibility norms. Core institutions in science and law emphasize that claims must survive total-evidence review, not favorable-fragment review. Meta-research and reporting standards evolved largely to reduce selective reporting and post-hoc narrative construction.
Key Points
Cherry Picking is less about one bad chart and more about a biased evidence pipeline.
Selection rules must be pre-committed
If inclusion criteria are written only after results are known, they can be tuned to produce a desired story.
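One way to make pre-commitment concrete is to freeze the inclusion rule before any outcomes exist, then apply it uniformly. The sketch below is a hypothetical illustration; the `InclusionRule` class, its thresholds, and the study records are invented for this example.

```python
# Hypothetical sketch: register inclusion criteria BEFORE outcomes are seen,
# so the rule cannot be tuned afterwards to produce a desired story.

from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: the rule cannot be mutated after registration
class InclusionRule:
    min_sample_size: int
    max_dropout_rate: float

    def admits(self, study):
        return (study["n"] >= self.min_sample_size
                and study["dropout"] <= self.max_dropout_rate)

# Step 1: register the rule before any results exist.
rule = InclusionRule(min_sample_size=100, max_dropout_rate=0.2)

# Step 2: apply the same frozen rule to every study, favorable or not.
studies = [
    {"name": "A", "n": 250, "dropout": 0.05, "effect": +0.4},
    {"name": "B", "n": 40,  "dropout": 0.10, "effect": +0.9},  # excluded: too small
    {"name": "C", "n": 300, "dropout": 0.08, "effect": -0.1},  # included despite being unfavorable
]
included = [s for s in studies if rule.admits(s)]
print([s["name"] for s in included])  # ['A', 'C']
```

Note that study C is admitted even though its effect points the "wrong" way: the rule sees only methodological criteria, never outcomes.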
Counterevidence is diagnostic, not optional
Contradictory data often reveals boundary conditions, not failure of analysis.
Time windows can manufacture trends
Choosing start and end dates strategically can create artificial growth or decline.
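The window effect is easy to demonstrate numerically. In this hypothetical sketch, `simple_slope` is an invented helper (average month-over-month change) and the monthly series is made up; the same data yields opposite "trends" depending on where the window starts and ends.

```python
# Hypothetical sketch: one series, three "trends", depending on the window.

def simple_slope(series):
    """Average month-over-month change: a crude trend indicator."""
    diffs = [b - a for a, b in zip(series, series[1:])]
    return sum(diffs) / len(diffs)

# Twelve months of a metric that rose to a peak, then declined.
monthly = [100, 110, 125, 140, 150, 145, 135, 125, 115, 105, 95, 90]

print(simple_slope(monthly))      # full year: slightly negative
print(simple_slope(monthly[:5]))  # window ending at the peak: strongly positive
print(simple_slope(monthly[4:]))  # window starting at the peak: negative
```

A report that shows only the first window claims growth; one that shows only the last claims decline. Both hide the full-period picture, which is why start and end dates should be fixed before the data is inspected.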
Applications
Use these safeguards when evidence quality determines costly decisions.
Product Analytics
Require reporting of both headline metrics and omitted segments before prioritization.
Policy Communication
Publish the full denominator and uncertainty range, not only dramatic examples.
Hiring and Performance Reviews
Review complete period data, not only peak months or exceptional incidents.
Personal Learning
Track failed attempts and abandoned strategies alongside successes.
Case Study
A widely cited case is vaccine-autism misinformation around the discredited 1998 Lancet paper by Andrew Wakefield. The claim drew disproportionate attention to a tiny, non-representative sample, while large epidemiological studies later found no causal link between MMR vaccination and autism. In the United Kingdom, MMR uptake dropped from above 90% to around 80% in some regions in the early 2000s, followed by measles resurgence events. The lesson is that selective evidence can shift public behavior long before stronger total-evidence reviews catch up.
Boundaries and Failure Modes
Not all evidence filtering is fallacious. Legitimate narrowing is necessary when criteria are pre-registered, methodologically justified, and consistently applied. Cherry Picking becomes likely when criteria are unstable, counterevidence is hidden, or only emotionally convenient examples are circulated.
Common Misconceptions
Good diagnosis requires distinguishing necessary focus from manipulative omission.
Any summary is Cherry Picking
No. Summaries are valid when scope and exclusion rules are explicit and reproducible.
Only bad-faith actors do this
No. Cognitive comfort and deadline pressure can produce selective evidence even without malicious intent.
More data always solves it
Not by itself. More data still fails if selection and reporting logic remain opaque.
Related Concepts
These pages help separate evidence quality problems from interpretation problems.
Texas Sharpshooter Fallacy
Drawing a target after finding a pattern.
Survivorship Bias Fallacy
Inferring only from survivors while the failures stay invisible.
Confirmation Bias
Favors information that supports existing beliefs.