Logical Fallacies

LogFall

A practical logical-fallacies reference with clear explanations, usable examples, and teaching tools.

Fallacy profile

Sharpshooter fallacy

Drawing the target around the bullet holes: a pattern found only after the results are in is passed off as the prediction that was being tested.

Evidential · Conceptual

Definition

Occurs when someone highlights the data cluster that supports a favored story only after looking at the results, then treats that hand-picked pattern as if it had been the tested target all along.

Illustrative example

I examined dozens of neighborhoods, found the one where crime fell after my program launched, and declared the program a proven success.
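The neighborhoods example can be simulated. The sketch below uses illustrative numbers, not real crime data: it gives 50 neighborhoods a purely random year-over-year change in crime, so the "program" has no effect at all, then hand-picks the single best neighborhood the way the speaker does.

```python
import random
import statistics

random.seed(0)  # fixed seed so the illustration is reproducible

# 50 neighborhoods, each with a random year-over-year change in crime
# (mean 0%, standard deviation 10%): the program does nothing here.
changes = [random.gauss(0, 10) for _ in range(50)]

average_change = statistics.mean(changes)  # hovers near 0: no real effect
best_change = min(changes)                 # the hand-picked "success story"

print(f"average change across all 50 neighborhoods: {average_change:+.1f}%")
print(f"best single neighborhood:                   {best_change:+.1f}%")
```

Even with no real effect anywhere, the single best neighborhood out of 50 will typically show a double-digit drop; quoting only that one makes pure noise look like proof.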

Teaching gauges

These 0-100 gauges are teaching aids for comparing fallacies. They are editorial classroom estimates, not measured statistics.

Common in today's rhetoric: 70/100 (Very common). Appears regularly in everyday public rhetoric.

Easy to spot: 45/100 (Tricky). Often hides inside wording, framing, or technical detail.

Easy to innocently commit: 85/100 (Almost automatic). Very easy for well-meaning people to commit without noticing.

Difficulty: 55/100 (Intermediate). Needs some practice with categories, evidence, or debate structure.

High school · Scientific reasoning

Reference

Family

Statistical/Sampling Fallacy

The reasoning misuses rates, probabilities, samples, distributions, or other quantitative expectations.

Quick check

What evidence is missing, selected, or overstretched here?

Why it misleads

A fuller explanation of how the fallacy works and why it can look persuasive.

This is close to cherry picking, but the emphasis is on drawing the target around the bullet holes after the shots land. Post hoc pattern-hunting can make randomness, noise, or mixed data look like confirmation.
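The danger of post hoc pattern-hunting can be put in numbers: if you scan many unrelated comparisons after the fact, the odds that at least one crosses a conventional significance bar by luck alone grow quickly. A minimal sketch, assuming independent tests with no real effect anywhere:

```python
def chance_of_spurious_hit(n_tests: int, alpha: float = 0.05) -> float:
    """Probability that at least one of n independent null tests
    comes out 'significant' at the alpha level by chance alone."""
    return 1 - (1 - alpha) ** n_tests

for n in (1, 5, 20, 50):
    print(f"{n:>2} comparisons -> {chance_of_spurious_hit(n):.0%} "
          f"chance of a spurious 'finding'")
```

At 20 after-the-fact comparisons the chance of at least one spurious hit is already about 64%, and at 50 it is over 90%, which is why a pattern chosen after scanning the data proves far less than the same pattern predicted in advance.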

That's like saying...

Instead of leading with the label, this analogy answers the shape of the reasoning move directly so the mistake is easier to see in plain language.

Fallacious claim

I examined dozens of neighborhoods, found the one where crime fell after my program launched, and declared the program a proven success.

That's like saying...

That's like firing bullets at a barn and painting the bullseye afterward around the tightest cluster. A hand-picked pattern is being treated as if it were the tested target all along.

Caveat

This label is easy to overuse. The point here is not to call every weak argument by this name, but to reserve it for the exact misstep it describes.

Common misapplication

Do not use this label simply because the evidence is incomplete. It applies when the argument claims more support than the evidence has actually earned.

Use the label only when...

Use this label only when someone highlights the data cluster that supports a favored story only after looking at the results, then treats that hand-picked pattern as if it had been the tested target all along. If the real problem is that a claim is treated as true, reasonable, or justified mainly because many people believe it, share it, or act on it, the better label is Argumentum ad populum.

Often confused with

These near neighbors are easy to mix up, so use the comparison to see the exact difference.

Comparison

Argumentum ad populum

Why people mix them up: Both often look like evidential and conceptual mistakes at first glance.

Exact difference: Sharpshooter fallacy happens when someone highlights the data cluster that supports a favored story only after looking at the results, then treats that hand-picked pattern as if it had been the tested target all along. Argumentum ad populum happens when a claim is treated as true, reasonable, or justified mainly because many people believe it, share it, or act on it.

Quick split: What evidence is missing, selected, or overstretched here? Then compare it with: Is the claim being accepted mainly because many people believe it, share it, or act on it?

Comparison

Denying a remote hypothetical

Why people mix them up: Both often look like evidential and conceptual mistakes at first glance.

Exact difference: Sharpshooter fallacy happens when someone highlights the data cluster that supports a favored story only after looking at the results, then treats that hand-picked pattern as if it had been the tested target all along. Denying a remote hypothetical happens when a hypothetical test case is dismissed as irrelevant merely because it is rare, extreme, or unlikely, even though the principle under debate is supposed to be universal.

Quick split: What evidence is missing, selected, or overstretched here? Then compare it with: Is a hypothetical test case being dismissed merely because it is rare or unlikely, even though the principle under debate is supposed to be universal?

Practice And Repair

Extra teaching tools that show why the fallacy is persuasive, what to look for, and how to correct it.

Why it matters

Why this mistake matters

The Sharpshooter fallacy threatens sound reasoning because it lets a data cluster that supports a favored story, spotted only after the results are in, be treated as if it had been the tested target all along.

Main reasoning problem

Someone highlights the data cluster that supports a favored story only after looking at the results, then treats that hand-picked pattern as if it had been the tested target all along.

Why this kind of mistake matters

It overstates, understates, cherry-picks, or misallocates the force of evidence.

Check yourself

The assessment area now uses mixed 10-question sets, so the fallacy is not announced in the title before the quiz begins.

What the assessment does

You will work through a mixed set of fallacy-identification questions. Quiz links from this fallacy page quietly include the Sharpshooter fallacy among nearby look-alikes, without announcing the answer in the page title.

Questions to ask

Use these category-based prompts to audit similar arguments.

Prompt 1

What evidence is missing, selected, or overstretched here?

Prompt 2

Are the categories being used carefully, or are unlike things being treated as alike?

Case studies

Each case study explains why the example fits the fallacy and links back to its source whenever source information is available.

Fact-check: Trump keeps claiming noncitizen voting is a big problem. It's not

NPR's October 12, 2024 fact check on noncitizen-voting claims is a good case study in the gap between isolated anecdotes and population-level conclusions. It shows how a few suspicious stories can feel decisive even when the base rates and verified counts point the other way. The fallacy here is Sharpshooter fallacy: someone highlights the data cluster that supports a favored story only after looking at the results, then treats that hand-picked pattern as if it had been the tested target all along. That matters here because this is close to cherry picking, but the emphasis is on drawing the target around the bullet holes after the shots land. A better analysis would remember that post hoc pattern-hunting can make randomness, noise, or mixed data look like confirmation.

NPR · 2024-10-12

Survivorship bias

Britannica's overview of survivorship bias, especially its retelling of Abraham Wald's aircraft analysis, is a strong historical case of the visible sample misleading people about the full set. It earns its keep anywhere a page needs a real example of selection effects masquerading as a complete picture. The sharpshooter move here is historical: conclusions were being drawn from the cluster of damage visible on surviving aircraft, while the planes that never returned, precisely the data that would have redrawn the target, were missing from the sample.

Britannica · 2026-01-01

Economic and election commentary often selects the time window, county, or subgroup that flatters the preferred narrative after the fact, then presents that slice as the decisive measure. Choosing the frame only after seeing the results is the sharpshooter move: the target is drawn around wherever the data happened to land, so noise and mixed results read as confirmation.

AI benchmarking and startup marketing sometimes advertise the few tasks where a model shines while leaving out the wider distribution of tasks where the results are weaker or unstable. Reporting only the best-performing slice after the fact treats a hand-picked cluster of wins as if it had been the tested target all along.

Related fallacies

Nearby entries chosen by shared categories and family resemblance.