Fact-check: Trump keeps claiming noncitizen voting is a big problem. It's not
NPR's October 12, 2024 fact check on noncitizen-voting claims is a good case study in the gap between isolated anecdotes and population-level conclusions: a few suspicious stories can feel decisive even when base rates and verified counts point the other way. The fallacy at work is the Texas sharpshooter fallacy: after seeing the results, someone highlights the data cluster that supports a favored story, then treats that hand-picked pattern as if it had been the tested target all along. It is close to cherry picking, but the emphasis is on drawing the target around the bullet holes after the shots land. A better analysis remembers that post hoc pattern-hunting can make randomness, noise, or mixed data look like confirmation.
NPR · 2024-10-12
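The anecdote-versus-base-rate gap is ultimately arithmetic, and can be sketched in a few lines. Every number below is an invented placeholder, not a figure from the NPR piece:

```python
# Hypothetical illustration: even a seemingly large pile of anecdotes can be a
# vanishing share of the electorate. All numbers are invented for this sketch.
flagged_anecdotes = 150          # suspicious cases circulated online (hypothetical)
confirmed_cases = 9              # cases surviving verification (hypothetical)
ballots_cast = 150_000_000       # rough scale of a US presidential electorate

anecdote_rate = flagged_anecdotes / ballots_cast
confirmed_rate = confirmed_cases / ballots_cast

# Both rates round to zero at any policy-relevant precision.
print(f"anecdote rate:  {anecdote_rate:.10f}")
print(f"confirmed rate: {confirmed_rate:.10f}")
```

The point is not the specific counts but the ratio: a pile of anecdotes that feels large in a news feed can still be orders of magnitude below any rate that would affect an outcome.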
Survivorship bias
Britannica's overview of survivorship bias, especially its retelling of Abraham Wald's aircraft analysis, is a strong historical case of a visible sample misleading people about the full set. It earns its keep anywhere a page needs a real example of selection effects masquerading as a complete picture. The sharpshooter dynamic is the same: the returning planes are the cluster everyone can see, and the missing planes are the bullet holes nobody drew a target around.
Britannica · 2026-01-01
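Wald's insight can be sketched as a toy simulation. The zones and fatality probabilities below are invented for illustration: hits land uniformly across three zones, but engine hits are disproportionately fatal, so the surviving sample under-reports them:

```python
import random

random.seed(0)

ZONES = ["fuselage", "wings", "engine"]
FATALITY = {"fuselage": 0.1, "wings": 0.1, "engine": 0.8}  # invented probabilities

surviving_hits = {z: 0 for z in ZONES}
all_hits = {z: 0 for z in ZONES}

for _ in range(10_000):                   # one hit per sortie, for simplicity
    zone = random.choice(ZONES)           # hits are uniform across zones
    all_hits[zone] += 1
    if random.random() > FATALITY[zone]:  # plane returns; hit becomes observable
        surviving_hits[zone] += 1

# Naive reading of the returning planes: the engine looks rarely hit.
# In truth, engine hits are just rarely *seen*, because those planes went down.
print("hits on survivors:", surviving_hits)
print("all hits fired:   ", all_hits)
```

The naive recommendation (armor where the survivors show holes) is exactly backwards; Wald's recommendation follows from reasoning about the planes that never came back.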
Economic and election commentary often selects the time window, county, or subgroup that flatters the preferred narrative after the fact, then presents that slice as the decisive measure. This is the sharpshooter move in miniature: the analyst fires first, inspects where the holes cluster, and only then declares that slice the target.
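The post-hoc slice can be simulated directly: generate pure noise across many subgroups, then headline only the most extreme one. The county labels are invented and there is no real effect in the data:

```python
import random

random.seed(42)

# Pure noise: 30 "counties", each with 200 fair coin flips (no real effect).
counties = {f"county_{i}": sum(random.random() < 0.5 for _ in range(200))
            for i in range(30)}

# Post-hoc selection: scan every county, then headline the biggest outlier.
best = max(counties, key=counties.get)
rate = counties[best] / 200
print(f"{best} swung {rate:.0%} one way!")  # reads like a story; it's noise

# The honest summary: the pooled rate across all counties sits near 50%.
pooled = sum(counties.values()) / (30 * 200)
print(f"pooled rate: {pooled:.1%}")
```

With 30 subgroups to scan, the most extreme one is essentially guaranteed to look notable; the only honest test is on a window or subgroup chosen before looking at the results.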
AI benchmarking and startup marketing sometimes advertise the few tasks where a model shines while leaving out the wider distribution of tasks where results are weaker or unstable. Again the target is drawn after the shots: the favorable tasks were chosen because they scored well, not because they were fixed in advance as the test.
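The same selection effect can be reproduced with synthetic scores: evaluate a mediocre "model" on many tasks, then quote only its top few. Everything below is invented; the per-task accuracies are random draws, not real benchmark results:

```python
import random

random.seed(7)

# Synthetic per-task accuracies for a mediocre model: mean ~0.6, noisy,
# clamped to [0, 1]. Task names are placeholders.
tasks = {f"task_{i:02d}": min(1.0, max(0.0, random.gauss(0.6, 0.15)))
         for i in range(40)}

# The press release: the three best tasks, chosen after seeing the results.
headline = sorted(tasks.values(), reverse=True)[:3]
print("headline scores:", [f"{s:.2f}" for s in headline])

# The full picture: the average over all 40 tasks tells a duller story.
mean_score = sum(tasks.values()) / len(tasks)
print(f"mean over all tasks: {mean_score:.2f}")
```

The gap between the headline scores and the full-distribution mean is the sharpshooter gap; asking for the complete task list, or for results on a pre-registered suite, closes it.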