Interactive Calculator · C-Store POS Fraud Detection Program

Fraud Impact & Detection ROI

Model fraud exposure, detection rates by category, and the financial impact of the ML-assisted detection program. All calculations update as you adjust parameters — start with your store parameters, then tune each fraud category.

Architecture: IForest + LOF + Rules (v1)  ·  CV: Phase 2 (reserved in formula)  ·  Scoring: per transaction  ·  Pilot: 200 stores
1. Portfolio Parameters — set your store baseline; all downstream figures update automatically
Pilot: 200 stores  ·  Full rollout target: 13,000

Estimated fraud rate: 1.5% — industry range 0.5–2% for C-store & quick-serve; calibrate after distribution analysis.
Baseline detection: 80% — % of fraud currently caught without ML.
ML detection: 85% — expected % of fraud caught with ML active. Must be ≥ baseline; drives "Additional Fraud Found by ML" directly.
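The parameter block above implies a simple linear model. A minimal sketch, assuming annual exposure = stores × average annual revenue per store × fraud rate (the revenue input's exact label and default are not shown in the page, so the $1.5M figure below is illustrative):

```python
# Sketch of the portfolio baseline math. Function and variable names are
# mine; the $1.5M average revenue default is illustrative, not the
# calculator's actual default.

def annual_fraud_exposure(stores, avg_annual_revenue, fraud_rate):
    """Total annual fraud exposure across the portfolio ($)."""
    return stores * avg_annual_revenue * fraud_rate

def ml_additional_detected(exposure, baseline_pct, ml_pct):
    """Additional fraud found by ML above the non-ML baseline ($)."""
    assert ml_pct >= baseline_pct, "ML detection must be >= baseline"
    return exposure * (ml_pct - baseline_pct)

exposure = annual_fraud_exposure(stores=200, avg_annual_revenue=1_500_000,
                                 fraud_rate=0.015)   # pilot: 200 stores
lift = ml_additional_detected(exposure, baseline_pct=0.80, ml_pct=0.85)
# exposure -> 4_500_000.0; lift -> roughly 225_000
```

Note that a 5-point detection gap (80% → 85%) translates the whole exposure, so small rate changes move the lift figure materially.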
2. Portfolio Overview
3. Fraud Category Breakdown — adjust share, baseline detection %, and ML detection % per category
Columns: Category · Share · Annual $ · Detection (Baseline → ML)
Share sliders are proportional — categories auto-normalize to their relative weight. ML detection % represents total caught with the ML system active (must be ≥ baseline).
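The auto-normalization rule can be sketched as a rescale of raw slider weights so the shares always sum to 1. The category names and weights below are made-up examples, not the calculator's defaults:

```python
# Proportional share normalization: raw slider values are rescaled so the
# category shares sum to 1.0, preserving their relative weight.

def normalize_shares(raw_weights):
    """Rescale raw slider values into proportional shares."""
    total = sum(raw_weights.values())
    return {cat: w / total for cat, w in raw_weights.items()}

raw = {"Sweethearting": 30, "Voids": 20, "Refund abuse": 25, "No-sale": 25}
shares = normalize_shares(raw)
# shares["Sweethearting"] -> 0.3; all shares sum to 1.0
```

Because the rescale is proportional, dragging one slider up implicitly dilutes every other category's share rather than pushing the total past 100%.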
Annual Fraud Exposure by Category ($)
4. Detection Status per Category — annual dollars: baseline detected · ML additional · still undetected
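The three bands in this chart can be derived per category from its annual dollars and the two detection rates, under the stated rule that the ML % is the total caught, not the incremental lift. A sketch (function name is mine):

```python
# Split one category's annual fraud dollars into the three chart bands.
# ml_pct is the TOTAL caught with the ML system active, so the ML band is
# the difference from baseline, not ml_pct itself.

def detection_status(category_dollars, baseline_pct, ml_pct):
    baseline_detected = category_dollars * baseline_pct
    ml_additional = category_dollars * (ml_pct - baseline_pct)  # lift only
    still_undetected = category_dollars * (1.0 - ml_pct)
    return baseline_detected, ml_additional, still_undetected

bands = detection_status(1_000_000, baseline_pct=0.80, ml_pct=0.85)
# roughly (800_000, 50_000, 150_000); the bands always sum to the exposure
```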
5. Financial Impact & Program ROI
System cost ($) — compute, licensing, AP analyst time; per store, per month.
Total Portfolio Cost — per month across all stores.
Baseline Detected — fraud flagged without the ML system
ML Additional Detected — additional fraud flagged above baseline
Net Program Benefit — ML additional detected minus system cost
System Cost — total program cost for the period
Program ROI — net benefit ÷ system cost
Payback Period — months to break even
Per-Store Net — net benefit per store
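The tiles above chain together as follows. This is a hedged sketch: the payback definition used here (months of ML lift needed to recoup one year of system cost) is an assumption, since the calculator's exact formula is not visible in the page text, and the $50/store/month cost and $225K annual lift are illustrative inputs:

```python
# Hedged sketch of the ROI tile math. Inputs and the payback definition are
# assumptions; calibrate against the calculator's actual formulas.

def program_roi(ml_additional_annual, cost_per_store_month, stores):
    annual_cost = cost_per_store_month * stores * 12       # System Cost
    net_benefit = ml_additional_annual - annual_cost       # Net Program Benefit
    roi = net_benefit / annual_cost                        # Program ROI
    monthly_lift = ml_additional_annual / 12
    payback_months = annual_cost / monthly_lift            # Payback Period
    per_store_net = net_benefit / stores                   # Per-Store Net
    return {"annual_cost": annual_cost, "net_benefit": net_benefit,
            "roi": roi, "payback_months": payback_months,
            "per_store_net": per_store_net}

m = program_roi(ml_additional_annual=225_000, cost_per_store_month=50,
                stores=200)
# m["net_benefit"] -> 105000; m["roi"] -> 0.875; m["payback_months"] -> 6.4
```

At these illustrative inputs the program clears its cost with ~0.88x return and pays back in under seven months; a higher per-store cost or a smaller detection gap shrinks both figures quickly.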
About the estimates
Detection rates are starting-point estimates for program planning — calibrate against real data distributions before setting targets. ML detection % per category reflects total caught with the system active, not just the incremental lift. Sweethearting shows limited ML lift in v1 — CV (Phase 2) is required for meaningful detection of that scheme.
Estimates only — calibrate with real data before program reporting