
Common Backtesting Mistakes (and How to Avoid False Edges)

Leena Shah
28 Jan 2026
8 min read
#Backtesting #Data Quality #Overfitting #Walk-Forward #Slippage #Commission

Backtests fail when assumptions are generous. If you ignore fees, borrow costs, partial fills, and market impact, you will overstate performance. Survivorship bias and lookahead errors are quieter killers that only surface when live P&L diverges sharply from the research notebook. Without reproducible seeds, environment captures, and clear data provenance, every rerun becomes a different experiment.
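
Lookahead in particular is cheap to test for. Below is a minimal sketch in Python; the toy price series and momentum rule are illustrative only, but they show how trading a signal on the same bar that produced it inflates results, and how a one-bar shift removes the peek.

```python
import numpy as np
import pandas as pd

# Toy price series with a fixed seed so the comparison is reproducible.
rng = np.random.default_rng(seed=42)
prices = pd.Series(100 * np.exp(np.cumsum(rng.normal(0, 0.01, 500))), name="close")
returns = prices.pct_change()

# Illustrative momentum signal computed from the same bar's close.
signal = (prices > prices.rolling(20).mean()).astype(int)

# Lookahead: today's position uses today's close, which isn't known until the bar ends.
pnl_with_lookahead = (signal * returns).sum()

# Honest: shift the signal one bar so today's position comes from yesterday's information.
pnl_without_lookahead = (signal.shift(1) * returns).sum()

print(f"with lookahead: {pnl_with_lookahead:.3f}  without: {pnl_without_lookahead:.3f}")
```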

Red Flags in Research

Watch for these signals before trusting a backtest:

  • Equity curves that never show sideways periods or meaningful drawdowns
  • Parameter sets tuned to a single symbol or short date range with no validation set
  • Trades that assume instant fills at mid-price with zero queue position
  • Data with no corporate action handling, bad ticks, or timezone normalization (a quick sanity check is sketched after this list)
  • Performance jumps that coincide with data vendor changes or missing delisted symbols
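
The data-quality red flag above can be caught early with a simple sanity pass. This is a minimal sketch, assuming a pandas DataFrame of bars with a DatetimeIndex and open/high/low/close columns; the checks are illustrative rather than exhaustive.

```python
import pandas as pd

def sanity_check(bars: pd.DataFrame) -> dict:
    """Quick data-quality pass over a bar DataFrame with a DatetimeIndex."""
    return {
        # Duplicate or out-of-order timestamps usually mean a bad merge or vendor change.
        "duplicate_timestamps": int(bars.index.duplicated().sum()),
        "non_monotonic_index": not bars.index.is_monotonic_increasing,
        # Naive timestamps make session alignment across venues unreliable.
        "naive_timestamps": bars.index.tz is None,
        # Non-positive prices and high < low bars are almost always bad ticks.
        "nonpositive_prices": int((bars[["open", "high", "low", "close"]] <= 0).any(axis=1).sum()),
        "inverted_bars": int((bars["high"] < bars["low"]).sum()),
    }
```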

Make Backtests More Honest

Model realistic friction: borrow fees on short positions, maker/taker fees for crypto, and slippage that scales with volatility. Use walk-forward testing with unseen periods, and validate on different instruments or venues to avoid memorizing a regime. Enforce order throttles and queue-depth assumptions so your backtest cannot assume passive orders fill when their queue position means they wouldn't realistically execute.
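
As a rough sketch of what volatility-scaled slippage and explicit carry costs can look like: the fee and borrow rates below are placeholders, not any particular venue's schedule, and the scaling factor is an assumption to calibrate against your own fills.

```python
# Placeholder cost assumptions; substitute your actual broker/venue schedule.
TAKER_FEE = 0.0007          # 7 bps taker fee, e.g. a mid-tier crypto venue
BORROW_RATE_ANNUAL = 0.03   # 3% annualized borrow cost on short positions

def estimate_trade_cost(price: float, qty: float, daily_vol: float, k: float = 0.1) -> float:
    """Per-trade friction: exchange fee plus slippage that scales with recent volatility.

    daily_vol is recent daily return volatility (e.g. a 20-day std of returns);
    k is an assumed fraction of that volatility paid when crossing the spread.
    """
    notional = abs(price * qty)
    fee = notional * TAKER_FEE
    slippage = notional * k * daily_vol
    return fee + slippage

def daily_borrow_cost(short_notional: float) -> float:
    """Carry cost of holding a short position for one trading day."""
    return abs(short_notional) * BORROW_RATE_ANNUAL / 252
```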

"If a strategy only works in backtests that ignore fees and impact, it doesn’t work."

Once live, compare fill prices, slippage, and latency to the backtest assumptions. Close the loop quickly: adjust models, tighten limits, and keep a log of research changes so you can reproduce every decision. Maintain a changelog of data updates, parameter tweaks, and environment versions so you can trace any divergence to a concrete cause.
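
One lightweight way to close that loop is to compute realized slippage per order against arrival price and compare it to the figure the backtest charged. A minimal sketch, with illustrative column names and an assumed 5 bps backtest slippage:

```python
import pandas as pd

ASSUMED_SLIPPAGE_BPS = 5.0  # whatever the backtest charged per trade (illustrative)

def slippage_report(fills: pd.DataFrame) -> pd.Series:
    """Realized slippage per order in basis points, signed so positive = cost.

    Expects columns: 'side' (+1 buy, -1 sell), 'arrival_price', 'fill_price'.
    """
    slip_bps = (
        fills["side"]
        * (fills["fill_price"] - fills["arrival_price"])
        / fills["arrival_price"]
        * 1e4
    )
    excess = slip_bps - ASSUMED_SLIPPAGE_BPS
    return pd.Series(
        {
            "median_realized_bps": slip_bps.median(),
            "p95_realized_bps": slip_bps.quantile(0.95),
            "median_excess_vs_backtest_bps": excess.median(),
        }
    )
```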

Practical Backtest Hygiene

Habits that keep research credible:

  • Freeze and version datasets; rerun baselines whenever the dataset changes
  • Use commission/fee schedules that match your broker tier, not marketing numbers
  • Stress-test with worse-than-expected slippage and check if the edge survives
  • Compare fills to benchmarks like VWAP or arrival price and chart the drift
  • Store seeds and package versions in the notebook so results are reproducible (a capture sketch follows this list)
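
A small capture step like the one below makes the last habit concrete. The manifest format and package list are illustrative, not a standard; extend them to whatever your research stack actually imports.

```python
import json
import platform
import random
import sys
from importlib import metadata

import numpy as np

SEED = 20260128  # record the seed actually used in the run

def capture_run_environment(path: str = "run_manifest.json") -> dict:
    """Fix the random seeds and write a manifest of the environment next to the results."""
    random.seed(SEED)
    np.random.seed(SEED)
    manifest = {
        "seed": SEED,
        "python": sys.version,
        "platform": platform.platform(),
        "packages": {
            pkg: metadata.version(pkg)
            for pkg in ("numpy", "pandas")  # extend with whatever your stack uses
        },
    }
    with open(path, "w") as f:
        json.dump(manifest, f, indent=2)
    return manifest
```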