🟡 Intermediate • Lesson 32 of 82

Backtesting Reality: That Perfect Expectancy Is Lying to You

16 min read • Strategy Validation

Your backtest: 8R expectancy. 3.5 profit factor. $50,000 profit on $10,000 account.

You go live. First week: -2R expectancy. 0.4 profit factor. Account down 12%.

What happened? Your backtest lied.

🚨 Real Talk

Most backtests are pure fantasy. They assume perfect fills, zero slippage, no spreads, and magical entries that don't exist in live markets.

If your backtest looks too good to be true, it is. You curve-fitted to noise, not signal.

In this lesson, you'll learn:

  • Why most backtests fail when taken live
  • The 4 deadly sins: overfitting, look-ahead bias, unrealistic costs, survivorship bias
  • How to model slippage and spreads that actually reflect reality
  • In-sample vs. out-of-sample testing (the only validation that matters)
⚡ Quick Wins for Tomorrow
  1. Add realistic costs to your backtest — Include 0.03-0.05% slippage per trade + actual spread. If results collapse, your edge was fake.
  2. Split your data 70/30 — Optimize on 70% of data, validate on 30% you've never seen. If out-of-sample fails, you curve-fitted.
  3. Paper trade for 30 days before live — If paper results differ significantly from backtest, something's wrong. Don't go live until they match.

Real-World Example: How a "Perfect" +1.3R Expectancy Backtest Became a -1.2R Live Disaster

Background: In March 2024, Chris developed a breakout strategy after 6 months of optimization. The backtest looked incredible: +1.3R expectancy, 4.9 profit factor, and a 67% win rate on 2 years of SPY data (2022-2023). They deployed $50,000 in April 2024. By June 2024, the account was down $8,400 (-16.8%). Here's the complete autopsy.

The "Perfect" Backtest (2022-2023 Data)
Backtest Results (Jan 2022 - Dec 2023, 2 Years)

| Metric | Value | Chris's Thoughts |
| --- | --- | --- |
| Total Trades | 156 | Good sample size |
| Win Rate | 67.3% (105W / 51L) | "Amazing! Way above average." |
| Avg Win / Avg Loss | +2.4R / -1R | "Excellent R:R, winners are 2.4× losers." |
| Profit Factor | 4.9 | "Professional-grade number." |
| Expectancy | +1.3R per trade | "Holy grail territory." |
| Max Drawdown | -8.2% | "Tiny drawdown. Extremely safe." |
| Total Return (2 years) | +$42,800 (+428%) | "This is the one. I'm quitting my job." |

Chris's optimization process:

  • Tested 50 different EMA periods (found EMA(34) "perfect")
  • Tested 20 ATR multipliers for stops (found 1.47× ATR "optimal")
  • Tested 15 profit target levels (found 2.8R "best")
  • Final strategy: EMA(34) crossover + 1.47× ATR stop + 2.8R target
  • Total combinations tested: 50 × 20 × 15 = 15,000 variations

What Chris didn't include in the backtest:

  • Slippage (assumed perfect fills at exact prices)
  • Bid-ask spread (assumed zero spread)
  • Commissions (forgot to model them)
  • Order rejection (assumed 100% fill rate)
  • Out-of-sample testing (never tested on unseen data)
Live Trading Reality (April-June 2024, 10 Weeks)

Live Trading Results (10 Weeks, $50K Account)

| Metric | Backtest | Live Reality | Difference |
| --- | --- | --- | --- |
| Win Rate | 67.3% | 51.4% (19W / 18L) | 15.9 points worse |
| Avg Win | +2.4R (+$1,200) | +1.8R (+$900) | 25% smaller |
| Avg Loss | -1R (-$500) | -1.3R (-$650) | 30% larger |
| Profit Factor | 4.9 | 0.88 | 82% collapse |
| Expectancy | +1.3R | -1.2R | 2.5R swing: NEGATIVE |
| Max Drawdown | -8.2% | -21.4% | 2.6× worse |
| Total Return (10 weeks) | +$9,600 projected | -$8,400 (-16.8%) | $18K swing |
The Complete Cost Breakdown: Where $18K Disappeared

Anatomy of the Performance Gap (37 Trades, 10 Weeks)

| Cost Category | Description | Impact |
| --- | --- | --- |
| 1. Overfitting (Curve-Fit) | EMA(34) + 1.47× ATR worked perfectly on 2022-2023 data but was optimized to historical noise. 2024 market conditions = different noise. | -$6,200 |
| 2. Slippage (Not Modeled) | Average $0.08 slippage per option contract × 100 multiplier × 20 contracts × 37 trades = $5,920 in slippage costs | -$5,920 |
| 3. Bid-Ask Spread | SPY spread $0.01 × 500 shares avg × 37 trades = $185. Options (used for leverage) quoted $0.05-$0.15 wide; realized spread cost $2,220 | -$2,405 |
| 4. Commissions | Options: $0.65 per contract × 20 contracts × 37 trades × 2 (round-trip) = $962 | -$962 |
| 5. Order Rejections | 8 limit orders never filled (5 missed winners, 3 missed losers). Net opportunity cost: 5 winners @ +1.8R = +9R missed = $4,500 | -$4,500 |
| 6. Look-Ahead Bias | Backtest signaled on the candle close and assumed a fill at that same close (impossible live). Real entries = next candle open, averaging 0.3% worse = $2,775 degradation | -$2,775 |
| 7. Execution Delays | Platform latency + order review time = 2-8 second delay. Fast-moving breakouts moved 0.15% average before fills = $1,387 slippage from delays | -$1,387 |

TOTAL DEGRADATION FROM BACKTEST: -$24,149

The Math:

  • Backtest projected profit (10 weeks): +$9,600
  • Minus degradation costs: -$24,149
  • Expected live result: -$14,549
  • Actual live result: -$8,400 (better than expected because Chris closed the strategy early)
What Should Have Been Done: The Proper Validation

| Step | What Chris Did (Wrong) | What Should Be Done (Right) |
| --- | --- | --- |
| 1. Data Split | Used all 2022-2023 data for optimization | Use 2022 for optimization, hold out 2023 for out-of-sample test |
| 2. Optimization | Tested 15,000 parameter combinations | Use standard parameters (EMA 20/50, 2× ATR stops). Limit testing to <10 variations |
| 3. Cost Modeling | Assumed zero costs | Model 0.10% slippage + $0.01 spread + $2 commission per trade = -$30/trade minimum |
| 4. Look-Ahead Check | Used same-candle close for entry signal and execution | Signal on close[0], execution on open[1] (next candle). Realistic timing. |
| 5. Out-of-Sample Test | Never tested on unseen data | Test on 2023 data (untouched). If performance degrades >30%, strategy is overfit |
| 6. Walk-Forward | Single backtest period | 6-month optimization windows, test on next 3 months. Repeat rolling forward. Check consistency. |
The Out-of-Sample Test Chris Should Have Done

Proper method:

  1. In-sample period: Optimize on 2022 data only (don't touch 2023)
  2. Find parameters: EMA(34) + 1.47× ATR (same result, but only using 2022)
  3. Lock parameters: NO MORE CHANGES. Parameters are frozen.
  4. Out-of-sample test: Run exact same parameters on 2023 data
  5. Compare results: If 2023 performance is <70% of 2022, strategy is overfit

What would have happened:

| Period | Expectancy | Profit Factor | Verdict |
| --- | --- | --- | --- |
| 2022 (in-sample) | +1.8R | 3.6 | Optimized performance |
| 2023 (out-of-sample) | +0.5R | 1.4 | ⚠️ 72% degradation |

Conclusion: Strategy is clearly overfit. 2023 expectancy is only 28% of 2022's. With realistic costs (-$30/trade ≈ -0.06R at $500 risk per trade), net out-of-sample expectancy ≈ +0.44R: barely profitable, nowhere near the in-sample numbers. DO NOT DEPLOY.

The lesson: Chris's backtest was a fantasy. Testing 15,000 parameter combinations guarantees you'll find one that fits historical noise perfectly. The "perfect" +1.3R expectancy was curve-fit garbage. Out-of-sample testing would have revealed this before $8,400 was lost. Backtesting isn't about finding the best historical performance; it's about finding what will work going forward. And that requires unseen data, realistic costs, and zero look-ahead bias.

Part 1: The Four Ways Backtests Lie

Why Your Perfect Strategy Will Fail Live

Let's start with the uncomfortable truth: If you optimized a strategy to fit historical data, you didn't find an edge. You found noise.

Curve-Fitting to Random Noise

What it is: Optimizing parameters until backtest looks perfect

Example:

  • You test 100 different EMA periods (10, 11, 12... 110)
  • EMA(47) gives 4.2R expectancy in backtest
  • You think you found the "magic number"

Reality: EMA(47) just happened to fit that specific data set. Forward test? -0.8R expectancy. You learned the noise, not the signal.

Fix: Limit optimization. Use standard parameters. Test on unseen data.
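
To feel how easy this trap is, here's a minimal Python sketch (illustrative only, not this lesson's strategy code): it sweeps 100 EMA periods over a synthetic random walk, where no edge exists by construction, and a "best" period still pops out.

import numpy as np

# Synthetic random-walk prices: by construction, there is NO edge to find
rng = np.random.default_rng(42)
prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 2000)))

def ema(x, period):
    alpha = 2 / (period + 1)
    out = np.empty_like(x)
    out[0] = x[0]
    for i in range(1, len(x)):
        out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
    return out

# Sweep 100 EMA periods and keep the best in-sample performer
best_period, best_ret = None, -np.inf
for period in range(10, 110):
    in_market = prices > ema(prices, period)   # long while price is above its EMA
    daily = np.diff(np.log(prices))
    ret = (daily * in_market[:-1]).sum()       # log-return captured while long
    if ret > best_ret:
        best_period, best_ret = period, ret

print(f"'Best' EMA period: {best_period} ({best_ret:.1%})")
# Some period always "wins" on pure noise. That winner IS the curve-fit.

Whatever number prints, the point stands: a parameter sweep always crowns a champion, even when there is nothing to find.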

Using Future Data (Impossible in Live Trading)

What it is: Code that "peeks" into future candles to make current decisions

Example:

# This lesson's index convention: [0] = current candle, [n] = n candles AHEAD
if close[0] > high[5]:   # compares against a high 5 candles in the FUTURE
    enter_long()         # impossible live: that candle hasn't happened yet

The problem: In backtesting, you have future data. In live trading, you don't.

Reality: This code literally cannot execute live. Your backtest is testing a strategy that doesn't exist.

Fix: Only use data available at the moment of decision.
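
A minimal sketch of look-ahead-safe logic, assuming OHLC bars in a pandas DataFrame (the column names are illustrative): the signal is computed from completed candles only, and the fill is taken at the next bar's open, matching the "signal on close[0], execute on open[1]" rule used throughout this lesson.

import pandas as pd

def add_signals(bars: pd.DataFrame) -> pd.DataFrame:
    """bars: one row per candle, with 'open', 'high', 'close' columns."""
    bars = bars.copy()
    # Breakout level from COMPLETED candles only: shift(1) excludes the
    # current bar, so this is the high of the previous 5 closed candles.
    breakout_level = bars["high"].shift(1).rolling(5).max()
    bars["signal"] = bars["close"] > breakout_level
    # Execute at the NEXT candle's open: the first price actually tradable
    # after the signal candle has closed.
    bars["fill_price"] = bars["open"].shift(-1).where(bars["signal"])
    return bars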

Assuming Perfect Fills at Perfect Prices

Backtest assumption: "I buy at $100.00 exactly"

Reality: Market order fills at $100.12 (spread + slippage)

Impact over 100 trades (100 shares each):

  • Backtest profit: $5,000
  • Spread cost ($0.10/share = $10/trade): -$1,000
  • Slippage ($0.05/share = $5/trade): -$500
  • Commissions ($2/trade): -$200

Reality: Real profit = $3,300 (34% less than backtest)

Testing Only on Survivors

What it is: Backtesting on current S&P 500 stocks only

The problem: You're excluding 200+ stocks that got delisted/bankrupted

Example:

  • Backtest on 2024 S&P 500 list: 1.8R expectancy, 2.1 profit factor
  • But a strategy trading the index in 2020 would also have held names that later went bankrupt or were delisted, like Hertz (Chapter 11 in 2020), that never made it into today's list

Reality: Your expectancy excludes catastrophic losses from delistings. Real expectancy: -0.2R (losing system).
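
The fix requires point-in-time index membership. A hedged sketch, assuming you can source membership history as (symbol, added, removed) records (the data layout and column names here are hypothetical):

import pandas as pd

def universe_on(date, membership: pd.DataFrame):
    """Symbols actually in the index on `date`, including names that were
    later delisted. `membership` columns: symbol, added, removed
    (removed is NaT for stocks still in the index today)."""
    d = pd.Timestamp(date)
    live = membership[(membership["added"] <= d) &
                      (membership["removed"].isna() | (membership["removed"] > d))]
    return live["symbol"].tolist()

# Backtest each day against universe_on(day), never against today's index list.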

💡 The Aha Moment

A backtest isn't reality. It's a simulation. And if your simulation includes impossible assumptions, your live results will be impossible too.

Part 2: Modeling Realistic Costs

The Hidden Costs That Kill Strategies

Here's what most backtests ignore—and why yours is probably 20-40% too optimistic:

Step 1: Model Bid-Ask Spread

What it is: The difference between buy price and sell price

Example:

  • SPY bid: $450.00 / ask: $450.01 → Spread = $0.01
  • You buy: Pay $450.01 (half spread = $0.005)
  • You sell: Receive $450.00 (half spread = $0.005)

Cost per round trip: $0.01

For 1,000 shares: $0.01 × 1,000 = $10 per trade

100 trades: $1,000 in spread costs

Step 2: Model Slippage

What it is: Difference between intended price and actual fill

Realistic slippage by condition:

  • Normal liquidity: 0.01-0.02%
  • Volatile markets: 0.05-0.10%
  • Thin liquidity: 0.10-0.30%
  • Large position size: 0.20-0.50%

Conservative assumption: 0.05% per trade

$10,000 position × 0.05% = $5 slippage per side = $10 round trip

Step 3: Model Commissions

Example: Interactive Brokers

  • $0.005 per share, $1 minimum
  • 200 shares × $0.005 = $1 per side
  • Round trip = $2

100 trades: $200 in commissions

Step 4: Total Realistic Cost

Per trade (round trip):

  • Spread: $10
  • Slippage: $10
  • Commission: $2

Total: $22 per trade

If your strategy makes $50/trade gross → $28 net (44% reduction!)

Backtest Adjustment Example

Before vs. After Realistic Costs

Strategy: 100 trades, $50 avg profit per trade


Backtest (fantasy):

100 trades × $50 = $5,000 profit


After realistic costs:

  • Gross: $5,000
  • Spread: -$1,000
  • Slippage: -$1,000
  • Commissions: -$200

Net profit: $2,800 (44% reduction)
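
A minimal sketch of this arithmetic as a reusable helper, using the example's dollar figures as defaults (swap in your own spread, slippage, and commission estimates):

def net_pnl(gross_per_trade, n_trades,
            spread=10.0,       # round-trip spread, $ (1,000 shares x $0.01)
            slippage=10.0,     # round-trip slippage, $ (0.05% x $10K per side)
            commission=2.0):   # round-trip commission, $
    cost_per_trade = spread + slippage + commission    # $22 in this example
    return (gross_per_trade - cost_per_trade) * n_trades

print(net_pnl(50.0, 100))   # 2800.0 -- the 44% haircut shown above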

Part 3: In-Sample vs. Out-of-Sample

The Only Validation That Matters

Here's the professional way to validate a strategy:

Split your data into two periods.

  1. In-Sample (60-70%): Develop and optimize your strategy here
  2. Out-of-Sample (30-40%): Validate on UNSEEN data

🎯 Example: Seven Years of Data (2018-2024)

In-Sample: 2018-2022 (optimize here)

  • Test different parameters
  • Find best-performing rules
  • Refine entry/exit logic

Out-of-Sample: 2023-2024 (validate here)

  • Run strategy with ZERO changes
  • Compare performance to in-sample
  • If similar → Not overfit!
  • If much worse → Overfit to in-sample noise
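
A minimal sketch of the comparison step, assuming you re-run the frozen strategy and end up with a dated trade list holding per-trade results in R (the column names are illustrative):

import pandas as pd

def split_test(trades: pd.DataFrame, split_date: str):
    """trades: one row per trade, with 'date' and 'r_multiple' columns."""
    ins = trades[trades["date"] < split_date]     # in-sample: optimized here
    oos = trades[trades["date"] >= split_date]    # out-of-sample: never touched
    ins_exp = ins["r_multiple"].mean()
    oos_exp = oos["r_multiple"].mean()
    retained = oos_exp / ins_exp if ins_exp > 0 else float("nan")
    # This lesson's rule of thumb: retain < ~70% and you're likely overfit
    return ins_exp, oos_exp, ("OK" if retained >= 0.7 else "OVERFIT")

# e.g. split_test(trades, "2023-01-01") for the 2018-2024 example above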

Red Flags of Overfitting

🚩 Warning Signs Your Strategy Is Curve-Fit

  • Sharpe > 3.0: Too good to be true
  • Expectancy > 5R: Extremely rare (verify execution assumptions)
  • Only 20-30 trades: Sample size too small (statistically meaningless)
  • Perfect equity curve: Smooth line with no drawdowns (impossible)
  • Tested on 1 asset only: Likely fit to that specific regime
  • 10+ optimized parameters: You fit the noise, not the signal
Part 4: Walk-Forward Analysis

The Professional Validation Method

In-sample/out-of-sample is good. Walk-forward is better.

How it works:

📊 Walk-Forward Framework

Period 1: Train on 2018-2019 → Test on 2020

Period 2: Train on 2019-2020 → Test on 2021

Period 3: Train on 2020-2021 → Test on 2022

Period 4: Train on 2021-2022 → Test on 2023


If consistent across all periods: Robust strategy

If performance degrades: Overfit or regime-dependent
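
A minimal sketch of that loop, where optimize and backtest are placeholders for your own tuning and evaluation functions (not a real library API):

WINDOWS = [
    (("2018-01-01", "2019-12-31"), "2020"),
    (("2019-01-01", "2020-12-31"), "2021"),
    (("2020-01-01", "2021-12-31"), "2022"),
    (("2021-01-01", "2022-12-31"), "2023"),
]

def walk_forward(data, optimize, backtest):
    results = []
    for (train_start, train_end), test_year in WINDOWS:
        params = optimize(data, train_start, train_end)  # tune on the train window only
        stats = backtest(data, test_year, params)        # frozen params on the unseen year
        results.append((test_year, stats))
    return results  # robust if every test year holds up; suspect if they decay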

🎓 Key Takeaways

  • Model realistic costs: Spread, slippage, commissions (20-40% profit reduction)
  • Avoid overfitting: Limit parameters, test across multiple markets
  • In-sample vs. out-of-sample: Develop on 60-70%, validate on 30-40%
  • Walk-forward testing: Verify consistency across time periods
  • Red flags: Sharpe > 3.0, expectancy > 5R, < 30 trades
  • Forward test before live: Paper trade 3-6 months minimum
Practice Exercise

🎯 Backtest Validation Practice

Exercise: Validate Your Strategy Using the Red Flag Detector

Take your current strategy (or a strategy you're considering) and run it through validation:

  1. Backtest on 60% of your data (in-sample). Record: trades, expectancy, Sharpe, max DD
  2. Identify parameters used (EMAs, thresholds, etc.). Did you optimize these by testing multiple values?
  3. Test on remaining 40% of data (out-of-sample). Compare results to in-sample.
  4. Use the validation scorecard from the template (8 categories, 60 points max)
  5. Calculate your score: 50-60 = proceed, 40-49 = revise, <40 = scrap
  6. If score ≥50: Paper trade for 30-60 trades before going live

Goal: Catch curve-fitting, look-ahead bias, and unrealistic assumptions BEFORE they destroy your live account. Better to scrap a bad strategy in backtesting than lose real money.

Test Your Understanding

🎮 Quick Check

Q: You backtest a strategy: 5.2R expectancy, 4.2 Sharpe ratio, tested on 25 trades, using EMA(47) because it gave the best results after testing 100 different periods. What's wrong?

A) Nothing—great strategy!
B) All of it: Sample size too small, Sharpe & expectancy too high, EMA(47) is curve-fit
C) Just the expectancy is too high
D) Just the sample size is too small
Correct! Three major red flags: (1) Only 25 trades = statistically insignificant, (2) Sharpe 4.2 = suspiciously high (likely overfit), (3) EMA(47) found by testing 100 periods = curve-fit to noise. This strategy will fail live. Need: 100+ trades, realistic Sharpe (1.5-2.5), standard parameters.

Q: You backtest a mean-reversion strategy on S&P 500 stocks from 2010-2024. Results: 2.8R expectancy, 68% win rate, $120K profit. Your data includes only current S&P 500 components (stocks in the index today). You go live in 2025 and it fails. What went wrong?

A) Nothing—2025 market conditions changed, unforeseeable
B) Survivorship bias—backtest excluded 30-40% of stocks that delisted/went bankrupt during 2010-2024
C) Sample size too small—need more than 14 years of data
D) Expectancy too high—2.8R is unrealistic for mean reversion
Correct! Classic survivorship bias trap. Testing on "current S&P 500 components" means you only tested winners: stocks that survived 14 years and made it into today's index. You excluded bankrupt companies, delisted stocks, and failed businesses (30-40% of the total). These dead stocks would have triggered your mean-reversion entries but never recovered, destroying expectancy. Solution: use point-in-time data that includes ALL stocks that existed at each date, including future delistings. Survivorship bias inflates backtest results by 20-50%.

Q: You walk-forward test your strategy: Train 2018-2020, test 2021 (pass). Train 2018-2021, test 2022 (pass). Train 2018-2022, test 2023 (pass). You didn't model slippage or commissions. Should you trade this strategy live?

A) Yes—passed all walk-forward tests, robust strategy
B) No—must add realistic costs (0.2-0.4% slippage + commissions) and retest. If still profitable, then yes.
C) Yes—walk-forward validation is sufficient, costs don't matter much
D) No—need at least 10 walk-forward periods to validate
Correct! Walk-forward tests prove robustness (not overfit to a single period), but you MUST model realistic costs before trading live. Slippage (0.1-0.2% per side = 0.2-0.4% round-trip) plus commissions ($2-5/trade) can reduce expectancy by 30-60%. A strategy with a 2.5R expectancy and no costs might drop to 1.2R with costs: still profitable, but barely. Add costs LAST (after walk-forward); then, if expectancy stays above 1.5R and win rate above 45%, trade it live. Never skip cost modeling: it's the difference between profit and pain.

In backtesting, perfection is suspicion. Realism is edge. Model costs, avoid overfitting, forward test—or fail live.

Related Lessons

Intermediate #34

Trade Journal Mastery

Track live performance and compare to backtest results.

Read Lesson →
Beginner #9

Position Sizing

Apply proper position sizing in backtests for realistic results.

Read Lesson →
Intermediate #30

Regime Recognition

Test your strategy across all regimes for robustness.

Read Lesson →

⏭️ Coming Up Next

Lesson #33: Advanced Risk Management—Professional Frameworks — Learn Kelly Criterion, dynamic position sizing, and drawdown protocols.

Educational only. Trading involves substantial risk of loss. Past performance does not guarantee future results.
