Continuing the ignition timing debate from the 416 thread.

So, realistically, how much power are we talking about for the investment? 1%? 2%?

I'm not saying it's not worth it, but chasing a fully optimized timing curve definitely feels like sweating "the little things."

I do understand the idea behind letting the engine reach steady state before taking a reading, but how does that translate to the real world, where steady state is purely academic?

I do have some experience dyno testing. Back in the day I was on a Formula SAE student team, and we flogged the hell out of a CBR600 engine, "optimizing" all these things. That engine would sit pinned at 10,000 rpm while we tinkered with timing and air/fuel.

You know what? We ended up having to make massive changes to the map to make the thing drivable. It didn't make as much power on the dyno, but it sure was easier to drive.

I'm not arguing against the general theme here. It is interesting. But I can't help thinking it's all a bit academic.

Don't all the serious race teams tune based on load profiles that mimic specific tracks?

I look at it like this: instead of thinking about how much power is gained by chasing that last 1-2-3%, ask what happens at peak torque on something with a very narrow tuning window (pump gas, boost, cast iron heads, for example) when you're 1-2-3% wrong in the other direction. That goes badly very quickly, and it pays to get it right.
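A crude way to picture that asymmetry (all numbers below are made up for illustration; the MBT point, the knock limit, and the curve shape are hypothetical, not anyone's real calibration): torque falls off gently as you retard away from MBT, but go a degree past the knock limit on a boosted pump-gas combo and the torque number is the least of your worries.

```python
import math

# Toy numbers only, not real calibration data: they exist purely to show
# that the penalty for error is not symmetric around best torque.
MBT = 24.0          # hypothetical best-torque spark advance, deg BTDC
KNOCK_LIMIT = 25.5  # hypothetical knock threshold, pump gas + boost

def torque_frac(advance_deg):
    """Fraction of peak torque at a given spark advance (toy parabola)."""
    if advance_deg > KNOCK_LIMIT:
        return math.nan  # past the knock limit: detonation, not a torque figure
    # near MBT the curve is flat, so a few degrees retarded costs very little
    return 1.0 - 0.003 * (advance_deg - MBT) ** 2

for adv in (21.0, 24.0, 25.0, 26.0):
    t = torque_frac(adv)
    label = "KNOCK" if math.isnan(t) else f"{t:.1%} of peak"
    print(f"{adv:4.1f} deg BTDC -> {label}")
```

Run that and the 3 degrees retarded case gives up a couple percent, while the 26-degree case is off the cliff entirely. The downside of guessing wrong toward advance is not a mirror image of the upside.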