One of the questions I am constantly asked, and am constantly asking myself, is when to do performance analysis. In the real world, people seem to either not do it at all, do it at the last minute, or do it when some customer yells about how bad the application is. It's understandable - there's a lot of pressure to get code out the door, and that pressure seems to increase every year.
But... I think that performance problems are very similar to other kinds of bugs in software (and performance problems really should be thought of as bugs, not something else). If you keep on top of the bugs, they stay under control. If you leave them alone, you end up with a real mess on your hands. I realize this seems at odds with the old adage that it's a bad idea to do performance tuning too early. But - it really isn't. I'm not talking about fine-tuning code so that it runs slightly faster by changing 'i+=1' to 'i++' in some loop. What I'm talking about is ensuring that the general operation of the code performs reasonably, and that it doesn't degrade significantly from day to day. Just as you wouldn't accept a change into your agile build process that causes 50% of your unit tests to fail, you shouldn't be introducing a change into your code that causes a key application operation to take twice as long as the performance metric you've set for that operation.
Unfortunately, I don't think many folks take this approach. To me, it's like testing was 10 or 20 years ago, when people would write a bunch of code, get it to build, then throw it over the wall to the 'test team', who would find all sorts of basic bugs, open defects, and have the developers fix them. That was really inefficient. People realized it was just a lot better to get the code working early, through integrated unit testing as part of the development process, and that developers needed to be responsible for writing good-quality code. So - I think we need to apply that same discipline to performance analysis. Instead of writing code, testing it, getting it working functionally, then throwing it over the wall to the performance analysis team, why not fix the (really obvious) performance problems as you go, as part of 'performance unit testing'? Just as with standard functional unit testing, things will slip through the cracks, but the general performance quality should be much higher, and you won't end up building your application on a shoddy software base.
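To make that concrete, here's a minimal sketch of what one of these 'performance unit tests' might look like, in Python. The operation name (load_customer_report) and the half-second budget are hypothetical stand-ins for whatever key operation and metric you actually care about; in practice you'd want to time several runs and pick a budget generous enough to ride out machine-to-machine noise.

```python
import time
import unittest


def load_customer_report():
    # Hypothetical stand-in for a key application operation.
    # Simulate some work so the test has something to measure.
    total = 0
    for i in range(100_000):
        total += i
    return total


class ReportPerformanceTest(unittest.TestCase):
    # The budget is the performance metric you've agreed on for this
    # operation. A change that blows past it fails the build, just
    # like a change that breaks a functional unit test.
    BUDGET_SECONDS = 0.5

    def test_load_customer_report_within_budget(self):
        start = time.perf_counter()
        load_customer_report()
        elapsed = time.perf_counter() - start
        self.assertLess(
            elapsed, self.BUDGET_SECONDS,
            f"load_customer_report took {elapsed:.3f}s; "
            f"budget is {self.BUDGET_SECONDS}s")


if __name__ == "__main__":
    unittest.main()
```

The point isn't the timing mechanics - it's that the budget lives in the test suite, so a day-to-day regression shows up the same way a functional break does, instead of waiting for a customer to yell.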
I'm not the first to suggest this, and hopefully not the last, but I thought I'd get my opinion out there.