What does /optimize do?

Cory, while talking about VB compilation options, asks “what exactly can one expect to gain by enabling optimizations?” It’s a good question, and one that the docs are kind of vague about. Turning on optimizations, which is the default for release builds, does three major things at the compiler level:

  • It removes any NOP instructions that we would otherwise emit to assist in debugging. When optimizations are off (and debugging information is turned on), the compiler will emit NOP instructions for lines that don’t have any actual IL associated with them but which you might want to put a breakpoint on. The most common example of something like this would be the “End If” of an “If” statement: there’s no actual IL emitted for an End If, so if we don’t emit a NOP for it, the debugger won’t let you set a breakpoint on it. Turning on optimizations forces the compiler not to emit the NOPs.
  • We do a simple basic block analysis of the generated IL to remove any dead code blocks. That is, we break each method into blocks of IL separated by branch instructions. By doing a quick analysis of how the blocks interrelate, we can identify any blocks that have no branches into them; those blocks can never be executed, so we omit them, making the assembly slightly smaller. We also do some minor branch optimizations at this point: for example, if you GoTo another GoTo statement, we optimize the first GoTo to jump straight to the second GoTo’s target.
  • We emit a DebuggableAttribute with IsJITOptimizerDisabled set to False. Basically, this allows the run-time JIT to optimize the code as it sees fit, including reordering and inlining code. This will produce more efficient and smaller code, but it means that trying to debug the code can be very challenging (as anyone who’s tried it will tell you). Exactly which optimizations the JIT performs is something I don’t know – maybe someone like Chris Brumme will chime in at some point on this.
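The dead-block and branch-chain optimizations in the second bullet can be sketched in a few lines. This is a toy model in Python, not the compiler’s actual data structures: blocks and their branch targets are just dictionaries, and the block names are invented. Reachability is a simple worklist walk from the entry block; collapsing GoTo-to-GoTo chains is the same idea applied to branch targets.

```python
# Toy sketch of dead-block elimination and GoTo-chain collapsing.
# Block names (B0..B4) and structures are illustrative, not the real compiler's.
from collections import deque

def reachable_blocks(successors, entry):
    """successors: dict mapping block id -> list of block ids it may branch to.
    Returns the set of blocks reachable from the entry block."""
    seen = {entry}
    work = deque([entry])
    while work:
        block = work.popleft()
        for target in successors.get(block, []):
            if target not in seen:
                seen.add(target)
                work.append(target)
    return seen

def collapse_jump_chains(jumps):
    """jumps: dict mapping a GoTo's block -> the block it jumps to.
    If a target is itself just a GoTo, retarget straight to the final block."""
    def final(block, visiting=()):
        target = jumps.get(block)
        if target is None or target in visiting:  # not a jump, or a cycle
            return block
        return final(target, visiting + (block,))
    return {block: final(target) for block, target in jumps.items()}

# B3 has no branches into it, so it is dead and can be omitted.
succ = {"B0": ["B1"], "B1": ["B2", "B4"], "B2": ["B4"], "B3": ["B4"], "B4": []}
print(sorted(reachable_blocks(succ, "B0")))   # ['B0', 'B1', 'B2', 'B4']

# B0 jumps to B1, which just jumps to B2: B0 now jumps directly to B2.
print(collapse_jump_chains({"B0": "B1", "B1": "B2"}))
```

Any block missing from the reachable set can be dropped without changing the program’s behavior, which is exactly why the assembly gets slightly smaller.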

The long and the short of it is that the optimization switch enables optimizations that might make setting breakpoints and stepping through your code harder. They’re all low-level optimizations, so they’re applicable to pretty much every kind of application in existence. They’re also all very incremental optimizations, which means that the benefit of any one optimization is likely to be nearly unmeasurable except in the most degenerate cases, but in aggregate they can be quite significant. I definitely recommend everyone leave this option on for their release builds…

8 thoughts on “What does /optimize do?”

  1. Pingback: AddressOf.com

  2. Greg Robinson

    I just checked this setting on all of our projects. We have never changed this setting and it was not turned on (box unchecked) by default. We are using v1.1.

    1. paulvick

      Greg, make sure that you’re looking at the Release configuration. The Debug configuration has the option turned off, as you can imagine. I forget what you see if you choose "all configurations."

      The dead code analysis that the compiler currently does is done at the IL level which, by that point, no longer maps well to source code lines or anything in the IDE. However, we’re looking at giving warnings about dead code in the IDE for Whidbey as a part of a bunch of new warnings…

  3. John Morales


    The dead code block analysis would be valuable information to emit to the IDE, perhaps in the same manner as the new "what’s changed in my code" color bars.

    Could this analysis be made part of the pre-compile checks that happen in the background while the user is typing in code?

  4. Daniel Turini

    Shouldn’t dead code be a warning (or error!) instead of an optimization?
    Nobody writes code intending for it never to execute; it’s surely a mistake by the programmer.

  5. Eric Mutta

    Hmmm, are those the only optimisations you do? Sounds pretty minimal… any reason why this is so?

    I am guessing it’s because the JITer will do the rest, but wouldn’t there be more benefit in doing them earlier?

    1. paulvick

      The main reason why we rely on the JIT is because we deal exclusively with IL instead of good, old actual X86 code. Because IL is a high-level abstraction, we could easily optimize the IL in a way that completely hoses the X86 code the JIT generates, making things worse rather than better. (This is a dirty secret of code optimization in general – it’s easy to make things worse.) This isn’t to say that it’s impossible to write an IL optimizer, just that it requires a lot of specialized knowledge of X86 and of the JIT behaviors, which neither the VB nor C# team has. It is something that we consider, though. On the other hand, the JIT works very well, so…

  6. Nick

    Dead code is common. Typically you would only use 5% of a library. A downside with .NET is that if you use one part, you load the lot. Mitigated by being shared.

    To remove all the dead code needs an entire program analysis. You can’t do this if you want a .NET/Java style of dynamic loading of classes. This is because you don’t know at compile time which parts of a library you are going to use.

    The only compiler that I’m aware of that does this is ISE’s Eiffel compiler. http://www.eiffel.com
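Nick’s point about dynamic loading can be made concrete with a small sketch (in Python rather than VB, using the standard library’s `json` module purely as a stand-in): when the type to use is resolved from a string at run time, a static whole-program analysis scanning the source sees no direct reference to the class, so it cannot safely prove any part of the library dead.

```python
# Why dynamic loading defeats whole-program dead-code removal: the class
# is chosen by name at run time, so no static scan of this file can tell
# which members of the loaded module are actually used.
import importlib

def make_encoder(class_name):
    # The module and class are resolved only when the program runs.
    module = importlib.import_module("json")
    return getattr(module, class_name)()

encoder = make_encoder("JSONEncoder")
print(encoder.encode({"optimize": True}))   # {"optimize": true}
```

If `class_name` came from a config file or user input, even a whole-program analyzer with every source file in hand could not shrink the library, which is why compilers like ISE’s Eiffel can only do this under a closed-world assumption.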


