Profiling is great! ... What does it do?

The profiler has been gone from Visual Studio for a while. I think... 6.0? Or maybe even 5.0 was the last incarnation. The core idea is to provide a tool that finds the parts of your application preventing it from meeting its performance goals, and that helps document where the critical time is being spent in the application.

As such, I would like anyone reading this to leave a comment and let me know what you expect a profiler to be, and what you expect it to do.

Should it only tell you about your CPU so you can find bad algorithms? Should it integrate other system information? And if so, what? Thread interactions? Perfmon data? System event tracing logs? What needs to correlate, and what doesn't? Does it have to work for .NET code? Native code? Mixed? ASP.NET? Code running on my local machine? Code running in a deployed environment?

Obviously the answer is: do all of it, all the time. But let's be realistic -> what information needs to be surfaced quickly and easily for you to make sure your projects have the best performance they can, and what would just be 'nice to have'?

[jrohde]

Comments

  • Anonymous
    May 24, 2004
    I'm happy to answer the question, based upon my extensive use of profilers for 20 years or so.

A profiler needs to find bottlenecks, so they can be fixed. The bottlenecks are often algorithms. Profilers don't work well when the bottleneck is outside the code, e.g. disk I/O or the database engine. This is an area where some good advances could be made.

Effective use of profilers needs repeatable tests. A profiler should ideally come with some sort of UI or event-capturing tool with identical playback, e.g. record all keystrokes.

    To summarize:
    * profiler highlights bottleneck
    * developer fixes code
    * re-run profiler with same test, see if bottleneck fixed
    * repeat until no more bottlenecks
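
    A minimal sketch of that loop in C++ - the hot function (SortRecords) is made up, and the fixed input stands in for a repeatable test, so the timing before and after a fix is comparable:

        #include <algorithm>
        #include <chrono>
        #include <cstddef>
        #include <cstdio>
        #include <vector>

        // Hypothetical hot function; stands in for whatever the profiler flagged.
        static void SortRecords(std::vector<int>& records)
        {
            std::sort(records.begin(), records.end());
        }

        int main()
        {
            // Fixed, deterministic input: the same "test" on every run.
            std::vector<int> records(1000000);
            for (std::size_t i = 0; i < records.size(); ++i)
                records[i] = static_cast<int>((i * 2654435761u) % 100000);

            auto start = std::chrono::steady_clock::now();
            SortRecords(records);   // measure the suspected bottleneck
            auto stop = std::chrono::steady_clock::now();

            // Fix the code, re-run the same test, and compare this number.
            auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(stop - start);
            std::printf("SortRecords: %lld ms\n", static_cast<long long>(ms.count()));
            return 0;
        }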
  • Anonymous
    May 24, 2004
    The comment has been removed
  • Anonymous
    May 24, 2004
Sorry, not at TechEd. I've had a brief look at the VTST doco; looks good.

    A few more thoughts:

Profilers fall down at isolating particular bottlenecks. Ideally I want to measure just the throughput of the 3D card, ignoring memory/disk I/O. In this case, the question is where the bottleneck is - card I/O, card memory, card rendering, etc.
  • Anonymous
    May 25, 2004
    The comment has been removed
  • Anonymous
    May 25, 2004
    Don't worry, Waldemar. I assure you we are not skimping on native code. We are going to deliver native, managed, and mixed mode profiling.
  • Anonymous
    May 25, 2004
I agree, Waldemar. While we balance our efforts, I am of the opinion that in native code you really want to know about the CPU. With reference to the 3D card, we should be able to tell you how much time you spend in each D3D call (or any other external call), but we may not be able to break down what's happening inside that call. I can pretty much guarantee you that at this point we won't (in V1!) be allowing profiling of shader code.
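
    As a concrete (hand-rolled) illustration of charging time to an external call without seeing inside it, here is a sketch using QueryPerformanceCounter; ExternalCall is hypothetical and stands in for a D3D Present or draw call:

        #include <windows.h>
        #include <cstdio>

        // Stand-in for an external call whose internals we cannot see.
        static void ExternalCall()
        {
            Sleep(5);   // pretend the driver/runtime does ~5 ms of work
        }

        int main()
        {
            LARGE_INTEGER freq, start, stop;
            QueryPerformanceFrequency(&freq);

            QueryPerformanceCounter(&start);
            ExternalCall();
            QueryPerformanceCounter(&stop);

            // The elapsed time is attributed to the call site; a profiler does
            // this per call, but cannot break down what happened inside the call.
            double ms = 1000.0 * (stop.QuadPart - start.QuadPart) / freq.QuadPart;
            std::printf("ExternalCall: %.3f ms\n", ms);
            return 0;
        }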

For managed code, I think we want to do a bit more than code coverage. There is a big issue around 'What!?! How many objects got allocated/collected during that method call?!?' So we put extra effort on this side to try to help the developer get their code streamlined for the environment.
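
    The managed allocation hooks themselves can't be shown here, but as a rough native analogue of "how many objects got allocated during that call", a replacement global operator new can count allocations charged to a single call (BuildReport is a made-up method under investigation):

        #include <cstdio>
        #include <cstdlib>
        #include <new>
        #include <string>
        #include <vector>

        // Count every allocation that goes through operator new.
        // (Not thread-safe; purely an illustration.)
        static std::size_t g_allocations = 0;

        void* operator new(std::size_t size)
        {
            ++g_allocations;
            if (void* p = std::malloc(size))
                return p;
            throw std::bad_alloc();
        }

        void operator delete(void* p) noexcept
        {
            std::free(p);
        }

        // Hypothetical method under investigation.
        static std::vector<std::string> BuildReport()
        {
            std::vector<std::string> lines;
            for (int i = 0; i < 100; ++i)
                lines.push_back("row " + std::to_string(i));
            return lines;
        }

        int main()
        {
            std::size_t before = g_allocations;
            BuildReport();
            std::size_t after = g_allocations;
            std::printf("BuildReport made %zu allocations\n", after - before);
            return 0;
        }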

The system-level events that Eric mentions are tricky. Ideally we would like to incorporate system-level events (ETW), but without accurate cycle values for when those events occurred, trying to interleave that information with a trace of function calls risks being misleading. :(