
C# - Performance Benchmark Mistakes, Part Three [Beginner]


As I discussed in part one of this series, a C# program actually goes through two compilers before the code runs. First the C# compiler converts the code into "Common Intermediate Language", or "IL", which is written into the assembly -- the .EXE or .DLL file of the program. Then at run-time, immediately before a method is called for the first time, the aptly-named "just in time" compiler -- the jitter, for short -- transforms the IL into actual executable machine code. Because computer programmers love to verb words, this process is called "jitting a method". 

The jitter was built to be fast, but it's not infinitely fast. You should expect that the first time you run a method it will be considerably slower, because before it can run for the first time all of its IL has to be translated into machine code. And of course, if the methods that it calls in turn are all being called for the first time then they've got to be jitted as well. 

But wait, it gets worse. Suppose you are calling a method for the first time, and it in turn creates an object from a library. If that library is being used for the first time in this process, not only does the code for the constructor have to be jitted, but before it can be jitted the library has to be found on disk, mapped into virtual memory, analyzed for security problems, and any code in the static constructors of the newly-created object has to be jitted and executed. 
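
To make this concrete, here's a small sketch you can run yourself. The type ExpensiveLibrary and its deliberately slow static constructor are invented for illustration; the Sleep stands in for real one-time startup work such as reading configuration files or building lookup tables:
using System;
using System.Diagnostics;
using System.Threading;

static class ExpensiveLibrary
{
  // The static constructor runs exactly once, immediately before
  // the first use of this type in the process.
  static ExpensiveLibrary()
  {
    Thread.Sleep(100); // stand-in for real one-time startup work
  }
  public static int DoWork()
  {
    return 42;
  }
}

class FirstUseDemo
{
  static void Main()
  {
    var stopwatch = Stopwatch.StartNew();
    ExpensiveLibrary.DoWork(); // pays for jitting and the static constructor
    stopwatch.Stop();
    Console.WriteLine("First call:  {0}", stopwatch.Elapsed);

    stopwatch.Reset();
    stopwatch.Start();
    ExpensiveLibrary.DoWork(); // pays for neither
    stopwatch.Stop();
    Console.WriteLine("Second call: {0}", stopwatch.Elapsed);
  }
}

The first call should report well over a hundred milliseconds; the second should be close to zero.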

In short, calling code for the first time can be massively more expensive than calling it the second time. So this brings us to...

Mistake #6: Treat the first run as nothing special when measuring average performance.

In order to get a good result out of a benchmark test in a world with potentially expensive startup costs due to jitting code, loading libraries and calling static constructors, you've got to think carefully about what you're actually measuring.

If, for example, you are benchmarking for the specific purpose of analyzing startup costs, then you'll want to make sure that you measure only the first run. If, on the other hand, you are benchmarking part of a service that will run millions of times over many days and you want to know the average time taken in typical usage, then the high cost of the first run is irrelevant and therefore shouldn't be part of the average. Whether you include the first run in your timings is up to you; my point is that you need to be cognizant that the first run has potentially very different costs than the second.
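
If it's the steady-state average you're after, the usual pattern is to call the code once as an unmeasured warm-up so that jitting and the other first-run costs fall outside the timed region. Here's a minimal sketch of that pattern, where DoWork is a hypothetical stand-in for whatever you're actually measuring:
using System;
using System.Diagnostics;

class WarmupDemo
{
  static long sink; // consume results so the work isn't optimized away

  static void DoWork()
  {
    long sum = 0;
    for (int i = 0; i < 1000000; ++i)
      sum += i;
    sink = sum;
  }

  static void Main()
  {
    DoWork(); // warm-up: the first-run costs are paid here, unmeasured

    const int runs = 100;
    var stopwatch = Stopwatch.StartNew();
    for (int i = 0; i < runs; ++i)
      DoWork();
    stopwatch.Stop();
    Console.WriteLine("Average: {0} ms",
      stopwatch.Elapsed.TotalMilliseconds / runs);
  }
}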

Let's illustrate the enormous effect that the jitter can have on the first run of a program by modifying our example from last time. Let's suppose we've written our own implementation of the standard "quicksort" algorithm and we want to benchmark its performance:
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;

static class Extensions
{
  public static void Quicksort(this int[] items)
  {
    Quicksort(items, 0, items.Length - 1);
  } 
  static void Quicksort(int[] items, int leftBound, int rightBound)
  {
    int left = leftBound;
    int right = rightBound;
    int pivotIndex = leftBound + (rightBound - leftBound) / 2;
    int pivot = items[pivotIndex]; 
    while (left <= right)
    {
      // Find an item greater than the pivot
      while (items[left] < pivot)
        left += 1;
      // Find an item less than the pivot
      while (pivot < items[right])
        right -= 1;

      // Swap them
      if (left <= right)
      {
        Swap(ref items[left], ref items[right]);
        left += 1;
        right -= 1;
      }
    }

    // Everything less than the pivot is now left of the pivot.
    // Sort this portion.
    if (leftBound < right)
      Quicksort(items, leftBound, right);
    // Same for the right.
    if (left < rightBound)
      Quicksort(items, left, rightBound);        
  }
  private static void Swap(ref int x, ref int y)
  {
    int t = x;
    x = y;
    y = t;
  } 
}
 
That's a pretty standard implementation of quicksort in C#. Now let's modify our benchmark from last time. Here we'll make a random list, copy it to an array twice so that we know both runs will be operating on exactly the same data, and see if we can deduce the cost of jitting those methods the first time they're called:
class P
{        
  static void Main()
  {
    var original = new List<int>();
    const int size = 1000;
    var random = new Random();
    for(int i = 0; i < size; ++i)
      original.Add(random.Next());

    var arr1 = original.ToArray();
    var arr2 = original.ToArray();

    var stopwatch = new Stopwatch();
    stopwatch.Start();
    arr1.Quicksort();
    stopwatch.Stop();
    Console.WriteLine(stopwatch.Elapsed);
    stopwatch.Reset();
    stopwatch.Start();
    arr2.Quicksort();
    stopwatch.Stop();
    Console.WriteLine(stopwatch.Elapsed);
  }
}
 
When I run this on my laptop (again, remembering to compile into release mode, and running without the debugger attached) a typical result is 3500 microseconds for the first run and 700 microseconds for the second run; in other words, the first run here took roughly five times longer than the second run on average. It must have taken the jitter about 2.8 milliseconds on average to find the IL and translate it into efficient machine code. 

Of course, that factor is relative, and I chose the array size somewhat arbitrarily. If we were sorting an array with a million elements then the ~3 millisecond jit cost would be a barely-noticeable bump. If we were sorting an array with twenty elements then the jit cost wouldn't be five times worse; it could be a hundred times worse.
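
If you want to see both numbers — the one-time cost and the steady-state cost — one approach is to repeat the sort on fresh copies of the same data and report the first run separately from the average of the rest. This sketch assumes the Quicksort extension method defined above is in scope:
using System;
using System.Collections.Generic;
using System.Diagnostics;

class P
{
  static void Main()
  {
    var original = new List<int>();
    const int size = 1000;
    const int runs = 20;
    var random = new Random();
    for (int i = 0; i < size; ++i)
      original.Add(random.Next());

    var stopwatch = new Stopwatch();
    double laterRunsTotal = 0;
    for (int run = 0; run < runs; ++run)
    {
      var arr = original.ToArray(); // a fresh copy of identical data each time
      stopwatch.Reset();
      stopwatch.Start();
      arr.Quicksort();
      stopwatch.Stop();
      if (run == 0)
        Console.WriteLine("First run: {0} ms", stopwatch.Elapsed.TotalMilliseconds);
      else
        laterRunsTotal += stopwatch.Elapsed.TotalMilliseconds;
    }
    Console.WriteLine("Average of later runs: {0} ms", laterRunsTotal / (runs - 1));
  }
}

The gap between the two printed numbers is, roughly, the one-time cost.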

Moreover, it's important to note that different jitters give different results on different machines and in different versions of the .NET framework. The time taken to jit can vary greatly, as can the amount of optimization in the generated machine code. The jit compilers for the 32-bit Windows desktop, the 64-bit Windows desktop, Silverlight running on a Mac, and the "compact" jitter that runs when you have a C# program in XNA on the XBOX 360 all have potentially different performance characteristics. That's...

Mistake #7: Assuming that run-time characteristics in one environment tell you what behavior will be in a different environment.

Run your benchmarks in the actual environment the code will run in; use machines with the same hardware and software that your customers will typically use to run the code. 
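
At a minimum, record the environment alongside each measurement so that numbers taken on different machines or runtimes are never silently compared. Here's a small sketch that uses only long-standing Framework APIs:
using System;

class EnvironmentReport
{
  static void Main()
  {
    Console.WriteLine("OS:      {0}", Environment.OSVersion);
    Console.WriteLine("CLR:     {0}", Environment.Version);
    Console.WriteLine("64-bit:  {0}", IntPtr.Size == 8);
    Console.WriteLine("Cores:   {0}", Environment.ProcessorCount);
    Console.WriteLine("Machine: {0}", Environment.MachineName);
  }
}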

Next time in this series we'll take a look at how the garbage collector can affect performance benchmarks.
