C# Performance tips and tricks

At Raygun, we’re a pretty polyglot group of developers. Various parts of our code base are written in different languages and frameworks, whatever is best for the job. That said, large parts of Raygun are written in .NET, and we’re big .NET fans.

Given the prevalence of C# applications (C# has been in the top 5 on the TIOBE index for about 10 years!) and the massive scale of data Raygun deals with, we’re often called on to do C# optimization work. Most of our biggest performance gains come from really rethinking a problem and approaching it from a whole new angle.

So here, I wanted to share some C# optimization techniques and performance tips that have helped in our recent work. Some of these will be more useful for your specific needs than others, but there should be something for everyone.

In this post:

  1. Use a profiler
  2. The higher the level, the slower the speed
  3. Release builds vs. debug builds
  4. Look at the bigger picture
  5. Memory locality matters
  6. Don’t overwork the GC
  7. Avoid empty destructors
  8. Avoid unnecessary boxing and unboxing
  9. Beware of string concatenation
  10. Stay up to date on C#

1. Every developer should use a profiler

There are some great .NET profilers out there. I personally use the dotTrace profiler from the JetBrains team. I know Jason on our team gets a lot of value from the Red Gate profiler also. Every developer should have a profiler installed, and use it.

I can’t count the number of times that I’ve assumed the slow part of an application was in one area, when in fact it was somewhere else completely. Profilers help with that. Furthermore, sometimes, it’s helped me find bugs – a part that was slow was only slow because it was doing something incorrectly (that wasn’t being picked up properly by a unit test).

This is the first, and effectively mandatory, step of any optimization work you’re going to be doing, whether in a C# application or any other language.

TL;DR

  • Profiling will speed up the process of finding the real causes of slowdowns
  • The .NET ecosystem offers a range of excellent profilers, including dotTrace and Red Gate
  • Profiling can also be great for catching bugs

2. The higher the level, the slower the speed (usually)

This is just a smell that I’ve picked up on. The higher the level of abstraction you’re using, the slower it will often be. A common example here that I’ve found is using LINQ when you’re inside a busy part of code (perhaps inside a loop being called millions of times). LINQ is great for expressing something quickly that might otherwise take a bunch of lines of code, but you’re often leaving performance on the table.

Don’t get me wrong – LINQ is great for allowing you to crank out a working app. But in performance-focused parts of your codebase, you can be giving away too much. Especially since it’s so easy to chain together so many operations.

My own specific example involved a .SelectMany().Distinct().Count(). Given this was being called tens of millions of times (critical hot point found by my profiler) it was stacking up to a huge amount of the running time. I took another approach and reduced the execution time by several orders of magnitude.
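To give a feel for the shape of the change, here’s a minimal sketch comparing a chain like that one with a hand-rolled loop. The types and names below are hypothetical stand-ins, not the actual Raygun code:

using System.Collections.Generic;
using System.Linq;

// Hypothetical type for illustration only
record Order(List<int> LineItemIds);

static class DistinctCountDemo
{
    // LINQ version: concise, but each call allocates iterator objects and an internal set
    public static int CountDistinctLinq(List<Order> orders) =>
        orders.SelectMany(o => o.LineItemIds).Distinct().Count();

    // Hand-rolled version: a single HashSet, no intermediate sequences
    public static int CountDistinctManual(List<Order> orders)
    {
        var seen = new HashSet<int>();
        foreach (var order in orders)
        {
            foreach (var id in order.LineItemIds)
            {
                seen.Add(id);
            }
        }
        return seen.Count;
    }
}

In a hot path it’s also worth asking whether the result can be maintained incrementally rather than recomputed on every call; that kind of rethink is where the really big wins tend to come from.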

TL;DR

  • While a higher-level abstraction might read more neatly and be faster to write, it can also hide unnecessary work and runtime overhead
  • Again, a profiler can help detect this!

3. Don’t underestimate release builds vs. debug builds

I’d been hacking away and was pretty happy with the performance I was getting. Then I realized I’d been doing all my tests inside Visual Studio (I often write my performance tests to run as unit tests also, so I can more easily run just the part I care about). We all know that release builds have optimizations enabled. So I did a release build and called the methods I was testing from a console app.

I got a great turnaround with this. I’d already optimized the code heavily by hand, so it really was time for the micro-optimizations of the .NET JIT compiler to shine. I gained about an extra 30% performance with the optimizations enabled!
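If you want to reproduce this kind of check yourself, build with the Release configuration (for example, dotnet run -c Release) and time the method from a small console app. Here’s a rough sketch, with DoWork standing in for whatever you’re measuring:

using System;
using System.Diagnostics;

class PerfHarness
{
    static void Main()
    {
        DoWork();                      // warm-up call so JIT compilation doesn't skew the timing

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < 1_000; i++)
        {
            DoWork();                  // the code under test
        }
        sw.Stop();

        Console.WriteLine($"Elapsed: {sw.ElapsedMilliseconds} ms");
    }

    static void DoWork()
    {
        // placeholder for the code being measured
    }
}

A dedicated tool like BenchmarkDotNet takes care of warm-up and statistics for you, and will warn you if you accidentally benchmark a debug build.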

This reminds me of a story I read online a while back, an old game programming tale from the 90s when memory limitations were super tight. Late in the development cycle the team would ultimately run out of memory and start thinking about what had to be removed or downgraded to fit inside the tiny memory footprint available. The senior developer had expected this, based on his experience, and had allocated 1MB of memory with junk data at the very start of the project. He then saved the day and solved the problem by removing the 1MB!

Having the free memory there gave the team the cushion they needed and they shipped on time.

Why do I share this? It’s similar in performance land – get something running well enough in debug mode and you get some “free” performance in a release build. Bonus.

TL;DR

  • Do your optimization work in debug mode, then enjoy the extra “free” performance of an optimized release build

4. Look at the bigger picture

There are some fantastic algorithms out there. Most we don’t need on a day-to-day, or even month-to-month, basis. It is, however, worth knowing they exist. All too often, I discover a much better approach to solving a problem once I do some research. Developers who research before writing code are about as rare as developers who do proper analysis before writing code. We always want to dive right into the IDE, but look before you leap. Next time, instead of waiting to run into a roadblock, go looking for community knowledge from the jump.

Often when looking at performance problems, we focus too heavily on a single line or method. This can be a mistake – looking at the big picture can help you improve performance far more significantly by reducing the work that needs to be done.

TL;DR

  • Check forums, Reddit, blogs, etc. before you blow hours on a complex problem
  • Try not to fixate on one line as the root cause of poor performance; zoom out and consider where it fits and the fundamental approach you’ve taken

5. Memory locality matters

Let’s assume we have an array of arrays. Effectively it’s a table, 3000×3000 in size. We want to count how many slots have a value greater than zero in them.

Question – which of these two is faster?

// _map is assumed to be a square 3000x3000 jagged array, as described above

// Option 1: index as _map[i][n]
for (int i = 0; i < _map.Length; i++)
{
    for (int n = 0; n < _map.Length; n++)
    {
        if (_map[i][n] > 0)
        {
            result++;
        }
    }
}

// Option 2: index as _map[n][i]
for (int i = 0; i < _map.Length; i++)
{
    for (int n = 0; n < _map.Length; n++)
    {
        if (_map[n][i] > 0)
        {
            result++;
        }
    }
}

Answer? The first one. How much so? In my tests, I got about an 8x performance improvement on this loop!

Notice the difference? It’s the order that we’re walking this array of arrays ([i][n] vs. [n][i]). Memory locality does indeed matter in .NET, even though we’re well abstracted from managing memory ourselves.

In my case, this method was being called millions of times (hundreds of millions of times to be exact) and therefore any performance I could squeeze out of this resulted in a sizeable win. Again, thanks to my ever-handy profiler for making sure I was focused on the right place!

TL;DR

  • Even though .NET abstracts memory management away from you, memory layout and access patterns still matter for performance
  • Again, a profiler can help you find these opportunities

6. Relieve the pressure on the garbage collector

C#/.NET features garbage collection: the process that determines which objects are no longer in use and removes them to free up memory. That means that in C#, unlike in languages like C++, you don’t have to manually release objects that are no longer needed in order to reclaim their space in memory. The garbage collector (GC) handles all of that for you.

The problem is that there’s no free lunch. The collection process itself causes a performance penalty, so you don’t really want the GC to collect all the time. So how do you avoid that?

There are many useful techniques to avoid putting too much pressure on the GC. Here, I’ll focus on a single tip: avoid unnecessary allocations. What that means is to avoid things like this:

// This first allocation is pure waste: the list is thrown away on the very next line
List<Product> products = new List<Product>();

products = productRepo.All();

The first line creates an instance of the list that’s completely useless, since the very next line returns another instance and assigns its reference to the variable. Now imagine the two lines above sitting inside a loop that executes thousands of times.

The code above might look like a silly example, but I’ve seen code like this in production — and not just a single time. Don’t focus on the example itself but on the general advice: don’t create objects unless they’re really needed.
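For the record, the fix is as simple as it looks, using the same hypothetical names as above:

// Declare and assign in one step: no throwaway list for the GC to clean up later
List<Product> products = productRepo.All();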

Due to the way the GC works in .NET (it’s a generational GC process), newer objects are more likely to be collected than old ones. That means that the creation of many new, short-lived objects might trigger the GC to run.

TL;DR

  • Garbage collection can hinder performance, so be discerning about creating unnecessary allocation work for the GC; in particular, don’t create objects you don’t need

7. Don’t use empty destructors

The title says it all: don’t add empty destructors to your classes. When a class declares a destructor (finalizer), every instance of it gets an entry on the finalization queue, and our old friend the GC has to run that finalizer and keep the object alive for at least one extra collection before its memory can be reclaimed. An empty destructor means all of that work is for nothing.

Remember, GC execution isn’t cheap in terms of performance, as we’ve already mentioned. Don’t cause work for the GC unnecessarily.
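Here’s a minimal sketch of the anti-pattern, and of the usual alternative when you really do have something to clean up (the class names are made up for illustration):

using System;
using Microsoft.Win32.SafeHandles;

// Anti-pattern: this empty destructor puts every instance on the finalization queue
// and keeps it alive for at least one extra GC cycle, for zero benefit. Just delete it.
class Report
{
    ~Report() { }
}

// If you genuinely own an unmanaged resource, prefer wrapping it in a SafeHandle
// (or implementing IDisposable) rather than writing your own finalizer.
class NativeResourceOwner : IDisposable
{
    private readonly SafeFileHandle _handle;

    public NativeResourceOwner(SafeFileHandle handle) => _handle = handle;

    public void Dispose() => _handle.Dispose();   // SafeHandle's own finalizer acts as the safety net
}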

TL;DR

  • Avoid empty destructors entirely: don’t write finalizers unless you really need them, and prefer SafeHandle when you do have an unmanaged resource to clean up

8. Avoid unnecessary boxing and unboxing

Boxing and unboxing are — like garbage collection — expensive processes, performance-wise. So, we want to avoid including them unnecessarily. But what do they do in practice?

Boxing is like creating a reference type box and putting a value of a value type inside it. In other words, it consists of converting a value type to “object” or to an interface type this value type implements. Unboxing is the opposite—it opens the box and extracts the value type from inside it. Why is that a problem?

Well, as we’ve mentioned, boxing and unboxing are expensive processes in themselves. Besides that, when you box a value you create another object on the heap, which puts additional pressure on—you’ve guessed it!—the GC.

So, how to avoid boxing and unboxing?

Generally speaking, you can do that by avoiding the older .NET 1.0-era APIs that predate generics and, as such, have to rely on the object type. For instance, prefer generic collections such as System.Collections.Generic.List&lt;T&gt; over something like System.Collections.ArrayList.
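As a quick sketch of the difference:

using System.Collections;              // ArrayList: pre-generics, stores everything as object
using System.Collections.Generic;      // List<T>

// Pre-generics collection: every int is boxed going in and unboxed coming out
ArrayList boxedNumbers = new ArrayList();
boxedNumbers.Add(42);                  // boxing: int -> object (a new heap allocation)
int unboxed = (int)boxedNumbers[0];    // unboxing: object -> int

// Generic equivalent: the ints are stored directly, with no boxing and no casts
List<int> numbers = new List<int>();
numbers.Add(42);
int value = numbers[0];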

TL;DR

  • Boxing is a convenient way to treat small value types as reference types and can simplify your code, but it comes at a performance cost.
  • Avoid older APIs that rely on the object type.

9. Beware of string concatenation

In C#/.NET, strings are immutable. So, every time you perform some operations that look like they’re changing a string, they’re creating a new one instead. Such operations include methods like Replace and Substring, but also concatenation.

So, the tip here is simple—beware of concatenating a large number of strings, especially inside a loop. In situations like this, use the System.Text.StringBuilder class, instead of using the “+” operator. That will ensure that new instances aren’t created for each part you concatenate.
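For example, here’s a sketch of the two approaches side by side:

using System.Text;

// "+" in a loop: every iteration copies the whole string built so far into a brand-new string
string csvSlow = "";
for (int i = 0; i < 10_000; i++)
{
    csvSlow += i + ",";
}

// StringBuilder appends into a reusable buffer and materializes a single string at the end
var sb = new StringBuilder();
for (int i = 0; i < 10_000; i++)
{
    sb.Append(i).Append(',');
}
string csvFast = sb.ToString();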

TL;DR

  • String operations don’t modify strings, they create new ones, so avoid repeatedly concatenating strings, especially in loops
  • Use System.Text.StringBuilder instead

10. Stay up-to-date on the evolution of C#

To wrap up, here’s some very general advice: stay tuned to how C# changes and evolves. The C# and .NET teams constantly deliver new features that can positively impact performance. .NET 7 (alongside C# 11) shipped literally hundreds of performance improvements, including on-stack replacement, more efficient regexes, and enhancements to LINQ.
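As one small example of the kind of thing that ships with each release, .NET 7 added a regex source generator that moves pattern compilation to build time. A sketch, with a made-up pattern:

using System.Text.RegularExpressions;

static partial class LogParser
{
    // .NET 7's regex source generator emits the matching code at compile time,
    // avoiding the runtime cost of parsing and compiling the pattern
    [GeneratedRegex(@"^\d{4}-\d{2}-\d{2}")]
    private static partial Regex DatePrefix();

    public static bool StartsWithDate(string line) => DatePrefix().IsMatch(line);
}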

TL;DR

  • Microsoft frequently releases performance-enhancing features, and if you don’t know about them you can’t use them, so stay current with C#/.NET!

C# performance

This has been a collection of just a few things I’ve found useful for enhancing the performance of my .NET code. It’s worth investing the time to go through your code to make sure it’s performant, and to stay informed as the platform evolves. Your team and your customers will thank you!

Don’t let C# errors slow your code down. Raygun Crash Reporting gives you the code-level diagnostics you need to detect and solve errors fast.

Learn more and try Raygun Crash Reporting free for 14 days.