
How to properly measure code speed in .NET

Imagine you have a solution to a problem or a task, and now you need to evaluate the optimality of this solution from a performance perspective.

Anton Vorotyncev

Head of Development

The most obvious way is to use the Stopwatch class, like this:
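The original code listing did not survive here; a minimal sketch of the Stopwatch approach might look like this (the date-parsing line is just a placeholder for whatever code you want to measure):

```csharp
using System;
using System.Diagnostics;

var stopwatch = Stopwatch.StartNew();

// The code under test: a stand-in for your own solution.
var year = int.Parse("2024-05-01".Substring(0, 4));

stopwatch.Stop();
Console.WriteLine($"Year: {year}, elapsed: {stopwatch.Elapsed.TotalMilliseconds} ms");
```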

However, there are several issues with this method:

  • It is quite inaccurate, since the code being evaluated is executed only once, and the execution time can be affected by various side effects, such as hard disk performance, a cold cache, processor context switching, and other running applications.
  • It does not let you test the application as it behaves in Production mode. During Release compilation, a significant part of the code is optimized automatically, without our participation, which can seriously affect the final result.
  • Your algorithm may perform well on a small dataset but underperform on a large one (or vice versa). Therefore, to test performance in different situations with different datasets, you will have to write new code for each scenario.

So what other options do we have? How can we evaluate the performance of our code properly? BenchmarkDotNet is the solution for this.

Benchmark setup

BenchmarkDotNet is a NuGet package that can be installed in any type of application to measure the speed of code execution. To do this, we only need two things: a class containing the code to benchmark and a runner to execute it.

Here's what a basic benchmarking class looks like:
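The original listing is missing from the page; based on the attributes discussed below and the method names used later in the article, a plausible reconstruction of the basic benchmark class (the class name and the date string are our assumptions) is:

```csharp
using System;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Order;

[MemoryDiagnoser]
[Orderer(SummaryOrderPolicy.FastestToSlowest)]
[RankColumn]
public class YearBenchmarks
{
    // Assumed input: a date serialized as a string.
    private const string DateString = "2024-05-01T00:00:00";

    // Baseline: parse the whole DateTime just to read its Year.
    [Benchmark(Baseline = true)]
    public int GetYearFromDateTime() => DateTime.Parse(DateString).Year;
}
```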

Let’s break down this class, starting with the attributes.

The MemoryDiagnoser attribute collects information about the Garbage Collector's operation and the memory allocated during code execution.

The Orderer attribute determines the order in which the final results are displayed in the table. In our case, it is set to FastestToSlowest, meaning the fastest code appears first and the slowest last.

The RankColumn attribute adds a column to the final report, numbering the results from 1 to N.

We have also added the Benchmark attribute to the method itself, marking it as one of the test cases. The Baseline=true parameter makes this method the reference point: its performance is taken as 100%, and all other variants of the algorithm are evaluated relative to it.

To run the benchmark, we need the second piece of the puzzle: the runner. It is simple: we go to Program.cs (in a console application) and add one line with BenchmarkRunner:
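The runner line itself is missing from the page; assuming the benchmark class is called YearBenchmarks, it would look like this:

```csharp
using BenchmarkDotNet.Running;

// Program.cs: runs every [Benchmark] method in the class.
BenchmarkRunner.Run<YearBenchmarks>();
```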

Then, we build our application in Release (Production) mode and run it.

Analysis of results

If everything is set up correctly, then after running the application, we will see BenchmarkRunner execute our code multiple times and eventually produce the following report:

Important: any anomalous code executions (those much faster or slower than the average) will be excluded from the final report. We can see the clipped anomalies listed below the resulting table.

The report contains quite a lot of data about the performance of the code, including the version of the OS on which the test was run, the processor used, and the version of .NET. But the main information that interests us is in the final table, where we see:

  • Mean - the average time it takes to execute our code;
  • Error - half of the 99.9% confidence interval;
  • StdDev - the standard deviation of all measurements;
  • Ratio - performance relative to the Baseline method, the starting point we chose earlier (remember Baseline=true above?);
  • Rank - the position of the method in the ranking;
  • Allocated - the memory allocated during execution of our method.

Real test

To make the final results more interesting, let's add a few more variants of our algorithm and see how the results change.

Now, the benchmark class will look like this:
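This listing is also missing; judging by the method names that appear in the results below, the extended class could look roughly like this (the method bodies are our reconstruction, not the author's original code):

```csharp
using System;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Order;

[MemoryDiagnoser]
[Orderer(SummaryOrderPolicy.FastestToSlowest)]
[RankColumn]
public class YearBenchmarks
{
    private const string DateString = "2024-05-01T00:00:00";

    // Baseline: full DateTime parsing.
    [Benchmark(Baseline = true)]
    public int GetYearFromDateTime() => DateTime.Parse(DateString).Year;

    // Allocates an array of substrings just to read the first one.
    [Benchmark]
    public int GetYearFromSplit() => int.Parse(DateString.Split('-')[0]);

    // Allocates a single 4-character string.
    [Benchmark]
    public int GetYearFromSubstring() => int.Parse(DateString.Substring(0, 4));

    // Allocation-free: slice a span and convert the digits manually.
    [Benchmark]
    public int GetYearFromSpanWithManualConversion()
    {
        ReadOnlySpan<char> span = DateString.AsSpan(0, 4);
        var year = 0;
        foreach (var c in span)
        {
            year = year * 10 + (c - '0');
        }
        return year;
    }
}
```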

Our focus now is on benchmarking. We will leave the evaluation of the algorithms themselves for the next article.

And here is the result of performing such benchmarking:

We see that GetYearFromDateTime, our starting point, is the slowest at about 218 nanoseconds, while the fastest option, GetYearFromSpanWithManualConversion, takes only 6.2 nanoseconds, about 35 times faster than the original method.

We can also see how much memory was allocated for the two methods GetYearFromSplit and GetYearFromSubstring, and how long it took the Garbage Collector to clean up this memory (which also reduces overall system performance).

Working with Various Inputs

Finally, let’s discuss how to evaluate the performance of our algorithm on both large and small data sets. BenchmarkDotNet provides two attributes for this: Params and GlobalSetup.

Here is the benchmark class using these two attributes:
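The listing is missing from the page; here is a sketch consistent with the Size values and the Span/NewArray method names mentioned in the results below. The measured operation (taking the first half of an array) is our assumption:

```csharp
using System;
using System.Linq;
using BenchmarkDotNet.Attributes;

[MemoryDiagnoser]
public class CollectionBenchmarks
{
    // Each value produces a separate run of every benchmark method.
    [Params(10, 1000, 10000)]
    public int Size;

    private int[] _data = Array.Empty<int>();

    // Executed once per Size value, before any measurements.
    [GlobalSetup]
    public void Setup()
    {
        var random = new Random(42);
        _data = Enumerable.Range(0, Size).Select(_ => random.Next()).ToArray();
    }

    // Slices the existing array: no copying, no allocation.
    [Benchmark(Baseline = true)]
    public int Span()
    {
        ReadOnlySpan<int> half = _data.AsSpan(0, Size / 2);
        return half[half.Length - 1];
    }

    // Copies the first half into a new array on every call.
    [Benchmark]
    public int NewArray()
    {
        var half = new int[Size / 2];
        Array.Copy(_data, half, half.Length);
        return half[half.Length - 1];
    }
}
```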

In our case, the Size field is parameterized and affects the code that runs in GlobalSetup.

As a result of executing GlobalSetup, we generate an initial array of 10, 1000, or 10000 elements before running each test scenario. As mentioned earlier, some algorithms perform well only with a large or small number of elements.

Let's run this benchmark and look at the results:

Here, we can clearly see the performance of each method with 10, 1000 and 10000 elements: the Span method consistently leads regardless of the input data size, while the NewArray method performs progressively worse as the data size increases.


The BenchmarkDotNet library allows you to analyze the collected data not only in text and tabular form but also graphically, as charts.

To demonstrate, we will create a benchmark class that measures the runtime of different sorting algorithms on .NET 8, configured to run for three input sizes: 1000, 5000 and 10000 elements. The sorting algorithms are:

  •  DefaultSort - the default sorting algorithm used in .NET 8
  •  InsertionSort - insertion sort
  •  MergeSort - merge sort
  •  QuickSort - quick sort
  •  SelectSort - selection sort
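A class matching this description might look as follows. RPlotExporter is the attribute that makes BenchmarkDotNet emit charts alongside the usual reports (it requires R to be installed); only two of the five sort implementations are shown, and the rest would follow the same clone-then-sort pattern:

```csharp
using System;
using BenchmarkDotNet.Attributes;

[RPlotExporter] // emits per-benchmark charts next to the text reports
public class SortBenchmarks
{
    [Params(1000, 5000, 10000)]
    public int Size;

    private int[] _source = Array.Empty<int>();

    [GlobalSetup]
    public void Setup()
    {
        var random = new Random(42);
        _source = new int[Size];
        for (var i = 0; i < Size; i++)
        {
            _source[i] = random.Next();
        }
    }

    // The built-in Array.Sort shipped with .NET 8.
    [Benchmark(Baseline = true)]
    public int[] DefaultSort()
    {
        var copy = (int[])_source.Clone();
        Array.Sort(copy);
        return copy;
    }

    [Benchmark]
    public int[] InsertionSort()
    {
        var a = (int[])_source.Clone();
        for (var i = 1; i < a.Length; i++)
        {
            var key = a[i];
            var j = i - 1;
            while (j >= 0 && a[j] > key)
            {
                a[j + 1] = a[j];
                j--;
            }
            a[j + 1] = key;
        }
        return a;
    }

    // MergeSort, QuickSort and SelectSort are omitted for brevity.
}
```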

The benchmark results include a summary in the form of a table and a graph:

BenchmarkDotNet also generated separate graphs for each benchmark (in our case, for each sorting algorithm) based on the number of sorted elements:


We have covered the basics of working with BenchmarkDotNet and how it helps us evaluate the results of our work, making informed decisions about which code to keep, rewrite, or delete.

This approach allows us to build the most productive systems, ultimately improving user experiences.
