C# vs C - Big performance difference


I'm finding massive performance differences between similar code in C and C#.

The C code is:

#include <stdio.h>
#include <time.h>
#include <math.h>

int main(void)
{
    int i;
    double root;

    clock_t start = clock();
    for (i = 0; i <= 100000000; i++) {
        root = sqrt(i);
    }
    printf("Time elapsed: %f\n", ((double)clock() - start) / CLOCKS_PER_SEC);
    return 0;
}


And the C# (console app) is:

using System;
using System.Collections.Generic;
using System.Text;

namespace ConsoleApplication2
{
    class Program
    {
        static void Main(string[] args)
        {
            DateTime startTime = DateTime.Now;
            double root;
            for (int i = 0; i <= 100000000; i++)
            {
                root = Math.Sqrt(i);
            }
            TimeSpan runTime = DateTime.Now - startTime;
            Console.WriteLine("Time elapsed: " + Convert.ToString(runTime.TotalMilliseconds / 1000));
        }
    }
}

With the above code, the C# completes in 0.328125 seconds (release version) and the C takes 11.14 seconds to run.

The C is being compiled to a Windows executable using MinGW.
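For reference, with MinGW's gcc the optimization level is controlled by the -O flags; the difference between a plain build and an optimized one matters a great deal for this kind of benchmark (the file name below is a placeholder):

```shell
# Plain build, no optimization -- roughly what a debug build gives you:
gcc sqrt_bench.c -o sqrt_bench.exe

# Optimized build; -O2 is gcc's rough equivalent of cl.exe's /O2:
gcc -O2 sqrt_bench.c -o sqrt_bench.exe
```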

I've always been under the assumption that C/C++ were faster or at least comparable to C#.net. What exactly is causing the C to run over 30 times slower?

EDIT: It does appear that the C# optimizer was removing the sqrt call, since root was never used. I changed the assignment to root += and printed out the total at the end. I've also compiled the C using cl.exe with the /O2 flag set for maximum speed.

The results are now:

  • 3.75 seconds for the C
  • 2.61 seconds for the C#

The C is still taking longer, but that's acceptable.

3/26/2009 4:53:01 PM

Accepted Answer

Since you never use 'root', the compiler may have been removing the call to optimize your method.

You could try accumulating the square root values, printing the total at the end of the method, and seeing what happens then.

Edit : see Jalf's answer below

5/23/2017 12:10:08 PM

You must be comparing debug builds. I just compiled your C code, and got

Time elapsed: 0.000000

If you don't enable optimizations, any benchmarking you do is completely worthless. (And if you do enable optimizations, the loop gets optimized away. So your benchmarking code is flawed too. You need to force it to run the loop, usually by summing up the result or similar, and printing it out at the end)

It seems that what you're measuring is basically "which compiler inserts the most debugging overhead", and it turns out the answer is C. But that doesn't tell us which program is fastest, because when you want speed, you enable optimizations.

By the way, you'll save yourself a lot of headaches in the long run if you abandon any notion of languages being "faster" than each other. C# no more has a speed than English does.

There are certain things in the C language that would be efficient even with a naive non-optimizing compiler, and there are others that rely heavily on the compiler to optimize everything away. And of course, the same goes for C# or any other language.

The execution speed is determined by:

  • the platform you're running on (OS, hardware, other software running on the system)
  • the compiler
  • your source code

A good C# compiler will yield efficient code. A bad C compiler will generate slow code. What about a C compiler which generated C# code, which you could then run through a C# compiler? How fast would that run? Languages don't have a speed. Your code does.

Licensed under: CC-BY-SA with attribution
Not affiliated with: Stack Overflow