I guess I'm in the camp that says if the performance of exceptions is impacting your application, then you're throwing WAY too many of them. Exceptions should be for exceptional conditions, not routine error handling.
That said, my recollection is that handling an exception essentially means walking up the stack to find a catch clause that matches the type of the thrown exception. So performance will be impacted most by how deep you are from the catch and by how many catch clauses have to be examined along the way.
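To make that concrete, here is a minimal sketch (an illustration of my own, not from any particular source) of how a thrown exception is matched by type against catch clauses up the stack:

using System;

class CatchMatchingDemo
{
    static void Main()
    {
        try
        {
            Level1();
        }
        catch (InvalidOperationException ex) // matched here, two frames above the throw
        {
            Console.WriteLine("Caught: " + ex.Message);
        }
    }

    static void Level1()
    {
        try
        {
            Level2();
        }
        catch (ArgumentException) // wrong type, so the search continues up the stack
        {
            Console.WriteLine("Never reached");
        }
    }

    static void Level2()
    {
        throw new InvalidOperationException("thrown two frames down");
    }
}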
Barebones exception objects in C# are fairly lightweight; it's usually the ability to encapsulate an InnerException that makes them heavy, when the object tree becomes too deep.
As for a definitive report, I'm not aware of one, although a cursory profile with dotTrace (or any other profiler) of memory consumption and speed would be fairly easy to do.
The performance hit with exceptions seems to come at the point of generating the exception object (albeit too small to cause any concern 90% of the time). The recommendation, therefore, is to profile your code: if exceptions are causing a performance hit, write a new high-performance method that does not use them. (An example that comes to mind is TryParse, introduced to overcome the performance problems of Parse, which uses exceptions.)
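As a small illustration of that pattern: int.Parse signals failure by throwing a FormatException, while int.TryParse reports failure through its return value, so no exception is ever raised on bad input:

using System;

class ParseVsTryParse
{
    static void Main()
    {
        string input = "not a number";

        // Parse: a failure costs a thrown-and-caught FormatException.
        try
        {
            Console.WriteLine(int.Parse(input));
        }
        catch (FormatException)
        {
            Console.WriteLine("Parse failed");
        }

        // TryParse: a failure is just a false return value.
        if (int.TryParse(input, out int value))
            Console.WriteLine(value);
        else
            Console.WriteLine("TryParse failed");
    }
}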
Having read that exceptions are costly in terms of performance, I threw together a simple measurement program, very similar to the one Jon Skeet published years ago. I mention it here mainly to provide updated numbers.
It took the program below 29914 milliseconds to process one million exceptions, which amounts to 33 exceptions per millisecond. That is fast enough to make exceptions a viable alternative to return codes for most situations.
Please note, though, that with return codes instead of exceptions the same program runs in less than one millisecond, which means exceptions are at least 30,000 times slower than return codes. As stressed by Rico Mariani, these numbers are also minimums: in practice, throwing and catching an exception will take more time.
Measured on a laptop with an Intel Core 2 Duo T8100 @ 2.1 GHz, with .NET 4.0, in a release build, not run under the debugger (which would make it way slower).
This is my test code:
using System;
using System.Diagnostics;

class Program
{
    static void Main(string[] args)
    {
        int iterations = 1000000;
        Console.WriteLine("Starting " + iterations.ToString() + " iterations...\n");

        var stopwatch = new Stopwatch();

        // Test exceptions
        stopwatch.Reset();
        stopwatch.Start();
        for (int i = 1; i <= iterations; i++)
        {
            try
            {
                TestExceptions();
            }
            catch (Exception)
            {
                // Do nothing
            }
        }
        stopwatch.Stop();
        Console.WriteLine("Exceptions: " + stopwatch.ElapsedMilliseconds.ToString() + " ms");

        // Test return codes
        stopwatch.Reset();
        stopwatch.Start();
        int retcode;
        for (int i = 1; i <= iterations; i++)
        {
            retcode = TestReturnCodes();
            if (retcode == 1)
            {
                // Do nothing
            }
        }
        stopwatch.Stop();
        Console.WriteLine("Return codes: " + stopwatch.ElapsedMilliseconds.ToString() + " ms");

        Console.WriteLine("\nFinished.");
        Console.ReadKey();
    }

    static void TestExceptions()
    {
        throw new Exception("Failed");
    }

    static int TestReturnCodes()
    {
        return 1;
    }
}
In my case, exceptions were very expensive. I rewrote this:
public BlockTemplate this[int x, int y, int z]
{
    get
    {
        try
        {
            return Data.BlockTemplate[World[Center.X + x, Center.Y + y, Center.Z + z]];
        }
        catch (IndexOutOfRangeException)
        {
            return Data.BlockTemplate[BlockType.Air];
        }
    }
}
Into this:
public BlockTemplate this[int x, int y, int z]
{
    get
    {
        int ix = Center.X + x;
        int iy = Center.Y + y;
        int iz = Center.Z + z;
        if (ix < 0 || ix >= World.GetLength(0)
            || iy < 0 || iy >= World.GetLength(1)
            || iz < 0 || iz >= World.GetLength(2))
            return Data.BlockTemplate[BlockType.Air];
        return Data.BlockTemplate[World[ix, iy, iz]];
    }
}
That change saved about 30 seconds of startup time; this indexer gets called at least 32,000 times at startup. The rewritten code isn't as clear about its intent, but the cost savings were huge.
I did my own measurements to find out how serious the exception implications were. I didn't try to measure the absolute time of a throw/catch; I was mostly interested in how much slower a loop becomes if an exception is thrown on each pass.
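The exact measuring code is not reproduced here; a minimal sketch matching that description (a reconstruction, with an arbitrary iteration count) would be:

using System;
using System.Diagnostics;

class LoopOverheadDemo
{
    static void Main()
    {
        const int iterations = 100000;

        // Baseline: trivial loop body, no exceptions.
        var sw = Stopwatch.StartNew();
        long sum = 0;
        for (int i = 0; i < iterations; i++)
            sum += i;
        sw.Stop();
        Console.WriteLine("No exceptions:   " + sw.ElapsedMilliseconds + " ms (sum " + sum + ")");

        // Same loop, but every pass throws and catches an exception.
        sw.Restart();
        for (int i = 0; i < iterations; i++)
        {
            try
            {
                throw new InvalidOperationException();
            }
            catch (InvalidOperationException)
            {
                // swallow; we only want the loop slowdown
            }
        }
        sw.Stop();
        Console.WriteLine("With exceptions: " + sw.ElapsedMilliseconds + " ms");
    }
}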
I'm working on a program that parses JSON files and extracts data from them, with Newtonsoft (Json.NET).
I rewrote this:
Option 1, with exceptions
try
{
    name = rawPropWithChildren.Value["title"].ToString();
}
catch (System.NullReferenceException)
{
    name = rawPropWithChildren.Name;
}
To this:
Option 2, without exceptions
if (rawPropWithChildren.Value["title"] == null)
{
    name = rawPropWithChildren.Name;
}
else
{
    name = rawPropWithChildren.Value["title"].ToString();
}
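As a side note: assuming rawPropWithChildren.Value is a Json.NET JToken (an assumption on my part), the same logic can be written more compactly with the null-conditional and null-coalescing operators:

name = rawPropWithChildren.Value["title"]?.ToString() ?? rawPropWithChildren.Name;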
Of course, you don't really have the context to judge it, but here are my results (in debug mode):
Option 1, with exceptions: 38.50 seconds
Option 2, without exceptions: 06.48 seconds
To give a little bit of context, I'm working with thousands of JSON properties that can be null. Exceptions were being thrown far too often, perhaps during 15% of the execution time; that figure isn't precise, but they were clearly thrown too many times.
I wanted to fix this, so I changed my code, and at first I did not understand why execution became so much faster. The reason was my poor exception handling.
So, what I've learned from this: use exceptions only in particular cases, for things that can't be tested with a simple conditional statement, and throw them as rarely as possible.
This is a bit of an anecdotal story, but I'll definitely think twice before using exceptions in my code from now on!
I recently measured C# exceptions (throw and catch) in a summation loop that raised an arithmetic overflow on every addition. The throw and catch of an arithmetic overflow took around 8.5 microseconds, i.e. about 117 kilo-exceptions/second, on a quad-core laptop.
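A minimal sketch of that kind of loop (a reconstruction, not the exact benchmark; the iteration count and output format are assumptions) could be:

using System;
using System.Diagnostics;

class OverflowBenchmark
{
    static void Main()
    {
        const int iterations = 100000;
        int big = int.MaxValue;
        var stopwatch = Stopwatch.StartNew();

        for (int i = 0; i < iterations; i++)
        {
            try
            {
                // checked forces an OverflowException instead of silent wraparound
                big = checked(big + 1);
            }
            catch (OverflowException)
            {
                // swallow it; we are only timing the throw/catch
            }
        }

        stopwatch.Stop();
        Console.WriteLine("{0:F2} microseconds per exception",
            stopwatch.Elapsed.TotalMilliseconds * 1000.0 / iterations);
    }
}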
Exceptions are expensive, but there is more to it than cost when you choose between exceptions and return codes.
Historically, the argument was: exceptions ensure that code is forced to handle the situation, whereas return codes can be ignored. I never favoured this argument, as no programmer wants to ignore return codes and break their code on purpose; besides, a good test team or a well-written test suite will definitely not ignore return codes.
From a modern programming-practices point of view, exception management needs to be evaluated not only for its cost, but also for its viability.
First
Most front ends are disconnected from the API that throws the exception. For example, a mobile app consuming a REST API, where the same API also serves an Angular-based web front end. Either scenario will prefer return codes (such as HTTP status codes) over exceptions.
Second
Nowadays, hackers routinely attempt to break any web utility. If they are constantly attacking your app's login API and the app throws an exception on every failed attempt, you will end up dealing with thousands of exceptions a day. Many will say the firewall will take care of such attacks, but not everyone spends money on a managed dedicated firewall or an expensive anti-spam service. It is better if your code is prepared for these scenarios.
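As a sketch of that idea (the names and logic are invented for illustration), a login check that treats bad credentials as an expected result rather than an exceptional one:

using System;

class AuthDemo
{
    // Try-pattern: a failed login is reported through the return value,
    // so a flood of hostile attempts never allocates or throws exceptions.
    static bool TryAuthenticate(string user, string password, out string token)
    {
        token = null;
        if (user == "alice" && password == "correct-horse") // stand-in for a real credential check
        {
            token = Guid.NewGuid().ToString("N");
            return true;
        }
        return false;
    }

    static void Main()
    {
        // Simulate a burst of failed login attempts.
        for (int i = 0; i < 5; i++)
        {
            if (TryAuthenticate("alice", "wrong" + i, out string token))
                Console.WriteLine("token: " + token);
            else
                Console.WriteLine("rejected");
        }
    }
}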
Measured in microseconds (though it depends on stack depth):
[Chart not reproduced here; it comes from the article ".Net exceptions performance", where the author also posts the testing code.]
Are they free? No.
Do you generally get what you pay for? Most of the time yes.
Are they slow? The answer should always be "Compared to what?" They are likely orders of magnitude faster than any connected service or data call.
Explanation:
I'm quite interested in the origins of this question. As far as I can tell, it is residual distaste for the marginally useful C++ exceptions. .NET exceptions carry a wealth of information and allow for neat and tidy code without excessive success checks and logging. I explain much of their benefit in another answer.
In 20 years of programming, I've never removed a throw or a catch to make something faster (not to say that I couldn't have, just that there was lower-hanging fruit, and after that was picked, nobody complained).
There is a separate question with competing answers: one catching an exception (the built-in method provides no "Try" variant) and one avoiding exceptions.
I decided to do a head-to-head performance comparison of the two. For a small number of columns the exception-avoiding version was faster, but the exception version scaled better and eventually outperformed it (the chart is not reproduced here).
The LINQPad code for that test is below (including the graph rendering).
The point here, though, is that the claim "exceptions are slow" raises the question: slower than what? If a deep-stack exception costs 500 microseconds, does it matter when it occurs in response to a unique-constraint violation that took the database 3000 microseconds to detect? In any case, this demonstrates that avoiding exceptions across the board for performance reasons will not necessarily yield more performant code.
Code for performance test:
// LINQPad "C# Program" query; add System.Data, System.Data.SqlClient and
// System.Diagnostics to the query's namespace imports (query properties).
void Main()
{
    var loopResults = new List<Results>();
    var exceptionResults = new List<Results>();
    var totalRuns = 10000;
    for (var colCount = 1; colCount < 20; colCount++)
    {
        using (var conn = new SqlConnection(@"Data Source=(localdb)\MSSQLLocalDb;Initial Catalog=master;Integrated Security=True;"))
        {
            conn.Open();

            //create a dummy table where we can control the total columns
            var columns = String.Join(",",
                (new int[colCount]).Select((item, i) => $"'{i}' as col{i}")
            );
            var sql = $"select {columns} into #dummyTable";
            var cmd = new SqlCommand(sql, conn);
            cmd.ExecuteNonQuery();

            var cmd2 = new SqlCommand("select * from #dummyTable", conn);
            var reader = cmd2.ExecuteReader();
            reader.Read();

            Func<Func<IDataRecord, String, Boolean>, List<Results>> test = funcToTest =>
            {
                var results = new List<Results>();
                Random r = new Random();
                for (var faultRate = 0.1; faultRate <= 0.5; faultRate += 0.1)
                {
                    Stopwatch stopwatch = new Stopwatch();
                    stopwatch.Start();
                    var faultCount = 0;
                    for (var testRun = 0; testRun < totalRuns; testRun++)
                    {
                        if (r.NextDouble() <= faultRate)
                        {
                            faultCount++;
                            if (funcToTest(reader, "colDNE"))
                                throw new ApplicationException("Should have returned false");
                        }
                        else
                        {
                            for (var col = 0; col < colCount; col++)
                            {
                                if (!funcToTest(reader, $"col{col}"))
                                    throw new ApplicationException("Should have returned true");
                            }
                        }
                    }
                    stopwatch.Stop();
                    results.Add(new UserQuery.Results {
                        ColumnCount = colCount,
                        TargetNotFoundRate = faultRate,
                        NotFoundRate = faultCount * 1.0f / totalRuns,
                        TotalTime = stopwatch.Elapsed
                    });
                }
                return results;
            };

            loopResults.AddRange(test(HasColumnLoop));
            exceptionResults.AddRange(test(HasColumnException));
        }
    }

    "Loop".Dump();
    loopResults.Dump();
    "Exception".Dump();
    exceptionResults.Dump();

    var combinedResults = loopResults.Join(exceptionResults,
        l => l.ResultKey, e => e.ResultKey,
        (l, e) => new { ResultKey = l.ResultKey, LoopResult = l.TotalTime, ExceptionResult = e.TotalTime });
    combinedResults.Dump();

    // Use TotalMilliseconds: the Milliseconds property is only the 0-999 ms
    // component and would make the chart wrong for longer runs.
    combinedResults
        .Chart(r => r.ResultKey, r => r.LoopResult.TotalMilliseconds / totalRuns, LINQPad.Util.SeriesType.Line)
        .AddYSeries(r => r.ExceptionResult.TotalMilliseconds / totalRuns, LINQPad.Util.SeriesType.Line)
        .Dump();
}

public static bool HasColumnLoop(IDataRecord dr, string columnName)
{
    // Scan the field list manually; never throws.
    for (int i = 0; i < dr.FieldCount; i++)
    {
        if (dr.GetName(i).Equals(columnName, StringComparison.InvariantCultureIgnoreCase))
            return true;
    }
    return false;
}

public static bool HasColumnException(IDataRecord r, string columnName)
{
    // GetOrdinal throws IndexOutOfRangeException when the column is missing.
    try
    {
        return r.GetOrdinal(columnName) >= 0;
    }
    catch (IndexOutOfRangeException)
    {
        return false;
    }
}

public class Results
{
    public double NotFoundRate { get; set; }
    public double TargetNotFoundRate { get; set; }
    public int ColumnCount { get; set; }
    public double ResultKey { get => ColumnCount + TargetNotFoundRate; }
    public TimeSpan TotalTime { get; set; }
}