ThreadPool.QueueUserWorkItem vs Task.Factory.StartNew

What is the difference between the two below?

ThreadPool.QueueUserWorkItem

vs

Task.Factory.StartNew
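
For reference, a rough sketch of the two calls being compared (the work inside each is just a placeholder):

    // using System.Threading; using System.Threading.Tasks;
    ThreadPool.QueueUserWorkItem(_ =>
    {
        // some long-running work
    });

    Task.Factory.StartNew(() =>
    {
        // some long-running work
    });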

If the above code is called 500 times for some long-running task, does it mean all the thread pool threads will be taken up?

Or will the TPL (the second option) be smart enough to take up only a number of threads less than or equal to the number of processors?


If you're going to start a long-running task with TPL, you should specify TaskCreationOptions.LongRunning, which will mean it doesn't schedule it on the thread-pool. (EDIT: As noted in comments, this is a scheduler-specific decision, and isn't a hard and fast guarantee, but I'd hope that any sensible production scheduler would avoid scheduling long-running tasks on a thread pool.)
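For example, a minimal sketch (the DoLongRunningWork method here is hypothetical) using the StartNew overload that takes both the creation options and an explicit scheduler:

    // using System.Threading; using System.Threading.Tasks;
    Task task = Task.Factory.StartNew(
        () => DoLongRunningWork(),        // placeholder for the actual long-running work
        CancellationToken.None,
        TaskCreationOptions.LongRunning,  // hint: don't run this on a thread-pool thread
        TaskScheduler.Default);

    task.Wait();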

You definitely shouldn't schedule a large number of long-running tasks on the thread pool yourself. I believe that these days the default size of the thread pool is pretty large (because it's often abused in this way) but fundamentally it shouldn't be used like this.

The point of the thread pool is to avoid short tasks taking a large hit from creating a new thread, compared with the time they're actually running. If the task will be running for a long time, the impact of creating a new thread will be relatively small anyway - and you don't want to end up potentially running out of thread pool threads. (It's less likely now, but I did experience it on earlier versions of .NET.)

Personally if I had the option, I'd definitely use TPL on the grounds that the Task API is pretty nice - but do remember to tell TPL that you expect the task to run for a long time.

EDIT: As noted in comments, see also the PFX team's blog post on choosing between the TPL and the thread pool:

In conclusion, I’ll reiterate what the CLR team’s ThreadPool developer has already stated:

Task is now the preferred way to queue work to the thread pool.

EDIT: Also from comments, don't forget that TPL allows you to use custom schedulers, if you really want to...
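
For instance, something along these lines (a sketch only; it assumes the goal is to cap concurrency, and DoWork is a placeholder) uses ConcurrentExclusiveSchedulerPair to limit how many tasks run at once:

    // using System; using System.Threading; using System.Threading.Tasks;
    var pair = new ConcurrentExclusiveSchedulerPair(
        TaskScheduler.Default,
        maxConcurrencyLevel: Environment.ProcessorCount);

    Task task = Task.Factory.StartNew(
        () => DoWork(),                   // placeholder work
        CancellationToken.None,
        TaskCreationOptions.None,
        pair.ConcurrentScheduler);        // tasks queued here respect the concurrency cap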

No, using the Task.Factory.StartNew method (or the more modern Task.Run method) adds no extra cleverness to the way the ThreadPool threads are utilized. Calling Task.Factory.StartNew 500 times with long-running tasks is certainly going to saturate the ThreadPool, and will keep it saturated for a long time. That is not a good situation to be in, because a saturated ThreadPool negatively affects any other independent callbacks, timer events, async continuations etc. that may also be active during this 500-launched-tasks period.
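
To make the scenario concrete, here is a sketch of what that saturation looks like (the count and sleep duration are illustrative only):

    // using System; using System.Threading; using System.Threading.Tasks;
    for (int i = 0; i < 500; i++)
    {
        // Long-running work queued on the default scheduler, i.e. the ThreadPool.
        Task.Factory.StartNew(() => Thread.Sleep(TimeSpan.FromMinutes(5)));
    }

    // Available worker threads drop toward zero, and recover only as slowly as
    // the thread-injection algorithm adds new threads to the pool.
    ThreadPool.GetAvailableThreads(out int workers, out int ioThreads);
    Console.WriteLine($"Available worker threads: {workers}, IO threads: {ioThreads}");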

The Task.Factory.StartNew method schedules the execution of the supplied Action on the TaskScheduler.Current, which by default is the TaskScheduler.Default, i.e. the internal ThreadPoolTaskScheduler class. Here is the source code of the ThreadPoolTaskScheduler.QueueTask method:

protected internal override void QueueTask(Task task)
{
    if ((task.Options & TaskCreationOptions.LongRunning) != 0)
    {
        // Run LongRunning tasks on their own dedicated thread.
        Thread thread = new Thread(s_longRunningThreadWork);
        thread.IsBackground = true; // Keep this thread from blocking process shutdown
        thread.Start(task);
    }
    else
    {
        // Normal handling for non-LongRunning tasks.
        bool forceToGlobalQueue = ((task.Options & TaskCreationOptions.PreferFairness) != 0);
        ThreadPool.UnsafeQueueCustomWorkItem(task, forceToGlobalQueue);
    }
}

As you can see, the execution of the task is scheduled on the ThreadPool anyway. The ThreadPool.UnsafeQueueCustomWorkItem is an internal method of the ThreadPool class, and has some nuances (the bool forceGlobal parameter) that are not publicly exposed. But there is nothing in it that changes the behavior of the ThreadPool when it becomes saturated¹. This behavior, currently, is not particularly sophisticated either: the thread-injection algorithm just injects one new thread into the pool every 500 msec, until the saturation incident ends.

¹ The ThreadPool is said to be saturated when the demand for work surpasses the current availability of threads, and the ThreadPool.SetMinThreads threshold, above which new threads are no longer created on demand, has been reached.
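
If you want to inspect or raise that threshold yourself, a small sketch (the values are illustrative, and raising the minimum has its own trade-offs):

    // using System; using System.Threading;
    ThreadPool.GetMinThreads(out int minWorkers, out int minIo);
    Console.WriteLine($"Min worker threads: {minWorkers}, min IO threads: {minIo}");

    // Above this minimum, the pool injects roughly one new thread every 500 msec
    // while it remains saturated.
    ThreadPool.SetMinThreads(100, minIo);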