Threading/Concurrency vs Parallelism

Mon, November 3, 2008, 02:24 AM under ParallelComputing
For our software to take advantage of multiple cores, ultimately threads have to be used. Because of this fact, some developers fall into the trap of equating multithreading with parallelism. That is not accurate.

You can have multithreading on a single-core machine, but you can only have parallelism on a multi-core machine (or multi-proc, but I treat them the same). The quick test: if on a single-core machine you are using threads and it makes perfect sense for your scenario, then you are not "doing parallelism", you are just doing multithreading. If that same code runs on a multi-core machine, any overall speedups that you may observe are accidental – you did not "think parallelism".

The mainstream devs I know claim they are comfortable with multithreading, but when you drill into what scenarios they are enabling when using threads, two patterns emerge. The first has to do with keeping the UI responsive (and the UI thread affinity issue): spin up a thread to carry out some work and then marshal the results back to the UI (and, in advanced scenarios, communicate progress events and also offer cancellation). The second has to do with I/O of one form or another: call a web service on one thread and, while waiting for the results, do some other work on another; or carry out some file operation asynchronously and continue doing work on the initiating thread. When the results from the (disk/network/etc.) I/O operation are available, some synchronization takes place to merge the data from the executor to the requestor. The .NET framework has very good support for all of the above, for example with things like the BackgroundWorker and the APM pattern (BeginXYZ/EndXYZ) or even the event-based asynchronous pattern. Ultimately, it all comes down to using the ThreadPool directly or indirectly.
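
To make the first pattern concrete, here is a minimal sketch using BackgroundWorker; the form and the ComputeSomething method are made up purely for illustration:

```csharp
using System;
using System.ComponentModel;
using System.Windows.Forms;

// Sketch of the "keep the UI responsive" pattern: do the work on a
// background thread and marshal the result back to the UI thread.
public class MainForm : Form
{
    private readonly BackgroundWorker worker = new BackgroundWorker();
    private readonly Button startButton = new Button { Text = "Start" };

    public MainForm()
    {
        Controls.Add(startButton);

        worker.DoWork += (s, e) =>
        {
            // Runs on a ThreadPool thread; must not touch UI controls here.
            e.Result = ComputeSomething();
        };
        worker.RunWorkerCompleted += (s, e) =>
        {
            // Raised back on the UI thread, so touching controls is safe.
            startButton.Text = (string)e.Result;
        };

        startButton.Click += (s, e) => worker.RunWorkerAsync();
    }

    private static string ComputeSomething()
    {
        System.Threading.Thread.Sleep(2000); // stand-in for real work
        return "Done";
    }
}
```

The DoWork handler runs on a ThreadPool thread while RunWorkerCompleted is raised back on the UI thread – exactly the marshalling described above.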

The previous paragraph summarized the opportunities (and touched on the challenges) of leveraging concurrency on a single-core machine (which is what most of us run on today). The end user goal is to improve the perceived performance, or perhaps to improve the overall performance by hiding latency. All of the above is applicable on multi-core machines too, but it is not parallelism. On a multi-core machine there is an additional opportunity: to improve the actual performance of your compute bound operations by bringing parallel programming into the picture. (More on this in another post.)

Another way of putting it is that on a single core you can use threads and you can have concurrency, but to achieve parallelism on a multi-core box you have to identify the exploitable concurrency in your code: the portions of your code that can truly run at the same time.
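
As a tiny illustration (the manual partitioning and the Square function are made-up stand-ins for any compute bound, per-element work), the iterations of the loop below do not depend on each other, so they can genuinely execute simultaneously on different cores:

```csharp
using System;
using System.Threading;

static class ParallelSketch
{
    // Splits the input range across workerCount threads. This is safe only
    // because each iteration touches its own elements: there is no
    // cross-iteration dependency, i.e. the concurrency is exploitable.
    static void ParallelMap(int[] input, int[] output, int workerCount)
    {
        int chunk = (input.Length + workerCount - 1) / workerCount;
        var threads = new Thread[workerCount];
        for (int w = 0; w < workerCount; w++)
        {
            int start = w * chunk;  // declared per iteration, so each
            int end = Math.Min(start + chunk, input.Length); // lambda gets its own copy
            threads[w] = new Thread(() =>
            {
                for (int i = start; i < end; i++)
                    output[i] = Square(input[i]);
            });
            threads[w].Start();
        }
        foreach (var t in threads) t.Join(); // wait for all workers to finish
    }

    static int Square(int x) { return x * x; }
}
```

Contrast this with a loop where iteration i needs the result of iteration i-1: that dependency chain leaves no exploitable concurrency, and adding threads would buy you nothing.
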
Monday, 03 November 2008 05:14:00 (Pacific Standard Time, UTC-08:00)
I think perhaps this shows that many of the problems being solved today aren't compute bound; they have to do with collating and managing I/O in a responsive fashion.

Certainly there are many "massively parallelisable" tasks, like graphics, but for many things just having a bunch of tasks in a thread pool, with some lightweight messaging, is enough.
Anonymous
Monday, 03 November 2008 05:42:00 (Pacific Standard Time, UTC-08:00)
> For our software to take advantage of multiple cores, ultimately threads have to be used.

False! Processes can be used as well... I suggest you tell some Erlang users that they need threads to use multiple cores and see what they say. Or tell users of Python's multiprocessing module.
Monday, 03 November 2008 09:49:45 (Pacific Standard Time, UTC-08:00)
Anonymous: Why anonymous and not just drop your name? Anyway, you are correct that today most of the problems in mainstream software do not need parallelization. This *will* change – the only question is "when". It is driven by the hardware.

Bill: Thanks for your enthusiasm. Re-read my quote in your reply: ultimately threads have to be used. What do you think the OS schedules on the multiple cores: threads or processes? There are also other programming models that use message passing, so the developer never explicitly creates threads. Again, what do you think the OS schedules: threads or messages?
Tuesday, 04 November 2008 21:17:00 (Pacific Standard Time, UTC-08:00)
Very nice post. I equate this to the same kind of jargon distinction as tier vs layer or defect vs bug. It actually says a lot when these words are used properly.
Wednesday, 05 November 2008 06:56:00 (Pacific Standard Time, UTC-08:00)
We also benefit from multi-cores for servers where each connection is handled in a separate thread or process.

In this case we were initially looking to simplify application development by removing the need for state machines, with the added benefit that with multi-cores we get true parallelism.

An optimization that also works is the thread/process pool, where each thread/process handles multiple (but not all) connections. In this case we are back with a state machine and still benefit from multi-core parallelism.

Other than that, grid computing (e.g. BOINC) does true parallelism on multi-core.
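
Roughly, the thread-per-connection shape described above looks like the sketch below; the port number and the HandleClient method are illustrative only:

```csharp
using System.Net;
using System.Net.Sockets;
using System.Threading;

static class ThreadPerConnectionSketch
{
    // Each accepted connection gets its own thread, so on a multi-core
    // server the per-connection handlers truly run in parallel.
    static void Serve()
    {
        var listener = new TcpListener(IPAddress.Any, 8080); // port is illustrative
        listener.Start();
        while (true)
        {
            // 'client' is declared inside the loop, so each thread's
            // lambda captures its own connection.
            TcpClient client = listener.AcceptTcpClient(); // blocks until a client connects
            new Thread(() => HandleClient(client)).Start();
        }
    }

    static void HandleClient(TcpClient client)
    {
        using (client)
        {
            // read the request, run the protocol logic for this
            // connection, write the response...
        }
    }
}
```

The pooled variant trades this one-thread-per-connection simplicity for a fixed set of threads, each multiplexing several connections via a state machine.
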
Friday, 07 November 2008 12:43:42 (Pacific Standard Time, UTC-08:00)
Bart: yes, there is a lot of jargon and there are subtle differences between terms. The main point of my post is that having A does not mean you have B; however, B is achievable via A in some circumstances (A and B being multithreading and parallelism, of course). It kind of reminds me of the flawed logic you'd hear from kids: the sea has water, I can drink water, therefore I can drink from the sea ;-)

Jean: yes, server side there is interest in parallelism too. However, on the server side most of the time the goal of keeping your cores busy is achieved via throughput. You typically have more network requests than cores, so you are good to go. Introducing parallelism per request there would probably hurt your overall performance (or at least not improve it). For truly compute bound operations on a cluster of servers, HPC comes into the picture. For me, the game changer is parallelism on the client – those are the technologies I am focusing on. Having said that, we are definitely looking to scale our new programming models to work equally well on a single node and on multiple nodes – stay tuned :-)