Why Care About Parallelism aka The Inevitable Shift

Sun, February 15, 2009, 09:56 PM under ParallelComputing
That was the title of a 45-minute (inc. questions) presentation I gave last December. It was a basic introduction to the manycore shift, including what parallelism is and why software developers should care. The session ended by touching, at a very high level, on what Microsoft is doing in this space.

It was slides only (well, there were 3 demo/sample apps shown, but no code) and you can download the deck as a ZIP file (the slides are a montage drawn from decks by other Microsoft employees).

The basic flow is as follows:
slide 3: Understanding Moore's Law
slides 4-7: Moore's Law is still alive, but it no longer translates into higher clock frequencies. That is mainly due to the power wall: at the end of the day, higher frequencies produce more heat than the CPU manufacturers can deal with.
slide 8: So instead we get the manycore shift, with CPU manufacturers adding more cores to the die rather than making a single core go faster. Predictions are for 100-core machines within 5 years (for the record, these are not my predictions).
slide 9: For us software developers, to take advantage of the increased total horsepower of manycore machines (on the client/desktop) we must employ parallelism; there is no magic bullet or automatic gain. It is naïve to think that we will not need the increased speed or that we don’t know what to do with it:
a. We have been taking advantage (implicitly or explicitly) of increased CPU speeds over our entire existence. Why do we think we'll stop now?
b. Every shift in the software industry (whether it is the move from console to GUI apps, from desktop to mobile apps, or even the recent client-side web programming advancements) has been possible partly because we could take advantage of higher processor speeds. Why will the next shift in computing (whatever that is) be different?
slide 10: DEMO the morphing application (same one I showed at Tech Ed EMEA)
slide 11: It is important to note that not all of the additional cores will be as fast as today's CPUs; they will more likely run at lower frequencies. So to get even the same output that we get from one core today, we'll have to use parallelism to leverage more than one core (if each new core runs at, say, half of today's clock speed, it takes two of them just to break even).
slide 12: Also important to note that it isn’t just Microsoft telling this story. Virtually every industry player is predicting the same situation.
slide 13: So the question is: what do I do with all those cores? Beyond the goals that good multithreading already serves (responsiveness, scalability and latency-awareness), parallelism takes things to the next level:
slides 14-15: Obey Amdahl's Law: do the same thing, but genuinely faster (a standard statement of the law follows this walkthrough)
slide 16: Obey Gustafson's Law: do more stuff in the same time! (also spelled out below)
slide 17: Use speculative execution: spend spare cores on alternative computations and keep whichever answer arrives first (a code sketch follows the walkthrough).
slide 18: DEMO the RayTracer application (same one I showed at PDC)
slide 19: "OK, I am sold. I must use parallelism. Show me how"… "Well, actually it is darn hard today if you try to use traditional multithreading to achieve parallelism" (the last sketch after the walkthrough shows the kind of bookkeeping involved)
slide 20: Microsoft established the Parallel Computing Initiative to address the goals/symptoms above
slide 21: This is not the only team in Microsoft thinking about the problem; it is being attacked from many angles.
slide 22: DEMO Baby Names application
slide 23: I bet you want to see some code… Read the Summary slide, and let's move on to the next session.
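For the curious, here is a standard statement of Amdahl's Law from slides 14-15 (my notation, not from the deck): if a fraction P of a fixed workload can be parallelized and the rest stays serial, the speedup on N cores is

    S(N) = \frac{1}{(1 - P) + P / N}

so with P = 0.9, even infinitely many cores cap the speedup at 1 / (1 - 0.9) = 10x; the serial fraction always wins in the end.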
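Gustafson's Law (slide 16) reads the same situation the other way around: keep the running time fixed and grow the problem instead. The scaled speedup is

    S(N) = (1 - P) + P \cdot N

so with P = 0.9 on N = 32 cores you get through 0.1 + 0.9 × 32 = 28.9 times the work in the same time, which is exactly the "do more stuff" framing.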
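To make the speculative execution idea on slide 17 concrete, here is a minimal sketch. It uses Java's standard ExecutorService rather than any of the Microsoft technology the talk refers to, and heuristicSolve/exhaustiveSolve are hypothetical stand-ins for two strategies that compute the same answer: both are started, the first to finish wins, and the loser is cancelled.

    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class Speculate {
        // Hypothetical stand-ins: two ways of computing the same result,
        // with no way to know up front which will finish first.
        static int heuristicSolve(int n)  { return n * n; }
        static int exhaustiveSolve(int n) { return n * n; }

        public static void main(String[] args) throws Exception {
            ExecutorService pool = Executors.newFixedThreadPool(2);
            try {
                List<Callable<Integer>> candidates = List.of(
                        () -> heuristicSolve(12),
                        () -> exhaustiveSolve(12));
                // invokeAny runs the candidates concurrently, returns the
                // first successful result, and cancels the stragglers.
                int answer = pool.invokeAny(candidates);
                System.out.println("answer = " + answer);
            } finally {
                pool.shutdownNow();
            }
        }
    }

The price is the wasted work on the losing core; the bet is that on a manycore machine those cores were idle anyway.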
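And to back the "darn hard" claim on slide 19, this is roughly what even a trivial parallel sum looks like with raw threads (again plain Java, as a stand-in for traditional multithreading in general): partition the index range by hand, mind the remainder, give every thread its own slot for a partial result, join, and merge.

    public class ManualSum {
        public static void main(String[] args) throws InterruptedException {
            long[] data = new long[10_000_000];
            for (int i = 0; i < data.length; i++) data[i] = i;

            int nThreads = Runtime.getRuntime().availableProcessors();
            Thread[] workers = new Thread[nThreads];
            long[] partial = new long[nThreads];  // one slot per thread, so no locks

            // Split the index range by hand, minding the remainder chunk.
            int chunk = (data.length + nThreads - 1) / nThreads;
            for (int t = 0; t < nThreads; t++) {
                final int id = t;
                final int lo = id * chunk;
                final int hi = Math.min(lo + chunk, data.length);
                workers[t] = new Thread(() -> {
                    long sum = 0;
                    for (int i = lo; i < hi; i++) sum += data[i];
                    partial[id] = sum;
                });
                workers[t].start();
            }

            long total = 0;
            for (int t = 0; t < nThreads; t++) {
                workers[t].join();   // join makes the partial[] write visible
                total += partial[t];
            }
            System.out.println("sum = " + total);
        }
    }

All that ceremony covers only the easiest possible case: no shared mutable state, no exceptions, no cancellation, no load balancing. Collapsing it into something closer to a single loop construct is exactly the kind of problem the teams above are working on.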