The last word on C++ AMP...

Fri, August 31, 2012, 12:47 PM under GPGPU | ParallelComputing

Well, not the last word, but the last blog post I plan to do here on that topic.

Over the last 12 months, I have published 45 blog posts related to C++ AMP on the Parallel Programming in Native Code blog, and the rest of the team has published even more. Occasionally I'll link to some of them from my own blog here, but today I decided to stop doing that - so if you relied on my personal blog pointing you to C++ AMP content, it is time you subscribed to the msdn blog. I will continue to blog about other topics here of course, so stay tuned.

So, for the last time, I encourage you to read the latest two blog posts I published on the team blog, bringing together essential reading material on C++ AMP.

Got questions on C++ AMP? Hit the msdn forum!


Get started with C++ AMP

Tue, July 24, 2012, 11:22 PM under GPGPU | ParallelComputing

With the imminent release of Visual Studio 2012, even if you do not classify yourself as a C++ developer, C++ AMP is something you should learn so you can understand how to speed up your loops by offloading to the GPU the computation performed in the loop (assuming you have a large number of iterations/data). We have many C# customers who are using C++ AMP through pinvoke, and of course many more directly from C++.

So regardless of your programming language, I hope you'll find these short videos helpful for getting started with C++ AMP:

  1. C++ AMP core API introduction... from scratch
  2. Tiling Introduction - C++ AMP
  3. Matrix Multiplication with C++ AMP
  4. GPU debugging in Visual Studio 2012

In particular, the work we have done for parallel and GPU debugging in Visual Studio 2012 is market-leading, so check it out!


Attend my Fusion sessions

Wed, June 6, 2012, 07:50 PM under Events | GPGPU | ParallelComputing

The inaugural Fusion conference was 1 year ago in June 2011 and I was there doing a demo in the keynote, and also presenting a breakout session. If you look at the abstract and title for that session you won't see the term "C++ AMP" in there, because we didn't want to spill the beans ahead of the keynote, where the technology was announced. It was only an announcement; we did not give any bits out, and in fact the first bits came three months later in September 2011, with the Beta following in February 2012.

So it really feels great, 1 year later, to be back at Fusion presenting two sessions on C++ AMP, demonstrating our progress from that announcement to the Visual Studio 2012 Release Candidate that came out last week.

If you are attending Fusion (in person or virtually later), be sure to watch my two-part session. Part 1 is PT-3601 on Tuesday 4pm and part 2 is PT-3602 on Wednesday 4pm. Here is the shared abstract for both parts:

Harnessing GPU Compute with C++ AMP

C++ AMP is an open specification for taking advantage of accelerators like the GPU. In this session we will explore the C++ AMP implementation in Microsoft Visual Studio 2012. After a quick overview of the technology, understanding its goals and its differentiation compared with other approaches, we will dive into the programming model and its modern C++ API. This is a code-heavy, interactive, two-part session, where every part of the library will be explained. Demos will include showing off the richest parallel and GPU debugging story on the market, in the upcoming Visual Studio release.

See you there!


Attend my GTC sessions

Sun, May 13, 2012, 01:36 PM under Events | GPGPU | ParallelComputing

The last GTC conference in the US was 2 years ago and I was there as an attendee. You may recall from that blog post that we were running UX studies at the time.

It really feels great, 2 years later, to be back at GTC presenting two sessions on C++ AMP, demonstrating our progress that includes input from those early studies.

If you are attending GTC (in person or virtually later), be sure to watch my two-part session. Part 1 is S0242 on Wednesday 5pm and part 2 is S0244 on Thursday 10am. Here is the shared abstract for both parts:

Harnessing GPU Compute with C++ AMP

C++ AMP is an open specification for taking advantage of accelerators like the GPU. In this session we will explore the C++ AMP implementation in Microsoft Visual Studio 11 Beta. After a quick overview of the technology, understanding its goals and its differentiation compared with other approaches, we will dive into the programming model and its modern C++ API. This is a code-heavy, interactive, two-part session, where every part of the library will be explained. Demos will include showing off the richest parallel and GPU debugging story on the market, in the upcoming Visual Studio release.

See you there!


Screencasts introducing C++ AMP

Thu, April 12, 2012, 07:07 PM under GPGPU | ParallelComputing

It has been almost 2.5 years since I last recorded a screencast, and I had forgotten how time consuming they are to plan/record/edit/produce/publish, but at the same time so much fun to see the end result!

So below are links to 4 screencasts to teach you C++ AMP basics from scratch (even if you class yourself as a .NET developer, you'll be able to follow).

  1. Setup code - part 1
  2. array_view, extent, index - part 2
  3. parallel_for_each - part 3
  4. accelerator - part 4

If you have comments/questions about what is shown in each video, please leave them at each video recording. If you have generic questions about C++ AMP, please ask in the C++ AMP MSDN forum.


My MSDN magazine articles are live

Mon, April 2, 2012, 07:01 AM under GPGPU | ParallelComputing

Five years ago I wrote my first MSDN magazine article, and 21 months later I wrote my second MSDN Magazine article (during the VS 2010 Beta). By my calculation, that makes it two and a half years without having written anything for my favorite developer magazine!

So, I came back with a vengeance, and in this month's April issue of the MSDN magazine you can find two articles from yours truly - enjoy:

  1. A Code-Based Introduction to C++ AMP
  2. Introduction to Tiling in C++ AMP

For more on C++ AMP, please remember that I blog about it on our team blog, and we take questions in our MSDN forum.


C++ AMP Video Overview

Fri, February 24, 2012, 03:44 PM under GPGPU | ParallelComputing

I hope to be recording some C++ AMP screencasts for channel9 soon (you'll find them through my regular screencasts link on the left), and in all of them I will assume you have watched this short interview overview of C++ AMP.

[video: short interview overview of C++ AMP]

Note: I think there were some technical problems with streaming, so it is best to download the "High Quality WMV" or switch to the progressive format.


.NET access to the GPU for compute purposes

Thu, December 1, 2011, 07:42 PM under GPGPU | ParallelComputing

In the distant past I talked about GPGPU and Microsoft's then approach of DirectCompute. Since then, of course, we now have C++ AMP coming out with Visual Studio 11, so there is a mainstream, easier way for developers to access the GPU for compute purposes, using C++.

The question occasionally arises of how a .NET developer can access the GPU for compute purposes from their C# (or VB) code. The answer is to interop from the managed code to a native DLL, and in the native DLL use C++ AMP.

As a long-term .NET developer myself, I can tell you this is straightforward. Sure, there could have been a managed wrapper for C++ AMP, but honestly that is the reason we have interop – it doesn't make much sense to invest resources to solve a problem that is already solved (most developer customers would prefer investments in other areas of Visual Studio!). Besides, interoping from C# to C++ is much easier than interoping to some of the older approaches to GPGPU programming ;-)
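
To make the shape of that interop concrete, here is a minimal sketch of my own (not code from the posts linked below) of the native side: a function exported from a native DLL that uses C++ AMP, which a C# app could then call via p/invoke. The function name and signature are hypothetical.

  // Built into a native DLL; squares an array on the default accelerator.
  #include <amp.h>

  extern "C" __declspec(dllexport) void square_array(float* arr, int n)
  {
    concurrency::array_view<float, 1> av(n, arr); // wrap the caller's buffer
    concurrency::parallel_for_each(av.extent,
      [=](concurrency::index<1> idx) restrict(amp)
      {
        av[idx] = av[idx] * av[idx];
      });
    av.synchronize(); // make sure results are back in arr before returning to managed code
  }

On the managed side you would declare this export with a [DllImport] attribute and pass a float[] plus its length.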

To help you get started with the interop approach, Igor Ostrovsky has previously shared the "Hello World" version of interoping from C# to C++ AMP in his blog post:

…we then were asked specifically about how to interop from C# to C++ AMP in a Metro style application on Windows 8, so Igor delivered again with this post:

Have fun!


Short interview on C++ AMP

Mon, October 17, 2011, 05:04 PM under GPGPU | ParallelComputing

While at the BUILD conference a month ago, I ran into Bruno Boucard who asked me a few questions about C++ AMP. I just returned from vacation to find that he uploaded the 15-minute interview, so here is a direct link to YouTube:


http://www.youtube.com/watch?v=a2IbOe_ogGE

What DX level does my graphics card support? Does it go to 11?

Fri, October 14, 2011, 05:13 PM under GPGPU | ParallelComputing

Recently I ran into a situation that I have run into quite a few times. Someone encounters a machine and the question arises: "Is there a DirectX 11 card in this machine?". Typically the reason you are interested in that is because cards with DirectX 11 drivers fully support DirectCompute (and by extension C++ AMP) for GPGPU programming. Specifically, the driver model is WDDM (1.1 on Windows 7; Windows 8 introduces WDDM 1.2 with cool new capabilities).

There are many ways of figuring out whether you have a DirectX 11 card, so here are the approaches you can use, with a bonus right at the end of the post.

Run DxDiag

WindowsKey + R, type DxDiag and hit Enter. That is the DirectX diagnostic tool, which unfortunately only tells you, on the "System" tab, the highest version of DirectX installed on your machine. So if it reports DirectX 11, that doesn't mean you have a DX11 driver! The "Display" tab has a promising "DDI version" label, but unfortunately that doesn't seem to be accurate on the machines I've tested it with (or I may be misinterpreting its use). Either way, this tool is not the one you want for this purpose, although it is good for telling you the WDDM version among other things.

Use the Microsoft hardware page

There is a Microsoft Windows 7 compatibility center that lists all hardware (tip: use the advanced search) and you could try to locate your device there… good luck.

Use Wikipedia or the hardware vendor's website

Use the Wikipedia page for the vendor's cards, for both NVIDIA and AMD. Often this information will also be in the specifications for the cards on the IHV site, but it is nice that Wikipedia has a single page per vendor that you can search etc. There is a column in the tables for API support where you can see the DirectX version.

Check if it is one of these recommended DX11 cards

You may not have a DirectX 11 card and may be interested in purchasing one. While I am in no position to make recommendations, I will list here some cards from two big IHVs that we know are DirectX 11 capable.

  • Some AMD (aka ATI) cards
    • Low end, inexpensive DX11 hardware:
      • Radeon 5450, 5550, 6450, 6570
    • Mid range (decent perf, single precision):
      • Radeon 5750, 5770, 6770, 6790
    • High end (capable of double precision):
      • Radeon 5850, 5870, 6950, 6970
    • Single precision APUs:
      • AMD E-Series APUs
      • AMD A-Series APUs
  • Some NVIDIA cards
    • Low end, inexpensive DX11 hardware:
      • GeForce GT 430, GT 440, GT 520, GTS 450
      • Quadro 400, 600
    • Mid-range (decent perf, single precision):
      • GeForce GTX 460, GTX 550 Ti, GTX 560, GTX 560 Ti
      • Quadro 2000
    • High end (capable of double precision):
      • GeForce GTX 480, GTX 570, GTX 580, GTX 590, GTX 595
      • Quadro 4000, 5000, 6000
      • Tesla C2050, C2070, C2075

Get the DirectX SDK and run DirectX Caps Viewer

Download and install the June 2010 DirectX SDK. As part of that you now have the DirectX Capabilities Viewer utility (find it in your start menu by searching for "DirectX Caps Viewer", the filename is DXCapsViewer.exe). It will list all your devices (emulated, and real hardware ones) under the first node. Expand the hardware entries and then expand the Direct3D 11 folder. If you see D3D_FEATURE_LEVEL_11_0 under that, then your card supports feature level 11, which means it supports DirectCompute and C++ AMP. In the following screenshot of one of my old laptops, the card only goes to feature level 10.

DirectX Caps Viewer

Run a utility from the web that just tells you!

Of course, writing some C++ AMP code that enumerates accelerators and lists the ones that are capable is trivial. However, that requires that you have redistributed the runtime, so a more broadly applicable approach is to use the DX APIs directly to enumerate the DX11-capable cards. That is exactly what the development lead for C++ AMP has done, and he describes and shares that utility in this post.
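
For completeness, here is roughly what that "trivial" C++ AMP enumeration could look like – a sketch of my own, not the utility linked above – printing the description of every non-emulated accelerator the runtime can see:

  #include <amp.h>
  #include <iostream>

  int main()
  {
    // accelerator::get_all returns every accelerator known to the runtime,
    // including the emulated ones (REF, WARP), which we skip here
    for (const concurrency::accelerator& acc : concurrency::accelerator::get_all())
    {
      if (!acc.is_emulated)
        std::wcout << acc.description << std::endl;
    }
  }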


Give a session on C++ AMP – here is how

Wed, September 21, 2011, 06:53 PM under GPGPU | ParallelComputing

2/29/2012 added some Beta notes inline

Ever since presenting on C++ AMP at the AMD Fusion conference in June, then the Gamefest conference in August, and the BUILD conference in September, I've had numerous requests about my material from folks who want to re-deliver the same session. The C++ AMP session I put together has evolved over the 3 presentations to its final form that I used at BUILD, so that is the one I recommend you base yours on.

Please get the slides and the recording from channel9 (I'll refer to slide numbers below).

This is how I've been presenting the C++ AMP session:

Context

  1. (slide 3, 04:18-08:18) Start with a demo, on my dual-GPU machine. I've been using the N-Body sample.
  2. (slide 4) Use an nvidia slide that has additional examples of performance improvements that customers enjoy with heterogeneous computing.
  3. (slide 5) Talk a bit about the differences today between CPU and GPU hardware, leading to the fact that these will continue to co-exist and that GPUs are great for data parallel algorithms, but not much else today. One is a jack of all trades and the other is a number cruncher.
  4. (slide 6) Use the APU example from amd, as one indication that the hardware space is still in motion, emphasizing that the C++ AMP solution is a data parallel API, not a GPU API. It has a future-proof design for hardware we have yet to see.
  5. (slide 7) Provide more meta-data, as blogged about when I first introduced C++ AMP.

Code

  1. (slide 9-11) Introduce C++ AMP coding with a simplistic array-addition algorithm – the slides speak for themselves.
  2. (slide 12-13) index, and extent (Beta note: the old slide also refers to a grid class, which we removed in favor of just extent)
  3. (Slide 14-16) array, array_view and comparison between them.
  4. (Slide 17) parallel_for_each.
  5. (slide 18, 21) restrict.
  6. (slide 19-20) actual restrictions of restrict(amp) – the slides speak for themselves. (Beta note: the slide refers to restrict(direct3d), which is now restrict(amp))
  7. (slide 22) bring it all together with a matrix multiplication example.
  8. (slide 23-24) accelerator, and accelerator_view.
  9. (slide 26-29) Introduce tiling incl. tiled matrix multiplication [tiling probably deserves a whole session instead of 6 minutes!].

IDE

  1. (slide 34,37) Briefly touch on the concurrency visualizer. It supports GPU profiling, but enhancements specific to C++ AMP come in the Beta timeframe.
  2. (slide 35-36, 51:54-59:16) Demonstrate the GPU debugging experience in VS 11.

Summary

  1. (slide 39) Re-iterate some of the points of slide 7, and add the point that C++ AMP is an open specification.
  2. (slide 40) Links to content – see slide – including where all your questions should go: http://social.msdn.microsoft.com/Forums/en/parallelcppnative/threads.

Slides for similar presentation updated for Beta

The BUILD recording and slides are valid for the VS 11 Beta and beyond, with regards to C++ AMP - so watch the session and download those slides. Additionally, if you are going to repeat the session, I have updated the slides including some tweaks and you can download the updated deck here (note the slide numbers above do not map exactly to the new deck).

"But I don't have time for a full blown session, I only need 2 (or just 1, or 3) C++ AMP slides to use in my session on related topic X"

If all you want is a small number of slides, you can take some from the session above and customize them. But because I am so nice, I have created some slides for you, including talking points in the notes section. Download them here.


GPU Debugging with VS 11

Tue, September 20, 2011, 07:21 PM under GPGPU | ParallelComputing | VisualStudio

BELOW IS OUTDATED INFORMATION, PLEASE SEE MY UPDATED POST ON OUR TEAM BLOG:

http://blogs.msdn.com/b/nativeconcurrency/archive/2012/03/17/start-gpu-debugging-in-visual-studio-11.aspx

---

With VS 11 Developer Preview we have invested tremendously in parallel debugging, for both the CPU (managed and native) and the GPU. I'll be doing a whole bunch of blog posts on those topics, and in this post I just wanted to get people started with GPU debugging, i.e. with debugging C++ AMP code.

First I invite you to watch 6 minutes of a glimpse of the C++ AMP debugging experience through this video (ffw to minute 51:54, up until minute 59:16). Don't read the rest of this post, just go watch that video, ideally downloading the High Quality WMV.

Summary

GPU debugging essentially means debugging the lambda that you pass to the parallel_for_each call (plus any functions you call from the lambda, of course). CPU debugging means debugging all the code above and below the parallel_for_each call, i.e. all the code except the restrict(direct3d) lambda and the functions that it calls. With VS 11 you have to choose what debugger you want to use for a particular debugging session, CPU or GPU. So you can place breakpoints all over your code, then choose what debugger you want (CPU or GPU), and you'll only be able to hit breakpoints for the code type that the debugger engine understands – the remaining breakpoints will appear as unbound. If you want to hit the unbound breakpoints, you'd have to stop debugging, and start again with the other debugger. Sorry. We suck. We know. But once you are past that limitation, I think you'll find the experience truly rewarding – seriously!

Switching debugger engines

With the Developer Preview bits, one way to switch the debugger engine is through the project properties – see the screenshots that follow.

This one is showing the CPU option selected, which is basically the default that you are all familiar with:

[screenshot: project properties with the CPU debugging option selected]

This screenshot is showing the GPU option selected, by changing the debugger launcher (notice that this applies for both the local and remote case):

[screenshot: project properties with the GPU debugging option selected]

You actually do not have to open the project properties just for switching the debugger engine; you can switch the selection from the toolbar in VS 11 Developer Preview too – see the following screenshot (the effect is the same as if you opened the project properties and switched there).

[screenshot: debugger type selection from the toolbar]

Breakpoint behavior

Here are two screenshots, one showing a debugging session for the CPU and the other a debugging session for the GPU (notice the unbound breakpoints in each case).

[screenshot: CPU debugging session with the GPU breakpoint unbound]

…and here is the GPU case (where we cannot bind the CPU breakpoints but can bind the GPU breakpoint, which is actually hit).

[screenshot: GPU debugging session with the CPU breakpoints unbound]

Give C++ AMP debugging a try

So to debug your C++ AMP code, pull down the drop-down under the 'play' button to select the 'GPU C++ Direct3D Compute Debugger' menu option, then hit F5 (or the 'play' button itself). Then you can explore debugging through the menus under Debug and under Debug->Windows. One way to do that exploration is through the C++ AMP debugging walkthrough on MSDN.

Another way to explore the C++ AMP debugging experience is to use the moth.cpp code file, which is what I used in my BUILD session debugger demo. Note that for my demo I was using the latest internal VS 11 bits, so your experience with the Developer Preview bits won't be identical to what you saw me demonstrate, but it shouldn't be far off.

Stay tuned for a lot more content on the parallel debugger in VS 11, both CPU and GPU, both managed and native.


Running C++ AMP kernels on the CPU

Mon, September 19, 2011, 07:32 PM under GPGPU | ParallelComputing

One of the FAQs we receive is whether C++ AMP can be used to target the CPU.

For targeting multi-core we have a technology we released with VS2010 called PPL, which has had enhancements for VS 11 – that is what you should be using! FYI, it also has a Linux implementation via Intel's TBB which conforms to the same interface.

When you choose to use C++ AMP, you choose to take advantage of massively parallel hardware, through accelerators like the GPU.

Having said that, you can always use the accelerator class to check if you are running on a system where there is no hardware with a DirectX 11 driver, and decide what alternative code path you wish to follow.
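
As a rough illustration of that check (a sketch of my own, not code from this post), you could look for at least one non-emulated accelerator and otherwise fall back to a CPU (e.g. PPL) code path:

  #include <amp.h>

  // Returns true if at least one real (non-emulated) DX11 accelerator exists;
  // WARP and the Ref emulator report is_emulated == true, so they are skipped.
  bool dx11_hardware_present()
  {
    for (const concurrency::accelerator& acc : concurrency::accelerator::get_all())
    {
      if (!acc.is_emulated)
        return true;
    }
    return false;
  }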

In fact, if you do nothing in code and the runtime does not find DX11 hardware to run your code on, it will choose the WARP accelerator, which will run your code on the CPU, taking advantage of multi-core and SSE2 (depending on the CPU capabilities, WARP also uses SSE3 and SSE4.1 – it does not currently use AVX, and on such systems you hopefully have a DX11 GPU anyway).

A few things to know about WARP

  • It is our fallback CPU solution, not intended as a primary target of C++ AMP.
  • WARP stands for Windows Advanced Rasterization Platform and you can read old info on this MSDN page on WARP.
  • What is new in Windows 8 Developer Preview is that WARP now supports DirectCompute, which is what C++ AMP builds on.
  • It is not currently clear if we will have a CPU fallback solution for non-Windows 8 platforms when we ship.
  • When you create a WARP accelerator, its is_emulated property returns true.
  • WARP does not currently support double precision.

 

BTW, when we refer to WARP, we refer to the accelerator described above. If we use lower case "warp", that refers to a bunch of threads that run concurrently in lock step and share the same instruction. In the VS 11 Developer Preview, the size of a warp in our Ref emulator is 4 – Ref is another emulator that runs on the CPU, but it is extremely slow and not intended for production, just for debugging.


Links to C++ AMP and other content

Fri, September 16, 2011, 10:00 PM under GPGPU | Links | ParallelComputing | VisualStudio | Windows

A few links you may be interested in.


BUILD apps that use C++ AMP

Tue, September 13, 2011, 08:28 PM under Events | GPGPU | ParallelComputing

If you are a developer on the Microsoft platform, you are hopefully attending (live or virtually) the sessions of the BUILD conference, aka //build/ in Anaheim, CA. The conference sold out not long after it opened registration, and it achieved that without sharing *any* session details or a meaningful agenda up until after the keynote today – impressive!

I am speaking at BUILD and hope you'll catch my talk at 9am on Friday (the last day of the conference) at Marriott Elite 2 Ballroom. Session details follow.

802 - Taming GPU compute with C++ AMP

Developers today inject parallelism into their compute-intensive applications in order to take advantage of multi-core CPU hardware. Beyond CPUs, however, compute accelerators such as general-purpose GPUs can provide orders of magnitude speed-ups for data parallel algorithms. How can you as a C++ developer fully utilize this heterogeneous hardware from your Visual Studio environment?  How can you benefit from this tremendous performance boost in your Visual C++ solutions without sacrificing developer productivity?  The answers will be presented in this session about C++ Accelerated Massive Parallelism.

I'll be covering a lot of the material I've recently been blogging about here on the blog you are reading, which I have also indexed over on our team blog under the title: "C++ AMP in a nutshell".


tile_static, tile_barrier, and tiled matrix multiplication with C++ AMP

Sun, September 11, 2011, 06:23 PM under GPGPU | ParallelComputing

We ended the previous post with a mechanical transformation of the C++ AMP matrix multiplication example to the tiled model and in the process introduced tiled_index and tiled_extent. This is part 2.

tile_static memory

You all know that in regular CPU code, static variables have the same value regardless of which thread accesses the static variable. This is in contrast with non-static local variables, where each thread has its own copy.

Back to C++ AMP, the same rules apply and each thread has its own value for local variables in your lambda, whereas all threads see the same global memory, which is the data they have access to via the array and array_view.

In addition, on an accelerator like the GPU, there is a programmable cache, a third kind of memory type if you'd like to think of it that way (some call it shared memory, others call it scratchpad memory). Variables stored in that memory share the same value for every thread in the same tile. So, when you use the tiled model, you can have variables where each thread in the same tile sees the same value for that variable, while threads from other tiles do not. The new storage class for local variables introduced for this purpose is called tile_static. You can only use tile_static in restrict(amp) functions, and only when explicitly using the tiled model. What this looks like in code should be no surprise, but here is a snippet to confirm your mental image, using a good old regular C array:

  // each tile of threads has its own copy of locA,
  // shared among the threads of the tile
  tile_static float locA[16][16]; 

Note that tile_static variables are scoped and have the lifetime of the tile, and they cannot have constructors or destructors.

tile_barrier

In amp.h one of the types introduced is tile_barrier. You cannot construct this object yourself (although if you had one, you could use a copy constructor to create another one). So how do you get one of these? You get it from a tiled_index object. Beyond the 4 properties returning index objects, tiled_index has another property, barrier, that returns a tile_barrier object. The tile_barrier class exposes the method wait (and other wait overloads).

15:  // Given a tiled_index object named t_idx
16:  t_idx.barrier.wait();

17:  // more code

…in the code above, all threads in the tile will reach line 16 before a single one progresses to line 17. Note that all threads must be able to reach the barrier, i.e. if you had branchy code written in such a way that there was a chance that not all threads could reach line 16, then the code above would be illegal.

Tiled Matrix Multiplication Example – part 2

So now that we have added to our understanding the concepts of tile_static and tile_barrier, let me obfuscate (I mean, rewrite) the matrix multiplication code so that it takes advantage of tile_static memory. Before you start reading this, I suggest you get a cup of your favorite non-alcoholic beverage to enjoy while you try to fully understand the code.

01: void MatrixMultiplyTiled(vector<float>& vC, 
         const vector<float>& vA, 
         const vector<float>& vB, int M, int N, int W)
02: {
03:   static const int TS = 16;

04:   array_view<const float,2> a(M, W, vA);
05:   array_view<const float,2> b(W, N, vB);
06:   array_view<float,2> c(M,N,vC); c.discard_data();

07:   parallel_for_each(c.extent.tile< TS, TS >(),
08:   [=] (tiled_index< TS, TS> t_idx) restrict(amp) 
09:   {
10:     int row = t_idx.local[0]; int col = t_idx.local[1];
11:     float sum = 0.0f;

12:     for (int i = 0; i < W; i += TS) {
13:        tile_static float locA[TS][TS], locB[TS][TS];
14:        locA[row][col] = a(t_idx.global[0], col + i);
15:        locB[row][col] = b(row + i, t_idx.global[1]);
16:        t_idx.barrier.wait();

17:        for (int k = 0; k < TS; k++)
18:          sum += locA[row][k] * locB[k][col];

19:        t_idx.barrier.wait();
20:     }

21:     c[t_idx.global] = sum;
22:   });
23: }

Notice that all the code up to line 9 is the same as per the changes we made in part 1 of the tiling introduction. If you squint, the body of the lambda itself preserves the original algorithm on lines 10, 11, and 17, 18, and 21. The difference is that those lines use new indexing and the tile_static arrays; the tile_static arrays are declared and initialized on the brand new lines 13-15. On those lines we copy from the global memory represented by the array_view objects (a and b) to the tile_static vanilla arrays (locA and locB) – we are copying enough to fit a tile. Because in the code that follows on line 18 we expect the data for this tile to be in the tile_static storage, we need to synchronize the threads within each tile with a barrier, which we do on line 16 (to avoid accessing uninitialized memory on line 18). We also need to synchronize the threads within a tile on line 19, again to avoid the race between lines 14, 15 (retrieving the next set of data for each tile and overwriting the previous set) and line 18 (not being done processing the previous set of data). Luckily, as part of the awesome C++ AMP debugger in Visual Studio there is an option that helps you find such races, but that is a story for another blog post, another time.

May I suggest reading the next section, and then coming back to re-read and walk through this code with pen and paper to really grok what is going on, if you haven't already? Cool.

Why would I introduce this tiling complexity into my code?

Funny you should ask that, I was just about to tell you. There is only one reason we tiled our extent, had to find a good tile size, had to ensure the number of threads we schedule is divisible by the tile size, had to use a tiled_index instead of a normal index, had to understand tile_barrier and figure out where we need to use it, and doubled the size of our lambda in terms of lines of code: the reason is to be able to use tile_static memory.

Why do we want to use tile_static memory? Because accessing tile_static memory is around 10 times faster than accessing the global memory on an accelerator like the GPU, e.g. in the code above, if you can get 150GB/second accessing data from the array_view a, you can get 1500GB/second accessing the tile_static array locA. And since by definition you are dealing with really large data sets, the savings really pay off. We have seen tiled implementations being twice as fast as their non-tiled counterparts.

Now, some algorithms will not have performance benefits from tiling (and in fact may deteriorate), e.g. algorithms that require you to go only once to global memory may not benefit from tiling, since with tiling you still have to fetch the data once from global memory! Other algorithms may benefit, but you may decide that you are happy with your code being 50 times faster than the serial version you had, and you do not need to invest to make it 100 times faster. Also, algorithms with more than 3 dimensions, which C++ AMP supports in the simple model, cannot be tiled.

Also note that in future releases, we may invest in making the non-tiled model, which already uses tiling under the covers, go the extra step and use tile_static memory on your behalf, but it is obviously way too early to commit to anything like that, and we certainly don't do any of that today.


Scheduling thread tiles with C++ AMP

Sat, September 10, 2011, 08:30 PM under GPGPU | ParallelComputing

This post assumes you are totally comfortable with what some of us call the simple model of C++ AMP, i.e. you could write your own matrix multiplication. We are now ready to explore the tiled model, which builds on top of the non-tiled one.

Tiling the extent

We know that when we pass an extent to the parallel_for_each call, it determines the number of threads to schedule and their index values (including dimensionality). For the single-, two-, and three-dimensional cases you can go a step further and subdivide the threads into what we call tiles of threads (others may call them thread groups).

So here is a single-dimensional example:

  extent<1> e(20);   // 20 units in a single dimension with indices from 0-19
  tiled_extent<4> te = e.tile<4>();

…on the 2nd line we subdivided the single-dimensional space into 5 single-dimensional tiles each having 4 elements, and we captured that result in a concurrency::tiled_extent (a new class in amp.h).

Let's move on swiftly to another example, in pictures, this time 2-dimensional:

[picture: a 2-dimensional extent of 8*6=48 threads, shown tiled in two different ways]

So we start on the left with a 2-dimensional extent which has 8*6=48 threads. We then have two different examples of tiling. In the first case, in the middle, we subdivide the 48 threads into tiles where each has 4*3=12 threads, hence we have 2*2=4 tiles. In the second example, on the right, we subdivide the original input into tiles where each has 2*2=4 threads, hence we have 4*3=12 tiles. Notice how you can play with the tile size and achieve a different number of tiles. The numbers you pick must be such that the original total number of threads (in our example 48) remains the same, and every tile must have the same size.
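
In code, the two tilings in the picture could be expressed like this (a sketch of my own, following the single-dimensional example above):

  extent<2> e(8, 6);                        // 8*6 = 48 threads
  tiled_extent<4, 3> te1 = e.tile<4, 3>();  // 2*2 = 4 tiles of 4*3 = 12 threads each
  tiled_extent<2, 2> te2 = e.tile<2, 2>();  // 4*3 = 12 tiles of 2*2 = 4 threads each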

Of course, you still have no clue why you would do that, but stick with me. First, we should see how we can use this tiled_extent, since the parallel_for_each function that we know expects an extent.

Tiled parallel_for_each and tiled_index

It turns out that we have additional overloads of parallel_for_each that accept a tiled_extent instead of an extent. However, those overloads also expect that the lambda you pass in accepts a concurrency::tiled_index (new in amp.h), not an index<N>. So how is a tiled_index different from an index?

A tiled_index object can have only 1 or 2 or 3 dimensions (matching exactly the tiled_extent), and consists of 4 index objects that are accessible via properties: global, local, tile_origin, and tile. The global index is the same as the index we know and love: the global thread ID. The local index is the local thread ID within the tile. The tile_origin index returns the global index of the thread that is at position 0,0 of this tile, and the tile index is the position of the tile in relation to the overall grid. Confused? Here is an example accompanied by a picture that hopefully clarifies things:

  array_view<int, 2> data(8, 6, p_my_data);
  parallel_for_each(data.extent.tile<2,2>(), [=](tiled_index<2,2> t_idx) restrict(amp) { ... });

Given the code above and the picture on the right, what are the values of each of the 4 index objects that the t_idx variable exposes, when the lambda is executed by T (highlighted in the picture on the right)?

If you can't work it out yourselves, the solution follows:

  • t_idx.global      = index<2> (6,3)
  • t_idx.local       = index<2> (0,1)
  • t_idx.tile_origin = index<2> (6,2)
  • t_idx.tile        = index<2> (3,1)

Don't move on until you are comfortable with this… the picture really helps, so use it.

Tiled Matrix Multiplication Example – part 1

Let's paste here the C++ AMP matrix multiplication example, bolding the lines we are going to change (can you guess what the changes will be?)

01:  void MatrixMultiplyTiled_Part1(vector<float>& vC, 
         const vector<float>& vA, 
         const vector<float>& vB, int M, int N, int W)
02:  {
03:
04:    array_view<const float,2> a(M, W, vA);
05:    array_view<const float,2> b(W, N, vB);
06:    array_view<float,2> c(M, N, vC); c.discard_data();

07:    parallel_for_each(c.extent, 
08:    [=](index<2> idx) restrict(amp) {
09:
10:      int row = idx[0]; int col = idx[1];
11:      float sum = 0.0f;
12:      for(int i = 0; i < W; i++)
13:        sum += a(row, i) * b(i, col);
14:      c[idx] = sum;
15:    });
16:  }

To turn this into a tiled example, first we need to decide our tile size. Let's say we want each tile to be 16*16 (which assumes that we'll have at least 256 threads to process, and that c.extent.size() is divisible by 256, and moreover that c.extent[0] and c.extent[1] are divisible by 16). So we insert at line 03 the tile size (which must be a compile time constant).

03: static const int TS = 16;

...then we need to tile the extent to have tiles where each one has 16*16 threads, so we change line 07 to be as follows

07: parallel_for_each(c.extent.tile<TS,TS>(),

...that means that our index now has to be a tiled_index with the same characteristics as the tiled_extent, so we change line 08

08: [=](tiled_index<TS, TS> t_idx) restrict(amp) {

...which means, without changing our core algorithm, we need to be using the global index that the tiled_index gives us access to, so we insert line 09 as follows

09: index<2> idx = t_idx.global;

...and now this code just works and it is tiled!

Closing thoughts on part 1

The process we followed just shows the mechanical transformation that can take place from the simple model to the tiled model (think of this as step 1). In fact, when we wrote the matrix multiplication example originally, the compiler was doing this mechanical transformation under the covers for us (and it has additional smarts to deal with the cases where the total number of threads scheduled is not divisible by the tile size). The point is that the thread scheduling is always tiled, even when you use the non-explicitly-tiled model.

But with this mechanical transformation, we haven't gained anything… Hint: our goal with explicitly using the tiled model is to gain even more performance.

In the next post, we'll evolve this further (beyond what the compiler can automatically do for us, in this first release), so you can see the full usage of the tiled model and its benefits…


Matrix Multiplication with C++ AMP

Fri, September 9, 2011, 08:22 PM under GPGPU | ParallelComputing

As part of our API tour of C++ AMP, we looked recently at parallel_for_each. I ended that post by saying we would revisit parallel_for_each after introducing array and array_view. Now is the time, so this is part 2 of parallel_for_each, and also a post that brings together everything we've seen until now.

The code for serial and accelerated

Consider a naïve (or brute force) serial implementation of matrix multiplication 

0:  void MatrixMultiplySerial(std::vector<float>& vC, 
        const std::vector<float>& vA, 
        const std::vector<float>& vB, int M, int N, int W)
1:  {
2:    for (int row = 0; row < M; row++) 
3:    {
4:      for (int col = 0; col < N; col++)
5:      {
6:        float sum = 0.0f;
7:        for(int i = 0; i < W; i++)
8:          sum += vA[row * W + i] * vB[i * N + col];
9:        vC[row * N + col] = sum;
10:     }
11:   }
12: }

We notice that each loop iteration is independent of the others and so can be parallelized. If in addition we have really large amounts of data, then this is a good candidate to offload to an accelerator. First, I'll just show you an example of what that code may look like with C++ AMP, and then we'll analyze it. It is assumed that you have included at the top of your file #include <amp.h>.

13:  void MatrixMultiplySimple(std::vector<float>& vC, 
         const std::vector<float>& vA, 
         const std::vector<float>& vB, int M, int N, int W)
14:  {
15:    concurrency::array_view<const float,2> a(M, W, vA);
16:    concurrency::array_view<const float,2> b(W, N, vB);
17:    concurrency::array_view<float,2> c(M, N, vC); c.discard_data();
18:    concurrency::parallel_for_each(c.extent, 
19:    [=](concurrency::index<2> idx) restrict(amp) {
20:      int row = idx[0]; int col = idx[1];
21:      float sum = 0.0f;
22:      for(int i = 0; i < W; i++)
23:        sum += a(row, i) * b(i, col);
24:      c[idx] = sum;
25:    });
26:  }

First a visual comparison, just for fun: The beginning and end is the same, i.e. lines 0,1,12 are identical to lines 13,14,26. The double nested loop (lines 2,3,4,5 and 10,11) has been transformed into a parallel_for_each call (18,19,20 and 25). The core algorithm (lines 6,7,8,9) is essentially the same (lines 21,22,23,24). We have extra lines in the C++ AMP version (15,16,17). Now let's dig in deeper.

Using array_view and extent

When we decided to convert this function to run on an accelerator, we knew we couldn't use the std::vector objects in the restrict(amp) function. So we had a choice of copying the data to the concurrency::array<T,N> object, or wrapping the vector container (and hence its data) with a concurrency::array_view<T,N> object from amp.h – here we used the latter (lines 15,16,17). Now we can access the same data through the array_view objects (a and b) instead of the vector objects (vA and vB), and the added benefit is that we can capture the array_view objects in the lambda (lines 19-25) that we pass to the parallel_for_each call (line 18) and the data will get copied on demand for us to the accelerator.

Note that line 15 (and ditto for 16 and 17) could have been written as two lines instead of one:

  extent<2> e(M, W);
  array_view<const float, 2> a(e, vA);

In other words, we could have explicitly created the extent object instead of letting the array_view create it for us under the covers through the constructor overload we chose. The benefit of the extent object in this instance is that we can express that the data is indeed two dimensional, i.e. a matrix. When we were using a vector object we could not do that, and instead we had to track the dimensions of the matrix via additional unrelated variables (i.e. the integers M and W) – aren't you loving C++ AMP already?

Note that the const before the float when creating a and b will result in the underlying data only being copied to the accelerator and not copied back – a nice optimization. A similar thing is happening on line 17 when creating array_view c, where we have indicated that we do not need to copy the data to the accelerator, through the discard_data call.

The kernel dispatch

On line 18 we make the call to the C++ AMP entry point (parallel_for_each) to invoke our parallel loop or, as some may say, dispatch our kernel.

The first argument we need to pass describes how many threads we want for this computation. For this algorithm we decided that we want exactly the same number of threads as the number of elements in the output matrix, i.e. in array_view c which will eventually update the vector vC. So each thread will compute exactly one result. Since the elements in c are organized in a 2-dimensional manner we can organize our threads in a two-dimensional manner too. We don't have to think too much about how to create the first argument (an extent) since the array_view object helpfully exposes that as a property. Note that instead of c.extent we could have written extent<2>(M, N) – the result is the same in that we have specified M*N threads to execute our lambda.

The second argument is a restrict(amp) lambda that accepts an index object. Since we elected to use a two-dimensional extent as the first argument of parallel_for_each, the index will also be two-dimensional and as covered in the previous posts it represents the thread ID, which in our case maps perfectly to the index of each element in the resulting array_view.

The kernel itself

The lambda body (lines 20-24), or as some may say, the kernel, is the code that will actually execute on the accelerator. It will be called by M*N threads and we can use those threads to index into the two input array_views (a, b) and write results into the output array_view (c).

The four lines (21-24) are essentially identical to the four lines of the serial algorithm (6-9). The only difference is how we index into a,b,c versus how we index into vA,vB,vC. The code we wrote with C++ AMP is much nicer in its indexing, because the dimensionality is a first class concept, so you don't have to do funny arithmetic calculating the index of where the next row starts, which you have to do when working with vectors directly (since they store all the data in a flat manner).

I skipped over describing line 20. Note that we didn't really need to read the two components of the index into temporary local variables. This mostly reflects my personal choice, in some algorithms, to break down the index into local variables with names that make sense for the algorithm, i.e. in this case row and col. In other cases it may be i,j,k, or x,y,z, or M,N, or whatever. Also note that we could have written line 24 as: c(idx[0], idx[1])=sum  or  c(row, col)=sum instead of the simpler c[idx]=sum

Targeting a specific accelerator

Imagine that we had more than one hardware accelerator on a system and we wanted to pick a specific one to execute this parallel loop on. So there would be some code like this anywhere before line 18:

  vector<accelerator> accs = MyFunctionThatChoosesSuitableAccelerators();
  accelerator acc = accs[0];

…and then we would modify line 18 so we would be calling another overload of parallel_for_each that accepts an accelerator_view as the first argument, so it would become:

  concurrency::parallel_for_each(acc.default_view, c.extent,

...and the rest of your code remains the same… how simple is that?


array and array_view from amp.h

Thu, September 8, 2011, 12:02 AM under GPGPU | ParallelComputing

This is a very long post, but it also covers what are probably the classes (well, array_view at least) that you will use the most with C++ AMP, so I hope you enjoy it!

Overview

The concurrency::array and concurrency::array_view template classes represent multi-dimensional data of type T, of N dimensions, specified at compile time (and you can later access the number of dimensions via the rank property). If N is not specified, it is assumed that it is 1 (i.e. the single-dimensional case). They are rectangular (not jagged).

The difference between them is that array is a container of data, whereas array_view is a wrapper of a container of data. So in that respect, array behaves like an STL container, whereas the closest thing an array_view behaves like is an STL iterator (albeit with random access and allowing you to view more than one element at a time!).

The data in the array (whether provided at creation time or added later) resides on an accelerator (which is specified at creation time either explicitly by the developer, or set to the default accelerator at creation time by the runtime) and is laid out contiguously in memory. The data provided to the array_view is not stored by/in the array_view, because the array_view is simply a view over the real source (which can reside on the CPU or other accelerator). The underlying data is copied on demand to wherever the array_view is accessed. Elements which differ by one in the least significant dimension of the array_view are adjacent in memory.

array objects must be captured by reference into the lambda you pass to the parallel_for_each call, whereas array_view objects must be captured by value (into the lambda you pass to the parallel_for_each call). After you are done reading this post, feel free to visit another post dedicated to capturing data.

Creating array and array_view objects and relevant properties

You can create array_view objects from other array_view objects of the same rank and element type (shallow copy, also possible via assignment operator) so they point to the same underlying data, and you can also create array_view objects over array objects of the same rank and element type e.g.

  array_view<int,3> a(b); // b can be another array or array_view of ints with rank=3

Note: Unlike the constructors above which can be called anywhere, the ones in the rest of this section can only be called from CPU code.

You can create array objects from other array objects of the same rank and element type (copy and move constructors) and from other array_view objects, e.g.

  array<float,2> a(b); // b can be another array or array_view of floats with rank=2

To create an array from scratch, you need to at least specify an extent object, e.g. array<int,3> a(myExtent);. Note that instead of an explicit extent object, there are convenience overloads when N<=3 so you can specify 1, 2, or 3 integers (depending on the array's rank) and thus have the extent created for you under the covers. At any point, you can access the array's extent through the extent property. The exact same thing applies to array_view (extent as constructor parameters, incl. convenience overloads, and property).
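
As a tiny illustrative sketch of those creation options for the array case (the variable names are mine):

  extent<3> myExtent(2, 3, 4);
  array<int, 3> a(myExtent);    // explicit extent object
  array<int, 3> b(2, 3, 4);     // convenience overload: the extent is created for you
  extent<3> shape = b.extent;   // read the shape back via the extent property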

While passing only an extent object to create an array is enough (it means that the array will be written to later), it is not enough for the array_view case, which must always wrap over some other container (on which it relies for storage space and actual content). So in addition to the extent object (that describes the shape you'd like to be viewing/accessing that data through), to create an array_view from another container (e.g. std::vector) you must pass in the container itself (which must expose .data() and .size() methods, e.g. like std::array does), e.g.

  array_view<int,2> aaa(myExtent, myContainerOfInts);

Similarly, you can create an array_view from a raw pointer of data plus an extent object.

Back to the array case, to optionally initialize the array with data, you can pass an iterator pointing to the start (and optionally one pointing to the end of the source container) e.g.

  array<double,1> a(5, myVector.begin(), myVector.end());

We saw that arrays are bound to an accelerator at creation time, so in case you don’t want the C++ AMP runtime to assign the array to the default accelerator, all array constructors have overloads that let you pass an accelerator_view object, which you can later access via the accelerator_view property.

Note that at the point of initializing an array with data, a synchronous copy of the data takes place to the accelerator, and then to copy any data back we'll see that an explicit copy call is required. This does not happen with the array_view where copying is on demand...
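
Here is a small sketch of my own contrasting the two behaviors (it assumes a populated std::vector<float> named source; the kernels themselves are elided):

  #include <amp.h>
  #include <vector>
  using namespace concurrency;

  void array_vs_array_view(std::vector<float>& source)
  {
    extent<1> e(static_cast<int>(source.size()));

    array<float, 1> arr(e, source.begin()); // synchronous copy to the accelerator
    // ... parallel_for_each writing to arr (captured by reference) ...
    copy(arr, source.begin());              // explicit copy back to the CPU

    array_view<float, 1> av(e, source);     // no copy yet; data moves on demand
    // ... parallel_for_each writing to av (captured by value) ...
    av.synchronize();                       // ensure the results are back in source
  }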

refresh and synchronize on array_view

Note that in the previous section on constructors, unlike the array case, there was no overload that accepted an accelerator_view for array_view. That is because the array_view is simply a wrapper, so the allocation of the data has already taken place before you created the array_view. When you capture an array_view variable in your call to parallel_for_each, the copy of data between the non-CPU accelerator and the CPU takes place on demand (i.e. it is implicit, versus the explicit copy that has to happen with the array). There are some subtleties to the on-demand copying that we cover next.

The assumption when using an array_view is that you will continue to access the data through the array_view, and not through the original underlying source, e.g. the pointer to the data that you passed to the array_view's constructor. So if you modify the data through the array_view on the GPU, the original pointer on the CPU will not "know" that, unless one of two things happen:

  • you access the data through the array_view on the CPU side, i.e. using indexing that we cover below
  • you explicitly call the array_view's synchronize method on the CPU (this also gets called in the array_view's destructor for you)

Conversely, if you make a change to the underlying data through the original source (e.g. the pointer), the array_view will not "know" about those changes, unless you call its refresh method.

Finally, note that if you create an array_view of const T, then the data is copied to the accelerator on demand, but it does not get copied back, e.g.

  array_view<const double, 5> myArrView(…); // myArrView will not get copied back from GPU

There is also a similar mechanism to achieve the reverse, i.e. not to copy the data of an array_view to the GPU.

copy_to, data, and global copy/copy_async functions

Both array and array_view expose two copy_to overloads that allow copying them to another array, or to another array_view, and these operations can also be achieved with assignment (via the = operator overloads).

Also both array and array_view expose a data method, to get a raw pointer to the underlying data of the array or array_view, e.g. float* f = myArr.data();. Note that for array_view, this only works when the rank is equal to 1, due to the data only being contiguous in one dimension as covered in the overview section.

Finally, there are a bunch of global concurrency::copy functions returning void (and corresponding concurrency::copy_async functions returning a future) that allow copying between arrays and array_views and iterators etc.
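
As a quick sketch of the asynchronous flavor (my own snippet, not from the post), copy_async kicks off the copy and hands back a future-like object you can wait on:

  #include <amp.h>
  using namespace concurrency;

  void copy_example()
  {
    array<float, 1> src(1024);
    array<float, 1> dst(1024);

    auto f = copy_async(src, dst);  // returns immediately
    // ... do other CPU work here ...
    f.wait();                       // block until the copy has completed
  }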

Note that for array, all copying described throughout this post is deep copying, as per other STL container expectations. You can never have two arrays point to the same data.

indexing into array and array_view plus projection

Reading or writing data elements of an array is only legal when the code executes on the same accelerator as where the array was bound to. In the array_view case, you can read/write on any accelerator, not just the one where the original data resides, and the data gets copied for you on demand. In both cases, the way you read and write individual elements is via indexing as described next.

To access (or set the value of) an element, you can index into it by passing it an index object via the subscript operator. Furthermore, if the rank is 3 or less, you can use the function call () operator to pass integer values instead of having to use an index object, e.g.

  array<float,2> arr(someExtent, someIterator); //or array_view<float,2> arr(someExtent, someContainer);
  index<2> idx(5,4);
  float f1 = arr[idx];
  float f2 = arr(5,4); //f2 ==f1
  //and the reverse for assigning, e.g.
  arr(idx[0], 7) = 6.9;

Note that for both array and array_view, regardless of rank, you can also pass a single integer to the subscript operator which results in a projection of the data, and (for both array and array_view) you get back an array_view of rank N-1 (or if the rank was 1, you get back just the element at that location).
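
For example, projection could look like this (a sketch of my own; myData is an assumed std::vector<float> holding 4*8 elements):

  array_view<float, 2> matrix(4, 8, myData);
  array_view<float, 1> row2 = matrix[2]; // a single integer: rank drops from 2 to 1
  float f = row2[5];                     // the same element as matrix(2, 5)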

Not Covered

In this already very long post, I am not going to cover three very cool methods (and related overloads) that both array and array_view expose: view_as, section, reinterpret_as. We'll revisit those at some point in the future, probably on the team blog.


parallel_for_each from amp.h – part 1

Tue, September 6, 2011, 07:52 PM under GPGPU | ParallelComputing

This post assumes that you've read my other C++ AMP posts on index<N> and extent<N>, as well as about the restrict modifier. It also assumes you are familiar with C++ lambdas (if not, follow my links to C++ documentation).

Basic structure and parameters

Now we are ready for part 1 of the description of the new overload for the concurrency::parallel_for_each function. The basic new parallel_for_each method signature returns void and accepts two parameters:

  • an extent<N>
  • a restrict(amp) lambda, whose signature is such that it returns void and accepts an index of the same rank as the extent

So it looks something like this (with generous returns for more palatable formatting) assuming we are dealing with a 2-dimensional space:

  // some_code_A
  parallel_for_each(
    e, // e  is of type extent<2>
    [ ](index<2> idx) restrict(amp)
    {
      // kernel code
    }
  );
  // some_code_B

The parallel_for_each will execute the body of the lambda (which must have the restrict modifier) on the GPU. We also call the lambda body the "kernel". The kernel will be executed multiple times, once per scheduled GPU thread. The only difference in each execution is the value of the index object (aka the GPU thread ID in this context) that gets passed to your kernel code. The number of GPU threads (and the values of each index) is determined by the extent object you pass, as described next.

In this context, one way to think about it is that the extent generates a number of index objects. So for the example above, if your extent was setup by some_code_A as follows:

  extent<2> e(2,3);

...then given that: e.size()==6, e[0]==2, and e[1]==3

...the six index<2> objects it generates (and hence the values that your lambda would receive) are:

   (0,0) (1,0) (0,1) (1,1) (0,2) (1,2)

So what the above means is that the lambda body with the algorithm that you wrote will get executed 6 times and the index<2> object you receive each time will have one of the values just listed above (of course, each one will only appear once, the order is indeterminate, and they are likely to call your code at the same exact time). Obviously, in real GPU programming, you'd typically be scheduling thousands if not millions of threads, not just 6.

If you've been following along you should be thinking: "that is all fine and makes sense, but what can I do in the kernel since I passed nothing else meaningful to it, and it is not returning any values out to me?"

Passing data in and out

It is a good question, and indeed in data parallel algorithms you typically want to pass some data in, perform some operation, and then return some results out. The way you pass data into the kernel is by capturing variables in the lambda (again, if you are not familiar with them, follow the links about C++ lambdas), and the way you use data after the kernel is done executing is simply by using those same variables.

In the example above, the lambda was written in a fairly useless way with an empty capture list: [ ](index<2> idx) restrict(amp), where the empty square brackets means that no variables were captured.

If instead I write it like this [&](index<2> idx) restrict(amp), then all variables in the some_code_A region are made available to the lambda by reference, but as soon as I try to use any of those variables in the lambda, I will receive a compiler error. This has to do with one of the amp restrictions, where essentially only one type can be captured by reference: objects of the new concurrency::array class that I'll introduce in the next post (suffice for now to think of it as a container of data).

If I write the lambda line like this [=](index<2> idx) restrict(amp), all variables in the some_code_A region are made available to the lambda by value. This works for some types (e.g. an integer), but not for all, as per the restrictions for amp. In particular, no useful data classes work except for one new type we introduce with C++ AMP: objects of the new concurrency::array_view class, that I'll also introduce in the next post. Also note that if you capture some variable by value, you could use it as input to your algorithm, but you wouldn’t be able to observe changes to it after the parallel_for_each call (e.g. in some_code_B region since it was passed by value) – the exception to this rule is the array_view since (as we'll see in a future post) it is a wrapper for data, not a container.

Finally, for completeness, you can write your lambda, e.g. like this [av, &ar](index<2> idx) restrict(amp) where av is a variable of type array_view and ar is a variable of type array - the point being you can be very specific about what variables you capture and how.

So it looks like from a large data perspective you can only capture array and array_view objects in the lambda (that is how you pass data to your kernel) and then use the many threads that call your code (each with a unique index) to perform some operation. You can also capture some limited types by value, as input only. When the last thread completes execution of your lambda, the data in the array_view or array are ready to be used in the some_code_B region. We'll talk more about all this in future posts…
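To make this concrete, here is a minimal sketch (assuming <amp.h> and <vector> are included, using namespace concurrency, and an element count n; the names v and av are illustrative) of an array_view captured by value carrying data into the kernel and results back out:

  std::vector<int> v(n, 1);              // CPU-side data, every element set to 1
  array_view<int, 1> av(n, v);           // wrap the data so the lambda can capture it by value
  parallel_for_each(extent<1>(n), [=](index<1> idx) restrict(amp)
  {
    av[idx] += 1;                        // each GPU thread updates one element
  });
  int first = av[0];                     // use the results back in some_code_B (first == 2)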

(a)synchronous

Please note that the parallel_for_each executes as if synchronous to the calling code, but in reality, it is asynchronous. I.e. once the parallel_for_each call is made and the kernel has been passed to the runtime, the some_code_B region continues to execute immediately by the CPU thread, while in parallel the kernel is executed by the GPU threads. However, if you try to access the (array or array_view) data that you captured in the lambda in the some_code_B region, your code will block until the results become available. Hence the correct statement: the parallel_for_each is as-if synchronous in terms of visible side-effects, but asynchronous in reality.
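A small sketch of that behavior, continuing with the illustrative av and n from above (DoOtherCpuWork stands in for whatever some_code_B does):

  parallel_for_each(extent<1>(n), [=](index<1> idx) restrict(amp)
  {
    av[idx] *= 2;                        // the kernel executes on the GPU threads
  });
  DoOtherCpuWork();                      // some_code_B keeps running on the CPU in the meantime
  int r = av[0];                         // this access blocks until the kernel's results are available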

 

That's all for now, we'll revisit the parallel_for_each description, once we introduce properly array and array_view – coming next.


concurrency::extent from amp.h

Mon, September 5, 2011, 06:23 PM under GPGPU | ParallelComputing

Overview

We saw in a previous post how index<N> represents a point in N-dimensional space and in this post we'll see how to define the N-dimensional space itself.

With C++ AMP, an N-dimensional space can be specified with the template class extent<N> where you define the size of each dimension.

From a look and feel perspective, you'd expect the programmatic interface of a point type and size type to be similar (even though the concepts are different). Indeed, exactly like index<N>, extent<N> is essentially a coordinate vector of N integers ordered from most- to least- significant, BUT each integer represents the size for that dimension (and hence cannot be negative).

So, if you read the description of index, you won't be surprised with the below description of extent<N>

  • There is the rank field returning the value of N you passed as the template parameter.
  • You can construct one extent from another (via the copy constructor or the assignment operator), you can construct it by passing an integer array, or via convenience constructor overloads for 1- 2- and 3- dimension extents. Note that the parameterless constructor creates an extent of the specified rank with all bounds initialized to 0.
  • You can access the components of the extent through the subscript operator (passing it an integer).
  • You can perform some comparison and arithmetic operations between extent objects through operator overloading, i.e. ==, !=, +=, -=, +, -.
  • There are operator overloads so that you can perform operations between an extent and an integer: -- (pre- and post- decrement), ++ (pre- and post- increment), %=, *=, /=, +=, -=, and, finally, there are additional overloads for plus and minus (+, -) between extent<N> and index<N> objects, returning a new extent object as the result.

In addition to the usual suspects, extent offers a contains function that tests if an index is within the bounds of the extent (assuming an origin of zero). It also has a size function that returns the total linear size of this extent<N> in units of elements.

Example code

  extent<2> e(3, 4);
  _ASSERT(e.rank == 2);
  _ASSERT(e.size() == 3 * 4);
  e += 3;
  e[1] += 6;
  e = e + index<2>(3,-4);
  _ASSERT(e == extent<2>(9, 9));
  _ASSERT( e.contains(index<2>(8, 8)));
  _ASSERT(!e.contains(index<2>(8, 9)));

 

Usage

The extent class on its own simply defines the size of the N-dimensional space. We'll see in future posts that when you create containers (arrays) and wrappers (array_views) for your data, it is an extent<N> object that you'll need to use to create those (and an index<N> object to index into them). We'll also see that it is an extent<N> object that you pass to the new parallel_for_each function that I'll cover in the next post.


concurrency::index from amp.h

Sun, September 4, 2011, 09:40 PM under GPGPU | ParallelComputing

Overview

C++ AMP introduces a new template class index<N>, where N can be any value greater than zero, that represents a unique point in N-dimensional space, e.g. if N=2 then an index<2> object represents a point in 2-dimensional space. This class is essentially a coordinate vector of N integers representing a position in space relative to the origin of that space. It is ordered from most-significant to least-significant (so, if the 2-dimensional space is rows and columns, the first component represents the rows). The underlying type is a signed 32-bit integer, and component values can be negative.

The rank field returns N.

Creating an index

The default parameterless constructor returns an index with each dimension set to zero, e.g.

  index<3> idx; //represents point (0,0,0)

An index can also be created from another index through the copy constructor or assignment, e.g.

  index<3> idx2(idx); //or index<3> idx2 = idx;

To create an index representing something other than 0, you call its constructor as per the following 4-dimensional example:

  int temp[4] = {2,4,-2,0};
  index<4> idx(temp);

Note that there are convenience constructors (that don’t require an array argument) for creating index objects of rank 1, 2, and 3, since those are the most common dimensions used, e.g.

  index<1> idx(3);
  index<2> idx(3, 6);
  index<3> idx(3, 6, 12);

Accessing the component values

You can access each component using the familiar subscript operator, e.g.

One-dimensional example:

  index<1> idx(4);
  int i = idx[0]; // i=4

Two-dimensional example:

  index<2> idx(4,5);
  int i = idx[0]; // i=4
  int j = idx[1]; // j=5

Three-dimensional example:

  index<3> idx(4,5,6);
  int i = idx[0]; // i=4
  int j = idx[1]; // j=5
  int k = idx[2]; // k=6

Basic operations

Once you have your multi-dimensional point represented in the index, you can now treat it as a single entity, including performing common operations between it and an integer (through operator overloading): -- (pre- and post- decrement), ++ (pre- and post- increment), %=, *=, /=, +=, -=, %, *, /, +, -. There are also operator overloads for operations between index objects, i.e. ==, !=, +=, -=, +, -.

Here is an example (where no assertions are broken):

  index<2> idx_a;
  index<2> idx_b(0, 0);
  index<2> idx_c(6, 9);
  _ASSERT(idx_a.rank == 2);
  _ASSERT(idx_a == idx_b);
  _ASSERT(idx_a != idx_c);

  idx_a += 5;
  idx_a[1] += 3;
  idx_a++;
  _ASSERT(idx_a != idx_b);
  _ASSERT(idx_a == idx_c);

  idx_b = idx_b + 10;
  idx_b -= index<2>(4, 1);
  _ASSERT(idx_a == idx_b);

Usage

You'll most commonly use index<N> objects to index into data types that we'll cover in future posts (namely array and array_view). Also when we look at the new parallel_for_each function we'll see that an index<N> object is the single parameter to the lambda, representing the (multi-dimensional) thread index…

In the next post we'll go beyond being able to represent an N-dimensional point in space, and we'll see how to define the N-dimensional space itself through the extent<N> class.


concurrency::accelerator_view

Sat, September 3, 2011, 08:32 PM under GPGPU | ParallelComputing

Overview

We saw previously that accelerator represents a target for our C++ AMP computation or memory allocation and that there is a notion of a default accelerator. We ended that post by introducing how one can obtain accelerator_view objects from an accelerator object through the accelerator class's default_view property and the create_view method.

The accelerator_view objects can be thought of as handles to an accelerator.

You can also construct an accelerator_view given another accelerator_view (through the copy constructor or the assignment operator overload). Speaking of operator overloading, you can also compare (for equality and inequality) two accelerator_view objects between them to determine if they refer to the same underlying accelerator.

We'll see later that when we use concurrency::array objects, the allocation of data takes place on an accelerator at array construction time, so there is a constructor overload that accepts an accelerator_view object. We'll also see later that a new concurrency::parallel_for_each function overload can take an accelerator_view object, so it knows on what target to execute the computation (represented by a lambda that the parallel_for_each also accepts).
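As a forward-looking sketch of that usage (array, extent, index and parallel_for_each are covered in their own posts, and the names here are illustrative):

  accelerator_view av = accelerator().create_view();    // a view onto the default accelerator
  array<float, 1> data(1024, av);                        // the data gets allocated on that accelerator
  parallel_for_each(av, extent<1>(1024), [&data](index<1> idx) restrict(amp)
  {
    data[idx] = 0.0f;                                    // the computation executes against the chosen view
  });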

Beyond normal usage, accelerator_view is a quality of service concept that offers isolation to multiple "consumers" of an accelerator. If in your code you are accessing the accelerator from multiple threads (or, in general, from different parts of your app), then you'll want to create separate accelerator_view objects for each thread.

flush, wait, and queuing_mode

When you create an accelerator_view via the create_view method of the accelerator, you pass in an option of queuing_mode_immediate or queuing_mode_automatic, which are the two members of the queuing_mode enum. At any point you can access this value from the queuing_mode property of the accelerator_view.

When the queuing_mode value is queuing_mode_automatic (which is the default), any commands sent to the device such as kernel invocations and data transfers (e.g. parallel_for_each and copy, as we'll see in future posts) will get submitted as soon as the runtime sees fit (that is the definition of automatic).

When the value of queuing_mode is queuing_mode_immediate, the commands will be submitted/flushed immediately.

To send all buffered commands to the device for execution, there is a non-blocking flush method that you can call. If you wish to block until all the commands have been sent, there is a wait method you can call (which also flushes). You can read more to understand C++ AMP's queuing_mode.
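Putting those pieces together, here is a minimal sketch (whether you pick queuing_mode_automatic or queuing_mode_immediate, and whether you need explicit flush/wait calls at all, depends on your app):

  accelerator acc;                                       // the default accelerator
  accelerator_view av = acc.create_view(queuing_mode_automatic);
  // ... queue up work against av (e.g. kernel invocations and copies) ...
  av.flush();                                            // non-blocking: send any buffered commands to the device
  av.wait();                                             // blocking: also flushes, then waits for the queued commands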

Querying information

Just like accelerator, accelerator_view exposes the is_debug and version properties. In fact, you can always access the accelerator object from the accelerator property on the accelerator_view class to access the accelerator interface we looked at previously.

The accelerator_view class also exposes a function that helps you stay aware of the progress of execution. You can read more about accelerator_view::create_marker.

Interop with D3D (aka DX)

If your app uses C++ AMP to compute data and also uses DirectX rendering shaders, e.g. pixel shaders, you can benefit by integrating C++ AMP into your graphics pipeline. One of the building blocks for that is being able to use the same device context from both the compute kernel and the other shaders. You can do that by going from accelerator_view to device context (and vice versa), through part of our interop API in amp.h: get_device, create_accelerator_view. You can read more on DirectX interop.


concurrency::accelerator

Wed, August 31, 2011, 07:12 PM under GPGPU | ParallelComputing

Overview

An accelerator represents a "target" on which C++ AMP code can execute and where data can reside. Typically (but not necessarily) an accelerator is a GPU device. Accelerators are represented in C++ AMP as objects of the accelerator class.

For many scenarios, you do not need to obtain an accelerator object, since the runtime has a notion of a default accelerator, which is what it thinks is the best one in the system. Examples where you need to deal with accelerator objects are if you need to pick your own accelerator (based on your specific criteria), or if you need to use more than one accelerator from your app.

Construction and operator usage

You can query and obtain a std::vector of all the accelerators on your system, which the runtime discovers on startup.

Beyond enumerating accelerators, you can also create one directly by passing to the constructor a system-wide unique path to a device if you know it (i.e. the “Device Instance Path” property for the device in Device Manager), e.g. accelerator acc(L"PCI\\VEN_1002&DEV_6898&SUBSYS_0B001002etc");

There are some predefined strings (for predefined accelerators) that you can pass to the accelerator constructor (and there are corresponding constants for those on the accelerator class itself, so you don’t have to hardcode them every time). Examples are the following:

  • accelerator::default_accelerator represents the default accelerator that the C++ AMP runtime picks for you if you don’t pick one (the heuristics of how it picks one will be covered in a future post). Example: accelerator acc;
  • accelerator::direct3d_ref represents the reference rasterizer emulator that simulates a direct3d device on the CPU (in a very slow manner). This emulator is available on systems with Visual Studio installed and is useful for debugging. More on debugging in general in future posts. Example: accelerator acc(accelerator::direct3d_ref);
  • accelerator::direct3d_warp represents WARP which is the current CPU fallback. Example: accelerator acc(accelerator::direct3d_warp);
  • accelerator::cpu_accelerator represents the CPU. In this first release the only use of this accelerator is for using the staging arrays technique. Example: accelerator acc(accelerator::cpu_accelerator);

You can also create an accelerator by shallow copying another accelerator instance (via the corresponding constructor) or simply assigning it to another accelerator instance (via the operator overloading of =). Speaking of operator overloading, you can also compare (for equality and inequality) two accelerator objects between them to determine if they refer to the same underlying device.

Querying accelerator characteristics

Given an accelerator object, you can access its description, version, device path, size of dedicated memory in KB, whether it is some kind of emulator, whether it has a display attached, whether it supports double precision, and whether it was created with the debugging layer enabled for extensive error reporting.

Below is example code that accesses some of the properties; in your real code you'd probably be checking one or more of them in order to pick an accelerator (or check that the default one is good enough for your specific workload):

  vector<accelerator> accs = accelerator::get_all(); 
  std::for_each(accs.begin(), accs.end(), [] (accelerator acc) 
  { 
    std::wcout << "New accelerator: " << acc.description << std::endl; 
    std::wcout << "device_path = " << acc.device_path << std::endl; 
    std::wcout << "version = " << (acc.version >> 16) << '.' << (acc.version & 0xFFFF) << std::endl; 
    std::wcout << "dedicated_memory = " << acc.dedicated_memory << " KB" << std::endl; 
    std::wcout << "doubles = " << ((acc.supports_double_precision) ? "true" : "false") << std::endl; 
    std::wcout << "limited_doubles = " << ((acc.supports_limited_double_precision) ? "true" : "false") << std::endl; 
    std::wcout << "has_display = " << ((acc.has_display) ? "true" : "false") << std::endl;
    std::wcout << "is_emulated = " << ((acc.is_emulated) ? "true" : "false") << std::endl; 
    std::wcout << "is_debug = " << ((acc.is_debug) ? "true" : "false") << std::endl; 
    std::cout << std::endl; 
  }); 
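For example, here is a minimal sketch of such a selection (the criteria are purely illustrative; your real code would apply whatever matters to your workload):

  accelerator chosen;                                    // starts off as the default accelerator
  for (const accelerator& a : accelerator::get_all())
  {
    if (!a.is_emulated && a.supports_double_precision)   // e.g. require real hardware with double support
    {
      chosen = a;
      break;
    }
  }

You could then use chosen (or its default_view, covered in the next post) wherever an accelerator target is needed.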

accelerator_view

In my next blog post I'll cover a related class: accelerator_view. Suffice to say here that each accelerator may have from 1..n related accelerator_view objects. You can get the accelerator_view from an accelerator via the default_view property, or create new ones by invoking the create_view method that creates an accelerator_view object for you (by also accepting a queuing_mode enum value of queuing_mode_automatic or queuing_mode_immediate that we'll also explore in the next blog post).


"Hello World" in C++ AMP

Sun, June 26, 2011, 06:02 PM under GPGPU | ParallelComputing

UPDATE: I encourage you to visit a newer and better post with a C++ AMP matrix multiplication example.

Some say that the equivalent of "hello world" code in the data parallel world is matrix multiplication :)

Below is the before C++ AMP and after C++ AMP code. For more on what it all means, watch the recording of my C++ AMP introduction (the example below is part of the session).

    void MatrixMultiply(vector<float>& vC, 
                        const vector<float>& vA,
                        const vector<float>& vB, 
                        int M, int N, int W)
    {
        for (int y = 0; y < M; y++) 
        {
            for (int x = 0; x < N; x++) 
            {
                float sum = 0;
                for (int i = 0; i < W; i++)
                {
                    sum += vA[y * W + i] * vB[i * N + x];
                }
                vC[y * N + x] = sum;
            }
        }
    }

Change the function to use C++ AMP, and hence offload the computation to the GPU; the calling code (which I am not showing) needs no changes, and the overall operation gives you a really nice speedup for large datasets… Note that the listing below uses the syntax as it stood at the June 2011 announcement (e.g. restrict(direct3d) and grid); the later posts above (and the UPDATE link) reflect how the syntax evolved (e.g. restrict(amp) and extent).

    #include <amp.h>
    using namespace concurrency;

    void MatrixMultiply(vector<float>& vC, 
                        const vector<float>& vA,
                        const vector<float>& vB, 
                        int M, int N, int W)
    {
        array_view<const float,2>      a(M, W, vA);
        array_view<const float,2>      b(W, N, vB);
        array_view<writeonly<float>,2> c(M, N, vC); 

        parallel_for_each(
            c.grid,
            [=](index<2> idx) mutable restrict(direct3d) 
            {
                float sum = 0;
                for(int i = 0; i < a.x; i++) 
                {
                    sum += a(idx.y, i) * b(i, idx.x);
                }
                c[idx] = sum;
            }
        );
    }

Again, you can understand the elements above by using my C++ AMP presentation slides and recording.

Stay tuned for more…


C++ AMP recording and slides

Fri, June 17, 2011, 02:51 PM under Events | GPGPU | ParallelComputing

Yesterday we announced C++ Accelerated Massive Parallelism.

Many of you want to know more about the API instead of just meta information. I will trickle more code over the coming months leading up to the date when we will share actual bits. Until you have bits in your hand, it is only your curiosity that is blocked, so I ask you to be patient with that and allow me to release this on our own schedule ;-)

You can now watch my 45-minute session introducing C++ AMP on channel9. You will also want to download the slides (pdf), because they are not readable in the recording.


C++ Accelerated Massive Parallelism

Wed, June 15, 2011, 09:16 AM under GPGPU | ParallelComputing

At AMD's Fusion conference Herb Sutter announced in his keynote session a technology that our team has been working on that we call C++ Accelerated Massive Parallelism (C++ AMP), and during the keynote I showed a brief demo of an app built with our technology. After the keynote, I went deeper into the technology in my breakout session. If you read both those abstracts, you'll get some information about what C++ AMP is, though they are not too explicit since we published the abstracts before the technology was announced.

You can find the official online announcement at Soma's blog post.

Here, I just wanted to capture the key points about C++ AMP that can serve as an introduction and an FAQ. So, in no particular order…

C++ AMP

  1. lowers the barrier to entry for heterogeneous hardware programmability and brings performance to the mainstream, without sacrificing developer productivity or solution portability.
  2. is designed not only to help you address today's massively parallel hardware (i.e. GPUs and APUs), but it also future-proofs your code investments with a forward-looking design.
  3. is part of Visual C++. You don't need to use a different compiler or learn different syntax.
  4. is modern C++. Not C or some other derivative.
  5. is integrated and supported fully in Visual Studio 11. Editing, building, debugging, profiling and all the other goodness of Visual Studio work well with C++ AMP.
  6. provides an STL-like library as part of the existing concurrency namespace and delivered in the new amp.h header file.
  7. makes it extremely easy to work with large multi-dimensional data on heterogeneous hardware, in a manner that exposes parallelization.
  8. introduces only one core C++ language extension.
  9. builds on DirectX (and DirectCompute in particular) which offers a great hardware abstraction layer that is ubiquitous and reliable. The architecture is such that this point can be thought of as an implementation detail that does not surface to the API layer.

Stay tuned on my blog for more over the coming months where I will switch from just talking about C++ AMP to showing you how to use the API with code examples…


DirectCompute Lectures

Sun, August 29, 2010, 10:34 PM under GPGPU | ParallelComputing

Previously I shared resources to get you started with DirectCompute, for taking advantage of GPGPUs in a way that doesn't tie you to a hardware vendor (e.g. nvidia, amd).

I just stumbled upon and had to share a lecture series on channel9 on DirectCompute!

Here are direct links to the episodes that are up there now:

  1. DirectCompute Expert Roundtable Discussion
  2. DirectCompute Lecture Series 101- Introduction to DirectCompute
  3. DirectCompute Lecture Series 110- Memory Patterns
  4. DirectCompute Lecture Series 120- Basics of DirectCompute Application Development
  5. DirectCompute Lecture Series 210- GPU Optimizations and Performance
  6. DirectCompute Lecture Series 230- GPU Accelerated Physics
  7. DirectCompute Lecture Series 250- Integration with the Graphics Pipeline

Having watched these I recommend them all, but if you only want to watch a few, I suggest #2, #3, #4 and #5. Also, you should download the "WMV (High)" so you can see the code clearly and be able to Ctrl+Shift+G for fast playback…

TIP: To subscribe to channel9 GPU content, use this RSS feed.


Are you a GPGPU developer? Participate in our UX study

Sat, July 31, 2010, 02:12 PM under GPGPU | ParallelComputing | UserInterfaceDesign

You know that I work on the parallel debugger in Visual Studio, that I've talked about GPGPU before, and that I have also mentioned UX. Below is a request from my UX colleagues that pulls all of it together.

If you write and debug parallel code that uses GPUs for non-graphical, computationally intensive operations keep reading. The Microsoft Visual Studio Parallel Computing team is seeking developers for a 90-minute research study. The study will take place via LiveMeeting or at a usability lab in Redmond, depending on your preference.

We will walk you through an example of debugging GPGPU code in Visual Studio with you giving us step-by-step feedback. ("Is this what you would expect?", "Are we showing you the things that would help you?", "How would you improve this?")

The walkthrough utilizes a “paper” version of our current design. After the walkthrough, we will show you some additional design ideas and seek your input on various design tradeoffs.

Are you interested or know someone who might be a good fit? Let us know at this address: devur@microsoft.com. Those who participate (and those who referred them) will receive a gratuity item from a list of current Microsoft products.


DirectCompute

Fri, February 19, 2010, 04:00 PM under ParallelComputing | GPGPU

In my previous blog post I introduced the concept of GPGPU, ending with:

On Windows, there is already a cross-GPU-vendor way of programming GPUs and that is the DirectX API. Specifically, on Windows Vista and Windows 7, the DirectX 11 API offers a dedicated subset of the API for GPGPU programming: DirectCompute. You use this API on the CPU side, to set up and execute the kernels on the GPU. The kernels are written in a language called HLSL (High Level Shader Language). You can use DirectCompute with HLSL to write a "compute shader", which is the term DirectX uses for what I've been referring to in this post as "kernel".

In this post I want to share some links to get you started with DirectCompute and HLSL.

1. Watch the recording of the PDC 09 session: DirectX11 DirectCompute.

2. If session recordings are your thing, there are two more on DirectCompute from nvidia's GTC09 conference: 1015 (pdf, mp4) and 1411 (mp4 plus the presenter's written version of the session).

3. Over at gamedev there is an old Compute Shader tutorial. At the same site, there is a 3-part blog post on Compute Shader: Introduction, Resources and Addressing.

4. From PDC, you can also download the DirectCompute Hands On Lab.

5. When you are ready to get your hands even dirtier, download the latest Windows DirectX SDK (at the time of writing the latest is dated Feb 2010).

6. Within the SDK you'll find a Compute Shader Overview and samples such as: Basic, Sort, OIT, NBodyGravity, HDR Tone Mapping.

7. Talking of DX11/DirectCompute samples, there are also a couple of good ones on this URL.

8. The documentation of the various APIs is available online. Here are just some good (but far from complete) taster entry points into that: numthreads, SV_DispatchThreadID, SV_GroupThreadID, SV_GroupID, SV_GroupIndex, D3D11CreateDevice, D3DX11CompileFromFile, CreateComputeShader, Dispatch, D3D11_BIND_FLAG, GSSetShader.
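To give a feel for how those host-side entry points fit together, below is a heavily simplified sketch (error handling, buffer creation and resource binding omitted; the file name, entry point and thread-group counts are illustrative):

  #include <d3d11.h>
  #include <d3dx11.h>

  void RunComputeShader()
  {
    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* context = nullptr;
    D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                      nullptr, 0, D3D11_SDK_VERSION, &device, nullptr, &context);

    // Compile the HLSL compute shader from file ("main" entry point, compute shader 5.0 profile)
    ID3DBlob* csBlob = nullptr;
    D3DX11CompileFromFile(L"MyKernel.hlsl", nullptr, nullptr, "main", "cs_5_0",
                          0, 0, nullptr, &csBlob, nullptr, nullptr);

    ID3D11ComputeShader* cs = nullptr;
    device->CreateComputeShader(csBlob->GetBufferPointer(), csBlob->GetBufferSize(),
                                nullptr, &cs);

    // Bind the compute shader (real code would also bind its buffers/views) and launch the thread groups
    context->CSSetShader(cs, nullptr, 0);
    context->Dispatch(16, 16, 1);
  }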

GPGPU

Fri, February 19, 2010, 03:58 PM under ParallelComputing | GPGPU

What

GPU obviously stands for Graphics Processing Unit (the silicon powering the display you are using to read this blog post). The extra GP in front of that stands for General Purpose computing.

So, altogether GPGPU refers to computing we can perform on GPU for purposes beyond just drawing on the screen. In effect, we can use a GPGPU a bit like we already use a CPU: to perform some calculation (that doesn’t have to have any visual element to it). The attraction is that a GPGPU can be orders of magnitude faster than a CPU.

Why

When I was at the SuperComputing conference in Portland last November, GPGPUs were all the rage. A quick online search reveals many articles introducing the GPGPU topic. I'll just share 3 here: pcper (ignoring all pages except the first, it is a good consumer perspective), gizmodo (nice take using mostly layman terms) and vizworld (answering the question on "what's the big deal").

The GPGPU programming paradigm (from a high level) is simple: in your CPU program you define functions (aka kernels) that take some input, perform the costly operation, and return the output. The kernels are the things that execute on the GPGPU, leveraging its power (and hence executing faster than they could on the CPU), while the host CPU program waits for the results or asynchronously performs other tasks.

However, GPGPUs have different characteristics to CPUs which means they are suitable only for certain classes of problem (i.e. data parallel algorithms) and not for others (e.g. algorithms with branching or recursion or other complex flow control). You also pay a high cost for transferring the input data from the CPU to the GPU (and vice versa the results back to the CPU), so the computation itself has to be long enough to justify the overhead transfer costs. If your problem space fits the criteria then you probably want to check out this technology.

How

So where can you get a graphics card to start playing with all this? At the time of writing, the two main vendors ATI (owned by AMD) and NVIDIA are the obvious players in this industry. You can read about GPGPU on this AMD page and also on this NVIDIA page. NVIDIA's website also has a free chapter on the topic from the "GPU Gems" book: A Toolkit for Computation on GPUs.

If you followed the links above, then you've already come across some of the choices of programming models that are available today. Essentially, AMD is offering their ATI Stream technology accessible via a language they call Brook+; NVIDIA offers their CUDA platform which is accessible from CUDA C. Choosing either of those locks you into the GPU vendor and hence your code cannot run on systems with cards from the other vendor (e.g. imagine if your CPU code would run on Intel chips but not AMD chips). Having said that, both vendors plan to support a new emerging standard called OpenCL, which theoretically means your kernels can execute on any GPU that supports it. To learn more about all of these there is a website: gpgpu.org. The caveat about that site is that (currently) it completely ignores the Microsoft approach, which I touch on next.

On Windows, there is already a cross-GPU-vendor way of programming GPUs and that is the DirectX API. Specifically, on Windows Vista and Windows 7, the DirectX 11 API offers a dedicated subset of the API for GPGPU programming: DirectCompute. You use this API on the CPU side, to set up and execute the kernels that run on the GPU. The kernels are written in a language called HLSL (High Level Shader Language). You can use DirectCompute with HLSL to write a "compute shader", which is the term DirectX uses for what I've been referring to in this post as a "kernel". For a comprehensive collection of links about this (including tutorials, videos and samples) please see my blog post: DirectCompute.

Note that there are many efforts to build even higher level languages on top of DirectX that aim to expose GPGPU programming to a wider audience by making it as easy as today's mainstream programming models. I'll mention here just two of those efforts: Accelerator from MSR and Brahma by Ananth.