Conversation
I think we also need to clean up the description of the problem to highlight the transposing of the indices, and to update the CPU implementations so they are consistent with the GPU versions.
Thanks for getting this started @MrBurmark. I took a follow-up pass, but I think I could use a second set of eyes on the Kernel implementation. Would anyone in @LLNL/raja-core be able to take a look?
… bugfix/burmark1/transpose
@artv3 I think you've taken over this PR. There are some compilation issues that are causing the CI checks to fail. Can you get to them, or should I try to find time to fix them? I assume we want this in the release.
I can take a look, but I don't think this should hold up the release.
@MrBurmark @artv3 I fixed the compilation errors in this, but the results are wrong.
The code generates the wrong results. Either the code is wrong or the check result method is wrong. I haven't dug into this.
There are also other changes that could improve this and remove some confusion. For example, the printResult() routine assumes that the matrix passed in is At, but the first call in the code (which can be commented out) passes A. I would print which matrix is being printed in a statement before each printResult() call and make the print method not name the matrix.
I think we also have some inconsistent expressions in the CPU RAJA::kernel examples. That is one thing I need to revisit.
If I can find a few spare minutes, I will work on it, but I can't promise I will get to it.
@LLNL/raja-core this may need another review.
```cpp
});

RAJA::loop_icount<loop_pol_2>(ctx, col_tile, [&] (int col, int tx) {
RAJA::loop_icount<loop_pol_2>(ctx, row_tile, [&] (int row, int ty) {
RAJA::loop_icount<loop_pol_2>(ctx, col_tile, [&] (int row_t, int ty) {
```
Do you want to add the sync for all of the policy cases?
Summary
Fix the CUDA and HIP matrix transpose tutorial: fix spacing, add proper synchronization, and map threads properly in the teams implementation.