/ani/mrses

To get this branch, use:
bzr branch http://suren.me/webbzr/ani/mrses
Configuration
=============
 1. For CELL, set MAX_PPU to 0 and leave MAX_SPU undefined;
 the PPUs are too slow to be used (see the sketch below).
 2. For x86 the Intel Math Kernel Library is fine; for the
 PPU, GotoBLAS is best (but still too slow). The reference
 implementations are a bit slower if compiled with a recent
 gcc.
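
A minimal sketch of that configuration, assuming the limits are plain
preprocessor macros in a project config header (the header name and the
exact macro semantics are assumptions, not taken from the source):

    /* config.h -- hypothetical configuration header (names assumed) */

    /* On CELL: do not schedule work on the PPUs, they are too slow. */
    #define MAX_PPU 0

    /* Leave MAX_SPU undefined, assuming all available SPUs are then used. */
    /* #define MAX_SPU 6 */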


Expectations
============
 1. SPUs are limited by the local store (local memory), which is
 only 256 KB. The application uses width * (nA + nB) (plus
 alignment corrections) for the data buffer and some amount of
 temporary buffers, whose size depends mainly on width (a worked
 example follows this list).
 2. The number of properties should be greater than width ;)
 3. Pointers between the PPU and the SPUs are transferred as
 32-bit integers. For safety it is therefore better to compile
 the PPU application as a 32-bit binary (see the second sketch
 below).
 4. Calls to mrses_iterate with NULL and non-NULL ires should
 not be mixed.
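
A rough back-of-the-envelope illustration of the local-store budget
from item 1. The element type, the 128-byte alignment and the
temporary-buffer term are assumptions made for the sake of the
example; the real layout may differ:

    /* localstore_budget.c -- hypothetical estimate of SPU local
       store usage; element size, alignment and the temporary-buffer
       term are assumed, not taken from the source. */
    #include <stdio.h>

    #define LOCAL_STORE (256 * 1024)    /* 256 KB per SPU             */
    #define ALIGN       128             /* DMA-friendly, assumed      */

    static size_t align_up(size_t n)
    {
        return (n + ALIGN - 1) & ~(size_t)(ALIGN - 1);
    }

    int main(void)
    {
        size_t width = 16;              /* example subset width       */
        size_t nA = 1024, nB = 1024;    /* example sample counts      */
        size_t elem = sizeof(float);    /* assumed element type       */

        /* width * (nA + nB) elements for the data buffer,
           rounded up for alignment. */
        size_t data = align_up(width * (nA + nB) * elem);

        /* Temporary buffers grow mainly with width; crude guess.    */
        size_t temp = align_up(8 * width * width * elem);

        printf("data %zu + temp %zu = %zu of %d bytes\n",
               data, temp, data + temp, LOCAL_STORE);
        return 0;
    }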
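
And a minimal sketch of the pointer-passing caveat from item 3,
assuming the PPU side hands addresses to the SPUs as plain 32-bit
integers; send_to_spu() is a hypothetical placeholder for the real
transfer mechanism:

    #include <stdint.h>
    #include <assert.h>

    extern void send_to_spu(uint32_t ea);   /* hypothetical */

    void pass_buffer(void *buf)
    {
        /* Only safe when the PPU binary is 32 bit; in a 64-bit
           build the upper half of the address would be lost. */
        assert(sizeof(void *) == sizeof(uint32_t));
        send_to_spu((uint32_t)(uintptr_t)buf);
    }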

ToDo
====
 1. SPUs have 128 registers. I have used these registers for the
 matrix multiplication, but it would be nice to optimize the
 Cholesky decomposition, etc. in the same way.
 
 2. The vectorizations used for SPU can be migrated to PPU
 and Intel architecture.
 
 3. The SPU is dual issue: memory accesses and arithmetic
 operations can be performed in parallel if properly aligned
 (no code reordering is done by the SPU itself).
 
 4. DMA is asynchronous; interleaving computations and memory
 transfers would make the transfer time negligible (see the
 double-buffering sketch at the end of this list).
 
 5. It is not clear why the PPUs are 10 times slower than Intel
 at the same clock speed: whether by design or because something
 is completely wrong. The 256 KB cache should be no problem.
 
 6. If the last question is resolved, it would be nice to move
 the histogram computation to the SPE.
 
 7. On a hyper-threading server the computation per thread is
 approximately 2 times slower (the total throughput is still OK).
 Even if you decrease the number of used PPUs, it is still
 slower. Somehow the processes are not bound to a certain core
 but migrate here and there, which probably causes the
 slowdowns... Needs more investigation overall.
  
 8. Replace matrix multiplication with vector-to-matrix 
 multiplication in PPE.
 
 9. Somehow interleave operations in iterate mode when ires
 is supplied.
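
For item 4, the usual way to interleave DMA and computation on an SPU
is double buffering. The sketch below is not taken from the source:
CHUNK, nchunks and process() are hypothetical, and only the mfc_*
calls are the standard spu_mfcio.h interface.

    /* Double-buffered DMA on the SPU: fetch chunk i+1 while
       computing on chunk i. */
    #include <spu_mfcio.h>

    #define CHUNK 16384                 /* bytes per transfer (assumed) */

    static volatile char buf[2][CHUNK] __attribute__((aligned(128)));

    extern void process(volatile char *data, unsigned size);  /* hypothetical */

    void stream(unsigned long long ea, unsigned nchunks)
    {
        unsigned cur = 0;

        /* Prefetch the first chunk, using the buffer index as DMA tag. */
        mfc_get(buf[cur], ea, CHUNK, cur, 0, 0);

        for (unsigned i = 0; i < nchunks; i++) {
            unsigned next = cur ^ 1;

            /* Kick off the transfer of the next chunk ... */
            if (i + 1 < nchunks)
                mfc_get(buf[next],
                        ea + (unsigned long long)(i + 1) * CHUNK,
                        CHUNK, next, 0, 0);

            /* ... wait only for the current chunk, then compute on it. */
            mfc_write_tag_mask(1 << cur);
            mfc_read_tag_status_all();
            process(buf[cur], CHUNK);

            cur = next;
        }
    }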