Looking through the official NVIDIA CUDA documentation, I found that it covers the clock-cycle cost of each memory type in detail.
Off-chip memory such as local memory and device memory (off the GPU chip itself, even though it sits on the graphics card) takes 400~800 clock cycles per access, while
on-chip memory such as shared memory and registers (on the GPU chip) can be accessed in about 10~20 clock cycles.
For constant memory, from compute capability 2.0 onward, the first access costs 400~800 cycles, but once it is cached, reads take only about 10~20 cycles.
Put differently, for data that is read very frequently (more than about 40 times, going by the 400:10 cycle ratio?),
it looks like staging it into shared memory and reading it from there will always win for overall execution time.
(If the data also has to be written back, the break-even point might rise to 80 accesses or so?)
[Source: CUDA_C_Programming_Guide.pdf]
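To make the staging idea concrete, here is a minimal sketch of the pattern (my illustration, not from the guide; the stageAndReuse kernel, the TILE and REUSE sizes, and the coeffs table are all hypothetical): each thread pays the 400~800 cycle off-chip cost once, and every subsequent read is served on-chip from shared memory or the constant cache.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

#define TILE  256   // threads per block (hypothetical size)
#define REUSE  64   // how many times each staged value gets read

// Hypothetical coefficient table in constant memory: the first read is an
// off-chip access, later reads hit the constant cache (~10-20 cycles).
__constant__ float coeffs[REUSE];

__global__ void stageAndReuse(const float *in, float *out, int n)
{
    __shared__ float tile[TILE];   // on-chip, ~10-20 cycle access
    int gid = blockIdx.x * blockDim.x + threadIdx.x;

    // One off-chip read per thread (~400-800 cycles) to stage the data.
    tile[threadIdx.x] = (gid < n) ? in[gid] : 0.0f;
    __syncthreads();

    if (gid >= n) return;

    // REUSE reads per thread, all served from shared memory instead of
    // going back out to device memory each time.
    float acc = 0.0f;
    for (int i = 0; i < REUSE; ++i)
        acc += tile[(threadIdx.x + i) % TILE] * coeffs[i];

    out[gid] = acc;   // one off-chip write
}

int main()
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float hCoeffs[REUSE];
    for (int i = 0; i < REUSE; ++i) hCoeffs[i] = 1.0f / (i + 1);

    float *dIn, *dOut;
    cudaMalloc(&dIn, bytes);
    cudaMalloc(&dOut, bytes);
    cudaMemset(dIn, 0, bytes);
    cudaMemcpyToSymbol(coeffs, hCoeffs, sizeof(hCoeffs));

    stageAndReuse<<<(n + TILE - 1) / TILE, TILE>>>(dIn, dOut, n);
    cudaDeviceSynchronize();
    printf("kernel status: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaFree(dIn);
    cudaFree(dOut);
    return 0;
}
```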
5.2 Maximize Utilization

5.2.3 Multiprocessor Level

If all input operands are registers, latency is caused by register dependencies, i.e. some of the input operands are written by some previous instruction(s) whose execution has not completed yet. In the case of a back-to-back register dependency (i.e. some input operand is written by the previous instruction), the latency is equal to the execution time of the previous instruction and the warp schedulers must schedule instructions for different warps during that time. Execution time varies depending on the instruction, but it is typically about 22 clock cycles for devices of compute capability 1.x and 2.x and about 11 clock cycles for devices of compute capability 3.0, which translates to 6 warps for devices of compute capability 1.x and 22 warps for devices of compute capability 2.x and higher (still assuming that warps execute instructions with maximum throughput, otherwise fewer warps are needed). For devices of compute capability 2.1 and higher, this is also assuming enough instruction-level parallelism so that schedulers are always able to issue pairs of instructions for each warp.

If some input operand resides in off-chip memory, the latency is much higher: 400 to 800 clock cycles. The number of warps required to keep the warp schedulers busy during such high latency periods depends on the kernel code and its degree of instruction-level parallelism. In general, more warps are required if the ratio of the number of instructions with no off-chip memory operands (i.e. arithmetic instructions most of the time) to the number of instructions with off-chip memory operands is low (this ratio is commonly called the arithmetic intensity of the program). If this ratio is 15, for example, then to hide latencies of about 600 clock cycles, about 10 warps are required for devices of compute capability 1.x and about 40 for devices of compute capability 2.x and higher (with the same assumptions as in the previous paragraph).
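A back-of-envelope check of the guide's example (my own arithmetic, not the guide's): a compute capability 1.x multiprocessor issues one instruction per warp every 4 clock cycles, so hiding 600 cycles means filling about 600 / 4 = 150 issue slots; with an arithmetic intensity of 15, each warp contributes roughly 15 arithmetic instructions per memory access, so 150 / 15 = 10 warps. A 2.x multiprocessor issues about one instruction per cycle overall (two schedulers, each issuing a warp instruction every 2 cycles), giving 600 slots and hence 600 / 15 = 40 warps, matching the figures above.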
5.3 Maximize Memory Throughput
The first step in maximizing overall memory throughput for the application is to minimize data transfers with low bandwidth.
That means minimizing data transfers between the host and the device, as detailed in Section 5.3.1, since these have much lower bandwidth than data transfers between global memory and the device.
That also means minimizing data transfers between global memory and the device by maximizing use of on-chip memory: shared memory and caches (i.e. L1/L2 caches available on devices of compute capability 2.x and higher, texture cache and constant cache available on all devices).
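Following the advice in 5.3, here is a hedged sketch of what "minimize host-device transfers" can look like in practice (the stepA/stepB kernels and the pipeline function are placeholders invented for this example): copy the input once, chain the kernels on a device-resident buffer, and copy the result back once at the end.

```cuda
#include <cuda_runtime.h>

// Placeholder kernels standing in for real processing stages.
__global__ void stepA(float *d, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= 2.0f;
}

__global__ void stepB(float *d, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] += 1.0f;
}

void pipeline(float *host, int n)
{
    float *dev;
    size_t bytes = n * sizeof(float);
    cudaMalloc(&dev, bytes);

    // One host-to-device transfer for the whole pipeline...
    cudaMemcpy(dev, host, bytes, cudaMemcpyHostToDevice);

    // ...then every intermediate result stays in device memory,
    // instead of round-tripping over the PCIe bus between stages.
    int threads = 256, blocks = (n + threads - 1) / threads;
    stepA<<<blocks, threads>>>(dev, n);
    stepB<<<blocks, threads>>>(dev, n);

    // ...and one device-to-host transfer at the end.
    cudaMemcpy(host, dev, bytes, cudaMemcpyDeviceToHost);
    cudaFree(dev);
}
```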