Exam 2: Nvidia CUDA and GPU Programming
What does the triple angle bracket mark in a statement inside the main function indicate?
(Multiple Choice)
Correct Answer: A
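For illustration, a minimal sketch of such a launch statement (the kernel name fill and the 4 x 256 configuration are assumptions, not taken from the exam): the triple angle brackets give the execution configuration, i.e. how many blocks and how many threads per block run the kernel.

__global__ void fill(int *data)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global thread index
    data[i] = i;
}

int main(void)
{
    int *dev_data;
    cudaMalloc((void **)&dev_data, 1024 * sizeof(int));
    fill<<<4, 256>>>(dev_data);        // <<<blocks, threads per block>>>
    cudaDeviceSynchronize();           // wait for the kernel to finish
    cudaFree(dev_data);
    return 0;
}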
What is the equivalent of the following general C program in CUDA C:
int main(void)
{
    printf("Hello, World!\n");
    return 0;
}
(Multiple Choice)
Correct Answer: B
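A minimal sketch of one possible CUDA C counterpart (not necessarily answer choice B; the kernel name mykernel is an assumption): adding an empty __global__ kernel and a <<<1,1>>> launch turns the plain C program into a valid CUDA C program compiled with nvcc.

#include <cstdio>

__global__ void mykernel(void) { }     // empty kernel, runs on the device

int main(void)
{
    mykernel<<<1, 1>>>();              // launch one block of one thread
    printf("Hello, World!\n");
    return 0;
}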
An NVIDIA CUDA warp is made up of how many threads?
(Multiple Choice)
Correct Answer: D
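A minimal sketch that confirms the warp size at runtime via cudaGetDeviceProperties (device index 0 is an assumption); current NVIDIA GPUs report 32 threads per warp.

#include <cstdio>

int main(void)
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);            // properties of device 0
    printf("Warp size: %d threads\n", prop.warpSize);
    return 0;
}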
The CUDA architecture consists of --------- for parallel computing kernels and functions.
(Multiple Choice)
CUDA provides ------- warp and thread scheduling. Also, the overhead of thread creation is on the order of ----.
(Multiple Choice)
FADD, FMAD, FMIN, FMAX are ----- supported by Scalar Processors of NVIDIA GPU.
(Multiple Choice)
If a is a host variable and dev_a is a device (GPU) variable, select the correct statement to copy the input from variable a to variable dev_a:
(Multiple Choice)
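A minimal sketch of the host-to-device copy in question, assuming a is a single int: cudaMemcpy with the cudaMemcpyHostToDevice direction flag copies from a to dev_a.

int main(void)
{
    int a = 42;
    int *dev_a;
    cudaMalloc((void **)&dev_a, sizeof(int));                     // device allocation
    cudaMemcpy(dev_a, &a, sizeof(int), cudaMemcpyHostToDevice);   // copy a -> dev_a
    cudaFree(dev_a);
    return 0;
}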
The NVIDIA G80 is a ---- CUDA core device, the NVIDIA G200 is a ---- CUDA core device, and the NVIDIA Fermi is a ---- CUDA core device.
(Multiple Choice)
Each streaming multiprocessor (SM) of CUDA hardware has ------ scalar processors (SP).
(Multiple Choice)
If a is a host variable and dev_a is a device (GPU) variable, select the correct statement to allocate memory to dev_a:
(Multiple Choice)
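A minimal sketch of the allocation in question, assuming dev_a should hold a single int: cudaMalloc takes the address of the device pointer and the number of bytes to allocate.

int main(void)
{
    int *dev_a;
    cudaMalloc((void **)&dev_a, sizeof(int));   // allocate sizeof(int) bytes on the GPU
    cudaFree(dev_a);                            // release the device memory
    return 0;
}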
In the CUDA memory model, the following memory types are available:
A. Registers;
B. Local Memory;
C. Shared Memory;
D. Global Memory;
E. Constant Memory;
F. Texture Memory.
(Multiple Choice)
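For illustration, a minimal sketch of how most of these memory spaces appear in CUDA C (variable names are assumptions; texture memory, accessed through texture objects or references, is omitted):

__constant__ float coeff[16];           // constant memory (read-only in kernels)
__device__   float table[256];          // global memory (visible to all threads)

__global__ void memorySpaces(float *out)
{
    int tid = threadIdx.x;              // tid is held in a register
    float scratch[8] = {0.0f};          // per-thread array (registers or local memory)
    __shared__ float tile[128];         // shared memory, one copy per block
    tile[tid] = table[tid] * coeff[0] + scratch[0];
    __syncthreads();
    out[tid] = tile[tid];               // write result back to global memory
}

int main(void)
{
    float *dev_out;
    cudaMalloc((void **)&dev_out, 128 * sizeof(float));
    memorySpaces<<<1, 128>>>(dev_out);  // one block of 128 threads
    cudaDeviceSynchronize();
    cudaFree(dev_out);
    return 0;
}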
The CUDA hardware programming model supports:
A. Fully general data-parallel architecture;
B. General thread launch;
C. Global load-store;
D. Parallel data cache;
E. Scalar architecture;
F. Integers, bit operations.
(Multiple Choice)
A simple kernel for adding two integers:
__global__ void add(int *a, int *b, int *c) { *c = *a + *b; }
where __global__ is a CUDA C keyword which indicates that:
(Multiple Choice)
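A minimal host-side sketch of how such a kernel might be driven (variable names and the single-thread launch are assumptions; error checking omitted): allocate device copies, copy the inputs over, launch the kernel, and copy the result back.

#include <cstdio>

__global__ void add(int *a, int *b, int *c) { *c = *a + *b; }

int main(void)
{
    int a = 2, b = 7, c = 0;
    int *dev_a, *dev_b, *dev_c;
    cudaMalloc((void **)&dev_a, sizeof(int));
    cudaMalloc((void **)&dev_b, sizeof(int));
    cudaMalloc((void **)&dev_c, sizeof(int));
    cudaMemcpy(dev_a, &a, sizeof(int), cudaMemcpyHostToDevice);
    cudaMemcpy(dev_b, &b, sizeof(int), cudaMemcpyHostToDevice);
    add<<<1, 1>>>(dev_a, dev_b, dev_c);                          // single block, single thread
    cudaMemcpy(&c, dev_c, sizeof(int), cudaMemcpyDeviceToHost);  // fetch the result
    printf("2 + 7 = %d\n", c);
    cudaFree(dev_a); cudaFree(dev_b); cudaFree(dev_c);
    return 0;
}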
IADD, IMUL24, IMAD24, IMIN, IMAX are ----------- supported by Scalar Processors of NVIDIA GPU.
(Multiple Choice)