Exam 4: Characteristics and Operations in Parallel Programming
Exam 1: Understanding CUDA Kernel Code and Host GPU Interactions (24 Questions)
Exam 2: NVIDIA CUDA and GPU Programming (24 Questions)
Exam 3: Parallel Execution in CUDA, Sorting Algorithms, and Search Techniques (25 Questions)
Exam 4: Characteristics and Operations in Parallel Programming (24 Questions)
Exam 5: Communication Patterns and Algorithms in Distributed Systems (25 Questions)
Exam 6: Communication Operations and Algorithms in Parallel Computing (25 Questions)
Exam 7: Parallel Computing and Graph Algorithms (25 Questions)
Exam 8: Parallel Processing and GPU Architecture (25 Questions)
Exam 9: Computer Architecture and Multiprocessor Systems (24 Questions)
Exam 10: Sorting Algorithms and Pipelined Systems (25 Questions)
Exam 11: Algorithms and Parallel Formulations (25 Questions)
Exam 12: CUDA Model, Parallelism, Memory System, and Communication Models (25 Questions)
Exam 13: Parallel Communication and Runtime Analysis (25 Questions)
Exam 14: Parallel Computing and GPU Architecture (25 Questions)
Exam 15: Exploring Multiprocessor Systems and Parallel Algorithms (25 Questions)
Exam 16: Parallel Processing and Algorithms (25 Questions)
Exam 17: Distributed Systems and Computing (34 Questions)
Scaling Characteristics of Parallel Programs: Ts is ___
(Multiple Choice) Correct Answer: B
Cost-optimal parallel systems have an efficiency of ___.
(Multiple Choice) Correct Answer: A
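The efficiency notion behind this question comes from standard parallel performance analysis. A minimal sketch, assuming the textbook definitions E = Ts / (p · Tp) and the adding-n-numbers cost model Tp = n/p + 2·log2(p) (both are background assumptions, not stated in this exam):

```python
# A parallel system is cost-optimal when the total parallel cost
# p * Tp grows at the same rate as the serial runtime Ts, which makes
# the efficiency E = Ts / (p * Tp) a constant (Theta(1)).
import math

def efficiency(ts: float, tp: float, p: int) -> float:
    """Efficiency E = S / p = Ts / (p * Tp)."""
    return ts / (p * tp)

def tp_add(n: int, p: int) -> float:
    # Hypothetical cost model: adding n numbers on p processors takes
    # roughly n/p local additions plus 2*log2(p) reduction steps.
    return n / p + 2 * math.log2(p)

n = 1 << 20
for p in (2, 16, 256):
    print(p, round(efficiency(n, tp_add(n, p), p), 4))
```

For n much larger than p the efficiency stays near 1, which is exactly the constant-efficiency signature of a cost-optimal formulation.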
Efficient implementation of basic communication operations can improve ___.
(Multiple Choice) Correct Answer: A
Group communication operations are built using ___ messaging primitives.
(Multiple Choice)
Speedup tends to saturate and efficiency ___ as a consequence of Amdahl's law.
(Multiple Choice)
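The saturation behavior this question refers to can be made concrete with a short sketch; the 5% serial fraction is a made-up example value, not from the exam:

```python
# Amdahl's law: with serial fraction f, the speedup on p processors is
# S(p) = 1 / (f + (1 - f) / p). As p grows, S saturates at 1/f while
# the efficiency E = S / p drops toward zero.

def speedup(f: float, p: int) -> float:
    return 1.0 / (f + (1.0 - f) / p)

f = 0.05  # hypothetical 5% serial fraction
for p in (1, 10, 100, 1000):
    s = speedup(f, p)
    print(f"p={p:4d}  speedup={s:6.2f}  efficiency={s / p:.3f}")
```

With f = 0.05 the speedup can never exceed 1/f = 20 no matter how many processors are added, so efficiency falls as p grows.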
The cost of the parallel algorithm is higher than the sequential run time by a factor of ___.
(Multiple Choice)
The n × n matrix is partitioned among n processors, with each processor storing a complete ___ of the matrix.
(Multiple Choice)
The n × n matrix is partitioned among n² processors such that each processor owns a ___ element.
(Multiple Choice)
How many basic communication operations are used in matrix-vector multiplication?
(Multiple Choice)
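For the matrix-vector questions above, a toy sketch of the rowwise 1-D formulation in pure Python, simulating only the per-processor local computation (the broadcast of the vector is implied, not modeled):

```python
# Rowwise 1-D matrix-vector multiply: each of the n "processors" owns
# one complete row of the n x n matrix. After the vector x has been
# broadcast so that every processor sees it, each processor computes
# the dot product of its local row with x.

def parallel_matvec(rows, x):
    # rows[i] is the row owned by processor i.
    return [sum(a * b for a, b in zip(row, x)) for row in rows]

A = [[1, 2], [3, 4]]
x = [5, 6]
print(parallel_matvec(A, x))  # [17, 39]
```

The same structure covers the 2-D (one element per processor) variant, except that partial products must then be summed across each row of the processor grid.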
Data items must be combined piece-wise and the result made available at ___.
(Multiple Choice)
A parallel algorithm is evaluated by its runtime as a function of ___.
(Multiple Choice)
The speedup obtained when the problem size is ___ linearly with the number of processing elements.
(Multiple Choice)
One processor has a piece of data and needs to send it to everyone: this is ___.
(Multiple Choice)
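The operation described here is a one-to-all broadcast. A minimal recursive-doubling sketch, simulated with a plain Python list rather than a real message-passing library (the function name and round-counting are illustrative, not from the exam):

```python
# One-to-all broadcast via recursive doubling: in each round, every
# processor that already holds the data sends it to the partner whose
# id differs in one bit, so the set of holders doubles each round and
# the broadcast finishes in ceil(log2(p)) rounds.

def broadcast_rounds(p: int, root: int = 0) -> int:
    has = [False] * p       # has[i]: does processor i hold the data yet?
    has[root] = True
    rounds = 0
    step = 1
    while step < p:
        senders = [i for i in range(p) if has[i]]  # snapshot this round
        for src in senders:
            dst = src ^ step                       # partner differs in one bit
            if dst < p:
                has[dst] = True
        step *= 2
        rounds += 1
    assert all(has)
    return rounds

print(broadcast_rounds(8))  # 3
```

Taking a snapshot of the current holders before sending keeps newly reached processors from forwarding within the same round, which is what makes the round count logarithmic rather than accidental.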
The load imbalance problem in parallel Gaussian elimination can be alleviated by using a ___ mapping.
(Multiple Choice)
The processors compute the ___ product of the vector element and the local matrix.
(Multiple Choice)