Sail E0 Webinar

MCQs

Total Questions: 34
Question 21. Parallel Execution
  1.    A sequential execution of a program, one statement at a time
  2.    Execution of a program by more than one task, with each task being able to execute the same or different statement at the same moment in time
  3.    A program or set of instructions that is executed by a processor
  4.    None of these
Answer: Option B. -> Execution of a program by more than one task, with each task being able to execute the same or different statement at the same moment in time
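For illustration, a minimal Python sketch of this definition (the function and inputs are made up for the example): four tasks execute the same function on different data at the same moment in time.

```python
# Parallel execution: several worker processes run the same statement
# on different data simultaneously.
from multiprocessing import Pool

def square(n):
    return n * n

if __name__ == "__main__":
    with Pool(processes=4) as pool:           # four tasks
        results = pool.map(square, range(8))  # work is split across tasks
    print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```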
Question 22. Coarse-grain Parallelism
  1.    In parallel computing, it is a qualitative measure of the ratio of computation to communication
  2.    Relatively small amounts of computational work are done between communication events
  3.    Relatively large amounts of computational work are done between communication / synchronization events
  4.    None of these
Answer: Option C. -> Relatively large amounts of computational work are done between communication / synchronization events
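To make the granularity contrast concrete, a sketch using the same Pool API (the chunk sizes are illustrative): the chunksize argument controls how much computation each task performs between communication events with the parent process.

```python
# Coarse vs. fine granularity with multiprocessing.Pool.
from multiprocessing import Pool

def work(n):
    return n * n

if __name__ == "__main__":
    data = range(100_000)
    with Pool(4) as pool:
        # Coarse-grain: large chunks, few communication events per task.
        coarse = pool.map(work, data, chunksize=25_000)
        # Fine-grain: tiny chunks, a communication event for every item.
        fine = pool.map(work, data, chunksize=1)
```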
Question 23. In shared memory:
  1.    All processors access all memory as a global address space
  2.    All processors have individual memory
  3.    Some processors access all memory as a global address space and some do not
  4.    None of these
Answer: Option A. -> All processors access all memory as a global address space
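Threads inside one process are a minimal model of a global address space; a sketch (names are illustrative):

```python
# Every thread addresses the same memory, with no copying or messaging.
import threading

shared = [0] * 4  # one list in the single, global address space

def writer(i):
    shared[i] = i * 10  # each thread writes directly into shared memory

threads = [threading.Thread(target=writer, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared)  # [0, 10, 20, 30]
```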
Question 24. Cache Coherent UMA (CC-UMA) is
  1.    All processors have equal access and access times to memory
  2.    If one processor updates a location in shared memory, all the other processors know about the update
  3.    One SMP can directly access the memory of another SMP, and not all processors have equal access time to all memories
  4.    None of these
Answer: Option B. -> If one processor updates a location in shared memory, all the other processors know about the update
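Cache coherence itself is a hardware property, but its visible effect can be loosely sketched with threads: an update made by one thread is observed by another (this models the visibility guarantee only, not the hardware protocol).

```python
import threading

flag = {"updated": False}
ready = threading.Event()

def updater():
    flag["updated"] = True  # one "processor" updates shared memory
    ready.set()             # signal that the update happened

def observer():
    ready.wait()
    print("observer sees:", flag["updated"])  # True: the update is visible

t1, t2 = threading.Thread(target=updater), threading.Thread(target=observer)
t2.start(); t1.start()
t1.join(); t2.join()
```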
Question 25. Scalability refers to a parallel system’s (hardware and/or software) ability
  1.    To demonstrate a proportionate increase in parallel speedup with the removal of some processors
  2.    To demonstrate a proportionate increase in parallel speedup with the addition of more processors
  3.    To demonstrate a proportionate decrease in parallel speedup with the addition of more processors
  4.    None of these
Answer: Option B. -> To demonstrate a proportionate increase in parallel speedup with the addition of more processors
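One standard way to quantify this (not stated in the question, but classical) is Amdahl's law: with parallel fraction f of the work and p processors, speedup is S(p) = 1 / ((1 - f) + f / p). A quick sketch:

```python
# Speedup as processors are added, for a program that is 95% parallel.
def amdahl_speedup(p, f=0.95):
    """Speedup on p processors when fraction f of the work is parallel."""
    return 1.0 / ((1.0 - f) + f / p)

for p in (1, 2, 4, 8, 16):
    print(p, round(amdahl_speedup(p), 2))
# 1 1.0 | 2 1.9 | 4 3.48 | 8 5.93 | 16 9.14
# Speedup grows as processors are added, but the serial fraction
# eventually limits how proportionate that growth can be.
```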
Question 26. Uniform Memory Access (UMA) refers to:
  1.    All processors have equal access and access times to memory
  2.    If one processor updates a location in shared memory, all the other processors know about the update
  3.    One SMP can directly access the memory of another SMP, and not all processors have equal access time to all memories
  4.    None of these
Answer: Option A. -> All processors have equal access and access times to memory
Question 27. Collective communication
  1.    It involves data sharing between more than two tasks, which are often specified as being members in a common group, or collective.
  2.    It involves two tasks with one task acting as the sender/producer of data, and the other acting as the receiver/consumer.
  3.    It allows tasks to transfer data independently from one another.
  4.    None of these
Answer: Option A. -> It involves data sharing between more than two tasks, which are often specified as being members in a common group, or collective.
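A sketch of collective communication with mpi4py (this assumes a working MPI installation; run with e.g. `mpiexec -n 4 python script.py`):

```python
# Broadcast and reduce are collectives: every task in the group takes part.
from mpi4py import MPI

comm = MPI.COMM_WORLD   # the common group, or collective
rank = comm.Get_rank()

value = comm.bcast(42 if rank == 0 else None, root=0)  # all tasks receive 42
total = comm.reduce(rank, op=MPI.SUM, root=0)          # all tasks contribute

if rank == 0:
    print("broadcast:", value, "sum of ranks:", total)
```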
Question 28. Parallel computing can include
  1.    Single computer with multiple processors
  2.    Arbitrary number of computers connected by a network
  3.    Combination of both A and B
  4.    None of these
Answer: Option C. -> Combination of both A and B
Question 29. In shared memory:
  1.    Changes in a memory location effected by one processor do not affect all other processors
  2.    Changes in a memory location effected by one processor are visible to all other processors
  3.    Changes in a memory location effected by one processor are randomly visible to all other processors
  4.    None of these
Answer: Option B. -> Changes in a memory location effected by one processor are visible to all other processors
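Between separate processes, the same visibility requires an explicitly shared segment; a sketch using Python's multiprocessing.shared_memory (Python 3.8+, names illustrative):

```python
# A change effected by the child process is visible to the parent,
# because both map the same shared-memory block.
from multiprocessing import Process, shared_memory

def child(name):
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[0] = 99  # the child updates the shared location
    shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(create=True, size=1)
    p = Process(target=child, args=(shm.name,))
    p.start(); p.join()
    print(shm.buf[0])  # 99: the change is visible to the parent
    shm.close(); shm.unlink()
```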
Question 30. Here a single program is executed by all tasks simultaneously. At any moment in time, tasks can be executing the same or different instructions within the same program. These programs usually have the necessary logic programmed into them to allow different tasks to branch or conditionally execute only those parts of the program they are designed to execute.
  1.    Single Program Multiple Data (SPMD)
  2.    Multiple Program Multiple Data (MPMD)
  3.    Von Neumann Architecture
  4.    None of these
Answer: Option A. -> Single Program Multiple Data (SPMD)
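The SPMD pattern in miniature (mpi4py again, assuming MPI): every task runs this same program, and branching on the task's rank lets each one execute only the part it is designed to execute.

```python
# Single Program Multiple Data: one program, many tasks, rank-based branches.
from mpi4py import MPI

rank = MPI.COMM_WORLD.Get_rank()

if rank == 0:
    print("task 0: coordinating")  # same program, different branch
else:
    print(f"task {rank}: computing my share of the data")
```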
