Sail E0 Webinar

MCQs

Total Questions: 10
Question 1. Point out the wrong statement:
  1.    If you set the HBase service into maintenance mode, then its roles (HBase Master and all Region Servers) are put into effective maintenance mode
  2.    If you set a host into maintenance mode, then any roles running on that host are put into effective maintenance mode
  3.    Putting a component into maintenance mode prevents events from being logged
  4.    None of the mentioned
Answer: Option C. -> Putting a component into maintenance mode prevents events from being logged


Maintenance mode only suppresses the alerts that those events would otherwise generate.


Question 2. Point out the wrong statement:
  1.    classNAME displays the class name needed to get the Hadoop jar
  2.    balancer runs a cluster balancing utility
  3.    An administrator can simply press Ctrl-C to stop the rebalancing process
  4.    None of the mentioned
Answer: Option A. -> classNAME displays the class name needed to get the Hadoop jar


classpath prints the class path needed to get the Hadoop jar and the required libraries.
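
As a rough illustration of both commands, here is how they are typically invoked from a shell (the threshold value is only an example, not part of the question):

    hadoop classpath                # print the class path needed to run the Hadoop jar and its libraries
    hdfs balancer -threshold 10     # rebalance until each DataNode's usage is within 10% of the cluster average
    # Pressing Ctrl-C in the terminal running the balancer stops the rebalancing process.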


Question 3. __________ mode is a Namenode state in which it does not accept changes to the name space.
  1.    Recover
  2.    Safe
  3.    Rollback
  4.    None of the mentioned
Answer: Option B. -> Safe


In safe mode, the NameNode does not accept changes to the name space and does not replicate or delete blocks.
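
As a minimal sketch (assuming the hdfs command is on the PATH), safe mode can be inspected and toggled with dfsadmin:

    hdfs dfsadmin -safemode get     # report whether the NameNode is in safe mode
    hdfs dfsadmin -safemode enter   # enter safe mode: the name space becomes read-only
    hdfs dfsadmin -safemode leave   # leave safe mode and resume normal operation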


Question 4. _________ command is used to copy files or directories recursively.
  1.    dtcp
  2.    distcp
  3.    dcp
  4.    distc
Answer: Option B. -> distcp


Usage of the distcp command: hadoop distcp <srcurl> <desturl>.
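
A minimal example invocation (the cluster addresses and paths below are placeholders):

    hadoop distcp hdfs://nn1:8020/source/dir hdfs://nn2:8020/target/dir
    # Recursively copies the source directory tree to the target cluster using a MapReduce job.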


Question 5. Which of the following is a common reason to restart a Hadoop process?
  1.    Upgrade Hadoop
  2.    React to incidents
  3.    Remove worker nodes
  4.    All of the mentioned
Answer: Option D. -> All of the mentioned


The most common reason administrators restart Hadoop processes is to enact configuration changes.
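
For example, on Hadoop 3.x a single daemon is typically restarted like this after a configuration change (the daemon name is only an example):

    hdfs --daemon stop namenode
    hdfs --daemon start namenode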


Question 6. __________ Manager's Service feature monitors dozens of service health and performance metrics about the services and role instances running on your cluster.
  1.    Microsoft
  2.    Cloudera
  3.    Amazon
  4.    None of the mentioned
Answer: Option B. -> Cloudera


Cloudera Manager's Service feature presents health and performance data in a variety of formats.


Question 7. Point out the correct statement:
  1.    All hadoop commands are invoked by the bin/hadoop script
  2.    Hadoop has an option parsing framework that employs only parsing generic options
  3.    archive command creates a hadoop archive
  4.    All of the mentioned
Answer: Option A. -> All hadoop commands are invoked by the bin/hadoop script


Running the hadoop script without any arguments prints the description for all commands.
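
For reference, a minimal sketch of the hadoop script and its archive subcommand (the archive name and paths below are placeholders):

    hadoop                                             # with no arguments, prints usage for all Hadoop subcommands
    hadoop archive -archiveName files.har -p /user/alice dir1 dir2 /user/alice/archives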


Question 8. Which of the following scenarios may not be a good fit for HDFS?
  1.    HDFS is not suitable for scenarios requiring multiple/simultaneous writes to the same file
  2.    HDFS is suitable for storing data related to applications requiring low latency data access
  3.    HDFS is suitable for storing data related to applications requiring low latency data access
  4.    None of the mentioned
Answer: Option A. -> HDFS is not suitable for scenarios requiring multiple/simultaneous writes to the same file


HDFS can be used for storing archive data cheaply, since it stores data on low-cost commodity hardware while ensuring a high degree of fault tolerance.


Question 9. Point out the wrong statement:
  1.    Replication Factor can be configured at a cluster level (Default is set to 3) and also at a file level
  2.    Block Report from each DataNode contains a list of all the blocks that are stored on that DataNode
  3.    User data is stored on the local file system of DataNodes
  4.    DataNode is aware of the files to which the blocks stored on it belong
Answer: Option D. -> DataNode is aware of the files to which the blocks stored on it belong


It is the NameNode, not the DataNode, that knows which files the blocks stored on a DataNode belong to.
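
One way to see this file-to-block mapping held by the NameNode is fsck (the path below is only an example):

    hdfs fsck /user/alice/data.txt -files -blocks -locations
    # For each file, reports its blocks and the DataNodes holding each replica.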


Question 10. The need for data replication can arise in various scenarios, such as:
  1.    Replication Factor is changed
  2.    DataNode goes down
  3.    Data Blocks get corrupted
  4.    All of the mentioned
Answer: Option D. -> All of the mentioned


Data is replicated across different DataNodes to ensure a high degree of fault-tolerance.
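
For instance, when the replication factor of an existing file is raised, the NameNode schedules additional copies; a sketch of changing it from the command line (the path and factor are examples):

    hdfs dfs -setrep -w 4 /user/alice/data    # raise replication to 4 and wait until it is satisfied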

