Get all latest (August) Cloudera CCA-500 Actual Test 21-30


 

QUESTION 21

Assuming a cluster running HDFS, MapReduce version 2 (MRv2) on YARN with all settings at their default, what do you need to do when adding a new slave node to the cluster?

 

A.

Nothing, other than ensuring that the DNS (or /etc/hosts files on all machines) contains an entry for the new node.

B.

Restart the NameNode and ResourceManager daemons and resubmit any running jobs.

C.

Add a new entry to /etc/nodes on the NameNode host.

D.

Restart the NameNode of dfs.number.of.nodes in hdfs-site.xml

 

Correct Answer: A

Explanation:

http://wiki.apache.org/hadoop/FAQ#I_have_a_new_node_I_want_to_add_to_a_running_Hadoop_cluster.3B_how_do_I_start_services_on_just_one_node.3F
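For example, the only cluster-wide change needed is a resolvable entry for the new host, either in DNS or in /etc/hosts on every machine (the hostname and IP below are made-up placeholders):

```
# /etc/hosts on every node (or an equivalent DNS A record)
10.20.30.41   slave41.example.com   slave41
```

With that entry in place, the DataNode and NodeManager daemons started on the new host will register themselves with the NameNode and ResourceManager automatically.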

 

 

QUESTION 22

You are working on a project where you need to chain together MapReduce and Pig jobs. You also need the ability to use forks, decision points, and path joins. Which ecosystem project should you use to perform these actions?

 

A.

Oozie

B.

ZooKeeper

C.

HBase

D.

Sqoop

E.

HUE

 

Correct Answer: A
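A minimal Oozie workflow skeleton illustrating the control nodes the question mentions: a fork that runs a MapReduce action and a Pig action in parallel, a join, and a decision. The action names and the elided action bodies are illustrative assumptions, not a complete runnable workflow:

```xml
<workflow-app name="demo-wf" xmlns="uri:oozie:workflow:0.4">
  <start to="fork-node"/>
  <fork name="fork-node">
    <path start="mr-action"/>
    <path start="pig-action"/>
  </fork>
  <action name="mr-action">
    <map-reduce> ... </map-reduce>   <!-- MapReduce step, details omitted -->
    <ok to="join-node"/>
    <error to="fail"/>
  </action>
  <action name="pig-action">
    <pig> ... </pig>                 <!-- Pig step, details omitted -->
    <ok to="join-node"/>
    <error to="fail"/>
  </action>
  <join name="join-node" to="decision-node"/>
  <decision name="decision-node">
    <switch>
      <case to="end">${wf:lastErrorNode() eq null}</case>
      <default to="fail"/>
    </switch>
  </decision>
  <kill name="fail"><message>Workflow failed</message></kill>
  <end name="end"/>
</workflow-app>
```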

 

 

QUESTION 23

You decide to create a cluster which runs HDFS in High Availability mode with automatic failover, using Quorum Storage. What is the purpose of ZooKeeper in such a configuration?

 

A.

It only keeps track of which NameNode is Active at any given time.

B.

It monitors an NFS mount point and reports if the mount point disappears.

C.

It both keeps track of which NameNode is Active at any given time, and manages the Edits file, which is a log of changes to the HDFS filesystem.

D.

It only manages the Edits file, which is a log of changes to the HDFS filesystem.

E.

Clients connect to ZooKeeper to determine which NameNode is Active.

 

Correct Answer: A

Explanation:

http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH4/latest/PDF/CDH4-High-Availability-Guide.pdf (page 15)
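For reference, automatic failover in this setup is typically enabled with properties like the following; the ZKFailoverController uses the ZooKeeper ensemble purely for leader election, i.e. to track which NameNode is Active (the ZooKeeper hostnames below are illustrative assumptions):

```xml
<!-- hdfs-site.xml -->
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>

<!-- core-site.xml -->
<property>
  <name>ha.zookeeper.quorum</name>
  <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
</property>
```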

 

 

QUESTION 24

Each node in your Hadoop cluster, running YARN, has 64GB memory and 24 cores. Your yarn-site.xml has the following configuration:

 

<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>32768</value>
</property>

<property>
  <name>yarn.nodemanager.resource.cpu-vcores</name>
  <value>12</value>
</property>

 

You want YARN to launch no more than 16 containers per node. What should you do?

 

A.

Modify yarn-site.xml with the following property:

<name>yarn.scheduler.minimum-allocation-mb</name>

<value>2048</value>

B.

Modify yarn-site.xml with the following property:

<name>yarn.scheduler.minimum-allocation-mb</name>

<value>4096</value>

C.

Modify yarn-site.xml with the following property:

<name>yarn.nodemanager.resource.cpu-vccores</name>

D.

No action is needed: YARN’s dynamic resource allocation automatically optimizes the node memory and cores

 

Correct Answer: A
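The arithmetic behind option A can be sketched as follows: the NodeManager offers 32768 MB, and with a minimum allocation of 2048 MB the scheduler can fit at most 32768 / 2048 containers on the node.

```python
# Sketch of how the per-node container cap follows from the memory settings.
node_mem_mb = 32768    # yarn.nodemanager.resource.memory-mb (from the question)
min_alloc_mb = 2048    # yarn.scheduler.minimum-allocation-mb (option A)

max_containers = node_mem_mb // min_alloc_mb
print(max_containers)  # 16
```

A value of 4096 (option B) would cap the node at 8 containers, which is why A is the answer.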

 

 

QUESTION 25

Which is the default scheduler in YARN?

 

A.

YARN doesn’t configure a default scheduler; you must first assign an appropriate scheduler class in yarn-site.xml

B.

Capacity Scheduler

C.

Fair Scheduler

D.

FIFO Scheduler

 

Correct Answer: B

Explanation:

http://hadoop.apache.org/docs/r2.4.1/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html
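The scheduler can also be set explicitly in yarn-site.xml; the class shown below is the stock Capacity Scheduler that Hadoop 2 ships configured by default:

```xml
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>
```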

 

 

QUESTION 26

On a cluster running CDH 5.0 or above, you use the hadoop fs -put command to write a 300MB file into a previously empty directory using an HDFS block size of 64 MB. Just after this command has finished writing 200 MB of this file, what would another user see when they look in the directory?

 

A.

The directory will appear to be empty until the entire file write is completed on the cluster.

B.

They will see the file with a ._COPYING_ extension on its name. If they view the file, they will see contents of the file up to the last completed block (as each 64MB block is written, that block becomes available).

C.

They will see the file with a ._COPYING_ extension on its name. If they attempt to view the file, they will get a ConcurrentFileAccessException until the entire file write is completed on the cluster.

D.

They will see the file with its original name. If they attempt to view the file, they will get a ConcurrentFileAccessException until the entire file write is completed on the cluster.

 

Correct Answer: B
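A quick sketch of the block arithmetic behind answer B: only fully written 64 MB blocks are readable, so after 200 MB of the 300 MB file has been written, three complete blocks (192 MB) of the ._COPYING_ file are visible.

```python
# How much of the in-flight file another user can read, per the answer above.
block_mb = 64     # HDFS block size from the question
written_mb = 200  # amount written so far

complete_blocks = written_mb // block_mb
visible_mb = complete_blocks * block_mb
print(complete_blocks, visible_mb)  # 3 192
```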

 

 

QUESTION 27

Which scheduler would you deploy to ensure that your cluster allows short jobs to finish within a reasonable time without starting long-running jobs?

 

A.

Complexity Fair Scheduler (CFS)

B.

Capacity Scheduler

C.

Fair Scheduler

D.

FIFO Scheduler

 

Correct Answer: C

Explanation:

http://hadoop.apache.org/docs/r1.2.1/fair_scheduler.html
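Complementing the default above, the Fair Scheduler is selected by pointing the same yarn-site.xml property at its class:

```xml
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>
```

Under fair sharing, a short job submitted alongside a long-running one receives its share of resources quickly instead of waiting in a FIFO queue, which is the behavior the question asks for.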

 

 

QUESTION 28

For each YARN job, the Hadoop framework generates task log files. Where are Hadoop task log files stored?

 

A.

Cached by the NodeManager managing the job containers, then written to a log directory on the NameNode

B.

Cached in the YARN container running the task, then copied into HDFS on job completion

C.

In HDFS, in the directory of the user who generates the job

D.

On the local disk of the slave node running the task

 

Correct Answer: D
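The local directory used for these container logs is controlled by the NodeManager's yarn.nodemanager.log-dirs property; the path below is an illustrative assumption, not a universal default:

```xml
<property>
  <name>yarn.nodemanager.log-dirs</name>
  <value>/var/log/hadoop-yarn/containers</value>
</property>
```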

 

 

QUESTION 29

Which two steps must you take if you are running a Hadoop cluster with a single NameNode and six DataNodes, and you want to change a configuration parameter so that it affects all six DataNodes? (Choose two)

 

A.

You must modify the configuration files on the NameNode only. DataNodes read their configuration from the master nodes

B.

You must modify the configuration files on each of the six DataNode machines

C.

You don’t need to restart any daemon, as they will pick up changes automatically

D.

You must restart the NameNode daemon to apply the changes to the cluster

E.

You must restart all six DataNode daemons to apply the changes to the cluster

 

Correct Answer: BD

 

 

QUESTION 30

Your company stores user profile records in an OLTP database. You want to join these records with web server logs you have already ingested into the Hadoop file system. What is the best way to obtain and ingest these user records?

 

A.

Ingest with Hadoop streaming

B.

Ingest using Hive’s LOAD DATA command

C.

Ingest with sqoop import

D.

Ingest with Pig’s LOAD command

E.

Ingest using the HDFS put command

 

Correct Answer: C
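A typical sqoop import invocation looks like the command-line fragment below; the JDBC URL, table name, and target directory are made-up assumptions for illustration:

```shell
# Pull the user profile table from the OLTP database into HDFS,
# where it can be joined with the ingested web server logs.
sqoop import \
  --connect jdbc:mysql://dbhost.example.com/profiles \
  --username etl_user -P \
  --table users \
  --target-dir /data/user_profiles
```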

 
