Multiple Choice
You want to migrate an on-premises Hadoop system to Cloud Dataproc. Hive is the primary tool in use, and the data format is Optimized Row Columnar (ORC). All ORC files have been successfully copied to a Cloud Storage bucket. You need to replicate some data to the cluster's local Hadoop Distributed File System (HDFS) to maximize performance. What are two ways to start using Hive in Cloud Dataproc? (Choose two.)
A) Run the gsutil utility to transfer all ORC files from the Cloud Storage bucket to HDFS. Mount the Hive tables locally.
B) Run the gsutil utility to transfer all ORC files from the Cloud Storage bucket to any node of the Dataproc cluster. Mount the Hive tables locally.
C) Run the gsutil utility to transfer all ORC files from the Cloud Storage bucket to the master node of the Dataproc cluster. Then run the Hadoop utility to copy them to HDFS. Mount the Hive tables from HDFS.
D) Leverage Cloud Storage connector for Hadoop to mount the ORC files as external Hive tables. Replicate external Hive tables to the native ones.
E) Load the ORC files into BigQuery. Leverage BigQuery connector for Hadoop to mount the BigQuery tables as external Hive tables. Replicate external Hive tables to the native ones.
Correct Answer:

Verified
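Options C and D each describe a multi-step procedure. As a rough illustration only, here is what those steps could look like when run from a Dataproc master node; the bucket name, HDFS paths, and table names below are hypothetical placeholders, not values from the question:

```shell
# Option C sketch: stage the ORC files on the master node, then push them
# into HDFS and mount a Hive table over the HDFS location.
BUCKET=gs://example-orc-bucket          # hypothetical bucket name

# 1. Copy ORC files from Cloud Storage to the master node's local disk.
gsutil -m cp -r "${BUCKET}/orc/" /tmp/orc/

# 2. Copy them from local disk into HDFS with the Hadoop utility.
hadoop fs -mkdir -p /user/hive/warehouse/orders
hadoop fs -put /tmp/orc/* /user/hive/warehouse/orders/

# 3. Mount a Hive table over the HDFS location.
hive -e "CREATE EXTERNAL TABLE orders
         STORED AS ORC
         LOCATION '/user/hive/warehouse/orders';"

# Option D sketch: the Cloud Storage connector (installed by default on
# Dataproc) lets Hive read gs:// paths directly, so the ORC files can be
# mounted as an external table and replicated into a native table in HDFS.
hive -e "CREATE EXTERNAL TABLE orders_gcs
         STORED AS ORC
         LOCATION '${BUCKET}/orc/';"
hive -e "CREATE TABLE orders_native STORED AS ORC
         AS SELECT * FROM orders_gcs;"
```

Note the trade-off the question hinges on: gsutil writes to a node's local filesystem, not to HDFS, so a separate `hadoop fs -put` (or distcp) step is needed in option C, whereas option D avoids manual copying by letting Hive itself read from Cloud Storage during replication.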