Multiple Choice
Your company's on-premises Apache Hadoop servers are approaching end-of-life, and IT has decided to migrate the cluster to Google Cloud Dataproc. A like-for-like migration of the cluster would require 50 TB of Google Persistent Disk per node. The CIO is concerned about the cost of using that much block storage. You want to minimize the storage cost of the migration. What should you do?
A) Put the data into Google Cloud Storage.
B) Use preemptible virtual machines (VMs) for the Cloud Dataproc cluster.
C) Tune the Cloud Dataproc cluster so that there is just enough disk for all data.
D) Migrate some of the cold data into Google Cloud Storage, and keep only the hot data in Persistent Disk.
Correct Answer:

Verified
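For context on the storage trade-off in this scenario: moving HDFS data into Google Cloud Storage is commonly done with `hadoop distcp` through the Cloud Storage connector, after which Dataproc jobs can read `gs://` paths directly instead of relying on Persistent Disk attached to each node. A minimal sketch, assuming a cluster with the connector configured; the bucket name and paths are placeholders:

```shell
# Copy HDFS data into a Cloud Storage bucket (bucket name is hypothetical).
# Run from a cluster node where the GCS connector is available.
hadoop distcp hdfs:///user/data gs://example-migration-bucket/data

# Jobs can then reference the data by its gs:// URI, for example:
#   spark.read.parquet("gs://example-migration-bucket/data")
```

Because Cloud Storage is billed separately from the cluster, data stored this way does not require provisioning large Persistent Disks on every node.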