Multiple Choice
You've migrated a Hadoop job from an on-prem cluster to Dataproc and GCS. Your Spark job is a complex analytical workload that consists of many shuffling operations, and the input data consists of Parquet files (200-400 MB each, on average). You have observed some performance degradation after the migration to Dataproc and would like to optimize the job. Keep in mind that your organization is very cost-sensitive, so you want to continue running this workload on Dataproc with preemptible VMs (and only 2 non-preemptible workers). What should you do?
A) Increase the size of your Parquet files so that each is at least 1 GB.
B) Switch to the TFRecord format (approx. 200 MB per file) instead of Parquet files.
C) Switch from HDDs to SSDs, copy the input data from GCS to HDFS, run the Spark job, and copy the results back to GCS.
D) Switch from HDDs to SSDs, and override the preemptible VM configuration to increase the boot disk size.
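
For context on how the disk-related options (C and D) would actually be applied, below is a minimal sketch (not part of the original question) of creating a Dataproc cluster whose workers and preemptible secondary workers use SSD boot disks with a larger boot disk size, via the google-cloud-dataproc Python client. The project, region, cluster name, machine types, worker counts, and disk sizes are placeholder assumptions, not values from the question.

# Minimal sketch: Dataproc cluster with 2 non-preemptible workers and
# preemptible secondary workers on larger SSD boot disks (the kind of
# change options C/D describe). All concrete values are placeholders.
from google.cloud import dataproc_v1

project_id = "my-project"          # placeholder
region = "us-central1"             # placeholder
cluster_name = "spark-analytics"   # placeholder

client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

# SSD boot disk with an assumed 500 GB size to leave room for shuffle spills.
ssd_disk = {"boot_disk_type": "pd-ssd", "boot_disk_size_gb": 500}

cluster = {
    "project_id": project_id,
    "cluster_name": cluster_name,
    "config": {
        "master_config": {"num_instances": 1, "machine_type_uri": "n1-standard-4",
                          "disk_config": ssd_disk},
        # The 2 non-preemptible workers mentioned in the question.
        "worker_config": {"num_instances": 2, "machine_type_uri": "n1-standard-4",
                          "disk_config": ssd_disk},
        # Preemptible secondary workers with the same larger SSD boot disk,
        # so shuffle data written to local disk is not starved for space or IOPS.
        "secondary_worker_config": {"num_instances": 8,
                                    "preemptibility": "PREEMPTIBLE",
                                    "disk_config": ssd_disk},
    },
}

operation = client.create_cluster(
    request={"project_id": project_id, "region": region, "cluster": cluster}
)
print(operation.result().cluster_name)

Whether the input data should additionally be staged on HDFS (option C) or left on GCS with only the boot disks enlarged (option D) is exactly the trade-off the question asks you to weigh; the sketch only shows where the disk settings live in the cluster configuration.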
Correct Answer: