Multiple Choice
Your analytics team wants to build a simple statistical model to determine which customers are most likely to work with your company again, based on a few different metrics. They want to run the model on Apache Spark, using data housed in Google Cloud Storage, and you have recommended using Google Cloud Dataproc to execute this job. Testing has shown that this workload can run in approximately 30 minutes on a 15-node cluster, outputting the results into Google BigQuery. The plan is to run this workload weekly. How should you optimize the cluster for cost?
A) Migrate the workload to Google Cloud Dataflow
B) Use preemptible virtual machines (VMs) for the cluster
C) Use a higher-memory node so that the job runs faster
D) Use SSDs on the worker nodes so that the job can run faster
Correct Answer: B) Use preemptible virtual machines (VMs) for the cluster

Verified

Explanation: This is a short (about 30 minutes), fault-tolerant batch job that runs only once a week, so it can tolerate occasional node preemptions and simply be re-run if interrupted. Preemptible VMs cost substantially less than standard VMs, making them the right cost optimization. Migrating to Dataflow (A) would mean rewriting the Spark job, and higher-memory nodes (C) or SSDs (D) would raise, not lower, the cost.
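As a hedged illustration of the recommended setup, the sketch below creates a Dataproc cluster that keeps a small number of standard primary workers (Dataproc requires non-preemptible primary workers) and adds preemptible secondary workers for the bulk of the 15 nodes. The cluster name, region, and machine type are placeholder assumptions, not values from the question.

```shell
# Sketch only: create a 15-worker Dataproc cluster where most workers
# are preemptible secondary workers to reduce cost.
# "weekly-model-cluster", the region, and the machine type are
# hypothetical values chosen for illustration.
gcloud dataproc clusters create weekly-model-cluster \
    --region=us-central1 \
    --master-machine-type=n1-standard-4 \
    --worker-machine-type=n1-standard-4 \
    --num-workers=2 \
    --num-secondary-workers=13 \
    --secondary-worker-type=preemptible
```

Because the job is weekly and short-lived, the cluster can also be deleted after each run (or created per run via a workflow template) so that no cost accrues between runs.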