Multiple Choice
You are implementing security best practices on your data pipeline. Currently, you are manually executing jobs as the Project Owner. You want to automate these jobs by taking nightly batch files containing non-public information from Google Cloud Storage, processing them with a Spark Scala job on a Google Cloud Dataproc cluster, and depositing the results into Google BigQuery. How should you securely run this workload?
A) Restrict the Google Cloud Storage bucket so only you can see the files
B) Grant the Project Owner role to a service account, and run the job with it
C) Use a service account with the ability to read the batch files and to write to BigQuery
D) Use a user account with the Project Viewer role on the Cloud Dataproc cluster to read the batch files and write to BigQuery
Correct Answer: C

A dedicated service account granted only the permissions the job needs (read access to the Cloud Storage batch files and write access to BigQuery) follows the principle of least privilege. Restricting the bucket to yourself (A) does not enable automation; granting Project Owner to a service account (B) is far more access than the job requires; and a user account (D) should not run automated workloads, and Project Viewer would not permit writes to BigQuery anyway.

Verified
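The setup in option C can be sketched with the gcloud CLI. All names here (my-project, the nightly-batch bucket, the etl-runner account, the region) are hypothetical placeholders, and the exact roles your job needs may differ; this is a minimal least-privilege sketch, not a definitive configuration.

```shell
# Hypothetical project, bucket, and account names — adjust to your environment.

# Create a dedicated service account for the nightly job
gcloud iam service-accounts create etl-runner \
    --project=my-project \
    --display-name="Nightly batch ETL"

SA="etl-runner@my-project.iam.gserviceaccount.com"

# Read-only access to the batch files in Cloud Storage
gsutil iam ch "serviceAccount:${SA}:roles/storage.objectViewer" gs://nightly-batch

# Write results to BigQuery tables, and permission to run jobs
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:${SA}" --role=roles/bigquery.dataEditor
gcloud projects add-iam-policy-binding my-project \
    --member="serviceAccount:${SA}" --role=roles/bigquery.jobUser

# Attach the service account to the Dataproc cluster that runs the Spark job
gcloud dataproc clusters create etl-cluster \
    --region=us-central1 \
    --service-account="${SA}"
```

With the service account attached to the cluster, the Spark Scala job inherits exactly those permissions, so no human credentials (and no Project Owner role) are involved in the nightly run.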