Multiple Choice
You are building a new application from which you need to collect data in a scalable way. Data arrives continuously from the application throughout the day, and you expect to generate approximately 150 GB of JSON data per day by the end of the year. Your requirements are:

- Decouple the producer from the consumer.
- Store the raw ingested data indefinitely, in a space- and cost-efficient format.
- Provide near real-time SQL queries.
- Maintain at least 2 years of historical data, which will be queried with SQL.

Which pipeline should you use to meet these requirements?
A) Create an application that provides an API. Write a tool to poll the API and write data to Cloud Storage as gzipped JSON files.
B) Create an application that writes to a Cloud SQL database to store the data. Set up periodic exports of the database to write to Cloud Storage and load into BigQuery.
C) Create an application that publishes events to Cloud Pub/Sub, and create Spark jobs on Cloud Dataproc to convert the JSON data to Avro format, stored on HDFS on Persistent Disk.
D) Create an application that publishes events to Cloud Pub/Sub, and create a Cloud Dataflow pipeline that transforms the JSON event payloads to Avro, writing the data to Cloud Storage and BigQuery.
Correct Answer: D

Verified

Explanation: Cloud Pub/Sub decouples the producing application from downstream consumers. A Cloud Dataflow pipeline converts the JSON payloads to Avro, a compact binary format that makes indefinite raw storage on Cloud Storage space- and cost-efficient, while writing the same events to BigQuery supports near real-time SQL queries over 2+ years of history. Option A relies on polling rather than a decoupled messaging layer, and gzipped JSON on Cloud Storage is not queryable with SQL in near real time. Option B funnels ingestion through Cloud SQL, which scales poorly at 150 GB/day, and periodic exports are not near real-time. Option C stores Avro on HDFS on Persistent Disk, which is costly for indefinite retention and offers no SQL query layer.
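For reference, below is a minimal sketch of what the pipeline in option D could look like in Apache Beam (Python). The project, topic, table, bucket, schema, and window size are hypothetical placeholders, and a production streaming job would need more robust windowed file writing (for example, fileio.WriteToFiles) and error handling:

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Hypothetical Avro schema matching the JSON event payloads.
AVRO_SCHEMA = {
    "type": "record",
    "name": "Event",
    "fields": [
        {"name": "user_id", "type": "string"},
        {"name": "ts", "type": "long"},
    ],
}

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    events = (
        p
        # Pub/Sub decouples the producing application from this consumer.
        | "ReadFromPubSub" >> beam.io.ReadFromPubSub(
            topic="projects/my-project/topics/app-events")  # hypothetical topic
        | "ParseJSON" >> beam.Map(json.loads)
    )

    # Near real-time SQL: stream the parsed rows into BigQuery.
    events | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
        "my-project:analytics.events",  # hypothetical table
        schema="user_id:STRING,ts:INTEGER",
        write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
    )

    # Space- and cost-efficient raw archive: windowed Avro files on
    # Cloud Storage, retained indefinitely.
    (
        events
        | "Window" >> beam.WindowInto(beam.window.FixedWindows(5 * 60))
        | "WriteAvro" >> beam.io.WriteToAvro(
            "gs://my-raw-bucket/events/part",  # hypothetical bucket
            schema=AVRO_SCHEMA,
        )
    )
```

Writing the same PCollection to both sinks keeps a single ingestion path: BigQuery serves the near real-time and historical SQL requirements, while the Avro files on Cloud Storage serve as the compact raw archive.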