Multiple Choice
You have developed three data processing jobs. One executes a Cloud Dataflow pipeline that transforms data uploaded to Cloud Storage and writes results to BigQuery. The second ingests data from on-premises servers and uploads it to Cloud Storage. The third is a Cloud Dataflow pipeline that gets information from third-party data providers and uploads the information to Cloud Storage. You need to be able to schedule and monitor the execution of these three workflows and manually execute them when needed. What should you do?
A) Create a Directed Acyclic Graph (DAG) in Cloud Composer to schedule and monitor the jobs.
B) Use Stackdriver Monitoring and set up an alert with a Webhook notification to trigger the jobs.
C) Develop an App Engine application to schedule and request the status of the jobs using GCP API calls.
D) Set up cron jobs in a Compute Engine instance to schedule and monitor the pipelines using GCP API calls.
Correct Answer: A
Verified
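Cloud Composer is managed Apache Airflow, and a DAG gives you exactly what the question asks for: scheduled execution, dependency ordering between the jobs, built-in monitoring of task status and retries, and the ability to trigger a run manually on demand. The other options would require building scheduling and monitoring logic by hand. Below is a minimal sketch of what such a DAG might look like. The DAG id, schedule, and the launcher commands (ingest_onprem.py, third_party_pipeline.py, transform_pipeline.py) are illustrative placeholders, not part of the question; in a real Composer environment the two Dataflow pipelines would more likely be launched with the Google provider's Dataflow operators.

```
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="three_data_processing_jobs",  # hypothetical DAG name
    schedule_interval="@daily",           # scheduled run; adjust as needed
    start_date=datetime(2024, 1, 1),
    catchup=False,                        # don't backfill missed runs
) as dag:
    # Second job: ingest data from on-premises servers into Cloud Storage.
    ingest_onprem = BashOperator(
        task_id="ingest_onprem_to_gcs",
        bash_command="python ingest_onprem.py",  # placeholder launcher
    )

    # Third job: Dataflow pipeline pulling third-party data into Cloud Storage.
    ingest_third_party = BashOperator(
        task_id="ingest_third_party_to_gcs",
        bash_command="python third_party_pipeline.py",  # placeholder launcher
    )

    # First job: Dataflow pipeline transforming Cloud Storage data into BigQuery.
    transform_to_bq = BashOperator(
        task_id="transform_gcs_to_bigquery",
        bash_command="python transform_pipeline.py",  # placeholder launcher
    )

    # Run the transform only after both ingestion jobs have landed data in GCS.
    [ingest_onprem, ingest_third_party] >> transform_to_bq
```

Airflow also covers the manual-execution requirement: a DAG run can be started on demand from the Airflow web UI or with the Airflow CLI (`airflow dags trigger <dag_id>`), while the scheduler continues to handle the recurring runs.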