Multiple Choice
You need to create a data pipeline that copies time-series transaction data so that it can be queried from within BigQuery by your data science team for analysis. Every hour, thousands of transactions are updated with a new status. The size of the initial dataset is 1.5 PB, and it will grow by 3 TB per day. The data is heavily structured, and your data science team will build machine learning models based on this data. You want to maximize performance and usability for your data science team. Which two strategies should you adopt? (Choose two.)
A) Denormalize the data as much as possible.
B) Preserve the structure of the data as much as possible.
C) Use BigQuery UPDATE to further reduce the size of the dataset.
D) Develop a data pipeline where status updates are appended to BigQuery instead of updated.
E) Copy a daily snapshot of transaction data to Cloud Storage and store it as an Avro file. Use BigQuery's support for external data sources to query.
Correct Answer:

Verified
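Option D describes an append-only pattern: each status change is written as a new row, and the current state is recovered at query time by taking the most recent row per transaction (in BigQuery, typically `ROW_NUMBER() OVER (PARTITION BY transaction_id ORDER BY updated_at DESC) = 1`). A minimal sketch of that deduplication logic in plain Python (the field names `transaction_id`, `status`, and `updated_at` are illustrative assumptions, not from the question):

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class StatusRow:
    transaction_id: str
    status: str
    updated_at: datetime

# Append-only log: every status change is a new row, never an UPDATE.
rows = [
    StatusRow("tx1", "PENDING", datetime(2024, 1, 1, 9, 0)),
    StatusRow("tx1", "SETTLED", datetime(2024, 1, 1, 10, 0)),
    StatusRow("tx2", "PENDING", datetime(2024, 1, 1, 9, 30)),
]

def latest_status(rows):
    """Keep only the most recent row per transaction_id -- the same
    result a PARTITION BY ... ORDER BY updated_at DESC window query
    would return over an append-only BigQuery table."""
    latest = {}
    for row in rows:
        current = latest.get(row.transaction_id)
        if current is None or row.updated_at > current.updated_at:
            latest[row.transaction_id] = row
    return latest

print({tid: r.status for tid, r in latest_status(rows).items()})
# {'tx1': 'SETTLED', 'tx2': 'PENDING'}
```

Appending instead of updating avoids BigQuery DML overhead at thousands of updates per hour and preserves the full status history, which is itself useful for model training.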