Multiple Choice
A company has set up and deployed its machine learning (ML) model into production with an endpoint using Amazon SageMaker hosting services. The ML team has configured automatic scaling for its SageMaker instances to support workload changes. During testing, the team notices that additional instances are being launched before the new instances are ready. This behavior needs to change as soon as possible. How can the ML team solve this issue?
A) Decrease the cooldown period for the scale-in activity. Increase the configured maximum capacity of instances.
B) Replace the current endpoint with a multi-model endpoint using SageMaker.
C) Set up Amazon API Gateway and AWS Lambda to trigger the SageMaker inference endpoint.
D) Increase the cooldown period for the scale-out activity.
Correct Answer: D) Increase the cooldown period for the scale-out activity.
Verified

Explanation: The scale-out cooldown period is the time Application Auto Scaling waits after a scale-out activity before it starts another one. Increasing it gives newly launched instances time to come into service before additional instances are launched, which is exactly the behavior the team wants. Options A, B, and C do not address the timing of scale-out activities.
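As a minimal sketch of option D, the cooldown is set in the target-tracking policy configuration passed to Application Auto Scaling for the endpoint variant. The endpoint name, variant name, policy name, and the 600-second value below are hypothetical placeholders, not values from the question.

```python
# Target-tracking policy configuration for a SageMaker endpoint variant.
# ScaleOutCooldown is the number of seconds Application Auto Scaling waits
# after a scale-out activity before starting another one, giving newly
# launched instances time to come into service.
policy_config = {
    "TargetValue": 70.0,
    "PredefinedMetricSpecification": {
        "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
    },
    "ScaleOutCooldown": 600,  # hypothetical: raised to delay further scale-out
    "ScaleInCooldown": 300,
}

# With boto3 (assumed installed and configured with credentials), the policy
# would be applied roughly as follows (names are hypothetical):
# import boto3
# client = boto3.client("application-autoscaling")
# client.put_scaling_policy(
#     PolicyName="my-endpoint-scaling-policy",
#     ServiceNamespace="sagemaker",
#     ResourceId="endpoint/my-endpoint/variant/AllTraffic",
#     ScalableDimension="sagemaker:variant:DesiredInstanceCount",
#     PolicyType="TargetTrackingScaling",
#     TargetTrackingScalingPolicyConfiguration=policy_config,
# )
```

The key point is that only `ScaleOutCooldown` controls how soon another scale-out can begin; the scale-in cooldown (option A) governs instance removal and would not stop premature launches.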