Google Professional Machine Learning Engineer Professional-Machine-Learning-Engineer Question # 4 Topic 1 Discussion

Question #: 4
Topic #: 1

You recently developed a deep learning model using Keras, and now you are experimenting with different training strategies. First, you trained the model using a single GPU, but the training process was too slow. Next, you distributed the training across 4 GPUs using tf.distribute.MirroredStrategy (with no other changes), but you did not observe a decrease in training time. What should you do?


A. Distribute the dataset with tf.distribute.Strategy.experimental_distribute_dataset.
B. Create a custom training loop.
C. Use a TPU with tf.distribute.TPUStrategy.
D. Increase the batch size.
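A note on the scenario above: under tf.distribute.MirroredStrategy, the *global* batch is split evenly across replicas, so if the batch size is left unchanged from the single-GPU run, each of the 4 GPUs processes only a quarter of the original batch per step and stays under-utilized. The sketch below illustrates the batch-size arithmetic only; `scaled_global_batch` is a hypothetical helper, not a TensorFlow API, and the MirroredStrategy usage shown in comments is the standard pattern from the TensorFlow distributed-training guide.

```python
def scaled_global_batch(per_replica_batch: int, num_replicas: int) -> int:
    """Global batch size that keeps each replica at its original per-GPU batch."""
    return per_replica_batch * num_replicas

# Single-GPU training used a batch of 64 (illustrative value).
single_gpu_batch = 64

# With 4 GPUs under MirroredStrategy, scale the global batch so each
# replica still receives 64 examples per step:
global_batch = scaled_global_batch(single_gpu_batch, num_replicas=4)
print(global_batch)        # 256
print(global_batch // 4)   # 64 examples per replica, same as before

# In real TensorFlow code the pattern would look roughly like:
#   strategy = tf.distribute.MirroredStrategy()
#   global_batch = single_gpu_batch * strategy.num_replicas_in_sync
#   dataset = dataset.batch(global_batch)
#   with strategy.scope():
#       model = build_and_compile_model()
#   model.fit(dataset, ...)
```

Without this scaling, each step does less work per GPU while still paying the fixed per-step synchronization cost, which is consistent with seeing no reduction in training time.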



