Nutanix documentation identifies deduplication as providing the greatest benefit for data sets with substantial duplicate content, and it explicitly calls out full-clone VDI workloads as one of the strongest matches. In a full-clone desktop environment, many virtual desktops contain nearly identical operating-system files, application binaries, and repeated data blocks. Capacity deduplication is designed to eliminate those repeated blocks, so only one physical copy is stored, which reduces the total physical storage consumed by the environment. This is exactly why Nutanix recommends deduplication for such use cases.
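To see why cloned desktops deduplicate so well, here is a minimal sketch of block-level deduplication: each fixed-size block is fingerprinted with a hash, and only the first occurrence of each fingerprint consumes physical space. The `dedup_savings` function, the 4 KiB block size, and the toy disk images are illustrative assumptions, not the Nutanix on-disk implementation.

```python
import hashlib

def dedup_savings(disks, block_size=4096):
    """Estimate logical vs. physical bytes under block-level dedup.

    `disks` is a list of bytes objects, each standing in for one
    virtual disk image (hypothetical data, not Nutanix's format).
    """
    seen = set()
    logical = physical = 0
    for disk in disks:
        for i in range(0, len(disk), block_size):
            block = disk[i:i + block_size]
            fp = hashlib.sha1(block).hexdigest()  # fingerprint the block
            logical += len(block)
            if fp not in seen:  # store only the first physical copy
                seen.add(fp)
                physical += len(block)
    return logical, physical

# Ten "full clones" of the same 8-block OS image: dedup keeps one copy
# of each block, so the effective reduction ratio is 10:1.
os_image = b"".join(bytes([b]) * 4096 for b in range(8))
logical, physical = dedup_savings([os_image] * 10)
print(logical // physical)  # → 10
```

The same idea scales to a real VDI farm: because every full clone starts from the same golden image, the vast majority of blocks hash to fingerprints already stored, and only per-desktop writes add new physical data.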
The other answers are weaker matches. Thin provisioning is broadly useful but is not the signature optimization for repeated-content VDI storage. Erasure coding is powerful for write-cold data and capacity efficiency, but it is not what Nutanix recommends specifically for full-clone VDI patterns. “Post-process map reduce” is not a standard storage-efficiency feature in this context. For exam purposes, whenever Nutanix asks about full-clone VDI, the key association is capacity deduplication, because that feature directly exploits the high block similarity across cloned desktops. Therefore B is the correct answer.