A customer is using an older nearline SAS scale-out storage system to store a data set of static size. The customer recently purchased several HPC appliances to process that data.
What happens to the workflow?
A high-profile trading company requires sub-millisecond latency for their trading algorithms to work properly. They are currently using Direct Attached Storage devices, which are cumbersome to administer. The company needs a network storage solution on which to run the application and store market data.
Why should the company use a SAN as the solution?
A current FlashBlade customer has a chip design process that is not meeting their SLAs. Although the customer is using fast remove, the IOPS on their 7x17TB FlashBlade have plateaued around 700K IOPS.
What should the architect recommend adding?
A customer reports far lower than expected performance on the FlashBlade in an Oracle RMAN backup solution. The environment consists of four Oracle database nodes, each with 1x10Gb/s NIC, connected via NFS to a fully populated FlashBlade chassis. Only a small number of blades are being used. Synthetic performance testing shows no performance issues in the network.
What should the architect suggest is causing this issue?
A customer is developing an AI workflow that ingests sensor data to FlashBlade NFS from its cloud edge collection point for further analysis.
The customer's HPC cluster mounts the data over NFS for training but uses S3 for labeling.
Which challenge will the architect experience with this design?
A retail customer is designing a new application that will train an AI algorithm with metadata from purchase transactions.
The customer has the following constraints:
- Billions of transactions per hour
- Hundreds of thousands of clients
- Easily connect and disconnect from many network locations
- Asymmetric encryption across a WAN
- Resilience to network latency
Which protocol should the architect recommend?
A customer asks an architect to help troubleshoot low throughput in their high-performance computing (HPC) environment. All 70 HPC nodes have a single 10Gb connection to a 96-port 10Gb switch. The FlashBlade is connected to their dedicated HPC switch with 8x10Gb connections. The HPC application is using a single shared S3 bucket for the data being processed.
Which change is needed to increase throughput?
A customer wants to use FlashBlade as storage for a business-critical, high-traffic SQL Server instance.
Why will this architecture fail?
A customer has two workflows. One workflow can use either SMB 3.0 or iSCSI. The other only leverages NFS.
What should the architect recommend as a strategy?
A state agency wants to store property deed images in multi-page .tif format on a FlashBlade NFS file system.
What compression ratio should the architect use for sizing?