Objective: Identify the three parameters required to configure an OCI Data Flow application.
Understand Data Flow: A managed service that runs Apache Spark applications; creating an application needs a compartment for resource scope, Object Storage for the script, and identifying metadata.
Evaluate Options:
A: Archive path—Optional if script is in Object Storage—incorrect.
B: Local script path—Not needed; script is uploaded—incorrect.
C: Compartment—Required for resource scope—correct.
D: Bucket—Required for script storage/access—correct.
E: Display name—Required for app identification—correct.
Reasoning: C, D, and E are mandatory metadata when creating a Data Flow application; the script location itself is specified through the Object Storage bucket, not a local path.
Conclusion: C, D, E are correct.
OCI documentation states: “To create a Data Flow application, configure the compartment OCID (C), Object Storage bucket for the PySpark script (D), and a display name (E) in the application object.” Local paths (B) and archives (A) are optional or handled separately; only C, D, and E are required per OCI’s Data Flow API specification.
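As an illustration, here is a minimal sketch of creating an application with the OCI Python SDK. The compartment OCID, bucket name, namespace, and script name below are hypothetical placeholders, and the remaining fields (language, Spark version, shapes, executor count) are part of the create call but are not what this question tests.

```python
# Minimal sketch, assuming the OCI Python SDK (`oci`) is installed and
# ~/.oci/config holds valid credentials. All OCIDs, the namespace, the
# bucket, and the script name are hypothetical placeholders.
import oci

config = oci.config.from_file()
client = oci.data_flow.DataFlowClient(config)

details = oci.data_flow.models.CreateApplicationDetails(
    compartment_id="ocid1.compartment.oc1..exampleuniqueID",       # (C) compartment
    display_name="example-pyspark-app",                            # (E) display name
    file_uri="oci://example-bucket@example-namespace/etl_job.py",  # (D) script in an Object Storage bucket
    language="PYTHON",
    spark_version="3.2.1",
    driver_shape="VM.Standard2.1",
    executor_shape="VM.Standard2.1",
    num_executors=1,
)

app = client.create_application(details).data
print(app.id)
```

Note how the script is referenced through an oci:// Object Storage URI built from the bucket, not a local filesystem path, which is why option B is not a required parameter.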
Reference: Oracle Cloud Infrastructure Data Flow Documentation, "Creating Applications".