Chain-of-Thought (CoT) prompting explicitly instructs an LLM to produce intermediate reasoning steps before its final answer, which improves performance on complex tasks, so Option B is correct. Option A (Step-Back prompting) asks the model to reframe the problem at a higher level of abstraction rather than emit step-by-step reasoning. Option C (Least-to-Most prompting) decomposes a task into simpler subtasks solved in sequence, without necessarily exposing the reasoning for each. Option D (In-Context Learning) conditions the model on worked examples in the prompt rather than instructing it to reason step by step. CoT improves both transparency and accuracy.
The OCI 2025 Generative AI documentation likely covers CoT under advanced prompting techniques; a minimal prompt sketch follows below.
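To make the distinction concrete, here is a minimal Python sketch contrasting a direct prompt with a CoT prompt. The sample question, the prompt wording, and the `call_llm` stub are illustrative assumptions, not part of any OCI API; the point is only the prompt structure that characterizes CoT.

```python
# Minimal sketch of Chain-of-Thought (CoT) prompting vs. a direct prompt.
# The question text, prompt wording, and call_llm() stub are hypothetical.

QUESTION = (
    "A store has 23 apples. It sells 9 and receives a delivery of 14. "
    "How many apples are there now?"
)

# Direct prompt: asks only for the final answer.
direct_prompt = f"Q: {QUESTION}\nA:"

# Chain-of-Thought prompt: explicitly instructs the model to show its
# intermediate reasoning steps before stating the final answer (Option B).
cot_prompt = (
    f"Q: {QUESTION}\n"
    "Think step by step. Show each intermediate reasoning step, "
    "then give the final answer on its own line prefixed with 'Answer:'."
)


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever chat/completion client you use."""
    raise NotImplementedError("Replace with your model client of choice.")


if __name__ == "__main__":
    print("--- direct prompt ---")
    print(direct_prompt)
    print("--- chain-of-thought prompt ---")
    print(cot_prompt)
```

The only difference is the instruction to show reasoning; Step-Back, Least-to-Most, and In-Context Learning would change what goes into the prompt (an abstraction step, a subtask decomposition, or worked examples) rather than asking for visible intermediate steps.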