Diffusion models, a class of generative AI models, operate in two phases: forward diffusion and reverse diffusion. As described in NVIDIA's documentation on generative AI, forward diffusion progressively injects noise into a data sample (e.g., an image or text embedding) over many steps, transforming it into a sample from a noise distribution (typically Gaussian). Reverse diffusion, conversely, starts from a noise vector and iteratively denoises it to generate a new sample that resembles the training data distribution. This process is central to models such as DDPM (Denoising Diffusion Probabilistic Models; Ho et al., 2020). Option A is incorrect because forward diffusion adds noise rather than generating samples. Option B is false because diffusion models typically use convolutional (U-Net) or transformer-based architectures, not recurrent networks. Option C is misleading because diffusion does not correspond to bottom-up/top-down processing paradigms.
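The forward-diffusion phase described above can be sketched in a few lines. This is a minimal, hypothetical illustration of the DDPM closed-form noising step q(x_t | x_0); the linear beta schedule, the number of steps T, and the toy 8x8 "image" are illustrative assumptions, not details from the source.

```python
import numpy as np

# Illustrative DDPM-style forward diffusion (after Ho et al., 2020).
# The schedule values below are assumptions chosen for the sketch.
T = 1000
betas = np.linspace(1e-4, 0.02, T)      # linear noise schedule beta_t
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)         # cumulative product: alpha_bar_t

def forward_diffuse(x0, t, rng):
    """Sample x_t from q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps,
    where eps ~ N(0, I)."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))        # toy "image"
x_mid = forward_diffuse(x0, 500, rng)   # partially noised sample
x_end = forward_diffuse(x0, T - 1, rng) # nearly pure Gaussian noise
```

By the final step, alpha_bar_t is close to zero, so x_t retains almost no signal from x0; reverse diffusion trains a network to undo these steps one at a time, which this sketch does not cover.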
References:
- NVIDIA Generative AI Documentation: https://www.nvidia.com/en-us/ai-data-science/generative-ai/
- Ho, J., et al. (2020). "Denoising Diffusion Probabilistic Models."