The Rectified Linear Unit (ReLU) activation function is defined as f(x) = max(0, x). In the positive interval, where x > 0, the derivative of ReLU is always 1. This makes ReLU popular in deep networks because it helps avoid the vanishing gradient problem during backpropagation, allowing gradients to flow efficiently through many layers.
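A minimal NumPy sketch illustrating this (the function names relu and relu_grad are illustrative, not from the source): the gradient is exactly 1 for every positive input and 0 otherwise, so positive activations pass gradients through unscaled.

```python
import numpy as np

def relu(x):
    # ReLU activation: max(0, x), applied elementwise
    return np.maximum(0.0, x)

def relu_grad(x):
    # Derivative of ReLU: 1 where x > 0, 0 where x < 0
    # (0 is used at x = 0 by common convention)
    return (x > 0).astype(x.dtype)

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(x))       # [0.  0.  0.  0.5 2. ]
print(relu_grad(x))  # [0. 0. 0. 1. 1.]
```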
[Reference: Huawei HCIA-AI Certification, Deep Learning Overview – Activation Functions.]