ChatGPT recommends the paper “Rectified Linear Units Improve Restricted Boltzmann Machines” by Vinod Nair and Geoffrey E. Hinton, as it is one of the foundational papers introducing and exploring the benefits of ReLUs in neural networks. It also says it is a good starting point for learning about ReLUs and their advantages in machine learning models.
But from your experience, do you have any other papers, textbooks, or even videos that you would recommend to someone learning about it? I don’t mind if they’re math-heavy, as I have a BSc Honours in Applied Math.
Thanks!
ReLU itself is dead simple. Why it matters is more complicated.
Here is all you need to learn about ReLU:

relu(x) = max(0, x)
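In code it really is a one-liner; a minimal sketch in NumPy (the function name `relu` is my own choice):

```python
import numpy as np

def relu(x):
    # Elementwise max(0, x): negative inputs become 0, positive pass through.
    return np.maximum(0, x)

out = relu(np.array([-2.0, -0.5, 0.0, 1.5]))  # zeros for non-positive inputs, identity elsewhere
```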
I don’t think there are books specifically focused on ReLU, and probably there’s no need for one. There is, however, a lot of information scattered through papers, but the fundamental concepts to keep in mind are not that many, imho.

ReLU is piecewise linear, and the pieces are the two halves of its domain: on one half it is just zero, on the other ReLU(x) = x, so it is very easy and fast to compute. That single kink is enough to make it nonlinear, which allows powerful expressivity and makes a neural network a potential universal approximator. Many, often most, activations are zero, and that sparsity is useful as long as it’s not always the same set of units that outputs zero.

The drawbacks stem from the same characteristics: units may die (always output zero and never learn via backprop), the derivative is undefined at one point (0) even though the function is continuous, and there’s no way to distinguish small from large negative values since they all map to 0.
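To make the sparsity and gradient points concrete, here is a small NumPy sketch (variable names are my own; the convention of assigning gradient 0 at the kink is one common choice):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)      # zero-mean inputs, e.g. pre-activations

out = np.maximum(0, x)           # ReLU

# Sparsity: roughly half the activations are exactly zero for zero-mean input.
sparsity = np.mean(out == 0)

# Subgradient used in backprop: 1 where x > 0, else 0
# (at x = 0 the derivative is undefined; frameworks typically pick 0).
grad = (x > 0).astype(float)
```

A unit is “dead” when its `grad` entry is 0 for every input it ever sees: no gradient flows back, so its weights never update.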
I believe it was first created in “Cognitron: A self-organizing multilayered neural network”, but was not referred to as ReLU there. It was popularized by “Deep Sparse Rectifier Neural Networks” and “Rectified Linear Units Improve Restricted Boltzmann Machines”.
In regard to deep learning and GPU use: ReLU is efficient compared to other activation functions because it consists of just a comparison and thresholding, and its derivative is 1 for positive inputs and 0 otherwise, which makes backpropagation cheap. It’s effective because it adds non-linearity to stacks of otherwise linear operations like convolutions.
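The backward pass described above is just a mask; a sketch of how it might look (function name `relu_backward` is hypothetical, not from any particular framework):

```python
import numpy as np

def relu_backward(grad_output, x):
    # ReLU's derivative is 1 where the input was positive and 0 elsewhere,
    # so backprop simply masks the incoming gradient: no exp, no division.
    return grad_output * (x > 0)

x = np.array([-1.0, 2.0, 3.0])       # inputs seen in the forward pass
g = np.array([0.5, 0.5, 0.5])        # gradient arriving from the next layer
g_in = relu_backward(g, x)           # gradient zeroed where x was negative
```

Compare this with sigmoid or tanh, whose backward passes need the saved activation and a multiply with a nonlinear expression; the ReLU mask is why it maps so well to GPUs.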