The absolute basics required to understand ML/DL are calculus, linear algebra, probability, and some convex optimisation. We are all aware of that.
But ML and DL have become a vast field, both in breadth and depth. No single person can understand it entirely; there are specialisations, sub-specialisations, and further subdivisions still.
If you work in a branch of ML/DL research where other mathematical fundamentals are needed to understand research papers and do innovative work, could you mention your field and the maths required to gain entry into it?
Geometric deep learning is a relatively small but growing field, heavily based on group theory and representation theory. My own research on the subject was quite foundational/general and also required differential geometry, gauge theory, harmonic analysis, and functional analysis. Everything centered around equivariance: building problem-dependent local/global symmetries into the network architecture in order to exploit weight sharing and reduce the amount of data the network needs to learn.
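To make the weight-sharing point concrete, here is a minimal toy sketch (my own illustration, assuming NumPy/SciPy; the names `psi` and `lift` are just for the example) of a C4 "lifting" convolution in the spirit of group equivariant CNNs: one filter applied at all four 90° rotations, so rotating the input merely rotates and permutes the output stack rather than requiring new weights or new training data.

```python
# Toy C4-equivariant "lifting" convolution: a single filter shared across
# all four 90-degree rotations. Rotating the input rotates each output
# channel and cyclically shifts the group axis -- no extra parameters needed.
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(0)
f = rng.standard_normal((8, 8))    # input "image"
psi = rng.standard_normal((3, 3))  # one learnable filter, shared over C4

def lift(f, psi):
    """Correlate f with all four rotated copies of psi (weight sharing over C4)."""
    return np.stack([correlate2d(f, np.rot90(psi, k), mode="valid")
                     for k in range(4)])

y = lift(f, psi)                   # shape (4, 6, 6): one channel per rotation
y_rot = lift(np.rot90(f), psi)     # the same layer applied to the rotated input

# Equivariance check: a 90-degree rotation of the input shows up as a
# 90-degree rotation of each channel plus a cyclic shift of the group axis.
for k in range(4):
    assert np.allclose(y_rot[k], np.rot90(y[(k - 1) % 4]))
print("C4-equivariant: rotated input -> rotated + permuted output stack")
```

The same filter handling all four orientations is exactly the data-efficiency argument above: the network never has to re-learn a rotated copy of a feature it already knows.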
I’m an undergraduate, and I was first introduced to this field two years ago through a blog post on gauge equivariant CNNs. I was working as a software engineer at the time, but the elegance of it all made me go back to college. Do you have any recommendations for projects at the undergrad level, or people/programs to reach out to? (I have a thesis class next semester and I’d really love to do it on GDL.)
I’m working on a problem in this area now. I’m currently reading the paper Equivariant Neural Rendering, but it doesn’t seem very sophisticated. Can you recommend any better geometric approaches to the novel view synthesis problem? Over the past few days I’ve been reading a lot by Hinton about how CNNs are bad at geometry, but his preferred solution, Capsule Networks, doesn’t seem to scale very well.