At what point do you think there was an inflection point in the technical expertise and credentials required for mid-to-top-tier ML roles? Or was there never one? To be specific, would knowing simple scikit-learn algorithms, or the basics of decision trees/SVMs, qualify you for full-fledged roles only in the past, or does it still today? At what point did FAANGs boldly state "preferred (required) to have publications at top-tier venues (ICLR, ICML, CVPR, NIPS, etc.)" in their job postings?
I use the word 'creep' in the same sense 'power creep' is used in battle animes, where the scale of power slowly grows so irrationally large that anything from the past looks extremely weak.
Back in late 2016 I landed my first ML role at a defense firm (lol), but to be fair I had just watched a couple of ML courses on YouTube, taken maybe 2 ML grad courses, and had an incomplete working knowledge of CNNs. Never used TensorFlow; had some experience with Theano, not sure if it even exists anymore.
I’m certain that skill set would be insufficient in the 2023 ML industry. But it raises the question: is this skill creep making the job market impenetrable for folks who weren’t already working post 2012-2014?
Neural architectures are becoming increasingly complex. You want to develop a multi-modal architecture for an embodied agent? Well, you’d better know a good mix of DL spanning RL + CV + NLP. Improving latency on edge devices? How well do you know your ONNX/TensorRT/CUDA kernels? Your classes likely didn’t even teach you those. A master’s is the new bachelor’s degree, and that’s just to give you a fighting chance.
Yeah, not sure if it was after the release of AlexNet in 2012, TensorFlow in 2015, attention/Transformers in 2017, or now ChatGPT, but the skill creep is definitely driving ever-faster growth in the field’s technical rigor. Close your eyes for 2 years and your models feel prehistoric, and your CUDA, PyTorch, Nvidia driver, and NumPy versions need a fat upgrade.
Thoughts, y’all?
Yes.
But “the haves” like to pretend it’s not, in order to make everything seem “fair”.
Geoff Hinton’s 1986 backpropagation research paper is like 4 pages.
Nowadays this would be called a brain fart.
And it had already been invented like a dozen times. Also, it’s just the chain rule.
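To make the "just the chain rule" point concrete, here's a minimal sketch (my own toy example, not from Hinton's paper): for a single sigmoid neuron with squared loss, the backward pass is nothing but the chain rule applied factor by factor.

```python
import math

def forward_backward(w, b, x, t):
    """Forward pass y = sigmoid(w*x + b), loss L = (y - t)^2,
    then gradients of L w.r.t. w and b via the chain rule."""
    z = w * x + b                    # pre-activation
    y = 1.0 / (1.0 + math.exp(-z))   # sigmoid activation
    L = (y - t) ** 2                 # squared-error loss

    # Chain rule, outermost factor to innermost:
    dL_dy = 2.0 * (y - t)            # d/dy of (y - t)^2
    dy_dz = y * (1.0 - y)            # sigmoid derivative
    dL_dz = dL_dy * dy_dz
    dL_dw = dL_dz * x                # dz/dw = x
    dL_db = dL_dz                    # dz/db = 1
    return L, dL_dw, dL_db
```

A finite-difference check confirms the analytic gradients match; "backprop" in a deep net is the same multiplication of local derivatives, just repeated layer by layer.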
Hinton’s paper was famous not because he claimed to invent backprop but because (iirc) it was the first instance of it being used to optimize neural nets.
Like the Transformer paper is famous, but it didn’t invent attention; it just applied it in a novel way.