So!
I am an ML newbie and was wondering if one of you pros can help me out on my learning journey (tool = Google Colab).
I have a CSV file containing loan data where each row is a customer that applied for a loan. One of the columns is called TARGET and it shows whether the customer’s loan request was approved or not. All sorts of data points are captured, e.g. age, gender, salary, assets, and employment details like industry.
I’ve done cross-validation and found that GradientBoostingClassifier and LightGBM perform the best. Cross-validation also puts their accuracy between 68% and 70%.
My problem is that I SUCK at hyperparameter optimisation. How do you go from 68% to 80%+??? Or 90%?
For the curious ones, here is the dataset: https://drive.google.com/file/d/1IKNVstck6gnXvfGS-mVRMAE1RFrDNUgZ/view?usp=sharing
For some datasets, it’s not possible to get to 90%. Just figure out some reasonable ranges for your most important hyperparameters (you don’t have to optimize ALL of them) and run a grid search over those ranges. That’s all there is to it.
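For example, a minimal grid-search sketch; the file name "loans.csv" is a placeholder and it assumes your features are already numeric/encoded:

```python
# Minimal grid-search sketch over a few key LightGBM hyperparameters.
# Assumes the features are already numeric/encoded; "loans.csv" is a placeholder.
import pandas as pd
from lightgbm import LGBMClassifier
from sklearn.model_selection import GridSearchCV

df = pd.read_csv("loans.csv")
X = df.drop(columns=["TARGET"])
y = df["TARGET"]

# Reasonable ranges for the hyperparameters that usually matter most.
param_grid = {
    "num_leaves": [15, 31, 63],
    "learning_rate": [0.01, 0.05, 0.1],
    "n_estimators": [200, 500, 1000],
    "min_child_samples": [10, 20, 50],
}

search = GridSearchCV(LGBMClassifier(), param_grid, scoring="accuracy", cv=5, n_jobs=-1)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```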
On top of the grid search another comment suggested, there’s a more efficient approach called Bayesian optimisation. There are many algorithms that implement it. A personal favourite library of mine, Optuna, does it without the user having to think about the underlying algorithm. (By default it uses a tree-structured Parzen estimator, if you want to dig into the details.)
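A minimal Optuna sketch along those lines, reusing the X and y from the grid-search example above:

```python
# Minimal Optuna sketch; the TPE sampler is the default, so you don't have to
# pick an algorithm yourself. Reuses X, y from the grid-search example above.
import optuna
from lightgbm import LGBMClassifier
from sklearn.model_selection import cross_val_score

def objective(trial):
    params = {
        "num_leaves": trial.suggest_int("num_leaves", 15, 255),
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "n_estimators": trial.suggest_int("n_estimators", 100, 2000),
        "min_child_samples": trial.suggest_int("min_child_samples", 5, 100),
        "colsample_bytree": trial.suggest_float("colsample_bytree", 0.5, 1.0),
    }
    model = LGBMClassifier(**params)
    return cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=100)
print(study.best_params, study.best_value)
```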
It’s wise to get a feel for the distribution of the data, so plot feature against feature to identify what should contribute to separability; classification demands separable data. Also consider testing a one-dimensional feature set, since that is your worst case: if a single feature does no better than a random guess, you can’t expect it to help much when combined with more features.
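A rough sketch of what I mean (I haven’t looked at the actual CSV, so "AGE" and "SALARY" are just placeholder column names):

```python
# Pairwise plots to eyeball separability, plus a single-feature baseline.
# "AGE" and "SALARY" are placeholder column names.
import seaborn as sns
from lightgbm import LGBMClassifier
from sklearn.model_selection import cross_val_score

sns.pairplot(df[["AGE", "SALARY", "TARGET"]], hue="TARGET")

# A useful single feature should at least beat the majority-class rate.
majority = y.value_counts(normalize=True).max()
single = cross_val_score(LGBMClassifier(), X[["SALARY"]], y, cv=5, scoring="accuracy").mean()
print(f"majority-class baseline: {majority:.3f}, single-feature accuracy: {single:.3f}")
```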
You might or might not benefit from using a kernel, some non-linear transformations, and feature engineering before concentrating on hyperparameters.
Sometimes removing a feature improves the result. Make sure you normalise when features are on radically different scales.
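Something like this, as a sketch: trees don’t care much about scale, but a kernel method does, so normalise inside a pipeline ("GENDER" is a placeholder column name for the drop-a-feature comparison):

```python
# Kernel SVM with scaling inside a pipeline, plus a drop-a-feature comparison.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("all features:", cross_val_score(svm, X, y, cv=5, scoring="accuracy").mean())

# Does removing one feature help? ("GENDER" is a placeholder.)
X_reduced = X.drop(columns=["GENDER"])
print("without GENDER:", cross_val_score(svm, X_reduced, y, cv=5, scoring="accuracy").mean())
```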
And when you finally run on the whole dataset, resist the urge to peek at the held-out data. Ultimately you are looking for a model that is not overfitting and has a chance to perform on unseen data (of the same type).
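In practice that just means splitting off a test set before any tuning and touching it exactly once at the end, roughly:

```python
# Hold out a test set before any tuning, fit the final model on the training
# split, and score the hold-out exactly once at the end.
from sklearn.model_selection import train_test_split
from lightgbm import LGBMClassifier

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# ...tune (grid search / Optuna) using X_train, y_train only...
final_model = LGBMClassifier(**search.best_params_)  # or **study.best_params
final_model.fit(X_train, y_train)
print("hold-out accuracy:", final_model.score(X_test, y_test))
```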