So!
I'm an ML newbie and was wondering if one of you pros can help me out on my learning journey (tooling: Google Colab).
I have a CSV file containing loan data, where each row is a customer who applied for a loan. One of the columns is called TARGET and it shows whether the customer's loan request was approved or not. All sorts of data points are captured, e.g. age, gender, salary, and employment details like industry, assets, etc.
I've done cross-validation and found that GradientBoostingClassifier and LGBM perform the best. Cross-validation also tells me that their accuracy is between 68% and 70%.
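For context, here's a minimal sketch of the kind of comparison I mean (the filename is a placeholder, and it assumes the features are already numeric/encoded):

```python
# Minimal sketch: 5-fold CV accuracy for both models.
# "loan_data.csv" is a placeholder; features assumed numeric/encoded.
import pandas as pd
from lightgbm import LGBMClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

df = pd.read_csv("loan_data.csv")
X, y = df.drop(columns=["TARGET"]), df["TARGET"]

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for model in (GradientBoostingClassifier(), LGBMClassifier()):
    scores = cross_val_score(model, X, y, cv=cv, scoring="accuracy")
    print(type(model).__name__, f"{scores.mean():.3f} +/- {scores.std():.3f}")
```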
My problem is that I SUCK at hyperparameter optimisation. How do you go from 68% to 80%+??? Or 90%?
For the curious ones, here is the dataset: https://drive.google.com/file/d/1IKNVstck6gnXvfGS-mVRMAE1RFrDNUgZ/view?usp=sharing
As another comment said, you can use grid search, but there's a more sample-efficient approach called Bayesian optimisation. There are many algorithms that implement it. A personal favourite library of mine, Optuna, handles it without the user having to think about which algorithm is doing the optimisation. (It uses a Tree-structured Parzen Estimator by default, if you want to get into the details.)
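Here's a minimal sketch of what that looks like with LightGBM; the filename is a placeholder and the search space is just illustrative, not a recommendation:

```python
# Sketch: Bayesian optimisation of LightGBM hyperparameters with Optuna.
# "loan_data.csv" is a placeholder; the parameter ranges are illustrative.
import optuna
import pandas as pd
from lightgbm import LGBMClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("loan_data.csv")
X, y = df.drop(columns=["TARGET"]), df["TARGET"]

def objective(trial):
    # Optuna's sampler proposes each value based on the results of past trials.
    params = {
        "num_leaves": trial.suggest_int("num_leaves", 16, 256),
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "n_estimators": trial.suggest_int("n_estimators", 100, 1000),
        "min_child_samples": trial.suggest_int("min_child_samples", 5, 100),
    }
    return cross_val_score(LGBMClassifier(**params), X, y,
                           cv=5, scoring="accuracy").mean()

study = optuna.create_study(direction="maximize")  # TPE sampler is the default
study.optimize(objective, n_trials=50)
print(study.best_params, study.best_value)
```

Each trial's score feeds back into the sampler, so later trials concentrate on promising regions of the search space instead of sweeping a fixed grid.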