So!

I am an ML newbie and was wondering if one of you pros can help me out on my learning journey (tooling: Google Colab).

I have a CSV file containing loan data where each row is a customer that applied for a loan. One of the columns is called TARGET and it shows whether the customer’s loan request was approved or not. All sorts of data points are captured, e.g. age, gender, salary, and employment details such as industry, assets, etc.

I’ve done cross-validation and found that GradientBoostingClassifier and LGBM perform the best. Cross-validation also tells me that their accuracy is between 68% and 70%.
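For reference, a minimal sketch of that cross-validation step. Since the actual CSV path and columns aren't shown here, `make_classification` stands in for the loan data; with the real file you'd build `X` and `y` from the dataframe instead:

```python
# Sketch of 5-fold cross-validation; synthetic data stands in for the loan CSV.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
scores = cross_val_score(GradientBoostingClassifier(random_state=0),
                         X, y, cv=5, scoring="accuracy")
print(f"mean accuracy: {scores.mean():.3f}")
```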

My problem is that I SUCK at hyperparameter optimisation. How do you go from 68% to 80%+??? Or 90%?
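One common way to do that search is `RandomizedSearchCV`. The parameter ranges below are illustrative guesses, not tuned recommendations, and synthetic data again stands in for the loan CSV:

```python
# Random search over a few GradientBoostingClassifier hyperparameters.
# Ranges are assumptions for illustration, not tuned values.
from scipy.stats import randint, uniform
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=600, n_features=20, random_state=0)

search = RandomizedSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_distributions={
        "n_estimators": randint(50, 300),
        "learning_rate": uniform(0.01, 0.2),   # samples from [0.01, 0.21]
        "max_depth": randint(2, 6),
        "subsample": uniform(0.6, 0.4),        # samples from [0.6, 1.0]
    },
    n_iter=10, cv=5, scoring="accuracy", random_state=0, n_jobs=-1,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

Worth noting: tuning usually buys a few points of accuracy at most. A jump from ~70% to 90% generally comes from better features or more separable data, not from hyperparameters alone.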

For the curious ones, here is the dataset: https://drive.google.com/file/d/1IKNVstck6gnXvfGS-mVRMAE1RFrDNUgZ/view?usp=sharing

  • kduyehj@alien.top · 1 year ago

    It’s wise to get a feel for the distribution of the data, so plot feature against feature and try to identify what should contribute to separability. Classification demands separable data. Consider testing a one-dimensional feature set, since that is your worst case: if a single feature does no better than a random choice on its own, you can’t expect it to help when combined with more features.
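    That single-feature baseline can be sketched like this: score each feature on its own and compare against the ~50% chance level of a balanced binary target. Synthetic data stands in for the loan CSV here:

```python
# Score each feature individually; accuracy near 0.5 on a balanced binary
# target suggests that feature contributes no separability by itself.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, n_features=5, n_informative=2,
                           random_state=0)
for i in range(X.shape[1]):
    acc = cross_val_score(LogisticRegression(), X[:, [i]], y, cv=5).mean()
    print(f"feature {i}: {acc:.3f}")
```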

    You might or might not benefit from using a kernel, some non-linear transformations, or feature engineering before concentrating on hyperparameters.
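    As a hedged sketch of what such feature engineering might look like: a log transform for a skewed numeric column and a simple ratio feature. The column names ("salary", "age") are assumptions about the loan data, not taken from the post:

```python
# Toy example of non-linear transformation and feature engineering.
# Column names and rows are made up for illustration.
import numpy as np
import pandas as pd

df = pd.DataFrame({"salary": [30_000, 45_000, 250_000],
                   "age": [25, 40, 55]})
df["log_salary"] = np.log1p(df["salary"])            # compress the long tail
df["salary_per_year_of_age"] = df["salary"] / df["age"]  # ratio feature
print(df)
```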

    Sometimes removing a feature improves the result. Make sure you normalise when features are on radically different scales.
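    One way to normalise safely is inside a `Pipeline`, so the scaler is fit on the training folds only and test-fold statistics don't leak in. (Gradient-boosted trees are insensitive to feature scale; this matters mainly for linear and kernel models.) Synthetic stand-in data again:

```python
# StandardScaler fit per fold via a Pipeline, avoiding leakage into CV scores.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
pipe = make_pipeline(StandardScaler(), LogisticRegression())
scores = cross_val_score(pipe, X, y, cv=5)
print(scores.mean())
```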

    And when you go for it on the whole dataset, resist the urge to peek at the held-out data. Ultimately you are looking for a model that is not overfitting and has a chance of performing on unseen data (of the same type).
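    That "don't peek" workflow can be sketched as: carve off a test set once, tune only against cross-validation on the training portion, and touch the held-out set a single time at the end. Synthetic data stands in for the loan CSV:

```python
# Hold out a test set once; tune on CV over the training split only.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)

model = GradientBoostingClassifier(random_state=0)
cv_acc = cross_val_score(model, X_tr, y_tr, cv=5).mean()  # tune against this
model.fit(X_tr, y_tr)
test_acc = model.score(X_te, y_te)                        # look at this once
print(cv_acc, test_acc)
```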