How does the negative log likelihood function that’s used as the objective in gamma regression link back to the shape and scale parameters of the gamma distribution?

I’ve been looking particularly at xgboost, which uses this definition:

https://github.com/dmlc/xgboost/blob/7663de956c37eb4dd528132214e68ba2851d9696/src/metric/elementwise_metric.cu#L270-L286

I don’t understand the purpose of psi in this equation, given that it is set to 1, which would allow the final function to be simplified significantly.
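For context, here is a hedged sketch (my own, not the actual XGBoost source) of the gamma negative log likelihood written in exponential dispersion family form, with a dispersion parameter `psi`; the function names are mine. The shape parameter is `1/psi` and the scale is `mu * psi`, so fixing `psi = 1` fixes the shape at 1 and the expression collapses to the exponential-distribution NLL:

```python
import math

def gamma_nll_edf(y, mu, psi=1.0):
    """Gamma NLL in exponential dispersion family (EDF) form.

    theta = -1/mu is the canonical parameter, b(theta) = -log(-theta),
    a(psi) = psi, and c(y, psi) collects all terms free of mu.
    Implied parameterisation: shape = 1/psi, scale = mu * psi.
    """
    theta = -1.0 / mu
    a = psi
    b = -math.log(-theta)  # = log(mu)
    c = (1.0 / psi) * math.log(y / psi) - math.log(y) - math.lgamma(1.0 / psi)
    return -((y * theta - b) / a + c)

def gamma_nll_psi1(y, mu):
    """With psi = 1: c(y, 1) = log(y) - log(y) - lgamma(1) = 0,
    so the NLL simplifies to the exponential NLL y/mu + log(mu)."""
    return y / mu + math.log(mu)
```

Evaluating both at `psi = 1` shows they agree, which is why the psi terms look redundant once psi is hard-coded.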

The gamma deviance is more intuitive to me: the deviance for an individual prediction vs. actual remains constant as long as the multiplicative difference between prediction and actual stays constant, which ties in with how I interpret the connection between mean and variance for a gamma distribution. To be exact, when the gamma distribution is parameterised by the mean and a fixed shape parameter alpha, the variance increases as the mean increases.
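To illustrate that intuition, a small check (my own sketch, not taken from the linked code) that the gamma unit deviance depends on prediction and actual only through their ratio, so rescaling both by the same factor leaves it unchanged:

```python
import math

def gamma_unit_deviance(y, mu):
    """Gamma unit deviance: 2 * ((y - mu)/mu - log(y/mu)).

    Both terms depend only on the ratio y/mu, so multiplying y and mu
    by the same constant does not change the deviance.
    """
    return 2.0 * ((y - mu) / mu - math.log(y / mu))
```

For example, over-predicting by a factor of 2 gives the same deviance whether the actual is 1 or 100.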

  • jedi-son@alien.topB · 1 year ago

    Isn’t this just the log likelihood of a gamma distribution? I’m not sure I understand the question.

    • Oppose_Worry_652@alien.topOPB · 1 year ago

      A lack of understanding on my part has led to poor wording. I’m struggling to understand why the log likelihood used in the code is independent of the shape and scale parameters.

      I think I’ve just figured out my mistake… I was thinking the variance of the modelled residuals would vary with the mean, but that’s not the correct interpretation.