I’m a machine learning engineer and researcher. I got fed up with how difficult it is to understand why neural networks behave the way they do, so I wrote a library to help with it.

Comgra (computation graph analysis) is a library you can use with PyTorch to extract all the tensor data you care about and visualize it in a browser.

This allows for a much more detailed analysis of what is happening than the usual approach of using TensorBoard. You can investigate tensors as training proceeds, drill down into individual neurons, inspect specific samples that are of special interest to you, track gradients, compare statistics between different training runs, and more.
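To make the underlying idea concrete, here is a minimal sketch of the kind of recording a tool like this automates, written in plain PyTorch using forward hooks. This is not comgra’s actual API; the names (`captured`, `make_hook`) are illustrative only.

```python
import torch
import torch.nn as nn

# Illustrative sketch: capturing per-layer activations with forward hooks.
# A tool like comgra automates this kind of recording; these names are
# made up for the example and are not comgra's API.

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
captured = {}  # maps layer name -> latest activation tensor

def make_hook(name):
    def hook(module, inputs, output):
        # Detach so stored tensors don't keep the autograd graph alive.
        captured[name] = output.detach().clone()
    return hook

for name, module in model.named_modules():
    if isinstance(module, nn.Linear):
        module.register_forward_hook(make_hook(name))

x = torch.randn(2, 8)
y = model(x)
print({name: t.shape for name, t in captured.items()})
```

Once activations are captured per layer and per sample like this, drilling down into individual neurons is just indexing into the stored tensors; the hard part a dedicated tool solves is organizing and visualizing all of it across training.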

This tool has saved me a ton of time in my research by letting me check my hypotheses much faster and by helping me understand how the different parts of my network actually interact.

I hope this tool can save other people just as much time as it did me. I’m also open to suggestions on how to improve it further: since I’m already gathering and visualizing a lot of network information, adding more automated analysis would not be much extra work.

    • Smart-Emu5581@alien.top (OP) · 1 year ago

      > Mechanistic Interpretability

      It’s primarily intended for debugging, but it can also help with mechanistic interpretability. Being able to see the internals of your network for any input and at different stages of training can help a lot with understanding what’s going on.
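      As a rough illustration of what “internals at different stages of training” means, the snippet below snapshots per-parameter gradient norms at selected steps using plain PyTorch. Again, this is a hand-rolled sketch, not comgra’s API; `grad_history` and the snapshot interval are assumptions for the example.

      ```python
      import torch
      import torch.nn as nn

      # Illustrative sketch only: recording gradient statistics at
      # selected training steps, the kind of data a tool like comgra
      # collects for you. Not comgra's actual API.

      model = nn.Linear(8, 4)
      opt = torch.optim.SGD(model.parameters(), lr=0.1)
      grad_history = {}  # step -> {parameter name: gradient norm}

      for step in range(100):
          x, target = torch.randn(16, 8), torch.randn(16, 4)
          loss = nn.functional.mse_loss(model(x), target)
          opt.zero_grad()
          loss.backward()
          if step % 25 == 0:  # snapshot internals at chosen stages
              grad_history[step] = {
                  name: p.grad.norm().item()
                  for name, p in model.named_parameters()
              }
          opt.step()

      print(grad_history)  # compare how gradients evolve over training
      ```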

      • currentscurrents@alien.top · 1 year ago

        IMO interpretability and debugging are inherently related. The more you know about how the network works, the easier it will be to debug it.