• 500g_Spekulatius@alien.topB · 1 year ago

    I don’t care.

    All the research I do and have done is in PyTorch, and nearly all the research I build on is done in PyTorch.

    So why should I switch? I would have to implement all my frameworks, tests, etc. again and reimplement (and test + verify) all the related work.

    No, thanks.

  • FishFar4370@alien.topB · 1 year ago

    No. If they change the name to KerasGPT 4.0, then it will be all I think, talk, and read about for 6 months.

  • KyxeMusic@alien.topB · 1 year ago

    Started with Keras, wtf is this. Moved to PyTorch, oh this is so nice.

    Don’t plan to ever come back.

  • Relevant-Yak-9657@alien.topB · 1 year ago

    Most people here would say that PyTorch is better, but IMO Keras is fine, and no hate towards TensorFlow either. They just made a lot of questionable API design changes, and FC (François Chollet) has been weird on Twitter. For me it's pretty exciting, since keras_core has been pretty stable in my use, and it's just another great framework for people new to deep learning or for quick prototyping.

    • Erosis@alien.topB · 1 year ago

      I get access to some awesome data loading and preprocessing tools with the PyTorch backend, then swap to the TensorFlow backend for quantization to a TFLite model with almost no fuss.

      Previously, going from torch to ONNX to TFLite was somewhat annoying; there are a bunch of small road bumps you have to deal with.
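
      A minimal sketch of what that backend swap looks like in practice, assuming Keras 3 / keras_core, where the backend is picked via the KERAS_BACKEND environment variable before the first import (the toy model below is hypothetical):

      ```python
      import os

      # Assumption: Keras 3 reads the backend from this env var before the
      # first `import keras`; switching backends means a fresh process.
      os.environ["KERAS_BACKEND"] = "torch"   # or "tensorflow", "jax"

      import keras
      import numpy as np

      # Toy model: the same code runs unchanged on any of the backends.
      model = keras.Sequential([
          keras.layers.Dense(32, activation="relu"),
          keras.layers.Dense(1),
      ])
      preds = model(np.random.rand(4, 8).astype("float32"))

      # For TFLite quantization, one would rerun the same script with
      # KERAS_BACKEND="tensorflow" and hand the model to the TFLite converter.
      ```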

      • Relevant-Yak-9657@alien.topB · 1 year ago

        Yeah, unifying these tools feels like the best way to go for me too. I also like JAX for a similar reason: there are 50 different libraries with different use cases, and it's easy to mix parts of them together thanks to the common infrastructure. Like Keras losses + Flax models + Optax training + my own libraries' superclasses. It's great, tbh.
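
        A rough sketch of that kind of mixing, using a hypothetical toy model (Flax for the layers, Optax for the optimizer; the loss here is plain jnp to stay self-contained, but a Keras 3 loss would slot in the same way once the JAX backend is selected):

        ```python
        import jax
        import jax.numpy as jnp
        import flax.linen as nn
        import optax

        # Flax supplies the layer/model abstraction...
        class MLP(nn.Module):
            @nn.compact
            def __call__(self, x):
                x = nn.relu(nn.Dense(32)(x))
                return nn.Dense(1)(x)

        model = MLP()
        x, y = jnp.ones((8, 4)), jnp.zeros((8, 1))
        params = model.init(jax.random.PRNGKey(0), x)

        # ...Optax supplies the optimizer...
        tx = optax.adam(1e-3)
        opt_state = tx.init(params)

        # ...and the loss can come from yet another library.
        def loss_fn(p):
            return jnp.mean((model.apply(p, x) - y) ** 2)

        grads = jax.grad(loss_fn)(params)
        updates, opt_state = tx.update(grads, opt_state)
        params = optax.apply_updates(params, updates)
        ```

        Everything composes because all of these libraries speak the same language: pytrees of JAX arrays and pure functions.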

  • underPanther@alien.topB · 1 year ago

    Libraries like PyTorch and Jax are already high level libraries in my view. The low level stuff is C++/CUDA/XLA.

    I don’t really see the useful extra abstractions in Keras that would lure me to it.

    • Relevant-Yak-9657@alien.topB · 1 year ago

      As the others said, it's a pain to reimplement common layers in JAX specifically. PyTorch is much higher level in its nn API, but personally I despise rewriting the training loop for every implementation. That's why even JAX users rely on Flax for common layers: why use an error-prone operator like jax.lax.conv_general_dilated and fill in its ten arguments every time? I would rather use flax.linen.Conv or keras_core.layers.Conv2D in my Sequential model and spare myself a million rounds of debugging. With the PyTorch backend, model.fit() can quickly suffice and be customized later.
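
      For anyone who hasn't felt that pain, a hedged sketch of the contrast (shapes and names made up for illustration):

      ```python
      import jax
      import jax.numpy as jnp
      import flax.linen as nn

      x_nchw = jnp.ones((1, 3, 32, 32))   # batch, channels, height, width
      kernel = jnp.ones((16, 3, 3, 3))    # out_ch, in_ch, kh, kw

      # Raw JAX: correct, but strides, padding, dilations, dimension numbers,
      # grouping, etc. are all on you, every single time.
      y = jax.lax.conv_general_dilated(
          x_nchw, kernel,
          window_strides=(1, 1),
          padding="SAME",
      )

      # Flax: the same convolution as a reusable layer with sane defaults
      # (note that flax.linen.Conv expects channels-last input).
      conv = nn.Conv(features=16, kernel_size=(3, 3), padding="SAME")
      x_nhwc = jnp.ones((1, 32, 32, 3))
      params = conv.init(jax.random.PRNGKey(0), x_nhwc)
      y2 = conv.apply(params, x_nhwc)
      ```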

    • odd1e@alien.topB · 1 year ago

      What about not having to write your own training loop? Keras takes away a lot of boilerplate code; it makes your code more readable and less likely to contain bugs. I would compare it to scikit-learn: sure, you can implement your own random forest, but why bother?
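
      To make that concrete, a minimal sketch of the fit-style workflow on toy data (every argument here can of course be customized or overridden later):

      ```python
      import numpy as np
      import keras

      # Toy regression data, purely for illustration.
      x = np.random.rand(256, 16).astype("float32")
      y = np.random.rand(256, 1).astype("float32")

      model = keras.Sequential([
          keras.layers.Dense(64, activation="relu"),
          keras.layers.Dense(1),
      ])

      # compile() + fit() replaces the hand-written epoch/batch/step loop.
      model.compile(optimizer="adam", loss="mse", metrics=["mae"])
      model.fit(x, y, batch_size=32, epochs=5, validation_split=0.2)
      ```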

      • TheCloudTamer@alien.topB · 1 year ago

        The reality is that you nearly always need to break into that training abstraction, and so it is useless.
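
        The usual escape hatch is overriding train_step on a Model subclass, so that fit() still drives batching, callbacks, and metrics while only the per-batch update is customized. A rough sketch assuming tf.keras (the override hook differs slightly per backend in Keras 3):

        ```python
        import tensorflow as tf
        from tensorflow import keras

        class CustomModel(keras.Model):
            def __init__(self):
                super().__init__()
                self.dense = keras.layers.Dense(1)

            def call(self, x):
                return self.dense(x)

            # Only the per-batch update is customized; fit() still handles
            # batching, callbacks, and metric aggregation.
            def train_step(self, data):
                x, y = data
                with tf.GradientTape() as tape:
                    y_pred = self(x, training=True)
                    loss = self.compiled_loss(y, y_pred)
                grads = tape.gradient(loss, self.trainable_variables)
                self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
                self.compiled_metrics.update_state(y, y_pred)
                return {m.name: m.result() for m in self.metrics}

        model = CustomModel()
        model.compile(optimizer="adam", loss="mse", metrics=["mae"])
        model.fit(tf.random.normal((64, 8)), tf.random.normal((64, 1)), verbose=0)
        ```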

    • abio93@alien.topB · 1 year ago

      If you use JAX with Keras you are essentially doing Keras -> JAX -> jaxpr -> XLA -> LLVM -> CUDA, with probably many more intermediate levels.
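
      One of those intermediate levels is easy to inspect directly; a small sketch using jax.make_jaxpr to print the jaxpr a function is traced into (XLA compilation and everything below it happens further down the stack):

      ```python
      import jax
      import jax.numpy as jnp

      def f(x):
          return jnp.sum(jnp.tanh(x) ** 2)

      # Prints the jaxpr intermediate representation of f.
      print(jax.make_jaxpr(f)(jnp.ones((3,))))
      ```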

  • skiddadle400@alien.topB · 1 year ago

    Yes! I know all the cool kids use PyTorch, but I find it's too much boilerplate. I like Keras. So this is great news.

    • Relevant-Yak-9657@alien.topB · 1 year ago

      Yes, completely. In fact, tf.keras will just be swapped internally (at the source-code level) for keras_core, but you won't notice any difference in tf.keras (except for the removal of some currently deprecated legacy code and a visual update to .fit()).

  • bougnaderedeter@alien.topB · 1 year ago

    No, all these extra layers of abstraction are detrimental in my view. Too much magic happens without you knowing what's going on unless you dive into their code; same with PyTorch Lightning, etc.

  • Infamous-Bank-7739@alien.topB · 1 year ago

    I think I've spent enough hours with PyTorch that I see no reason to go to Keras anymore. I really liked Keras for its very approachable documentation, nice blog posts, etc. Now I already know all that stuff in PyTorch, so it's hard to see any reason to be excited.

  • nicmakaveli@alien.topB · 1 year ago

    No, I tried really hard to stick with TF. I learned the basics back when you still had to deal with the computational graph directly; then I found TFLearn and Keras, and my world changed.

    I would still have stuck with it, but Google just doesn't care enough about TF, and I think it's a waste of my resources to keep learning it.

  • IRonyk@alien.topB · 1 year ago

    Nope…

    Moved to PyTorch.

    Not coming back…

    Anything beyond the normal stuff is a pain.

  • spherical_projection@alien.topB · 1 year ago

    No, I finally switched from TensorFlow/Keras to PyTorch and I'm not going back.

    Why abandon Keras? In the end I felt like I was always fighting some little bug or error that needed brute-force guessing to resolve. The data loaders are painful, and you have to deal with the idiosyncrasies of the model subclassing system to do anything custom.

  • odd1e@alien.topB · 1 year ago

    Yes, very much actually! People around here tend to forget that not everyone in ML is building billion-parameter LLMs; some of us just need a few basic building blocks and a model.fit() function to call.

  • mfs619@alien.topB · 1 year ago

    Keras broke the ice for me. The design of NNs used to take me a while to understand; it felt mechanical and meaningless. I was struggling hard to understand why adding or removing layers would help or hurt my models. I was trudging through the TF documentation and honestly… I was very close to giving up.

    I built my first ANN, got better with Keras, graduated to TF, built my first U-Net, and gained more confidence. I think anyone who really criticizes Keras doesn't understand that it's like criticizing training wheels on a bike.

    You gotta learn to walk before you can run. You gotta learn baby nets before you are building monster segmentation models on recurrent convolutional neural nets. It takes time to understand the concepts and data flow.

    • Relevant-Yak-9657@alien.topB · 1 year ago

      Yes, same story. Keras allowed me to understand the basics. Personally, my journey has been: Keras for architecture, PyTorch/TensorFlow for implicit gradient computation, JAX for explicit gradient optimization, and then building a library on top of JAX to understand how these libraries work.
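
      A small illustration of the implicit-vs-explicit distinction, using a toy loss (the PyTorch side is only sketched in comments to keep the snippet to one framework):

      ```python
      import jax
      import jax.numpy as jnp

      # Explicit gradients in JAX: grad is a function transformation, and the
      # gradients come back as ordinary values you thread through yourself.
      def loss_fn(w, x, y):
          return jnp.mean((x @ w - y) ** 2)

      w = jnp.zeros((4,))
      x, y = jnp.ones((8, 4)), jnp.ones((8,))
      grads = jax.grad(loss_fn)(w, x, y)   # explicit: grads is just an array
      w = w - 0.1 * grads                  # the update step is yours to write

      # In PyTorch the same step is implicit: loss.backward() populates
      # each parameter's .grad behind the scenes and optimizer.step() uses it.
      ```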

    • unpublishedmadness@alien.topB · 11 months ago

      Training wheels are horrible, btw.

      It's much better to start kids on pedal-less bikes and then graduate them to pedals without training wheels; it's much easier for them to adapt and gain balance that way.

      • elehman839@alien.topB · 11 months ago

        Yeah, Keras was sort of useful and sort of annoying, but training wheels just suck. What's worst is when your kid falls while using training wheels. On a balance bike, you know you're unstable. On training wheels, your kid has false faith and isn't prepared for the tipover… especially if your kid is, at that moment, entranced with your scintillating lecture about the superiority of PyTorch.

      • mfs619@alien.topB · 11 months ago

        So, either you have not recently taught a kid to ride a bike or you are just trolling.

        I'll counter your high ceiling with the low floor: the more a person rides a bike, training wheels or not, the better they will get at riding. Training wheels get them riding more often and logging the hours.

        You may be right that balance is a skill you develop without training wheels, but the hours kids spend failing and falling down discourage them, and then they don't want to play anymore.

        • unpublishedmadness@alien.topB · 11 months ago

          I think you misunderstood me. In France they have those bikes for kids without pedals called “draisiennes” (I don't know what they're called in English).

          Kids on these bikes have no training wheels; they just stroll along, lift their legs, and get used to managing their balance at speed. My friend's kids who learned that way were able to pedal a “real bike” (with pedals) on their first try, without any training wheels.

          It makes the transition *a lot* easier, apparently.