• AlmightySnoo 🐢🇮🇱🇺🇦@lemmy.world · 1 year ago

    Have developers be more mindful of the e-waste they contribute to by indirectly deprecating CPUs when they skip over portions of their code and say “nah, it isn’t worth optimizing that thing + everyone today should have an X-core CPU/Y GB of RAM anyway”. Complacency like that is why software no more functional than its equivalent from one or two decades ago is 10 times more demanding today.

    • space@lemmy.dbzer0.com · 1 year ago

      Yes!! I enjoy playing with retro tech and was actually surprised at how much you can do with an ancient Pentium 2 machine, and how responsive the software of that era was.

      I really dislike how inefficient modern software is. Like stupid chat apps that use more RAM while sitting in the background than computers had 15-20 years ago…

    • onlinepersona@programming.devOP · 1 year ago

      Software obesity is a real thing. I think it has to do with developer machines being beefy: if you write something that runs fine on them and don’t have a shit machine to test it on, you never find out just how badly it actually performs.

      But it also has to do with programming languages. It’s much, much easier to prototype in Python or JavaScript, and often the prototype becomes the real thing. Who really has the time (and/or money) to rewrite their now-functional program in a more performant language?
      IMO there doesn’t seem to be a clear solution.

      • whofearsthenight@lemm.ee · 1 year ago

        I don’t think the languages themselves are even the problem; it’s the toolchain. Sure, if you went back to C or whatever you could design more performant systems, but I think the problem overall stems from modern toolchains being kinda ridiculous. It’s entirely common in any language to load in massive libraries that suck up hundreds of MB of RAM (if not gigs) just to get a slightly nicer function to lowercase text or something.
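
        As a concrete sketch of that trade-off (lodash here is just one example of a utility dependency, and the sizes are ballpark):

        ```typescript
        // Pulling in a whole utility library for a marginally nicer call drags in
        // far more code than the one function actually used (lodash alone is
        // roughly 70 KB minified; heavier dependency trees run into hundreds of MB).
        import { toLower } from "lodash";

        toLower("Straße");                   // "straße"

        // ...when the built-in string method already covers the common case:
        "Straße".toLocaleLowerCase("de-DE"); // "straße"
        ```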

        The other confounding factor is “write once, run anywhere”, which in practice means a lot of shared code that does nothing useful on your particular machine. The most obvious example is Electron. Pretty much all of the Electron apps I use on the reg (which are mostly just Discord and Slack) are conceptually simple apps whose older equivalents ran in a few hundred MB of storage and tens of MB of RAM.
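
        For a sense of scale, a hello-world Electron entry point is only a few lines (minimal sketch below; the window contents are hypothetical), yet shipping it means bundling a full Chromium plus Node.js runtime, typically well over 100 MB on disk before the app does anything:

        ```typescript
        // main.ts: the entire "app" from the developer's point of view.
        import { app, BrowserWindow } from "electron";

        app.whenReady().then(() => {
          // One window rendering a local HTML file; most of what the user ends up
          // storing and holding in RAM is the embedded browser runtime, not this code.
          const win = new BrowserWindow({ width: 800, height: 600 });
          win.loadFile("index.html");
        });
        ```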

        Oh, one other sidenote: how many CPUs are wasting cycles on things that no one wants, like extremely complex ad tracking, data mining, etc.?

        I know why this is the case, and ease of development does enable us to have software that we probably otherwise wouldn’t, but I think it’s a real blight on modern computing, and I think it’s solvable. Probably the dumbest idea, but translation layers that produce platform-native code could be vastly improved. Especially in a world where we have generative AI, there has to be a way to say “hey, I’ve got this JavaScript function, I need this to work in Kotlin, Swift, C++, etc.”

          • porgamrer@programming.dev · 1 year ago

            LLVM is ironically a very slow compiler back-end, whose popularity has contributed to a general slow-down in compilation speed across the whole industry (it’s even slow at doing debug builds for fast iteration).

            WASM has some promise, though.
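
            A rough sketch of the idea (the module name and its export are hypothetical): a hot function compiled ahead of time to WASM can be called from ordinary JS/TS and run near-natively, instead of shipping the heavy part as interpreted script:

            ```typescript
            // Load a precompiled WebAssembly module and call an exported function.
            async function run(): Promise<void> {
              const bytes = await fetch("hot_path.wasm").then(r => r.arrayBuffer());
              const { instance } = await WebAssembly.instantiate(bytes);
              // "sum" is whatever the source language exported; hypothetical here.
              const wasm = instance.exports as { sum(a: number, b: number): number };
              console.log(wasm.sum(2, 3)); // 5
            }
            run();
            ```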

            • onlinepersona@programming.devOP · 1 year ago

              Doesn’t really matter if the compiler is slow as long as the result is optimized and fast 🤷 Rust compiles slower than C, but that’s because C has no safeguards (beyond static typing). Very often the wasted CPU cycles are on the user’s end, not the developer’s.