From his website stallman.org:

Richard Stallman has cancer. Fortunately, it is a slow-growing and manageable follicular lymphoma, so he will probably live many more years nonetheless. But he now has to be even more careful not to catch Covid-19.

There is a recent video of him speaking at the GNU 40 Hacker Meeting, along with screenshots of the video stream.

    • lemmesay@discuss.tchncs.de

      GPT, for example, fails at calculations on problems like knapsack, adjacency matrices, Huffman trees, etc.

      it starts giving garbled output.
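
      For concreteness, this is roughly the kind of exact, multi-step calculation meant here, written as ordinary code; the 0/1 knapsack instance below is a made-up example, nothing GPT-specific:

      ```python
      # 0/1 knapsack via dynamic programming -- the kind of exact,
      # multi-step calculation referred to above.
      def knapsack(values, weights, capacity):
          """Return the best total value achievable within the weight capacity."""
          # dp[w] = best value achievable with total weight <= w
          dp = [0] * (capacity + 1)
          for value, weight in zip(values, weights):
              # iterate weights downward so each item is used at most once
              for w in range(capacity, weight - 1, -1):
                  dp[w] = max(dp[w], dp[w - weight] + value)
          return dp[capacity]

      # Tiny instance: trivial for code, but a model asked for the answer
      # directly will often botch the intermediate arithmetic.
      print(knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))  # 220
      ```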

        • lloram239@feddit.de

          The current LLMs can’t loop and can’t see individual digits, so their failure at seemingly simple math problems is not terribly surprising. For some problems it can help to rephrase the question so that the LLM works through the individual steps of the calculation instead of stating the result directly.
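
          A rough sketch of that rephrasing (ask_llm() is a made-up placeholder for whatever chat API you actually use, not a real function):

          ```python
          # Purely illustrative: ask_llm() is a hypothetical stand-in for a real
          # chat-completion call; the point is only the difference in phrasing.
          def ask_llm(prompt: str) -> str:
              raise NotImplementedError("replace with a real chat API call")

          # Asking for the result directly -- the failure mode described above:
          direct = "What is the optimal value for this 0/1 knapsack instance? ..."

          # Rephrased so the model has to work through the individual steps:
          step_by_step = (
              "Solve this 0/1 knapsack instance. First list the items with their "
              "weights and values, then fill in the DP table row by row, showing "
              "every intermediate value, and only then state the final answer. ..."
          )
          # answer = ask_llm(step_by_step)
          ```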

          And more generally, LLMs aren’t exactly the best way to do math anyway. Humans aren’t any good at it either; that’s why we invented calculators, which can do the same task with a lot less computing power and far more reliably. LLMs that can interact with external systems are already available behind a paywall.
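
          And the external-systems idea, roughly (again with a hypothetical ask_llm() helper, since the real tool-calling APIs differ between providers):

          ```python
          import re

          def ask_llm(prompt: str) -> str:
              raise NotImplementedError("replace with a real chat API call")

          def answer_with_calculator(question: str) -> str:
              # Ask the model to delegate arithmetic instead of doing it "in its head".
              reply = ask_llm(
                  "If you need arithmetic, reply with a single line 'CALC: <expression>' "
                  "and wait for the result.\n\n" + question
              )
              match = re.match(r"CALC:\s*(.+)", reply)
              if match:
                  # Hand the expression to a real calculator (here: Python itself; demo only).
                  result = eval(match.group(1), {"__builtins__": {}})
                  reply = ask_llm(question + f"\nThe calculator returned: {result}")
              return reply
          ```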

            • lloram239@feddit.de

              Humans are wrong all the time, and confidently so. And it’s an apples-and-oranges comparison anyway, as ChatGPT has to cover essentially all human knowledge, while a single human only knows a tiny subset of it. Nobody expects a human to know everything ChatGPT knows in the first place. A human put into ChatGPT’s place would not perform well at all.

              Humans make the mistake of overestimating their own capabilities because they can find mistakes the AI makes, when they themselves wouldn’t be able to perform any better; at best they’d make different mistakes.

              • mexicancartel@lemmy.dbzer0.com

                So in the same way, it may not be able to code if it can’t do math. All I see it having is profound English knowledge and the data it was fed.

                Human knowledge is limited, I agree. But more knowledge is different from the ability to actually ‘think’. Maybe it could be done with a different type of neural network, with logic gates used separately from the neural networks.

    • Communist@lemmy.ml

      https://www.deepmind.com/blog/competitive-programming-with-alphacode

      People overestimate how much it matters that AI “doesn’t have the capacity to understand its output”.

      Even if it doesn’t, is that a massive problem to overcome? There are studies showing that if you have an AI list the potential problems with an output and then apply them to its own output, it performs significantly better. Perhaps we’re just a recursive algorithm away from that.
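
      Something like this critique-then-revise loop is what those studies describe, sketched with a hypothetical ask_llm() standing in for a real chat API:

      ```python
      # Sketch of "list the problems, then apply them to your own output".
      def ask_llm(prompt: str) -> str:
          raise NotImplementedError("replace with a real chat API call")

      def critique_and_revise(task: str, rounds: int = 1) -> str:
          draft = ask_llm(task)
          for _ in range(rounds):
              critique = ask_llm(
                  f"Task: {task}\n\nDraft answer:\n{draft}\n\n"
                  "List every potential problem with this answer."
              )
              draft = ask_llm(
                  f"Task: {task}\n\nDraft answer:\n{draft}\n\n"
                  f"Problems found:\n{critique}\n\n"
                  "Rewrite the answer, fixing those problems."
              )
          return draft
      ```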