Ilya from OpenAI published a paper in 2020 related to Q*: GPT-f, a model with capabilities in understanding and solving math through automated theorem proving.

https://arxiv.org/abs/2009.03393

When an AI model can understand and actually do math, that's a critical jump.

    • _Lee_B_@alien.topB · 1 year ago

      Strange, I thought they would naturally reward the process by rewarding each word generated by the sequence-to-sequence model, rather than only the final words. Maybe they over-optimised and skipped training on all output.
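The distinction this comment gestures at, outcome supervision (scoring only the final answer) versus process supervision (scoring every generated step), can be sketched in a toy example. The step validator and reward values below are invented for illustration, not anyone's actual training setup:

```python
# Toy contrast between outcome supervision (reward only the final answer)
# and process supervision (reward each step as it is produced).

def outcome_reward(steps, final_is_correct):
    """Single scalar reward, assigned only at the end of the sequence."""
    return [0.0] * (len(steps) - 1) + [1.0 if final_is_correct else 0.0]

def process_reward(steps, step_is_valid):
    """One reward per step, so bad intermediate reasoning is penalised."""
    return [1.0 if step_is_valid(s) else 0.0 for s in steps]

# Hypothetical derivation with one bad intermediate step:
steps = ["2+2=4", "4*3=13", "answer: 12"]
valid = lambda s: s != "4*3=13"

print(outcome_reward(steps, final_is_correct=True))  # -> [0.0, 0.0, 1.0]
print(process_reward(steps, valid))                  # -> [1.0, 0.0, 1.0]
```

With outcome supervision the flawed middle step goes unnoticed as long as the final answer checks out; process supervision scores it directly.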

    • nested_dreams@alien.topB · 1 year ago

      This definitely sounds like the paper. 100% worth the read, surprised I hadn’t heard much about it until this ordeal

      • wind_dude@alien.topB · 1 year ago

        PRM8k made the rounds maybe 6+ months ago, but they never publicly released the model.

        • dododragon@alien.topB · 1 year ago

          I’ve recently gotten into LLMs. Have you tried these math models? They seem to follow math-related instructions reasonably well.

          wizard-math:13b-q6_K
          MathLLM-MathCoder-CL-7B.Q8_0.gguf
          metamath-mistral-7b.Q5_K_M.gguf

  • allende911@alien.topB · 1 year ago

    When AI model can understand and really doing Math, that a critical jump.

    Grammarly is free, my man

  • PopeSalmon@alien.topB · 1 year ago

    It doesn’t seem directly related, but it’s surely indirectly related. This is an interesting idea: “We demonstrate that iteratively training a value function on statements generated by our language model leads to improved prover performance, which immediately suggests a strategy for continuous self improvement: keep training on proofs generated by the prover.”

  • AnomalyNexus@alien.topB · 1 year ago

    This doesn’t smell right to me.

    All the references around Q* and the drama around proto-AGI (e.g. Altman talking about the veil of ignorance being pulled back) seem to point to something that happened in the last couple of weeks, not 2020.

    • BlipOnNobodysRadar@alien.topB · 1 year ago

      If they found a proto-AGI and it was relatively trivial to implement, it would be a good idea to throw competitors off the trail with a red herring.