Does the argument not make sense? Why not first evaluate the arguments for not open sourcing models on their face, instead of reaching for people's personal incentives to lie about it? It seems like people skipped step one and jumped straight to assuming malign intent, like you said you did.
Given that we don't really know how AI can be used for malicious purposes, might it make sense that the org with by far the most powerful model chooses not to release its secrets, so as to slow the pace of malicious use?
Is it possible that Altman believes this, or does his incentive to lie about it so greatly outweigh everything else that you can't even consider the merits of the argument? I hear far too much about why OAI must be lying about this, and not enough consideration of what they actually have to say.
I assumed this was the case ever since Altman started moaning about the dangers of AI way back.
Moaning about it while still developing SOTA models