Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence | The White House

Basically - "any model trained with ~28M H100 hours, which is around $50M USD, or any cluster with 10^20 FLOP/s of peak compute, which is around 50,000 H100s, which only two companies currently have" - hat-tip to nearcyan on Twitter for this calculation.
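
A quick sanity check of that arithmetic. This is a rough sketch: the ~1 PFLOP/s sustained throughput per H100, the ~2 PFLOP/s theoretical peak, and the ~$2/hr rental price are my assumptions, not figures from the order:

```python
# Back-of-envelope check of the two thresholds.
H100_SUSTAINED_FLOPS = 1e15   # ~1 PFLOP/s usable BF16 throughput per H100 (assumption)
H100_PEAK_FLOPS = 2e15        # ~2 PFLOP/s theoretical peak per H100 (assumption)
COST_PER_H100_HOUR = 2.0      # USD per H100-hour, rough rental price (assumption)

# Model threshold: 10^26 total training operations
training_flops = 1e26
h100_hours = training_flops / H100_SUSTAINED_FLOPS / 3600
print(f"H100-hours: {h100_hours:.2e}")                         # ~2.8e7, i.e. ~28M H100-hours
print(f"Rough cost: ${h100_hours * COST_PER_H100_HOUR:,.0f}")  # ~$56M

# Cluster threshold: 10^20 operations per second of theoretical peak
cluster_flops = 1e20
print(f"H100s needed: {cluster_flops / H100_PEAK_FLOPS:,.0f}")  # ~50,000
```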

Specific language below.

"   (i)   any model that was trained using a quantity of computing power greater than 1026 integer or floating-point operations, or using primarily biological sequence data and using a quantity of computing power greater than 1023 integer or floating-point operations; and

(ii)  any computing cluster that has a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbit/s, and having a theoretical maximum computing capacity of 1020 integer or floating-point operations per second for training AI."

  • Cybernetic_Symbiotes@alien.top · 1 year ago

    The numbers appear to have OpenAI’s fingerprints on them. I don’t know if they come from an AI-risk-mitigation perspective or from laying the foundations for competitive barriers. Probably a mix of both.

    At 30 trillion tokens, 10^26 float ops caps you at ~550 billion parameters (using float ops = 6 * N * D). Does this indirectly leak anything about OpenAI’s current scaling? At 10 trillion tokens, it’s ~1.7 trillion parameters. Bigger vocabularies can stretch this limit a bit.
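
    A minimal sketch of that parameter-cap calculation, using the approximation stated above (training FLOPs ≈ 6 · N · D, with N = parameters and D = tokens); the token counts are just the two scenarios mentioned:

    ```python
    # Largest model trainable under a 1e26 FLOP budget at a given token count,
    # using the approximation: training FLOPs ≈ 6 * params * tokens.
    def max_params(flop_budget: float, tokens: float) -> float:
        return flop_budget / (6 * tokens)

    budget = 1e26
    for tokens in (30e12, 10e12):
        print(f"{tokens:.0e} tokens -> ~{max_params(budget, tokens):.2e} params")
    # 3e+13 tokens -> ~5.56e+11 params (~550B)
    # 1e+13 tokens -> ~1.67e+12 params (~1.7T)
    ```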