My understanding of LLM function calling is roughly as follows:

  1. You “list” all the functions the model can call in the prompt
  2. ???
  3. The model knows when to return the “function names” (in JSON or otherwise) during the conversation

Does anyone have any advice or examples on what prompt I should use?
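For what it's worth, the flow in those three steps can be sketched end to end. Everything here is a hypothetical illustration (the tool names, the JSON shape, and the simulated model reply are all made up, not any particular API):

```python
import json

# Step 1: the functions the model is allowed to call, listed in the prompt.
# These lambdas are stand-ins for real implementations.
TOOLS = {
    "get_weather": lambda arg: f"Sunny in {arg}",
    "get_time": lambda arg: f"12:00 in {arg}",
}

def build_prompt(user_message: str) -> str:
    # "List" the callable functions so the model knows its options.
    tool_specs = "\n".join(f"- {name}(arg: string)" for name in TOOLS)
    return (
        "You can call these functions by replying with JSON "
        'like {"function": "...", "argument": "..."}:\n'
        f"{tool_specs}\n\nUser: {user_message}"
    )

def dispatch(model_reply: str) -> str:
    # Step 3: the model returned a function name as JSON; route the call.
    call = json.loads(model_reply)
    return TOOLS[call["function"]](call["argument"])

# Simulated model reply (no real model here), just to show the round trip:
prompt = build_prompt("What's the weather in Oslo?")
reply = '{"function": "get_weather", "argument": "Oslo"}'
print(dispatch(reply))  # -> Sunny in Oslo
```

Step 2 (“???”) is just the model pattern-matching the user's request against the listed functions; constraining the output format (see the grammar reply below the fold) is what makes the parsing step reliable.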

  • amemingfullife@alien.topB
    2 years ago

    I usually add a context-free grammar to make sure the model always outputs valid JSON. Here’s the json.gbnf I use:

    root   ::= object
    value  ::= object | array | string | number | ("true" | "false" | "null") ws
    
    object ::=
      "{" ws (
                string ":" ws value
        ("," ws string ":" ws value)*
      )? "}" ws
    
    array  ::=
      "[" ws (
                value
        ("," ws value)*
      )? "]" ws
    
    string ::=
      "\"" (
        [^"\\] |
        "\\" (["\\/bfnrt] | "u" [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F]) # escapes
      )* "\"" ws
    
    number ::= ("-"? ([0-9] | [1-9] [0-9]*)) ("." [0-9]+)? ([eE] [-+]? [0-9]+)? ws
    
    # Optional space: by convention, applied in this grammar after literal chars when allowed
    ws ::= ([ \t\n] ws)?
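    Note that `root ::= object` forces the model to emit a single JSON object at the top level, never a bare string or array. With llama.cpp you can pass the file via `--grammar-file json.gbnf`, and bindings such as llama-cpp-python accept a grammar argument. The payoff is that the consuming code can stay simple, since the reply is guaranteed to parse. A minimal consumer sketch (the reply string below is a stand-in for real grammar-constrained model output):

    ```python
    import json

    def parse_constrained_reply(reply: str) -> dict:
        # Because root ::= object, a grammar-constrained reply is always
        # one JSON object, so json.loads succeeds and yields a dict.
        data = json.loads(reply)
        if not isinstance(data, dict):
            raise ValueError("grammar guarantees a top-level object")
        return data

    # Stand-in for what a grammar-constrained model might emit:
    reply = '{"function": "search", "arguments": {"query": "weather"}}'
    call = parse_constrained_reply(reply)
    print(call["function"])  # -> search
    ```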