My understanding of LLM function calling is roughly as follows:
- You “list” all the functions the model can call in the prompt
- ???
- The model knows when to return the “function names” (in JSON or otherwise) during the conversation
Does anyone have any advice or examples on what prompt I should use?
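For context, here is a minimal sketch of the pattern as I understand it, with the “???” step filled in as parse-and-dispatch. All function names, the prompt wording, and the JSON shape here are made up for illustration, not any particular API:

```python
import json

# Hypothetical tool registry; names and signatures are illustrative only.
TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",
}

# Step 1: "list" the callable functions in the prompt, plus the reply format.
TOOL_DESCRIPTIONS = """\
You can call these functions by replying with JSON only:
- get_weather(city: str): current weather for a city
Reply with {"name": "<function>", "arguments": {...}} and nothing else."""

def build_prompt(user_message: str) -> str:
    # The tool list goes in the system/preamble part of the prompt.
    return f"{TOOL_DESCRIPTIONS}\n\nUser: {user_message}\nAssistant:"

def dispatch(model_output: str) -> str:
    # Step "???": parse the JSON the model returned and run the named function.
    call = json.loads(model_output)
    func = TOOLS[call["name"]]
    return func(**call["arguments"])

# Simulated model reply (a real model would generate this from build_prompt(...)).
reply = '{"name": "get_weather", "arguments": {"city": "Paris"}}'
print(dispatch(reply))  # → Sunny in Paris
```

The fragile part is the model sometimes emitting prose around the JSON, which is exactly what grammar-constrained decoding (below in this thread) is meant to prevent.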
I usually add a context-free grammar to make sure it always outputs valid JSON. Here’s the json.gbnf I use.
```
root   ::= object
value  ::= object | array | string | number | ("true" | "false" | "null") ws

object ::=
  "{" ws (
            string ":" ws value
    ("," ws string ":" ws value)*
  )? "}" ws

array  ::=
  "[" ws (
            value
    ("," ws value)*
  )? "]" ws

string ::=
  "\"" (
    [^"\\] |
    "\\" (["\\/bfnrt] | "u" [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F] [0-9a-fA-F]) # escapes
  )* "\"" ws

number ::= ("-"? ([0-9] | [1-9] [0-9]*)) ("." [0-9]+)? ([eE] [-+]? [0-9]+)? ws

# Optional space: by convention, applied in this grammar after literal chars when allowed
ws ::= ([ \t\n] ws)?
```
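Since the grammar forces the top-level production to be an object (`root ::= object`), the raw completion is guaranteed to be one parseable JSON object, and the caller can hand it straight to a JSON parser. A sketch of that consumer side (the `completion` string is stand-in data, not real model output):

```python
import json

def parse_call(completion: str) -> dict:
    # Grammar-constrained output is always a single valid JSON object,
    # so json.loads should never raise here; we still check the type defensively.
    obj = json.loads(completion)
    if not isinstance(obj, dict):
        raise ValueError("expected a JSON object at the top level")
    return obj

completion = '{"name": "search", "arguments": {"query": "llama.cpp gbnf"}}'
call = parse_call(completion)
print(call["name"])  # → search
```

With llama.cpp you can pass a grammar file like this one to the CLI via `--grammar-file`; note the grammar only guarantees syntactically valid JSON, not that the keys match your function schema, so you may still want schema validation on top.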