Vague prompts cost twice
A vague extraction prompt produces invalid JSON, triggers a retry, and the final successful call is what you pay for. Tight prompts are cheaper, not just better.
bizSupply meters successful AI calls only. That sounds reassuring until you realise that a call which fails to parse and a retry which succeeds amount to two AI calls of work behind one chargeable call. The cost shows up as a higher credits-per-document figure, and the cause is almost always a prompt the LLM had to interpret rather than execute.
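The mechanic can be sketched as a retry loop. This is a minimal illustration, not bizSupply's actual pipeline: `call_model` is a stand-in for a real LLM client, hard-coded so the first attempt returns malformed JSON and the retry succeeds.

```python
import json

def call_model(prompt: str, attempt: int) -> str:
    # Hypothetical model call standing in for a real LLM client.
    # A vague prompt often yields malformed JSON on the first try.
    if attempt == 1:
        return '{"party": "Acme Corp",'   # truncated output -> parse error
    return '{"party": "Acme Corp"}'

def extract(prompt: str, max_attempts: int = 3):
    """Retry on parse errors, tracking total work done per success."""
    total_calls = 0
    for attempt in range(1, max_attempts + 1):
        total_calls += 1
        raw = call_model(prompt, attempt)
        try:
            # Only this successful parse corresponds to the chargeable call.
            return json.loads(raw), total_calls
        except json.JSONDecodeError:
            continue  # failed attempt: extra work, no extra charge
    raise RuntimeError("no parseable output after retries")

result, calls = extract("Extract the relevant information from this contract")
# Two calls of work sit behind the one successful extraction.
```

The wasted first call is invisible on the invoice but not in latency or in the credits the successful, longer retry consumes.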
What "vague" looks like
- "Extract the relevant information from this contract" — relevant by whose standard?
- "Return the dates" — formatted how, in what timezone, named what?
- "List the parties" — as strings, objects, with addresses?
- No example output, no schema reference, no fallback when a field is genuinely missing.
What tight looks like
State the shape, the format, and what to do when a field is genuinely missing. Reference the ontology by name. Show one short example. Set temperature: 0.1. The result is fewer retries, fewer parse errors, and a credits-per-document figure that stops drifting upward over time.
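Put together, a tight prompt might look like the sketch below. The ontology name "ContractExtraction", the field names, and the request-payload shape are illustrative assumptions, not a real bizSupply schema or API.

```python
# A sketch of a tight extraction prompt. "ContractExtraction" and the
# field names are hypothetical, chosen only to show the pattern.
TIGHT_PROMPT = """Extract fields from the contract below using the \
ContractExtraction ontology.

Return only JSON with exactly this shape:
{"parties": [{"name": "<string>", "role": "<string>"}],
 "effective_date": "<YYYY-MM-DD, UTC> or null",
 "term_months": <integer> or null}

Example output:
{"parties": [{"name": "Acme Corp", "role": "supplier"}],
 "effective_date": "2024-01-15", "term_months": 12}

If a field is genuinely missing from the contract, use null. Do not guess.

Contract:
<<CONTRACT>>
"""

def build_request(contract_text: str) -> dict:
    """Assemble a hypothetical request payload for the extraction call."""
    return {
        "temperature": 0.1,  # low temperature keeps the output shape stable
        "prompt": TIGHT_PROMPT.replace("<<CONTRACT>>", contract_text),
    }

request = build_request("This agreement is made between ...")
```

Every question the vague prompts left open is answered in the prompt itself: the shape, the date format and timezone, the party structure, and the null fallback, so the model executes rather than interprets.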