Fix chat completions response handling

This commit is contained in:
Jordan Wages 2026-04-19 02:10:10 -05:00
commit 02593e56d0
4 changed files with 66 additions and 7 deletions


@@ -15,7 +15,7 @@ Classification requests ask the model for structured JSON output with a required
- **Configurable endpoint** set the classification service base URL on the options page.
- **Model selection** load available models from the endpoint and choose one (or omit the model field).
- **Optional OpenAI auth headers** provide an API key plus optional organization/project headers when needed.
-- **Request formats** use native OpenAI chat messages or choose Qwen, Mistral, Harmony (gpt-oss), or a custom templated message format.
+- **Request formats** built-in formats use native chat messages; a custom format can still send one templated user message when needed.
- **Custom system prompts** tailor the instructions sent to the model for more precise results.
- **Persistent result caching** classification results and reasoning are saved to disk so messages aren't re-evaluated across restarts.
- **Advanced parameters** tune generation settings like temperature and top_p from the options page. Unsupported OpenAI sampling fields are filtered out automatically.
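
As an illustrative sketch only (the allow-list contents and function name are assumptions, not code from this repository), filtering unsupported sampling fields before a chat completions request might look like:

```typescript
// Hypothetical allow-list; the extension's real supported-field set is not shown here.
const SUPPORTED_SAMPLING_FIELDS = new Set([
  "temperature",
  "top_p",
  "max_tokens",
]);

// Return a copy of the user's generation settings containing only
// fields the chat completions endpoint is known to accept.
function filterSamplingParams(
  params: Record<string, unknown>
): Record<string, unknown> {
  const filtered: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(params)) {
    if (SUPPORTED_SAMPLING_FIELDS.has(key)) {
      filtered[key] = value;
    }
  }
  return filtered;
}
```

With an allow-list approach like this, backend-specific extras a user might paste into the options page are silently dropped rather than forwarded to the endpoint.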