Custom Template Not Working with /v1/chat/completions Endpoint #1
## Issue Summary

When using a custom template with the `/v1/chat/completions` endpoint, Sortana fails to send a properly formatted request. LM Studio responds with `'messages' field is required`, indicating that the request body is malformed.

## Steps to Reproduce
**Setup:**

- `openai/gpt-oss-20b` model
- Endpoint: `http://192.168.1.157:1234/v1/chat/completions`

**Custom Template Used:**
**Expected Behavior:**

- Sortana sends the request to `/v1/chat/completions` with a `"messages"` array, as per the OpenAI Chat API spec.

**Actual Behavior:**
"prompt"field (Completion API format) instead of parsing it as"messages"(Chat API format){"prompt": "[{\"role\": \"system\", ...}]", "max_tokens": ..., ...}Error: 'messages' field is requiredDebug Output (Sortana)
**Last Request Payload:**
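A representative example of the malformed payload, reconstructed from the summary above (values are placeholders; note the stringified messages stuffed into `prompt`):

```json
{
  "prompt": "[{\"role\": \"system\", \"content\": \"...\"}, {\"role\": \"user\", \"content\": \"...\"}]",
  "max_tokens": 512,
  "temperature": 0.7
}
```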
**LM Studio Log:**
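The key line from the log, matching the error quoted above (exact log formatting may differ):

```
'messages' field is required
```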
## Expected Request Format

For `/v1/chat/completions`, the request should be:
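Something like the following, per the OpenAI Chat Completions spec (the content values here are placeholders):

```json
{
  "model": "openai/gpt-oss-20b",
  "messages": [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello"}
  ],
  "max_tokens": 512
}
```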
## Root Cause

When using a custom template with `/v1/chat/completions`, Sortana:

- serializes the rendered messages into a `"prompt"` field (Completion API format), but
- `/v1/chat/completions` requires a properly formatted `"messages"` array.
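A sketch of the mismatch (hypothetical variable names; not Sortana's actual code): the buggy path stringifies the rendered messages into `prompt`, while the chat endpoint needs them as a native array.

```python
import json

# Hypothetical illustration of the mismatch; not Sortana's actual code.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello"},
]

# Buggy: the rendered template is stringified into a Completion-API "prompt".
buggy_payload = {"prompt": json.dumps(messages), "max_tokens": 512}

# Correct: /v1/chat/completions expects the messages as a JSON array.
fixed_payload = {"messages": messages, "max_tokens": 512}
```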
## Workaround

Use the `/v1/completions` endpoint with the OpenAI/ChatML standard template instead. This works correctly but may have other limitations depending on the model configuration.
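For reference, this is roughly what a working `/v1/completions` request with a ChatML-style prompt looks like (content is a placeholder):

```json
{
  "prompt": "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\nHello<|im_end|>\n<|im_start|>assistant\n",
  "max_tokens": 512,
  "stop": ["<|im_end|>"]
}
```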
## Additional Info

- Model: `openai/gpt-oss-20b`
- `/v1/completions`: works with the ChatML template
- `/v1/chat/completions`: fails with a custom template

---

I think I got to the bottom of this. The way the engine is set up, it assumes a `/v1/completions` API. I have added some logic to make the endpoint configuration value a "base" value, using it to create a completed completions API URL.

Currently testing a build with new parsing logic, model templates, and endpoint URI evaluation. If all is good, it will be released as v2.3.3.
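A sketch of what that endpoint-base evaluation could look like (hypothetical helper; not the actual Sortana implementation):

```python
# Hypothetical sketch: treat the configured endpoint as a base URL and
# derive the full completions/chat-completions URL from it.
def build_completions_url(base: str, chat: bool = False) -> str:
    base = base.rstrip("/")
    # If the user pasted a full completions URL, strip it back to the base.
    for suffix in ("/chat/completions", "/completions"):
        if base.endswith(suffix):
            base = base[: -len(suffix)]
            break
    return base + ("/chat/completions" if chat else "/completions")

# Both configurations resolve to http://192.168.1.157:1234/v1/completions:
print(build_completions_url("http://192.168.1.157:1234/v1"))
print(build_completions_url("http://192.168.1.157:1234/v1/chat/completions"))
```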
---

Fixed in v2.3.3.