Add support for specifying model (Ollama OpenAI-compatible API requires it) #2
Hi! Thanks for Sortana — it’s a great add-on.
I’m trying to use Sortana with Ollama via its OpenAI-compatible API. The endpoint is reachable and works fine when `model` is provided, but Sortana appears not to send a `model` field in the request (or there is no way to configure it). Ollama then returns an error and Sortana does not classify/match.

### Environment
- Model: `qwen2.5:14b`
- Endpoint: `http://ollama.lan:11434`

### Expected behavior
Sortana should be able to call OpenAI-compatible endpoints like Ollama by including `model` in the payload (or by providing a setting for it).

### Actual behavior
Ollama responds with an error because the request has no `model` (or `model` is empty), e.g.:
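Paraphrased from memory (the exact wording depends on the Ollama version), the response is an HTTP 400 with an error body along these lines:

```json
{
  "error": {
    "message": "model is required",
    "type": "api_error"
  }
}
```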
Sortana then does not classify / no actions are triggered.
### Reproduction
Configure Sortana with the endpoint `http://ollama.lan:11434` and trigger a classification; the request fails because `model` is missing.

### Proof that Ollama works with `model`

Calling Ollama directly works as expected if `model` is set.
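For example, something like the following (a sketch using Node 18+ `fetch`; any OpenAI-compatible client works the same way, and the endpoint and model are the values from the environment above):

```ts
// Sketch: direct call to Ollama's OpenAI-compatible completions endpoint.
const response = await fetch("http://ollama.lan:11434/v1/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    model: "qwen2.5:14b", // the field Sortana currently omits
    prompt: "Classify this email: ...",
    max_tokens: 64,
  }),
});
console.log(await response.json());
```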
This returns a normal completion response.
### Suggested fix
Add a Settings field for the model name and include it in the request payload when calling `/v1/completions` (and/or `/v1/chat/completions` if supported).

Example (pseudo-code):
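Something along these lines (the names `settings`, `buildPrompt`, and `classify` are made up; I don't know Sortana's internals):

```ts
// Sketch only: include the configured model in the OpenAI-compatible payload.
// `EndpointSettings` stands in for Sortana's endpoint configuration and
// `buildPrompt` for its existing prompt construction (both hypothetical).
interface EndpointSettings {
  endpoint: string;
  model: string; // new Settings field
}

function buildPrompt(email: string): string {
  return `Classify this email:\n${email}`;
}

async function classify(email: string, settings: EndpointSettings) {
  const payload = {
    model: settings.model, // new: always sent along with the prompt
    prompt: buildPrompt(email),
    max_tokens: 64,
  };
  const res = await fetch(`${settings.endpoint}/v1/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  return res.json();
}
```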
Optionally, default to something sensible if the user selects a “local OpenAI-compatible endpoint”:
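For instance (the fallback name here is arbitrary, purely an illustration):

```ts
// Hypothetical fallback when no model is configured for a local endpoint;
// the default name is arbitrary and only an illustration.
function resolveModel(configured: string | undefined): string {
  return configured?.trim() || "qwen2.5:14b";
}
```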
### Bonus improvement (optional)
Some OpenAI-compatible servers primarily expect chat format. If Sortana ever supports chat completions, it would also be helpful to allow selecting between the two endpoint styles (payload shapes sketched below):

- `/v1/completions` (prompt)
- `/v1/chat/completions` (messages)
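Roughly, the difference in payload shape is (illustrative only):

```ts
// Illustrative payload shapes for the two endpoint styles.
const completionPayload = {
  model: "qwen2.5:14b",
  prompt: "Classify this email: ...",
};

const chatPayload = {
  model: "qwen2.5:14b",
  messages: [{ role: "user", content: "Classify this email: ..." }],
};
```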
But the minimal change needed for Ollama is simply to send `model`.

Thanks again — happy to test a build/PR if needed.