Add support for specifying model (Ollama OpenAI-compatible API requires it) #2

Open
opened 2026-01-24 16:12:57 -06:00 by flome00 · 0 comments

Hi! Thanks for Sortana — it’s a great add-on.

I’m trying to use Sortana with Ollama via its OpenAI-compatible API. The endpoint is reachable and works fine when model is provided, but Sortana appears not to send a model field in the request (or there is no way to configure it). Ollama then returns an error and Sortana does not classify/match.

Environment

  • Thunderbird: 140.6.0esr (64-bit)
  • Sortana: 2.2.0
  • Ollama: 0.13.5
  • Model: qwen2.5:14b
  • Ollama host: http://ollama.lan:11434

Expected behavior

Sortana should be able to call OpenAI-compatible endpoints like Ollama by including model in the payload (or providing a setting for it).

Actual behavior

Ollama responds with an error because the request has no model (or model is empty), e.g.:

{"error":{"message":"model '' not found","type":"api_error","param":null,"code":null}}

Sortana then does not classify the message, and no actions are triggered.

Reproduction

  1. Configure Sortana endpoint to http://ollama.lan:11434
  2. Create a rule (e.g. “is this email a newsletter?”) and try to run it on selected messages
  3. Ollama returns an error because model is missing; this can be reproduced directly with curl, as shown below.
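
For reference, the failing request can be reproduced outside Thunderbird by calling the same endpoint with curl and omitting the model field (same host as above):

curl -s http://ollama.lan:11434/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"prompt": "test", "max_tokens": 5}'

With the Ollama version above, this returns the same model '' not found error shown earlier.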

Proof that Ollama works with model

Calling Ollama directly works as expected if model is set:

curl -s http://ollama.lan:11434/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen2.5:14b",
    "prompt": "Return ONLY valid JSON: {\"match\": true, \"reason\": \"ok\"}",
    "temperature": 0,
    "max_tokens": 50
  }'

This returns a normal completion response.
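
For reference, the successful response follows the standard OpenAI completions shape, roughly like this (fields abbreviated; exact values vary by Ollama version):

{
  "object": "text_completion",
  "model": "qwen2.5:14b",
  "choices": [
    {"index": 0, "text": "{\"match\": true, \"reason\": \"ok\"}", "finish_reason": "stop"}
  ]
}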

Suggested fix

Add a Settings field for the model name and include it in the request payload when calling /v1/completions (and/or /v1/chat/completions if supported).

Example (pseudo-code):

// read from settings (new option)
const model = settings.model?.trim();

// build request payload
const payload = {
  // existing fields
  prompt,
  temperature,
  max_tokens,
  // ...
};

// Ollama/OpenAI-compatible servers need this
if (model) {
  payload.model = model;
}

Optionally, default to something sensible if the user selects a “local OpenAI-compatible endpoint”:

payload.model ??= "qwen2.5:14b";
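
Putting it together, sending the request might look roughly like this (a sketch only; the fetch usage, settings.endpoint, and surrounding code are assumptions about Sortana's internals, not how it actually works):

// sketch: send the payload (now including model when configured)
const response = await fetch(`${settings.endpoint}/v1/completions`, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(payload),
});
const data = await response.json();
const text = data.choices?.[0]?.text; // completion text used for matching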

Bonus improvement (optional)

Some OpenAI-compatible servers primarily expect chat format. If Sortana ever supports chat completions, it would also be helpful to allow selecting between:

  • /v1/completions (prompt)
  • /v1/chat/completions (messages)
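
For reference, a chat-format call against the same server would look like this (same model; a messages array replaces prompt):

curl -s http://ollama.lan:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "qwen2.5:14b",
    "messages": [{"role": "user", "content": "Return ONLY valid JSON: {\"match\": true, \"reason\": \"ok\"}"}],
    "temperature": 0,
    "max_tokens": 50
  }'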

But the minimal change needed for Ollama is simply to send model.

Thanks again — happy to test a build/PR if needed.
