diff --git a/AGENTS.md b/AGENTS.md
index fa42654..9e3261e 100644
--- a/AGENTS.md
+++ b/AGENTS.md
@@ -10,7 +10,7 @@ This file provides guidelines for codex agents contributing to the Sortana proje
 - `options/`: The options page HTML, JavaScript and bundled Bulma CSS (v1.0.3).
 - `details.html` and `details.js`: View AI reasoning and clear cache for a message.
 - `resources/`: Images and other static files.
-- `prompt_templates/`: Custom/legacy templated message material kept in-repo for non-native prompt flows.
+- `prompt_templates/`: Provider-specific templated message formats for non-OpenAI flows (qwen, mistral, harmony, plus legacy openai template material kept in-repo).
 - `build-xpi.ps1`: PowerShell script to package the extension.
 - `build-xpi.sh`: Bash script to package the extension.
 - `resources/svg2img.ps1`: PowerShell script to regenerate themed PNG icons from SVGs.
@@ -38,7 +38,7 @@ There are currently no automated tests for this project. If you add tests in the
 
 Sortana targets `POST /v1/chat/completions`. The endpoint value stored in settings is a base URL; the full request URL is constructed by appending `/v1/chat/completions` (adding a slash when needed) and defaulting to `https://` if no scheme is provided. Endpoint normalization strips a trailing `/v1`, `/v1/chat/completions`, `/v1/completions`, or `/v1/models`. The options page can query `/v1/models` from the same base URL to populate the Model dropdown; selecting **None** omits the `model` field from the request payload. Advanced options allow an optional API key plus `OpenAI-Organization` and `OpenAI-Project` headers; these headers are only sent when values are provided.
 
-Requests use a Chat Completions `messages` array and ask for strict JSON schema output via `response_format`. Built-in request formats send native `system` and `user` chat messages; only the custom format sends a single templated user message. Responses are parsed from `choices[0].message`, with `match` as a boolean and `reason` as a short string, and parsing falls back to the last JSON object if a backend prepends extra reasoning text. Unsupported OpenAI sampling fields are filtered out, and the saved `max_tokens` setting is translated to `max_completion_tokens`.
+Requests use a Chat Completions `messages` array and ask for strict JSON schema output via `response_format`. Responses are parsed from `choices[0].message`, with `match` as a boolean and `reason` as a short string. Unsupported OpenAI sampling fields are filtered out, and the saved `max_tokens` setting is translated to `max_completion_tokens`.
 
 ## Documentation
diff --git a/README.md b/README.md
index 83bf683..a9c08ae 100644
--- a/README.md
+++ b/README.md
@@ -15,7 +15,7 @@ Classification requests ask the model for structured JSON output with a required
 - **Configurable endpoint** – set the classification service base URL on the options page.
 - **Model selection** – load available models from the endpoint and choose one (or omit the model field).
 - **Optional OpenAI auth headers** – provide an API key plus optional organization/project headers when needed.
-- **Request formats** – built-in formats use native chat messages; a custom format can still send one templated user message when needed.
+- **Request formats** – use native OpenAI chat messages or choose Qwen, Mistral, Harmony (gpt-oss), or a custom templated message format.
 - **Custom system prompts** – tailor the instructions sent to the model for more precise results.
 - **Persistent result caching** – classification results and reasoning are saved to disk so messages aren't re-evaluated across restarts.
 - **Advanced parameters** – tune generation settings like temperature and top‑p from the options page. Unsupported OpenAI sampling fields are filtered out automatically.
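The endpoint handling described in the AGENTS.md hunk above (default to `https://` when no scheme is given, strip a trailing API path during normalization, append `/v1/chat/completions` when building the request URL) can be sketched roughly as follows. `normalizeEndpoint` and `buildRequestUrl` are illustrative names, not the extension's actual functions:

```javascript
// Sketch only: illustrates the normalization rules described in AGENTS.md,
// not the Sortana implementation itself.
function normalizeEndpoint(input) {
    let url = input.trim();
    // Default to https:// when no scheme is provided.
    if (!/^[a-z][a-z0-9+.-]*:\/\//i.test(url)) {
        url = "https://" + url;
    }
    // Drop trailing slashes, then strip a trailing API path so only
    // the base URL is stored. Longer suffixes are checked before /v1.
    url = url.replace(/\/+$/, "");
    for (const suffix of ["/v1/chat/completions", "/v1/completions", "/v1/models", "/v1"]) {
        if (url.endsWith(suffix)) {
            url = url.slice(0, -suffix.length);
            break;
        }
    }
    return url;
}

function buildRequestUrl(base) {
    // Append /v1/chat/completions, adding a slash when needed.
    return base.replace(/\/+$/, "") + "/v1/chat/completions";
}
```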
diff --git a/modules/AiClassifier.js b/modules/AiClassifier.js
index 3896ef6..7736b6f 100644
--- a/modules/AiClassifier.js
+++ b/modules/AiClassifier.js
@@ -268,7 +268,7 @@ Classification criterion: ${criterion}`;
 }
 
 function buildMessages(body, criterion) {
-    if (gTemplateName !== "custom") {
+    if (gTemplateName === "openai") {
         return [
             {
                 role: "system",
@@ -391,56 +391,6 @@ function extractMessageContent(content) {
     };
 }
 
-function extractLastJsonObject(text) {
-    if (typeof text !== "string" || !text) {
-        return null;
-    }
-
-    let last = null;
-    let start = -1;
-    let depth = 0;
-    let inString = false;
-    let escape = false;
-
-    for (let i = 0; i < text.length; i += 1) {
-        const ch = text[i];
-        if (inString) {
-            if (escape) {
-                escape = false;
-                continue;
-            }
-            if (ch === "\\") {
-                escape = true;
-                continue;
-            }
-            if (ch === "\"") {
-                inString = false;
-            }
-            continue;
-        }
-        if (ch === "\"") {
-            inString = true;
-            continue;
-        }
-        if (ch === "{") {
-            if (depth === 0) {
-                start = i;
-            }
-            depth += 1;
-            continue;
-        }
-        if (ch === "}" && depth > 0) {
-            depth -= 1;
-            if (depth === 0 && start !== -1) {
-                last = text.slice(start, i + 1);
-                start = -1;
-            }
-        }
-    }
-
-    return last;
-}
-
 function parseMatch(result) {
     const message = result?.choices?.[0]?.message;
     if (!message || typeof message !== "object") {
@@ -474,17 +424,8 @@ function parseMatch(result) {
     try {
         obj = JSON.parse(extracted.text);
     } catch (e) {
-        const candidate = extractLastJsonObject(extracted.text);
-        if (!candidate) {
-            reportParseError("Failed to parse JSON from AI response.", extracted.text.slice(0, 800));
-            return { matched: false, reason: "" };
-        }
-        try {
-            obj = JSON.parse(candidate);
-        } catch (inner) {
-            reportParseError("Failed to parse JSON from AI response.", extracted.text.slice(0, 800));
-            return { matched: false, reason: "" };
-        }
+        reportParseError("Failed to parse JSON from AI response.", extracted.text.slice(0, 800));
+        return { matched: false, reason: "" };
     }
 
     if (typeof obj?.match !== "boolean") {
diff --git a/options/options.html b/options/options.html
index 4367851..934fe2d 100644
--- a/options/options.html
+++ b/options/options.html
@@ -99,7 +99,7 @@
-Built-in formats use native chat messages over Chat Completions. Custom format sends one templated user message.
+OpenAI Chat uses native chat messages. Other formats send one templated user message over Chat Completions.
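With the AiClassifier.js change above, a response that wraps its JSON in extra reasoning text is reported as a parse error rather than salvaged via `extractLastJsonObject`. A minimal sketch of the resulting strict behavior; `parseStrict` is a hypothetical stand-in for the patched `parseMatch`, with content extraction simplified to `message.content` and error reporting omitted:

```javascript
// Sketch of the strict parsing path after this change: the message content
// must itself be valid JSON with a boolean `match`; there is no longer a
// fallback that hunts for the last JSON object in surrounding text.
function parseStrict(result) {
    const content = result?.choices?.[0]?.message?.content;
    if (typeof content !== "string") {
        return { matched: false, reason: "" };
    }
    let obj;
    try {
        obj = JSON.parse(content);
    } catch (e) {
        // Previously a last-JSON-object fallback ran here; now this is final.
        return { matched: false, reason: "" };
    }
    if (typeof obj?.match !== "boolean") {
        return { matched: false, reason: "" };
    }
    return { matched: obj.match, reason: typeof obj.reason === "string" ? obj.reason : "" };
}
```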