Migrate classifier to chat completions

Jordan Wages 2026-04-19 01:36:18 -05:00
commit d48557fe5b
5 changed files with 170 additions and 87 deletions

View file

@@ -10,7 +10,7 @@ This file provides guidelines for codex agents contributing to the Sortana proje
- `options/`: The options page HTML, JavaScript and bundled Bulma CSS (v1.0.3).
- `details.html` and `details.js`: View AI reasoning and clear cache for a message.
- `resources/`: Images and other static files.
- `prompt_templates/`: Prompt template files for the AI service (openai, qwen, mistral, harmony).
- `prompt_templates/`: Provider-specific templated message formats for non-OpenAI flows (qwen, mistral, harmony), plus legacy openai template material kept in-repo.
- `build-xpi.ps1`: PowerShell script to package the extension.
- `build-xpi.sh`: Bash script to package the extension.
@@ -33,10 +33,10 @@ There are currently no automated tests for this project. If you add tests in the
## Endpoint Notes
Sortana targets the `/v1/completions` API. The endpoint value stored in settings is a base URL; the full request URL is constructed by appending `/v1/completions` (adding a slash when needed) and defaulting to `https://` if no scheme is provided.
Sortana targets `POST /v1/chat/completions`. The endpoint value stored in settings is a base URL; the full request URL is constructed by appending `/v1/chat/completions` (adding a slash when needed) and defaulting to `https://` if no scheme is provided. Endpoint normalization strips a trailing `/v1`, `/v1/chat/completions`, `/v1/completions`, or `/v1/models`.
The options page can query `/v1/models` from the same base URL to populate the Model dropdown; selecting **None** omits the `model` field from the request payload.
Advanced options allow an optional API key plus `OpenAI-Organization` and `OpenAI-Project` headers; these headers are only sent when values are provided.
Responses are expected to include a JSON object with `match` (or `matched`) plus a short `reason` string; the parser extracts the last JSON object in the response text and ignores any surrounding commentary.
Requests use a Chat Completions `messages` array and ask for strict JSON schema output via `response_format`. Responses are parsed from `choices[0].message`, with `match` as a boolean and `reason` as a short string. Unsupported OpenAI sampling fields are filtered out, and the saved `max_tokens` setting is translated to `max_completion_tokens`.
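For example, the normalization rules above map each of these bases to the same style of request URL (sample hosts are hypothetical; `127.0.0.1:5000` is the built-in default):
- `localhost:8080` → `POST https://localhost:8080/v1/chat/completions`
- `http://127.0.0.1:5000/v1` → `POST http://127.0.0.1:5000/v1/chat/completions`
- `https://api.example.com/v1/models/` → `POST https://api.example.com/v1/chat/completions`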
## Documentation

View file

@@ -4,22 +4,21 @@
Sortana is an experimental Thunderbird add-on that integrates an AI-powered filter rule.
It allows you to classify email messages by sending their contents to a configurable
HTTP endpoint. Sortana uses the `/v1/completions` API; the options page stores a base
URL and appends `/v1/completions` when sending requests. The endpoint should respond
with JSON indicating whether the message meets a specified criterion, including a
short reasoning summary.
Responses are parsed by extracting the last JSON object in the response text and
expecting a `match` (or `matched`) boolean plus a `reason` string.
HTTP endpoint. Sortana uses `POST /v1/chat/completions`; the options page stores a
base URL and appends `/v1/chat/completions` when sending classification requests.
The same base URL is used with `/v1/models` when refreshing the model list.
Classification requests ask the model for structured JSON output with a required
`match` boolean and `reason` string.
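For instance, a conforming response is a single JSON object (values illustrative):

```json
{"match": true, "reason": "The message is a promotional newsletter."}
```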
## Features
- **Configurable endpoint**: set the classification service base URL on the options page.
- **Model selection**: load available models from the endpoint and choose one (or omit the model field).
- **Optional OpenAI auth headers**: provide an API key plus optional organization/project headers when needed.
- **Prompt templates**: choose between OpenAI/ChatML, Qwen, Mistral, Harmony (gpt-oss), or provide your own custom template.
- **Request formats**: use native OpenAI chat messages or choose Qwen, Mistral, Harmony (gpt-oss), or a custom templated message format.
- **Custom system prompts**: tailor the instructions sent to the model for more precise results.
- **Persistent result caching**: classification results and reasoning are saved to disk so messages aren't re-evaluated across restarts.
- **Advanced parameters**: tune generation settings like temperature, top_p and more from the options page.
- **Advanced parameters**: tune generation settings like temperature and top_p from the options page. Unsupported OpenAI sampling fields are filtered out automatically.
- **Markdown conversion**: optionally convert HTML bodies to Markdown before sending them to the AI service.
- **Debug logging**: optional colorized logs help troubleshoot interactions with the AI service.
- **Debug tab**: view the last request payload and a diff between the unaltered message text and the final prompt.
@@ -81,10 +80,11 @@ Sortana is implemented entirely with documented MailExtension/WebExtension APIs.
## Usage
1. Open the add-on's options and set the base URL of your classification service
(Sortana will append `/v1/completions`). Use the Model dropdown to load
`/v1/models` and select a model or choose **None** to omit the `model` field.
Advanced settings include optional API key, organization, and project headers
for OpenAI-hosted endpoints.
(Sortana will append `/v1/chat/completions`). Endpoints ending in `/v1`,
`/v1/chat/completions`, `/v1/completions`, or `/v1/models` are normalized back
to the same base URL. Use the Model dropdown to load `/v1/models` and select a
model or choose **None** to omit the `model` field. Advanced settings include
optional API key, organization, and project headers for OpenAI-hosted endpoints.
2. Use the **Classification Rules** section to add a criterion and optional
actions such as tagging, moving, copying, forwarding, replying,
deleting or archiving a message when it matches. Drag rules to
@@ -99,6 +99,11 @@ Sortana is implemented entirely with documented MailExtension/WebExtension APIs.
configured rules.
4. If the toolbar icon shows a red X, it will clear after a few seconds. Open the Errors tab in Options to review the latest failures.
OpenAI Chat requests are sent with a `messages` array plus a strict
`response_format` JSON schema. Sortana maps the saved `max_tokens` setting to
`max_completion_tokens` for Chat Completions and only forwards OpenAI-supported
sampling fields.
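As a sketch, a request body assembled under these rules looks roughly like the following (model name and sampling values are illustrative; `model` is omitted when **None** is selected):

```json
{
  "model": "my-local-model",
  "messages": [
    { "role": "system", "content": "You are an email-classification assistant. ..." },
    { "role": "user", "content": "Email contents:\n...\nClassification criterion: Is this an invoice?" }
  ],
  "response_format": {
    "type": "json_schema",
    "json_schema": {
      "name": "email_classification",
      "strict": true,
      "schema": {
        "type": "object",
        "properties": {
          "match": { "type": "boolean" },
          "reason": { "type": "string" }
        },
        "required": ["match", "reason"],
        "additionalProperties": false
      }
    }
  },
  "max_completion_tokens": 256,
  "temperature": 0.2
}
```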
### Example Filters
Here are some useful and fun example criteria you can use in your filters. Each criterion should be answerable as either `true` or `false`.

View file

@@ -4,12 +4,12 @@
  "doesntMatch": { "message": "doesn't match" },
  "options.title": { "message": "AI Filter Options" },
  "options.endpoint": { "message": "Endpoint" },
  "options.template": { "message": "Prompt template" },
  "options.template": { "message": "Request format" },
  "options.customTemplate": { "message": "Custom template" },
  "options.systemInstructions": { "message": "System instructions" },
  "options.reset": { "message": "Reset to default" },
  "options.placeholders": { "message": "Placeholders: {{system}}, {{email}}, {{query}}" },
  "template.openai": { "message": "OpenAI / ChatML" },
  "template.openai": { "message": "OpenAI Chat" },
  "template.qwen": { "message": "Qwen" },
  "template.mistral": { "message": "Mistral" },
  "template.harmony": { "message": "Harmony (gpt-oss)" },

View file

@@ -4,7 +4,7 @@ import { DEFAULT_AI_PARAMS } from "./defaultParams.js";
const storage = (globalThis.messenger ?? globalThis.browser).storage;
const COMPLETIONS_PATH = "/v1/completions";
const CHAT_COMPLETIONS_PATH = "/v1/chat/completions";
const MODELS_PATH = "/v1/models";
const SYSTEM_PREFIX = `You are an email-classification assistant.
@@ -14,11 +14,26 @@ Read the email below and the classification criterion provided by the user.
const DEFAULT_CUSTOM_SYSTEM_PROMPT = "Determine whether the email satisfies the user's criterion.";
const SYSTEM_SUFFIX = `
Return ONLY a JSON object on a single line of the form:
{"match": true, "reason": "<short explanation>"} - if the email satisfies the criterion
{"match": false, "reason": "<short explanation>"} - otherwise
Return JSON that matches the requested schema exactly.
Set "match" to true when the email satisfies the criterion, otherwise false.
Set "reason" to a short explanation grounded in the email contents.
Do not add any other keys, text, or formatting.`;
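// Strict structured-output schema: the model must return exactly
// {"match": <boolean>, "reason": <string>} and nothing else.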
const RESPONSE_FORMAT = {
  type: "json_schema",
  json_schema: {
    name: "email_classification",
    strict: true,
    schema: {
      type: "object",
      properties: {
        match: { type: "boolean" },
        reason: { type: "string" },
      },
      required: ["match", "reason"],
      additionalProperties: false,
    },
  },
};
let gEndpointBase = "http://127.0.0.1:5000";
let gEndpoint = buildEndpointUrl(gEndpointBase);
@@ -44,7 +59,7 @@ function normalizeEndpointBase(endpoint) {
  if (!base) {
    return "";
  }
  base = base.replace(/\/v1\/(completions|models)\/?$/i, "");
  base = base.replace(/\/v1(?:\/(?:chat\/completions|completions|models))?\/?$/i, "");
  return base;
}
@@ -55,7 +70,7 @@ function buildEndpointUrl(endpointBase) {
  }
  const withScheme = /^https?:\/\//i.test(base) ? base : `https://${base}`;
  const needsSlash = withScheme.endsWith("/");
  const path = COMPLETIONS_PATH.replace(/^\//, "");
  const path = CHAT_COMPLETIONS_PATH.replace(/^\//, "");
  return `${withScheme}${needsSlash ? "" : "/"}${path}`;
}
@@ -201,7 +216,9 @@ async function setConfig(config = {}) {
  if (typeof config.debugLogging === "boolean") {
    setDebug(config.debugLogging);
  }
  if (gTemplateName === "custom") {
  if (gTemplateName === "openai") {
    gTemplateText = "";
  } else if (gTemplateName === "custom") {
    gTemplateText = gCustomTemplate;
  } else {
    gTemplateText = await loadTemplate(gTemplateName);
@@ -243,6 +260,35 @@ function buildPrompt(body, criterion) {
  return template.replace(/{{\s*(\w+)\s*}}/g, (m, key) => data[key] || "");
}
function buildUserMessage(body, criterion) {
  return `Email contents:
${body}
Classification criterion: ${criterion}`;
}
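// The native OpenAI format sends separate system and user chat messages;
// every other format collapses the prompt into one templated user message.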
function buildMessages(body, criterion) {
  if (gTemplateName === "openai") {
    return [
      {
        role: "system",
        content: buildSystemPrompt(),
      },
      {
        role: "user",
        content: buildUserMessage(body, criterion),
      },
    ];
  }
  return [
    {
      role: "user",
      content: buildPrompt(body, criterion),
    },
  ];
}
function getCachedResult(cacheKey) {
  if (!gCacheLoaded) {
    return null;
@@ -263,14 +309,41 @@ function getReason(cacheKey) {
  return cacheKey && entry ? entry.reason || null : null;
}
function buildPayload(text, criterion) {
  let payloadObj = Object.assign({
    prompt: buildPrompt(text, criterion)
  }, gAiParams);
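// Only forward sampling fields that OpenAI Chat Completions accepts, and map
// the saved max_tokens setting onto max_completion_tokens.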
function buildOpenAiParams() {
  const params = {};
  if (Number.isFinite(gAiParams.max_tokens) && gAiParams.max_tokens > 0) {
    params.max_completion_tokens = Math.trunc(gAiParams.max_tokens);
  }
  if (Number.isFinite(gAiParams.temperature)) {
    params.temperature = gAiParams.temperature;
  }
  if (Number.isFinite(gAiParams.top_p)) {
    params.top_p = gAiParams.top_p;
  }
  if (Number.isFinite(gAiParams.presence_penalty)) {
    params.presence_penalty = gAiParams.presence_penalty;
  }
  if (Number.isFinite(gAiParams.frequency_penalty)) {
    params.frequency_penalty = gAiParams.frequency_penalty;
  }
  if (Number.isInteger(gAiParams.seed) && gAiParams.seed >= 0) {
    params.seed = gAiParams.seed;
  }
  return params;
}
function buildPayloadObject(text, criterion) {
  const payloadObj = {
    messages: buildMessages(text, criterion),
    response_format: RESPONSE_FORMAT,
    ...buildOpenAiParams(),
  };
  if (gModel) {
    payloadObj.model = gModel;
  }
  return JSON.stringify(payloadObj);
  return payloadObj;
}
function reportParseError(message, detail) {
@@ -290,78 +363,81 @@ function reportParseError(message, detail) {
  }
}
function extractLastJsonObject(text) {
  let last = null;
  let start = -1;
  let depth = 0;
  let inString = false;
  let escape = false;
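// Chat message content may be a plain string or an array of typed parts;
// gather "text" and "refusal" parts separately.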
function extractMessageContent(content) {
  if (typeof content === "string") {
    return { text: content, refusal: "" };
  }
  if (!Array.isArray(content)) {
    return { text: "", refusal: "" };
  }
  for (let i = 0; i < text.length; i += 1) {
    const ch = text[i];
    if (inString) {
      if (escape) {
        escape = false;
  const textParts = [];
  const refusalParts = [];
  for (const part of content) {
    if (!part || typeof part !== "object") {
      continue;
    }
      if (ch === "\\") {
        escape = true;
        continue;
      }
      if (ch === "\"") {
        inString = false;
      }
      continue;
    }
    if (ch === "\"") {
      inString = true;
      continue;
    }
    if (ch === "{") {
      if (depth === 0) {
        start = i;
      }
      depth += 1;
      continue;
    }
    if (ch === "}" && depth > 0) {
      depth -= 1;
      if (depth === 0 && start !== -1) {
        last = text.slice(start, i + 1);
        start = -1;
    if (part.type === "text" && typeof part.text === "string") {
      textParts.push(part.text);
    }
    if (part.type === "refusal" && typeof part.refusal === "string") {
      refusalParts.push(part.refusal);
    }
  }
  return last;
  return {
    text: textParts.join("\n").trim(),
    refusal: refusalParts.join("\n").trim(),
  };
}
function parseMatch(result) {
  const rawText = result.choices?.[0]?.text || "";
  const candidate = extractLastJsonObject(rawText);
  if (!candidate) {
    reportParseError("No JSON object found in AI response.", rawText.slice(0, 800));
  const message = result?.choices?.[0]?.message;
  if (!message || typeof message !== "object") {
    reportParseError("AI response missing assistant message.", JSON.stringify(result).slice(0, 800));
    return { matched: false, reason: "" };
  }
  if (typeof message.refusal === "string" && message.refusal.trim()) {
    reportParseError("Model refused classification request.", message.refusal.slice(0, 800));
    return { matched: false, reason: message.refusal.trim() };
  }
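  // Some servers also return a pre-parsed object for structured output;
  // use it when it already has the expected shape.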
  if (message.parsed && typeof message.parsed === "object") {
    const parsed = message.parsed;
    if (typeof parsed.match === "boolean" && typeof parsed.reason === "string") {
      return { matched: parsed.match, reason: parsed.reason };
    }
  }
  const extracted = extractMessageContent(message.content);
  if (extracted.refusal) {
    reportParseError("Model refused classification request.", extracted.refusal.slice(0, 800));
    return { matched: false, reason: extracted.refusal };
  }
  if (!extracted.text) {
    reportParseError("AI response missing assistant message content.", JSON.stringify(message).slice(0, 800));
    return { matched: false, reason: "" };
  }
  let obj;
  try {
    obj = JSON.parse(candidate);
    obj = JSON.parse(extracted.text);
  } catch (e) {
    reportParseError("Failed to parse JSON from AI response.", candidate.slice(0, 800));
    reportParseError("Failed to parse JSON from AI response.", extracted.text.slice(0, 800));
    return { matched: false, reason: "" };
  }
  const matchValue = Object.prototype.hasOwnProperty.call(obj, "match") ? obj.match : obj.matched;
  const matched = matchValue === true;
  if (matchValue !== true && matchValue !== false) {
    reportParseError("AI response missing valid match boolean.", candidate.slice(0, 800));
  if (typeof obj?.match !== "boolean") {
    reportParseError("AI response missing valid match boolean.", extracted.text.slice(0, 800));
    return { matched: false, reason: "" };
  }
  if (typeof obj?.reason !== "string") {
    reportParseError("AI response missing valid reason string.", extracted.text.slice(0, 800));
    return { matched: false, reason: "" };
  }
  const reasonValue = obj.reason ?? obj.reasoning ?? obj.explaination;
  const reason = typeof reasonValue === "string" ? reasonValue : "";
  return { matched, reason };
  return { matched: obj.match, reason: obj.reason };
}
function cacheEntry(cacheKey, matched, reason) {
@@ -427,9 +503,10 @@ async function classifyText(text, criterion, cacheKey = null) {
    return cached;
  }
  const payload = buildPayload(text, criterion);
  const payloadObj = buildPayloadObject(text, criterion);
  const payload = JSON.stringify(payloadObj);
  try {
    await storage.local.set({ lastPayload: JSON.parse(payload) });
    await storage.local.set({ lastPayload: payloadObj });
  } catch (e) {
    aiLog('failed to save last payload', { level: 'warn' }, e);
  }

View file

@@ -93,12 +93,13 @@
</div>
<div class="field">
  <label class="label" for="template">Prompt template</label>
  <label class="label" for="template">Request format</label>
  <div class="control">
    <div class="select is-fullwidth">
      <select id="template"></select>
    </div>
  </div>
  <p class="help">OpenAI Chat uses native chat messages. Other formats send one templated user message over Chat Completions.</p>
</div>
<div id="custom-template-container" class="field is-hidden">
@@ -203,7 +204,7 @@
  </label>
</div>
<div class="field">
  <label class="label" for="max_tokens">Max tokens</label>
  <label class="label" for="max_tokens">Max completion tokens</label>
  <div class="control">
    <input class="input" type="number" id="max_tokens">
  </div>