Revert "Migrate classifier to chat completions"

This reverts commit d48557fe5b.

parent ce793ff757
commit 245bb2e3e1

5 changed files with 87 additions and 170 deletions
@@ -10,7 +10,7 @@ This file provides guidelines for codex agents contributing to the Sortana project
 - `options/`: The options page HTML, JavaScript and bundled Bulma CSS (v1.0.3).
 - `details.html` and `details.js`: View AI reasoning and clear cache for a message.
 - `resources/`: Images and other static files.
-- `prompt_templates/`: Provider-specific templated message formats for non-OpenAI flows (qwen, mistral, harmony, plus legacy openai template material kept in-repo).
+- `prompt_templates/`: Prompt template files for the AI service (openai, qwen, mistral, harmony).
 - `build-xpi.ps1`: PowerShell script to package the extension.
 - `build-xpi.sh`: Bash script to package the extension.
 - `resources/svg2img.ps1`: PowerShell script to regenerate themed PNG icons from SVGs.
@@ -35,10 +35,10 @@ There are currently no automated tests for this project. If you add tests in the
 
 ## Endpoint Notes
 
-Sortana targets `POST /v1/chat/completions`. The endpoint value stored in settings is a base URL; the full request URL is constructed by appending `/v1/chat/completions` (adding a slash when needed) and defaulting to `https://` if no scheme is provided. Endpoint normalization strips a trailing `/v1`, `/v1/chat/completions`, `/v1/completions`, or `/v1/models`.
+Sortana targets the `/v1/completions` API. The endpoint value stored in settings is a base URL; the full request URL is constructed by appending `/v1/completions` (adding a slash when needed) and defaulting to `https://` if no scheme is provided.
 The options page can query `/v1/models` from the same base URL to populate the Model dropdown; selecting **None** omits the `model` field from the request payload.
 Advanced options allow an optional API key plus `OpenAI-Organization` and `OpenAI-Project` headers; these headers are only sent when values are provided.
-Requests use a Chat Completions `messages` array and ask for strict JSON schema output via `response_format`. Responses are parsed from `choices[0].message`, with `match` as a boolean and `reason` as a short string. Unsupported OpenAI sampling fields are filtered out, and the saved `max_tokens` setting is translated to `max_completion_tokens`.
+Responses are expected to include a JSON object with `match` (or `matched`) plus a short `reason` string; the parser extracts the last JSON object in the response text and ignores any surrounding commentary.
 
 ## Documentation

README.md (29 lines changed)

@@ -4,21 +4,22 @@
 
 Sortana is an experimental Thunderbird add-on that integrates an AI-powered filter rule.
 It allows you to classify email messages by sending their contents to a configurable
-HTTP endpoint. Sortana uses `POST /v1/chat/completions`; the options page stores a
-base URL and appends `/v1/chat/completions` when sending classification requests.
-The same base URL is used with `/v1/models` when refreshing the model list.
-Classification requests ask the model for structured JSON output with a required
-`match` boolean and `reason` string.
+HTTP endpoint. Sortana uses the `/v1/completions` API; the options page stores a base
+URL and appends `/v1/completions` when sending requests. The endpoint should respond
+with JSON indicating whether the message meets a specified criterion, including a
+short reasoning summary.
+Responses are parsed by extracting the last JSON object in the response text and
+expecting a `match` (or `matched`) boolean plus a `reason` string.
 
 ## Features
 
 - **Configurable endpoint** – set the classification service base URL on the options page.
 - **Model selection** – load available models from the endpoint and choose one (or omit the model field).
 - **Optional OpenAI auth headers** – provide an API key plus optional organization/project headers when needed.
-- **Request formats** – use native OpenAI chat messages or choose Qwen, Mistral, Harmony (gpt-oss), or a custom templated message format.
+- **Prompt templates** – choose between OpenAI/ChatML, Qwen, Mistral, Harmony (gpt-oss), or provide your own custom template.
 - **Custom system prompts** – tailor the instructions sent to the model for more precise results.
 - **Persistent result caching** – classification results and reasoning are saved to disk so messages aren't re-evaluated across restarts.
-- **Advanced parameters** – tune generation settings like temperature and top‑p from the options page. Unsupported OpenAI sampling fields are filtered out automatically.
+- **Advanced parameters** – tune generation settings like temperature, top‑p and more from the options page.
 - **Markdown conversion** – optionally convert HTML bodies to Markdown before sending them to the AI service.
 - **Debug logging** – optional colorized logs help troubleshoot interactions with the AI service.
 - **Debug tab** – view the last request payload and a diff between the unaltered message text and the final prompt.
@@ -82,11 +83,10 @@ Sortana is implemented entirely with documented MailExtension/WebExtension APIs.
 ## Usage
 
 1. Open the add-on's options and set the base URL of your classification service
-   (Sortana will append `/v1/chat/completions`). Endpoints ending in `/v1`,
-   `/v1/chat/completions`, `/v1/completions`, or `/v1/models` are normalized back
-   to the same base URL. Use the Model dropdown to load `/v1/models` and select a
-   model or choose **None** to omit the `model` field. Advanced settings include
-   optional API key, organization, and project headers for OpenAI-hosted endpoints.
+   (Sortana will append `/v1/completions`). Use the Model dropdown to load
+   `/v1/models` and select a model or choose **None** to omit the `model` field.
+   Advanced settings include optional API key, organization, and project headers
+   for OpenAI-hosted endpoints.
 2. Use the **Classification Rules** section to add a criterion and optional
    actions such as tagging, moving, copying, forwarding, replying,
    deleting or archiving a message when it matches. Drag rules to
@@ -101,11 +101,6 @@ Sortana is implemented entirely with documented MailExtension/WebExtension APIs.
    configured rules.
 4. If the toolbar icon shows a red X, it will clear after a few seconds. Open the Errors tab in Options to review the latest failures.
 
-OpenAI Chat requests are sent with a `messages` array plus a strict
-`response_format` JSON schema. Sortana maps the saved `max_tokens` setting to
-`max_completion_tokens` for Chat Completions and only forwards OpenAI-supported
-sampling fields.
-
 ### Example Filters
 
 Here are some useful and fun example criteria you can use in your filters. Filters should be able to be answered as either `true` or `false`.
@@ -4,12 +4,12 @@
   "doesntMatch": { "message": "doesn't match" },
   "options.title": { "message": "AI Filter Options" },
   "options.endpoint": { "message": "Endpoint" },
-  "options.template": { "message": "Request format" },
+  "options.template": { "message": "Prompt template" },
   "options.customTemplate": { "message": "Custom template" },
   "options.systemInstructions": { "message": "System instructions" },
   "options.reset": { "message": "Reset to default" },
   "options.placeholders": { "message": "Placeholders: {{system}}, {{email}}, {{query}}" },
-  "template.openai": { "message": "OpenAI Chat" },
+  "template.openai": { "message": "OpenAI / ChatML" },
   "template.qwen": { "message": "Qwen" },
   "template.mistral": { "message": "Mistral" },
   "template.harmony": { "message": "Harmony (gpt-oss)" },
@@ -4,7 +4,7 @@ import { DEFAULT_AI_PARAMS } from "./defaultParams.js";
 
 const storage = (globalThis.messenger ?? globalThis.browser).storage;
 
-const CHAT_COMPLETIONS_PATH = "/v1/chat/completions";
+const COMPLETIONS_PATH = "/v1/completions";
 const MODELS_PATH = "/v1/models";
 
 const SYSTEM_PREFIX = `You are an email-classification assistant.
@@ -14,26 +14,11 @@ Read the email below and the classification criterion provided by the user.
 const DEFAULT_CUSTOM_SYSTEM_PROMPT = "Determine whether the email satisfies the user's criterion.";
 
 const SYSTEM_SUFFIX = `
-Return JSON that matches the requested schema exactly.
-Set "match" to true when the email satisfies the criterion, otherwise false.
-Set "reason" to a short explanation grounded in the email contents.`;
-
-const RESPONSE_FORMAT = {
-  type: "json_schema",
-  json_schema: {
-    name: "email_classification",
-    strict: true,
-    schema: {
-      type: "object",
-      properties: {
-        match: { type: "boolean" },
-        reason: { type: "string" },
-      },
-      required: ["match", "reason"],
-      additionalProperties: false,
-    },
-  },
-};
+Return ONLY a JSON object on a single line of the form:
+{"match": true, "reason": "<short explanation>"} - if the email satisfies the criterion
+{"match": false, "reason": "<short explanation>"} - otherwise
+Do not add any other keys, text, or formatting.`;
 
 let gEndpointBase = "http://127.0.0.1:5000";
 let gEndpoint = buildEndpointUrl(gEndpointBase);
@@ -59,7 +44,7 @@ function normalizeEndpointBase(endpoint) {
   if (!base) {
     return "";
   }
-  base = base.replace(/\/v1(?:\/(?:chat\/completions|completions|models))?\/?$/i, "");
+  base = base.replace(/\/v1\/(completions|models)\/?$/i, "");
   return base;
 }
@@ -70,7 +55,7 @@ function buildEndpointUrl(endpointBase) {
   }
   const withScheme = /^https?:\/\//i.test(base) ? base : `https://${base}`;
   const needsSlash = withScheme.endsWith("/");
-  const path = CHAT_COMPLETIONS_PATH.replace(/^\//, "");
+  const path = COMPLETIONS_PATH.replace(/^\//, "");
   return `${withScheme}${needsSlash ? "" : "/"}${path}`;
 }
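For illustration, the reverted endpoint rules boil down to: strip a trailing `/v1/completions` or `/v1/models`, default to `https://` when no scheme is given, then append `/v1/completions`. The sketch below is a standalone approximation, not the add-on's actual module; it folds normalization and URL building into one call for brevity, and the names merely mirror the diff.

```javascript
// Standalone sketch of the reverted endpoint handling (an approximation,
// not the shipped module).
const COMPLETIONS_PATH = "/v1/completions";

// Strip a trailing /v1/completions or /v1/models so the stored value
// is always a bare base URL.
function normalizeEndpointBase(endpoint) {
  const base = (endpoint || "").trim();
  if (!base) {
    return "";
  }
  return base.replace(/\/v1\/(completions|models)\/?$/i, "");
}

// Default to https:// when no scheme is given, then append the
// completions path, adding a slash only when needed.
function buildEndpointUrl(endpointBase) {
  const base = normalizeEndpointBase(endpointBase);
  if (!base) {
    return "";
  }
  const withScheme = /^https?:\/\//i.test(base) ? base : `https://${base}`;
  const endsWithSlash = withScheme.endsWith("/");
  return `${withScheme}${endsWithSlash ? "" : "/"}${COMPLETIONS_PATH.replace(/^\//, "")}`;
}
```

Note the same base URL can then be reused with `/v1/models` when refreshing the model list.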
@@ -216,9 +201,7 @@ async function setConfig(config = {}) {
   if (typeof config.debugLogging === "boolean") {
     setDebug(config.debugLogging);
   }
-  if (gTemplateName === "openai") {
-    gTemplateText = "";
-  } else if (gTemplateName === "custom") {
+  if (gTemplateName === "custom") {
     gTemplateText = gCustomTemplate;
   } else {
     gTemplateText = await loadTemplate(gTemplateName);
@@ -260,35 +243,6 @@ function buildPrompt(body, criterion) {
   return template.replace(/{{\s*(\w+)\s*}}/g, (m, key) => data[key] || "");
 }
-
-function buildUserMessage(body, criterion) {
-  return `Email contents:
-${body}
-
-Classification criterion: ${criterion}`;
-}
-
-function buildMessages(body, criterion) {
-  if (gTemplateName === "openai") {
-    return [
-      {
-        role: "system",
-        content: buildSystemPrompt(),
-      },
-      {
-        role: "user",
-        content: buildUserMessage(body, criterion),
-      },
-    ];
-  }
-
-  return [
-    {
-      role: "user",
-      content: buildPrompt(body, criterion),
-    },
-  ];
-}
 
 function getCachedResult(cacheKey) {
   if (!gCacheLoaded) {
     return null;
@@ -309,41 +263,14 @@ function getReason(cacheKey) {
   return cacheKey && entry ? entry.reason || null : null;
 }
 
-function buildOpenAiParams() {
-  const params = {};
-
-  if (Number.isFinite(gAiParams.max_tokens) && gAiParams.max_tokens > 0) {
-    params.max_completion_tokens = Math.trunc(gAiParams.max_tokens);
-  }
-  if (Number.isFinite(gAiParams.temperature)) {
-    params.temperature = gAiParams.temperature;
-  }
-  if (Number.isFinite(gAiParams.top_p)) {
-    params.top_p = gAiParams.top_p;
-  }
-  if (Number.isFinite(gAiParams.presence_penalty)) {
-    params.presence_penalty = gAiParams.presence_penalty;
-  }
-  if (Number.isFinite(gAiParams.frequency_penalty)) {
-    params.frequency_penalty = gAiParams.frequency_penalty;
-  }
-  if (Number.isInteger(gAiParams.seed) && gAiParams.seed >= 0) {
-    params.seed = gAiParams.seed;
-  }
-
-  return params;
-}
-
-function buildPayloadObject(text, criterion) {
-  const payloadObj = {
-    messages: buildMessages(text, criterion),
-    response_format: RESPONSE_FORMAT,
-    ...buildOpenAiParams(),
-  };
+function buildPayload(text, criterion) {
+  let payloadObj = Object.assign({
+    prompt: buildPrompt(text, criterion)
+  }, gAiParams);
   if (gModel) {
     payloadObj.model = gModel;
   }
-  return payloadObj;
+  return JSON.stringify(payloadObj);
 }
 
 function reportParseError(message, detail) {
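After the revert, the request body is simply the rendered prompt merged with the stored generation parameters, plus an optional `model` field. A minimal standalone sketch of that shape; the parameter values and model id here are made-up placeholders, not values from the project:

```javascript
// Sketch of the reverted request body: plain-text `prompt`, stored
// generation parameters merged in, optional `model` field.
// `aiParams` and `modelId` are hypothetical stand-ins for the add-on's settings.
const aiParams = { temperature: 0.7, top_p: 0.9, max_tokens: 256 };

function buildPayload(prompt, params, modelId) {
  const payloadObj = Object.assign({ prompt }, params);
  if (modelId) {
    // Selecting "None" in the UI leaves modelId empty, omitting this field.
    payloadObj.model = modelId;
  }
  return JSON.stringify(payloadObj);
}
```

The serialized string is what gets POSTed to the completions endpoint, and the parsed object is what the Debug tab stores as the last payload.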
@@ -363,81 +290,78 @@ function reportParseError(message, detail) {
   }
 }
 
-function extractMessageContent(content) {
-  if (typeof content === "string") {
-    return { text: content, refusal: "" };
-  }
-  if (!Array.isArray(content)) {
-    return { text: "", refusal: "" };
-  }
-
-  const textParts = [];
-  const refusalParts = [];
-  for (const part of content) {
-    if (!part || typeof part !== "object") {
-      continue;
-    }
-    if (part.type === "text" && typeof part.text === "string") {
-      textParts.push(part.text);
-    }
-    if (part.type === "refusal" && typeof part.refusal === "string") {
-      refusalParts.push(part.refusal);
-    }
-  }
-
-  return {
-    text: textParts.join("\n").trim(),
-    refusal: refusalParts.join("\n").trim(),
-  };
-}
+function extractLastJsonObject(text) {
+  let last = null;
+  let start = -1;
+  let depth = 0;
+  let inString = false;
+  let escape = false;
+
+  for (let i = 0; i < text.length; i += 1) {
+    const ch = text[i];
+    if (inString) {
+      if (escape) {
+        escape = false;
+        continue;
+      }
+      if (ch === "\\") {
+        escape = true;
+        continue;
+      }
+      if (ch === "\"") {
+        inString = false;
+      }
+      continue;
+    }
+    if (ch === "\"") {
+      inString = true;
+      continue;
+    }
+    if (ch === "{") {
+      if (depth === 0) {
+        start = i;
+      }
+      depth += 1;
+      continue;
+    }
+    if (ch === "}" && depth > 0) {
+      depth -= 1;
+      if (depth === 0 && start !== -1) {
+        last = text.slice(start, i + 1);
+        start = -1;
+      }
+    }
+  }
+
+  return last;
+}
 
 function parseMatch(result) {
-  const message = result?.choices?.[0]?.message;
-  if (!message || typeof message !== "object") {
-    reportParseError("AI response missing assistant message.", JSON.stringify(result).slice(0, 800));
-    return { matched: false, reason: "" };
-  }
-
-  if (typeof message.refusal === "string" && message.refusal.trim()) {
-    reportParseError("Model refused classification request.", message.refusal.slice(0, 800));
-    return { matched: false, reason: message.refusal.trim() };
-  }
-
-  if (message.parsed && typeof message.parsed === "object") {
-    const parsed = message.parsed;
-    if (typeof parsed.match === "boolean" && typeof parsed.reason === "string") {
-      return { matched: parsed.match, reason: parsed.reason };
-    }
-  }
-
-  const extracted = extractMessageContent(message.content);
-  if (extracted.refusal) {
-    reportParseError("Model refused classification request.", extracted.refusal.slice(0, 800));
-    return { matched: false, reason: extracted.refusal };
-  }
-  if (!extracted.text) {
-    reportParseError("AI response missing assistant message content.", JSON.stringify(message).slice(0, 800));
+  const rawText = result.choices?.[0]?.text || "";
+  const candidate = extractLastJsonObject(rawText);
+  if (!candidate) {
+    reportParseError("No JSON object found in AI response.", rawText.slice(0, 800));
     return { matched: false, reason: "" };
   }
 
   let obj;
   try {
-    obj = JSON.parse(extracted.text);
+    obj = JSON.parse(candidate);
   } catch (e) {
-    reportParseError("Failed to parse JSON from AI response.", extracted.text.slice(0, 800));
+    reportParseError("Failed to parse JSON from AI response.", candidate.slice(0, 800));
     return { matched: false, reason: "" };
   }
 
-  if (typeof obj?.match !== "boolean") {
-    reportParseError("AI response missing valid match boolean.", extracted.text.slice(0, 800));
-    return { matched: false, reason: "" };
-  }
-  if (typeof obj?.reason !== "string") {
-    reportParseError("AI response missing valid reason string.", extracted.text.slice(0, 800));
-    return { matched: false, reason: "" };
-  }
-
-  return { matched: obj.match, reason: obj.reason };
+  const matchValue = Object.prototype.hasOwnProperty.call(obj, "match") ? obj.match : obj.matched;
+  const matched = matchValue === true;
+  if (matchValue !== true && matchValue !== false) {
+    reportParseError("AI response missing valid match boolean.", candidate.slice(0, 800));
+  }
+
+  const reasonValue = obj.reason ?? obj.reasoning ?? obj.explaination;
+  const reason = typeof reasonValue === "string" ? reasonValue : "";
+
+  return { matched, reason };
 }
 
 function cacheEntry(cacheKey, matched, reason) {
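The reverted parser tolerates chatty models by taking only the last top-level `{...}` in the completion text, tracking string and escape state so braces inside JSON strings don't confuse the scan. Below is a self-contained, lightly condensed copy of that scanner together with a usage demo; the sample reply string is invented for illustration.

```javascript
// Condensed copy of the brace-scanning extractor from the revert:
// returns the last top-level {...} substring in `text`, or null.
function extractLastJsonObject(text) {
  let last = null;
  let start = -1;
  let depth = 0;
  let inString = false;
  let escape = false;
  for (let i = 0; i < text.length; i += 1) {
    const ch = text[i];
    if (inString) {
      // Inside a JSON string: only track escapes and the closing quote.
      if (escape) { escape = false; }
      else if (ch === "\\") { escape = true; }
      else if (ch === "\"") { inString = false; }
      continue;
    }
    if (ch === "\"") { inString = true; continue; }
    if (ch === "{") {
      if (depth === 0) start = i;       // remember where a candidate begins
      depth += 1;
      continue;
    }
    if (ch === "}" && depth > 0) {
      depth -= 1;
      if (depth === 0 && start !== -1) {
        last = text.slice(start, i + 1); // keep the most recent complete object
        start = -1;
      }
    }
  }
  return last;
}

// A made-up completion with reasoning chatter before the final answer:
const reply = 'Let me think... {"draft": true} okay.\n{"match": true, "reason": "newsletter"}';
```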
@@ -503,10 +427,9 @@ async function classifyText(text, criterion, cacheKey = null) {
     return cached;
   }
 
-  const payloadObj = buildPayloadObject(text, criterion);
-  const payload = JSON.stringify(payloadObj);
+  const payload = buildPayload(text, criterion);
   try {
-    await storage.local.set({ lastPayload: payloadObj });
+    await storage.local.set({ lastPayload: JSON.parse(payload) });
   } catch (e) {
     aiLog('failed to save last payload', { level: 'warn' }, e);
   }
@@ -93,13 +93,12 @@
 </div>
 
 <div class="field">
-  <label class="label" for="template">Request format</label>
+  <label class="label" for="template">Prompt template</label>
   <div class="control">
     <div class="select is-fullwidth">
       <select id="template"></select>
    </div>
  </div>
-  <p class="help">OpenAI Chat uses native chat messages. Other formats send one templated user message over Chat Completions.</p>
 </div>
 
 <div id="custom-template-container" class="field is-hidden">
@@ -204,7 +203,7 @@
   </label>
 </div>
 <div class="field">
-  <label class="label" for="max_tokens">Max completion tokens</label>
+  <label class="label" for="max_tokens">Max tokens</label>
   <div class="control">
     <input class="input" type="number" id="max_tokens">
   </div>