diff --git a/AGENTS.md b/AGENTS.md index aece578..c51f40c 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -5,14 +5,12 @@ This file provides guidelines for codex agents contributing to the Sortana proje ## Repository Overview - `background.js`: Handles startup tasks and coordinates message passing within the extension. -- `modules/`: Contains reusable JavaScript modules such as `AiClassifier.js`, - `defaultParams.js` and `themeUtils.js`. -- `options/`: The options page HTML, JavaScript and bundled Bulma CSS (v1.0.3). +- `modules/`: Contains reusable JavaScript modules such as `AiClassifier.js`. +- `options/`: The options page HTML, JavaScript and Bulma CSS. - `details.html` and `details.js`: View AI reasoning and clear cache for a message. - `resources/`: Images and other static files. -- `prompt_templates/`: Prompt template files for the AI service (openai, qwen, mistral, harmony). +- `prompt_templates/`: Prompt template files for the AI service. - `build-xpi.ps1`: PowerShell script to package the extension. -- `build-xpi.sh`: Bash script to package the extension. ## Coding Style @@ -29,14 +27,7 @@ This file provides guidelines for codex agents contributing to the Sortana proje ## Testing -There are currently no automated tests for this project. If you add tests in the future, specify the commands to run them here. For now, verification must happen manually in Thunderbird. Do **not** run the `ps1` build script or the SVG processing script. - -## Endpoint Notes - -Sortana targets the `/v1/completions` API. The endpoint value stored in settings is a base URL; the full request URL is constructed by appending `/v1/completions` (adding a slash when needed) and defaulting to `https://` if no scheme is provided. -The options page can query `/v1/models` from the same base URL to populate the Model dropdown; selecting **None** omits the `model` field from the request payload. 
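The base-URL handling described in the Endpoint Notes above can be sketched as follows. This is an illustrative sketch, not the add-on's actual code: `buildCompletionsUrl` is a hypothetical name, and the real logic in `modules/AiClassifier.js` may differ in detail.

```javascript
// Hypothetical sketch of the URL rules from the Endpoint Notes:
// default to https:// when no scheme is given, then append
// /v1/completions, adding a slash when needed.
function buildCompletionsUrl(base) {
  let url = String(base || '').trim();
  if (!/^[a-z][a-z0-9+.-]*:\/\//i.test(url)) {
    url = 'https://' + url; // no scheme provided — default to https
  }
  url = url.replace(/\/+$/, ''); // strip trailing slashes to avoid a double slash
  return url + '/v1/completions';
}
```

Querying `/v1/models` for the Model dropdown would combine the same base URL the same way, just with a different path.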
-Advanced options allow an optional API key plus `OpenAI-Organization` and `OpenAI-Project` headers; these headers are only sent when values are provided. -Responses are expected to include a JSON object with `match` (or `matched`) plus a short `reason` string; the parser extracts the last JSON object in the response text and ignores any surrounding commentary. +There are currently no automated tests for this project. If you add tests in the future, specify the commands to run them here. For now, verification must happen manually in Thunderbird. ## Documentation @@ -50,7 +41,7 @@ Additional documentation exists outside this repository. - Thunderbird Add-on Store Policies - [Third Party Library Usage](https://extensionworkshop.com/documentation/publish/third-party-library-usage/) - Third Party Libraries - - [Bulma.css v1.0.3](https://github.com/jgthms/bulma/blob/1.0.3/css/bulma.css) + - [Bulma.css](https://github.com/jgthms/bulma) - Issue tracker: [Thunderbird tracker on Bugzilla](https://bugzilla.mozilla.org/describecomponents.cgi?product=Thunderbird) @@ -80,4 +71,5 @@ time the add-on loads after an update. Toolbar and menu icons reside under `resources/img` and are provided in 16, 32 and 64 pixel variants. When changing these icons, pass a dictionary mapping the sizes to the paths in `browserAction.setIcon` or `messageDisplayAction.setIcon`. -Use `resources/svg2img.ps1` to regenerate PNGs from the SVG sources. +Use `resources/svg/svg2img.ps1` to regenerate PNGs from the SVG sources. + diff --git a/README.md b/README.md index 5b908ea..4447c9b 100644 --- a/README.md +++ b/README.md @@ -4,38 +4,27 @@ Sortana is an experimental Thunderbird add-on that integrates an AI-powered filter rule. It allows you to classify email messages by sending their contents to a configurable -HTTP endpoint. Sortana uses the `/v1/completions` API; the options page stores a base -URL and appends `/v1/completions` when sending requests. 
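The response contract described above — a JSON object with `match` (or `matched`) plus a short `reason`, extracted as the last JSON object in the response text while ignoring surrounding commentary — could be parsed along these lines. This is a sketch with a hypothetical `parseClassification` helper, not the add-on's actual parser:

```javascript
// Sketch: pull the last JSON object out of the model's response text,
// ignoring any commentary around it, and read the match/reason fields.
function parseClassification(text) {
  const end = text.lastIndexOf('}');
  if (end === -1) return null; // no JSON object at all
  // Walk candidate '{' positions backwards until a substring parses.
  for (let start = text.lastIndexOf('{', end);
       start !== -1;
       start = start > 0 ? text.lastIndexOf('{', start - 1) : -1) {
    try {
      const obj = JSON.parse(text.slice(start, end + 1));
      return { matched: !!(obj.match ?? obj.matched), reason: String(obj.reason ?? '') };
    } catch {
      // widen the candidate span and retry
    }
  }
  return null;
}
```

Note that `obj.match ?? obj.matched` keeps an explicit `"match": false` from falling through to the `matched` alias.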
The endpoint should respond -with JSON indicating whether the message meets a specified criterion, including a -short reasoning summary. -Responses are parsed by extracting the last JSON object in the response text and -expecting a `match` (or `matched`) boolean plus a `reason` string. +HTTP endpoint. The endpoint should respond with JSON indicating whether the +message meets a specified criterion. ## Features -- **Configurable endpoint** – set the classification service base URL on the options page. -- **Model selection** – load available models from the endpoint and choose one (or omit the model field). -- **Optional OpenAI auth headers** – provide an API key plus optional organization/project headers when needed. -- **Prompt templates** – choose between OpenAI/ChatML, Qwen, Mistral, Harmony (gpt-oss), or provide your own custom template. +- **Configurable endpoint** – set the classification service URL on the options page. +- **Prompt templates** – choose between several model formats or provide your own custom template. - **Custom system prompts** – tailor the instructions sent to the model for more precise results. - **Persistent result caching** – classification results and reasoning are saved to disk so messages aren't re-evaluated across restarts. - **Advanced parameters** – tune generation settings like temperature, top‑p and more from the options page. - **Markdown conversion** – optionally convert HTML bodies to Markdown before sending them to the AI service. - **Debug logging** – optional colorized logs help troubleshoot interactions with the AI service. -- **Debug tab** – view the last request payload and a diff between the unaltered message text and the final prompt. - **Light/Dark themes** – automatically match Thunderbird's appearance with optional manual override. -- **Automatic rules** – create rules that tag, move, copy, forward, reply, delete, archive, mark read/unread or flag/unflag messages based on AI classification. 
Rules can optionally apply only to unread messages and can ignore messages outside a chosen age range. +- **Automatic rules** – create rules that tag or move new messages based on AI classification. - **Rule ordering** – drag rules to prioritize them and optionally stop processing after a match. -- **Rule enable/disable** – temporarily turn a rule off without removing it. -- **Account & folder filters** – limit rules to specific accounts or folders. - **Context menu** – apply AI rules from the message list or the message display action button. -- **Status icons** – toolbar icons show when classification is in progress and briefly display success states. If a failure occurs the icon turns red briefly before returning to normal. -- **Error notification** – failed classification displays a notification in Thunderbird. -- **Session error log** – the Errors tab (visible only when errors occur) shows errors recorded since the last add-on start. +- **Status icons** – toolbar icons show when classification is in progress and briefly display success or error states. - **View reasoning** – inspect why rules matched via the Details popup. - **Cache management** – clear cached results from the context menu or options page. - **Queue & timing stats** – monitor processing time on the Maintenance tab. -- **Packaging scripts** – `build-xpi.ps1` (PowerShell) or `build-xpi.sh` (bash) build an XPI ready for installation. +- **Packaging script** – `build-xpi.ps1` builds an XPI ready for installation. - **Maintenance tab** – view rule counts, cache entries and clear cached results from the options page. ### Cache Storage @@ -70,34 +59,21 @@ Sortana is implemented entirely with standard WebExtension scripts—no custom e 1. Ensure PowerShell is available (for Windows) or adapt the script for other environments. -2. The Bulma stylesheet (v1.0.3) is already included as `options/bulma.css`. -3. Run `powershell ./build-xpi.ps1` or `./build-xpi.sh` from the repository root. 
-   The script reads the version from `manifest.json` and creates an XPI in the
-   `release` folder.
+2. Ensure the Bulma stylesheet (v1.0.4) is saved as `options/bulma.css`. You can
+   download it from <https://github.com/jgthms/bulma/blob/1.0.4/css/bulma.css>.
+3. Run `powershell ./build-xpi.ps1` from the repository root. The script reads
+   the version from `manifest.json` and creates an XPI in the `release` folder.
 4. Install the generated XPI in Thunderbird via the Add-ons Manager. During
    development you can also load the directory as a temporary add-on.
-5. To regenerate PNG icons from the SVG sources, run `resources/svg2img.ps1`.
 
 ## Usage
 
-1. Open the add-on's options and set the base URL of your classification service
-   (Sortana will append `/v1/completions`). Use the Model dropdown to load
-   `/v1/models` and select a model or choose **None** to omit the `model` field.
-   Advanced settings include optional API key, organization, and project headers
-   for OpenAI-hosted endpoints.
-
-2. Use the **Classification Rules** section to add a criterion and optional
-   actions such as tagging, moving, copying, forwarding, replying,
-   deleting or archiving a message when it matches. Drag rules to
-   reorder them, check *Only apply to unread messages* to skip read mail,
-   set optional minimum or maximum message age limits, select the accounts or
-   folders a rule should apply to. Use the
-   slashed-circle/check button to disable or re-enable a rule. The small
-   circle buttons for optional conditions show a filled dot when active, and
-   check *Stop after match* to halt further processing. Forward and reply actions
-   open a compose window using the account that received the message.
+1. Open the add-on's options and set the URL of your classification service.
+2. Use the **Classification Rules** section to add a criterion and optional
+   actions such as tagging or moving a message when it matches. Drag rules to
+   reorder them and check *Stop after match* to halt further processing.
 3. Save your settings.
New mail will be evaluated automatically using the configured rules. -4. If the toolbar icon shows a red X, it will clear after a few seconds. Open the Errors tab in Options to review the latest failures. ### Example Filters @@ -125,7 +101,7 @@ Here are some useful and fun example criteria you can use in your filters. Filte For when you're ready to filter based on vibes. You can define as many filters as you'd like, each using a different prompt and -triggering tags, moves, copies, deletes, archives, read/unread changes or flag updates based on the model's classification. +triggering tags, moves, or actions based on the model's classification. ## Required Permissions @@ -133,14 +109,13 @@ Sortana requests the following Thunderbird permissions: - `storage` – store configuration and cached classification results. - `messagesRead` – read message contents for classification. -- `messagesMove` – move or copy messages when a rule specifies a target folder. - - `messagesUpdate` – change message properties such as tags, junk status, read/unread state and flags. +- `messagesMove` – move messages when a rule specifies a target folder. +- `messagesUpdate` – change message properties such as tags and junk status. - `messagesTagsList` – retrieve existing message tags for rule actions. -- `accountsRead` – list accounts and folders for move or copy actions. +- `accountsRead` – list accounts and folders for move actions. - `menus` – add context menu commands. - `tabs` – open new tabs and query the active tab. -- `notifications` – display error notifications. -- `compose` – create reply and forward compose windows for matching rules. +- Host permissions (`*://*/*`) – allow network requests to your configured classification service. ## Thunderbird Add-on Store Disclosures @@ -149,12 +124,10 @@ requires disclosure of third party libraries that are included in the add-on. Ev the disclosure is only required for add-on review, they'll be listed here as well. 
Sortana uses the following third party libraries: -- [Bulma.css v1.0.3](https://github.com/jgthms/bulma/blob/1.0.3/css/bulma.css) +- [Bulma.css v1.0.4](https://github.com/jgthms/bulma/blob/1.0.4/css/bulma.css) - MIT License - [turndown v7.2.0](https://github.com/mixmark-io/turndown/tree/v7.2.0) - MIT License -- [diff](https://github.com/google/diff-match-patch/blob/62f2e689f498f9c92dbc588c58750addec9b1654/javascript/diff_match_patch_uncompressed.js) - - Apache-2.0 license ## License @@ -170,3 +143,4 @@ how Thunderbird's WebExtension and experiment APIs can be extended. Their code provided invaluable guidance during development. - Icons from [cc0-icons.jonh.eu](https://cc0-icons.jonh.eu/) are used under the CC0 license. + diff --git a/_locales/en-US/messages.json b/_locales/en-US/messages.json index 017e1a3..97a356b 100644 --- a/_locales/en-US/messages.json +++ b/_locales/en-US/messages.json @@ -12,27 +12,11 @@ "template.openai": { "message": "OpenAI / ChatML" }, "template.qwen": { "message": "Qwen" }, "template.mistral": { "message": "Mistral" }, - "template.harmony": { "message": "Harmony (gpt-oss)" }, "template.custom": { "message": "Custom" }, "options.save": { "message": "Save" }, "options.debugLogging": { "message": "Enable debug logging" }, "options.htmlToMarkdown": { "message": "Convert HTML body to Markdown" }, "options.stripUrlParams": { "message": "Remove URL tracking parameters" }, "options.altTextImages": { "message": "Replace images with alt text" }, - "options.collapseWhitespace": { "message": "Collapse long whitespace" }, - "options.tokenReduction": { "message": "Aggressive token reduction" } - ,"action.read": { "message": "read" } - ,"action.flag": { "message": "flag" } - ,"action.copy": { "message": "copy" } - ,"action.delete": { "message": "delete" } - ,"action.archive": { "message": "archive" } - ,"action.forward": { "message": "forward" } - ,"action.reply": { "message": "reply" } - ,"param.markRead": { "message": "mark read" } - 
,"param.markUnread": { "message": "mark unread" } - ,"param.flag": { "message": "flag" } - ,"param.unflag": { "message": "unflag" } - ,"param.address": { "message": "address" } - ,"param.replyAll": { "message": "reply all" } - ,"param.replySender": { "message": "reply sender" } + "options.collapseWhitespace": { "message": "Collapse long whitespace" } } diff --git a/ai-filter.sln b/ai-filter.sln index 7ded341..f41f23f 100644 --- a/ai-filter.sln +++ b/ai-filter.sln @@ -44,7 +44,6 @@ Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "prompt_templates", "prompt_ prompt_templates\mistral.txt = prompt_templates\mistral.txt prompt_templates\openai.txt = prompt_templates\openai.txt prompt_templates\qwen.txt = prompt_templates\qwen.txt - prompt_templates\harmony.txt = prompt_templates\harmony.txt EndProjectSection EndProject Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "resources", "resources", "{68A87938-5C2B-49F5-8AAA-8A34FBBFD854}" @@ -57,15 +56,9 @@ Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "img", "img", "{F266602F-175 resources\img\average-16.png = resources\img\average-16.png resources\img\average-32.png = resources\img\average-32.png resources\img\average-64.png = resources\img\average-64.png - resources\img\check-16.png = resources\img\check-16.png - resources\img\check-32.png = resources\img\check-32.png - resources\img\check-64.png = resources\img\check-64.png resources\img\circle-16.png = resources\img\circle-16.png resources\img\circle-32.png = resources\img\circle-32.png resources\img\circle-64.png = resources\img\circle-64.png - resources\img\circledots-16.png = resources\img\circledots-16.png - resources\img\circledots-32.png = resources\img\circledots-32.png - resources\img\circledots-64.png = resources\img\circledots-64.png resources\img\clipboarddata-16.png = resources\img\clipboarddata-16.png resources\img\clipboarddata-32.png = resources\img\clipboarddata-32.png resources\img\clipboarddata-64.png = resources\img\clipboarddata-64.png @@ 
-102,35 +95,13 @@ Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "img", "img", "{F266602F-175 resources\img\upload-16.png = resources\img\upload-16.png resources\img\upload-32.png = resources\img\upload-32.png resources\img\upload-64.png = resources\img\upload-64.png - resources\img\x-16.png = resources\img\x-16.png - resources\img\x-32.png = resources\img\x-32.png - resources\img\x-64.png = resources\img\x-64.png EndProjectSection EndProject Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "js", "js", "{21D2A42C-3F85-465C-9141-C106AFD92B68}" ProjectSection(SolutionItems) = preProject - resources\js\diff_match_patch_uncompressed.js = resources\js\diff_match_patch_uncompressed.js resources\js\turndown.js = resources\js\turndown.js EndProjectSection EndProject -Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") = "svg", "svg", "{D4E9C905-4884-488E-B763-5BD39049C1B1}" - ProjectSection(SolutionItems) = preProject - resources\svg\average.svg = resources\svg\average.svg - resources\svg\check.svg = resources\svg\check.svg - resources\svg\circle.svg = resources\svg\circle.svg - resources\svg\circledots.svg = resources\svg\circledots.svg - resources\svg\clipboarddata.svg = resources\svg\clipboarddata.svg - resources\svg\download.svg = resources\svg\download.svg - resources\svg\eye.svg = resources\svg\eye.svg - resources\svg\flag.svg = resources\svg\flag.svg - resources\svg\gear.svg = resources\svg\gear.svg - resources\svg\reply.svg = resources\svg\reply.svg - resources\svg\settings.svg = resources\svg\settings.svg - resources\svg\trash.svg = resources\svg\trash.svg - resources\svg\upload.svg = resources\svg\upload.svg - resources\svg\x.svg = resources\svg\x.svg - EndProjectSection -EndProject Global GlobalSection(SolutionProperties) = preSolution HideSolutionNode = FALSE @@ -144,6 +115,5 @@ Global {68A87938-5C2B-49F5-8AAA-8A34FBBFD854} = {BCC6E6D2-343B-4C48-854D-5FE3BBC3CB70} {F266602F-1755-4A95-A11B-6C90C701C5BF} = {68A87938-5C2B-49F5-8AAA-8A34FBBFD854} 
{21D2A42C-3F85-465C-9141-C106AFD92B68} = {68A87938-5C2B-49F5-8AAA-8A34FBBFD854} - {D4E9C905-4884-488E-B763-5BD39049C1B1} = {68A87938-5C2B-49F5-8AAA-8A34FBBFD854} EndGlobalSection EndGlobal diff --git a/background.js b/background.js index b2efc89..39bea3e 100644 --- a/background.js +++ b/background.js @@ -19,7 +19,6 @@ let queue = Promise.resolve(); let queuedCount = 0; let processing = false; let iconTimer = null; -let errorTimer = null; let timingStats = { count: 0, mean: 0, m2: 0, total: 0, last: -1 }; let currentStart = 0; let logGetTiming = true; @@ -27,39 +26,18 @@ let htmlToMarkdown = false; let stripUrlParams = false; let altTextImages = false; let collapseWhitespace = false; -let tokenReduction = false; -let maxTokens = 4096; let TurndownService = null; let userTheme = 'auto'; let currentTheme = 'light'; -let detectSystemTheme; -let errorPending = false; -let errorLog = []; -let showDebugTab = false; -const ERROR_NOTIFICATION_ID = 'sortana-error'; -const ERROR_ICON_TIMEOUT = 4500; -const MAX_ERROR_LOG = 50; function normalizeRules(rules) { return Array.isArray(rules) ? 
rules.map(r => { - if (r.actions) { - if (!Array.isArray(r.accounts)) r.accounts = []; - if (!Array.isArray(r.folders)) r.folders = []; - r.enabled = r.enabled !== false; - return r; - } + if (r.actions) return r; const actions = []; if (r.tag) actions.push({ type: 'tag', tagKey: r.tag }); if (r.moveTo) actions.push({ type: 'move', folder: r.moveTo }); - if (r.copyTarget || r.copyTo) actions.push({ type: 'copy', copyTarget: r.copyTarget || r.copyTo }); const rule = { criterion: r.criterion, actions }; if (r.stopProcessing) rule.stopProcessing = true; - if (r.unreadOnly) rule.unreadOnly = true; - if (typeof r.minAgeDays === 'number') rule.minAgeDays = r.minAgeDays; - if (typeof r.maxAgeDays === 'number') rule.maxAgeDays = r.maxAgeDays; - if (Array.isArray(r.accounts)) rule.accounts = r.accounts; - if (Array.isArray(r.folders)) rule.folders = r.folders; - rule.enabled = r.enabled !== false; return rule; }) : []; } @@ -72,13 +50,30 @@ function iconPaths(name) { }; } +async function detectSystemTheme() { + try { + const t = await browser.theme.getCurrent(); + const scheme = t?.properties?.color_scheme; + if (scheme === 'dark' || scheme === 'light') { + return scheme; + } + const color = t?.colors?.frame || t?.colors?.toolbar; + if (color && /^#/.test(color)) { + const r = parseInt(color.slice(1, 3), 16); + const g = parseInt(color.slice(3, 5), 16); + const b = parseInt(color.slice(5, 7), 16); + const lum = (0.299 * r + 0.587 * g + 0.114 * b) / 255; + return lum < 0.5 ? 
'dark' : 'light'; + } + } catch {} + return 'light'; +} const ICONS = { logo: () => 'resources/img/logo.png', circledots: () => iconPaths('circledots'), circle: () => iconPaths('circle'), - average: () => iconPaths('average'), - error: () => iconPaths('x') + average: () => iconPaths('average') }; function setIcon(path) { @@ -92,62 +87,19 @@ function setIcon(path) { function updateActionIcon() { let path = ICONS.logo(); - if (errorPending) { - path = ICONS.error(); - } else if (processing || queuedCount > 0) { + if (processing || queuedCount > 0) { path = ICONS.circledots(); } setIcon(path); } function showTransientIcon(factory, delay = 1500) { - if (errorPending) { - return; - } clearTimeout(iconTimer); const path = typeof factory === 'function' ? factory() : factory; setIcon(path); iconTimer = setTimeout(updateActionIcon, delay); } -async function clearError() { - errorPending = false; - clearTimeout(errorTimer); - await browser.notifications.clear(ERROR_NOTIFICATION_ID); - updateActionIcon(); -} - -function recordError(context, err) { - let message = 'Unknown error'; - let detail = ''; - if (err instanceof Error) { - message = err.message; - detail = err.stack || ''; - } else if (err && typeof err === 'object') { - message = typeof err.message === 'string' ? err.message : String(err || 'Unknown error'); - detail = typeof err.detail === 'string' ? 
err.detail : ''; - } else { - message = String(err || 'Unknown error'); - } - errorLog.unshift({ - time: Date.now(), - context, - message, - detail - }); - if (errorLog.length > MAX_ERROR_LOG) { - errorLog.length = MAX_ERROR_LOG; - } - errorPending = true; - updateActionIcon(); - clearTimeout(errorTimer); - errorTimer = setTimeout(() => { - errorPending = false; - updateActionIcon(); - }, ERROR_ICON_TIMEOUT); - browser.runtime.sendMessage({ type: 'sortana:errorLogUpdated', count: errorLog.length }).catch(() => {}); -} - function refreshMenuIcons() { browser.menus.update('apply-ai-rules-list', { icons: iconPaths('eye') }); browser.menus.update('apply-ai-rules-display', { icons: iconPaths('eye') }); @@ -163,16 +115,12 @@ function byteSize(str) { } function replaceInlineBase64(text) { - return text.replace(/(?:data:[^;]+;base64,)?[A-Za-z0-9+/=\r\n]{100,}/g, - m => tokenReduction ? '__BASE64__' : `[base64: ${byteSize(m)} bytes]`); + return text.replace(/[A-Za-z0-9+/]{100,}={0,2}/g, + m => `[base64: ${byteSize(m)} bytes]`); } function sanitizeString(text) { let t = String(text); - if (tokenReduction) { - t = t.replace(//gs, '') - .replace(/url\([^\)]*\)/gi, 'url(__IMG__)'); - } if (stripUrlParams) { t = t.replace(/https?:\/\/[^\s)]+/g, m => { const idx = m.indexOf('?'); @@ -180,7 +128,7 @@ function sanitizeString(text) { }); } if (collapseWhitespace) { - t = t.replace(/[\u200B-\u200D\u2060\s]{2,}/g, ' ').replace(/\n{3,}/g, '\n\n'); + t = t.replace(/[ \t\u00A0]{2,}/g, ' ').replace(/\n{3,}/g, '\n\n'); } return t; } @@ -199,26 +147,12 @@ function collectText(part, bodyParts, attachments) { attachments.push(`${name} (${ct}, ${part.size || byteSize(body)} bytes)`); } else if (ct.startsWith("text/html")) { const doc = new DOMParser().parseFromString(body, 'text/html'); - if (tokenReduction) { - doc.querySelectorAll('script,style').forEach(el => el.remove()); - const walker = doc.createTreeWalker(doc, NodeFilter.SHOW_COMMENT); - let node; - while ((node = walker.nextNode())) { 
- node.parentNode.removeChild(node); - } - doc.querySelectorAll('*').forEach(el => { - for (const attr of Array.from(el.attributes)) { - if (!['href','src','alt'].includes(attr.name)) { - el.removeAttribute(attr.name); - } - } + if (altTextImages) { + doc.querySelectorAll('img').forEach(img => { + const alt = img.getAttribute('alt') || ''; + img.replaceWith(doc.createTextNode(alt)); }); } - doc.querySelectorAll('img').forEach(img => { - const alt = img.getAttribute('alt') || ''; - const text = altTextImages ? alt : '__IMG__'; - img.replaceWith(doc.createTextNode(text)); - }); if (stripUrlParams) { doc.querySelectorAll('[href]').forEach(a => { const href = a.getAttribute('href'); @@ -245,188 +179,16 @@ function collectText(part, bodyParts, attachments) { } } -function collectRawText(part, bodyParts, attachments) { - if (part.parts && part.parts.length) { - for (const p of part.parts) collectRawText(p, bodyParts, attachments); - return; - } - const ct = (part.contentType || "text/plain").toLowerCase(); - const cd = (part.headers?.["content-disposition"]?.[0] || "").toLowerCase(); - const body = String(part.body || ""); - if (cd.includes("attachment") || !ct.startsWith("text/")) { - const nameMatch = /filename\s*=\s*"?([^";]+)/i.exec(cd) || /name\s*=\s*"?([^";]+)/i.exec(part.headers?.["content-type"]?.[0] || ""); - const name = nameMatch ? nameMatch[1] : ""; - attachments.push(`${name} (${ct}, ${part.size || byteSize(body)} bytes)`); - } else if (ct.startsWith("text/html")) { - const doc = new DOMParser().parseFromString(body, 'text/html'); - bodyParts.push(doc.body.textContent || ""); - } else { - bodyParts.push(body); - } -} - -function buildEmailText(full, applyTransforms = true) { +function buildEmailText(full) { const bodyParts = []; const attachments = []; - const collect = applyTransforms ? 
collectText : collectRawText; - collect(full, bodyParts, attachments); + collectText(full, bodyParts, attachments); const headers = Object.entries(full.headers || {}) .map(([k, v]) => `${k}: ${v.join(' ')}`) .join('\n'); - const attachInfo = `Attachments: ${attachments.length}` + - (attachments.length ? "\n" + attachments.map(a => ` - ${a}`).join('\n') : ""); - let combined = `${headers}\n${attachInfo}\n\n${bodyParts.join('\n')}`.trim(); - if (applyTransforms && tokenReduction) { - const seen = new Set(); - combined = combined.split('\n').filter(l => { - if (seen.has(l)) return false; - seen.add(l); - return true; - }).join('\n'); - } - return applyTransforms ? sanitizeString(combined) : combined; -} - -function updateTimingStats(elapsed) { - const t = timingStats; - t.count += 1; - t.total += elapsed; - t.last = elapsed; - const delta = elapsed - t.mean; - t.mean += delta / t.count; - t.m2 += delta * (elapsed - t.mean); -} - -async function getAllMessageIds(list) { - const ids = []; - if (!list) { - return ids; - } - let page = list; - ids.push(...(page.messages || []).map(m => m.id)); - while (page.id) { - page = await messenger.messages.continueList(page.id); - ids.push(...(page.messages || []).map(m => m.id)); - } - return ids; -} - -async function processMessage(id) { - processing = true; - currentStart = Date.now(); - queuedCount--; - updateActionIcon(); - try { - const full = await messenger.messages.getFull(id); - const originalText = buildEmailText(full, false); - let text = buildEmailText(full); - if (tokenReduction && maxTokens > 0) { - const limit = Math.floor(maxTokens * 0.9); - if (text.length > limit) { - text = text.slice(0, limit); - } - } - if (showDebugTab) { - await storage.local.set({ lastFullText: originalText, lastPromptText: text }); - } - let hdr; - let currentTags = []; - let alreadyRead = false; - let identityId = null; - try { - hdr = await messenger.messages.get(id); - currentTags = Array.isArray(hdr.tags) ? 
[...hdr.tags] : []; - alreadyRead = hdr.read === true; - const ids = await messenger.identities.list(hdr.folder.accountId); - identityId = ids[0]?.id || null; - } catch (e) { - currentTags = []; - alreadyRead = false; - identityId = null; - } - - for (const rule of aiRules) { - if (rule.enabled === false) { - continue; - } - if (hdr && Array.isArray(rule.accounts) && rule.accounts.length && - !rule.accounts.includes(hdr.folder.accountId)) { - continue; - } - if (hdr && Array.isArray(rule.folders) && rule.folders.length && - !rule.folders.includes(hdr.folder.path)) { - continue; - } - if (rule.unreadOnly && alreadyRead) { - continue; - } - if (hdr && (typeof rule.minAgeDays === 'number' || typeof rule.maxAgeDays === 'number')) { - const msgTime = new Date(hdr.date).getTime(); - if (!isNaN(msgTime)) { - const ageDays = (Date.now() - msgTime) / 86400000; - if (typeof rule.minAgeDays === 'number' && ageDays < rule.minAgeDays) { - continue; - } - if (typeof rule.maxAgeDays === 'number' && ageDays > rule.maxAgeDays) { - continue; - } - } - } - const cacheKey = await AiClassifier.buildCacheKey(id, rule.criterion); - const matched = await AiClassifier.classifyText(text, rule.criterion, cacheKey); - if (matched) { - for (const act of (rule.actions || [])) { - if (act.type === 'tag' && act.tagKey) { - if (!currentTags.includes(act.tagKey)) { - currentTags.push(act.tagKey); - await messenger.messages.update(id, { tags: currentTags }); - } - } else if (act.type === 'move' && act.folder) { - await messenger.messages.move([id], act.folder); - } else if (act.type === 'copy' && act.copyTarget) { - await messenger.messages.copy([id], act.copyTarget); - } else if (act.type === 'junk') { - await messenger.messages.update(id, { junk: !!act.junk }); - } else if (act.type === 'read') { - await messenger.messages.update(id, { read: !!act.read }); - } else if (act.type === 'flag') { - await messenger.messages.update(id, { flagged: !!act.flagged }); - } else if (act.type === 'delete') { - 
             await messenger.messages.delete([id]);
-                    } else if (act.type === 'archive') {
-                        await messenger.messages.archive([id]);
-                    } else if (act.type === 'forward' && act.address && identityId) {
-                        await browser.compose.beginForward(id, { to: [act.address], identityId });
-                    } else if (act.type === 'reply' && act.replyType && identityId) {
-                        await browser.compose.beginReply(id, { replyType: act.replyType, identityId });
-                    }
-                }
-                if (rule.stopProcessing) {
-                    break;
-                }
-            }
-        }
-        processing = false;
-        const elapsed = Date.now() - currentStart;
-        currentStart = 0;
-        updateTimingStats(elapsed);
-        await storage.local.set({ classifyStats: timingStats });
-        showTransientIcon(ICONS.circle);
-    } catch (e) {
-        processing = false;
-        const elapsed = Date.now() - currentStart;
-        currentStart = 0;
-        updateTimingStats(elapsed);
-        await storage.local.set({ classifyStats: timingStats });
-        logger.aiLog("failed to apply AI rules", { level: 'error' }, e);
-        recordError("Failed to apply AI rules", e);
-        browser.notifications.create(ERROR_NOTIFICATION_ID, {
-            type: 'basic',
-            iconUrl: browser.runtime.getURL('resources/img/logo.png'),
-            title: 'Sortana Error',
-            message: 'Failed to apply AI rules'
-        });
-    }
+    const attachInfo = `Attachments: ${attachments.length}` + (attachments.length ? "\n" + attachments.map(a => ` - ${a}`).join('\n') : "");
+    const combined = `${headers}\n${attachInfo}\n\n${bodyParts.join('\n')}`.trim();
+    return sanitizeString(combined);
 }
 
 async function applyAiRules(idsInput) {
     const ids = Array.isArray(idsInput) ? idsInput : [idsInput];
@@ -441,7 +203,71 @@ async function applyAiRules(idsInput) {
             const id = msg?.id ?? msg;
             queuedCount++;
             updateActionIcon();
-            queue = queue.then(() => processMessage(id));
+            queue = queue.then(async () => {
+                processing = true;
+                currentStart = Date.now();
+                queuedCount--;
+                updateActionIcon();
+                try {
+                    const full = await messenger.messages.getFull(id);
+                    const text = buildEmailText(full);
+                    let currentTags = [];
+                    try {
+                        const hdr = await messenger.messages.get(id);
+                        currentTags = Array.isArray(hdr.tags) ? [...hdr.tags] : [];
+                    } catch (e) {
+                        currentTags = [];
+                    }
+
+                    for (const rule of aiRules) {
+                        const cacheKey = await AiClassifier.buildCacheKey(id, rule.criterion);
+                        const matched = await AiClassifier.classifyText(text, rule.criterion, cacheKey);
+                        if (matched) {
+                            for (const act of (rule.actions || [])) {
+                                if (act.type === 'tag' && act.tagKey) {
+                                    if (!currentTags.includes(act.tagKey)) {
+                                        currentTags.push(act.tagKey);
+                                        await messenger.messages.update(id, { tags: currentTags });
+                                    }
+                                } else if (act.type === 'move' && act.folder) {
+                                    await messenger.messages.move([id], act.folder);
+                                } else if (act.type === 'junk') {
+                                    await messenger.messages.update(id, { junk: !!act.junk });
+                                }
+                            }
+                            if (rule.stopProcessing) {
+                                break;
+                            }
+                        }
+                    }
+                    processing = false;
+                    const elapsed = Date.now() - currentStart;
+                    currentStart = 0;
+                    const t = timingStats;
+                    t.count += 1;
+                    t.total += elapsed;
+                    t.last = elapsed;
+                    const delta = elapsed - t.mean;
+                    t.mean += delta / t.count;
+                    t.m2 += delta * (elapsed - t.mean);
+                    await storage.local.set({ classifyStats: t });
+                    showTransientIcon(ICONS.circle);
+                } catch (e) {
+                    processing = false;
+                    const elapsed = Date.now() - currentStart;
+                    currentStart = 0;
+                    const t = timingStats;
+                    t.count += 1;
+                    t.total += elapsed;
+                    t.last = elapsed;
+                    const delta = elapsed - t.mean;
+                    t.mean += delta / t.count;
+                    t.m2 += delta * (elapsed - t.mean);
+                    await storage.local.set({ classifyStats: t });
+                    logger.aiLog("failed to apply AI rules", { level: 'error' }, e);
+                    showTransientIcon(ICONS.average);
+                }
+            });
     }
     return queue;
@@ -472,7 +298,6 @@ async function clearCacheForMessages(idsInput) {
 (async () => {
     logger = await import(browser.runtime.getURL("logger.js"));
-    ({ detectSystemTheme } = await import(browser.runtime.getURL('modules/themeUtils.js')));
     try {
         AiClassifier = await import(browser.runtime.getURL("modules/AiClassifier.js"));
         logger.aiLog("AiClassifier imported", { debug: true });
@@ -484,7 +309,7 @@ async function clearCacheForMessages(idsInput) {
     }
     try {
-        const store = await storage.local.get(["endpoint", "model", "apiKey", "openaiOrganization", "openaiProject", "templateName", "customTemplate", "customSystemPrompt", "aiParams", "debugLogging", "htmlToMarkdown", "stripUrlParams", "altTextImages", "collapseWhitespace", "tokenReduction", "aiRules", "theme", "showDebugTab"]);
+        const store = await storage.local.get(["endpoint", "templateName", "customTemplate", "customSystemPrompt", "aiParams", "debugLogging", "htmlToMarkdown", "stripUrlParams", "altTextImages", "collapseWhitespace", "aiRules", "theme"]);
         logger.setDebug(store.debugLogging);
         await AiClassifier.setConfig(store);
         userTheme = store.theme || 'auto';
@@ -494,11 +319,6 @@ async function clearCacheForMessages(idsInput) {
         stripUrlParams = store.stripUrlParams === true;
         altTextImages = store.altTextImages === true;
         collapseWhitespace = store.collapseWhitespace === true;
-        tokenReduction = store.tokenReduction === true;
-        if (store.aiParams && typeof store.aiParams.max_tokens !== 'undefined') {
-            maxTokens = parseInt(store.aiParams.max_tokens) || maxTokens;
-        }
-        showDebugTab = store.showDebugTab === true;
         const savedStats = await storage.local.get('classifyStats');
         if (savedStats.classifyStats && typeof savedStats.classifyStats === 'object') {
             Object.assign(timingStats, savedStats.classifyStats);
@@ -514,29 +334,6 @@ async function clearCacheForMessages(idsInput) {
             aiRules = normalizeRules(newRules);
             logger.aiLog("aiRules updated from storage change", { debug: true }, aiRules);
         }
-        if (changes.endpoint || changes.model || changes.apiKey || changes.openaiOrganization || changes.openaiProject || changes.templateName || changes.customTemplate || changes.customSystemPrompt || changes.aiParams || changes.debugLogging) {
-            const config = {};
-            if (changes.endpoint) config.endpoint = changes.endpoint.newValue;
-            if (changes.model) config.model = changes.model.newValue;
-            if (changes.apiKey) config.apiKey = changes.apiKey.newValue;
-            if (changes.openaiOrganization) config.openaiOrganization = changes.openaiOrganization.newValue;
-            if (changes.openaiProject) config.openaiProject = changes.openaiProject.newValue;
-            if (changes.templateName) config.templateName = changes.templateName.newValue;
-            if (changes.customTemplate) config.customTemplate = changes.customTemplate.newValue;
-            if (changes.customSystemPrompt) config.customSystemPrompt = changes.customSystemPrompt.newValue;
-            if (changes.aiParams) {
-                config.aiParams = changes.aiParams.newValue;
-                if (changes.aiParams.newValue && typeof changes.aiParams.newValue.max_tokens !== 'undefined') {
-                    maxTokens = parseInt(changes.aiParams.newValue.max_tokens) || maxTokens;
-                }
-            }
-            if (changes.debugLogging) {
-                config.debugLogging = changes.debugLogging.newValue === true;
-                logger.setDebug(config.debugLogging);
-            }
-            await AiClassifier.setConfig(config);
-            logger.aiLog("AiClassifier config updated from storage change", { debug: true }, config);
-        }
         if (changes.htmlToMarkdown) {
             htmlToMarkdown = changes.htmlToMarkdown.newValue === true;
             logger.aiLog("htmlToMarkdown updated from storage change", { debug: true }, htmlToMarkdown);
@@ -553,13 +350,6 @@ async function clearCacheForMessages(idsInput) {
             collapseWhitespace = changes.collapseWhitespace.newValue === true;
             logger.aiLog("collapseWhitespace updated from storage change", { debug: true }, collapseWhitespace);
         }
-        if (changes.tokenReduction) {
-            tokenReduction = changes.tokenReduction.newValue === true;
-            logger.aiLog("tokenReduction updated from storage change", { debug: true }, tokenReduction);
-        }
-        if (changes.showDebugTab) {
-            showDebugTab = changes.showDebugTab.newValue === true;
-        }
         if (changes.theme) {
             userTheme = changes.theme.newValue || 'auto';
             currentTheme = userTheme === 'auto' ? await detectSystemTheme() : userTheme;
@@ -634,16 +424,12 @@ async function clearCacheForMessages(idsInput) {
     browser.menus.onClicked.addListener(async (info, tab) => {
         if (info.menuItemId === "apply-ai-rules-list" || info.menuItemId === "apply-ai-rules-display") {
-            let ids = info.messageId ? [info.messageId] : [];
-            if (info.selectedMessages) {
-                ids = await getAllMessageIds(info.selectedMessages);
-            }
+            const ids = info.selectedMessages?.messages?.map(m => m.id) ||
+                (info.messageId ? [info.messageId] : []);
             await applyAiRules(ids);
         } else if (info.menuItemId === "clear-ai-cache-list" || info.menuItemId === "clear-ai-cache-display") {
-            let ids = info.messageId ? [info.messageId] : [];
-            if (info.selectedMessages) {
-                ids = await getAllMessageIds(info.selectedMessages);
-            }
+            const ids = info.selectedMessages?.messages?.map(m => m.id) ||
+                (info.messageId ? [info.messageId] : []);
             await clearCacheForMessages(ids);
         } else if (info.menuItemId === "view-ai-reason-list" || info.menuItemId === "view-ai-reason-display") {
             const [header] = await browser.messageDisplay.getDisplayedMessages(tab.id);
@@ -750,22 +536,8 @@ async function clearCacheForMessages(idsInput) {
             logger.aiLog("failed to clear cache for message", { level: 'error' }, e);
             return { ok: false };
         }
-        } else if (msg?.type === "sortana:resetTimingStats") {
-            const last = timingStats.last;
-            timingStats.count = 0;
-            timingStats.mean = 0;
-            timingStats.m2 = 0;
-            timingStats.total = 0;
-            timingStats.last = typeof last === 'number' ? last : -1;
-            await storage.local.set({ classifyStats: timingStats });
-            return { ok: true };
         } else if (msg?.type === "sortana:getQueueCount") {
             return { count: queuedCount + (processing ? 1 : 0) };
-        } else if (msg?.type === "sortana:getErrorLog") {
-            return { errors: errorLog.slice() };
-        } else if (msg?.type === "sortana:recordError") {
-            recordError(msg.context || "Sortana Error", { message: msg.message, detail: msg.detail });
-            return { ok: true };
         } else if (msg?.type === "sortana:getTiming") {
             const t = timingStats;
             const std = t.count > 1 ? Math.sqrt(t.m2 / (t.count - 1)) : 0;
@@ -797,19 +569,6 @@ async function clearCacheForMessages(idsInput) {
     // Catch any unhandled rejections
     window.addEventListener("unhandledrejection", ev => {
         logger.aiLog("Unhandled promise rejection", { level: 'error' }, ev.reason);
-        recordError("Unhandled promise rejection", ev.reason);
-    });
-
-    browser.notifications.onClicked.addListener(id => {
-        if (id === ERROR_NOTIFICATION_ID) {
-            clearError();
-        }
-    });
-
-    browser.notifications.onButtonClicked.addListener((id) => {
-        if (id === ERROR_NOTIFICATION_ID) {
-            clearError();
-        }
     });
 
     browser.runtime.onInstalled.addListener(async ({ reason }) => {
diff --git a/build-xpi.sh b/build-xpi.sh
deleted file mode 100755
index 20c6e15..0000000
--- a/build-xpi.sh
+++ /dev/null
@@ -1,77 +0,0 @@
-#!/usr/bin/env bash
-set -euo pipefail
-
-script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
-release_dir="$script_dir/release"
-manifest="$script_dir/manifest.json"
-
-if [[ ! -f "$manifest" ]]; then
-    echo "manifest.json not found at $manifest" >&2
-    exit 1
-fi
-
-if ! command -v zip >/dev/null 2>&1; then
-    echo "zip is required to build the XPI." >&2
-    exit 1
-fi
-
-if command -v jq >/dev/null 2>&1; then
-    version="$(jq -r '.version // empty' "$manifest")"
-else
-    if ! command -v python3 >/dev/null 2>&1; then
-        echo "python3 is required to read manifest.json without jq." >&2
-        exit 1
-    fi
-    version="$(python3 - <<'PY'
-import json
-import sys
-with open(sys.argv[1], 'r', encoding='utf-8') as f:
-    data = json.load(f)
-print(data.get('version', '') or '')
-PY
-"$manifest")"
-fi
-
-if [[ -z "$version" ]]; then
-    echo "No version found in manifest.json" >&2
-    exit 1
-fi
-
-mkdir -p "$release_dir"
-
-xpi_name="sortana-$version.xpi"
-zip_path="$release_dir/ai-filter-$version.zip"
-xpi_path="$release_dir/$xpi_name"
-
-rm -f "$zip_path" "$xpi_path"
-
-mapfile -d '' files < <(
-    find "$script_dir" -type f \
-        ! -name '*.sln' \
-        ! -name '*.ps1' \
-        ! -name '*.sh' \
-        ! -path "$release_dir/*" \
-        ! -path "$script_dir/.vs/*" \
-        ! -path "$script_dir/.git/*" \
-        -printf '%P\0'
-)
-
-if [[ ${#files[@]} -eq 0 ]]; then
-    echo "No files found to package." >&2
-    exit 0
-fi
-
-for rel in "${files[@]}"; do
-    full="$script_dir/$rel"
-    size=$(stat -c '%s' "$full")
-    echo "Zipping: $rel <- $full ($size bytes)"
-done
-
-(
-    cd "$script_dir"
-    printf '%s\n' "${files[@]}" | zip -q -9 -@ "$zip_path"
-)
-
-mv -f "$zip_path" "$xpi_path"
-
-echo "Built XPI at: $xpi_path"
diff --git a/details.js b/details.js
index f306e01..cad1979 100644
--- a/details.js
+++ b/details.js
@@ -1,9 +1,8 @@
 const aiLog = (await import(browser.runtime.getURL("logger.js"))).aiLog;
 const storage = (globalThis.messenger ?? browser).storage;
-const { detectSystemTheme } = await import(browser.runtime.getURL('modules/themeUtils.js'));
 const { theme } = await storage.local.get('theme');
 const mode = (theme || 'auto') === 'auto'
-    ? await detectSystemTheme()
+    ? (window.matchMedia('(prefers-color-scheme: dark)').matches ? 'dark' : 'light')
     : theme;
 document.documentElement.dataset.theme = mode;
@@ -11,11 +10,11 @@
 const qMid = parseInt(new URLSearchParams(location.search).get("mid"), 10);
 if (!isNaN(qMid)) {
     loadMessage(qMid);
 } else {
-    const { ids } = await browser.runtime.sendMessage({
+    const { messages } = await browser.runtime.sendMessage({
         type: "sortana:getDisplayedMessages",
     });
-    if (ids && ids[0]) {
-        loadMessage(ids[0]);
+    if (messages && messages[0]) {
+        loadMessage(messages[0].id);
     } else {
         const tabs = await browser.tabs.query({ active: true, currentWindow: true });
         const tabId = tabs[0]?.id;
diff --git a/manifest.json b/manifest.json
index dcddb67..0cbfb13 100644
--- a/manifest.json
+++ b/manifest.json
@@ -1,7 +1,7 @@
 {
   "manifest_version": 2,
   "name": "Sortana",
-  "version": "2.4.2",
+  "version": "2.1.0",
   "default_locale": "en-US",
   "applications": {
     "gecko": {
@@ -32,18 +32,16 @@
     "page": "options/options.html",
     "open_in_tab": true
   },
-  "permissions": [
-    "storage",
-    "messagesRead",
-    "messagesMove",
-    "messagesUpdate",
-    "messagesTagsList",
-    "accountsRead",
-    "menus",
-    "notifications",
-    "scripting",
-    "tabs",
-    "theme",
-    "compose"
-  ]
+  "permissions": [
+    "storage",
+    "messagesRead",
+    "messagesMove",
+    "messagesUpdate",
+    "messagesTagsList",
+    "accountsRead",
+    "menus",
+    "scripting",
+    "tabs",
+    "theme"
+  ]
 }
diff --git a/modules/AiClassifier.js b/modules/AiClassifier.js
index b4c0907..3c526f8 100644
--- a/modules/AiClassifier.js
+++ b/modules/AiClassifier.js
@@ -1,6 +1,5 @@
 "use strict";
 import { aiLog, setDebug } from "../logger.js";
-import { DEFAULT_AI_PARAMS } from "./defaultParams.js";
 
 const storage = (globalThis.messenger ?? globalThis.browser).storage;
@@ -15,9 +14,6 @@
 try {
     Services = undefined;
 }
 
-const COMPLETIONS_PATH = "/v1/completions";
-const MODELS_PATH = "/v1/models";
-
 const SYSTEM_PREFIX = `You are an email-classification assistant. Read the email below and the classification criterion provided by the user.
 `;
@@ -26,61 +22,34 @@ const DEFAULT_CUSTOM_SYSTEM_PROMPT = "Determine whether the email satisfies the
 const SYSTEM_SUFFIX = `
 Return ONLY a JSON object on a single line of the form:
-{"match": true, "reason": ""} - if the email satisfies the criterion
-{"match": false, "reason": ""} - otherwise
+{"match": true} - if the email satisfies the criterion
+{"match": false} - otherwise
 Do not add any other keys, text, or formatting.`;
 
-let gEndpointBase = "http://127.0.0.1:5000";
-let gEndpoint = buildEndpointUrl(gEndpointBase);
+let gEndpoint = "http://127.0.0.1:5000/v1/classify";
 let gTemplateName = "openai";
 let gCustomTemplate = "";
 let gCustomSystemPrompt = DEFAULT_CUSTOM_SYSTEM_PROMPT;
 let gTemplateText = "";
-let gAiParams = Object.assign({}, DEFAULT_AI_PARAMS);
-let gModel = "";
-let gApiKey = "";
-let gOpenaiOrganization = "";
-let gOpenaiProject = "";
+let gAiParams = {
+    max_tokens: 4096,
+    temperature: 0.6,
+    top_p: 0.95,
+    seed: -1,
+    repetition_penalty: 1.0,
+    top_k: 20,
+    min_p: 0,
+    presence_penalty: 0,
+    frequency_penalty: 0,
+    typical_p: 1,
+    tfs: 1,
+};
 let gCache = new Map();
 let gCacheLoaded = false;
 
-function normalizeEndpointBase(endpoint) {
-    if (typeof endpoint !== "string") {
-        return "";
-    }
-    let base = endpoint.trim();
-    if (!base) {
-        return "";
-    }
-    base = base.replace(/\/v1\/(completions|models)\/?$/i, "");
-    return base;
-}
-
-function buildEndpointUrl(endpointBase) {
-    const base = normalizeEndpointBase(endpointBase);
-    if (!base) {
-        return "";
-    }
-    const withScheme = /^https?:\/\//i.test(base) ? base : `https://${base}`;
-    const needsSlash = withScheme.endsWith("/");
-    const path = COMPLETIONS_PATH.replace(/^\//, "");
-    return `${withScheme}${needsSlash ? "" : "/"}${path}`;
-}
-
-function buildModelsUrl(endpointBase) {
-    const base = normalizeEndpointBase(endpointBase);
-    if (!base) {
-        return "";
-    }
-    const withScheme = /^https?:\/\//i.test(base) ? base : `https://${base}`;
-    const needsSlash = withScheme.endsWith("/");
-    const path = MODELS_PATH.replace(/^\//, "");
-    return `${withScheme}${needsSlash ? "" : "/"}${path}`;
-}
-
 function sha256HexSync(str) {
     try {
         const hasher = Cc["@mozilla.org/security/hash;1"].createInstance(Ci.nsICryptoHash);
@@ -103,6 +72,10 @@
     return sha256HexSync(str);
 }
 
+function buildCacheKeySync(id, criterion) {
+    return sha256HexSync(`${id}|${criterion}`);
+}
+
 async function resolveHeaderId(id) {
     if (typeof id === "number" && typeof messenger?.messages?.get === "function") {
         try {
@@ -120,7 +93,7 @@
 async function buildCacheKey(id, criterion) {
     const resolvedId = await resolveHeaderId(id);
     if (Services) {
-        return sha256HexSync(`${resolvedId}|${criterion}`);
+        return buildCacheKeySync(resolvedId, criterion);
     }
     return sha256Hex(`${resolvedId}|${criterion}`);
 }
@@ -160,6 +133,16 @@
     gCacheLoaded = true;
 }
 
+function loadCacheSync() {
+    if (!gCacheLoaded) {
+        if (!Services?.tm?.spinEventLoopUntil) {
+            throw new Error("loadCacheSync requires Services");
+        }
+        let done = false;
+        loadCache().finally(() => { done = true; });
+        Services.tm.spinEventLoopUntil(() => done);
+    }
+}
 
 async function saveCache(updatedKey, updatedValue) {
     if (typeof updatedKey !== "undefined") {
@@ -200,12 +183,8 @@ function loadTemplateSync(name) {
     }
 }
 
 async function setConfig(config = {}) {
-    if (typeof config.endpoint === "string") {
-        const base = normalizeEndpointBase(config.endpoint);
-        if (base) {
-            gEndpointBase = base;
-        }
-        gEndpoint = buildEndpointUrl(gEndpointBase);
+    if (config.endpoint) {
+        gEndpoint = config.endpoint;
     }
     if (config.templateName) {
         gTemplateName = config.templateName;
@@ -223,18 +202,6 @@
         }
     }
-    if (typeof config.model === "string") {
-        gModel = config.model.trim();
-    }
-    if (typeof config.apiKey === "string") {
-        gApiKey = config.apiKey.trim();
-    }
-    if (typeof config.openaiOrganization === "string") {
-        gOpenaiOrganization = config.openaiOrganization.trim();
-    }
-    if (typeof config.openaiProject === "string") {
-        gOpenaiProject = config.openaiProject.trim();
-    }
     if (typeof config.debugLogging === "boolean") {
         setDebug(config.debugLogging);
     }
@@ -245,28 +212,10 @@
     } else {
         gTemplateText = await loadTemplate(gTemplateName);
     }
-    if (!gEndpoint) {
-        gEndpoint = buildEndpointUrl(gEndpointBase);
-    }
-    aiLog(`[AiClassifier] Endpoint base set to ${gEndpointBase}`, {debug: true});
     aiLog(`[AiClassifier] Endpoint set to ${gEndpoint}`, {debug: true});
     aiLog(`[AiClassifier] Template set to ${gTemplateName}`, {debug: true});
 }
 
-function buildAuthHeaders() {
-    const headers = {};
-    if (gApiKey) {
-        headers.Authorization = `Bearer ${gApiKey}`;
-    }
-    if (gOpenaiOrganization) {
-        headers["OpenAI-Organization"] = gOpenaiOrganization;
-    }
-    if (gOpenaiProject) {
-        headers["OpenAI-Project"] = gOpenaiProject;
-    }
-    return headers;
-}
-
 function buildSystemPrompt() {
     return SYSTEM_PREFIX + (gCustomSystemPrompt || DEFAULT_CUSTOM_SYSTEM_PROMPT) + SYSTEM_SUFFIX;
 }
@@ -284,7 +233,11 @@ function buildPrompt(body, criterion) {
 
 function getCachedResult(cacheKey) {
     if (!gCacheLoaded) {
-        return null;
+        if (Services?.tm?.spinEventLoopUntil) {
+            loadCacheSync();
+        } else {
+            return null;
+        }
     }
     if (cacheKey && gCache.has(cacheKey)) {
         aiLog(`[AiClassifier] Cache hit for key: ${cacheKey}`, {debug: true});
@@ -296,7 +249,11 @@
 
 function getReason(cacheKey) {
     if (!gCacheLoaded) {
-        return null;
+        if (Services?.tm?.spinEventLoopUntil) {
+            loadCacheSync();
+        } else {
+            return null;
+        }
     }
     const entry = gCache.get(cacheKey);
     return cacheKey && entry ? entry.reason || null : null;
 }
@@ -306,101 +263,18 @@
 function buildPayload(text, criterion) {
     let payloadObj = Object.assign({
         prompt: buildPrompt(text, criterion)
     }, gAiParams);
-    if (gModel) {
-        payloadObj.model = gModel;
-    }
     return JSON.stringify(payloadObj);
 }
 
-function reportParseError(message, detail) {
-    try {
-        const runtime = (globalThis.browser ?? globalThis.messenger)?.runtime;
-        if (!runtime?.sendMessage) {
-            return;
-        }
-        runtime.sendMessage({
-            type: "sortana:recordError",
-            context: "AI response parsing",
-            message,
-            detail
-        }).catch(() => {});
-    } catch (e) {
-        aiLog("Failed to report parse error", { level: "warn" }, e);
-    }
-}
-
-function extractLastJsonObject(text) {
-    let last = null;
-    let start = -1;
-    let depth = 0;
-    let inString = false;
-    let escape = false;
-
-    for (let i = 0; i < text.length; i += 1) {
-        const ch = text[i];
-        if (inString) {
-            if (escape) {
-                escape = false;
-                continue;
-            }
-            if (ch === "\\") {
-                escape = true;
-                continue;
-            }
-            if (ch === "\"") {
-                inString = false;
-            }
-            continue;
-        }
-        if (ch === "\"") {
-            inString = true;
-            continue;
-        }
-        if (ch === "{") {
-            if (depth === 0) {
-                start = i;
-            }
-            depth += 1;
-            continue;
-        }
-        if (ch === "}" && depth > 0) {
-            depth -= 1;
-            if (depth === 0 && start !== -1) {
-                last = text.slice(start, i + 1);
-                start = -1;
-            }
-        }
-    }
-
-    return last;
-}
-
 function parseMatch(result) {
     const rawText = result.choices?.[0]?.text || "";
-    const candidate = extractLastJsonObject(rawText);
-    if (!candidate) {
-        reportParseError("No JSON object found in AI response.", rawText.slice(0, 800));
-        return { matched: false, reason: "" };
-    }
-
-    let obj;
-    try {
-        obj = JSON.parse(candidate);
-    } catch (e) {
-        reportParseError("Failed to parse JSON from AI response.", candidate.slice(0, 800));
-        return { matched: false, reason: "" };
-    }
-
-    const matchValue = Object.prototype.hasOwnProperty.call(obj, "match") ? obj.match : obj.matched;
-    const matched = matchValue === true;
-    if (matchValue !== true && matchValue !== false) {
-        reportParseError("AI response missing valid match boolean.", candidate.slice(0, 800));
-    }
-
-    const reasonValue = obj.reason ?? obj.reasoning ?? obj.explaination;
-    const reason = typeof reasonValue === "string" ? reasonValue : "";
-
-    return { matched, reason };
+    const thinkText = rawText.match(/<think>[\s\S]*?<\/think>/gi)?.join('') || '';
+    aiLog('[AiClassifier] ⮡ Reasoning:', {debug: true}, thinkText);
+    const cleanedText = rawText.replace(/<think>[\s\S]*?<\/think>/gi, "").trim();
+    aiLog('[AiClassifier] ⮡ Cleaned Response Text:', {debug: true}, cleanedText);
+    const obj = JSON.parse(cleanedText);
+    const matched = obj.matched === true || obj.match === true;
+    return { matched, reason: thinkText };
 }
 
 function cacheEntry(cacheKey, matched, reason) {
@@ -456,6 +330,48 @@ async function getCacheSize() {
     return gCache.size;
 }
 
+function classifyTextSync(text, criterion, cacheKey = null) {
+    if (!Services?.tm?.spinEventLoopUntil) {
+        throw new Error("classifyTextSync requires Services");
+    }
+    const cached = getCachedResult(cacheKey);
+    if (cached !== null) {
+        return cached;
+    }
+
+    const payload = buildPayload(text, criterion);
+
+    aiLog(`[AiClassifier] Sending classification request to ${gEndpoint}`, {debug: true});
+
+    let result;
+    let done = false;
+    (async () => {
+        try {
+            const response = await fetch(gEndpoint, {
+                method: "POST",
+                headers: { "Content-Type": "application/json" },
+                body: payload,
+            });
+            if (response.ok) {
+                const json = await response.json();
+                aiLog(`[AiClassifier] Received response:`, {debug: true}, json);
+                result = parseMatch(json);
+                cacheEntry(cacheKey, result.matched, result.reason);
+                result = result.matched;
+            } else {
+                aiLog(`HTTP status ${response.status}`, {level: 'warn'});
+                result = false;
+            }
+        } catch (e) {
+            aiLog(`HTTP request failed`, {level: 'error'}, e);
+            result = false;
+        } finally {
+            done = true;
+        }
+    })();
+    Services.tm.spinEventLoopUntil(() => done);
+    return result;
+}
 
 async function classifyText(text, criterion, cacheKey = null) {
     if (!gCacheLoaded) {
@@ -467,11 +383,6 @@
     }
 
     const payload = buildPayload(text, criterion);
-    try {
-        await storage.local.set({ lastPayload: JSON.parse(payload) });
-    } catch (e) {
-        aiLog('failed to save last payload', { level: 'warn' }, e);
-    }
 
     aiLog(`[AiClassifier] Sending classification request to ${gEndpoint}`, {debug: true});
     aiLog(`[AiClassifier] Classification request payload:`, { debug: true }, payload);
 
@@ -479,7 +390,7 @@
     try {
         const response = await fetch(gEndpoint, {
             method: "POST",
-            headers: { "Content-Type": "application/json", ...buildAuthHeaders() },
+            headers: { "Content-Type": "application/json" },
             body: payload,
         });
 
@@ -503,4 +414,4 @@
 async function init() {
     await loadCache();
 }
 
-export { buildEndpointUrl, buildModelsUrl, normalizeEndpointBase, classifyText, setConfig, removeCacheEntries, clearCache, getReason, getCachedResult, buildCacheKey, getCacheSize, init };
+export { classifyText, classifyTextSync, setConfig, removeCacheEntries, clearCache, getReason, getCachedResult, buildCacheKey, buildCacheKeySync, getCacheSize, init };
diff --git a/modules/defaultParams.js b/modules/defaultParams.js
deleted file mode 100644
index a8afe53..0000000
--- a/modules/defaultParams.js
+++ /dev/null
@@ -1,16 +0,0 @@
-"use strict";
-
-export const DEFAULT_AI_PARAMS = {
-    max_tokens: 4096,
-    temperature: 0.6,
-    top_p: 0.95,
-    seed: -1,
-    repetition_penalty: 1.0,
-    top_k: 20,
-    min_p: 0,
-    presence_penalty: 0,
-    frequency_penalty: 0,
-    typical_p: 1,
-    tfs: 1,
-};
-
diff --git a/modules/themeUtils.js b/modules/themeUtils.js
deleted file mode 100644
index 58728f1..0000000
--- a/modules/themeUtils.js
+++ /dev/null
@@ -1,20 +0,0 @@
-"use strict";
-
-export async function detectSystemTheme() {
-    try {
-        const t = await browser.theme.getCurrent();
-        const scheme = t?.properties?.color_scheme;
-        if (scheme === 'dark' || scheme === 'light') {
-            return scheme;
-        }
-        const color = t?.colors?.frame || t?.colors?.toolbar;
-        if (color && /^#/.test(color)) {
-            const r = parseInt(color.slice(1, 3), 16);
-            const g = parseInt(color.slice(3, 5), 16);
-            const b = parseInt(color.slice(5, 7), 16);
-            const lum = (0.299 * r + 0.587 * g + 0.114 * b) / 255;
-            return lum < 0.5 ? 'dark' : 'light';
-        }
-    } catch {}
-    return 'light';
-}
diff --git a/options/dataTransfer.js b/options/dataTransfer.js
index 393b533..b289c02 100644
--- a/options/dataTransfer.js
+++ b/options/dataTransfer.js
@@ -3,10 +3,6 @@
 const storage = (globalThis.messenger ?? browser).storage;
 const KEY_GROUPS = {
     settings: [
         'endpoint',
-        'model',
-        'apiKey',
-        'openaiOrganization',
-        'openaiProject',
         'templateName',
         'customTemplate',
         'customSystemPrompt',
diff --git a/options/options.html b/options/options.html
index cfc370d..58cfe37 100644
--- a/options/options.html
+++ b/options/options.html
@@ -31,10 +31,6 @@
     .tag {
         --bulma-tag-h: 318;
     }
-    #diff-display {
-        white-space: pre-wrap;
-        font-family: monospace;
-    }
@@ -51,8 +47,6 @@
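Aside: the repeated timing-statistics arithmetic in the background.js hunk above (`delta`, `mean`, `m2`) is Welford's online algorithm for a running mean and variance. A minimal standalone sketch, reusing the `timingStats` field names (the helper function names here are illustrative, not part of the patch):

```javascript
// Running mean/variance via Welford's online algorithm,
// mirroring the timingStats updates in background.js.
function createStats() {
    return { count: 0, total: 0, last: -1, mean: 0, m2: 0 };
}

function updateStats(t, elapsed) {
    t.count += 1;
    t.total += elapsed;
    t.last = elapsed;
    const delta = elapsed - t.mean;
    t.mean += delta / t.count;
    t.m2 += delta * (elapsed - t.mean);
}

// Sample standard deviation, as derived in the sortana:getTiming handler.
function stdDev(t) {
    return t.count > 1 ? Math.sqrt(t.m2 / (t.count - 1)) : 0;
}

const stats = createStats();
for (const ms of [100, 200, 300]) {
    updateStats(stats, ms);
}
console.log(stats.mean);    // 200
console.log(stdDev(stats)); // 100
```

Accumulating `m2` incrementally avoids storing every sample and stays numerically stable, which is why only five small numbers need to be persisted in `classifyStats`.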
  [The remaining options/options.html hunks were garbled in extraction: the HTML markup was stripped, leaving only rendered text and bare diff markers. Recoverable content: the tab list keeps its Settings, Rules and Maintenance entries while a fourth tab entry is removed, and two later hunks (@@ -74,22 +68,6 @@ and @@ -141,32 +119,6 @@) delete further markup, apparently form controls in the settings section. The original markup is not recoverable from this copy.]
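For reference, the replacement `parseMatch` in the AiClassifier.js hunk above reduces to a small standalone function. This sketch assumes the model wraps its chain-of-thought in `<think>…</think>` tags (the opening tag is inferred from the closing-tag regex in the patch), and the sample response below is invented for illustration:

```javascript
// Split an LLM completion into <think>…</think> reasoning and a JSON verdict.
function parseMatch(rawText) {
    const thinkText = rawText.match(/<think>[\s\S]*?<\/think>/gi)?.join("") || "";
    const cleanedText = rawText.replace(/<think>[\s\S]*?<\/think>/gi, "").trim();
    const obj = JSON.parse(cleanedText); // throws if the model returned non-JSON
    const matched = obj.matched === true || obj.match === true;
    return { matched, reason: thinkText };
}

// Invented sample response for illustration.
const sample = '<think>Sender matches the newsletter rule.</think>{"match": true}';
const verdict = parseMatch(sample);
console.log(verdict.matched); // true
```

Unlike the removed `extractLastJsonObject` path, this version assumes the cleaned text is exactly one JSON object, so any stray commentary outside the `<think>` tags will make `JSON.parse` throw.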