Compare commits


3 Commits

Author SHA1 Message Date
dgtlmoon 4679e28cf9 Fixing dupes 2026-04-24 18:13:05 +10:00
dgtlmoon e053b50382 Add package 2026-04-24 18:01:11 +10:00
dgtlmoon 4f76c01857 Re #4080 msgfmt linting 2026-04-24 17:57:48 +10:00
118 changed files with 5713 additions and 25832 deletions
+5 -67
@@ -31,73 +31,11 @@ jobs:
echo "Checking $f"
msgfmt --check-format -o /dev/null "$f"
done
- name: Check translation catalog is up-to-date
run: |
pip install "$(grep -E '^babel==' requirements.txt)"
python setup.py extract_messages
python setup.py update_catalog
python setup.py compile_catalog
# Ignore POT-Creation-Date timestamp lines — they change on every extract_messages run
if git diff changedetectionio/translations | grep -v 'POT-Creation-Date' | grep -qE '^[+-][^+-]'; then
echo "ERROR: Translation catalog is out of sync. Run: python setup.py extract_messages && python setup.py update_catalog && python setup.py compile_catalog"
git diff --stat changedetectionio/translations
exit 1
fi
lint-template-i18n:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
- name: Check for fragmented gettext calls in templates
run: |
python3 << 'PYEOF'
import re, sys
from pathlib import Path
# Detects adjacent {{ _(...) }} calls on the same line separated only by HTML
# tags, whitespace, or non-translating Jinja2 variables — the anti-pattern of
# splitting a single sentence across multiple msgids.
# See https://github.com/dgtlmoon/changedetection.io/issues/4074 for background.
#
# The correct fix is to consolidate fragments into one entire-sentence msgid,
# injecting dynamic values via %(name)s kwargs — per the GNU gettext manual
# sections "Entire sentences" and "No string concatenation". See PR #4076 for
# worked examples of each consolidation pattern.
#
# BASELINE: this limit reflects pre-existing violations present when this check
# was introduced. It must only ever go DOWN. Each time you fix a template, lower
# the limit by the number of lines fixed so the improvement is locked in.
# When the count reaches 0, replace the baseline check with a hard sys.exit(1).
BASELINE_LIMIT = 44
FRAGMENT_RE = re.compile(
r'\{\{[^{}]*\b_\s*\([^)]*\)[^{}]*\}\}'
r'(?:\s*(?:<[^>]+>|\{\{(?![^}]*\b_\s*\()[^}]*\}\})\s*)+'
r'\{\{[^{}]*\b_\s*\([^)]*\)[^{}]*\}\}'
)
violations = []
for f in sorted(Path('changedetectionio').rglob('*.html')):
for lineno, line in enumerate(f.read_text().splitlines(), 1):
if FRAGMENT_RE.search(line):
violations.append(f"{f}:{lineno}: {line.strip()[:120]}")
count = len(violations)
print(f"Fragmented i18n calls found: {count} (limit: {BASELINE_LIMIT})")
for v in violations:
print(v)
if count > BASELINE_LIMIT:
print(f"\nERROR: {count} fragmented gettext calls exceed the baseline of {BASELINE_LIMIT}.")
print("Consolidate adjacent _() calls into a single entire-sentence msgid.")
print("See https://github.com/dgtlmoon/changedetection.io/issues/4074 for patterns.")
sys.exit(1)
PYEOF
test-application-3-10:
# Only run on push to master (including PR merges)
if: github.event_name == 'push' && github.ref == 'refs/heads/master'
needs: [lint-code, lint-translations, lint-template-i18n]
needs: [lint-code, lint-translations]
uses: ./.github/workflows/test-stack-reusable-workflow.yml
with:
python-version: '3.10'
@@ -105,7 +43,7 @@ jobs:
test-application-3-11:
# Always run
needs: [lint-code, lint-translations, lint-template-i18n]
needs: [lint-code, lint-translations]
uses: ./.github/workflows/test-stack-reusable-workflow.yml
with:
python-version: '3.11'
@@ -113,7 +51,7 @@ jobs:
test-application-3-12:
# Only run on push to master (including PR merges)
if: github.event_name == 'push' && github.ref == 'refs/heads/master'
needs: [lint-code, lint-translations, lint-template-i18n]
needs: [lint-code, lint-translations]
uses: ./.github/workflows/test-stack-reusable-workflow.yml
with:
python-version: '3.12'
@@ -122,7 +60,7 @@ jobs:
test-application-3-13:
# Only run on push to master (including PR merges)
if: github.event_name == 'push' && github.ref == 'refs/heads/master'
needs: [lint-code, lint-translations, lint-template-i18n]
needs: [lint-code, lint-translations]
uses: ./.github/workflows/test-stack-reusable-workflow.yml
with:
python-version: '3.13'
@@ -131,7 +69,7 @@ jobs:
test-application-3-14:
#if: github.event_name == 'push' && github.ref == 'refs/heads/master'
needs: [lint-code, lint-translations, lint-template-i18n]
needs: [lint-code, lint-translations]
uses: ./.github/workflows/test-stack-reusable-workflow.yml
with:
python-version: '3.14'
@@ -99,7 +99,7 @@ jobs:
- name: Run Unit Tests
run: |
docker run test-changedetectionio bash -c 'cd changedetectionio;pytest tests/unit/ tests/llm/'
docker run test-changedetectionio bash -c 'cd changedetectionio;pytest tests/unit/'
# Basic pytest tests with ancillary services
basic-tests:
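
The `lint-template-i18n` check above flags sentences that have been split across adjacent `_()` calls separated only by markup. As an illustrative sketch of the consolidation it asks for — using Flask-Babel's `gettext` (which this repository imports) and mirroring the "Set to 0 to disable" string that appears later in this diff — the fragmented and entire-sentence forms look roughly like this:

```python
from flask_babel import gettext as _

# Fragmented (anti-pattern): the sentence is split across several msgids, so
# translators see disconnected fragments and cannot reorder words to suit
# their own language.
hint = _("Set to") + " <strong>0</strong> " + _("to disable")

# Entire-sentence msgid: one translatable string, with the markup/dynamic part
# injected via a %(name)s placeholder so word order stays under the
# translator's control (per the GNU gettext "Entire sentences" guidance).
hint = _("Set to %(value)s to disable", value="<strong>0</strong>")
```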
+2 -36
@@ -22,20 +22,6 @@ Ideal for monitoring price changes, content edits, conditional changes and more.
- Get started watching and receiving website change notifications straight away.
- See our [tutorials and how-to page for more inspiration](https://changedetection.io/tutorials)
## AI-powered website change detection — smart alerts and plain-language summaries
Stop drowning in noise. Connect any LLM (OpenAI, Gemini, Anthropic, Ollama and more) and go from _"something changed"_ to _"only the thing you care about changed"_.
**AI change detection rules** — write a plain-English intent once: _"notify me only when the price drops below $50"_, _"alert me when the item comes back in stock"_, _"ignore navigation and footer changes"_. The AI evaluates every detected diff against your intent and silently suppresses everything irrelevant. Fewer false positives, zero noise.
**AI change summaries** — instead of staring at a raw diff, your notification reads _"Price dropped from $89.99 to $67.00"_ or _"3 new products added to the listing"_. Works globally or per-watch, with full control over the prompt.
Works with any model you already pay for — GPT-4o-mini and Gemini Flash handle this well at fractions of a cent per check. Or run it entirely locally with Ollama. Powered by [LiteLLM](https://github.com/BerriAI/litellm), giving you seamless access to [100+ supported providers and models](https://docs.litellm.ai/docs/providers).
[<img src="./docs/LLM-change-summary.jpeg" style="max-width:100%;" alt="AI-powered website change detection — plain language change summaries and smart alert rules" title="AI website change detection with LLM change summaries and intelligent alert filtering" />](https://changedetection.io?src=github)
_Note: Available in our subscription/hosted service from June 2026_
### Target specific parts of the webpage using the Visual Selector tool.
Available when connected to a <a href="https://github.com/dgtlmoon/changedetection.io/wiki/Playwright-content-fetcher">playwright content fetcher</a> (included as part of our subscription service)
@@ -322,27 +308,9 @@ I offer commercial support, this software is depended on by network security, ae
[release-link]: https://github.com/dgtlmoon/changedetection.io/releases
[docker-link]: https://hub.docker.com/r/dgtlmoon/changedetection.io
## Commercial Licencing
## Disclaimer
**This software is provided "as-is", without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, and non-infringement. In no event shall the authors or copyright holders be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the software or the use or other dealings in the software.**
### Website content monitoring
You are solely responsible for ensuring that your use of this software complies with the terms of service, `robots.txt` directives, access policies, and all applicable laws of any website or service you choose to monitor. The authors and contributors of this software accept no liability whatsoever for how you choose to use it or for any consequences arising from that use.
### AI / LLM features
If you choose to enable AI / LLM features, content detected on monitored websites — including page diffs and extracted text — will be transmitted to a third-party AI provider of your choosing, outside of this installation. You are solely responsible for:
- Ensuring such transmission is permitted by the terms of service of every website you monitor.
- Compliance with all applicable data-protection and privacy laws (including but not limited to GDPR) with respect to any personal data that may appear in monitored content.
- All API costs and charges levied by your chosen AI provider. This software has no visibility into or control over those charges.
- Any consequences arising from acting on AI-generated output.
**AI and LLM models are known to hallucinate** — producing plausible-sounding but factually incorrect, incomplete, or entirely fabricated output with apparent confidence. By design, LLMs may also omit or silently truncate relevant information during summarisation. **AI output must never be relied upon as complete or accurate.**
By using this software, and in particular any AI / LLM features, you personally indemnify and hold harmless the author(s), contributor(s), and any associated parties from and against any and all claims, damages, losses, costs, and expenses (including reasonable legal fees) arising out of or in connection with your use of this software.
If you are reselling this software either in part or full as part of any commercial arrangement, you must abide by our COMMERCIAL_LICENCE.md found in our code repository, please contact dgtlmoon@gmail.com and contact@changedetection.io .
## Third-party licenses
@@ -352,6 +320,4 @@ changedetectionio.html_tools.elementpath_tostring: Copyright (c), 2018-2021, SIS
Recognition of fantastic contributors to the project
<sub>Developer note: see [translation guide](changedetectionio/translations/README.md) for i18n template patterns and workflow.</sub>
- Constantin Hong https://github.com/Constantin1489
+3 -8
@@ -2,7 +2,7 @@
# Read more https://github.com/dgtlmoon/changedetection.io/wiki
# Semver means never use .01, or 00. Should be .1.
__version__ = '0.55.3'
__version__ = '0.54.10'
from changedetectionio.strtobool import strtobool
from json.decoder import JSONDecodeError
@@ -400,11 +400,8 @@ def main():
datastore.data['settings']['application']['all_paused'] = all_paused
logger.info(f"Setting all watches paused: {all_paused}")
# Register built-in restock plugins (deferred here to avoid circular imports at module load time)
from changedetectionio.pluggy_interface import inject_datastore_into_plugins, register_builtin_restock_plugins
register_builtin_restock_plugins()
# Inject datastore into plugins that need access to settings
from changedetectionio.pluggy_interface import inject_datastore_into_plugins
inject_datastore_into_plugins(datastore)
# Step 1: Add URLs with their options (if provided via -u flags)
@@ -627,14 +624,12 @@ def main():
@app.context_processor
def inject_template_globals():
from changedetectionio.llm.evaluator import get_llm_config as _get_llm_config
return dict(right_sticky="v"+__version__,
new_version_available=app.config['NEW_VERSION_AVAILABLE'],
has_password=datastore.data['settings']['application']['password'] != False,
socket_io_enabled=datastore.data['settings']['application'].get('ui', {}).get('socket_io_enabled', True),
all_paused=datastore.data['settings']['application'].get('all_paused', False),
all_muted=datastore.data['settings']['application'].get('all_muted', False),
llm_configured=bool(_get_llm_config(datastore)),
all_muted=datastore.data['settings']['application'].get('all_muted', False)
)
# Monitored websites will not receive a Referer header when a user clicks on an outgoing link.
@@ -25,7 +25,7 @@
<div class="pure-control-group">
{{ _('Enter one URL per line, and optionally add tags for each URL after a space, delineated by comma (,):') }}
<br>
<p><strong>{{ _('Example') }}: </strong><code>https://example.com tag1, tag2, last tag</code></p>
<p><strong>{{ _('Example:') }} </strong><code>https://example.com tag1, tag2, last tag</code></p>
{{ _('URLs which do not pass validation will stay in the textarea.') }}
</div>
{{ render_field(form.processor, class="processor") }}
@@ -45,7 +45,7 @@
<div class="tab-pane-inner" id="distill-io">
<div class="pure-control-group">
{{ _('Copy and Paste your Distill.io watch \'export\' file, this should be a JSON file.') }}<br>
{{ _('This is <i>experimental</i>, supported fields are <code>name</code>, <code>uri</code>, <code>tags</code>, <code>config:selections</code>, the rest (including <code>schedule</code>) are ignored.')|safe }}
{{ _('This is') }} <i>{{ _('experimental') }}</i>, {{ _('supported fields are') }} <code>name</code>, <code>uri</code>, <code>tags</code>, <code>config:selections</code>, {{ _('the rest (including') }} <code>schedule</code>) {{ _('are ignored.') }}
<br>
<p>
{{ _('How to export?') }} <a href="https://distill.io/docs/web-monitor/how-export-and-import-monitors/">https://distill.io/docs/web-monitor/how-export-and-import-monitors/</a><br>
@@ -13,9 +13,7 @@ from changedetectionio.auth_decorator import login_optionally_required
def construct_blueprint(datastore: ChangeDetectionStore):
from changedetectionio.blueprint.settings.llm import construct_llm_blueprint
settings_blueprint = Blueprint('settings', __name__, template_folder="templates")
settings_blueprint.register_blueprint(construct_llm_blueprint(datastore), url_prefix='/llm')
@settings_blueprint.route("", methods=['GET', "POST"])
@login_optionally_required
@@ -29,23 +27,6 @@ def construct_blueprint(datastore: ChangeDetectionStore):
default = deepcopy(datastore.data['settings'])
# Pre-populate LLM sub-form fields from stored config (text fields only —
# PasswordField for api_key is intentionally left blank on GET).
_stored_llm = datastore.data['settings']['application'].get('llm') or {}
default['llm'] = {
'llm_model': _stored_llm.get('model', ''),
'llm_api_base': _stored_llm.get('api_base', ''),
'llm_change_summary_default': datastore.data['settings']['application'].get('llm_change_summary_default', ''),
'llm_override_diff_with_summary': datastore.data['settings']['application'].get('llm_override_diff_with_summary', True),
'llm_restock_use_fallback_extract': datastore.data['settings']['application'].get('llm_restock_use_fallback_extract', True),
'llm_budget_action': datastore.data['settings']['application'].get('llm_budget_action', 'skip_llm'),
'llm_thinking_budget': str(datastore.data['settings']['application'].get('llm_thinking_budget', 0)),
'llm_max_summary_tokens': str(datastore.data['settings']['application'].get('llm_max_summary_tokens', 3000)),
'llm_token_budget_month': _stored_llm.get('token_budget_month', 0),
'llm_max_input_chars': _stored_llm.get('max_input_chars', 0),
}
if datastore.proxy_list is not None:
available_proxies = list(datastore.proxy_list.keys())
# When enabled
@@ -95,73 +76,6 @@ def construct_blueprint(datastore: ChangeDetectionStore):
datastore.data['settings']['application'].update(app_update)
# Save LLM config separately under settings.application.llm.
# Token counters (tokens_total_cumulative, tokens_this_month, tokens_month_key)
# are system-managed and must never be overwritten by form submissions.
_LLM_PROTECTED_FIELDS = {
'tokens_total_cumulative', 'tokens_this_month', 'tokens_month_key',
'cost_usd_total_cumulative', 'cost_usd_this_month',
}
existing_llm = datastore.data['settings']['application'].get('llm') or {}
preserved_counters = {k: v for k, v in existing_llm.items() if k in _LLM_PROTECTED_FIELDS}
llm_data = form.data.get('llm') or {}
# PasswordField never re-populates its value on GET, so the submitted value
# is only non-empty when the user explicitly typed a new key.
# If blank, preserve the existing key so a settings save doesn't accidentally clear it.
submitted_api_key = (llm_data.get('llm_api_key') or '').strip()
effective_api_key = submitted_api_key if submitted_api_key else existing_llm.get('api_key', '')
# Application-level LLM settings (survive provider changes)
datastore.data['settings']['application']['llm_change_summary_default'] = (
llm_data.get('llm_change_summary_default') or ''
).strip()
datastore.data['settings']['application']['llm_override_diff_with_summary'] = (
bool(llm_data.get('llm_override_diff_with_summary', True))
)
datastore.data['settings']['application']['llm_restock_use_fallback_extract'] = (
bool(llm_data.get('llm_restock_use_fallback_extract', True))
)
datastore.data['settings']['application']['llm_budget_action'] = (
llm_data.get('llm_budget_action') or 'skip_llm'
)
datastore.data['settings']['application']['llm_thinking_budget'] = (
int(llm_data.get('llm_thinking_budget') or 0)
)
datastore.data['settings']['application']['llm_max_summary_tokens'] = (
int(llm_data.get('llm_max_summary_tokens') or 3000)
)
# Monthly token budget — only save if env var is not set
import os as _os
if not _os.getenv('LLM_TOKEN_BUDGET_MONTH', '').strip():
_budget = llm_data.get('llm_token_budget_month') or 0
existing_llm['token_budget_month'] = int(_budget) if _budget else 0
# Max input chars — only save if env var is not set
if not _os.getenv('LLM_MAX_INPUT_CHARS', '').strip():
_max_chars = llm_data.get('llm_max_input_chars') or 0
existing_llm['max_input_chars'] = int(_max_chars) if _max_chars else 0
llm_config = {
'model': (llm_data.get('llm_model') or '').strip(),
'api_key': effective_api_key,
'api_base': (llm_data.get('llm_api_base') or '').strip(),
'token_budget_month': existing_llm.get('token_budget_month', 0),
'max_input_chars': existing_llm.get('max_input_chars', 0),
**preserved_counters,
}
# Only store if a model is set
if llm_config['model']:
datastore.data['settings']['application']['llm'] = llm_config
else:
# Remove model config but retain counters for historical record
if preserved_counters:
datastore.data['settings']['application']['llm'] = preserved_counters
else:
datastore.data['settings']['application'].pop('llm', None)
# Handle dynamic worker count adjustment
old_worker_count = datastore.data['settings']['requests'].get('workers', 1)
new_worker_count = form.data['requests'].get('workers', 1)
@@ -250,34 +164,9 @@ def construct_blueprint(datastore: ChangeDetectionStore):
# Instantiate the form with existing settings
plugin_forms[plugin_id] = form_class(data=settings)
from changedetectionio.llm.evaluator import (
get_llm_config as _get_llm_cfg,
llm_configured_via_env,
get_global_token_budget_month,
)
llm_config = _get_llm_cfg(datastore) or {}
llm_env_configured = llm_configured_via_env()
llm_stored = datastore.data['settings']['application'].get('llm') or {}
llm_token_budget_month = get_global_token_budget_month(datastore)
llm_token_budget_month_env = get_global_token_budget_month() # env var only, for readonly logic
_max_input_chars_env_str = os.getenv('LLM_MAX_INPUT_CHARS', '').strip()
llm_max_input_chars_env = int(_max_input_chars_env_str) if _max_input_chars_env_str.isdigit() else 0
from changedetectionio.llm.evaluator import _get_max_input_chars, _DEFAULT_MAX_INPUT_CHARS
llm_effective_max_input_chars = _get_max_input_chars(datastore)
# Cost display: only when user configured their own key (not hosted/operator-managed)
llm_show_costs = not llm_env_configured
output = render_template("settings.html",
active_plugins=active_plugins,
api_key=datastore.data['settings']['application'].get('api_access_token'),
llm_config=llm_config,
llm_env_configured=llm_env_configured,
llm_stored=llm_stored,
llm_token_budget_month=llm_token_budget_month,
llm_token_budget_month_env=llm_token_budget_month_env,
llm_max_input_chars_env=llm_max_input_chars_env,
llm_effective_max_input_chars=llm_effective_max_input_chars,
llm_show_costs=llm_show_costs,
python_version=python_version,
uptime_seconds=uptime_seconds,
available_timezones=sorted(available_timezones()),
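
The save handler in this file goes to some length to stop a settings-form submission from clobbering system-managed usage counters. A condensed sketch of that merge pattern (field names taken from the diff above; the dict-backed settings store is assumed):

```python
# Counters are written by the token-tracking code, never by the settings form,
# so they are lifted out of the existing config and merged back in last.
_LLM_PROTECTED_FIELDS = {
    'tokens_total_cumulative', 'tokens_this_month', 'tokens_month_key',
    'cost_usd_total_cumulative', 'cost_usd_this_month',
}

def merge_llm_config(existing: dict, submitted: dict) -> dict:
    preserved = {k: v for k, v in existing.items() if k in _LLM_PROTECTED_FIELDS}
    # Submitted values win for provider/model/key fields; protected counters
    # always win last, so saving settings can never reset usage history.
    return {**submitted, **preserved}
```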
-128
@@ -1,128 +0,0 @@
import os
from flask import Blueprint, jsonify, redirect, url_for, flash
from flask_babel import gettext
from loguru import logger
from changedetectionio.store import ChangeDetectionStore
from changedetectionio.auth_decorator import login_optionally_required
def construct_llm_blueprint(datastore: ChangeDetectionStore):
llm_blueprint = Blueprint('llm', __name__)
@llm_blueprint.route("/models", methods=['GET'])
@login_optionally_required
def llm_get_models():
from flask import request
provider = request.args.get('provider', '').strip()
api_key = request.args.get('api_key', '').strip()
api_base = request.args.get('api_base', '').strip()
logger.debug(f"LLM model list requested for provider={provider!r} api_base={api_base!r}")
if not provider:
logger.debug("LLM model list: no provider specified, returning 400")
return jsonify({'models': [], 'error': 'No provider specified'}), 400
# Fall back to the stored key if the user hasn't typed one yet
if not api_key:
api_key = (datastore.data['settings']['application'].get('llm') or {}).get('api_key', '')
logger.debug("LLM model list: no api_key in request, using stored key")
_PREFIXES = {'gemini': 'gemini/', 'ollama': 'ollama/', 'openrouter': 'openrouter/'}
prefix = _PREFIXES.get(provider, '')
try:
import litellm
logger.debug(f"LLM model list: calling litellm.get_valid_models provider={provider!r} api_base={api_base!r}")
raw = litellm.get_valid_models(
check_provider_endpoint=True,
custom_llm_provider=provider,
api_key=api_key or None,
api_base=api_base or None,
) or []
models = sorted({(m if m.startswith(prefix) else prefix + m) for m in raw})
logger.debug(f"LLM model list: got {len(models)} models for provider={provider!r}")
return jsonify({'models': models, 'error': None})
except Exception as e:
logger.error(f"LLM model list failed for provider={provider!r}: {e}")
logger.exception("LLM model list full traceback:")
return jsonify({'models': [], 'error': str(e)}), 400
@llm_blueprint.route("/test", methods=['GET'])
@login_optionally_required
def llm_test():
from changedetectionio.llm.client import completion
llm_cfg = datastore.data['settings']['application'].get('llm') or {}
model = llm_cfg.get('model', '').strip()
api_base = llm_cfg.get('api_base', '') or ''
logger.debug(f"LLM connection test requested: model={model!r} api_base={api_base!r}")
if not model:
logger.error("LLM connection test failed: no model configured in datastore")
return jsonify({'ok': False, 'error': 'No model configured.'}), 400
try:
logger.debug(f"LLM connection test: sending test prompt to model={model!r}")
text, total_tokens, input_tokens, output_tokens = completion(
model=model,
messages=[{'role': 'user', 'content':
'Reply with exactly five words confirming you are ready.'}],
api_key=llm_cfg.get('api_key') or None,
api_base=api_base or None,
timeout=20,
max_tokens=200,
)
reply = text.strip()
if not reply:
logger.warning(
f"LLM connection test: model={model!r} responded but returned empty content "
f"tokens={total_tokens} (in={input_tokens} out={output_tokens}) — "
f"check finish_reason in client debug log above"
)
return jsonify({'ok': False, 'error': 'Model responded but returned empty content — check server logs.'}), 400
logger.success(
f"LLM connection test OK: model={model!r} "
f"tokens={total_tokens} (in={input_tokens} out={output_tokens}) "
f"reply={reply!r}"
)
return jsonify({'ok': True, 'text': reply, 'tokens': total_tokens})
except Exception as e:
logger.error(f"LLM connection test FAILED: model={model!r} api_base={api_base!r} error={e}")
logger.exception("LLM connection test full traceback:")
return jsonify({'ok': False, 'error': str(e)}), 400
@llm_blueprint.route("/clear", methods=['GET'])
@login_optionally_required
def llm_clear():
logger.debug("LLM configuration cleared by user")
datastore.data['settings']['application'].pop('llm', None)
datastore.commit()
flash(gettext("AI / LLM configuration removed."), 'notice')
return redirect(url_for('settings.settings_page') + '#ai')
@llm_blueprint.route("/clear-summary-cache", methods=['GET'])
@login_optionally_required
def llm_clear_summary_cache():
import glob
count = 0
for watch in datastore.data['watching'].values():
if not watch.data_dir:
continue
for f in glob.glob(os.path.join(watch.data_dir, 'change-summary-*.txt')):
try:
os.remove(f)
logger.info(f"LLM summary cache removed: {f}")
count += 1
except OSError as e:
logger.warning(f"Could not remove LLM summary cache file {f}: {e}")
logger.info(f"LLM summary cache cleared: {count} file(s) removed")
flash(gettext("AI summary cache cleared (%(n)s file(s) removed).", n=count), 'notice')
return redirect(url_for('settings.settings_page') + '#ai')
return llm_blueprint
@@ -9,7 +9,6 @@
const email_notification_prefix=JSON.parse('{{emailprefix|tojson}}');
{% endif %}
</script>
<script src="{{url_for('static_content', group='js', filename='modal.js')}}"></script>
<script src="{{url_for('static_content', group='js', filename='tabs.js')}}" defer></script>
<script src="{{url_for('static_content', group='js', filename='plugins.js')}}" defer></script>
<script src="{{url_for('static_content', group='js', filename='notifications.js')}}" defer></script>
@@ -34,7 +33,6 @@
<li class="tab"><a href="#plugin-{{ tab.plugin_id }}">{{ tab.tab_label }}</a></li>
{% endfor %}
{% endif %}
<li class="tab"><a href="#ai">{{ _('AI / LLM') }}</a></li>
<li class="tab"><a href="#info">{{ _('Info') }}</a></li>
</ul>
</div>
@@ -58,7 +56,7 @@
{{ render_field(form.application.form.filter_failure_notification_threshold_attempts, class="filter_failure_notification_threshold_attempts") }}
<span class="pure-form-message-inline">{{ _('After this many consecutive times that the CSS/xPath filter is missing, send a notification') }}
<br>
{{ _('Set to <strong>0</strong> to disable')|safe }}
{{ _('Set to') }} <strong>0</strong> {{ _('to disable') }}
</span>
</div>
<div class="pure-control-group">
@@ -120,15 +118,15 @@
<div class="pure-control-group inline-radio">
{{ render_field(form.application.form.fetch_backend, class="fetch-backend") }}
<span class="pure-form-message-inline">
<p>{{ _('Use the <strong>Basic</strong> method (default) where your watched sites don\'t need Javascript to render.')|safe }}</p>
<p>{{ _('The <strong>Chrome/Javascript</strong> method requires a network connection to a running WebDriver+Chrome server, set by the ENV var \'WEBDRIVER_URL\'.')|safe }}</p>
<p>{{ _('Use the') }} <strong>{{ _('Basic') }}</strong> {{ _('method (default) where your watched sites don\'t need Javascript to render.') }}</p>
<p>{{ _('The') }} <strong>{{ _('Chrome/Javascript') }}</strong> {{ _('method requires a network connection to a running WebDriver+Chrome server, set by the ENV var') }} 'WEBDRIVER_URL'. </p>
</span>
</div>
<fieldset class="pure-group" id="webdriver-override-options" data-visible-for="application-fetch_backend=html_webdriver">
<div class="pure-form-message-inline">
<strong>{{ _('If you\'re having trouble waiting for the page to be fully rendered (text missing etc), try increasing the \'wait\' time here.') }}</strong>
<br>
{{ _('This will wait <i>n</i> seconds before extracting the text.')|safe }}
{{ _('This will wait') }} <i>n</i> {{ _('seconds before extracting the text.') }}
</div>
<div class="pure-control-group">
{{ render_field(form.application.form.webdriver_delay) }}
@@ -197,7 +195,7 @@ nav
<span class="pure-form-message-inline">{{ _('Note: This is applied globally in addition to the per-watch rules.') }}</span><br>
<span class="pure-form-message-inline">
<ul>
<li>{{ _('Matching text will be ignored in the text snapshot (you can still see it but it wont trigger a change)') }}</li>
<li>{{ _('Matching text will be') }} <strong>{{ _('ignored') }}</strong> {{ _('in the text snapshot (you can still see it but it wont trigger a change)') }}</li>
<li>{{ _('Note: This is applied globally in addition to the per-watch rules.') }}</li>
<li>{{ _('Each line processed separately, any line matching will be ignored (removed before creating the checksum)') }}</li>
<li>{{ _('Regular Expression support, wrap the entire line in forward slash') }} <code>/regex/</code></li>
@@ -265,7 +263,7 @@ nav
</div>
<div>
{{ render_field(form.application.form.rss_template_override) }}
{{ show_token_placeholders(extra_notification_token_placeholder_info=extra_notification_token_placeholder_info, suffix="-rss", settings_application=settings_application) }}
{{ show_token_placeholders(extra_notification_token_placeholder_info=extra_notification_token_placeholder_info, suffix="-rss") }}
</div>
</div>
<br>
@@ -394,7 +392,6 @@ nav
</div>
{% endfor %}
{% endif %}
{% include 'settings_llm_tab.html' %}
<div class="tab-pane-inner" id="info">
<p><strong>{{ _('Uptime:') }}</strong> {{ uptime_seconds|format_duration }}</p>
<p><strong>{{ _('Python version:') }}</strong> {{ python_version }}</p>
@@ -1,528 +0,0 @@
{% from '_helpers.html' import render_field %}
{% from '_stab.html' import stab_shell, stab_pane %}
{#
AI / LLM settings tab content — included from settings.html.
Requires template context: form, llm_config, llm_env_configured
#}
<div class="tab-pane-inner" id="ai">
<script src="{{ url_for('static_content', group='js', filename='sub-tabs.js') }}"></script>
{# TRANSLATORS: 'Usage' here means token consumption/cost stats for the AI provider, not a how-to guide #}
{% set _usage_label = pgettext('AI usage stats', 'Usage') %}
{% call stab_shell('ai-settings', [
{'id': 'overview', 'label': _('Overview'), 'icon': '✦'},
{'id': 'provider', 'label': _('Provider'), 'icon': '⚙'},
{'id': 'prompts', 'label': _('Prompts'), 'icon': '≡'},
{'id': 'behaviour', 'label': _('Behaviour'), 'icon': '⚑'},
{'id': 'usage', 'label': _usage_label, 'icon': '$'},
]) %}
{# ── Overview ──────────────────────────────────────────────────────────── #}
{% call stab_pane('overview') %}
<div class="stab-overview-hero">
<h3><span class="stab-overview-glyph"></span> {{ _('AI-powered change monitoring') }}</h3>
<p>{{ _('Connect an LLM to move from "something changed" to "only the thing you care about changed".') }}</p>
</div>
<div class="stab-overview-features">
<div class="stab-overview-feature">
<div class="stab-overview-icon"></div>
<div class="stab-overview-text">
<strong>{{ _('Intent filtering') }}</strong>
<p>{{ _('Each watch or tag can carry a plain-text intent — %(ex1)s or %(ex2)s. On every detected change the AI evaluates the diff against it and suppresses irrelevant noise.', ex1='<strong>"notify me only when the price drops"</strong>', ex2='<strong>"alert when the item goes out of stock"</strong>') | safe }}</p>
</div>
</div>
<div class="stab-overview-feature">
<div class="stab-overview-icon"></div>
<div class="stab-overview-text">
<strong>{{ _('AI Change Summary') }}</strong>
<p>{{ _('Instead of raw diffs, receive plain-language summaries in notifications — %(ex1)s or %(ex2)s. Set a global default prompt here, or override per watch or tag.', ex1='<strong>"Price dropped from $89 to $67"</strong>', ex2='<strong>"3 new items added to the listing"</strong>') | safe }}</p>
</div>
</div>
<div class="stab-overview-feature">
<div class="stab-overview-icon"></div>
<div class="stab-overview-text">
<strong>{{ _('Minimal cost') }}</strong>
<p>{{ _('The AI sees only a unified diff of what changed — never full page HTML. Low-cost models like %(gpt)s or %(gemini)s handle this well, typically fractions of a cent per check.',
gpt='<a href="https://platform.openai.com/api-keys" target="_blank" rel="noopener">gpt-4o-mini</a>',
gemini='<a href="https://aistudio.google.com/apikey" target="_blank" rel="noopener">Gemini Flash</a>') | safe }}</p>
</div>
</div>
</div>
<div class="stab-overview-cta">
{% if llm_config and llm_config.get('model') %}
<span class="stab-configured-badge">&#10003; {{ _('AI / LLM configured:') }} {{ llm_config.get('model') }}</span>
{% else %}
<button type="button" class="pure-button pure-button-primary" data-stab-goto="provider">
⚙ {{ _('Configure AI Provider') }} &rarr;
</button>
{% endif %}
</div>
{% endcall %}
{# ── Provider ──────────────────────────────────────────────────────────── #}
{% call stab_pane('provider') %}
<p class="stab-section-title">{{ _('AI Provider') }}</p>
{% if not llm_env_configured and not (llm_config and llm_config.get('model')) %}
<div class="stab-overview-disclaimer">
<div class="stab-disclaimer-icon"></div>
<div class="stab-disclaimer-body">
<strong>{{ _('Third-party data transfer — please read') }}</strong>
<p>{{ _('When AI features are active, change data from the websites you monitor — including page diffs and extracted text — is sent to an external AI provider of your choice.') }}</p>
<ul>
<li>{{ _('You are solely responsible for ensuring this complies with the terms of service of each website you monitor.') }}</li>
<li>{{ _("You are solely responsible for compliance with applicable data-protection laws (e.g. GDPR) regarding any personal data that may appear in monitored content.") }}</li>
<li>{{ _('API costs charged by your chosen provider are your own responsibility; this software has no visibility into or control over those charges.') }}</li>
<li>{{ _('AI / LLM models are known to hallucinate — producing plausible-sounding but factually incorrect or entirely fabricated output with apparent confidence — and by design may omit or truncate relevant data during summarisation. AI output must never be relied upon as complete or accurate. This software is provided as-is with no warranty of any kind.') }}</li>
<li>{{ _('By enabling AI features you personally indemnify and hold harmless the creator(s) and contributor(s) of this software from any claims, damages, or liability arising from this data transfer or your use of AI features.') }}</li>
</ul>
<label class="stab-disclaimer-check">
<input type="checkbox" id="llm-disclaimer-accept" onchange="llmDisclaimerToggle(this)">
<span>{{ _('I have read and understood the above. I accept full responsibility and indemnify the creator(s) of this software.') }}</span>
</label>
</div>
</div>
<div id="llm-provider-fields" style="display:none">
{% endif %}
{% if llm_env_configured %}
<div class="inline-warning" style="margin-bottom: 1em;">
<img class="inline-warning-icon" src="{{ url_for('static_content', group='images', filename='notice.svg') }}" alt="{{ _('Note') }}">
{{ _('AI / LLM is configured via environment variables (<code>LLM_MODEL=%(model)s</code>%(api_base)s). Remove the <code>LLM_MODEL</code> environment variable to configure via this form instead.',
model=llm_config.get('model', '')|e,
api_base=(', <code>LLM_API_BASE=' ~ (llm_config.get('api_base')|e) ~ '</code>') if llm_config.get('api_base') else '') | safe }}
</div>
{% else %}
<div class="pure-control-group">
<label for="llm-provider">{{ _('Provider') }}</label>
<select id="llm-provider" onchange="llmOnProviderChange(this.value)">
<option value="">— {{ _('select a provider') }} —</option>
<optgroup label="OpenAI">
<option value="openai">OpenAI</option>
</optgroup>
<optgroup label="Anthropic">
<option value="anthropic">Anthropic</option>
</optgroup>
<optgroup label="Google">
<option value="gemini">Google (Gemini)</option>
</optgroup>
<optgroup label="{{ _('Local / Self-hosted') }}">
<option value="ollama">Ollama (local)</option>
</optgroup>
<optgroup label="OpenRouter">
<option value="openrouter">OpenRouter (200+ models)</option>
</optgroup>
</select>
</div>
<div class="pure-control-group">
{{ render_field(form.llm.form.llm_api_key) }}
<span class="pure-form-message-inline" id="llm-key-hint"></span>
</div>
<div class="pure-control-group" id="llm-base-group" style="display:none">
{{ render_field(form.llm.form.llm_api_base) }}
<span class="pure-form-message-inline">{{ _('Only needed for Ollama or custom/self-hosted endpoints. Leave blank for cloud providers.') }}</span>
</div>
<div class="pure-control-group" id="llm-fetch-group" style="display:none">
<label></label>
<button type="button" id="llm-fetch-btn" class="pure-button button-xsmall" onclick="llmFetchModels()"
style="background:#27ae60;color:#fff;border:none;">
&#8635; {{ _('Load available models') }}
</button>
<span id="llm-fetch-status" style="margin-left:.6em;font-size:.85em;color:#888;"></span>
</div>
<div class="pure-control-group" id="llm-model-select-group" style="display:none">
<label for="llm-model-select">{{ _('Available models') }}</label>
<select id="llm-model-select" class="pure-input-1-2" onchange="llmOnModelPick(this.value)">
<option value="">— {{ _('choose a model') }} —</option>
</select>
</div>
<div class="pure-control-group">
{{ render_field(form.llm.form.llm_model,
readonly=True,
placeholder=_("Enter API key and click 'Load available models'")) }}
</div>
{% if llm_config and llm_config.get('model') %}
<div class="pure-control-group">
<label></label>
<span style="color:#4a7c59;font-weight:bold;">
&#10003; {{ _('AI / LLM configured:') }} {{ llm_config.get('model') }}
</span>
&nbsp;
<a href="{{ url_for('settings.llm.llm_clear') }}"
class="pure-button button-xsmall"
style="background:#c0392b;color:#fff;"
data-requires-confirm
data-confirm-type="danger"
data-confirm-title="{{ _('Remove AI / LLM configuration?') }}"
data-confirm-message="<p>{{ _('This will remove your saved AI provider, model, and API key.') }}</p>"
data-confirm-button="{{ _('Remove') }}"
data-cancel-button="{{ _('Cancel') }}">
&#10005; {{ _('Remove') }}
</a>
&nbsp;
<button type="button" id="llm-test-btn" class="pure-button button-xsmall" onclick="llmRunTest()"
style="background:#2980b9;color:#fff;border:none;">
&#9654; {{ _('Test connection') }}
</button>
</div>
<div id="llm-test-result" style="display:none; margin-top:0.6em; padding:0.6em 0.85em; border-radius:5px; font-size:0.88em; line-height:1.45;"></div>
{% endif %}
<p class="pure-form-message-inline" style="margin-top:0.5em;">
{{ _("Your API key is stored locally and sent only to your chosen provider. On each detected change, the watch's diff and extracted text are sent to the LLM — no full page HTML.") }}
</p>
<div class="pure-control-group" style="margin-top:1.2em; padding-top:1em; border-top:1px solid rgba(128,128,128,0.15);">
<label style="color:#888; font-size:0.85em;">{{ _('Cache') }}</label>
<a href="{{ url_for('settings.llm.llm_clear_summary_cache') }}"
class="pure-button button-xsmall"
style="background:#7f8c8d;color:#fff;"
data-requires-confirm
data-confirm-type="warning"
data-confirm-title="{{ _('Clear all summary cache?') }}"
data-confirm-message="<p>{{ _('This will remove all cached AI change summaries across all watches.') }}</p><p>{{ _('They will be regenerated on the next check.') }}</p>"
data-confirm-button="{{ _('Clear cache') }}"
data-cancel-button="{{ _('Cancel') }}">
&#10005; {{ _('Clear all summary cache') }}
</a>
<span class="pure-form-message-inline">{{ _('Removes all cached AI change summaries across all watches. They will be regenerated on the next check.') }}</span>
</div>
{% endif %}{# llm_env_configured #}
{% if not llm_env_configured and not (llm_config and llm_config.get('model')) %}
</div>{# llm-provider-fields #}
{% endif %}
{% endcall %}
{# ── Prompts ───────────────────────────────────────────────────────────── #}
{% call stab_pane('prompts') %}
<p class="stab-section-title">{{ _('Default AI Change Summary') }}</p>
<div class="pure-control-group">
{{ render_field(form.llm.form.llm_change_summary_default) }}
<span class="pure-form-message-inline">
{{ _('Used for all watches unless overridden by the watch or its tag/group.') }}
&nbsp;<a href="#" class="pure-button button-small" onclick="var t=document.getElementById('llm-llm_change_summary_default'); if(!t.value && t.placeholder) t.value=t.placeholder; return false;">{{ _('Modify default prompt') }}</a>
</span>
</div>
{% endcall %}
{# ── Behaviour ─────────────────────────────────────────────────────────── #}
{% call stab_pane('behaviour') %}
<p class="stab-section-title">{{ _('Behaviour') }}</p>
{% if llm_config and llm_config.get('model') %}
<div class="pure-control-group">
<label></label>
{{ form.llm.form.llm_override_diff_with_summary() }}
<label for="{{ form.llm.form.llm_override_diff_with_summary.id }}" style="display:inline; font-weight:normal;">
{{ form.llm.form.llm_override_diff_with_summary.label.text }}
</label>
<span class="pure-form-message-inline">
{{ _('When enabled, the <code>%(diff)s</code> notification token shows the AI summary instead of the raw diff. Use <code>%(raw_diff)s</code> to always get the original.',
diff='{{diff}}', raw_diff='{{raw_diff}}') | safe }}
</span>
</div>
<div class="pure-control-group">
<label></label>
{{ form.llm.form.llm_restock_use_fallback_extract() }}
<label for="{{ form.llm.form.llm_restock_use_fallback_extract.id }}" style="display:inline; font-weight:normal;">
{{ form.llm.form.llm_restock_use_fallback_extract.label.text }}
</label>
<span class="pure-form-message-inline">
{{ _('When enabled, the AI will be used as a last resort to extract price and stock status from product pages where no structured metadata (JSON-LD, microdata, OpenGraph) is found.') }}
</span>
</div>
<div class="pure-control-group">
<label for="{{ form.llm.form.llm_thinking_budget.id }}">{{ form.llm.form.llm_thinking_budget.label.text }}</label>
{{ form.llm.form.llm_thinking_budget() }}
<span class="pure-form-message-inline">{{ _('For Gemini 2.5+ models only. Thinking tokens improve reasoning quality but count against the output budget. Set to Off if summaries are being cut short.') }}</span>
</div>
<div class="pure-control-group">
<label for="{{ form.llm.form.llm_max_summary_tokens.id }}">{{ form.llm.form.llm_max_summary_tokens.label.text }}</label>
{{ form.llm.form.llm_max_summary_tokens() }}
<span class="pure-form-message-inline">{{ _('Upper limit on tokens the AI may use when writing a change summary. Higher values allow longer summaries but cost more.') }}</span>
</div>
<div class="pure-control-group">
<label>{{ form.llm.form.llm_budget_action.label.text }}</label>
<div>
{% for subfield in form.llm.form.llm_budget_action %}
<label class="pure-radio" style="display:block; font-weight:normal; margin-bottom:0.3em;">
{{ subfield() }} {{ subfield.label.text }}
</label>
{% endfor %}
</div>
</div>
{% else %}
<p class="pure-form-message-inline" style="margin-top:0.5em;">
{{ _('Configure a provider first to unlock behaviour settings.') }}
</p>
{% endif %}
{% endcall %}
{# ── Usage ─────────────────────────────────────────────────────────────── #}
{% call stab_pane('usage') %}
<p class="stab-section-title">{{ _('Token & Cost Tracking') }}</p>
{% if llm_stored.get('tokens_total_cumulative') or llm_stored.get('tokens_this_month') %}
<div class="llm-usage-grid">
<div class="llm-stat-card">
<div class="llm-stat-label">{{ _('This month') }}</div>
<div class="llm-stat-value">{{ '{:,}'.format(llm_stored.get('tokens_this_month', 0)) }}</div>
<div class="llm-stat-sub">{{ _('tokens') }}{% if llm_show_costs and llm_stored.get('cost_usd_this_month') %} &nbsp;·&nbsp;&thinsp;${{ '%.4f'|format(llm_stored.get('cost_usd_this_month', 0)) }}{% endif %}</div>
{% if llm_token_budget_month %}
{% set pct = (llm_stored.get('tokens_this_month', 0) / llm_token_budget_month * 100)|int %}
<div class="llm-stat-bar-wrap">
<div class="llm-stat-bar-fill {% if pct >= 100 %}bar-over{% elif pct >= 80 %}bar-warn{% else %}bar-ok{% endif %}"
style="width:{{ [pct, 100]|min }}%"></div>
</div>
<div class="llm-stat-budget-text">{{ _('%(percent)s%% of %(budget)s', percent=pct, budget='{:,}'.format(llm_token_budget_month)) }}</div>
{% endif %}
</div>
<div class="llm-stat-card">
<div class="llm-stat-label">{{ _('All-time total') }}</div>
<div class="llm-stat-value">{{ '{:,}'.format(llm_stored.get('tokens_total_cumulative', 0)) }}</div>
<div class="llm-stat-sub">{{ _('tokens') }}{% if llm_show_costs and llm_stored.get('cost_usd_total_cumulative') %} &nbsp;·&nbsp;&thinsp;${{ '%.4f'|format(llm_stored.get('cost_usd_total_cumulative', 0)) }}{% endif %}</div>
</div>
</div>
{% if llm_token_budget_month and llm_stored.get('tokens_this_month', 0) >= llm_token_budget_month %}
<p class="llm-budget-alert">&#9888; {{ _('Monthly token budget reached. AI summarisation is paused until next month.') }}</p>
{% endif %}
<div class="llm-usage-settings">
<div class="llm-usage-row">
<span class="llm-usage-row-label">{{ _('Token budget this period') }}</span>
<span class="llm-usage-row-value">
{% if llm_token_budget_month_env %}
<strong>{{ '{:,}'.format(llm_token_budget_month_env) }}</strong>
<span class="llm-env-badge">{{ _('(set via <code>LLM_TOKEN_BUDGET_MONTH</code>)') | safe }}</span>
<input type="hidden" name="llm-llm_token_budget_month" value="{{ llm_token_budget_month_env }}">
{% else %}
{{ form.llm.form.llm_token_budget_month(placeholder=_('0 = unlimited'), value=llm_stored.get('token_budget_month', 0) or '') }}
<span class="llm-field-hint">{{ _('tokens (0 = unlimited)') }}</span>
{% endif %}
</span>
</div>
{% if llm_stored.get('tokens_month_key') %}
<div class="llm-usage-row">
<span class="llm-usage-row-label">{{ _('Current billing period') }}</span>
<span class="llm-usage-row-value">{{ llm_stored.get('tokens_month_key') }}</span>
</div>
{% endif %}
<div class="llm-usage-row">
<span class="llm-usage-row-label">{{ _('Max input characters') }}</span>
<span class="llm-usage-row-value">
{% if llm_max_input_chars_env %}
{{ form.llm.form.llm_max_input_chars(value=llm_max_input_chars_env, readonly=True, style="width:10em;opacity:0.6;cursor:not-allowed;") }}
<span class="llm-env-badge">{{ _('(set via <code>LLM_MAX_INPUT_CHARS</code>)') | safe }}</span>
{% else %}
{{ form.llm.form.llm_max_input_chars(placeholder='100000', value=llm_stored.get('max_input_chars', 100000) or '') }}
<span class="llm-field-hint">{{ _('characters — currently enforcing: %(n)s', n='{:,}'.format(llm_effective_max_input_chars)) }}</span>
{% endif %}
</span>
</div>
</div>
{% else %}
<p class="llm-no-usage">{{ _('No AI usage recorded yet.') }}</p>
<div class="llm-usage-settings">
<div class="llm-usage-row">
<span class="llm-usage-row-label">{{ _('Token budget') }}</span>
<span class="llm-usage-row-value">
{% if llm_token_budget_month_env %}
<strong>{{ '{:,}'.format(llm_token_budget_month_env) }}</strong>
<span class="llm-env-badge">{{ _('(set via <code>LLM_TOKEN_BUDGET_MONTH</code>)') | safe }}</span>
<input type="hidden" name="llm-llm_token_budget_month" value="{{ llm_token_budget_month_env }}">
{% else %}
{{ form.llm.form.llm_token_budget_month(placeholder=_('0 = unlimited'), value=llm_stored.get('token_budget_month', 0) or '') }}
<span class="llm-field-hint">{{ _('tokens per month (0 = unlimited)') }}</span>
{% endif %}
</span>
</div>
<div class="llm-usage-row">
<span class="llm-usage-row-label">{{ _('Max input characters') }}</span>
<span class="llm-usage-row-value">
{% if llm_max_input_chars_env %}
{{ form.llm.form.llm_max_input_chars(value=llm_max_input_chars_env, readonly=True, style="width:10em;opacity:0.6;cursor:not-allowed;") }}
<span class="llm-env-badge">{{ _('(set via <code>LLM_MAX_INPUT_CHARS</code>)') | safe }}</span>
{% else %}
{{ form.llm.form.llm_max_input_chars(placeholder='100000', value=llm_stored.get('max_input_chars', 100000) or '') }}
<span class="llm-field-hint">{{ _('characters — currently enforcing: %(n)s', n='{:,}'.format(llm_effective_max_input_chars)) }}</span>
{% endif %}
</span>
</div>
</div>
{% endif %}
{% endcall %}
{% endcall %}{# stab_shell #}
</div>
<script>
(function () {
const LIVE_PROVIDERS = ['openai', 'anthropic', 'gemini', 'ollama', 'openrouter'];
const BASE_DEFAULTS = { ollama: 'http://localhost:11434' };
const KEY_HINTS = {
openai: '{{ _("platform.openai.com → API keys") }}',
anthropic: '{{ _("console.anthropic.com → API keys") }}',
gemini: '{{ _("aistudio.google.com → Get API key") }}',
ollama: '{{ _("No API key needed for local Ollama") }}',
openrouter: '{{ _("openrouter.ai → Keys") }}',
};
window.llmDisclaimerToggle = function (cb) {
const fields = document.getElementById('llm-provider-fields');
if (fields) fields.style.display = cb.checked ? '' : 'none';
};
window.llmOnProviderChange = function (provider) {
const fetchGroup = document.getElementById('llm-fetch-group');
const baseGroup = document.getElementById('llm-base-group');
const modelSelGrp = document.getElementById('llm-model-select-group');
const baseField = document.querySelector('[name="llm-llm_api_base"]');
const hint = document.getElementById('llm-key-hint');
fetchGroup.style.display = LIVE_PROVIDERS.includes(provider) ? '' : 'none';
const needsBase = provider === 'ollama';
baseGroup.style.display = needsBase ? '' : 'none';
if (BASE_DEFAULTS[provider] !== undefined) {
if (!baseField.value) baseField.value = BASE_DEFAULTS[provider];
}
hint.textContent = KEY_HINTS[provider] || '';
modelSelGrp.style.display = 'none';
document.getElementById('llm-fetch-status').textContent = '';
};
window.llmFetchModels = async function () {
const provider = document.getElementById('llm-provider').value;
const apiKey = document.querySelector('[name="llm-llm_api_key"]').value.trim();
const apiBase = document.querySelector('[name="llm-llm_api_base"]').value.trim();
const btn = document.getElementById('llm-fetch-btn');
const statusEl = document.getElementById('llm-fetch-status');
const selGroup = document.getElementById('llm-model-select-group');
const modelSel = document.getElementById('llm-model-select');
if (!provider) { statusEl.textContent = '{{ _("Select a provider first.") }}'; return; }
btn.disabled = true;
btn.textContent = '⏳ {{ _("Loading…") }}';
statusEl.textContent = '';
const params = new URLSearchParams({ provider });
if (apiKey) params.set('api_key', apiKey);
if (apiBase) params.set('api_base', apiBase);
try {
const resp = await fetch('{{ url_for("settings.llm.llm_get_models") }}?' + params);
const data = await resp.json();
if (data.error) {
statusEl.style.color = '#c0392b';
statusEl.textContent = '✗ ' + data.error;
selGroup.style.display = 'none';
return;
}
if (!data.models || data.models.length === 0) {
statusEl.style.color = '#e67e22';
statusEl.textContent = '{{ _("No models returned — check your API key.") }}';
selGroup.style.display = 'none';
return;
}
modelSel.innerHTML = '<option value="">{{ _("— choose a model —") }}</option>';
const currentModel = document.querySelector('[name="llm-llm_model"]').value.trim();
for (const m of data.models) {
const opt = document.createElement('option');
opt.value = m;
opt.textContent = m;
if (m === currentModel) opt.selected = true;
modelSel.appendChild(opt);
}
selGroup.style.display = '';
statusEl.style.color = '#27ae60';
statusEl.textContent = '✓ ' + data.models.length + ' {{ _("models available with your key") }}';
} catch (e) {
statusEl.style.color = '#c0392b';
statusEl.textContent = '✗ {{ _("Request failed") }}: ' + e.message;
} finally {
btn.disabled = false;
btn.textContent = '↻ {{ _("Load available models") }}';
}
};
window.llmOnModelPick = function (value) {
if (value) document.querySelector('[name="llm-llm_model"]').value = value;
};
window.llmRunTest = async function () {
const btn = document.getElementById('llm-test-btn');
const result = document.getElementById('llm-test-result');
if (!btn || !result) return;
btn.disabled = true;
btn.textContent = '⏳ {{ _("Testing…") }}';
result.style.display = 'none';
try {
const resp = await fetch('{{ url_for("settings.llm.llm_test") }}');
const data = await resp.json();
if (data.ok) {
result.style.cssText = 'display:block; background:rgba(39,174,96,0.08); border:1px solid rgba(39,174,96,0.3); border-radius:5px; padding:0.6em 0.85em; font-size:0.88em; line-height:1.45;';
result.innerHTML = '<span style="color:#27ae60; font-weight:600;">&#10003; {{ _("Connected") }}</span>'
+ (data.tokens ? ' <span style="opacity:0.55; font-size:0.9em;">(' + data.tokens + ' {{ _("tokens") }})</span>' : '')
+ '<br><em style="opacity:0.75;">' + data.text.replace(/</g,'&lt;') + '</em>';
} else {
result.style.cssText = 'display:block; background:rgba(192,57,43,0.07); border:1px solid rgba(192,57,43,0.25); border-radius:5px; padding:0.6em 0.85em; font-size:0.88em; line-height:1.45;';
result.innerHTML = '<span style="color:#c0392b; font-weight:600;">&#10007; {{ _("Failed") }}</span><br><code style="font-size:0.92em; word-break:break-all;">' + (data.error || '').replace(/</g,'&lt;') + '</code>';
}
} catch (e) {
result.style.cssText = 'display:block; background:rgba(192,57,43,0.07); border:1px solid rgba(192,57,43,0.25); border-radius:5px; padding:0.6em 0.85em; font-size:0.88em;';
result.innerHTML = '<span style="color:#c0392b; font-weight:600;">&#10007; {{ _("Request failed") }}</span>: ' + e.message.replace(/</g,'&lt;');
} finally {
btn.disabled = false;
btn.textContent = '&#9654; {{ _("Test connection") }}';
}
};
// On page load: detect and pre-select provider from current model
(function detectCurrentProvider() {
const modelField = document.querySelector('[name="llm-llm_model"]');
if (!modelField) return;
const m = modelField.value.trim();
if (!m) return;
let guessed = '';
if (m.startsWith('gemini/')) guessed = 'gemini';
else if (m.startsWith('ollama/')) guessed = 'ollama';
else if (m.startsWith('openrouter/')) guessed = 'openrouter';
else if (m.startsWith('claude')) guessed = 'anthropic';
else if (m.startsWith('gpt') || m.startsWith('o1') || m.startsWith('o3')) guessed = 'openai';
if (guessed) {
const sel = document.getElementById('llm-provider');
if (sel) { sel.value = guessed; llmOnProviderChange(guessed); }
}
})();
}());
</script>
@@ -5,7 +5,6 @@ from loguru import logger
from changedetectionio.store import ChangeDetectionStore
from changedetectionio.flask_app import login_optionally_required
from changedetectionio.llm.evaluator import get_llm_config as _get_llm_config
def construct_blueprint(datastore: ChangeDetectionStore):
@@ -184,7 +183,6 @@ def construct_blueprint(datastore: ChangeDetectionStore):
'form': form,
'watch': default,
'extra_notification_token_placeholder_info': datastore.get_unique_notification_token_placeholders_available(),
'llm_configured': bool(_get_llm_config(datastore)),
}
included_content = {}
-11
@@ -2,29 +2,18 @@ from wtforms import (
Form,
StringField,
SubmitField,
TextAreaField,
validators,
)
from wtforms.fields.simple import BooleanField
from flask_babel import lazy_gettext as _l
from changedetectionio.processors.restock_diff.forms import processor_settings_form as restock_settings_form
from changedetectionio.llm.ui_strings import LLM_INTENT_TAG_PLACEHOLDER
from changedetectionio.llm.evaluator import DEFAULT_CHANGE_SUMMARY_PROMPT
class group_restock_settings_form(restock_settings_form):
overrides_watch = BooleanField(_l('Activate for individual watches in this tag/group?'), default=False)
url_match_pattern = StringField(_l('Auto-apply to watches with URLs matching'),
render_kw={"placeholder": _l("e.g. *://example.com/* or github.com/myorg")})
tag_colour = StringField(_l('Tag colour'), default='')
llm_intent = TextAreaField('AI Change Intent',
validators=[validators.Optional(), validators.Length(max=2000)],
render_kw={"rows": "5", "placeholder": LLM_INTENT_TAG_PLACEHOLDER})
llm_change_summary = TextAreaField('AI Change Summary',
validators=[validators.Optional(), validators.Length(max=2000)],
render_kw={"rows": "5", "placeholder": DEFAULT_CHANGE_SUMMARY_PROMPT},
default='')
class SingleTag(Form):
@@ -27,9 +27,6 @@
<div class="tabs collapsable">
<ul>
<li class="tab" id=""><a href="#general">{{ _('General') }}</a></li>
{% if llm_configured %}
<li class="tab"><a href="#ai-llm">{{ _('AI / LLM') }}</a></li>
{% endif %}
<li class="tab"><a href="#filters-and-triggers">{{ _('Filters & Triggers') }}</a></li>
{% if extra_tab_content %}
<li class="tab"><a href="#extras_tab">{{ extra_tab_content }}</a></li>
@@ -91,15 +88,8 @@
</fieldset>
</div>
{% if llm_configured %}
<div class="tab-pane-inner" id="ai-llm">
{% include "edit/include_llm_intent.html" %}
</div>
{% endif %}
<div class="tab-pane-inner" id="filters-and-triggers">
<p>{{ _('These settings are <strong><i>added</i></strong> to any existing watch configurations.')|safe }}</p>
<p>{{ _('These settings are') }} <strong><i>{{ _('added') }}</i></strong> {{ _('to any existing watch configurations.') }}</p>
{% include "edit/include_subtract.html" %}
<div class="text-filtering border-fieldset">
<h3>{{ _('Text filtering') }}</h3>
@@ -130,7 +120,7 @@
{% if has_default_notification_urls %}
<div class="inline-warning">
<img class="inline-warning-icon" src="{{url_for('static_content', group='images', filename='notice.svg')}}" alt="{{ _('Look out!') }}" title="{{ _('Lookout!') }}" >
{{ _('There are <a href="%(url)s">system-wide notification URLs enabled</a>, this form will override notification settings for this watch only &dash; an empty Notification URL list here will still send notifications.', url=url_for('settings.settings_page') ~ '#notifications')|safe }}
{{ _('There are') }} <a href="{{ url_for('settings.settings_page')}}#notifications">{{ _('system-wide notification URLs enabled') }}</a>, {{ _('this form will override notification settings for this watch only') }} &dash; {{ _('an empty Notification URL list here will still send notifications.') }}
</div>
{% endif %}
<a href="#notifications" id="notification-setting-reset-to-default" class="pure-button button-xsmall" style="right: 20px; top: 20px; position: absolute; background-color: #5f42dd; border-radius: 4px; font-size: 70%; color: #fff">{{ _('Use system defaults') }}</a>
-235
View File
@@ -17,34 +17,6 @@ from changedetectionio.store import ChangeDetectionStore
from changedetectionio.auth_decorator import login_optionally_required
def _clean_litellm_error(exc) -> str:
"""Return a short, human-readable error string from a litellm exception.
litellm embeds the raw provider JSON in str(exc), which can be hundreds of
characters of verbose quota detail. We try to pull just the provider's
'message' field; failing that we return the first non-empty line with the
'litellm.XxxError:' class prefix stripped.
"""
import json, re
raw = str(exc)
# Try to parse the embedded JSON block (starts at first '{')
brace = raw.find('{')
if brace >= 0:
try:
payload = json.loads(raw[brace:])
msg = (payload.get('error') or {}).get('message') or ''
if msg:
# Take only the first sentence / line — provider messages can be long
return msg.split('\n')[0].split('. ')[0].strip() + '.'
except Exception:
pass
# Fallback: strip the "litellm.XxxError: litellm.XxxError: providerException - " prefix
first_line = raw.split('\n')[0]
first_line = re.sub(r'^(litellm\.\w+:\s*)+', '', first_line)
first_line = re.sub(r'\w+Exception\s*-\s*', '', first_line).strip()
return first_line or raw.split('\n')[0]
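For illustration, a minimal standalone sketch of the fallback path only (the sample error string below is invented, not a real provider response):
import re

raw = ("litellm.RateLimitError: litellm.RateLimitError: "
       "OpenAIException - Rate limit reached for gpt-4o-mini")  # invented sample

first_line = raw.split('\n')[0]
# Strip repeated "litellm.XxxError:" prefixes, then the "FooException - " marker
first_line = re.sub(r'^(litellm\.\w+:\s*)+', '', first_line)
first_line = re.sub(r'\w+Exception\s*-\s*', '', first_line).strip()
print(first_line)  # -> Rate limit reached for gpt-4o-mini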
def construct_blueprint(datastore: ChangeDetectionStore):
diff_blueprint = Blueprint('ui_diff', __name__, template_folder="../ui/templates")
@@ -156,172 +128,6 @@ def construct_blueprint(datastore: ChangeDetectionStore):
redirect=redirect
)
@diff_blueprint.route("/diff/<uuid_str:uuid>/llm-summary/prompt", methods=['GET'])
@login_optionally_required
def diff_llm_summary_prompt(uuid):
"""Return the effective LLM summary prompt for a watch immediately (no LLM call)."""
from flask import jsonify
watch = datastore.data['watching'].get(uuid)
if not watch:
return jsonify({'prompt': ''}), 404
try:
from changedetectionio.llm.evaluator import get_effective_summary_prompt
prompt = get_effective_summary_prompt(watch, datastore)
except Exception:
prompt = ''
return jsonify({'prompt': prompt})
@diff_blueprint.route("/diff/<uuid_str:uuid>/llm-summary", methods=['GET'])
@login_optionally_required
def diff_llm_summary(uuid):
"""
Generate (or return cached) an AI summary of the diff between two snapshots.
Called via AJAX from the diff page when no cached summary exists.
Returns JSON: {"summary": "...", "error": null} or {"summary": null, "error": "..."}
"""
import difflib
from flask import jsonify
try:
watch = datastore.data['watching'][uuid]
except KeyError:
return jsonify({'summary': None, 'error': 'Watch not found'}), 404
llm_cfg = datastore.data.get('settings', {}).get('application', {}).get('llm', {})
if not llm_cfg.get('model'):
return jsonify({'summary': None, 'error': 'LLM not configured'}), 400
dates = list(watch.history.keys())
if len(dates) < 2:
return jsonify({'summary': None, 'error': 'Not enough history'}), 400
best_from = watch.get_from_version_based_on_last_viewed
from_version = request.args.get('from_version', best_from if best_from else dates[-2])
to_version = request.args.get('to_version', dates[-1])
all_changes = request.args.get('all_changes', '0') == '1'
ignore_whitespace = request.args.get('ignore_whitespace', '0') == '1'
show_removed = request.args.get('removed', '1') == '1'
show_added = request.args.get('added', '1') == '1'
def _prep(text):
"""Optionally normalise whitespace on each line before diffing."""
if not ignore_whitespace:
return text.splitlines()
return [' '.join(line.split()) for line in text.splitlines()]
def _make_unified_diff(a_text, b_text):
lines = list(difflib.unified_diff(_prep(a_text), _prep(b_text), lineterm='', n=3))
return '\n'.join(lines[2:]) if len(lines) > 2 else '\n'.join(lines)
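For context, the same difflib idea in a self-contained sketch (the sample snapshots are invented); unified_diff emits two '---'/'+++' header lines first, which is why _make_unified_diff drops lines[0:2]:
import difflib

before = "price: $10\nin stock"   # invented sample snapshots
after = "price: $12\nin stock"

lines = list(difflib.unified_diff(before.splitlines(), after.splitlines(), lineterm='', n=3))
# hunk only: "@@ -1,2 +1,2 @@", "-price: $10", "+price: $12", " in stock"
print('\n'.join(lines[2:]))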
def _apply_filters(diff_text):
"""Strip +/- lines the user has hidden in the UI so the LLM matches what they see."""
if show_removed and show_added:
return diff_text
out = []
for line in diff_text.splitlines():
if line.startswith('-') and not show_removed:
continue
if line.startswith('+') and not show_added:
continue
out.append(line)
return '\n'.join(out)
try:
from_text = watch.get_history_snapshot(timestamp=from_version)
to_text = watch.get_history_snapshot(timestamp=to_version)
except Exception as e:
return jsonify({'summary': None, 'error': f'Could not read snapshots: {e}'}), 500
if all_changes:
# Build sequential diffs for every intermediate snapshot between from and to
# so the LLM sees the full timeline of changes, not just start→end
sorted_dates = sorted(dates)
try:
start_idx = sorted_dates.index(from_version)
end_idx = sorted_dates.index(to_version)
except ValueError:
start_idx, end_idx = 0, len(sorted_dates) - 1
steps = sorted_dates[start_idx:end_idx + 1]
segments = []
for i in range(len(steps) - 1):
a_ts, b_ts = steps[i], steps[i + 1]
try:
a_text = watch.get_history_snapshot(timestamp=a_ts) or ''
b_text = watch.get_history_snapshot(timestamp=b_ts) or ''
except Exception:
continue
seg = _apply_filters(_make_unified_diff(a_text, b_text))
if seg.strip():
segments.append(f'=== {a_ts} → {b_ts} ===\n{seg}')
diff_text = '\n\n'.join(segments) if segments else ''
else:
diff_text = _apply_filters(_make_unified_diff(from_text, to_text))
if not diff_text.strip():
return jsonify({'summary': None, 'error': 'No differences found'})
from changedetectionio.llm.evaluator import (
summarise_change, get_effective_summary_prompt,
is_global_token_budget_exceeded, get_global_token_budget_month,
LLMInputTooLargeError,
)
effective_prompt = get_effective_summary_prompt(watch, datastore)
from changedetectionio.llm.prompt_builder import build_change_summary_system_prompt
# Diff-pref flags + system prompt are part of the cache key so prompt changes bust the cache
_max_summary_tokens = datastore.data['settings']['application'].get('llm_max_summary_tokens', 3000)
cache_prompt = (
effective_prompt
+ f'\x00prefs:all={int(all_changes)},ws={int(ignore_whitespace)}'
f',rm={int(show_removed)},add={int(show_added)}'
+ f'\x00sys:{build_change_summary_system_prompt()}'
+ f'\x00max_tokens:{_max_summary_tokens}'
)
# Check cache — keyed by version pair + prompt hash (invalidates if prompt changes)
cached = watch.get_llm_diff_summary(from_version, to_version, prompt=cache_prompt)
if cached:
import time
datastore.set_last_viewed(uuid, int(time.time()))
return jsonify({'summary': cached, 'error': None, 'cached': True})
# Check global monthly token budget before making an LLM call
if is_global_token_budget_exceeded(datastore):
budget = get_global_token_budget_month(datastore)
llm_cfg = datastore.data.get('settings', {}).get('application', {}).get('llm', {})
used = llm_cfg.get('tokens_this_month', 0)
return jsonify({
'summary': None,
'error': gettext(
'Monthly AI token budget of %(budget)s tokens reached (%(used)s used). Resets next month.',
budget=f'{budget:,}',
used=f'{used:,}',
),
'budget_exceeded': True,
}), 429
try:
summary = summarise_change(watch, datastore, diff=diff_text, current_snapshot=to_text)
except LLMInputTooLargeError as e:
return jsonify({'summary': None, 'error': str(e)}), 400
except Exception as e:
logger.error(f"LLM summary generation failed for {uuid}: {e}")
return jsonify({'summary': None, 'error': _clean_litellm_error(e)}), 500
if not summary:
return jsonify({'summary': None, 'error': 'LLM returned empty summary'})
try:
watch.save_llm_diff_summary(summary, from_version, to_version, prompt=cache_prompt)
except Exception as e:
logger.warning(f"Could not cache llm summary for {uuid}: {e}")
import time
datastore.set_last_viewed(uuid, int(time.time()))
return jsonify({'summary': summary, 'error': None, 'cached': False})
@diff_blueprint.route("/diff/<uuid_str:uuid>/extract", methods=['GET'])
@login_optionally_required
def diff_history_page_extract_GET(uuid):
@@ -432,47 +238,6 @@ def construct_blueprint(datastore: ChangeDetectionStore):
redirect=redirect
)
@diff_blueprint.route("/diff/<uuid_str:uuid>/download-patch", methods=['GET'])
@login_optionally_required
def download_patch(uuid):
"""
Generate and return a unified diff patch file between two snapshots.
Query params: from_version, to_version (timestamp strings from watch history).
Returns the patch as a downloadable .patch file; this is the same content fed to the LLM.
"""
import difflib
try:
watch = datastore.data['watching'][uuid]
except KeyError:
return make_response('Watch not found', 404)
dates = list(watch.history.keys())
if len(dates) < 2:
return make_response('Not enough history', 400)
from_version = request.args.get('from_version', dates[-2])
to_version = request.args.get('to_version', dates[-1])
try:
from_text = watch.get_history_snapshot(timestamp=from_version)
to_text = watch.get_history_snapshot(timestamp=to_version)
except Exception as e:
return make_response(f'Could not read snapshots: {e}', 500)
diff_lines = list(difflib.unified_diff(
from_text.splitlines(keepends=True),
to_text.splitlines(keepends=True),
fromfile=f'snapshot-{from_version}',
tofile=f'snapshot-{to_version}',
lineterm='',
))
patch_text = ''.join(diff_lines) if diff_lines else '(no differences)\n'
response = make_response(patch_text)
response.headers['Content-Type'] = 'text/plain; charset=utf-8'
return response
@diff_blueprint.route("/diff/<uuid_str:uuid>/processor-asset/<string:asset_name>", methods=['GET'])
@login_optionally_required
def processor_asset(uuid, asset_name):
+1 -26
View File
@@ -10,32 +10,10 @@ from changedetectionio.store import ChangeDetectionStore
from changedetectionio.auth_decorator import login_optionally_required
from changedetectionio.time_handler import is_within_schedule
from changedetectionio import worker_pool
from changedetectionio.llm.evaluator import get_llm_config as _get_llm_config
def construct_blueprint(datastore: ChangeDetectionStore, update_q, queuedWatchMetaData):
edit_blueprint = Blueprint('ui_edit', __name__, template_folder="../ui/templates")
def _resolve_llm_group_overrides(watch, datastore) -> dict:
"""
For each LLM field (llm_intent, llm_change_summary): if the watch has no value
of its own but a linked tag does, return {'value': ..., 'group_name': ...} so the
edit template can render the textarea as readonly with a group-sourced placeholder.
Returns None for each field when the watch has its own value (editable).
"""
result = {'llm_intent': None, 'llm_change_summary': None}
for field in ('llm_intent', 'llm_change_summary'):
if (watch.get(field) or '').strip():
continue # watch has its own value — editable, no group override
for tag_uuid in watch.get('tags', []):
tag = datastore.data['settings']['application'].get('tags', {}).get(tag_uuid)
if tag and (tag.get(field) or '').strip():
result[field] = {
'value': tag.get(field).strip(),
'group_name': tag.get('title', 'tag'),
}
break
return result
def _watch_has_tag_options_set(watch):
"""This should be fixed better so that Tag is some proper Model, a tag is just a Watch also"""
for tag_uuid, tag in datastore.data['settings']['application'].get('tags', {}).items():
@@ -348,9 +326,6 @@ def construct_blueprint(datastore: ChangeDetectionStore, update_q, queuedWatchMe
for tag_uuid, tag in datastore.data['settings']['application']['tags'].items()
if tag_uuid not in watch.get('tags', []) and tag.matches_url(watch.get('url', ''))
},
# LLM intent context
'llm_configured': bool(_get_llm_config(datastore)),
'llm_group_overrides': _resolve_llm_group_overrides(watch, datastore),
}
included_content = None
@@ -78,12 +78,6 @@
<label for="replaced" class="pure-checkbox" id="label-diff-replaced">
<input type="checkbox" id="replaced" name="replaced" {% if diff_prefs.replaced %}checked=""{% endif %}> {{ _('Replaced') }}</label>
</span>
{%- if llm_configured -%}
<span>
<label for="llm_all_changes" class="pure-checkbox" id="label-diff-llm-all-changes" title="{{ _('Include all intermediate snapshots between the selected versions in the AI summary') }}">
<input type="checkbox" id="llm_all_changes" name="llm_all_changes" {% if diff_prefs.llm_all_changes %}checked=""{% endif %}> &#x2728; {{ _('AI: every change between versions') }}</label>
</span>
{%- endif -%}
</fieldset>
{%- if versions|length >= 2 -%}
<div id="keyboard-nav">
@@ -132,22 +126,9 @@
</div>
{%- endif -%}
{%- if password_enabled_and_share_is_off -%}
<div class="tip">{{ _('Pro-tip: You can enable <strong>"share access when password is enabled"</strong> from settings.')|safe }}
<div class="tip">{{ _('Pro-tip: You can enable') }} <strong>{{ _('"share access when password is enabled"') }}</strong> {{ _('from settings.') }}
</div>
{%- endif -%}
{%- if llm_configured -%}
<div id="llm-diff-summary-area"{% if not llm_diff_summary %} data-pending="1"{% endif %}>
<span class="llm-diff-summary-label">&#x2728; {{ _('AI Change Summary') }}</span>
{%- if llm_diff_summary -%}
<p class="llm-diff-summary-text">{{ llm_diff_summary }}</p>
{%- else -%}
<p class="llm-diff-summary-text llm-diff-summary-loading">{{ _('Generating summary…') }}</p>
{%- if llm_summary_prompt -%}
<p class="llm-diff-summary-prompt"><span class="llm-diff-summary-prompt-text">{{ llm_summary_prompt }}</span></p>
{%- endif -%}
{%- endif -%}
</div>
{%- endif -%}
<div id="text-diff-heading-area" style="user-select: none;">
<div class="snapshot-age"><span>{{ from_version|format_timestamp_timeago }}</span>
{%- if note -%}<span class="note"><strong>{{ note }}</strong></span>{%- endif -%}
@@ -157,7 +138,6 @@
<pre id="difference" style="border-left: 2px solid #ddd;">{{ content| diff_unescape_difference_spans }}</pre>
<div id="diff-visualiser-area-after" style="user-select: none;">
<strong>{{ _('Tip:') }}</strong> {{ _('Highlight text to share or add to ignore lists.') }}
&nbsp;&mdash;&nbsp;<a href="{{ url_for('ui.ui_diff.download_patch', uuid=uuid, from_version=from_version, to_version=to_version) }}" target="_blank" rel="noopener" style="font-size:0.85em;">{{ _('Download difference patch') }}</a>
</div>
</div>
@@ -184,58 +164,5 @@
</script>
<script src="{{url_for('static_content', group='js', filename='diff-render.js')}}"></script>
{% if llm_configured %}
<script>
$(function () {
var $area = $('#llm-diff-summary-area');
if (!$area.length || !$area.data('pending')) return;
var fromVersion = $('#diff-from-version').val();
var toVersion = $('#diff-to-version').val();
var summaryUrl = "{{ url_for('ui.ui_diff.diff_llm_summary', uuid=uuid) }}";
function showLlmError(msg) {
$area.find('.llm-diff-summary-text')
.removeClass('llm-diff-summary-loading')
.addClass('llm-error')
.text(msg);
$area.removeAttr('data-pending');
}
var llmAllChanges = $('#llm_all_changes').is(':checked') ? 1 : 0;
var ignoreWhitespace = $('#ignoreWhitespace').is(':checked') ? 1 : 0;
var showRemoved = $('#removed').is(':checked') ? 1 : 0;
var showAdded = $('#added').is(':checked') ? 1 : 0;
$.getJSON(summaryUrl, {
from_version: fromVersion,
to_version: toVersion,
all_changes: llmAllChanges,
ignore_whitespace: ignoreWhitespace,
removed: showRemoved,
added: showAdded,
})
.done(function (data) {
if (data.summary) {
$area.find('.llm-diff-summary-text')
.removeClass('llm-diff-summary-loading')
.text(data.summary);
$area.removeAttr('data-pending');
} else if (data.error) {
showLlmError(data.error);
} else {
$area.remove();
}
})
.fail(function (xhr) {
var resp = xhr.responseJSON;
if (resp && resp.error) {
showLlmError(resp.error);
} else {
showLlmError('AI summary request failed (HTTP ' + xhr.status + ').');
}
});
});
</script>
{% endif %}
{% endblock %}
@@ -1,6 +1,6 @@
{% extends 'base.html' %}
{% block content %}
{% from '_helpers.html' import render_field, render_checkbox_field, render_button, render_time_schedule_form, only_playwright_type_watches_warning, highlight_trigger_ignored_explainer, render_conditions_fieldlist_of_formfields_as_table, render_ternary_field %}
{% from '_helpers.html' import render_field, render_checkbox_field, render_button, render_time_schedule_form, playwright_warning, only_playwright_type_watches_warning, highlight_trigger_ignored_explainer, render_conditions_fieldlist_of_formfields_as_table, render_ternary_field %}
{% from '_common_fields.html' import render_common_settings_form %}
<script src="{{url_for('static_content', group='js', filename='tabs.js')}}" defer></script>
<script src="{{url_for('static_content', group='js', filename='vis.js')}}" defer></script>
@@ -57,7 +57,6 @@
{% if capabilities.supports_visual_selector %}
<li class="tab"><a id="visualselector-tab" href="#visualselector">{{ _('Visual Filter Selector') }}</a></li>
{% endif %}
<li class="tab"><a href="#ai-llm">{{ _('AI / LLM') }}</a></li>
{% if capabilities.supports_text_filters_and_triggers %}
<li class="tab" id="filters-and-triggers-tab"><a href="#filters-and-triggers">{{ _('Filters & Triggers') }}</a></li>
<li class="tab" id="conditions-tab"><a href="#conditions">{{ _('Conditions') }}</a></li>
@@ -142,8 +141,8 @@
<div class="pure-control-group inline-radio">
{{ render_field(form.fetch_backend, class="fetch-backend") }}
<span class="pure-form-message-inline">
<p>{{ _('Use the <strong>Basic</strong> method (default) where your watched sites don\'t need Javascript to render.')|safe }}</p>
<p>{{ _('The <strong>Chrome/Javascript</strong> method requires a network connection to a running WebDriver+Chrome server, set by the ENV var \'WEBDRIVER_URL\'.')|safe }}</p>
<p>{{ _('Use the') }} <strong>{{ _('Basic') }}</strong> {{ _('method (default) where your watched site doesn\'t need Javascript to render.') }}</p>
<p>{{ _('The') }} <strong>{{ _('Chrome/Javascript') }}</strong> {{ _('method requires a network connection to a running WebDriver+Chrome server, set by the ENV var \'WEBDRIVER_URL\'.') }} </p>
{{ _('Tip:') }} <a href="https://github.com/dgtlmoon/changedetection.io/wiki/Proxy-configuration#brightdata-proxy-support">{{ _('Connect using Bright Data and Oxylabs Proxies, find out more here.') }}</a>
</span>
</div>
@@ -164,7 +163,7 @@
<div class="pure-form-message-inline">
<strong>{{ _('If you\'re having trouble waiting for the page to be fully rendered (text missing etc), try increasing the \'wait\' time here.') }}</strong>
<br>
{{ _('This will wait <i>n</i> seconds before extracting the text.')|safe }}
{{ _('This will wait') }} <i>n</i> {{ _('seconds before extracting the text.') }}
{% if using_global_webdriver_wait %}
<br><strong>{{ _('Using the current global default settings') }}</strong>
{% endif %}
@@ -297,7 +296,7 @@ Math: {{ 1 + 1 }}") }}
{% if has_default_notification_urls %}
<div class="inline-warning">
<img class="inline-warning-icon" src="{{url_for('static_content', group='images', filename='notice.svg')}}" alt="{{ _('Look out!') }}" title="{{ _('Lookout!') }}" >
{{ _('There are <a href="%(url)s">system-wide notification URLs enabled</a>, this form will override notification settings for this watch only &dash; an empty Notification URL list here will still send notifications.', url=url_for('settings.settings_page') ~ '#notifications')|safe }}
{{ _('There are') }} <a href="{{ url_for('settings.settings_page')}}#notifications">{{ _('system-wide notification URLs enabled') }}</a>, {{ _('this form will override notification settings for this watch only') }} &dash; {{ _('an empty Notification URL list here will still send notifications.') }}
</div>
{% endif %}
<a href="#notifications" id="notification-setting-reset-to-default" class="pure-button button-xsmall" style="right: 20px; top: 20px; position: absolute; background-color: #5f42dd; border-radius: 4px; font-size: 70%; color: #fff">{{ _('Use system defaults') }}</a>
@@ -321,11 +320,7 @@ Math: {{ 1 + 1 }}") }}
</div>
</div>
</div>
<div class="tab-pane-inner" id="ai-llm">
{% include "edit/include_llm_intent.html" %}
</div>
<div class="tab-pane-inner" id="filters-and-triggers">
<span id="activate-text-preview" class="pure-button pure-button-primary button-xsmall">{{ _('Activate preview') }}</span>
<div>
<div id="edit-text-filter">
@@ -351,7 +346,7 @@ Math: {{ 1 + 1 }}") }}
{{ render_checkbox_field(form.filter_text_added) }}
{{ render_checkbox_field(form.filter_text_replaced) }}
{{ render_checkbox_field(form.filter_text_removed) }}
<span class="pure-form-message-inline">{{ _('Note: Depending on the length and similarity of the text on each line, the algorithm may consider an <strong>addition</strong> instead of <strong>replacement</strong> for example.')|safe }}</span><br>
<span class="pure-form-message-inline">{{ _('Note: Depending on the length and similarity of the text on each line, the algorithm may consider an') }} <strong>{{ _('addition') }}</strong> {{ _('instead of') }} <strong>{{ _('replacement') }}</strong> {{ _('for example.') }}</span><br>
<span class="pure-form-message-inline">&nbsp;{{ _('So it\'s always better to select') }} <strong>{{ _('Added') }}</strong>+<strong>{{ _('Replaced') }}</strong> {{ _('when you\'re interested in new content.') }}</span><br>
<span class="pure-form-message-inline">&nbsp;{{ _('When content is merely moved in a list, it will also trigger an') }} <strong>{{ _('addition') }}</strong>, {{ _('consider enabling') }} <code><strong>{{ _('Only trigger when unique lines appear') }}</strong></code></span>
</fieldset>
@@ -365,7 +360,7 @@ Math: {{ 1 + 1 }}") }}
</fieldset>
<fieldset class="pure-control-group">
{{ render_checkbox_field(form.sort_text_alphabetically) }}
<span class="pure-form-message-inline">{{ _('Helps reduce changes detected caused by sites shuffling lines around, combine with <i>check unique lines</i> below.')|safe }}</span>
<span class="pure-form-message-inline">{{ _('Helps reduce changes detected caused by sites shuffling lines around, combine with') }} <i>{{ _('check unique lines') }}</i> {{ _('below.') }}</span>
</fieldset>
<fieldset class="pure-control-group">
{{ render_checkbox_field(form.trim_text_whitespace) }}
@@ -379,20 +374,7 @@ Math: {{ 1 + 1 }}") }}
const preview_text_edit_filters_url="{{url_for('ui.ui_edit.watch_get_preview_rendered', uuid=uuid)}}";
</script>
<br>
{% if llm_configured %}
<div id="llm-preview-result" style="display:none; margin-bottom: 0.8em; padding: 0.8em 1.1em; border-radius: 4px; border-left: 4px solid #ccc; font-size: 0.9em;">
<div style="font-size:0.75em; text-transform:uppercase; letter-spacing:0.06em; opacity:0.55; margin-bottom:0.35em;">{{ _('AI Intent preview') }}</div>
<span class="llm-preview-verdict" style="font-weight: bold;"></span>
<div class="llm-preview-answer" style="margin-top: 0.5em; white-space: pre-wrap; line-height: 1.5; font-style: italic;"></div>
</div>
<style>
#llm-preview-result { transition: border-color 0.2s, background 0.2s; }
#llm-preview-result[data-found="1"] { border-color: #2ecc71; background: rgba(46,204,113,0.07); }
#llm-preview-result[data-found="1"] .llm-preview-verdict { color: #27ae60; }
#llm-preview-result[data-found="0"] { border-color: #aaa; background: rgba(0,0,0,0.03); }
#llm-preview-result[data-found="0"] .llm-preview-verdict { color: #888; }
</style>
{% endif %}
{#<div id="text-preview-controls"><span id="text-preview-refresh" class="pure-button button-xsmall">Refresh</span></div>#}
<div class="minitabs-wrapper">
<div class="minitabs-content">
<div id="text-preview-inner" class="monospace-preview">
@@ -502,16 +484,6 @@ Math: {{ 1 + 1 }}") }}
<td>{{ _('Server type reply') }}</td>
<td>{{ watch.get('remote_server_reply') }}</td>
</tr>
{% if settings_application.get('llm', {}).get('model') %}
<tr>
<td>{{ _('AI tokens (last check)') }}</td>
<td>{{ "{:,}".format(watch.get('llm_last_tokens_used') or 0) }}</td>
</tr>
<tr>
<td>{{ _('AI tokens (total)') }}</td>
<td>{{ "{:,}".format(watch.get('llm_tokens_used_cumulative') or 0) }}</td>
</tr>
{% endif %}
</tbody>
</table>
+1 -5
View File
@@ -26,11 +26,7 @@ def construct_blueprint(datastore: ChangeDetectionStore, update_q, queuedWatchMe
add_paused = request.form.get('edit_and_watch_submit_button') != None
from changedetectionio import processors
processor = request.form.get('processor', processors.get_default_processor())
llm_intent = request.form.get('llm_intent', '').strip()
extras = {'paused': add_paused, 'processor': processor}
if llm_intent:
extras['llm_intent'] = llm_intent
new_uuid = datastore.add_watch(url=url, tag=request.form.get('tags','').strip(), extras=extras)
new_uuid = datastore.add_watch(url=url, tag=request.form.get('tags','').strip(), extras={'paused': add_paused, 'processor': processor})
if new_uuid:
if add_paused:
@@ -82,11 +82,6 @@ def construct_blueprint(datastore: ChangeDetectionStore, update_q, queuedWatchMe
sorted_tags = sorted(datastore.data['settings']['application'].get('tags').items(), key=lambda x: x[1]['title'])
proxy_list = datastore.proxy_list
from changedetectionio.llm.evaluator import get_llm_config as _get_llm_config
from changedetectionio.llm.ui_strings import LLM_INTENT_WATCH_PLACEHOLDER
llm_configured = bool(_get_llm_config(datastore))
output = render_template(
"watch-overview.html",
active_tag=active_tag,
@@ -94,7 +89,7 @@ def construct_blueprint(datastore: ChangeDetectionStore, update_q, queuedWatchMe
app_rss_token=datastore.data['settings']['application'].get('rss_access_token'),
datastore=datastore,
errored_count=errored_count,
extra_classes=' '.join(filter(None, ['has-queue' if not update_q.empty() else '', 'llm-configured' if llm_configured else ''])),
extra_classes='has-queue' if not update_q.empty() else '',
form=form,
generate_tag_colors=processors.generate_processor_badge_colors,
wcag_text_color=processors.wcag_text_color,
@@ -114,9 +109,7 @@ def construct_blueprint(datastore: ChangeDetectionStore, update_q, queuedWatchMe
system_default_fetcher=datastore.data['settings']['application'].get('fetch_backend'),
tags=sorted_tags,
unread_changes_count=datastore.unread_changes_count,
watches=sorted_watches,
llm_configured=llm_configured,
llm_intent_watch_placeholder=LLM_INTENT_WATCH_PLACEHOLDER,
watches=sorted_watches
)
# Return freed template-building memory to the OS immediately.
@@ -113,16 +113,6 @@ html[data-darkmode="true"] .watch-tag-list.tag-{{ class_name }} {
{{ render_nolabel_field(form.watch_submit_button, title=_("Watch this URL!") ) }}
{{ render_nolabel_field(form.edit_and_watch_submit_button, title=_("Edit first then Watch") ) }}
</div>
{% if llm_configured %}
<div id="quick-watch-llm-intent" style="display:none; margin-top: 0.5em;">
<textarea name="llm_intent"
id="quick_watch_llm_intent"
rows="2"
class="pure-input-1"
placeholder="{{ _('AI — Notify when…') }} {{ llm_intent_watch_placeholder }}"
></textarea>
</div>
{% endif %}
<div id="watch-group-tag">
{{ render_field(form.tags, value=active_tag.title if active_tag_uuid else '', placeholder=_("Watch group / tag"), class="transparent-field") }}
</div>
@@ -136,14 +126,6 @@ html[data-darkmode="true"] .watch-tag-list.tag-{{ class_name }} {
</span>
</form>
</div>
{% if llm_configured %}
<script>
window.watchOverviewI18n = {
generatingSummary: {{ _('Generating summary…')|tojson }},
gotoHistory: {{ _('Goto full history')|tojson }}
};
</script>
{% endif %}
<div class="box">
<form class="pure-form" action="{{ url_for('ui.form_watch_list_checkbox_operations') }}" method="POST" id="watch-list-form">
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}" >
@@ -223,8 +205,8 @@ window.watchOverviewI18n = {
{%- if any_has_restock_price_processor -%}
<th>{{ _('Restock & Price') }}</th>
{%- endif -%}
<th><a class="{{ 'active '+link_order if sort_attribute == 'last_checked' else 'inactive' }}" href="{{url_for('watchlist.index', sort='last_checked', order=link_order, tag=active_tag_uuid)}}"><span class="hide-on-mobile">{{ _('Last Checked') }}</span><span class="hide-on-desktop">{{ _('Checked') }}</span> <span class='arrow {{link_order}}'></span></a></th>
<th><a class="{{ 'active '+link_order if sort_attribute == 'last_changed' else 'inactive' }}" href="{{url_for('watchlist.index', sort='last_changed', order=link_order, tag=active_tag_uuid)}}"><span class="hide-on-mobile">{{ _('Last Changed') }}</span><span class="hide-on-desktop">{{ _('Changed') }}</span> <span class='arrow {{link_order}}'></span></a></th>
<th><a class="{{ 'active '+link_order if sort_attribute == 'last_checked' else 'inactive' }}" href="{{url_for('watchlist.index', sort='last_checked', order=link_order, tag=active_tag_uuid)}}"><span class="hide-on-mobile">{{ _('Last') }}</span> {{ _('Checked') }} <span class='arrow {{link_order}}'></span></a></th>
<th><a class="{{ 'active '+link_order if sort_attribute == 'last_changed' else 'inactive' }}" href="{{url_for('watchlist.index', sort='last_changed', order=link_order, tag=active_tag_uuid)}}"><span class="hide-on-mobile">{{ _('Last') }}</span> {{ _('Changed') }} <span class='arrow {{link_order}}'></span></a></th>
<th class="empty-cell"></th>
</tr>
</thead>
@@ -371,7 +353,7 @@ window.watchOverviewI18n = {
<a href="" class="already-in-queue-button recheck pure-button pure-button-primary" style="display: none;" disabled="disabled">{{ _('Queued') }}</a>
<a href="{{ url_for('ui.form_watch_checknow', uuid=watch.uuid, tag=request.args.get('tag')) }}" data-op='recheck' class="ajax-op recheck pure-button pure-button-primary">{{ _('Recheck') }}</a>
<a href="{{ url_for('ui.ui_edit.edit_page', uuid=watch.uuid, tag=active_tag_uuid)}}#general" class="pure-button pure-button-primary">{{ _('Edit') }}</a>
<a href="{{ url_for('ui.ui_diff.diff_history_page', uuid=watch.uuid)}}" {{target_attr}} class="pure-button pure-button-primary history-link ai-history-btn" style="display: none;" data-uuid="{{ watch.uuid }}" data-summary-url="{{ url_for('ui.ui_diff.diff_llm_summary', uuid=watch.uuid) }}"><span class="btn-label-history">{{ _('History') }}</span><span class="btn-label-summary">&#x2728; {{ _('Summary') }}</span></a>
<a href="{{ url_for('ui.ui_diff.diff_history_page', uuid=watch.uuid)}}" {{target_attr}} class="pure-button pure-button-primary history-link" style="display: none;">{{ _('History') }}</a>
<a href="{{ url_for('ui.ui_preview.preview_page', uuid=watch.uuid)}}" {{target_attr}} class="pure-button pure-button-primary preview-link" style="display: none;">{{ _('Preview') }}</a>
</div>
</td>
-5
View File
@@ -981,11 +981,6 @@ def changedetection_app(config=None, datastore_o=None):
"queued_data": all_queued
})
if strtobool(os.getenv('HISTORY_SNAPSHOT_FILE_ALLOW_OUTSIDE_WATCH_DATADIR', 'False')):
logger.warning("SECURITY WARNING: HISTORY_SNAPSHOT_FILE_ALLOW_OUTSIDE_WATCH_DATADIR is enabled — "
"snapshot reads are NOT confined to the watch data directory. "
"This disables protection against path traversal via restored backups (GHSA-8757-69j2-hx56).")
# Start the async workers during app initialization
# Can be overridden by ENV or use the default settings
n_workers = int(os.getenv("FETCH_WORKERS", datastore.data['settings']['requests']['workers']))
+1 -132
View File
@@ -5,8 +5,6 @@ from wtforms.widgets.core import TimeInput
from flask_babel import lazy_gettext as _l, gettext
from changedetectionio.blueprint.rss import RSS_FORMAT_TYPES, RSS_TEMPLATE_TYPE_OPTIONS, RSS_TEMPLATE_HTML_DEFAULT
from changedetectionio.llm.ui_strings import LLM_INTENT_WATCH_PLACEHOLDER
from changedetectionio.llm.evaluator import DEFAULT_CHANGE_SUMMARY_PROMPT, LLM_DEFAULT_MAX_SUMMARY_TOKENS, LLM_DEFAULT_THINKING_BUDGET
from changedetectionio.conditions.form import ConditionFormRow
from changedetectionio.notification_service import NotificationContextData
from changedetectionio.strtobool import strtobool
@@ -18,7 +16,6 @@ from wtforms import (
Field,
FloatField,
IntegerField,
PasswordField,
RadioField,
SelectField,
StringField,
@@ -797,13 +794,6 @@ class processor_text_json_diff_form(commonSettingsForm):
time_between_check_use_default = BooleanField(_l('Use global settings for time between check and scheduler.'), default=False)
llm_intent = TextAreaField(_l('AI Change Intent'), validators=[validators.Optional(), validators.Length(max=2000)],
render_kw={"rows": "5", "placeholder": LLM_INTENT_WATCH_PLACEHOLDER})
llm_change_summary = TextAreaField(_l('AI Change Summary'), validators=[validators.Optional(), validators.Length(max=2000)],
render_kw={"rows": "5", "placeholder": DEFAULT_CHANGE_SUMMARY_PROMPT},
default='')
include_filters = StringListField(_l('CSS/JSONPath/JQ/XPath Filters'), [ValidateCSSJSONXPATHInput()], default='')
subtractive_selectors = StringListField(_l('Remove elements'), [ValidateCSSJSONXPATHInput(allow_json=False)])
@@ -999,7 +989,7 @@ class globalSettingsApplicationForm(commonSettingsForm):
api_access_token_enabled = BooleanField(_l('API access token security check enabled'), default=True, validators=[validators.Optional()])
base_url = StringField(_l('Notification base URL override'),
validators=[validators.Optional()],
render_kw={"placeholder": os.getenv('BASE_URL', _l('Not set'))}
render_kw={"placeholder": os.getenv('BASE_URL', 'Not set')}
)
empty_pages_are_a_change = BooleanField(_l('Treat empty pages as a change?'), default=False)
fetch_backend = RadioField(_l('Fetch Method'), default="html_requests", choices=content_fetchers.available_fetchers(), validators=[ValidateContentFetcherIsReady()])
@@ -1049,126 +1039,6 @@ class globalSettingsApplicationForm(commonSettingsForm):
ui = FormField(globalSettingsApplicationUIForm)
class globalSettingsLLMForm(Form):
"""
LLM / AI provider settings stored under datastore['settings']['application']['llm'].
Uses litellm under the hood, so the model string encodes both the provider and model.
No separate provider dropdown is needed; litellm routes automatically:
gpt-4o-mini → OpenAI
claude-3-5-haiku-20251001 → Anthropic
ollama/llama3.2 → Ollama (local)
openrouter/google/gemma-3-12b-it:free → OpenRouter (free tier)
gemini/gemini-2.0-flash → Google Gemini
azure/gpt-4o → Azure OpenAI
"""
llm_model = StringField(
_l('Model'),
validators=[validators.Optional()],
render_kw={"placeholder": "gpt-4o-mini", "style": "width: 24em;"},
)
llm_api_key = PasswordField(
_l('API Key'),
validators=[validators.Optional()],
render_kw={
"placeholder": _l('Leave blank to use LITELLM_API_KEY env var'),
"autocomplete": "off",
"style": "width: 24em;",
},
)
llm_api_base = StringField(
_l('API Base URL'),
validators=[validators.Optional()],
render_kw={
"placeholder": "http://localhost:11434 (Ollama / custom endpoints only)",
"style": "width: 24em;",
},
)
llm_change_summary_default = TextAreaField(
_l('Default AI Change Summary prompt'),
validators=[validators.Optional(), validators.Length(max=2000)],
render_kw={
"rows": "5",
"placeholder": DEFAULT_CHANGE_SUMMARY_PROMPT,
"style": "width: 100%; ",
},
default='',
)
llm_max_tokens_per_check = IntegerField(
_l('Max tokens per check'),
validators=[validators.Optional(), validators.NumberRange(min=0)],
default=0,
render_kw={
"placeholder": "0 = unlimited",
"style": "width: 8em;",
},
)
llm_max_tokens_cumulative = IntegerField(
_l('Max cumulative tokens (per watch)'),
validators=[validators.Optional(), validators.NumberRange(min=0)],
default=0,
render_kw={
"placeholder": "0 = unlimited",
"style": "width: 8em;",
},
)
llm_token_budget_month = IntegerField(
_l('Monthly token budget'),
validators=[validators.Optional(), validators.NumberRange(min=0)],
default=0,
render_kw={"style": "width: 10em;"},
)
llm_max_input_chars = IntegerField(
_l('Max input characters'),
validators=[validators.Optional(), validators.NumberRange(min=1)],
default=100000,
render_kw={
"placeholder": "100000",
"style": "width: 10em;",
},
)
llm_override_diff_with_summary = BooleanField(
_l('Replace {{diff}} notification token with AI summary'),
default=True,
)
llm_restock_use_fallback_extract = BooleanField(
_l('Use LLM as a fallback for extracting price and restock info'),
default=True,
)
llm_thinking_budget = SelectField(
_l('AI thinking budget (tokens)'),
choices=[
('0', _l('Off (no thinking)')),
('100', '100'),
('500', '500'),
('2000', '2000'),
],
default=str(LLM_DEFAULT_THINKING_BUDGET),
validators=[validators.Optional()],
)
llm_max_summary_tokens = SelectField(
_l('Max AI summary length (tokens)'),
choices=[
('500', '500'),
('1000', '1000'),
('3000', '3000'),
('5000', '5000'),
('10000', '10000'),
('15000', '15000'),
],
default=str(LLM_DEFAULT_MAX_SUMMARY_TOKENS),
validators=[validators.Optional()],
)
llm_budget_action = RadioField(
_l('When monthly token budget is reached'),
choices=[
('skip_llm', _l('Skip AI summarisation only (watch still checks)')),
('skip_check', _l('Skip the watch check entirely')),
],
default='skip_llm',
)
class globalSettingsForm(Form):
# Define these as FormFields/"sub forms", this way it matches the JSON storage
# datastore.data['settings']['application']..
@@ -1181,7 +1051,6 @@ class globalSettingsForm(Form):
requests = FormField(globalSettingsRequestForm)
application = FormField(globalSettingsApplicationForm)
llm = FormField(globalSettingsLLMForm)
save_button = SubmitField(_l('Save'), render_kw={"class": "pure-button pure-button-primary"})
+2 -2
View File
@@ -282,7 +282,7 @@ def xpath_filter(xpath_filter, html_content, append_pretty_line_formatting=False
try:
if is_xml:
# So that we can keep CDATA for cdata_in_document_to_text() to process
parser = etree.XMLParser(strip_cdata=False, resolve_entities=False, no_network=True)
parser = etree.XMLParser(strip_cdata=False)
# For XML/RSS content, use etree.fromstring to properly handle XML declarations
tree = etree.fromstring(html_content.encode('utf-8') if isinstance(html_content, str) else html_content, parser=parser)
else:
@@ -346,7 +346,7 @@ def xpath1_filter(xpath_filter, html_content, append_pretty_line_formatting=Fals
try:
if is_xml:
# So that we can keep CDATA for cdata_in_document_to_text() to process
parser = etree.XMLParser(strip_cdata=False, resolve_entities=False, no_network=True)
parser = etree.XMLParser(strip_cdata=False)
# For XML/RSS content, use etree.fromstring to properly handle XML declarations
tree = etree.fromstring(html_content.encode('utf-8') if isinstance(html_content, str) else html_content, parser=parser)
else:
-1
View File
@@ -1 +0,0 @@
# LLM intent-based change evaluation
-52
View File
@@ -1,52 +0,0 @@
"""
BM25-based relevance trimming for large snapshot text.
When a snapshot is large and no CSS pre-filter has narrowed it down,
we use BM25 to select the lines most relevant to the user's intent
before sending to the LLM. This keeps the context focused without
an arbitrary char truncation.
Pure functions: no side effects, fully testable.
"""
MAX_CONTEXT_CHARS = 15_000
def trim_to_relevant(text: str, query: str, max_chars: int = MAX_CONTEXT_CHARS) -> str:
"""
Return the lines from `text` most relevant to `query` up to `max_chars`.
If text fits within budget, return it unchanged.
Falls back to head-truncation if rank_bm25 is unavailable.
"""
if not text or not query:
return text or ''
if len(text) <= max_chars:
return text
lines = [l for l in text.splitlines() if l.strip()]
if not lines:
return text[:max_chars]
try:
from rank_bm25 import BM25Okapi
except ImportError:
# rank-bm25 not installed — fall back to simple head truncation
return text[:max_chars]
tokenized = [line.lower().split() for line in lines]
bm25 = BM25Okapi(tokenized)
scores = bm25.get_scores(query.lower().split())
ranked = sorted(enumerate(zip(scores, lines)), key=lambda x: x[1][0], reverse=True)
selected_indices, total = [], 0
for idx, (_score, line) in ranked:
if total + len(line) + 1 > max_chars:
break
selected_indices.append(idx)
total += len(line) + 1
# Re-order selected lines to preserve original document order
ordered = [lines[i] for i in sorted(selected_indices)]
return '\n'.join(ordered)
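A short usage sketch of the rank_bm25 API relied on above (assumes the optional rank-bm25 package is installed; the corpus and query are invented):
from rank_bm25 import BM25Okapi

lines = ["Price: $149 for the 55-inch model",
         "Shipping is free over $50",
         "Out of stock until next month"]
query = "is it in stock"

bm25 = BM25Okapi([l.lower().split() for l in lines])
scores = bm25.get_scores(query.lower().split())
# The stock-related line scores highest for this query
best = max(zip(scores, lines), key=lambda pair: pair[0])[1]
print(best)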
-115
View File
@@ -1,115 +0,0 @@
"""
Thin wrapper around litellm.completion.
Keeps litellm import isolated so the rest of the codebase doesn't depend on it directly,
and makes the call easy to mock in tests.
"""
import os
from loguru import logger
# Default output token cap for JSON-returning calls (intent eval, preview, setup).
# These return small JSON objects — 400 is enough for a verbose explanation while
# still preventing runaway cost. Change summaries pass their own max_tokens via
# _summary_max_tokens() and are NOT subject to this cap.
_MAX_COMPLETION_TOKENS = 400
DEFAULT_TIMEOUT = int(os.getenv('LLM_TIMEOUT', 60))
DEFAULT_RETRIES = 3
def completion(model: str, messages: list, api_key: str = None,
api_base: str = None, timeout: int = DEFAULT_TIMEOUT,
max_tokens: int = None, extra_body: dict = None) -> tuple[str, int, int, int]:
"""
Call the LLM and return (response_text, total_tokens, input_tokens, output_tokens).
Retries up to DEFAULT_RETRIES times on timeout or connection errors.
Token counts are 0 if the provider doesn't return usage data.
Raises on network/auth errors; callers handle this gracefully.
"""
try:
import litellm
except ImportError:
raise RuntimeError("litellm is not installed. Add it to requirements.txt.")
_timeout = timeout if timeout is not None else DEFAULT_TIMEOUT
kwargs = {
'model': model,
'messages': messages,
'timeout': _timeout,
'temperature': 0,
'max_tokens': max_tokens if max_tokens is not None else _MAX_COMPLETION_TOKENS,
}
if api_key:
kwargs['api_key'] = api_key
if api_base:
kwargs['api_base'] = api_base
if extra_body:
kwargs['extra_body'] = extra_body
_retryable = (litellm.Timeout, litellm.APIConnectionError)
for attempt in range(1, DEFAULT_RETRIES + 1):
try:
response = litellm.completion(**kwargs)
choice = response.choices[0]
message = choice.message
finish = getattr(choice, 'finish_reason', None)
text = message.content or ''
if not text:
# Some providers (e.g. Gemini) put text in message.parts instead of .content
parts = getattr(message, 'parts', None)
if parts:
text = ''.join(getattr(p, 'text', '') or '' for p in parts).strip()
logger.debug(f"LLM client: extracted text from message.parts ({len(parts)} parts) model={model!r}")
if finish == 'length':
logger.warning(
f"LLM client: response truncated (finish_reason='length') model={model!r} "
f"— increase max_tokens; got {len(text)} chars so far"
)
if not text:
logger.warning(
f"LLM client: empty content from model={model!r} "
f"finish_reason={finish!r} "
f"message={message!r}"
)
usage = getattr(response, 'usage', None)
input_tokens = int(getattr(usage, 'prompt_tokens', 0) or 0) if usage else 0
output_tokens = int(getattr(usage, 'completion_tokens', 0) or 0) if usage else 0
total_tokens = int(getattr(usage, 'total_tokens', 0) or 0) if usage else (input_tokens + output_tokens)
logger.debug(
f"LLM client: model={model!r} finish={finish!r} "
f"tokens={total_tokens} (in={input_tokens} out={output_tokens}) "
f"text_len={len(text)}"
)
return text, total_tokens, input_tokens, output_tokens
except _retryable as e:
# litellm formats its Timeout message with None when the provider doesn't
# propagate the timeout value — patch the exception args in-place so every
# caller that logs str(e) sees the real number.
_fix = f'after {_timeout} seconds'
try:
e.args = tuple(str(a).replace('after None seconds', _fix) for a in e.args)
except Exception:
pass
if attempt < DEFAULT_RETRIES:
logger.warning(
f"LLM call timed out/connection error (attempt {attempt}/{DEFAULT_RETRIES}), "
f"retrying — model={model!r} timeout={_timeout}s error={e}"
)
continue
logger.warning(
f"LLM call failed after {DEFAULT_RETRIES} attempts ({_timeout}s timeout) "
f"model={model!r} error={e}"
)
raise
except Exception as e:
logger.warning(f"LLM call failed: model={model!r} error={e}")
raise
-611
View File
@@ -1,611 +0,0 @@
"""
LLM evaluation orchestration.
Two public entry points:
- run_setup(watch, datastore) - one-time: decide if pre-filter needed
- evaluate_change(watch, datastore, diff, current_snapshot) - per-change evaluation
Intent resolution: watch.llm_intent → first tag with llm_intent → None (no evaluation)
Cache: each (intent, diff) pair is evaluated exactly once, result stored in watch.
Environment variable overrides (take priority over datastore settings):
LLM_MODEL - model string (e.g. "gpt-4o-mini", "ollama/llama3.2")
LLM_API_KEY - API key for cloud providers
LLM_API_BASE - base URL for local/custom endpoints (e.g. http://localhost:11434)
"""
import hashlib
import os
from datetime import datetime, timezone
from loguru import logger
from . import client as llm_client
from .prompt_builder import (
build_change_summary_prompt, build_change_summary_system_prompt,
build_eval_prompt, build_eval_system_prompt,
build_preview_prompt, build_preview_system_prompt,
build_setup_prompt, build_setup_system_prompt,
)
from .response_parser import parse_eval_response, parse_preview_response, parse_setup_response
_DEFAULT_MAX_INPUT_CHARS = 100_000
def _get_max_input_chars(datastore) -> int:
"""Max input characters to send to the LLM. Resolution: env var → datastore → 100,000.
Always returns at least 1; unlimited is not permitted.
"""
env_val = os.getenv('LLM_MAX_INPUT_CHARS', '').strip()
if env_val.isdigit() and int(env_val) > 0:
return int(env_val)
cfg = datastore.data.get('settings', {}).get('application', {}).get('llm') or {}
stored = cfg.get('max_input_chars')
if stored and int(stored) > 0:
return int(stored)
return _DEFAULT_MAX_INPUT_CHARS
class LLMInputTooLargeError(Exception):
pass
def _check_input_size(text: str, max_chars: int) -> None:
"""Raise LLMInputTooLargeError if text exceeds max_chars."""
if len(text) > max_chars:
raise LLMInputTooLargeError(
f"Change too large for AI summary ({len(text):,} chars, limit {max_chars:,})"
)
LLM_DEFAULT_THINKING_BUDGET = 0 # 0 = thinking disabled by default
def _thinking_extra_body(model: str, budget: int) -> dict | None:
"""Return litellm extra_body to control thinking for models that support it.
For Gemini 2.5+: passes thinkingConfig with the given budget (0 = disabled).
For all other models: returns None (no-op).
"""
if not model.startswith('gemini/gemini-2.5'):
return None
return {'generationConfig': {'thinkingConfig': {'thinkingBudget': budget}}}
def _cached_system(text: str, model: str = '') -> dict:
"""Wrap a system prompt, adding Anthropic prompt-caching headers only for Anthropic models.
Gemini and other providers have their own caching APIs that break when they receive
cache_control, so we only apply it where it's supported.
"""
is_anthropic = model.startswith('claude') or model.startswith('anthropic/')
if is_anthropic:
return {'role': 'system', 'content': [{'type': 'text', 'text': text, 'cache_control': {'type': 'ephemeral'}}]}
return {'role': 'system', 'content': text}
LLM_DEFAULT_MAX_SUMMARY_TOKENS = 3000
# Default prompt used when the user hasn't configured llm_change_summary
DEFAULT_CHANGE_SUMMARY_PROMPT = "Describe in plain English what changed — list what was added or removed as bullet points, including key details for each item. Be careful of content that merely moved around; mention that it moved but don't report it as added/removed. Be considerate of the style of content whose change you are summarising, and adjust your report accordingly. Do not quote non-English text verbatim; translate and summarise all content into English. Your entire response must be in English."
def _summary_max_tokens(diff: str, max_cap: int = LLM_DEFAULT_MAX_SUMMARY_TOKENS) -> int:
"""Scale completion tokens to diff size: floor 400, ~1 token per 4 chars, ceiling max_cap."""
return max(400, min(len(diff) // 4, max_cap))
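Worked examples of that scaling rule with the default max_cap of 3000:
assert max(400, min(800 // 4, 3000)) == 400       # tiny diff -> floor of 400
assert max(400, min(8_000 // 4, 3000)) == 2000    # mid-size diff -> ~1 token per 4 chars
assert max(400, min(40_000 // 4, 3000)) == 3000   # huge diff -> capped at max_cap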
# ---------------------------------------------------------------------------
# Intent resolution
# ---------------------------------------------------------------------------
def resolve_llm_field(watch, datastore, field: str) -> tuple[str, str]:
"""
Generic cascade resolver for any LLM per-watch field.
Returns (value, source) where source is 'watch' or tag title.
Returns ('', '') if not set anywhere.
"""
value = (watch.get(field) or '').strip()
if value:
return value, 'watch'
for tag_uuid in watch.get('tags', []):
tag = datastore.data['settings']['application'].get('tags', {}).get(tag_uuid)
if tag:
tag_value = (tag.get(field) or '').strip()
if tag_value:
return tag_value, tag.get('title', 'tag')
return '', ''
def resolve_intent(watch, datastore) -> tuple[str, str]:
"""
Return (intent, source) where source is 'watch' or tag title.
Returns ('', '') if no intent is configured anywhere.
"""
intent = (watch.get('llm_intent') or '').strip()
if intent:
return intent, 'watch'
for tag_uuid in watch.get('tags', []):
tag = datastore.data['settings']['application'].get('tags', {}).get(tag_uuid)
if tag:
tag_intent = (tag.get('llm_intent') or '').strip()
if tag_intent:
return tag_intent, tag.get('title', 'tag')
return '', ''
# ---------------------------------------------------------------------------
# LLM config helper
# ---------------------------------------------------------------------------
def get_llm_config(datastore) -> dict | None:
"""
Return LLM config dict or None if not configured.
Resolution order (first non-empty model wins):
1. Environment variables: LLM_MODEL, LLM_API_KEY, LLM_API_BASE
2. Datastore settings (set via UI)
"""
# 1. Environment variable override
env_model = os.getenv('LLM_MODEL', '').strip()
if env_model:
return {
'model': env_model,
'api_key': os.getenv('LLM_API_KEY', '').strip(),
'api_base': os.getenv('LLM_API_BASE', '').strip(),
}
# 2. Datastore settings
cfg = datastore.data['settings']['application'].get('llm') or {}
if not cfg.get('model'):
return None
return cfg
def llm_configured_via_env() -> bool:
"""True when LLM config comes from environment variables, not the UI."""
return bool(os.getenv('LLM_MODEL', '').strip())
# ---------------------------------------------------------------------------
# Global monthly token budget
# ---------------------------------------------------------------------------
def _get_month_key() -> str:
"""Returns 'YYYY-MM' for the current UTC month."""
return datetime.now(timezone.utc).strftime("%Y-%m")
def get_global_token_budget_month(datastore=None) -> int:
"""
Monthly token budget ceiling. Resolution order:
1. LLM_TOKEN_BUDGET_MONTH env var (takes priority, makes field read-only in UI)
2. datastore settings (set via UI)
Returns 0 (no limit) if not set anywhere.
"""
try:
env_val = int(os.getenv('LLM_TOKEN_BUDGET_MONTH', '0'))
if env_val > 0:
return env_val
except (ValueError, TypeError):
pass
if datastore is not None:
try:
stored = datastore.data['settings']['application'].get('llm') or {}
val = int(stored.get('token_budget_month') or 0)
return max(0, val)
except (ValueError, TypeError):
pass
return 0
def _estimate_cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
"""
Return estimated cost in USD using litellm's pricing database.
Returns 0.0 for unknown models (local/Ollama/custom endpoints).
Never raises; cost estimation is best-effort.
"""
if not model or (not input_tokens and not output_tokens):
return 0.0
try:
from litellm.cost_calculator import cost_per_token
prompt_cost, completion_cost = cost_per_token(
model=model,
prompt_tokens=input_tokens,
completion_tokens=output_tokens,
)
return float(prompt_cost + completion_cost)
except Exception:
return 0.0
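A standalone sketch of the litellm pricing lookup used above (requires litellm and a model present in its pricing table; the token counts are example values only):
from litellm.cost_calculator import cost_per_token

prompt_cost, completion_cost = cost_per_token(
    model="gpt-4o-mini",          # example model
    prompt_tokens=1_200,          # example usage numbers
    completion_tokens=300,
)
print(f"estimated cost: ${prompt_cost + completion_cost:.6f}")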
def accumulate_global_tokens(datastore, tokens: int,
input_tokens: int = 0, output_tokens: int = 0,
model: str = '') -> None:
"""
Add *tokens* to both the all-time and this-month global counters.
When input_tokens / output_tokens / model are supplied the estimated
USD cost is accumulated alongside the token counts.
Resets monthly counters automatically on month rollover.
These counters live at datastore.data['settings']['application']['llm']
and are intentionally read-only from the API/form side; they are only
ever written here, in a controlled way.
"""
if tokens <= 0:
return
current_month = _get_month_key()
cost = _estimate_cost_usd(model, input_tokens, output_tokens)
# Work on the live dict in-place (or create a stub if llm key is absent)
app_settings = datastore.data['settings']['application']
if 'llm' not in app_settings:
app_settings['llm'] = {}
llm_cfg = app_settings['llm']
# Month rollover: reset monthly counters
if llm_cfg.get('tokens_month_key') != current_month:
llm_cfg['tokens_this_month'] = 0
llm_cfg['cost_usd_this_month'] = 0.0
llm_cfg['tokens_month_key'] = current_month
llm_cfg['tokens_total_cumulative'] = (llm_cfg.get('tokens_total_cumulative') or 0) + tokens
llm_cfg['tokens_this_month'] = (llm_cfg.get('tokens_this_month') or 0) + tokens
llm_cfg['cost_usd_total_cumulative'] = (llm_cfg.get('cost_usd_total_cumulative') or 0.0) + cost
llm_cfg['cost_usd_this_month'] = (llm_cfg.get('cost_usd_this_month') or 0.0) + cost
# Persist immediately — token accounting must survive restarts
datastore.commit()
def is_global_token_budget_exceeded(datastore) -> bool:
"""
Returns True when a monthly token budget is configured (via
LLM_TOKEN_BUDGET_MONTH or the UI setting) and the current month's usage has reached
or exceeded that budget.
"""
budget = get_global_token_budget_month(datastore)
if not budget:
return False
llm_cfg = datastore.data['settings']['application'].get('llm') or {}
if llm_cfg.get('tokens_month_key') != _get_month_key():
# Counter hasn't been updated yet this month → zero usage
return False
return (llm_cfg.get('tokens_this_month') or 0) >= budget
# ---------------------------------------------------------------------------
# One-time setup: derive pre-filter
# ---------------------------------------------------------------------------
def _check_token_budget(watch, cfg, tokens_this_call: int = 0) -> bool:
"""
Check token budget limits. Returns True if within budget, False if exceeded.
Also accumulates tokens_this_call into watch['llm_tokens_used_cumulative'].
"""
if tokens_this_call > 0:
current = watch.get('llm_tokens_used_cumulative') or 0
watch['llm_tokens_used_cumulative'] = current + tokens_this_call
max_per_check = int(cfg.get('max_tokens_per_check') or 0)
max_cumulative = int(cfg.get('max_tokens_cumulative') or 0)
if max_per_check and tokens_this_call > max_per_check:
logger.warning(
f"LLM token budget exceeded for {watch.get('uuid')}: "
f"{tokens_this_call} tokens > per-check limit {max_per_check}"
)
return False
if max_cumulative:
total = watch.get('llm_tokens_used_cumulative') or 0
if total > max_cumulative:
logger.warning(
f"LLM cumulative token budget exceeded for {watch.get('uuid')}: "
f"{total} tokens > limit {max_cumulative}"
)
return False
return True
def run_setup(watch, datastore, snapshot_text: str) -> None:
"""
Ask the LLM whether a CSS pre-filter would improve precision for this intent.
Stores result in watch['llm_prefilter'] (str selector or None).
Called once when intent is first set, and again if pre-filter returns zero matches.
"""
cfg = get_llm_config(datastore)
if not cfg:
return
intent, _ = resolve_intent(watch, datastore)
if not intent:
return
url = watch.get('url', '')
system_prompt = build_setup_system_prompt()
user_prompt = build_setup_prompt(intent, snapshot_text, url=url)
try:
raw, tokens, *_ = llm_client.completion(
model=cfg['model'],
messages=[
_cached_system(system_prompt, model=cfg['model']),
{'role': 'user', 'content': user_prompt},
],
api_key=cfg.get('api_key'),
api_base=cfg.get('api_base'),
extra_body=_thinking_extra_body(cfg['model'], int(datastore.data['settings']['application'].get('llm_thinking_budget', LLM_DEFAULT_THINKING_BUDGET) or 0)),
)
_check_token_budget(watch, cfg, tokens)
accumulate_global_tokens(datastore, tokens, model=cfg['model'])
result = parse_setup_response(raw)
watch['llm_prefilter'] = result['selector']
logger.debug(f"LLM setup for {watch.get('uuid')}: prefilter={result['selector']} reason={result['reason']}")
except Exception as e:
logger.warning(f"LLM setup call failed for {watch.get('uuid')}: {e}")
watch['llm_prefilter'] = None
# ---------------------------------------------------------------------------
# AI Change Summary — human-readable description of what changed
# ---------------------------------------------------------------------------
def get_effective_summary_prompt(watch, datastore) -> str:
"""Return the prompt that summarise_change will use.
Cascade: watch → tag → global settings default → hardcoded fallback.
"""
prompt, _ = resolve_llm_field(watch, datastore, 'llm_change_summary')
if prompt:
return prompt
global_default = (
datastore.data.get('settings', {})
.get('application', {})
.get('llm_change_summary_default', '') or ''
).strip()
return global_default or DEFAULT_CHANGE_SUMMARY_PROMPT
def compute_summary_cache_key(diff_text: str, prompt: str) -> str:
"""Stable 16-char hex key for a (diff, prompt) pair. Stored alongside the summary file."""
h = hashlib.md5()
h.update(diff_text.encode('utf-8', errors='replace'))
h.update(b'\x00')
h.update(prompt.encode('utf-8', errors='replace'))
return h.hexdigest()[:16]
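To illustrate the cache-busting property, a standalone mirror of the same hashing with invented inputs: changing either the diff or the prompt yields a different 16-character key, which is what invalidates a stored summary.
import hashlib

def _key(diff_text: str, prompt: str) -> str:
    h = hashlib.md5()
    h.update(diff_text.encode('utf-8', errors='replace'))
    h.update(b'\x00')
    h.update(prompt.encode('utf-8', errors='replace'))
    return h.hexdigest()[:16]

k1 = _key("-old line\n+new line", "Summarise the change")
k2 = _key("-old line\n+new line", "Summarise the change in French")
assert k1 != k2 and len(k1) == 16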
def summarise_change(watch, datastore, diff: str, current_snapshot: str = '') -> str:
"""
Generate a plain-language summary of the change using the watch's
llm_change_summary prompt (cascades from tag if not set on watch).
Returns the summary string, or '' on failure.
The result replaces {{ diff }} in notifications so the user gets a
readable description instead of raw +/- diff lines.
"""
cfg = get_llm_config(datastore)
if not cfg:
return ''
if is_global_token_budget_exceeded(datastore):
budget = get_global_token_budget_month(datastore)
llm_cfg = datastore.data['settings']['application'].get('llm') or {}
used = llm_cfg.get('tokens_this_month', 0)
logger.warning(
f"LLM summarise_change skipped: monthly budget {budget:,} reached "
f"({used:,} used this month)"
)
return ''
custom_prompt = get_effective_summary_prompt(watch, datastore)
if not diff.strip():
return ''
_check_input_size(diff, _get_max_input_chars(datastore))
url = watch.get('url', '')
title = watch.get('page_title') or watch.get('title') or ''
system_prompt = build_change_summary_system_prompt()
user_prompt = build_change_summary_prompt(
diff=diff,
custom_prompt=custom_prompt,
current_snapshot=current_snapshot,
url=url,
title=title,
)
_thinking_budget = int(datastore.data['settings']['application'].get('llm_thinking_budget', LLM_DEFAULT_THINKING_BUDGET) or 0)
_extra_body = _thinking_extra_body(cfg['model'], _thinking_budget)
try:
_resp = llm_client.completion(
model=cfg['model'],
messages=[
_cached_system(system_prompt, model=cfg['model']),
{'role': 'user', 'content': user_prompt},
],
api_key=cfg.get('api_key'),
api_base=cfg.get('api_base'),
max_tokens=_summary_max_tokens(
diff,
max_cap=int(datastore.data['settings']['application'].get('llm_max_summary_tokens', LLM_DEFAULT_MAX_SUMMARY_TOKENS) or LLM_DEFAULT_MAX_SUMMARY_TOKENS),
),
extra_body=_extra_body,
)
raw, tokens = _resp[0], _resp[1]
input_tokens = _resp[2] if len(_resp) > 2 else 0
output_tokens = _resp[3] if len(_resp) > 3 else 0
summary = raw.strip()
_check_token_budget(watch, cfg, tokens)
watch['llm_last_tokens_used'] = tokens
watch['llm_tokens_used_cumulative'] = (watch.get('llm_tokens_used_cumulative') or 0) + tokens
accumulate_global_tokens(datastore, tokens,
input_tokens=input_tokens,
output_tokens=output_tokens,
model=cfg['model'])
logger.debug(
f"LLM change summary {watch.get('uuid')}: tokens={tokens} "
f"summary={summary[:80]}"
)
return summary
except Exception as e:
raise
# ---------------------------------------------------------------------------
# Live-preview extraction (current content, no diff)
# ---------------------------------------------------------------------------
def preview_extract(watch, datastore, content: str) -> dict | None:
"""
For the live-preview endpoint: extract relevant information from the
*current* page content according to the watch's intent.
Unlike evaluate_change (which compares a diff), this asks the LLM to
directly answer the intent against the current snapshot, giving the user
immediate feedback like "30 articles listed" or "Price: $149, 25% off".
Returns {'found': bool, 'answer': str} or None if LLM not configured / no intent.
"""
cfg = get_llm_config(datastore)
if not cfg:
return None
intent, _ = resolve_intent(watch, datastore)
if not intent or not content.strip():
return None
_check_input_size(content, _get_max_input_chars(datastore))
url = watch.get('url', '')
title = watch.get('page_title') or watch.get('title') or ''
system_prompt = build_preview_system_prompt()
user_prompt = build_preview_prompt(intent, content, url=url, title=title)
try:
raw, tokens, *_ = llm_client.completion(
model=cfg['model'],
messages=[
_cached_system(system_prompt, model=cfg['model']),
{'role': 'user', 'content': user_prompt},
],
api_key=cfg.get('api_key'),
api_base=cfg.get('api_base'),
extra_body=_thinking_extra_body(cfg['model'], int(datastore.data['settings']['application'].get('llm_thinking_budget', LLM_DEFAULT_THINKING_BUDGET) or 0)),
)
accumulate_global_tokens(datastore, tokens, model=cfg['model'])
result = parse_preview_response(raw)
logger.debug(
f"LLM preview {watch.get('uuid')}: found={result['found']} "
f"tokens={tokens} answer={result['answer'][:80]}"
)
return result
except Exception as e:
logger.warning(f"LLM preview extraction failed for {watch.get('uuid')}: {e}")
return None
# ---------------------------------------------------------------------------
# Per-change evaluation
# ---------------------------------------------------------------------------
def evaluate_change(watch, datastore, diff: str, current_snapshot: str = '') -> dict | None:
"""
Evaluate whether `diff` matches the watch's intent.
Returns {'important': bool, 'summary': str} or None if LLM not configured / no intent.
Results are cached by (intent, diff) hash, so each unique diff is evaluated exactly once.
"""
cfg = get_llm_config(datastore)
if not cfg:
return None
intent, source = resolve_intent(watch, datastore)
if not intent:
return None
if not diff or not diff.strip():
return {'important': False, 'summary': ''}
_check_input_size(diff, _get_max_input_chars(datastore))
# Cache lookup — evaluations are deterministic once cached
cache_key = hashlib.sha256(f"{intent}||{diff}".encode()).hexdigest()
cache = watch.get('llm_evaluation_cache') or {}
if cache_key in cache:
logger.debug(f"LLM cache hit for {watch.get('uuid')} key={cache_key[:8]}")
return cache[cache_key]
# Check global monthly budget before making the call
if is_global_token_budget_exceeded(datastore):
budget = get_global_token_budget_month(datastore)
llm_cfg = datastore.data['settings']['application'].get('llm') or {}
used = llm_cfg.get('tokens_this_month', 0)
logger.warning(
f"LLM evaluate_change skipped for {watch.get('uuid')}: monthly budget {budget:,} reached "
f"({used:,} used this month) — passing change through as important"
)
# Fail open: don't suppress notifications when budget is exhausted
return {'important': True, 'summary': ''}
# Check per-watch cumulative budget before making the call
if not _check_token_budget(watch, cfg):
# Already over budget — fail open (don't suppress notification)
return {'important': True, 'summary': ''}
url = watch.get('url', '')
title = watch.get('page_title') or watch.get('title') or ''
system_prompt = build_eval_system_prompt()
user_prompt = build_eval_prompt(
intent=intent,
diff=diff,
current_snapshot=current_snapshot,
url=url,
title=title,
)
try:
_resp = llm_client.completion(
model=cfg['model'],
messages=[
_cached_system(system_prompt, model=cfg['model']),
{'role': 'user', 'content': user_prompt},
],
api_key=cfg.get('api_key'),
api_base=cfg.get('api_base'),
extra_body=_thinking_extra_body(cfg['model'], int(datastore.data['settings']['application'].get('llm_thinking_budget', LLM_DEFAULT_THINKING_BUDGET) or 0)),
)
raw, tokens = _resp[0], _resp[1]
input_tokens = _resp[2] if len(_resp) > 2 else 0
output_tokens = _resp[3] if len(_resp) > 3 else 0
result = parse_eval_response(raw)
except Exception as e:
logger.warning(f"LLM evaluation failed for {watch.get('uuid')}: {e}")
# On failure: don't suppress the notification — pass through as important
watch['llm_last_tokens_used'] = 0
return {'important': True, 'summary': ''}
# Accumulate token usage: per-watch limit and global monthly budget
_check_token_budget(watch, cfg, tokens)
watch['llm_last_tokens_used'] = tokens
accumulate_global_tokens(datastore, tokens,
input_tokens=input_tokens,
output_tokens=output_tokens,
model=cfg['model'])
# Store in cache
if 'llm_evaluation_cache' not in watch or watch['llm_evaluation_cache'] is None:
watch['llm_evaluation_cache'] = {}
watch['llm_evaluation_cache'][cache_key] = result
logger.debug(
f"LLM eval {watch.get('uuid')} (intent from {source}): "
f"important={result['important']} tokens={tokens} summary={result['summary'][:80]}"
)
return result
-212
View File
@@ -1,212 +0,0 @@
"""
Prompt construction for LLM evaluation calls.
Pure functions: no side effects, fully testable.
"""
import re
from .bm25_trim import trim_to_relevant
_AGO_RE = re.compile(r'^\d+\s+\w+\s+ago$', re.IGNORECASE)
SNAPSHOT_CONTEXT_CHARS = 3_000 # current page state excerpt sent alongside the diff
def _annotate_moved_lines(diff_text: str) -> str:
"""
Pre-process a unified diff to mark lines that appear on both the + and - sides
as [MOVED] rather than genuinely added/removed. This prevents the LLM from
incorrectly classifying repositioned content as new or deleted.
Lines are compared after stripping leading +/- and whitespace so that
indentation changes don't prevent matching.
"""
lines = diff_text.splitlines()
added_texts = {l[1:].strip().lower() for l in lines if l.startswith('+') and l[1:].strip()}
removed_texts = {l[1:].strip().lower() for l in lines if l.startswith('-') and l[1:].strip()}
moved_texts = added_texts & removed_texts
if not moved_texts:
return diff_text
result = []
for line in lines:
if line.startswith(('+', '-')):
bare = line[1:].strip().lower()
if bare in moved_texts or _AGO_RE.match(line[1:].strip()):
result.append(f'~{line[1:]}') # ~ prefix = moved/reordered/trivial, skip
continue
result.append(line)
return '\n'.join(result)
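# Illustrative sketch (hypothetical helper): a headline that appears on both sides of the
# diff is re-prefixed with '~' so the summariser treats it as reordered, while genuinely
# new content keeps its '+'.
def _demo_annotate_moved_lines():
    diff = "\n".join([
        "-Breaking: storm warning issued",
        "+New article: election results",
        "+Breaking: storm warning issued",
    ])
    out = _annotate_moved_lines(diff).splitlines()
    assert out[0] == "~Breaking: storm warning issued"
    assert out[1] == "+New article: election results"
    assert out[2] == "~Breaking: storm warning issued"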
def build_eval_prompt(intent: str, diff: str, current_snapshot: str = '',
url: str = '', title: str = '') -> str:
"""
Build the user message for a diff evaluation call.
The system prompt is kept separate (see build_eval_system_prompt).
"""
parts = []
if url:
parts.append(f"URL: {url}")
if title:
parts.append(f"Page title: {title}")
parts.append(f"Intent: {intent}")
if current_snapshot:
excerpt = trim_to_relevant(current_snapshot, intent, max_chars=SNAPSHOT_CONTEXT_CHARS)
if excerpt:
parts.append(f"\nCurrent page state (relevant excerpt):\n{excerpt}")
parts.append(f"\nWhat changed (diff):\n{diff}")
return '\n'.join(parts)
def build_eval_system_prompt() -> str:
return (
"You are a precise, reliable website-change evaluator for a monitoring tool.\n"
"Your job is to read a unified diff and decide whether it matches a user's stated intent.\n"
"Accuracy is critical — false positives waste the user's attention; false negatives miss what they care about.\n\n"
"Diff format:\n"
"- Lines starting with '+' are newly ADDED content\n"
"- Lines starting with '-' are REMOVED content\n"
"- Lines starting with ' ' (space) are unchanged context\n\n"
"Respond with ONLY a JSON object — no markdown, no explanation outside it:\n"
'{"important": true/false, "summary": "one sentence describing the relevant change, or why it doesn\'t match"}\n\n'
"Rules:\n"
"- important=true ONLY when the diff clearly and specifically matches the intent — be strict\n"
"- Pay close attention to direction: an intent about price drops means removed (-) prices and added (+) lower prices\n"
"- Empty, trivial, or cosmetic diffs (timestamps, counters, whitespace, navigation) → important=false\n"
"- If the same text appears in both removed (-) and added (+) lines the content has likely just "
"shifted or been reordered. Treat pure reordering as important=false unless the intent "
"explicitly asks about order or position.\n"
"- Use OR logic when the intent lists multiple triggers — any one matching is sufficient\n"
"- When uncertain whether a change truly matches, prefer important=false and explain why in the summary\n"
"- Summary must be in the same language as the intent\n"
"- If important=false, the summary must clearly explain what changed and why it does not match"
)
def build_preview_prompt(intent: str, content: str, url: str = '', title: str = '') -> str:
"""
Build the user message for a live-preview extraction call.
Unlike build_eval_prompt (which analyses a diff), this asks the LLM to
extract relevant information from the *current* page content, giving the
user a direct answer to their intent so they can verify it makes sense
before saving.
"""
parts = []
if url:
parts.append(f"URL: {url}")
if title:
parts.append(f"Page title: {title}")
parts.append(f"Intent / question: {intent}")
parts.append(f"\nPage content:\n{content[:6_000]}")
return '\n'.join(parts)
def build_preview_system_prompt() -> str:
return (
"You are a precise, detail-oriented web page content analyst for a website monitoring tool.\n"
"Given the user's intent or question and the current page content, extract and directly answer "
"what the intent is looking for. Never guess or paraphrase — report only what the page actually contains.\n\n"
"Respond with ONLY a JSON object — no markdown, no explanation outside it:\n"
'{"found": true/false, "answer": "concise direct answer or extraction"}\n\n'
"Rules:\n"
"- found=true when the page clearly contains something relevant to the intent\n"
"- answer must directly address the intent with specific values where possible "
"(e.g. for 'current price?''$149.99', not 'a price is shown')\n"
"- answer must be in the same language as the intent\n"
"- Keep answer brief — one or two sentences maximum\n"
"- If found=false, briefly state what the page contains instead"
)
def build_change_summary_prompt(diff: str, custom_prompt: str,
current_snapshot: str = '', url: str = '', title: str = '') -> str:
"""
Build the user message for an AI Change Summary call.
The user supplies their own instructions (custom_prompt); this wraps them
with the diff and optional page context.
"""
parts = []
if url:
parts.append(f"URL: {url}")
if title:
parts.append(f"Page title: {title}")
parts.append(f"Instructions: {custom_prompt}")
if current_snapshot:
excerpt = trim_to_relevant(current_snapshot, custom_prompt, max_chars=2_000)
if excerpt:
parts.append(f"\nCurrent page (excerpt):\n{excerpt}")
parts.append(f"\nWhat changed (diff):\n{_annotate_moved_lines(diff)}")
return '\n'.join(parts)
def build_change_summary_system_prompt() -> str:
return (
"You are a meticulous, accurate summariser of website changes for monitoring notifications.\n"
"Your goal is to describe exactly what changed — never omit significant details, "
"never add information that isn't in the diff, and never speculate.\n\n"
"Rules for reading the diff:\n"
"- Lines starting with + are genuinely new content. List them specifically.\n"
"- Lines starting with - are genuinely removed content. List them specifically.\n"
"- Lines starting with ~ have been PRE-IDENTIFIED as moved/reordered or trivial — "
"the same text exists on both sides of the diff, or the line is a standalone timestamp. "
"Do NOT report ~ lines as added or removed. "
"If many ~ lines exist, note briefly that some content was reordered.\n"
"- Never list standalone timestamps like '3 hours ago', 'Yesterday', '2 minutes ago' "
"as added or removed items — they are not meaningful content changes.\n"
"For content-heavy pages (news, listings, feeds): quote or paraphrase the specific new "
"headlines, items, or entries that were added — do not collapse them into vague phrases "
"like 'new articles were added' or 'section was expanded'.\n"
"For large blocks of new text (full articles, documents, long paragraphs): briefly summarise "
"the substance in 1-2 sentences capturing the key point — do not just repeat the title.\n\n"
"Structure your response using these sections, in this fixed order — "
"omit a section entirely if there is nothing to report for it:\n"
" Added: ...\n"
" Changed: ...\n"
" Removed: ...\n"
"The Removed section MUST always be last. Never place removals before additions or changes.\n\n"
"Follow the user's formatting instructions exactly for structure, language, and length.\n"
"Respond with ONLY the summary text — no JSON, no markdown code fences, no preamble. "
"Just the description."
)
def build_setup_prompt(intent: str, snapshot_text: str, url: str = '') -> str:
"""
Build the prompt for the one-time setup call that decides whether
a CSS pre-filter would improve evaluation precision.
"""
excerpt = trim_to_relevant(snapshot_text, intent, max_chars=4_000)
parts = []
if url:
parts.append(f"URL: {url}")
parts.append(f"Intent: {intent}")
parts.append(f"\nPage content excerpt:\n{excerpt}")
return '\n'.join(parts)
def build_setup_system_prompt() -> str:
return (
"You help configure a website change monitor.\n"
"Given a monitoring intent and a sample of the page content, decide if a CSS pre-filter "
"would improve evaluation precision by scoping the content to a specific structural section.\n\n"
"Respond with ONLY a JSON object:\n"
'{"needs_prefilter": true/false, "selector": "CSS selector or null", "reason": "one sentence"}\n\n'
"Rules:\n"
"- Only recommend a pre-filter when the intent references a specific structural section "
"(e.g. 'footer', 'sidebar', 'nav', 'header', 'main', 'article') OR the page clearly "
"has high-noise sections unrelated to the intent\n"
"- Use ONLY semantic element selectors: footer, nav, header, main, article, aside, "
"or attribute-based like [id*='price'], [class*='sidebar'] — NEVER positional selectors "
"like div:nth-child(3) or //*[2]\n"
"- Default to needs_prefilter=false — most intents don't need one\n"
"- selector must be null when needs_prefilter=false"
)
-84
View File
@@ -1,84 +0,0 @@
"""
Parse and validate LLM JSON responses.
Pure functions: no side effects, fully testable.
LLMs occasionally return JSON wrapped in markdown fences or with trailing
text. This module handles those cases gracefully.
"""
import json
import re
# Positional selectors are fragile — reject them even if the LLM generates them
_POSITIONAL_SELECTOR_RE = re.compile(
r'nth-child|nth-of-type|:eq\(|\[\d+\]|\/\/\*\[\d',
re.IGNORECASE
)
def _extract_json(raw: str) -> str:
"""Strip markdown fences and extract the first JSON object."""
raw = raw.strip()
# Remove ```json ... ``` or ``` ... ``` fences
raw = re.sub(r'^```(?:json)?\s*', '', raw, flags=re.MULTILINE)
raw = re.sub(r'\s*```$', '', raw, flags=re.MULTILINE)
# Find the first { ... } block
match = re.search(r'\{.*\}', raw, re.DOTALL)
return match.group(0) if match else raw
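# Illustrative sketch (hypothetical helper): a fenced response and a response with trailing
# chatter both reduce to the bare JSON object before json.loads() sees them.
def _demo_extract_json():
    fenced = '```json\n{"important": true, "summary": "price dropped"}\n```'
    chatty = 'Sure! Here is the result: {"found": false, "answer": ""} Hope that helps.'
    assert _extract_json(fenced) == '{"important": true, "summary": "price dropped"}'
    assert _extract_json(chatty) == '{"found": false, "answer": ""}'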
def parse_eval_response(raw: str) -> dict:
"""
Parse a diff evaluation response.
Returns {'important': bool, 'summary': str}.
Falls back to important=False on any parse error.
"""
try:
data = json.loads(_extract_json(raw))
return {
'important': bool(data.get('important', False)),
'summary': str(data.get('summary', '')).strip(),
}
except (json.JSONDecodeError, AttributeError):
return {'important': False, 'summary': ''}
def parse_preview_response(raw: str) -> dict:
"""
Parse a live-preview extraction response.
Returns {'found': bool, 'answer': str}.
Falls back to found=False on any parse error.
"""
try:
data = json.loads(_extract_json(raw))
return {
'found': bool(data.get('found', False)),
'answer': str(data.get('answer', '')).strip(),
}
except (json.JSONDecodeError, AttributeError):
return {'found': False, 'answer': ''}
def parse_setup_response(raw: str) -> dict:
"""
Parse a setup/pre-filter decision response.
Returns {'needs_prefilter': bool, 'selector': str|None, 'reason': str}.
Rejects positional selectors even if the LLM generates them.
"""
try:
data = json.loads(_extract_json(raw))
needs = bool(data.get('needs_prefilter', False))
selector = data.get('selector') or None
# Sanitise: reject positional selectors
if selector and _POSITIONAL_SELECTOR_RE.search(selector):
selector = None
needs = False
return {
'needs_prefilter': needs,
'selector': selector if needs else None,
'reason': str(data.get('reason', '')).strip(),
}
except (json.JSONDecodeError, AttributeError):
return {'needs_prefilter': False, 'selector': None, 'reason': ''}
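# Illustrative sketch (hypothetical helper): a semantic selector passes through unchanged,
# while a positional nth-child selector is stripped and the recommendation downgraded.
def _demo_parse_setup_response():
    ok = parse_setup_response('{"needs_prefilter": true, "selector": "footer", "reason": "intent mentions the footer"}')
    assert ok['needs_prefilter'] and ok['selector'] == 'footer'
    bad = parse_setup_response('{"needs_prefilter": true, "selector": "div:nth-child(3)", "reason": "guess"}')
    assert not bad['needs_prefilter'] and bad['selector'] is None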
-18
View File
@@ -1,18 +0,0 @@
"""
Shared UI placeholder strings for LLM fields.
Used by WTForms field definitions in forms.py and blueprint/tags/form.py.
Templates use their own _()-translated variants but should stay in sync with these.
"""
# llm_intent field — placeholder text for per-watch context
LLM_INTENT_WATCH_PLACEHOLDER = (
"e.g. Alert me when the price drops below $300, or a new product is launched. "
"Ignore footer and navigation changes."
)
# llm_intent field — placeholder text for tag/group context
LLM_INTENT_TAG_PLACEHOLDER = (
"e.g. Flag price changes or new product launches across all watches in this group"
)
-3
View File
@@ -2,7 +2,6 @@ from os import getenv
from copy import deepcopy
from changedetectionio.blueprint.rss import RSS_FORMAT_TYPES, RSS_CONTENT_FORMAT_DEFAULT
from changedetectionio.llm.evaluator import LLM_DEFAULT_MAX_SUMMARY_TOKENS, LLM_DEFAULT_THINKING_BUDGET
from changedetectionio.model.Tags import TagsDict
from changedetectionio.notification import (
@@ -71,8 +70,6 @@ class model(dict):
'shared_diff_access': False,
'strip_ignored_lines': False,
'tags': None, # Initialized in __init__ with real datastore_path
'llm_thinking_budget': LLM_DEFAULT_THINKING_BUDGET,
'llm_max_summary_tokens': LLM_DEFAULT_MAX_SUMMARY_TOKENS,
'webdriver_delay': None , # Extra delay in seconds before extracting text
'ui': {
'use_page_title_in_list': True,
+15 -55
View File
@@ -465,21 +465,22 @@ class model(EntityPersistenceMixin, watch_base):
if ',' in i:
k, v = i.strip().split(',', 2)
# Always resolve history entries to within the watch's own data directory.
# Entries restored from backup could contain absolute or traversal paths —
# never trust them. Use realpath to also block symlink-based escapes.
safe_data_dir = os.path.realpath(self.data_dir)
snapshot_fname = os.path.basename(v.strip())
resolved_path = os.path.realpath(os.path.join(self.data_dir, snapshot_fname))
# The index history could contain a relative path, so we need to make the fullpath
# so that python can read it
# Cross-platform: check for any path separator (works on Windows and Unix)
if os.sep not in v and '/' not in v and '\\' not in v:
# Relative filename only, no path separators
v = os.path.join(self.data_dir, v)
else:
# It's possible that they moved the datadir on older versions
# So the snapshot exists but is in a different path
# Cross-platform: use os.path.basename instead of split('/')
snapshot_fname = os.path.basename(v)
proposed_new_path = os.path.join(self.data_dir, snapshot_fname)
if not os.path.exists(v) and os.path.exists(proposed_new_path):
v = proposed_new_path
if not resolved_path.startswith(safe_data_dir + os.sep) and resolved_path != safe_data_dir:
logger.warning(f"Skipping unsafe history entry for {self.get('uuid')}: {v!r}")
continue
if not os.path.exists(resolved_path):
continue
tmp_history[k] = resolved_path
tmp_history[k] = v
if len(tmp_history):
self.__newest_history_key = list(tmp_history.keys())[-1]
@@ -562,15 +563,6 @@ class model(EntityPersistenceMixin, watch_base):
if not filepath:
filepath = self.history[timestamp]
# Confine every read to the watch's own data directory — defence in depth
# against any path that bypasses the history parser (e.g. direct filepath= callers).
# Set HISTORY_SNAPSHOT_FILE_ALLOW_OUTSIDE_WATCH_DATADIR=true to disable (not recommended).
if self.data_dir and not strtobool(os.getenv('HISTORY_SNAPSHOT_FILE_ALLOW_OUTSIDE_WATCH_DATADIR', 'False')):
safe_data_dir = os.path.realpath(self.data_dir)
resolved = os.path.realpath(filepath)
if not (resolved.startswith(safe_data_dir + os.sep) or resolved == safe_data_dir):
raise PermissionError(f"Snapshot path {filepath!r} is outside the watch data directory")
# Check if binary file (image, PDF, etc.)
# Binary files are NEVER saved with .br compression, only text files are
binary_extensions = ('.png', '.jpg', '.jpeg', '.gif', '.webp', '.pdf', '.bin', '.jfif')
@@ -1009,38 +1001,6 @@ class model(EntityPersistenceMixin, watch_base):
return False
@staticmethod
def _llm_summary_prompt_hash(prompt: str) -> str:
"""8-char hex hash of the prompt — used to detect when the prompt changes."""
import hashlib
return hashlib.md5(prompt.encode('utf-8', errors='replace')).hexdigest()[:8]
def get_llm_diff_summary(self, from_version, to_version, prompt: str = '') -> str:
"""Return the cached AI Change Summary for this from→to + prompt combination, or ''.
The prompt hash is embedded in the filename so that a changed prompt
automatically produces a cache miss and triggers regeneration.
"""
prompt_hash = self._llm_summary_prompt_hash(prompt)
fname = os.path.join(self.data_dir, f'change-summary-{from_version}-to-{to_version}-{prompt_hash}.txt')
if not os.path.isfile(fname):
return ''
with open(fname, 'r', encoding='utf-8') as f:
return f.read().strip()
def save_llm_diff_summary(self, summary: str, from_version, to_version, prompt: str = ''):
"""Persist the AI Change Summary keyed by version pair + prompt hash."""
self.ensure_data_dir_exists()
prompt_hash = self._llm_summary_prompt_hash(prompt)
fname = os.path.join(self.data_dir, f'change-summary-{from_version}-to-{to_version}-{prompt_hash}.txt')
tmp = fname + '.tmp'
try:
with open(tmp, 'w', encoding='utf-8') as f:
f.write(summary)
os.replace(tmp, fname)
except OSError as e:
logger.warning(f"Could not write LLM summary cache {fname}: {e}")
def pause(self):
self['paused'] = True
-13
View File
@@ -188,11 +188,6 @@ class watch_base(dict):
'date_created': None,
'extract_lines_containing': [], # Keep only lines containing these substrings (plain text, case-insensitive)
'extract_text': [], # Extract text by regex after filters
# LLM intent-based evaluation
'llm_intent': '', # Plain-English description of what the user cares about (change filter)
'llm_change_summary': '', # Prompt for AI Change Summary — replaces {{ diff }} in notifications
'llm_prefilter': None, # CSS selector derived at setup time (semantic only, e.g. "footer")
'llm_evaluation_cache': {}, # {sha256(intent+diff): {important, summary}} - evaluated once, cached
'fetch_backend': 'system', # plaintext, playwright etc
'fetch_time': 0.0,
'filter_failure_notification_send': strtobool(os.getenv('FILTER_FAILURE_NOTIFICATION_SEND_DEFAULT', 'True')),
@@ -346,14 +341,6 @@ class watch_base(dict):
'last_filter_config_hash', # Set by text_json_diff processor, internal skip-cache
'restock', # Set by restock processor
'last_viewed', # Set by mark_all_viewed endpoint
# LLM runtime fields written back by worker/evaluator
'_llm_result',
'_llm_intent',
'_llm_change_summary',
'llm_prefilter',
'llm_evaluation_cache',
'llm_last_tokens_used',
'llm_tokens_used_cumulative',
}
# Only mark as edited if this is a user-writable field
@@ -48,9 +48,8 @@ To verify this works:
"""
import json
import os
import re
from urllib.parse import unquote_plus, urlparse
from urllib.parse import unquote_plus
import requests
from apprise import plugins
@@ -60,8 +59,6 @@ from apprise.utils.logic import dict_full_update
from loguru import logger
from requests.structures import CaseInsensitiveDict
from changedetectionio.validate_url import is_private_hostname
SUPPORTED_HTTP_METHODS = {"get", "post", "put", "delete", "patch", "head"}
@@ -198,15 +195,6 @@ def apprise_http_custom_handler(
url = re.sub(rf"^{schema}", "https" if schema.endswith("s") else "http", parsed_url.get("url"))
# SSRF protection — block private/loopback addresses unless explicitly allowed
if not os.getenv('ALLOW_IANA_RESTRICTED_ADDRESSES', '').lower() in ('true', '1', 'yes'):
hostname = urlparse(url).hostname or ''
if hostname and is_private_hostname(hostname):
raise ValueError(
f"Notification target '{hostname}' is a private/reserved address. "
f"Set ALLOW_IANA_RESTRICTED_ADDRESSES=true to allow."
)
response = requests.request(
method=method,
url=url,
-15
View File
@@ -364,21 +364,6 @@ def process_notification(n_object: NotificationContextData, datastore):
)
)
# {{ raw_diff }} always holds the actual diff regardless of AI Change Summary
n_object['raw_diff'] = n_object.get('diff', '')
# AI Change Summary: optionally replace {{ diff }} with the AI summary
_llm_change_summary = (n_object.get('_llm_change_summary') or '').strip()
_override_diff = datastore.data['settings']['application'].get('llm_override_diff_with_summary', True)
if _llm_change_summary and _override_diff:
n_object['diff'] = _llm_change_summary
# Lazily populate llm_summary / llm_intent if used in notification template
scan_text = n_object.get('notification_body', '') + n_object.get('notification_title', '')
if 'llm_summary' in scan_text or 'llm_intent' in scan_text or 'raw_diff' in scan_text:
n_object['llm_summary'] = _llm_change_summary or (n_object.get('_llm_result') or {}).get('summary', '')
n_object['llm_intent'] = n_object.get('_llm_intent', '')
with (apprise.LogCapture(level=apprise.logging.DEBUG) as logs):
for url in n_object['notification_urls']:
@@ -195,8 +195,6 @@ class NotificationContextData(dict):
'timestamp_from': None,
'timestamp_to': None,
'triggered_text': None,
'llm_summary': None, # AI plain-English summary of what changed (requires AI intent to be configured)
'llm_intent': None, # The intent that was evaluated (watch-level or inherited from tag)
'uuid': 'XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX', # Converted to 'watch_uuid' in create_notification_parameters
'watch_mime_type': None,
'watch_tag': None,
@@ -413,11 +411,6 @@ class NotificationService:
n_object['notification_body'] = _check_cascading_vars(self.datastore,'notification_body', watch)
n_object['notification_format'] = _check_cascading_vars(self.datastore,'notification_format', watch)
# Attach LLM results so notification tokens render correctly
n_object['_llm_result'] = watch.get('_llm_result')
n_object['_llm_intent'] = watch.get('_llm_intent', '')
n_object['_llm_change_summary'] = watch.get('_llm_change_summary', '')
# (Individual watch) Only prepare to notify if the rules above matched
queued = False
if n_object and n_object.get('notification_urls'):
+15 -39
View File
@@ -61,7 +61,7 @@ class ChangeDetectionSpec:
pass
@hookspec
def get_itemprop_availability_override(self, content, fetcher_name, fetcher_instance, url, llm_intent=None):
def get_itemprop_availability_override(self, content, fetcher_name, fetcher_instance, url):
"""Provide custom implementation of get_itemprop_availability for a specific fetcher.
This hook allows plugins to provide their own product availability detection
@@ -73,7 +73,6 @@ class ChangeDetectionSpec:
fetcher_name: The name of the fetcher being used (e.g., 'html_js_zyte')
fetcher_instance: The fetcher instance that generated the content
url: The URL being watched/checked
llm_intent: Optional user-supplied intent string (e.g. "alert when price drops below $300")
Returns:
dict or None: Dictionary with availability data:
@@ -242,27 +241,24 @@ plugin_manager.add_hookspecs(ChangeDetectionSpec)
# Load plugins from subdirectories
def load_plugins_from_directories():
# List of (python_package_prefix, filesystem_path) pairs to scan for plugins.
# NOTE: processors/restock_diff/plugins is intentionally excluded here — those
# plugins are registered via register_builtin_restock_plugins() to avoid the
# circular import: restock_diff/__init__.py → model.Watch → content_fetchers → pluggy_interface.
plugin_dirs = [
(
'changedetectionio.conditions.plugins',
os.path.join(os.path.dirname(__file__), 'conditions', 'plugins'),
),
]
for module_prefix, dir_path in plugin_dirs:
# Dictionary of directories to scan for plugins
plugin_dirs = {
'conditions': os.path.join(os.path.dirname(__file__), 'conditions', 'plugins'),
# Add more plugin directories here as needed
}
# Note: Removed the direct import of example_word_count_plugin as it's now in the conditions/plugins directory
for dir_name, dir_path in plugin_dirs.items():
if not os.path.exists(dir_path):
continue
# Get all Python files (excluding __init__.py)
for filename in os.listdir(dir_path):
if filename.endswith(".py") and filename != "__init__.py":
module_name = filename[:-3] # Remove .py extension
module_path = f"{module_prefix}.{module_name}"
module_path = f"changedetectionio.{dir_name}.plugins.{module_name}"
try:
module = importlib.import_module(module_path)
# Register the plugin with pluggy
@@ -314,24 +310,6 @@ def register_builtin_fetchers():
if hasattr(webdriver_selenium, 'webdriver_selenium_plugin'):
plugin_manager.register(webdriver_selenium.webdriver_selenium_plugin, 'builtin_webdriver_selenium')
def register_builtin_restock_plugins():
"""Register built-in restock processor plugins after all imports are complete.
Called from content_fetchers/__init__.py alongside register_builtin_fetchers()
to avoid the circular import that occurs when loading via load_plugins_from_directories()
(restock_diff/__init__.py → model.Watch → content_fetchers → pluggy_interface).
"""
import importlib
module_path = 'changedetectionio.processors.restock_diff.plugins.llm_restock'
try:
module = importlib.import_module(module_path)
if not plugin_manager.is_registered(module):
plugin_manager.register(module, 'llm_restock')
logger.debug("Registered built-in restock plugin: llm_restock")
except Exception as e:
logger.error(f"Failed to register llm_restock plugin: {e}")
# Helper function to collect UI stats extras from all plugins
def collect_ui_edit_stats_extras(watch):
"""Collect and combine HTML content from all plugins that implement ui_edit_stats_extras"""
@@ -368,7 +346,7 @@ def collect_fetcher_status_icons(fetcher_name):
return None
def get_itemprop_availability_from_plugin(content, fetcher_name, fetcher_instance, url, llm_intent=None):
def get_itemprop_availability_from_plugin(content, fetcher_name, fetcher_instance, url):
"""Get itemprop availability data from plugins as a fallback.
This is called when the built-in get_itemprop_availability doesn't find good data.
@@ -378,7 +356,6 @@ def get_itemprop_availability_from_plugin(content, fetcher_name, fetcher_instanc
fetcher_name: The name of the fetcher being used (e.g., 'html_js_zyte')
fetcher_instance: The fetcher instance that generated the content
url: The URL being watched (watch.link - includes Jinja2 evaluation)
llm_intent: Optional user-supplied intent string passed through to plugins
Returns:
dict or None: Availability data dictionary from first matching plugin, or None
@@ -388,8 +365,7 @@ def get_itemprop_availability_from_plugin(content, fetcher_name, fetcher_instanc
content=content,
fetcher_name=fetcher_name,
fetcher_instance=fetcher_instance,
url=url,
llm_intent=llm_intent,
url=url
)
# Return first non-None result with actual data
@@ -1,295 +0,0 @@
"""
LLM fallback plugin for price and restock info extraction.
When the built-in structured-metadata extraction (JSON-LD, microdata, OpenGraph)
fails to produce both a price and availability, this plugin is called as a last
resort. It sends a trimmed, HTML-stripped version of the page to the configured
LLM and asks it to return a structured JSON answer.
The module-level `datastore` variable is injected at startup by
`inject_datastore_into_plugins()` in pluggy_interface.py.
"""
import json
import re
from loguru import logger
from changedetectionio.pluggy_interface import hookimpl
# Injected at startup by inject_datastore_into_plugins()
datastore = None
SYSTEM_PROMPT = (
'You are an expert price and restock extraction utility. '
'Your task is to analyse a product page and determine the price and stock status of the MAIN product only.\n\n'
'AVAILABILITY — treat as "in stock":\n'
'- Action buttons near the product: "Add to cart", "Add to basket", "Buy now", '
'"Order now", "Purchase", "Import", "Add to bag", "Add to trolley", "In stock", '
'"Available", "Ships in X days/weeks", "In store", "Pick up today".\n'
'- "Pre-order" or "Reserve" — the item is orderable, treat as "in stock".\n'
'- "Only X left", "Almost gone", "Low stock", "Limited availability" — still in stock.\n'
'- "Request a quote" or "Contact us for pricing" — item is available, price is null.\n'
'- IMPORTANT: Ignore cart/basket/bag links in the page HEADER or navigation bar '
'(e.g. a shopping cart icon showing item count). That reflects what is already in '
'the visitor\'s cart — it says nothing about whether THIS product is available.\n\n'
'PRICE — what NOT to use:\n'
'- A "$0.00" or "0" that appears near header/nav links such as "Login", "Wishlist", '
'"Contact Us", "My Account" is an empty shopping-cart indicator, NOT the product price. '
'Ignore it entirely — return null for price rather than 0 in this situation.\n'
'- Only return 0 (free) when the page clearly states the product itself costs nothing '
'(e.g. "Free", "Free download", "Price: $0").\n\n'
'AVAILABILITY — treat as "out of stock":\n'
'- "Out of stock", "Sold out", "Unavailable", "Currently unavailable", '
'"Temporarily out of stock", "Discontinued", "No longer available", '
'"Notify me when available", "Email me when back", "Join waitlist".\n\n'
'AVAILABILITY — return null when uncertain:\n'
'- The page asks the user to select a size, colour, or other variant first '
'("Select an option", "Choose a size") — availability depends on the variant, so return null.\n'
'- You cannot clearly tell from the page content whether the item is available.\n\n'
'PRICE rules:\n'
'- Extract the main selling price as a plain number, no currency symbol.\n'
'- Prices may use any popular locale format — interpret them all correctly and return a plain decimal number. '
'Examples: "10 000 Kč" = 10000, "1.299,95 €" = 1299.95, "1,299.95" = 1299.95, '
'"10 000,50" = 10000.50, "£1.299" = 1299, "¥10000" = 10000.\n'
'- If both an original (crossed-out) price and a sale/current price appear, use the sale price.\n'
'- "From $X" or "Starting at $X" are teaser prices — prefer a definite price or return null.\n'
'- A price of 0 (free) is valid — return 0, not null.\n'
'- If pricing requires a quote or login, return null for price.\n'
'- Ignore prices shown in search/filter UI elements (e.g. "Price from: — to:").\n'
'- IMPORTANT: Ignore ALL prices that appear inside or below recommendation/discovery blocks '
'such as: "Similar items", "You may also like", "Customers also bought", '
'"Based on your browsing", "Based on your shopping", "Frequently bought together", '
'"People also viewed", "Related products", "Sponsored products", "More like this", '
'"Other sellers", "Compare with similar items". '
'These sections contain prices for OTHER products, not the main product.\n'
'- When multiple prices appear on the page, prefer the price that is positioned '
'earliest/highest in the page content — it is almost always the main product price. '
'Prices appearing after large blocks of descriptive text or review sections are '
'likely from recommendation widgets and should be ignored.\n\n'
'CLASSIFIEDS AND LISTING PAGES:\n'
'- On classifieds or marketplace sites (e.g. eBay listings, Craigslist, Bazoš, Gumtree), '
'if a price is shown alongside seller contact details or a "Contact seller" link, '
'treat the item as "instock" — the listing being active means it is available.\n\n'
'Return ONLY a JSON object with exactly these three keys:\n'
' "price" — number or null\n'
' "currency" — ISO-4217 code (USD, EUR, GBP …) or null\n'
' "availability" — exactly one of: "instock", "outofstock", or null\n'
' Use "instock" when the product can be ordered/purchased.\n'
' Use "outofstock" when it cannot.\n'
' Use null when you genuinely cannot tell.\n'
'No markdown, no backticks, no explanation — pure JSON only.'
)
_MAX_CONTENT_CHARS = 8_000
def _extract_jsonld(html_content: str) -> str:
"""Extract JSON-LD blocks — these contain reliable structured product data."""
blocks = re.findall(
r'<script[^>]+type=["\']application/ld\+json["\'][^>]*>(.*?)</script>',
html_content, flags=re.DOTALL | re.IGNORECASE
)
if not blocks:
return ''
combined = ' '.join(b.strip() for b in blocks)
return combined[:2000]
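# Illustrative sketch (hypothetical helper): the JSON-LD block is lifted out of the page
# verbatim so structured price/availability data survives the later tag stripping.
def _demo_extract_jsonld():
    html = (
        '<html><head><script type="application/ld+json">'
        '{"@type": "Product", "offers": {"price": "149.99", "availability": "InStock"}}'
        '</script></head><body><h1>Widget</h1></body></html>'
    )
    block = _extract_jsonld(html)
    assert '"price": "149.99"' in block and '"availability": "InStock"' in block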
# Semantic tags always treated as chrome (nav/header/footer)
_CHROME_TAGS = {'nav', 'header', 'footer', 'aside'}
# id/class fragments that strongly indicate navigation or site-chrome
_CHROME_PATTERNS = re.compile(
r'\b(nav|navigation|navbar|menu|mega-menu|breadcrumb|breadcrumbs?|'
r'site-header|page-header|top-bar|top-nav|top-header|mobile-nav|header-bar|'
r'site-footer|page-footer|footer-links|related|similar|'
r'you-?may-?also|customers?-?also|frequently-?bought|'
r'people-?also|sponsored|recommendation|widget|sidebar|'
r'cross-?sell|up-?sell)\b',
re.IGNORECASE,
)
def _remove_chrome(html_content: str) -> str:
"""Use BS4 to strip navigation, header, footer and recommendation noise.
Uses html.parser (built-in, no lxml) to avoid memory leak issues.
Falls back to the original HTML string if BS4 fails for any reason.
"""
try:
from bs4 import BeautifulSoup, Tag
soup = BeautifulSoup(html_content, 'html.parser')
# Snapshot the full tag list before any decompositions so we don't
# mutate the tree while iterating it. After a parent is decomposed
# its children become orphans (parent=None) — skip those.
for tag in list(soup.find_all(True)):
if not isinstance(tag, Tag) or tag.parent is None:
continue
name = tag.name or ''
if name in _CHROME_TAGS:
tag.decompose()
continue
try:
cls_list = tag.get('class') or []
cls_str = ' '.join(cls_list) if isinstance(cls_list, list) else str(cls_list)
id_str = tag.get('id') or ''
except Exception:
continue
if _CHROME_PATTERNS.search(cls_str + ' ' + id_str):
tag.decompose()
return str(soup)
except Exception as e:
logger.debug(f"BS4 chrome removal failed ({e}), using raw HTML")
return html_content
def _strip_html(html_content: str) -> str:
"""HTML-to-text for LLM consumption.
1. Extracts JSON-LD (structured product data) to prepend.
2. Strips nav/header/footer/recommendation blocks via BS4.
3. Removes all remaining tags and collapses whitespace.
JSON-LD is prepended so reliable price/availability data is always visible
to the LLM regardless of how deep it sits in the page.
"""
jsonld = _extract_jsonld(html_content)
# Remove site-chrome before generic tag stripping
cleaned = _remove_chrome(html_content)
# Drop HTML comments (can contain large disabled markup blocks)
text = re.sub(r'<!--.*?-->', ' ', cleaned, flags=re.DOTALL)
# Drop all <script> and <style> blocks
text = re.sub(r'<(script|style)[^>]*>.*?</(script|style)>', ' ', text, flags=re.DOTALL | re.IGNORECASE)
# Strip remaining tags
text = re.sub(r'<[^>]+>', ' ', text)
# Decode common entities
text = (text
.replace('&nbsp;', ' ')
.replace('&amp;', '&')
.replace('&lt;', '<')
.replace('&gt;', '>')
.replace('&quot;', '"')
.replace('&#39;', "'"))
text = re.sub(r'\s+', ' ', text).strip()
if jsonld:
budget = _MAX_CONTENT_CHARS - len(jsonld) - 1
return (jsonld + ' ' + text[:budget]).strip()
return text[:_MAX_CONTENT_CHARS]
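# Illustrative sketch (hypothetical helper): header/nav chrome and a recommendation block
# are dropped, markup is stripped, and only the main product text reaches the LLM.
def _demo_strip_html():
    html = (
        '<nav><a href="/cart">Cart (0) $0.00</a></nav>'
        '<main><h1>Widget Pro</h1><p>Price: <b>$149.99</b>. Add to cart</p></main>'
        '<div class="related-products">Similar item $9.99</div>'
    )
    text = _strip_html(html)
    assert '$149.99' in text and 'Add to cart' in text
    assert '$0.00' not in text   # cart counter removed with the <nav>
    assert '$9.99' not in text   # recommendation block removed by class pattern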
@hookimpl
def get_itemprop_availability_override(content, fetcher_name, fetcher_instance, url, llm_intent=None):
"""Use an LLM as a last-resort fallback for price and restock extraction."""
global datastore
if datastore is None:
logger.debug("LLM restock fallback: no datastore injected yet, skipping")
return None
# Gate on the user setting (default True — enabled out of the box)
app_settings = datastore.data.get('settings', {}).get('application', {})
if not app_settings.get('llm_restock_use_fallback_extract', True):
logger.debug("LLM restock fallback: disabled in settings")
return None
try:
from changedetectionio.llm.evaluator import get_llm_config, accumulate_global_tokens
from changedetectionio.llm import client as llm_client
except ImportError as e:
logger.debug(f"LLM restock fallback: LLM libraries not available ({e})")
return None
llm_cfg = get_llm_config(datastore)
if not llm_cfg or not llm_cfg.get('model'):
logger.debug("LLM restock fallback: no LLM model configured, skipping")
return None
text_content = _strip_html(content) if content else ''
logger.debug(f"LLM restock fallback: stripped HTML to {len(text_content)} chars for {url}")
if not text_content.strip():
logger.debug("LLM restock fallback: no text content after stripping HTML")
return None
logger.info(f"LLM restock fallback: using LLM ({llm_cfg['model']}) for price/stock extraction - {url}")
user_prompt = f'URL: {url or "unknown"}\n\nPage content:\n{text_content}'
if llm_intent:
user_prompt += f'\n\nUser notification intent: {llm_intent}'
try:
raw, tokens, input_tokens, output_tokens = llm_client.completion(
model=llm_cfg['model'],
messages=[
{'role': 'system', 'content': SYSTEM_PROMPT},
{'role': 'user', 'content': user_prompt},
],
api_key=llm_cfg.get('api_key'),
api_base=llm_cfg.get('api_base'),
max_tokens=80,
)
accumulate_global_tokens(
datastore, tokens,
input_tokens=input_tokens,
output_tokens=output_tokens,
model=llm_cfg['model'],
)
# Strip optional markdown fences the model might add
raw = raw.strip()
if raw.startswith('```'):
raw = re.sub(r'^```[a-z]*\n?', '', raw)
raw = raw.rstrip('`').strip()
logger.debug(f"LLM restock fallback raw response: {raw!r}")
result = json.loads(raw)
price = result.get('price')
currency = result.get('currency') or None
availability = result.get('availability') or None
# Normalise price to float
if price is not None:
try:
if isinstance(price, str):
price = float(re.sub(r'[^\d.]', '', price))
else:
price = float(price)
except (ValueError, TypeError):
logger.warning(f"LLM restock fallback: could not convert price {price!r} to float, ignoring")
price = None
if price is None and not availability:
logger.info(f"LLM restock fallback: LLM returned no usable price or availability for {url} (raw: {raw!r})")
return None
logger.info(
f"LLM restock fallback result: price={price} currency={currency} "
f"availability={availability!r} url={url}"
)
return {
'price': price,
'currency': currency,
'availability': availability,
'_tokens': tokens,
'_input_tokens': input_tokens,
'_output_tokens': output_tokens,
'_model': llm_cfg['model'],
}
except json.JSONDecodeError as e:
logger.warning(f"LLM restock fallback: JSON parse failed ({e}) - raw response was: {raw!r}")
return None
except Exception as e:
logger.warning(f"LLM restock fallback: extraction failed for {url}: {e}")
return None
@@ -486,7 +486,8 @@ class perform_site_check(difference_detection_processor):
has_price = itemprop_availability.get('price') is not None
has_availability = itemprop_availability.get('availability') is not None
if not (has_price and has_availability):
# @TODO !!! some setting like "Use as fallback" or "always use", "t
if not (has_price and has_availability) or True:
from changedetectionio.pluggy_interface import get_itemprop_availability_from_plugin
fetcher_name = watch.get('fetch_backend', 'html_requests')
@@ -505,23 +506,9 @@ class perform_site_check(difference_detection_processor):
# Try plugin override - plugins can decide if they support this fetcher
if fetcher_name:
logger.debug(f"Calling extra plugins for getting item price/availability (fetcher: {fetcher_name})")
from changedetectionio.llm.evaluator import resolve_intent
_llm_intent, _ = resolve_intent(watch, self.datastore)
plugin_availability = get_itemprop_availability_from_plugin(self.fetcher.content, fetcher_name, self.fetcher, watch.link, llm_intent=_llm_intent or None)
plugin_availability = get_itemprop_availability_from_plugin(self.fetcher.content, fetcher_name, self.fetcher, watch.link)
if plugin_availability:
# Extract and strip LLM token metadata before using as Restock data
_plugin_tokens = plugin_availability.pop('_tokens', 0)
_plugin_input_tokens = plugin_availability.pop('_input_tokens', 0)
_plugin_output_tokens = plugin_availability.pop('_output_tokens', 0)
_plugin_model = plugin_availability.pop('_model', '')
# Update per-watch token counters directly on the watch (same
# pattern as evaluator.py) so they're committed when update_watch runs
if _plugin_tokens:
watch['llm_last_tokens_used'] = _plugin_tokens
watch['llm_tokens_used_cumulative'] = (watch.get('llm_tokens_used_cumulative') or 0) + _plugin_tokens
# Plugin provided better data, use it
plugin_has_price = plugin_availability.get('price') is not None
plugin_has_availability = plugin_availability.get('availability') is not None
@@ -65,12 +65,6 @@ def prepare_filter_prevew(datastore, watch_uuid, form_data):
# Only update vars that came in via the AJAX post
p = {k: v for k, v in form.data.items() if k in form_data.keys()}
tmp_watch.update(p)
# Apply llm_intent from form directly — it's not part of processor_text_json_diff_form
# but the AJAX sends all visible inputs, so it arrives in form_data
if hasattr(form_data, 'get') and 'llm_intent' in form_data:
tmp_watch['llm_intent'] = (form_data.get('llm_intent') or '').strip()
blank_watch_no_filters = watch_model(datastore_path=datastore.datastore_path, __datastore=datastore.data)
blank_watch_no_filters['url'] = tmp_watch.get('url')
@@ -126,18 +120,6 @@ def prepare_filter_prevew(datastore, watch_uuid, form_data):
except Exception as e:
text_before_filter = f"Error: {str(e)}"
# LLM preview extraction — asks the LLM to directly answer the intent
# against the current filtered content (no diff comparison).
# e.g. intent "how many articles?" → answer "30 articles listed"
# Results are NOT cached back to the real watch.
llm_evaluation = None
try:
from changedetectionio.llm.evaluator import preview_extract
if text_after_filter and text_after_filter.strip() not in ('', 'Empty content'):
llm_evaluation = preview_extract(tmp_watch, datastore, content=text_after_filter)
except Exception as e:
logger.warning(f"LLM preview evaluation failed for {watch_uuid}: {e}")
logger.trace(f"Parsed in {time.time() - now:.3f}s")
return ({
@@ -146,7 +128,6 @@ def prepare_filter_prevew(datastore, watch_uuid, form_data):
'blocked_line_numbers': blocked_line_numbers,
'duration': time.time() - now,
'ignore_line_numbers': ignore_line_numbers,
'llm_evaluation': llm_evaluation,
'trigger_line_numbers': trigger_line_numbers,
})
@@ -98,7 +98,6 @@ DIFF_PREFERENCES_CONFIG = {
'added': {'default': True, 'type': 'bool'},
'replaced': {'default': True, 'type': 'bool'},
'type': {'default': 'diffLines', 'type': 'value'},
'llm_all_changes': {'default': False, 'type': 'bool'},
}
def render(watch, datastore, request, url_for, render_template, flash, redirect, extract_form=None):
@@ -200,23 +199,6 @@ def render(watch, datastore, request, url_for, render_template, flash, redirect,
if str(from_version) != str(dates[-2]) or str(to_version) != str(dates[-1]):
note = 'Note: You are not viewing the latest changes.'
llm_configured = bool(
datastore.data.get('settings', {}).get('application', {}).get('llm', {}).get('model')
)
# Load cached AI diff summary for this exact from→to + prompt combination
viewing_latest = str(to_version) == str(dates[-1])
llm_diff_summary = ''
llm_summary_prompt = ''
if llm_configured:
try:
from changedetectionio.llm.evaluator import get_effective_summary_prompt
_prompt = get_effective_summary_prompt(watch, datastore)
llm_summary_prompt = _prompt
llm_diff_summary = watch.get_llm_diff_summary(from_version, to_version, prompt=_prompt)
except Exception as e:
logger.warning(f"Could not load llm-diff-summary for {uuid}: {e}")
output = render_template("diff.html",
#initial_scroll_line_number=100,
bottom_horizontal_offscreen_contents=offscreen_content,
@@ -224,7 +206,7 @@ def render(watch, datastore, request, url_for, render_template, flash, redirect,
current_diff_url=watch['url'],
diff_cell_grid=diff_cell_grid,
diff_prefs=diff_prefs,
extra_classes=' '.join(filter(None, ['difference-page', 'llm-configured' if llm_configured else ''])),
extra_classes='difference-page',
extra_stylesheets=extra_stylesheets,
extra_title=f" - {watch.label} - {gettext('History')}",
extract_form=extract_form,
@@ -243,9 +225,5 @@ def render(watch, datastore, request, url_for, render_template, flash, redirect,
uuid=uuid,
versions=dates, # All except current/last
watch_a=watch,
llm_configured=llm_configured,
llm_diff_summary=llm_diff_summary,
llm_summary_prompt=llm_summary_prompt,
viewing_latest=viewing_latest,
)
return output
+9 -18
View File
@@ -3,13 +3,6 @@
* Provides accessible, animated confirmation dialogs
*/
// Escapes a string for safe insertion via innerHTML
function _modalEscapeHTML(str) {
const div = document.createElement('div');
div.textContent = str;
return div.innerHTML;
}
const ModalDialog = {
/**
* Show a confirmation dialog
@@ -132,10 +125,9 @@ const ModalDialog = {
* @param {Function} onConfirm - Callback when confirmed
*/
confirmDelete: function(itemName, onConfirm) {
const safeName = _modalEscapeHTML(itemName);
return this.confirm({
title: 'Delete ' + safeName + '?',
message: `<p>Are you sure you want to delete <strong>${safeName}</strong>?</p><p>This action cannot be undone.</p>`,
title: 'Delete ' + itemName + '?',
message: `<p>Are you sure you want to delete <strong>${itemName}</strong>?</p><p>This action cannot be undone.</p>`,
type: 'danger',
confirmText: 'Delete',
cancelText: 'Cancel',
@@ -149,10 +141,9 @@ const ModalDialog = {
* @param {Function} onConfirm - Callback when confirmed
*/
confirmUnlink: function(itemName, onConfirm) {
const safeName = _modalEscapeHTML(itemName);
return this.confirm({
title: 'Unlink ' + safeName + '?',
message: `<p>Are you sure you want to unlink all watches from <strong>${safeName}</strong>?</p><p>The tag will be kept but watches will be removed from it.</p>`,
title: 'Unlink ' + itemName + '?',
message: `<p>Are you sure you want to unlink all watches from <strong>${itemName}</strong>?</p><p>The tag will be kept but watches will be removed from it.</p>`,
type: 'warning',
confirmText: 'Unlink',
cancelText: 'Cancel',
@@ -181,11 +172,11 @@ $(document).ready(function() {
const url = $element.attr('href');
const config = {
type: $element.attr('data-confirm-type') || 'danger',
title: $element.attr('data-confirm-title') || 'Confirm Action',
message: $element.attr('data-confirm-message') || '<p>Are you sure you want to proceed?</p>',
confirmText: $element.attr('data-confirm-button') || 'Confirm',
cancelText: $element.attr('data-cancel-button') || 'Cancel',
type: $element.data('confirm-type') || 'danger',
title: $element.data('confirm-title') || 'Confirm Action',
message: $element.data('confirm-message') || '<p>Are you sure you want to proceed?</p>',
confirmText: $element.data('confirm-button') || 'Confirm',
cancelText: $element.data('cancel-button') || 'Cancel',
onConfirm: function() {
// If it's a link, navigate to the URL
if ($element.is('a')) {
-79
View File
@@ -1,79 +0,0 @@
/**
* sub-tabs.js - Vertical sub-tab switcher.
*
* Finds every .stab-shell on the page and wires up tab switching.
* The shell needs an id= attribute for localStorage persistence.
*
* HTML contract (generated by _stab.html macros):
* .stab-shell#some-id
* .stab-nav
* button.stab-btn[data-stab="foo"]
* .stab-body
* .stab-pane[data-stab="foo"]
*
* Any element inside the shell with data-stab-goto="tab-id" triggers
* navigation to that pane when clicked (for CTA buttons etc.).
*/
(function () {
'use strict';
function initShell(shell) {
var shellId = shell.id;
var storageKey = shellId ? 'stab:' + shellId : null;
var btns = Array.prototype.slice.call(shell.querySelectorAll('.stab-nav .stab-btn'));
var panes = Array.prototype.slice.call(shell.querySelectorAll('.stab-body .stab-pane'));
if (!btns.length || !panes.length) return;
var validIds = btns.map(function (b) { return b.dataset.stab; });
function activate(tabId) {
if (validIds.indexOf(tabId) === -1) return;
btns.forEach(function (b) {
b.classList.toggle('active', b.dataset.stab === tabId);
});
panes.forEach(function (p) {
p.classList.toggle('active', p.dataset.stab === tabId);
});
if (storageKey) {
try { localStorage.setItem(storageKey, tabId); } catch (e) {}
}
}
// Nav button clicks
btns.forEach(function (btn) {
btn.addEventListener('click', function () { activate(btn.dataset.stab); });
});
// data-stab-goto navigation from CTA buttons anywhere inside the shell
shell.addEventListener('click', function (e) {
var el = e.target.closest('[data-stab-goto]');
if (el && shell.contains(el)) {
e.preventDefault();
activate(el.dataset.stabGoto);
}
});
// Restore persisted tab or fall back to first tab
var stored = null;
if (storageKey) {
try { stored = localStorage.getItem(storageKey); } catch (e) {}
}
activate(stored && validIds.indexOf(stored) !== -1 ? stored : validIds[0]);
}
function initAll() {
var shells = document.querySelectorAll('.stab-shell');
shells.forEach(function (shell) { initShell(shell); });
}
if (document.readyState === 'loading') {
document.addEventListener('DOMContentLoaded', initAll);
} else {
initAll();
}
}());
@@ -10,27 +10,6 @@ $(document).ready(function () {
setCookieValue(!isDark);
});
// AI mode toggle — persisted in localStorage
(function initAiMode() {
const enabled = localStorage.getItem('ai-mode') === 'true';
$("html").attr("data-ai-mode", enabled ? "true" : "false");
})();
$(".toggle-ai-mode").on("click", function () {
if ($(this).data("llm-configured") !== true && $(this).data("llm-configured") !== "true") {
document.getElementById("llm-not-configured-modal").showModal();
return;
}
const current = $("html").attr("data-ai-mode") === "true";
const next = !current;
$("html").attr("data-ai-mode", next ? "true" : "false");
localStorage.setItem('ai-mode', next ? 'true' : 'false');
});
$("#close-llm-not-configured-modal").on("click", function () {
document.getElementById("llm-not-configured-modal").close();
});
const setCookieValue = (value) => {
document.cookie = `css_dark_mode=${value};max-age=31536000;path=/`
}
@@ -107,118 +107,5 @@ $(function () {
nowtimeserver = nowtimeserver + time_check_step_size_seconds;
}, time_check_step_size_seconds * 1000);
// LLM / AI features — only active when the server has LLM configured
if ($('body').hasClass('llm-configured')) {
var i18n = window.watchOverviewI18n || {};
var msgGenerating = i18n.generatingSummary || 'Generating summary…';
var msgHistory = i18n.gotoHistory || 'Goto full history';
// Reveal intent textarea on first keydown in the quick-add URL field
var $intentWrap = $('#quick-watch-llm-intent');
if ($intentWrap.length) {
$('#new-watch-form input[name="url"]').one('keydown', function () {
$intentWrap.slideDown(200);
});
}
// Inline AI summary — clicking the Summary button inserts a row below with AJAX content
$(document).on('click', '.ai-history-btn', function (e) {
if ($('html').attr('data-ai-mode') !== 'true') return; // normal navigation when AI mode is off
e.preventDefault();
var $btn = $(this);
var uuid = $btn.data('uuid');
var url = $btn.data('summary-url');
var $row = $btn.closest('tr');
var rowId = 'ai-summary-row-' + uuid;
var cols = $row.find('td').length;
var $tbody = $row.closest('tbody');
// Toggle: remove existing row if already open
if ($('#' + rowId).length) {
$('#' + rowId).remove();
$tbody.find('tr:not(.ai-inline-summary-row) td').css('background-color', '');
return;
}
// Snapshot row backgrounds BEFORE DOM mutation — inserting a <tr> shifts nth-child parity
var $dataRows = $tbody.find('tr:not(.ai-inline-summary-row)');
var bgMap = [];
$dataRows.each(function () {
bgMap.push($(this).find('td:first').css('background-color'));
});
var $summaryRow = $(
'<tr class="ai-inline-summary-row" id="' + rowId + '">' +
'<td colspan="' + cols + '">' +
'<div class="ai-inline-summary-content">' +
'<span class="ai-inline-spinner">&#x2728;</span>' +
'<div class="ai-inline-body">' +
'<span class="ai-inline-text">' + $('<span>').text(msgGenerating).html() + '</span>' +
'</div>' +
'</div>' +
'</td></tr>'
);
$row.after($summaryRow);
// Re-apply frozen backgrounds so the nth-child parity shift is invisible
$dataRows.each(function (i) {
$(this).find('td').css('background-color', bgMap[i]);
});
function formatSummary(text) {
var sectionRe = /^(Added|Changed|Removed|Updated|New|Deleted)\s*:/i;
return text.split('\n').map(function (line) {
var safe = $('<span>').text(line).html();
return sectionRe.test(line.trim())
? safe.replace(/^(\w[\w\s]*)(\s*:)/i, '<strong>$1$2</strong>')
: safe;
}).join('<br>');
}
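// Illustrative example (assumed input, not from the codebase): formatSummary bolds
// recognised section labels and HTML-escapes everything else, so
//   "Added: Jazz Night\nFooter year updated"
// becomes "<strong>Added:</strong> Jazz Night<br>Footer year updated".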
var promptUrl = url + '/prompt';
// Fire both requests simultaneously — prompt returns immediately, summary after LLM
$.getJSON(promptUrl)
.done(function (data) {
if (data.prompt && $summaryRow.find('.ai-inline-summary-content:not(.loaded)').length) {
$summaryRow.find('.ai-inline-body').append(
'<span class="ai-inline-prompt">' + $('<span>').text(data.prompt).html() + '</span>'
);
}
});
$.getJSON(url)
.done(function (data) {
var $content = $summaryRow.find('.ai-inline-summary-content');
var historyUrl = $btn.attr('href');
if (data.summary) {
$content.addClass('loaded');
$content.find('.ai-inline-text').html(formatSummary(data.summary));
$content.find('.ai-inline-prompt').remove();
$content.find('.ai-inline-body').append(
'<a href="' + historyUrl + '" class="ai-inline-history-link">' +
$('<span>').text(msgHistory).html() + '</a>'
);
} else if (data.error) {
$summaryRow.find('td').html(
'<span class="ai-inline-error">' + $('<span>').text(data.error).html() + '</span>'
);
} else {
$summaryRow.remove();
}
})
.fail(function (xhr) {
var msg = (xhr.responseJSON && xhr.responseJSON.error)
? xhr.responseJSON.error
: 'AI summary request failed (HTTP ' + xhr.status + ').';
$summaryRow.find('td').html(
'<span class="ai-inline-error">' + $('<span>').text(msg).html() + '</span>'
);
});
});
}
});
@@ -12,14 +12,7 @@ function request_textpreview_update() {
data[name] = $element.is(':checkbox') ? ($element.is(':checked') ? $element.val() : false) : $element.val();
});
// llm_intent lives in a separate (potentially hidden) tab — include it explicitly
const $llmIntent = $('textarea[name="llm_intent"]');
if ($llmIntent.length) {
data['llm_intent'] = $llmIntent.val();
}
$('body').toggleClass('spinner-active', 1);
$('#llm-preview-result').hide();
$.abortiveSingularAjax({
type: "POST",
@@ -48,21 +41,6 @@ function request_textpreview_update() {
'title': "No change-detection will occur because this text exists."
}
])
// LLM preview extraction result
const $llmResult = $('#llm-preview-result');
if ($llmResult.length && data['llm_evaluation']) {
const ev = data['llm_evaluation'];
const found = ev['found'];
$llmResult.attr('data-found', found ? '1' : '0');
$llmResult.find('.llm-preview-verdict').text(
found ? '✓ Would trigger a change' : '✗ Would not trigger a change'
);
$llmResult.find('.llm-preview-answer').text(ev['answer'] || '');
$llmResult.show();
} else if ($llmResult.length) {
$llmResult.hide();
}
}).fail(function (error) {
if (error.statusText === 'abort') {
console.log('Request was aborted due to a new request being fired.');
@@ -245,4 +245,4 @@ body.difference-page {
padding-right: 1rem;
}
}
}
}
@@ -1,14 +1,3 @@
// LLM intent/summary textareas are prose, not code: override the
// .monospaced-textarea rules so they wrap vertically rather than scroll horizontally.
#llm-intent-section textarea {
white-space: normal;
overflow-wrap: break-word;
overflow-x: hidden;
overflow-y: auto;
resize: vertical;
font-family: inherit;
}
ul#conditions_match_logic {
list-style: none;
input, label, li {
@@ -1,200 +0,0 @@
// AI / LLM shared styles
// Used by both the diff page (#llm-diff-summary-area) and the watchlist
// inline summary row (.ai-inline-summary-row).
// Diff page: summary block
#llm-diff-summary-area {
margin: 0.6rem 0 0.4rem;
padding: 0.65rem 0.9rem;
background: linear-gradient(135deg, rgba(120, 80, 200, 0.18), rgba(80, 160, 220, 0.14));
border-left: 3px solid rgba(140, 90, 220, 0.8);
border-radius: 0 4px 4px 0;
min-width: 0;
max-width: 100%;
box-sizing: border-box;
overflow: hidden;
.llm-diff-summary-label {
display: block;
font-size: 0.7rem;
font-weight: 700;
letter-spacing: 0.06em;
text-transform: uppercase;
opacity: 0.55;
margin-bottom: 0.25rem;
}
.llm-diff-summary-text {
margin: 0;
font-size: 0.9rem;
line-height: 1.5;
white-space: pre-wrap;
overflow-wrap: break-word;
word-break: break-word;
}
}
.llm-diff-summary-prompt {
margin: 0.4em 0 0;
font-size: 0.78rem;
font-style: italic;
overflow: hidden;
max-height: 3.8em;
animation: llm-prompt-reveal 0.7s ease-out both;
.llm-diff-summary-prompt-text {
display: block;
opacity: 0.55;
mask-image: linear-gradient(to bottom, rgba(0,0,0,0.85) 30%, rgba(0,0,0,0) 100%);
-webkit-mask-image: linear-gradient(to bottom, rgba(0,0,0,0.85) 30%, rgba(0,0,0,0) 100%);
white-space: pre-wrap;
overflow-wrap: break-word;
line-height: 1.45;
}
}
@keyframes llm-prompt-reveal {
from { opacity: 0; transform: translateY(-3px); }
to { opacity: 1; transform: translateY(0); }
}
.llm-diff-summary-loading {
opacity: 0.5;
font-style: italic;
animation: llm-pulse 1.4s ease-in-out infinite;
}
@keyframes llm-pulse {
0%, 100% { opacity: 0.5; }
50% { opacity: 0.2; }
}
.llm-budget-exceeded,
.llm-error {
color: #c0392b;
font-weight: 600;
font-style: normal;
opacity: 1;
}
// Menu: AI mode toggle button
.toggle-ai-mode {
opacity: 0.4;
transition: opacity 0.2s ease, filter 0.2s ease;
display: inline-flex;
align-items: center;
gap: 0.25rem;
color: var(--color-text-menu-link);
svg {
height: 1.2rem;
width: 1.2rem;
}
.ai-mode-label {
font-size: 0.75rem;
font-weight: 600;
letter-spacing: 0.04em;
line-height: 1;
}
}
html[data-ai-mode="true"] .toggle-ai-mode {
opacity: 1;
filter: drop-shadow(0 0 4px rgba(160, 100, 255, 0.7));
}
// Watchlist: History/Summary dual-label button
.btn-label-summary {
display: none;
}
html[data-ai-mode="true"] body.llm-configured {
.btn-label-history { display: none; }
.btn-label-summary { display: inline; }
}
// Watchlist: inline AI summary row
.ai-inline-summary-row {
td {
white-space: normal !important;
word-break: break-word;
padding: 0.5rem 1rem 0.6rem 1.4rem !important;
background: linear-gradient(135deg, #f0ebff, #eaf0ff) !important;
border-top: 1px solid #c4b5fd !important;
border-left: 3px solid #8b5cf6 !important;
color: #1a0640 !important;
font-size: 0.85rem;
line-height: 1.5;
}
html[data-darkmode="true"] & td {
background: linear-gradient(135deg, #1c0d35, #0d1535) !important;
border-top: 1px solid #3b1f6e !important;
border-left-color: #8b5cf6 !important;
color: #e9d5ff !important;
}
.ai-inline-summary-content {
display: flex;
gap: 0.5rem;
align-items: flex-start;
.ai-inline-spinner {
flex-shrink: 0;
animation: llm-pulse 1.4s ease-in-out infinite;
}
.ai-inline-body {
display: flex;
flex-direction: column;
min-width: 0;
}
.ai-inline-text {
font-style: italic;
opacity: 0.75;
white-space: pre-wrap;
}
&.loaded .ai-inline-spinner {
animation: none;
}
&.loaded .ai-inline-text {
font-style: normal;
opacity: 1;
}
}
.ai-inline-history-link {
display: inline-block;
margin-top: 0.4rem;
font-size: 0.78rem;
font-weight: 700;
opacity: 0.7;
white-space: nowrap;
&:hover { opacity: 1; }
}
.ai-inline-error {
color: #c0392b;
}
.ai-inline-prompt {
display: block;
margin-top: 0.3em;
font-size: 0.75rem;
font-style: italic;
overflow: hidden;
max-height: 3.6em;
animation: llm-prompt-reveal 0.6s ease-out both;
mask-image: linear-gradient(to bottom, rgba(0,0,0,0.7) 30%, rgba(0,0,0,0) 100%);
-webkit-mask-image: linear-gradient(to bottom, rgba(0,0,0,0.7) 30%, rgba(0,0,0,0) 100%);
opacity: 0.55;
line-height: 1.4;
white-space: pre-wrap;
overflow-wrap: break-word;
}
}
@@ -1,476 +0,0 @@
// Vertical sub-tab layout
// Reusable panel: left nav rail + right content area.
// HTML structure:
// <div class="stab-shell" id="some-id">
// <nav class="stab-nav">
// <button class="stab-btn" data-stab="foo"><span class="stab-icon"></span> Foo</button>
// </nav>
// <div class="stab-body">
// <div class="stab-pane" data-stab="foo">content</div>
// </div>
// </div>
// JS: sub-tabs.js initialises all .stab-shell elements, toggles .active.
// Form fields in hidden panes still submit (visibility:hidden, not display:none).
.stab-shell {
display: flex;
align-items: stretch;
background: var(--color-background);
border: 1px solid rgba(0, 0, 0, 0.08);
border-radius: 8px;
overflow: hidden;
box-shadow: 0 2px 8px rgba(0, 0, 0, 0.05);
margin-bottom: 1.5rem;
}
// Left nav rail
.stab-nav {
display: flex;
flex-direction: column;
width: 11rem;
flex-shrink: 0;
padding: 0.75rem 0;
gap: 1px;
background: linear-gradient(180deg,
rgba(0, 0, 0, 0.03) 0%,
rgba(0, 0, 0, 0.05) 100%
);
border-right: 1px solid rgba(0, 0, 0, 0.07);
}
.stab-btn {
position: relative;
display: flex;
align-items: center;
gap: 0.5rem;
padding: 0.65rem 0.9rem 0.65rem 1rem;
width: 100%;
background: none;
border: none;
border-left: 3px solid transparent;
border-radius: 0;
cursor: pointer;
font: inherit;
color: var(--color-text);
text-align: left;
opacity: 0.65;
transition: background 0.12s ease, opacity 0.12s ease, border-color 0.12s ease, color 0.12s ease;
&:hover {
background: rgba(0, 0, 0, 0.04);
opacity: 0.85;
}
&.active {
border-left-color: var(--color-menu-accent);
background: rgba(237, 89, 0, 0.07);
color: var(--color-menu-accent);
font-weight: 700;
opacity: 1;
}
.stab-icon {
font-size: 0.95rem;
width: 1.1rem;
text-align: center;
flex-shrink: 0;
opacity: 0.8;
}
}
// Right content area
.stab-body {
flex: 1;
min-width: 0;
padding: 1.4rem 1.6rem;
overflow-x: hidden;
}
// Individual pane hidden but field values still submitted
.stab-pane {
visibility: hidden;
height: 0;
overflow: hidden;
&.active {
visibility: visible;
height: auto;
overflow: visible;
animation: stab-enter 0.16s ease both;
}
}
@keyframes stab-enter {
from {
opacity: 0;
transform: translateY(5px);
}
to {
opacity: 1;
transform: translateY(0);
}
}
// Generic overview pane layout (hero + feature cards + CTA)
.stab-overview-hero {
margin-bottom: 1.5rem;
h3 {
margin: 0 0 0.3rem;
font-size: 1.05rem;
}
.stab-overview-glyph {
color: var(--color-menu-accent);
margin-right: 0.2rem;
}
p {
margin: 0;
font-size: 0.88rem;
color: var(--color-text-input-description);
max-width: 44rem;
line-height: 1.55;
}
}
.stab-overview-features {
display: flex;
flex-direction: column;
gap: 0.7rem;
margin-bottom: 1.6rem;
}
.stab-overview-feature {
display: flex;
gap: 0.85rem;
align-items: flex-start;
padding: 0.75rem 1rem;
border-radius: 6px;
background: rgba(0, 0, 0, 0.022);
border: 1px solid rgba(0, 0, 0, 0.05);
transition: background 0.12s ease;
&:hover {
background: rgba(0, 0, 0, 0.035);
}
.stab-overview-icon {
font-size: 1.1rem;
width: 1.6rem;
flex-shrink: 0;
text-align: center;
padding-top: 0.05rem;
opacity: 0.7;
}
.stab-overview-text {
> strong {
display: block;
margin-bottom: 0.2rem;
}
p {
color: var(--color-text-input-description);
}
}
}
.stab-overview-disclaimer {
display: flex;
gap: 0.75rem;
align-items: flex-start;
margin: 0 0 1.1rem;
padding: 0.85rem 1rem;
border-radius: 6px;
border: 1px solid rgba(211, 136, 0, 0.35);
background: rgba(255, 180, 0, 0.07);
.stab-disclaimer-icon {
font-size: 1.15rem;
flex-shrink: 0;
padding-top: 0.05rem;
color: #c07800;
}
.stab-disclaimer-body {
font-size: 0.85rem;
line-height: 1.55;
> strong {
display: block;
margin-bottom: 0.35rem;
color: #8a5500;
font-size: 0.87rem;
}
p {
margin: 0 0 0.45rem;
color: var(--color-text-input-description);
}
ul {
margin: 0 0 0.6rem;
padding-left: 1.25rem;
color: var(--color-text-input-description);
li {
margin-bottom: 0.2rem;
}
}
}
.stab-disclaimer-check {
display: flex;
gap: 0.5rem;
align-items: flex-start;
cursor: pointer;
font-size: 0.82rem;
color: var(--color-text-input-description);
font-weight: 600;
input[type="checkbox"] {
flex-shrink: 0;
margin-top: 0.18rem;
cursor: pointer;
}
}
}
.stab-overview-cta {
margin-top: 0.4rem;
display: flex;
align-items: center;
gap: 0.8rem;
flex-wrap: wrap;
}
.stab-configured-badge {
display: inline-flex;
align-items: center;
gap: 0.4rem;
padding: 0.35rem 0.75rem;
background: rgba(39, 174, 96, 0.09);
border: 1px solid rgba(39, 174, 96, 0.28);
border-radius: 4px;
color: #2a7a4e;
font-size: 0.82rem;
font-weight: 600;
}
// Section header inside a pane
.stab-section-title {
font-size: 0.72rem;
font-weight: 700;
letter-spacing: 0.07em;
text-transform: uppercase;
opacity: 0.45;
margin: 1.4rem 0 0.6rem;
&:first-child {
margin-top: 0;
}
}
// Mobile: stack nav above content, buttons stay vertical
@media (max-width: 600px) {
.stab-shell {
flex-direction: column;
min-height: unset;
}
.stab-nav {
width: 100%;
border-right: none;
border-bottom: 1px solid rgba(0, 0, 0, 0.07);
padding: 0.4rem 0;
}
.stab-body {
padding-left: 1rem;
}
}
// LLM Usage pane stat cards
.llm-usage-grid {
display: grid;
grid-template-columns: repeat(auto-fit, minmax(12rem, 1fr));
gap: 0.9rem;
margin-bottom: 1.4rem;
}
.llm-stat-card {
padding: 1rem 1.1rem 0.85rem;
border-radius: 7px;
background: rgba(0, 0, 0, 0.025);
border: 1px solid rgba(0, 0, 0, 0.07);
.llm-stat-label {
font-size: 0.7rem;
font-weight: 700;
letter-spacing: 0.07em;
text-transform: uppercase;
opacity: 0.4;
margin-bottom: 0.4rem;
}
.llm-stat-value {
font-size: 1.65rem;
font-weight: 700;
letter-spacing: -0.02em;
line-height: 1;
margin-bottom: 0.25rem;
}
.llm-stat-sub {
font-size: 0.79rem;
opacity: 0.5;
}
.llm-stat-budget-text {
font-size: 0.77rem;
opacity: 0.55;
margin-top: 0.3rem;
}
}
.llm-stat-bar-wrap {
height: 4px;
border-radius: 2px;
background: rgba(0, 0, 0, 0.1);
overflow: hidden;
margin-top: 0.65rem;
}
.llm-stat-bar-fill {
height: 100%;
border-radius: 2px;
transition: width 0.5s ease;
&.bar-ok { background: #27ae60; }
&.bar-warn { background: #e67e22; }
&.bar-over { background: #c0392b; }
}
// Usage settings rows
.llm-usage-settings {
border-top: 1px solid rgba(0, 0, 0, 0.07);
padding-top: 0.9rem;
display: flex;
flex-direction: column;
gap: 0.65rem;
}
.llm-usage-row {
display: flex;
align-items: baseline;
gap: 0.9rem;
flex-wrap: wrap;
.llm-usage-row-label {
font-size: 0.82rem;
font-weight: 600;
opacity: 0.6;
min-width: 12rem;
flex-shrink: 0;
}
.llm-usage-row-value {
display: flex;
align-items: baseline;
gap: 0.5rem;
flex-wrap: wrap;
font-size: 0.88rem;
}
}
.llm-field-hint {
font-size: 0.8rem;
opacity: 0.55;
}
.llm-env-badge {
font-size: 0.79rem;
opacity: 0.6;
}
.llm-budget-alert {
color: #c0392b;
font-weight: 600;
font-size: 0.88rem;
margin: 0 0 1rem;
}
.llm-no-usage {
opacity: 0.5;
font-style: italic;
font-size: 0.88rem;
margin-bottom: 1rem;
}
// Dark mode
html[data-darkmode="true"] {
.stab-shell {
border-color: rgba(255, 255, 255, 0.07);
box-shadow: 0 2px 8px rgba(0, 0, 0, 0.25);
}
.stab-nav {
background: linear-gradient(180deg,
rgba(255, 255, 255, 0.025) 0%,
rgba(255, 255, 255, 0.04) 100%
);
border-right-color: rgba(255, 255, 255, 0.07);
}
.stab-btn {
&:hover {
background: rgba(255, 255, 255, 0.05);
}
&.active {
background: rgba(237, 89, 0, 0.12);
}
}
.stab-overview-feature {
background: rgba(255, 255, 255, 0.025);
border-color: rgba(255, 255, 255, 0.05);
&:hover {
background: rgba(255, 255, 255, 0.04);
}
}
.stab-configured-badge {
background: rgba(39, 174, 96, 0.1);
border-color: rgba(39, 174, 96, 0.22);
color: #5db880;
}
.stab-overview-disclaimer {
border-color: rgba(255, 190, 50, 0.22);
background: rgba(255, 180, 0, 0.05);
.stab-disclaimer-icon {
color: #c9963a;
}
.stab-disclaimer-body > strong {
color: #c9a050;
}
}
.llm-stat-card {
background: rgba(255, 255, 255, 0.03);
border-color: rgba(255, 255, 255, 0.07);
}
.llm-stat-bar-wrap {
background: rgba(255, 255, 255, 0.1);
}
.llm-usage-settings {
border-top-color: rgba(255, 255, 255, 0.07);
}
}
@@ -180,10 +180,4 @@ $grid-gap: 0.5rem;
.pure-table td {
padding: 3px !important;
}
}
@media (min-width: 768px) {
.watch-table thead tr th .hide-on-desktop {
display: none;
}
}
@@ -28,12 +28,10 @@
@use "parts/action_sidebar";
@use "parts/hamburger_menu";
@use "parts/search_modal";
@use "parts/llm";
@use "parts/notification_bubble";
@use "parts/toast";
@use "parts/login_form";
@use "parts/tabs";
@use "parts/sub_tabs";
// Smooth transitions for theme switching
body,
File diff suppressed because one or more lines are too long
@@ -1,7 +1,7 @@
{% from '_helpers.html' import render_field %}
{% macro show_token_placeholders(extra_notification_token_placeholder_info, suffix="", settings_application=None) %}
{% macro show_token_placeholders(extra_notification_token_placeholder_info, suffix="") %}
<div class="pure-controls">
@@ -52,6 +52,10 @@
<td><code>{{ '{{diff_url}}' }}</code></td>
<td>{{ _('The URL of the diff output for the watch.') }}</td>
</tr>
<tr>
<td><code>{{ '{{diff_url}}' }}</code></td>
<td>{{ _('The URL of the diff output for the watch.') }}</td>
</tr>
<tr>
<td><code>{{ '{{diff}}' }}</code></td>
<td>{{ _('The diff output - only changes, additions, and removals') }}<br>
@@ -110,25 +114,7 @@
<tr>
<td><code>{{ '{{triggered_text}}' }}</code></td>
<td>{{ _('Text that tripped the trigger from filters') }}</td>
</tr>
{% if settings_application and settings_application.get('llm', {}).get('model') %}
<tr>
<td><code>{{ '{{diff}}' }}</code> <small style="opacity:0.6">{{ _('(upgraded)') }}</small></td>
<td>{{ _('When AI Change Summary is configured, contains the AI-generated description instead of the raw diff. Falls back to raw diff when not configured.') }}</td>
</tr>
<tr>
<td><code>{{ '{{raw_diff}}' }}</code></td>
<td>{{ _('Always the raw +/- diff, regardless of AI Change Summary setting.') }}</td>
</tr>
<tr>
<td><code>{{ '{{llm_summary}}' }}</code></td>
<td>{{ _('The AI Change Summary text (same as the upgraded {{diff}} — explicit reference).') }}</td>
</tr>
<tr>
<td><code>{{ '{{llm_intent}}' }}</code></td>
<td>{{ _('The AI Change Intent that was evaluated.') }}</td>
</tr>
{% endif %}
{% if extra_notification_token_placeholder_info %}
{% for token in extra_notification_token_placeholder_info %}
<tr>
@@ -158,8 +144,7 @@
}}
<div class="pure-form-message-inline">
<p>
<strong>{{ _('Tip:') }}</strong> {{ _('Use <a target="newwindow" href="%(url)s">AppRise Notification URLs</a> for notification to just about any service!',
url='https://github.com/caronc/apprise')|safe }} <a target="newwindow" href="https://github.com/dgtlmoon/changedetection.io/wiki/Notification-configuration-notes">{{ _('<i>Please read the notification services wiki here for important configuration notes</i>')|safe }}</a>.<br>
<strong>{{ _('Tip:') }}</strong> {{ _('Use') }} <a target="newwindow" href="https://github.com/caronc/apprise">{{ _('AppRise Notification URLs') }}</a> {{ _('for notification to just about any service!') }} <i><a target="newwindow" href="https://github.com/dgtlmoon/changedetection.io/wiki/Notification-configuration-notes">{{ _('Please read the notification services wiki here for important configuration notes') }}</a></i>.<br>
</p>
<div data-target="#advanced-help-notifications" class="toggle-show pure-button button-tag button-xsmall">{{ _('Show advanced help and tips') }}</div>
<ul style="display: none" id="advanced-help-notifications">
@@ -167,7 +152,7 @@
<li><code><a target="newwindow" href="https://github.com/caronc/apprise/wiki/Notify_telegram">tgram://</a></code> {{ _('bots can\'t send messages to other bots, so you should specify chat ID of non-bot user.') }}</li>
<li><code><a target="newwindow" href="https://github.com/caronc/apprise/wiki/Notify_telegram">tgram://</a></code> {{ _('only supports very limited HTML and can fail when extra tags are sent,') }} <a href="https://core.telegram.org/bots/api#html-style">{{ _('read more here') }}</a> {{ _('(or use plaintext/markdown format)') }}</li>
<li><code>gets://</code>, <code>posts://</code>, <code>puts://</code>, <code>deletes://</code> {{ _('for direct API calls (or omit the') }} "<code>s</code>" {{ _('for non-SSL ie') }} <code>get://</code>) <a href="https://github.com/dgtlmoon/changedetection.io/wiki/Notification-configuration-notes#postposts">{{ _('more help here') }}</a></li>
<li>{{ _('Accepts the <code>%(token)s</code> placeholders listed below', token='{{token}}')|safe }}</li>
<li>{{ _('Accepts the') }} <code>{{ '{{token}}' }}</code> {{ _('placeholders listed below') }}</li>
</ul>
</div>
<div class="notifications-wrapper">
@@ -188,7 +173,7 @@
</div>
<div class="pure-control-group">
{{ render_field(form.notification_body , rows=5, class="notification-body", placeholder=settings_application['notification_body']) }}
{{ show_token_placeholders(extra_notification_token_placeholder_info=extra_notification_token_placeholder_info, settings_application=settings_application) }}
{{ show_token_placeholders(extra_notification_token_placeholder_info=extra_notification_token_placeholder_info) }}
<div class="pure-form-message-inline">
<ul>
<li><span class="pure-form-message-inline">
@@ -179,6 +179,13 @@
</div>
{% endmacro %}
{% macro playwright_warning() %}
<p><strong>{{ _('Error - This watch needs Chrome (with playwright/sockpuppetbrowser), but Chrome based fetching is not enabled.') }}</strong> {{ _('Alternatively try our') }} <a href="https://changedetection.io">{{ _('very affordable subscription based service which has all this setup for you') }}</a>.</p>
<p>{{ _('You may need to') }} <a href="https://github.com/dgtlmoon/changedetection.io/blob/09ebc6ec6338545bdd694dc6eee57f2e9d2b8075/docker-compose.yml#L31">{{ _('Enable playwright environment variable') }}</a> {{ _('and uncomment the') }} <strong>sockpuppetbrowser</strong> {{ _('in the') }} <a href="https://github.com/dgtlmoon/changedetection.io/blob/master/docker-compose.yml">docker-compose.yml</a> {{ _('file') }}.</p>
<br>
{% endmacro %}
{% macro render_time_schedule_form(form, available_timezones, timezone_default_config) %}
<style>
.day-schedule *, .day-schedule select {
-47
View File
@@ -1,47 +0,0 @@
{#
Vertical sub-tab macros — reusable across any settings pane.
Usage:
{% from '_stab.html' import stab_shell, stab_pane %}
{% call stab_shell('my-shell-id', [
{'id': 'overview', 'label': _('Overview'), 'icon': '✦'},
{'id': 'settings', 'label': _('Settings'), 'icon': '⚙'},
]) %}
{% call stab_pane('overview') %}
<p>Overview content…</p>
{% endcall %}
{% call stab_pane('settings') %}
<p>Settings content…</p>
{% endcall %}
{% endcall %}
Tabs are switched by sub-tabs.js (looks for .stab-shell elements).
Hidden panes use visibility:hidden so form fields inside still submit.
Active tab is persisted in localStorage keyed by shell id.
data-stab-goto="tab-id" on any element inside the shell navigates to that tab.
#}
{% macro stab_shell(shell_id, tabs) %}
<div class="stab-shell" id="{{ shell_id }}">
<nav class="stab-nav" aria-label="{{ _('Settings sections') }}">
{%- for tab in tabs %}
<button class="stab-btn" type="button" data-stab="{{ tab.id }}" aria-controls="stab-pane-{{ tab.id }}">
{%- if tab.get('icon') %}<span class="stab-icon" aria-hidden="true">{{ tab.icon }}</span>{% endif -%}
{{ tab.label }}
</button>
{%- endfor %}
</nav>
<div class="stab-body">
{{ caller() }}
</div>
</div>
{% endmacro %}
{% macro stab_pane(tab_id) %}
<div class="stab-pane" data-stab="{{ tab_id }}" id="stab-pane-{{ tab_id }}" role="tabpanel">
{{ caller() }}
</div>
{% endmacro %}
+1 -15
View File
@@ -281,20 +281,6 @@
</div>
</dialog>
<!-- LLM Not Configured Modal -->
<dialog id="llm-not-configured-modal" class="modal-dialog" aria-labelledby="llm-not-configured-modal-title">
<div class="modal-header">
<h2 class="modal-title" id="llm-not-configured-modal-title">{{ _('AI / LLM not configured') }}</h2>
</div>
<div class="modal-body">
<p>{{ _('LLM-powered summaries are not yet enabled. Configure an AI provider in Settings to get started.') }}</p>
<p><a href="{{ url_for('settings.settings_page') }}#ai" class="pure-button pure-button-primary">{{ _('Go to AI / LLM Settings') }}</a></p>
</div>
<div class="modal-footer">
<button type="button" class="pure-button" id="close-llm-not-configured-modal">{{ _('Close') }}</button>
</div>
</dialog>
<!-- Search Modal -->
{% if current_user.is_authenticated or not has_password %}
<dialog id="search-modal" class="modal-dialog" aria-labelledby="search-modal-title">
@@ -304,7 +290,7 @@
<div class="modal-body">
<form id="search-form" method="GET">
<div class="pure-control-group">
<label for="search-modal-input">{% if active_tag_uuid %}{{ _("URL or Title in '%(title)s'", title=active_tag.title) }}{% else %}{{ _('URL or Title') }}{% endif %}</label>
<label for="search-modal-input">{{ _('URL or Title') }}{% if active_tag_uuid %} {{ _('in') }} '{{ active_tag.title }}'{% endif %}</label>
<input id="search-modal-input" class="m-d" name="q" placeholder="{{ _('Enter search term...') }}" required type="text" value="" autofocus>
<input name="tags" type="hidden" value="{% if active_tag_uuid %}{{active_tag_uuid}}{% endif %}">
</div>
@@ -1,126 +0,0 @@
{#
AI Intent + AI Change Summary section — shared include for watch edit and tag/group edit.
Required template context:
llm_configured — bool: LLM provider is configured in settings
form — the WTForms form (must have .llm_intent and .llm_change_summary fields)
Optional (watch edit only):
watch — the Watch object (for processor check and prefilter display)
llm_group_overrides — dict returned by _resolve_llm_group_overrides():
{'llm_intent': {'value': str, 'group_name': str} | None,
'llm_change_summary': {'value': str, 'group_name': str} | None}
Present only in watch edit context; absent in tag edit context.
Usage in watch edit (edit.html):
{% include "edit/include_llm_intent.html" %}
Usage in tag edit (edit-tag.html):
{% include "edit/include_llm_intent.html" %}
(watch is not set → tag mode: no processor check, no examples, different description)
#}
{% from '_helpers.html' import render_field %}
{# Processor check only applies in watch-edit context (llm_group_overrides present). #}
{# In tag/group edit context the AI section is always visible. #}
{% if llm_group_overrides is defined %}
{% set is_text_json_diff = not watch.get('processor') or watch.get('processor') == 'text_json_diff' %}
{% else %}
{% set is_text_json_diff = true %}
{% endif %}
{% if is_text_json_diff %}
{# ── Configured: show the intent + summary fields ────────────────── #}
{% if llm_configured %}
<div class="border-fieldset" id="llm-intent-section">
<h3>&#x2728; {{ _('AI') }}</h3>
{# — AI Change Intent — #}
<h4 style="margin: 0 0 0.3em 0;">{{ _('AI — Notify when…') }}</h4>
<p class="pure-form-message-inline" style="margin-top:0">
{% if watch is defined and watch %}
{{ _('Describe what you care about. The AI evaluates every detected change against this and only notifies you when it matches.') }}
{% else %}
{{ _('Set a change intent for all watches in this tag/group. Each watch can override with its own.') }}
{% endif %}
</p>
<div class="pure-control-group">
{% if watch is defined and watch and llm_group_overrides is defined and llm_group_overrides.llm_intent %}
{% set intent_placeholder = _("From group '%(name)s': %(value)s", name=llm_group_overrides.llm_intent.group_name, value=llm_group_overrides.llm_intent.value) %}
{% elif watch is defined and watch %}
{% set intent_placeholder = _('e.g. Alert me when the price drops below $300, or a new product is launched. Ignore footer and navigation changes.') %}
{% else %}
{% set intent_placeholder = _('e.g. Flag price changes or new product launches across all watches in this group') %}
{% endif %}
{{ render_field(form.llm_intent, placeholder=intent_placeholder, rows=5, class="pure-input-1") }}
</div>
{% if watch is defined and watch %}
<div class="pure-form-message-inline">
<strong>{{ _('Examples:') }}</strong>
<ul style="margin: 0.3em 0 0 1.2em; padding: 0;">
<li><em>{{ _('Only notify if the price drops below $200, or a limited-time deal is added') }}</em></li>
<li><em>{{ _('Alert when a new recall, safety notice, or product withdrawal is published') }}</em></li>
<li><em>{{ _('Notify when a new grant round opens or an application deadline is announced') }}</em></li>
<li><em>{{ _('Only important if package versions change or a CVE is mentioned') }}</em></li>
</ul>
</div>
{% if watch.get('llm_prefilter') %}
<div class="pure-form-message-inline" style="margin-top: 0.5em;">
<small>{{ _('AI pre-filter active: <code>%(filter)s</code> — narrows content scope before evaluation', filter=watch.get('llm_prefilter')|e) | safe }}</small>
</div>
{% endif %}
{% endif %}
<hr style="margin: 1.2em 0; border: none; border-top: 1px solid var(--color-border, #ddd);">
{# — AI Change Summary — #}
<h4 style="margin: 0 0 0.3em 0;">{{ _('AI Change Summary') }}</h4>
<p class="pure-form-message-inline" style="margin-top:0">
{% if watch is defined and watch %}
{{ _('When a change is detected, the AI describes it according to your instructions and replaces <code>%(diff)s</code> in your notification. Use <code>%(raw_diff)s</code> if you still want the original diff.',
diff='{{diff}}', raw_diff='{{raw_diff}}') | safe }}
{% else %}
{{ _('Describe how changes should be summarised in notifications for all watches in this group.') }}
{% endif %}
</p>
<div class="pure-control-group">
{% if watch is defined and watch and llm_group_overrides is defined and llm_group_overrides.llm_change_summary %}
{% set summary_placeholder = _("From group '%(name)s': %(value)s", name=llm_group_overrides.llm_change_summary.group_name, value=llm_group_overrides.llm_change_summary.value) %}
{% else %}
{% set summary_placeholder = form.llm_change_summary.render_kw['placeholder'] %}
{% endif %}
{{ render_field(form.llm_change_summary, placeholder=summary_placeholder, rows=5, class="pure-input-1") }}
</div>
<div style="margin-top: 0.3em;">
<a href="#" class="pure-button button-xsmall" onclick="var t=document.getElementById('llm_change_summary'); if(!t.value&amp;&amp;t.placeholder) t.value=t.placeholder; return false;">{{ _('Modify default prompt') }}</a>
</div>
{% if watch is defined and watch %}
<div class="pure-form-message-inline">
<strong>{{ _('Examples:') }}</strong>
<ul style="margin: 0.3em 0 0 1.2em; padding: 0;">
<li><em>{{ _('List each new item added with its name and price. Translate to English.') }}</em></li>
<li><em>{{ _('Summarise what events were added or cancelled. Two sentences maximum.') }}</em></li>
<li><em>{{ _('Describe the price change: old price, new price, percentage difference.') }}</em></li>
</ul>
</div>
{% endif %}
</div>
{# ── Not configured: greyed-out prompt to configure ──────────────── #}
{% else %}
<div class="border-fieldset" id="llm-intent-section-disabled" style="opacity: 0.5;">
<h3>&#x2728; {{ _('AI') }}</h3>
<p>
{% if watch is defined and watch %}
{{ _('Configure an AI / LLM provider in <a href="%(url)s">Settings → AI / LLM</a> to enable AI Change Intent and AI Change Summary.',
url=url_for('settings.settings_page') + '#ai') | safe }}
{% else %}
{{ _('Configure an AI / LLM provider in <a href="%(url)s">Settings → AI / LLM</a> to enable AI features for this group.',
url=url_for('settings.settings_page') + '#ai') | safe }}
{% endif %}
</p>
</div>
{% endif %}
{% endif %}{# is_text_json_diff #}
@@ -7,33 +7,33 @@ xpath://body/div/span[contains(@class, 'example-class')]",
%}
{{ field }}
{% if '/text()' in field %}
<span class="pure-form-message-inline"><strong>{{ _('Note!: //text() function does not work where the <element> contains <![CDATA[]]>') }}</strong></span><br>
<span class="pure-form-message-inline"><strong>Note!: //text() function does not work where the &lt;element&gt; contains &lt;![CDATA[]]&gt;</strong></span><br>
{% endif %}
<span class="pure-form-message-inline">{{ _('One CSS, xPath 1 & 2, JSON Path/JQ selector per line, <i>any</i> rules that matches will be used.') | safe }}<br>
<span class="pure-form-message-inline">One CSS, xPath 1 &amp; 2, JSON Path/JQ selector per line, <i>any</i> rules that matches will be used.<br>
<span data-target="#advanced-help-selectors" class="toggle-show pure-button button-tag button-xsmall">{{ _('Show advanced help and tips') }}</span><br>
<ul id="advanced-help-selectors" style="display: none;">
<li>CSS - {{ _('Limit text to this CSS rule, only text matching this CSS rule is included.') }}</li>
<li>CSS - Limit text to this CSS rule, only text matching this CSS rule is included.</li>
<li>JSON - Limit text to this JSON rule, using either <a href="https://pypi.org/project/jsonpath-ng/" target="new">JSONPath</a> or <a href="https://stedolan.github.io/jq/" target="new">jq</a> (if installed).
<ul>
<li>JSONPath: {{ _('Prefix with <code>json:</code>, use <code>json:$</code> to force re-formatting if required,') | safe }} <a href="https://jsonpath.com/" target="new">{{ _('test your JSONPath here') }}</a>.</li>
<li>JSONPath: Prefix with <code>json:</code>, use <code>json:$</code> to force re-formatting if required, <a href="https://jsonpath.com/" target="new">test your JSONPath here</a>.</li>
{% if jq_support %}
<li>jq: Prefix with <code>jq:</code> and <a href="https://jqplay.org/" target="new">test your jq here</a>. Using <a href="https://stedolan.github.io/jq/" target="new">jq</a> allows for complex filtering and processing of JSON data with built-in functions, regex, filtering, and more. See examples and documentation <a href="https://stedolan.github.io/jq/manual/" target="new">here</a>. {{ _('Prefix <code>jqraw:</code> outputs the results as text instead of a JSON list.') | safe }}</li>
<li>jq: Prefix with <code>jq:</code> and <a href="https://jqplay.org/" target="new">test your jq here</a>. Using <a href="https://stedolan.github.io/jq/" target="new">jq</a> allows for complex filtering and processing of JSON data with built-in functions, regex, filtering, and more. See examples and documentation <a href="https://stedolan.github.io/jq/manual/" target="new">here</a>. Prefix <code>jqraw:</code> outputs the results as text instead of a JSON list.</li>
{% else %}
<li>{{ _('jq support not installed') }}</li>
<li>jq support not installed</li>
{% endif %}
</ul>
</li>
<li>XPath - {{ _('Limit text to this XPath rule, simply start with a forward-slash. To specify XPath to be used explicitly or the XPath rule starts with an XPath function: Prefix with <code>xpath:</code>') | safe }}
<li>XPath - Limit text to this XPath rule, simply start with a forward-slash. To specify XPath to be used explicitly or the XPath rule starts with an XPath function: Prefix with <code>xpath:</code>
<ul>
<li>{{ _('Example') }}: <code>//*[contains(@class, 'sametext')]</code> or <code>xpath:count(//*[contains(@class, 'sametext')])</code>, <a
href="http://xpather.com/" target="new">{{ _('test your XPath here') }}</a></li>
<li>{{ _('Example') }}: {{ _('Get all titles from an RSS feed <code>//title/text()</code>') | safe }}</li>
<li>{{ _('To use XPath1.0: Prefix with <code>xpath1:</code>') | safe }}</li>
<li>Example: <code>//*[contains(@class, 'sametext')]</code> or <code>xpath:count(//*[contains(@class, 'sametext')])</code>, <a
href="http://xpather.com/" target="new">test your XPath here</a></li>
<li>Example: Get all titles from an RSS feed <code>//title/text()</code></li>
<li>To use XPath1.0: Prefix with <code>xpath1:</code></li>
</ul>
</li>
<li>
Please be sure that you thoroughly understand how to write CSS, JSONPath, XPath{% if jq_support %}, or jq selector{%endif%} rules before filing an issue on GitHub! <a
href="https://github.com/dgtlmoon/changedetection.io/wiki/CSS-Selector-help">{{ _('here for more CSS selector help') }}</a>.<br>
href="https://github.com/dgtlmoon/changedetection.io/wiki/CSS-Selector-help">here for more CSS selector help</a>.<br>
</li>
</ul>
@@ -35,10 +35,10 @@
<fieldset>
<div class="pure-control-group">
{{ render_field(form.text_should_not_be_present, rows=5, placeholder=_("For example: Out of stock
{{ render_field(form.text_should_not_be_present, rows=5, placeholder="For example: Out of stock
Sold out
Not in stock
Unavailable")) }}
Unavailable") }}
<span class="pure-form-message-inline">
<ul>
<li>{{ _('Block change-detection while this text is on the page, all text and regex are tested case-insensitive, good for waiting for when a product is available again') }}</li>
@@ -51,15 +51,15 @@ Unavailable")) }}
</fieldset>
<fieldset>
<div class="pure-control-group">
{{ render_field(form.extract_lines_containing, rows=5, placeholder=_("celsius
{{ render_field(form.extract_lines_containing, rows=5, placeholder="celsius
temperature
price")) }}
price") }}
<span class="pure-form-message-inline">
<ul>
<li>{{ _('Keep only lines that contain any of these words or phrases (plain text, case-insensitive)') }}</li>
<li>{{ _('One entry per line — any line in the page text that contains a match is kept') }}</li>
<li>{{ _('Simpler alternative to regex — use this when you just want lines about a specific topic') }}</li>
<li>{{ _('Example') }}: {{ _('enter <code>celsius</code> to keep only lines mentioning temperature readings')|safe }}</li>
<li>{{ _('Example: enter') }} <code>celsius</code> {{ _('to keep only lines mentioning temperature readings') }}</li>
</ul>
</span>
</div>
@@ -75,8 +75,8 @@ keyword") }}
<ul>
<li>{{ _('Regular expression - example') }} <code>/reports.+?2022/i</code></li>
<li>{{ _('Don\'t forget to consider the white-space at the start of a line') }} <code>/.+?reports.+?2022/i</code></li>
<li>{{ _('Use <code>//(?aiLmsux))</code> type flags (more <a href="%(url)s">information here</a>)', url='https://docs.python.org/3/library/re.html#index-15')|safe }}<br></li>
<li>{{ _('Keyword example - example: <code>Out of stock</code>')|safe }}</li>
<li>{{ _('Use') }} <code>//(?aiLmsux))</code> {{ _('type flags (more') }} <a href="https://docs.python.org/3/library/re.html#index-15">{{ _('information here') }}</a>)<br></li>
<li>{{ _('Keyword example - example') }} <code>Out of stock</code></li>
<li>{{ _('Use groups to extract just that text - example') }} <code>/reports.+?(\d+)/i</code> {{ _('returns a list of years only') }}</li>
<li>{{ _('Example - match lines containing a keyword') }} <code>/.*icecream.*/</code></li>
</ul>
-4
View File
@@ -37,10 +37,6 @@
</li>
{% endif %}
<li class="pure-menu-item menu-collapsible" id="inline-menu-extras-group">
<button class="toggle-button toggle-ai-mode" type="button" title="{{ _('Toggle AI Mode') }}" data-llm-configured="{{ 'true' if llm_configured else 'false' }}" data-llm-settings-url="{{ url_for('settings.settings_page') }}#ai">
<span class="visually-hidden">{{ _('Toggle AI mode') }}</span>
{% include "svgs/ai-mode-icon.svg" %}<span class="ai-mode-label">LLM</span>
</button>
<button class="toggle-button toggle-light-mode " type="button" title="{{ _('Toggle Light/Dark Mode') }}">
<span class="visually-hidden">{{ _('Toggle light/dark mode') }}</span>
<span class="icon-light">
@@ -1,5 +0,0 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" fill="currentColor">
<path d="M12 2L13.8 9.2L21 11L13.8 12.8L12 22L10.2 12.8L3 11L10.2 9.2Z"/>
<circle cx="20" cy="3.5" r="1.4" opacity="0.75"/>
<circle cx="18" cy="18.5" r="0.9" opacity="0.5"/>
</svg>

Before
Width:  |  Height:  |  Size: 268 B

@@ -1,571 +0,0 @@
"""
Unit tests for changedetectionio/llm/evaluator.py
Uses mocked LLM calls; no real API key needed.
"""
import pytest
from unittest.mock import patch, MagicMock
def _make_datastore(llm_cfg=None, tags=None):
"""Build a minimal datastore-like dict for testing."""
ds = MagicMock()
app_settings = {
'llm': llm_cfg or {},
'tags': tags or {},
}
ds.data = {
'settings': {
'application': app_settings,
}
}
return ds
def _make_watch(llm_intent='', llm_change_summary='', tags=None, uuid='test-uuid-1234'):
w = {}
w['llm_intent'] = llm_intent
w['llm_change_summary'] = llm_change_summary
w['tags'] = tags or []
w['uuid'] = uuid
w['url'] = 'https://example.com'
w['page_title'] = 'Test Page'
w['llm_evaluation_cache'] = {}
w['llm_prefilter'] = None
return w
# ---------------------------------------------------------------------------
# resolve_intent
# ---------------------------------------------------------------------------
class TestResolveIntent:
def test_watch_intent_takes_priority(self):
from changedetectionio.llm.evaluator import resolve_intent
tag = {'title': 'mygroup', 'llm_intent': 'group intent'}
ds = _make_datastore(tags={'tag-1': tag})
watch = _make_watch(llm_intent='watch intent', tags=['tag-1'])
intent, source = resolve_intent(watch, ds)
assert intent == 'watch intent'
assert source == 'watch'
def test_tag_intent_used_when_watch_has_none(self):
from changedetectionio.llm.evaluator import resolve_intent
tag = {'title': 'pricing-group', 'llm_intent': 'flag price drops'}
ds = _make_datastore(tags={'tag-1': tag})
watch = _make_watch(llm_intent='', tags=['tag-1'])
intent, source = resolve_intent(watch, ds)
assert intent == 'flag price drops'
assert source == 'pricing-group'
def test_no_intent_anywhere_returns_empty(self):
from changedetectionio.llm.evaluator import resolve_intent
ds = _make_datastore()
watch = _make_watch(llm_intent='')
intent, source = resolve_intent(watch, ds)
assert intent == ''
assert source == ''
def test_tag_applied_to_all_watches_in_group(self):
"""Tag intent propagates to every watch in the tag (no opt-in needed)."""
from changedetectionio.llm.evaluator import resolve_intent
tag = {'title': 'job-board', 'llm_intent': 'new engineering jobs'}
ds = _make_datastore(tags={'tag-1': tag})
# Three different watches, all in the tag, none have their own intent
for watch_uuid in ['uuid-A', 'uuid-B', 'uuid-C']:
watch = _make_watch(llm_intent='', tags=['tag-1'], uuid=watch_uuid)
intent, source = resolve_intent(watch, ds)
assert intent == 'new engineering jobs', f"Watch {watch_uuid} should inherit tag intent"
assert source == 'job-board'
def test_whitespace_only_intent_treated_as_empty(self):
from changedetectionio.llm.evaluator import resolve_intent
ds = _make_datastore()
watch = _make_watch(llm_intent=' ')
intent, source = resolve_intent(watch, ds)
assert intent == ''
def test_missing_tag_in_datastore_skipped(self):
from changedetectionio.llm.evaluator import resolve_intent
ds = _make_datastore(tags={}) # no tags registered
watch = _make_watch(llm_intent='', tags=['nonexistent-tag'])
intent, source = resolve_intent(watch, ds)
assert intent == ''
# ---------------------------------------------------------------------------
# get_llm_config
# ---------------------------------------------------------------------------
class TestGetLlmConfig:
def test_returns_none_when_no_model(self):
from changedetectionio.llm.evaluator import get_llm_config
ds = _make_datastore(llm_cfg={})
assert get_llm_config(ds) is None
def test_returns_config_when_model_set(self):
from changedetectionio.llm.evaluator import get_llm_config
cfg = {'model': 'gpt-4o-mini', 'api_key': 'sk-test'}
ds = _make_datastore(llm_cfg=cfg)
result = get_llm_config(ds)
assert result['model'] == 'gpt-4o-mini'
def test_env_var_overrides_datastore(self):
"""LLM_MODEL env var takes priority over datastore settings."""
from changedetectionio.llm.evaluator import get_llm_config
ds = _make_datastore(llm_cfg={'model': 'datastore-model'})
with patch.dict('os.environ', {'LLM_MODEL': 'ollama/llama3.2', 'LLM_API_KEY': '', 'LLM_API_BASE': ''}):
result = get_llm_config(ds)
assert result['model'] == 'ollama/llama3.2'
def test_env_var_api_key_and_base_included(self):
"""LLM_API_KEY and LLM_API_BASE are picked up alongside LLM_MODEL."""
from changedetectionio.llm.evaluator import get_llm_config
ds = _make_datastore()
env = {'LLM_MODEL': 'gpt-4o', 'LLM_API_KEY': 'env-key', 'LLM_API_BASE': 'http://localhost:11434'}
with patch.dict('os.environ', env):
result = get_llm_config(ds)
assert result['api_key'] == 'env-key'
assert result['api_base'] == 'http://localhost:11434'
def test_llm_configured_via_env_true_when_model_set(self):
"""llm_configured_via_env() returns True when LLM_MODEL is set."""
from changedetectionio.llm.evaluator import llm_configured_via_env
with patch.dict('os.environ', {'LLM_MODEL': 'gpt-4o-mini'}):
assert llm_configured_via_env() is True
def test_llm_configured_via_env_false_when_not_set(self):
"""llm_configured_via_env() returns False when LLM_MODEL is absent."""
from changedetectionio.llm.evaluator import llm_configured_via_env
env = {k: '' for k in ['LLM_MODEL', 'LLM_API_KEY', 'LLM_API_BASE']}
with patch.dict('os.environ', env, clear=False):
# Ensure LLM_MODEL is truly absent
import os
os.environ.pop('LLM_MODEL', None)
assert llm_configured_via_env() is False
# ---------------------------------------------------------------------------
# evaluate_change
# ---------------------------------------------------------------------------
class TestEvaluateChange:
def test_returns_none_when_llm_not_configured(self):
from changedetectionio.llm.evaluator import evaluate_change
ds = _make_datastore(llm_cfg={}) # no model
watch = _make_watch(llm_intent='flag price drops')
result = evaluate_change(watch, ds, diff='- $500\n+ $400')
assert result is None
def test_returns_none_when_no_intent(self):
from changedetectionio.llm.evaluator import evaluate_change
ds = _make_datastore(llm_cfg={'model': 'gpt-4o-mini'})
watch = _make_watch(llm_intent='')
result = evaluate_change(watch, ds, diff='some diff')
assert result is None
def test_returns_not_important_for_empty_diff(self):
from changedetectionio.llm.evaluator import evaluate_change
ds = _make_datastore(llm_cfg={'model': 'gpt-4o-mini'})
watch = _make_watch(llm_intent='flag price drops')
result = evaluate_change(watch, ds, diff='')
assert result == {'important': False, 'summary': ''}
def test_returns_not_important_for_whitespace_diff(self):
from changedetectionio.llm.evaluator import evaluate_change
ds = _make_datastore(llm_cfg={'model': 'gpt-4o-mini'})
watch = _make_watch(llm_intent='flag price drops')
result = evaluate_change(watch, ds, diff=' \n ')
assert result == {'important': False, 'summary': ''}
def test_calls_llm_and_returns_result(self):
from changedetectionio.llm.evaluator import evaluate_change
ds = _make_datastore(llm_cfg={'model': 'gpt-4o-mini', 'api_key': 'sk-test'})
watch = _make_watch(llm_intent='flag price drops')
llm_response = '{"important": true, "summary": "Price dropped from $500 to $400"}'
with patch('changedetectionio.llm.client.completion', return_value=(llm_response, 150)):
result = evaluate_change(watch, ds, diff='- $500\n+ $400')
assert result['important'] is True
assert 'Price dropped' in result['summary']
def test_cache_hit_skips_llm_call(self):
from changedetectionio.llm.evaluator import evaluate_change
import hashlib
ds = _make_datastore(llm_cfg={'model': 'gpt-4o-mini', 'api_key': 'sk-test'})
watch = _make_watch(llm_intent='flag price drops')
diff = '- $500\n+ $400'
intent = 'flag price drops'
cache_key = hashlib.sha256(f"{intent}||{diff}".encode()).hexdigest()
watch['llm_evaluation_cache'] = {
cache_key: {'important': True, 'summary': 'cached result'}
}
with patch('changedetectionio.llm.client.completion') as mock_llm:
result = evaluate_change(watch, ds, diff=diff)
mock_llm.assert_not_called()
assert result['summary'] == 'cached result'
def test_llm_failure_returns_important_true(self):
"""On LLM error, notification should NOT be suppressed (fail open)."""
from changedetectionio.llm.evaluator import evaluate_change
ds = _make_datastore(llm_cfg={'model': 'gpt-4o-mini', 'api_key': 'sk-test'})
watch = _make_watch(llm_intent='flag price drops')
with patch('changedetectionio.llm.client.completion', side_effect=Exception('API timeout')):
result = evaluate_change(watch, ds, diff='- $500\n+ $400')
assert result['important'] is True
assert result['summary'] == ''
def test_unimportant_result_from_llm(self):
from changedetectionio.llm.evaluator import evaluate_change
ds = _make_datastore(llm_cfg={'model': 'gpt-4o-mini'})
watch = _make_watch(llm_intent='only alert on price drops')
llm_response = '{"important": false, "summary": "Only a footer copyright year changed"}'
with patch('changedetectionio.llm.client.completion', return_value=(llm_response, 45)):
result = evaluate_change(watch, ds, diff='- Copyright 2023\n+ Copyright 2024')
assert result['important'] is False
assert 'footer' in result['summary'].lower() or 'copyright' in result['summary'].lower()
def test_last_tokens_used_stored_after_eval(self):
"""watch['llm_last_tokens_used'] is set to the token count after a successful call."""
from changedetectionio.llm.evaluator import evaluate_change
ds = _make_datastore(llm_cfg={'model': 'gpt-4o-mini'})
watch = _make_watch(llm_intent='flag price drops')
llm_response = '{"important": true, "summary": "Price fell"}'
with patch('changedetectionio.llm.client.completion', return_value=(llm_response, 123)):
evaluate_change(watch, ds, diff='- $500\n+ $300')
assert watch.get('llm_last_tokens_used') == 123
def test_cumulative_tokens_accumulate_across_evals(self):
"""Each eval adds its tokens to watch['llm_tokens_used_cumulative']."""
from changedetectionio.llm.evaluator import evaluate_change
ds = _make_datastore(llm_cfg={'model': 'gpt-4o-mini'})
watch = _make_watch(llm_intent='flag price drops')
resp1 = '{"important": true, "summary": "First"}'
resp2 = '{"important": false, "summary": "Second"}'
with patch('changedetectionio.llm.client.completion', return_value=(resp1, 80)):
evaluate_change(watch, ds, diff='- $500\n+ $400')
# Second call needs a different diff to avoid cache hit
with patch('changedetectionio.llm.client.completion', return_value=(resp2, 60)):
evaluate_change(watch, ds, diff='- $400\n+ $350')
assert watch.get('llm_tokens_used_cumulative') == 140
# ---------------------------------------------------------------------------
# Token budget enforcement
# ---------------------------------------------------------------------------
class TestTokenBudget:
def test_no_limits_always_returns_true(self):
"""When no limits configured, budget check always passes."""
from changedetectionio.llm.evaluator import _check_token_budget
watch = _make_watch()
cfg = {} # no limits
assert _check_token_budget(watch, cfg, tokens_this_call=10_000) is True
def test_per_check_limit_exceeded_returns_false(self):
"""Tokens on this call exceeding per-check limit → False."""
from changedetectionio.llm.evaluator import _check_token_budget
watch = _make_watch()
cfg = {'max_tokens_per_check': 100}
result = _check_token_budget(watch, cfg, tokens_this_call=150)
assert result is False
def test_per_check_limit_not_exceeded_returns_true(self):
"""Tokens on this call within per-check limit → True."""
from changedetectionio.llm.evaluator import _check_token_budget
watch = _make_watch()
cfg = {'max_tokens_per_check': 200}
result = _check_token_budget(watch, cfg, tokens_this_call=150)
assert result is True
def test_cumulative_limit_exceeded_returns_false(self):
"""Total accumulated tokens exceeding cumulative limit → False."""
from changedetectionio.llm.evaluator import _check_token_budget
watch = _make_watch()
watch['llm_tokens_used_cumulative'] = 900
cfg = {'max_tokens_cumulative': 1000}
# This call adds 200 → total 1100 > 1000
result = _check_token_budget(watch, cfg, tokens_this_call=200)
assert result is False
def test_cumulative_limit_not_yet_exceeded_returns_true(self):
"""Total accumulated tokens within cumulative limit → True."""
from changedetectionio.llm.evaluator import _check_token_budget
watch = _make_watch()
watch['llm_tokens_used_cumulative'] = 500
cfg = {'max_tokens_cumulative': 1000}
result = _check_token_budget(watch, cfg, tokens_this_call=100)
assert result is True
def test_tokens_accumulated_into_watch(self):
"""tokens_this_call is added to watch['llm_tokens_used_cumulative']."""
from changedetectionio.llm.evaluator import _check_token_budget
watch = _make_watch()
watch['llm_tokens_used_cumulative'] = 300
cfg = {}
_check_token_budget(watch, cfg, tokens_this_call=75)
assert watch['llm_tokens_used_cumulative'] == 375
def test_zero_tokens_call_does_not_change_cumulative(self):
"""Calling with tokens_this_call=0 (pre-flight check) doesn't modify cumulative."""
from changedetectionio.llm.evaluator import _check_token_budget
watch = _make_watch()
watch['llm_tokens_used_cumulative'] = 200
cfg = {}
_check_token_budget(watch, cfg, tokens_this_call=0)
assert watch['llm_tokens_used_cumulative'] == 200
def test_evaluate_change_skips_call_when_cumulative_over_budget(self):
"""Pre-flight cumulative check: if already over budget, skip LLM call and fail open."""
from changedetectionio.llm.evaluator import evaluate_change
ds = _make_datastore(llm_cfg={'model': 'gpt-4o-mini', 'max_tokens_cumulative': 100})
watch = _make_watch(llm_intent='flag price drops')
watch['llm_tokens_used_cumulative'] = 500 # already far over
with patch('changedetectionio.llm.client.completion') as mock_llm:
result = evaluate_change(watch, ds, diff='- $500\n+ $400')
mock_llm.assert_not_called()
# Fail open: important=True so the notification is NOT suppressed
assert result == {'important': True, 'summary': ''}
def test_evaluate_change_per_check_limit_fails_open(self):
"""Per-check token exceeded after call → result still returned (fail open)."""
from changedetectionio.llm.evaluator import evaluate_change
# max_tokens_per_check is 50, but the call returns 150 tokens
ds = _make_datastore(llm_cfg={'model': 'gpt-4o-mini', 'max_tokens_per_check': 50})
watch = _make_watch(llm_intent='flag price drops')
llm_response = '{"important": false, "summary": "Only minor change"}'
with patch('changedetectionio.llm.client.completion', return_value=(llm_response, 150)):
result = evaluate_change(watch, ds, diff='- $500\n+ $499')
# LLM said not important, but even with per-check warning the result is returned
# (budget warning is logged but evaluation result is still used)
assert result is not None
assert 'important' in result
# ---------------------------------------------------------------------------
# resolve_llm_field (generic cascade)
# ---------------------------------------------------------------------------
class TestResolveLlmField:
def test_watch_value_takes_priority(self):
from changedetectionio.llm.evaluator import resolve_llm_field
tag = {'title': 'mygroup', 'llm_change_summary': 'tag summary prompt'}
ds = _make_datastore(tags={'tag-1': tag})
watch = _make_watch(llm_change_summary='watch summary prompt', tags=['tag-1'])
value, source = resolve_llm_field(watch, ds, 'llm_change_summary')
assert value == 'watch summary prompt'
assert source == 'watch'
def test_tag_value_used_when_watch_empty(self):
from changedetectionio.llm.evaluator import resolve_llm_field
tag = {'title': 'events-group', 'llm_change_summary': 'list new events'}
ds = _make_datastore(tags={'tag-1': tag})
watch = _make_watch(llm_change_summary='', tags=['tag-1'])
value, source = resolve_llm_field(watch, ds, 'llm_change_summary')
assert value == 'list new events'
assert source == 'events-group'
def test_returns_empty_when_not_set_anywhere(self):
from changedetectionio.llm.evaluator import resolve_llm_field
ds = _make_datastore()
watch = _make_watch()
value, source = resolve_llm_field(watch, ds, 'llm_change_summary')
assert value == ''
assert source == ''
def test_works_for_llm_intent_field_too(self):
"""resolve_llm_field is generic — works for llm_intent same as llm_change_summary."""
from changedetectionio.llm.evaluator import resolve_llm_field
tag = {'title': 'grp', 'llm_intent': 'flag price drops'}
ds = _make_datastore(tags={'t1': tag})
watch = _make_watch(llm_intent='', tags=['t1'])
value, source = resolve_llm_field(watch, ds, 'llm_intent')
assert value == 'flag price drops'
# ---------------------------------------------------------------------------
# summarise_change
# ---------------------------------------------------------------------------
class TestSummariseChange:
def test_returns_empty_when_llm_not_configured(self):
from changedetectionio.llm.evaluator import summarise_change
ds = _make_datastore(llm_cfg={})
watch = _make_watch(llm_change_summary='List what changed')
result = summarise_change(watch, ds, diff='- old\n+ new')
assert result == ''
def test_uses_default_prompt_when_no_summary_prompt(self):
"""When llm_change_summary is empty, falls back to DEFAULT_CHANGE_SUMMARY_PROMPT."""
from changedetectionio.llm.evaluator import summarise_change, DEFAULT_CHANGE_SUMMARY_PROMPT
ds = _make_datastore(llm_cfg={'model': 'gpt-4o-mini', 'api_key': 'sk-test'})
watch = _make_watch(llm_change_summary='')
with patch('changedetectionio.llm.client.completion',
return_value=('A new item was added.', 40)) as mock_llm:
result = summarise_change(watch, ds, diff='- old\n+ new')
mock_llm.assert_called_once()
# Default prompt must appear in the user message
call_messages = mock_llm.call_args.kwargs['messages']
user_msg = next(m['content'] for m in call_messages if m['role'] == 'user')
assert DEFAULT_CHANGE_SUMMARY_PROMPT in user_msg
assert result == 'A new item was added.'
def test_returns_empty_when_diff_empty(self):
from changedetectionio.llm.evaluator import summarise_change
ds = _make_datastore(llm_cfg={'model': 'gpt-4o-mini'})
watch = _make_watch(llm_change_summary='List what changed')
with patch('changedetectionio.llm.client.completion') as mock_llm:
result = summarise_change(watch, ds, diff='')
mock_llm.assert_not_called()
assert result == ''
def test_calls_llm_and_returns_plain_text(self):
from changedetectionio.llm.evaluator import summarise_change
ds = _make_datastore(llm_cfg={'model': 'gpt-4o-mini', 'api_key': 'sk-test'})
watch = _make_watch(llm_change_summary='List new events in English')
with patch('changedetectionio.llm.client.completion',
return_value=('3 new events added: Jazz Night, Art Show, Comedy Gig', 80)):
result = summarise_change(watch, ds, diff='+ Jazz Night\n+ Art Show\n+ Comedy Gig')
assert 'Jazz Night' in result
assert 'Art Show' in result
def test_cascades_from_tag(self):
"""llm_change_summary on a tag propagates to watches in that tag."""
from changedetectionio.llm.evaluator import summarise_change
tag = {'title': 'events', 'llm_change_summary': 'Translate events to English'}
ds = _make_datastore(llm_cfg={'model': 'gpt-4o-mini'}, tags={'tag-1': tag})
watch = _make_watch(llm_change_summary='', tags=['tag-1'])
with patch('changedetectionio.llm.client.completion',
return_value=('New concert added on Friday', 60)):
result = summarise_change(watch, ds, diff='+ Konzert am Freitag')
assert result == 'New concert added on Friday'
def test_llm_failure_raises(self):
"""On LLM error, summarise_change re-raises so callers can surface the error."""
from changedetectionio.llm.evaluator import summarise_change
ds = _make_datastore(llm_cfg={'model': 'gpt-4o-mini'})
watch = _make_watch(llm_change_summary='Describe the change')
with patch('changedetectionio.llm.client.completion', side_effect=Exception('timeout')):
with pytest.raises(Exception, match='timeout'):
summarise_change(watch, ds, diff='- old\n+ new')
def test_uses_higher_token_limit_than_eval(self):
"""summarise_change passes a dynamic max_tokens larger than the eval default (200)."""
from changedetectionio.llm.evaluator import summarise_change, _summary_max_tokens
ds = _make_datastore(llm_cfg={'model': 'gpt-4o-mini'})
watch = _make_watch(llm_change_summary='Describe changes')
diff = '- old\n+ new'
with patch('changedetectionio.llm.client.completion',
return_value=('Some summary', 100)) as mock_llm:
summarise_change(watch, ds, diff=diff)
call_kwargs = mock_llm.call_args
passed_max_tokens = call_kwargs.kwargs.get('max_tokens')
assert passed_max_tokens == _summary_max_tokens(diff)
assert passed_max_tokens >= 400 # always more generous than eval cap of 200
def test_dynamic_token_cap_scales_with_diff_size(self):
"""Larger diffs produce a higher max_tokens cap, bounded at 3000."""
from changedetectionio.llm.evaluator import _summary_max_tokens
assert _summary_max_tokens('x' * 100) == 400 # floor
assert _summary_max_tokens('x' * 4000) == 1000
assert _summary_max_tokens('x' * 12000) == 3000 # ceiling
assert _summary_max_tokens('x' * 99999) == 3000 # never exceeds ceiling
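# Illustrative sketch only: one heuristic that satisfies the assertions above,
# assuming roughly one token per four characters of diff clamped to the
# [400, 3000] range. The shipped _summary_max_tokens may compute this differently.
def _example_summary_max_tokens(diff):
    return max(400, min(3000, len(diff) // 4))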
# ---------------------------------------------------------------------------
# compute_summary_cache_key / get_effective_summary_prompt
# ---------------------------------------------------------------------------
class TestSummaryCacheKey:
def test_same_inputs_produce_same_key(self):
from changedetectionio.llm.evaluator import compute_summary_cache_key
key1 = compute_summary_cache_key('+ new line', 'describe changes')
key2 = compute_summary_cache_key('+ new line', 'describe changes')
assert key1 == key2
def test_different_diff_produces_different_key(self):
from changedetectionio.llm.evaluator import compute_summary_cache_key
key1 = compute_summary_cache_key('+ line A', 'prompt')
key2 = compute_summary_cache_key('+ line B', 'prompt')
assert key1 != key2
def test_different_prompt_produces_different_key(self):
from changedetectionio.llm.evaluator import compute_summary_cache_key
key1 = compute_summary_cache_key('diff', 'list changes')
key2 = compute_summary_cache_key('diff', 'translate to English')
assert key1 != key2
def test_key_is_16_hex_chars(self):
from changedetectionio.llm.evaluator import compute_summary_cache_key
key = compute_summary_cache_key('diff', 'prompt')
assert len(key) == 16
assert all(c in '0123456789abcdef' for c in key)
def test_get_effective_prompt_returns_custom_when_set(self):
from changedetectionio.llm.evaluator import get_effective_summary_prompt
ds = _make_datastore()
watch = _make_watch(llm_change_summary='My custom prompt')
assert get_effective_summary_prompt(watch, ds) == 'My custom prompt'
def test_get_effective_prompt_returns_default_when_empty(self):
from changedetectionio.llm.evaluator import get_effective_summary_prompt, DEFAULT_CHANGE_SUMMARY_PROMPT
ds = _make_datastore()
watch = _make_watch(llm_change_summary='')
assert get_effective_summary_prompt(watch, ds) == DEFAULT_CHANGE_SUMMARY_PROMPT
def test_get_effective_prompt_cascades_from_tag(self):
from changedetectionio.llm.evaluator import get_effective_summary_prompt
tag = {'title': 'grp', 'llm_change_summary': 'tag-level prompt'}
ds = _make_datastore(tags={'t1': tag})
watch = _make_watch(llm_change_summary='', tags=['t1'])
assert get_effective_summary_prompt(watch, ds) == 'tag-level prompt'
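# Illustrative sketch only: a hash scheme consistent with TestSummaryCacheKey
# above (deterministic, sensitive to both diff and prompt, 16 lowercase hex
# characters). The shipped compute_summary_cache_key may hash differently.
import hashlib

def _example_summary_cache_key(diff, prompt):
    return hashlib.sha256(f"{prompt}\n{diff}".encode('utf-8')).hexdigest()[:16]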
@@ -1,313 +0,0 @@
"""
Tests for the LLM restock fallback plugin.
All LLM calls are mocked; no real API key required.
"""
import json
import pytest
from unittest.mock import patch, MagicMock
def _make_datastore(llm_model='gpt-4o-mini', enabled=True):
"""Minimal datastore mock with the fields the plugin reads."""
ds = MagicMock()
ds.data = {
'settings': {
'application': {
'llm_restock_use_fallback_extract': enabled,
'llm': {
'model': llm_model,
'api_key': 'test-key',
'api_base': '',
'tokens_total_cumulative': 0,
'tokens_this_month': 0,
'tokens_month_key': '2099-01',
'cost_usd_total_cumulative': 0.0,
'cost_usd_this_month': 0.0,
},
}
}
}
return ds
def _call_plugin(content, url='https://example.com/product',
llm_json=None, datastore=None, enabled=True, llm_intent=None):
"""Helper: import plugin, inject datastore, call the hook, return result."""
from changedetectionio.processors.restock_diff.plugins import llm_restock
if datastore is None:
datastore = _make_datastore(enabled=enabled)
llm_restock.datastore = datastore
if llm_json is not None:
with patch('changedetectionio.llm.client.completion',
return_value=(llm_json, 50, 40, 10)):
return llm_restock.get_itemprop_availability_override(
content=content,
fetcher_name='html_requests',
fetcher_instance=None,
url=url,
llm_intent=llm_intent,
)
else:
return llm_restock.get_itemprop_availability_override(
content=content,
fetcher_name='html_requests',
fetcher_instance=None,
url=url,
llm_intent=llm_intent,
)
class TestLLMRestockPluginDisabled:
def test_returns_none_when_no_datastore(self):
from changedetectionio.processors.restock_diff.plugins import llm_restock
llm_restock.datastore = None
result = llm_restock.get_itemprop_availability_override(
content='<html><body>Price: $49.99 In Stock</body></html>',
fetcher_name='html_requests',
fetcher_instance=None,
url='https://example.com/product',
)
assert result is None
def test_returns_none_when_setting_disabled(self):
result = _call_plugin(
'<html><body>Price: $49.99 In Stock</body></html>',
enabled=False,
)
assert result is None
def test_returns_none_when_no_llm_configured(self):
ds = MagicMock()
ds.data = {
'settings': {
'application': {
'llm_restock_use_fallback_extract': True,
# No 'llm' key → get_llm_config returns None
}
}
}
result = _call_plugin(
'<html><body>Price: $49.99 In Stock</body></html>',
datastore=ds,
)
assert result is None
def test_returns_none_for_empty_content(self):
result = _call_plugin('', llm_json='{"price": 9.99, "currency": "USD", "availability": "instock"}')
assert result is None
class TestLLMRestockPluginExtraction:
def test_extracts_price_and_in_stock(self):
llm_json = '{"price": 49.99, "currency": "USD", "availability": "instock"}'
result = _call_plugin(
'<html><body><span class="price">$49.99</span> <span>In Stock</span></body></html>',
llm_json=llm_json,
)
assert result is not None
assert result['price'] == 49.99
assert result['currency'] == 'USD'
assert result['availability'] == 'instock'
def test_extracts_out_of_stock(self):
llm_json = '{"price": 129.00, "currency": "EUR", "availability": "outofstock"}'
result = _call_plugin(
'<html><body>129,00 € — Sold out</body></html>',
llm_json=llm_json,
)
assert result is not None
assert result['price'] == 129.0
assert result['currency'] == 'EUR'
assert result['availability'] == 'outofstock'
def test_returns_availability_only_when_no_price(self):
llm_json = '{"price": null, "currency": null, "availability": "instock"}'
result = _call_plugin(
'<html><body>Item available</body></html>',
llm_json=llm_json,
)
assert result is not None
assert result['price'] is None
assert result['availability'] == 'instock'
def test_returns_price_only_when_no_availability(self):
llm_json = '{"price": 19.95, "currency": "GBP", "availability": null}'
result = _call_plugin(
'<html><body>£19.95</body></html>',
llm_json=llm_json,
)
assert result is not None
assert result['price'] == 19.95
assert result['availability'] is None
def test_returns_none_when_both_null(self):
llm_json = '{"price": null, "currency": null, "availability": null}'
result = _call_plugin(
'<html><body>No pricing info here</body></html>',
llm_json=llm_json,
)
assert result is None
def test_strips_markdown_fences(self):
llm_json = '```json\n{"price": 9.99, "currency": "USD", "availability": "instock"}\n```'
result = _call_plugin(
'<html><body>$9.99 In Stock</body></html>',
llm_json=llm_json,
)
assert result is not None
assert result['price'] == 9.99
def test_handles_integer_price(self):
llm_json = '{"price": 100, "currency": "USD", "availability": "instock"}'
result = _call_plugin(
'<html><body>$100 In Stock</body></html>',
llm_json=llm_json,
)
assert result is not None
assert result['price'] == 100.0
def test_handles_string_price(self):
"""Model might return price as a string despite the prompt."""
llm_json = '{"price": "49.99", "currency": "USD", "availability": "instock"}'
result = _call_plugin(
'<html><body>$49.99</body></html>',
llm_json=llm_json,
)
assert result is not None
assert result['price'] == 49.99
class TestLLMRestockPluginTokenAccounting:
def test_result_includes_token_metadata(self):
"""Plugin result must carry _tokens/_input_tokens/_output_tokens/_model."""
llm_json = '{"price": 49.99, "currency": "USD", "availability": "instock"}'
result = _call_plugin(
'<html><body>$49.99 In Stock</body></html>',
llm_json=llm_json,
)
assert result is not None
assert result['_tokens'] == 50
assert result['_input_tokens'] == 40
assert result['_output_tokens'] == 10
assert result['_model'] == 'gpt-4o-mini'
def test_token_keys_not_in_none_result(self):
"""When LLM returns nothing useful, result is None — no token metadata leaked."""
llm_json = '{"price": null, "currency": null, "availability": null}'
result = _call_plugin(
'<html><body>No pricing info</body></html>',
llm_json=llm_json,
)
assert result is None
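# Illustrative sketch only: the result shape the token-accounting tests above
# expect. None when the model found neither price nor availability, otherwise
# the parsed fields plus _tokens/_input_tokens/_output_tokens/_model. The
# helper name and signature are assumptions, not the plugin's real internals.
def _example_build_override_result(parsed, model, total_tokens, input_tokens, output_tokens):
    price = parsed.get('price')
    price = float(price) if price is not None else None  # model may return int or string
    availability = parsed.get('availability')
    if price is None and availability is None:
        return None
    return {
        'price': price,
        'currency': parsed.get('currency'),
        'availability': availability,
        '_tokens': total_tokens,
        '_input_tokens': input_tokens,
        '_output_tokens': output_tokens,
        '_model': model,
    }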
class TestLLMRestockPluginIntent:
def test_llm_intent_appended_to_user_prompt(self):
"""llm_intent should appear in the prompt sent to the LLM."""
from changedetectionio.processors.restock_diff.plugins import llm_restock
ds = _make_datastore()
llm_restock.datastore = ds
captured = {}
def fake_completion(model, messages, api_key, api_base, max_tokens):
captured['messages'] = messages
return ('{"price": 299.0, "currency": "USD", "availability": "instock"}', 50, 40, 10)
with patch('changedetectionio.llm.client.completion', side_effect=fake_completion):
result = llm_restock.get_itemprop_availability_override(
content='<html><body>$299 In Stock</body></html>',
fetcher_name='html_requests',
fetcher_instance=None,
url='https://example.com',
llm_intent='Alert me when price drops below $300',
)
assert result is not None
user_msg = next(m for m in captured['messages'] if m['role'] == 'user')
assert 'Alert me when price drops below $300' in user_msg['content']
def test_no_intent_prompt_unchanged(self):
"""Without llm_intent the user prompt should not include the intent line."""
from changedetectionio.processors.restock_diff.plugins import llm_restock
ds = _make_datastore()
llm_restock.datastore = ds
captured = {}
def fake_completion(model, messages, api_key, api_base, max_tokens):
captured['messages'] = messages
return ('{"price": 9.99, "currency": "USD", "availability": "instock"}', 20, 15, 5)
with patch('changedetectionio.llm.client.completion', side_effect=fake_completion):
llm_restock.get_itemprop_availability_override(
content='<html><body>$9.99 In Stock</body></html>',
fetcher_name='html_requests',
fetcher_instance=None,
url='https://example.com',
llm_intent=None,
)
user_msg = next(m for m in captured['messages'] if m['role'] == 'user')
assert 'notification intent' not in user_msg['content']
class TestLLMRestockPluginErrorHandling:
def test_returns_none_on_bad_json(self):
from changedetectionio.processors.restock_diff.plugins import llm_restock
ds = _make_datastore()
llm_restock.datastore = ds
with patch('changedetectionio.llm.client.completion',
return_value=('not valid json at all', 10, 8, 2)):
result = llm_restock.get_itemprop_availability_override(
content='<html><body>$49.99 In Stock</body></html>',
fetcher_name='html_requests',
fetcher_instance=None,
url='https://example.com',
)
assert result is None
def test_returns_none_on_llm_exception(self):
from changedetectionio.processors.restock_diff.plugins import llm_restock
ds = _make_datastore()
llm_restock.datastore = ds
with patch('changedetectionio.llm.client.completion',
side_effect=Exception("LLM timeout")):
result = llm_restock.get_itemprop_availability_override(
content='<html><body>$49.99 In Stock</body></html>',
fetcher_name='html_requests',
fetcher_instance=None,
url='https://example.com',
)
assert result is None
class TestLLMRestockPluginHTMLStripping:
def test_strip_html_removes_tags(self):
from changedetectionio.processors.restock_diff.plugins.llm_restock import _strip_html
result = _strip_html('<html><body><p>Price: $10</p></body></html>')
assert '<' not in result
assert 'Price: $10' in result
def test_strip_html_removes_scripts(self):
from changedetectionio.processors.restock_diff.plugins.llm_restock import _strip_html
html = '<html><head><script>var x = 1;</script></head><body>In Stock</body></html>'
result = _strip_html(html)
assert 'var x' not in result
assert 'In Stock' in result
def test_strip_html_decodes_entities(self):
from changedetectionio.processors.restock_diff.plugins.llm_restock import _strip_html
result = _strip_html('Price: 49&nbsp;&amp;&nbsp;in stock')
assert '&amp;' not in result
assert '&nbsp;' not in result
assert 'in stock' in result
def test_strip_html_truncates_long_content(self):
from changedetectionio.processors.restock_diff.plugins.llm_restock import _strip_html, _MAX_CONTENT_CHARS
long_html = '<p>' + 'x' * (_MAX_CONTENT_CHARS * 2) + '</p>'
result = _strip_html(long_html)
assert len(result) <= _MAX_CONTENT_CHARS
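# Illustrative sketch only: a reduction consistent with the _strip_html tests
# above (drop script/style blocks, strip tags, decode entities, truncate). The
# limit below is an assumed placeholder; the real _MAX_CONTENT_CHARS may differ.
import html
import re

_EXAMPLE_MAX_CONTENT_CHARS = 20000  # assumed value for this sketch only

def _example_strip_html(content):
    text = re.sub(r'(?is)<(script|style)[^>]*>.*?</\1>', ' ', content)  # remove script/style blocks
    text = re.sub(r'(?s)<[^>]+>', ' ', text)                            # strip remaining tags
    text = html.unescape(text)                                          # decode &nbsp; &amp; etc.
    text = re.sub(r'\s+', ' ', text).strip()
    return text[:_EXAMPLE_MAX_CONTENT_CHARS]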
@@ -1,325 +0,0 @@
"""
Tests that {{ llm_summary }} and {{ llm_intent }} notification tokens
are correctly populated in the notification pipeline.
Covers:
1. notification/handler.py lazy population logic (lines 367-372)
2. notification_service.py _llm_result / _llm_intent from watch n_object
3. End-to-end: tokens render in notification body/title
"""
import pytest
from unittest.mock import MagicMock, patch
from changedetectionio.notification_service import NotificationContextData
def _make_n_object(**extra):
n = NotificationContextData()
n.update({
'notification_body': '',
'notification_title': '',
'notification_format': 'text',
'notification_urls': ['json://localhost/'],
'uuid': 'test-uuid',
'watch_uuid': 'test-uuid',
'watch_url': 'https://example.com',
'current_snapshot': 'current text',
'prev_snapshot': 'previous text',
})
n.update(extra)
return n
# ---------------------------------------------------------------------------
# handler.py — lazy population of llm_summary / llm_intent
# ---------------------------------------------------------------------------
class TestHandlerLlmTokenPopulation:
"""
The notification handler checks if llm_summary or llm_intent tokens appear
in the notification text and lazily populates them from _llm_result.
"""
def _run_handler_llm_section(self, n_object):
"""
Replicate the exact logic from notification/handler.py lines 367-372.
This is tested directly to validate the handler's token population.
"""
scan_text = n_object.get('notification_body', '') + n_object.get('notification_title', '')
if 'llm_summary' in scan_text or 'llm_intent' in scan_text:
llm_result = n_object.get('_llm_result') or {}
n_object['llm_summary'] = llm_result.get('summary', '')
n_object['llm_intent'] = n_object.get('_llm_intent', '')
return n_object
def test_llm_summary_populated_when_token_in_body(self):
n = _make_n_object(
notification_body='Change detected! Summary: {{ llm_summary }}',
_llm_result={'important': True, 'summary': 'Price dropped from $500 to $400'},
_llm_intent='flag price drops',
)
result = self._run_handler_llm_section(n)
assert result['llm_summary'] == 'Price dropped from $500 to $400'
def test_llm_intent_populated_when_token_in_body(self):
n = _make_n_object(
notification_body='Intent was: {{ llm_intent }}',
_llm_result={'important': True, 'summary': 'some change'},
_llm_intent='flag price drops',
)
result = self._run_handler_llm_section(n)
assert result['llm_intent'] == 'flag price drops'
def test_llm_summary_in_title(self):
n = _make_n_object(
notification_title='[CD] {{ llm_summary }}',
notification_body='some body',
_llm_result={'important': True, 'summary': 'New job posted'},
_llm_intent='new jobs',
)
result = self._run_handler_llm_section(n)
assert result['llm_summary'] == 'New job posted'
def test_tokens_not_populated_when_absent_from_template(self):
"""Don't bother populating when tokens aren't used — avoid needless LLM calls."""
n = _make_n_object(
notification_body='Change at {{ watch_url }}',
notification_title='CD Alert',
_llm_result={'important': True, 'summary': 'should not appear'},
_llm_intent='test',
)
result = self._run_handler_llm_section(n)
# llm_summary and llm_intent should remain at their default None values
assert result.get('llm_summary') is None
assert result.get('llm_intent') is None
def test_empty_summary_when_no_llm_result(self):
n = _make_n_object(
notification_body='Summary: {{ llm_summary }}',
_llm_result=None,
_llm_intent='',
)
result = self._run_handler_llm_section(n)
assert result['llm_summary'] == ''
def test_empty_intent_when_not_set(self):
n = _make_n_object(
notification_body='Intent: {{ llm_intent }}',
_llm_result={'important': False, 'summary': ''},
)
result = self._run_handler_llm_section(n)
assert result['llm_intent'] == ''
def test_summary_from_unimportant_result(self):
"""Even when important=False the summary explains why — useful for debugging."""
n = _make_n_object(
notification_body='Summary: {{ llm_summary }}',
_llm_result={'important': False, 'summary': 'Only a copyright year changed'},
_llm_intent='flag price drops',
)
result = self._run_handler_llm_section(n)
assert result['llm_summary'] == 'Only a copyright year changed'
# ---------------------------------------------------------------------------
# notification_service.py — _llm_result / _llm_intent wired from watch
# ---------------------------------------------------------------------------
class TestNotificationServiceLlmAttachment:
"""
send_content_changed_notification() reads _llm_result and _llm_intent
from the watch object and attaches them to n_object so the handler can render tokens.
"""
def _make_watch(self, llm_result=None, llm_intent=''):
watch = MagicMock()
watch.get.side_effect = lambda key, default=None: {
'_llm_result': llm_result,
'_llm_intent': llm_intent,
'notification_urls': ['json://localhost/'],
'notification_title': '',
'notification_body': '',
'notification_format': 'text',
'notification_muted': False,
'notification_alert_count': 0,
}.get(key, default)
watch.history = {'1000': 'snap1', '2000': 'snap2'}
watch.get_history_snapshot = MagicMock(return_value='snapshot text')
watch.extra_notification_token_values = MagicMock(return_value={})
return watch
def test_llm_result_attached_to_n_object(self):
"""_llm_result from watch ends up in n_object for the notification handler."""
from changedetectionio.notification_service import NotificationService
llm_result = {'important': True, 'summary': 'Price dropped'}
watch = self._make_watch(llm_result=llm_result, llm_intent='flag price drops')
datastore = MagicMock()
datastore.data = {
'settings': {
'application': {
'active_base_url': 'http://localhost',
'notification_urls': [],
'notification_title': '',
'notification_body': '',
'notification_format': 'text',
'notification_muted': False,
}
},
'watching': {'test-uuid': watch},
}
datastore.get_all_tags_for_watch = MagicMock(return_value={})
captured = {}
def fake_queue_notification(n_object, watch, **kwargs):
captured['n_object'] = dict(n_object)
svc = NotificationService(datastore, MagicMock())
svc.queue_notification_for_watch = fake_queue_notification
svc.send_content_changed_notification('test-uuid')
assert '_llm_result' in captured['n_object']
assert captured['n_object']['_llm_result'] == llm_result
def test_llm_intent_attached_to_n_object(self):
"""_llm_intent from watch ends up in n_object."""
from changedetectionio.notification_service import NotificationService
watch = self._make_watch(
llm_result={'important': True, 'summary': 'test'},
llm_intent='flag price drops',
)
datastore = MagicMock()
datastore.data = {
'settings': {
'application': {
'active_base_url': 'http://localhost',
'notification_urls': [],
'notification_title': '',
'notification_body': '',
'notification_format': 'text',
'notification_muted': False,
}
},
'watching': {'test-uuid': watch},
}
datastore.get_all_tags_for_watch = MagicMock(return_value={})
captured = {}
def fake_queue_notification(n_object, watch, **kwargs):
captured['n_object'] = dict(n_object)
svc = NotificationService(datastore, MagicMock())
svc.queue_notification_for_watch = fake_queue_notification
svc.send_content_changed_notification('test-uuid')
assert captured['n_object']['_llm_intent'] == 'flag price drops'
def test_null_llm_result_when_no_evaluation(self):
"""When LLM wasn't evaluated, _llm_result is None — tokens render as empty."""
from changedetectionio.notification_service import NotificationService
watch = self._make_watch(llm_result=None, llm_intent='')
datastore = MagicMock()
datastore.data = {
'settings': {
'application': {
'active_base_url': 'http://localhost',
'notification_urls': [],
'notification_title': '',
'notification_body': '',
'notification_format': 'text',
'notification_muted': False,
}
},
'watching': {'test-uuid': watch},
}
datastore.get_all_tags_for_watch = MagicMock(return_value={})
captured = {}
def fake_queue_notification(n_object, watch, **kwargs):
captured['n_object'] = dict(n_object)
svc = NotificationService(datastore, MagicMock())
svc.queue_notification_for_watch = fake_queue_notification
svc.send_content_changed_notification('test-uuid')
assert captured['n_object']['_llm_result'] is None
assert captured['n_object']['_llm_intent'] == ''
# ---------------------------------------------------------------------------
# End-to-end: Jinja2 template rendering with llm_summary / llm_intent
# ---------------------------------------------------------------------------
class TestLlmTokenEndToEnd:
"""
Verify that the tokens render correctly through the Jinja2 engine
used for notification bodies.
"""
def test_llm_summary_renders_in_template(self):
from changedetectionio.jinja2_custom import render as jinja_render
from changedetectionio.notification_service import NotificationContextData
n = NotificationContextData()
n['llm_summary'] = 'Price dropped from $500 to $400'
n['watch_url'] = 'https://example.com'
rendered = jinja_render(
template_str='Change at {{watch_url}}: {{llm_summary}}',
**n
)
assert 'Price dropped from $500 to $400' in rendered
assert 'https://example.com' in rendered
def test_llm_intent_renders_in_template(self):
from changedetectionio.jinja2_custom import render as jinja_render
from changedetectionio.notification_service import NotificationContextData
n = NotificationContextData()
n['llm_intent'] = 'flag price drops below $300'
n['watch_url'] = 'https://example.com'
rendered = jinja_render(
template_str='Intent was: {{llm_intent}}',
**n
)
assert 'flag price drops below $300' in rendered
def test_llm_summary_empty_string_when_none(self):
from changedetectionio.jinja2_custom import render as jinja_render
from changedetectionio.notification_service import NotificationContextData
n = NotificationContextData()
# llm_summary defaults to None in NotificationContextData
rendered = jinja_render(
template_str='Summary: {{llm_summary or ""}}',
**n
)
assert rendered == 'Summary: '
def test_both_tokens_in_same_template(self):
from changedetectionio.jinja2_custom import render as jinja_render
from changedetectionio.notification_service import NotificationContextData
n = NotificationContextData()
n['llm_summary'] = 'New senior role posted'
n['llm_intent'] = 'alert on new engineering jobs'
n['watch_url'] = 'https://jobs.example.com'
rendered = jinja_render(
template_str='[{{llm_intent}}] {{llm_summary}} — {{watch_url}}',
**n
)
assert 'alert on new engineering jobs' in rendered
assert 'New senior role posted' in rendered
assert 'https://jobs.example.com' in rendered
@@ -1,137 +0,0 @@
"""
Unit tests for changedetectionio/llm/prompt_builder.py
All functions are pure; no external dependencies needed.
"""
import pytest
from changedetectionio.llm.prompt_builder import (
build_eval_prompt,
build_eval_system_prompt,
build_setup_prompt,
build_setup_system_prompt,
SNAPSHOT_CONTEXT_CHARS,
)
class TestBuildEvalPrompt:
def test_contains_intent(self):
prompt = build_eval_prompt(intent='Alert on price drops', diff='- $500\n+ $400')
assert 'Alert on price drops' in prompt
def test_contains_diff(self):
prompt = build_eval_prompt(intent='price', diff='- $500\n+ $400')
assert '- $500' in prompt
assert '+ $400' in prompt
def test_optional_url_included_when_provided(self):
prompt = build_eval_prompt(
intent='price',
diff='some diff',
url='https://example.com/product',
)
assert 'https://example.com/product' in prompt
def test_url_absent_when_not_provided(self):
prompt = build_eval_prompt(intent='price', diff='diff')
assert 'URL:' not in prompt
def test_optional_title_included_when_provided(self):
prompt = build_eval_prompt(
intent='price',
diff='diff',
title='Example Product Page',
)
assert 'Example Product Page' in prompt
def test_snapshot_context_included(self):
snapshot = 'Current price: $400. Stock: in stock. Description: widget.'
prompt = build_eval_prompt(
intent='price',
diff='- $500\n+ $400',
current_snapshot=snapshot,
)
# Snapshot excerpt should appear somewhere in the prompt
assert 'Current price' in prompt or '$400' in prompt
def test_large_snapshot_trimmed_to_budget(self):
# Snapshot larger than SNAPSHOT_CONTEXT_CHARS should be trimmed
large_snapshot = 'irrelevant content line\n' * 2000
prompt = build_eval_prompt(
intent='price drop',
diff='changed',
current_snapshot=large_snapshot,
)
# Prompt should not be astronomically large
assert len(prompt) < len(large_snapshot)
def test_empty_snapshot_skipped(self):
prompt_with = build_eval_prompt(intent='x', diff='d', current_snapshot='some text')
prompt_without = build_eval_prompt(intent='x', diff='d', current_snapshot='')
# Without snapshot should be shorter
assert len(prompt_without) < len(prompt_with)
class TestBuildEvalSystemPrompt:
def test_returns_string(self):
result = build_eval_system_prompt()
assert isinstance(result, str)
assert len(result) > 0
def test_instructs_json_only_output(self):
result = build_eval_system_prompt()
assert 'JSON' in result or 'json' in result.lower()
def test_defines_important_field(self):
result = build_eval_system_prompt()
assert 'important' in result
def test_defines_summary_field(self):
result = build_eval_system_prompt()
assert 'summary' in result
class TestBuildSetupPrompt:
def test_contains_intent(self):
prompt = build_setup_prompt(
intent='monitor footer changes',
snapshot_text='<footer>Copyright 2024</footer>',
)
assert 'monitor footer changes' in prompt
def test_contains_url_when_provided(self):
prompt = build_setup_prompt(
intent='price',
snapshot_text='price: $10',
url='https://shop.example.com',
)
assert 'https://shop.example.com' in prompt
def test_url_absent_when_not_provided(self):
prompt = build_setup_prompt(intent='price', snapshot_text='text')
assert 'URL:' not in prompt
def test_large_snapshot_trimmed(self):
big_snapshot = 'unrelated junk line\n' * 500
prompt = build_setup_prompt(
intent='monitor price section',
snapshot_text=big_snapshot,
)
assert len(prompt) < len(big_snapshot)
class TestBuildSetupSystemPrompt:
def test_returns_string(self):
result = build_setup_system_prompt()
assert isinstance(result, str)
def test_forbids_positional_selectors(self):
result = build_setup_system_prompt()
assert 'nth-child' in result or 'positional' in result
def test_defines_needs_prefilter_field(self):
result = build_setup_system_prompt()
assert 'needs_prefilter' in result
def test_defines_selector_field(self):
result = build_setup_system_prompt()
assert 'selector' in result
@@ -1,146 +0,0 @@
"""
Unit tests for changedetectionio/llm/response_parser.py
All functions are pure; no external dependencies needed.
"""
import pytest
from changedetectionio.llm.response_parser import (
_extract_json,
parse_eval_response,
parse_setup_response,
)
class TestExtractJson:
def test_plain_json_passes_through(self):
raw = '{"important": true, "summary": "price dropped"}'
assert _extract_json(raw) == raw
def test_strips_json_code_fence(self):
raw = '```json\n{"important": false, "summary": "no match"}\n```'
result = _extract_json(raw)
assert result.startswith('{')
assert '"important"' in result
def test_strips_plain_code_fence(self):
raw = '```\n{"important": true, "summary": "ok"}\n```'
result = _extract_json(raw)
assert result.startswith('{')
def test_extracts_json_from_surrounding_text(self):
raw = 'Here is my response: {"important": true, "summary": "match"} — done.'
result = _extract_json(raw)
assert result == '{"important": true, "summary": "match"}'
def test_multiline_json(self):
raw = '{\n "important": false,\n "summary": "nothing relevant"\n}'
result = _extract_json(raw)
assert '"important"' in result
class TestParseEvalResponse:
def test_valid_important_true(self):
raw = '{"important": true, "summary": "Price dropped from $500 to $400"}'
result = parse_eval_response(raw)
assert result['important'] is True
assert result['summary'] == 'Price dropped from $500 to $400'
def test_valid_important_false(self):
raw = '{"important": false, "summary": "Only a date counter changed"}'
result = parse_eval_response(raw)
assert result['important'] is False
assert 'date counter' in result['summary']
def test_markdown_fenced_response(self):
raw = '```json\n{"important": true, "summary": "New job posted"}\n```'
result = parse_eval_response(raw)
assert result['important'] is True
assert result['summary'] == 'New job posted'
def test_malformed_json_falls_back_to_safe_default(self):
result = parse_eval_response('this is not json at all')
assert result['important'] is False
assert result['summary'] == ''
def test_empty_string_falls_back(self):
result = parse_eval_response('')
assert result['important'] is False
def test_truthy_integer_coerced_to_bool(self):
raw = '{"important": 1, "summary": "yes"}'
result = parse_eval_response(raw)
assert result['important'] is True
def test_summary_stripped_of_whitespace(self):
raw = '{"important": false, "summary": " no match "}'
result = parse_eval_response(raw)
assert result['summary'] == 'no match'
def test_missing_summary_defaults_to_empty_string(self):
raw = '{"important": true}'
result = parse_eval_response(raw)
assert result['summary'] == ''
def test_extra_keys_ignored(self):
raw = '{"important": false, "summary": "skip", "confidence": 0.3, "debug": "xyz"}'
result = parse_eval_response(raw)
assert result['important'] is False
assert result['summary'] == 'skip'
class TestParseSetupResponse:
def test_no_prefilter_needed(self):
raw = '{"needs_prefilter": false, "selector": null, "reason": "intent is global"}'
result = parse_setup_response(raw)
assert result['needs_prefilter'] is False
assert result['selector'] is None
def test_semantic_selector_accepted(self):
raw = '{"needs_prefilter": true, "selector": "footer", "reason": "intent references footer"}'
result = parse_setup_response(raw)
assert result['needs_prefilter'] is True
assert result['selector'] == 'footer'
def test_attribute_selector_accepted(self):
raw = '{"needs_prefilter": true, "selector": "[class*=\'price\']", "reason": "pricing section"}'
result = parse_setup_response(raw)
assert result['needs_prefilter'] is True
assert result['selector'] is not None
def test_nth_child_positional_selector_rejected(self):
raw = '{"needs_prefilter": true, "selector": "div:nth-child(3)", "reason": "third div"}'
result = parse_setup_response(raw)
assert result['selector'] is None
assert result['needs_prefilter'] is False
def test_nth_of_type_positional_selector_rejected(self):
raw = '{"needs_prefilter": true, "selector": "p:nth-of-type(2)", "reason": "second p"}'
result = parse_setup_response(raw)
assert result['selector'] is None
assert result['needs_prefilter'] is False
def test_eq_positional_selector_rejected(self):
raw = '{"needs_prefilter": true, "selector": "div:eq(0)", "reason": "first div"}'
result = parse_setup_response(raw)
assert result['selector'] is None
def test_xpath_positional_selector_rejected(self):
raw = '{"needs_prefilter": true, "selector": "//*[2]", "reason": "second element"}'
result = parse_setup_response(raw)
assert result['selector'] is None
def test_selector_forced_to_null_when_needs_prefilter_false(self):
# Even if selector is provided alongside needs_prefilter=false, selector is nulled
raw = '{"needs_prefilter": false, "selector": "main", "reason": "not needed"}'
result = parse_setup_response(raw)
assert result['selector'] is None
def test_malformed_json_safe_defaults(self):
result = parse_setup_response('garbage text')
assert result['needs_prefilter'] is False
assert result['selector'] is None
assert result['reason'] == ''
def test_empty_response_safe_defaults(self):
result = parse_setup_response('')
assert result['needs_prefilter'] is False
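# Illustrative sketch only: a positional-selector filter consistent with the
# rejection tests above (nth-child, nth-of-type, :eq(), XPath numeric index).
# Only the selector handling is sketched; per the tests, the real parser also
# clears needs_prefilter when it rejects a selector, and may use different rules.
import re

_EXAMPLE_POSITIONAL_RE = re.compile(r'(:nth-child|:nth-of-type|:eq\(|\[\d+\])', re.IGNORECASE)

def _example_sanitise_selector(selector, needs_prefilter):
    if not needs_prefilter or not selector:
        return None
    if _EXAMPLE_POSITIONAL_RE.search(selector):
        return None
    return selector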
@@ -1,106 +0,0 @@
#!/usr/bin/env python3
"""
Tests for the /diff/<uuid>/download-patch endpoint.
The route should accept from_version and to_version query parameters,
read those two snapshots from the watch history, generate a unified diff
patch, and return it as a downloadable text/plain file.
"""
from flask import url_for
from changedetectionio.tests.util import live_server_setup, delete_all_watches, wait_for_all_checks
def _add_watch_with_history(app, url, v1_text, v2_text):
"""
Add a watch and inject two synthetic snapshots into its history so we
can test the download-patch route without hitting a live fetch cycle.
"""
datastore = app.config['DATASTORE']
uuid = datastore.add_watch(url=url, extras={})
watch = datastore.data['watching'][uuid]
# Write the two snapshots directly via save_history_blob
# Args: contents (str), timestamp (str), snapshot_id (str)
watch.save_history_blob(v1_text, '1000000000', 'snap-v1')
watch.save_history_blob(v2_text, '1000000001', 'snap-v2')
return uuid
# ---------------------------------------------------------------------------
# Tests
# ---------------------------------------------------------------------------
def test_download_patch_returns_unified_diff(client, live_server, measure_memory_usage, datastore_path):
"""
The endpoint should return a .patch file whose content is a valid unified
diff between the two requested snapshots.
"""
live_server_setup(live_server)
delete_all_watches(client)
app = client.application
test_url = url_for('test_endpoint', content_type='text/html', content='hello', _external=True)
v1 = 'line one\nline two\nline three\n'
v2 = 'line one\nline two modified\nline three\nline four\n'
uuid = _add_watch_with_history(app, test_url, v1, v2)
res = client.get(
url_for('ui.ui_diff.download_patch', uuid=uuid,
from_version='1000000000', to_version='1000000001'),
follow_redirects=True,
)
assert res.status_code == 200, f"Expected 200, got {res.status_code}: {res.data[:200]}"
assert 'text/plain' in res.headers.get('Content-Type', '')
# No forced download — should open inline in the browser
assert 'attachment' not in res.headers.get('Content-Disposition', '')
patch = res.data.decode('utf-8')
assert '---' in patch or '+' in patch, "Response should contain unified diff markers"
assert 'line two modified' in patch or '+line two modified' in patch
assert '-line two' in patch
def test_download_patch_link_present_in_diff_page(client, live_server, measure_memory_usage, datastore_path):
"""
The diff history page HTML should contain a 'Download difference patch' link
pointing to the download-patch route when from_version and to_version are set.
"""
live_server_setup(live_server)
delete_all_watches(client)
app = client.application
test_url = url_for('test_endpoint', content_type='text/html', content='initial content', _external=True)
uuid = _add_watch_with_history(app, test_url, 'initial content\n', 'updated content\n')
# Load the diff page without explicit versions — should default to last two
res = client.get(
url_for('ui.ui_diff.diff_history_page', uuid=uuid),
follow_redirects=True,
)
assert res.status_code == 200
html = res.data.decode('utf-8')
assert 'Download difference patch' in html
assert 'download-patch' in html
def test_download_patch_unknown_uuid_returns_404(client, live_server, measure_memory_usage, datastore_path):
"""
Requesting a patch for a non-existent watch should return 404.
"""
live_server_setup(live_server)
delete_all_watches(client)
res = client.get(
url_for('ui.ui_diff.download_patch', uuid='00000000-0000-0000-0000-000000000000',
from_version='1000000000', to_version='1000000001'),
)
assert res.status_code == 404
@@ -1,183 +0,0 @@
#!/usr/bin/env python3
"""
Tests for the 'AI: every change between versions' (all_changes=1) feature.
Covers:
- all_changes=1 builds a multi-segment diff (pairwise across intermediate snapshots)
- all_changes=0 (default) uses a single from→to diff
- the two modes are cached under separate keys (no cross-contamination)
- a repeated all_changes=1 call returns the cached result without re-calling the LLM
"""
from unittest.mock import patch, call
from flask import url_for
from changedetectionio.tests.util import delete_all_watches
SNAP1 = "apple\nbanana\n"
SNAP2 = "apple\nbanana\ncherry\n"
SNAP3 = "apple\nbanana\ncherry\ndate\n"
TS1 = "2000000001"
TS2 = "2000000002"
TS3 = "2000000003"
def _configure_llm(client):
ds = client.application.config.get('DATASTORE')
ds.data['settings']['application']['llm'] = {'model': 'gpt-4o-mini', 'api_key': 'sk-test'}
def _make_watch_with_three_snapshots(client):
ds = client.application.config.get('DATASTORE')
uuid = ds.add_watch(url='https://example.com/allchanges')
watch = ds.data['watching'][uuid]
watch.save_history_blob(SNAP1, TS1, 'snap1')
watch.save_history_blob(SNAP2, TS2, 'snap2')
watch.save_history_blob(SNAP3, TS3, 'snap3')
return uuid, watch
# ---------------------------------------------------------------------------
# Multi-segment diff content reaches the LLM
# ---------------------------------------------------------------------------
def test_all_changes_sends_multi_segment_diff_to_llm(
client, live_server, measure_memory_usage, datastore_path):
"""
With all_changes=1 the diff passed to summarise_change must contain
two pairwise segments (TS1→TS2 and TS2→TS3), not just a single diff.
"""
_configure_llm(client)
uuid, _ = _make_watch_with_three_snapshots(client)
captured_diff = {}
def fake_summarise(watch, datastore, diff, current_snapshot=None):
captured_diff['diff'] = diff
return 'Multi-step summary.'
with patch('changedetectionio.llm.evaluator.summarise_change', side_effect=fake_summarise):
res = client.get(url_for(
'ui.ui_diff.diff_llm_summary', uuid=uuid,
from_version=TS1, to_version=TS3, all_changes=1,
))
assert res.status_code == 200
data = res.get_json()
assert data['summary'] == 'Multi-step summary.'
assert data['error'] is None
diff_sent = captured_diff.get('diff', '')
# Both segment headers must be present
assert f'{TS1} \u2192 {TS2}' in diff_sent, f"Missing TS1→TS2 header in: {diff_sent!r}"
assert f'{TS2} \u2192 {TS3}' in diff_sent, f"Missing TS2→TS3 header in: {diff_sent!r}"
delete_all_watches(client)
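# Illustrative sketch only: how a multi-segment diff like the one asserted above
# could be assembled from consecutive snapshots with difflib. Only the
# "TS_A → TS_B" segment headers are asserted; the real route may format the
# segment bodies differently.
import difflib

def _example_multi_segment_diff(snapshots):
    # snapshots: ordered list of (timestamp, text) pairs
    segments = []
    for (ts_a, text_a), (ts_b, text_b) in zip(snapshots, snapshots[1:]):
        body = ''.join(difflib.unified_diff(
            text_a.splitlines(keepends=True),
            text_b.splitlines(keepends=True)))
        segments.append(f"{ts_a} \u2192 {ts_b}\n{body}")
    return '\n'.join(segments)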
# ---------------------------------------------------------------------------
# Single-range diff (all_changes=0, the default)
# ---------------------------------------------------------------------------
def test_default_mode_sends_single_diff_to_llm(
client, live_server, measure_memory_usage, datastore_path):
"""
With all_changes=0 (or omitted) summarise_change receives a plain
unified diff between from_version and to_version only; no segment headers.
"""
_configure_llm(client)
uuid, _ = _make_watch_with_three_snapshots(client)
captured_diff = {}
def fake_summarise(watch, datastore, diff, current_snapshot=None):
captured_diff['diff'] = diff
return 'Single-range summary.'
with patch('changedetectionio.llm.evaluator.summarise_change', side_effect=fake_summarise):
res = client.get(url_for(
'ui.ui_diff.diff_llm_summary', uuid=uuid,
from_version=TS1, to_version=TS3, all_changes=0,
))
assert res.status_code == 200
diff_sent = captured_diff.get('diff', '')
assert '\u2192' not in diff_sent, "Segment headers should not appear in single-range mode"
delete_all_watches(client)
# ---------------------------------------------------------------------------
# Cache key separation: all_changes=1 and all_changes=0 don't share cache
# ---------------------------------------------------------------------------
def test_all_changes_and_direct_use_separate_cache_keys(
client, live_server, measure_memory_usage, datastore_path):
"""
A cached all_changes=1 summary must not be served for an all_changes=0
request on the same from/to pair, and vice-versa.
"""
_configure_llm(client)
uuid, _ = _make_watch_with_three_snapshots(client)
call_count = {'n': 0}
def fake_summarise(watch, datastore, diff, current_snapshot=None):
call_count['n'] += 1
return f'Summary call #{call_count["n"]}'
with patch('changedetectionio.llm.evaluator.summarise_change', side_effect=fake_summarise):
# First call: all_changes=1
r1 = client.get(url_for(
'ui.ui_diff.diff_llm_summary', uuid=uuid,
from_version=TS1, to_version=TS3, all_changes=1,
))
# Second call: all_changes=0 — must NOT hit the cache from above
r2 = client.get(url_for(
'ui.ui_diff.diff_llm_summary', uuid=uuid,
from_version=TS1, to_version=TS3, all_changes=0,
))
assert call_count['n'] == 2, "LLM should be called twice (separate cache keys)"
assert r1.get_json()['summary'] != r2.get_json()['summary']
delete_all_watches(client)
# ---------------------------------------------------------------------------
# Caching: second all_changes=1 call returns cached result
# ---------------------------------------------------------------------------
def test_all_changes_result_is_cached(
client, live_server, measure_memory_usage, datastore_path):
"""
A second all_changes=1 request for the same from/to pair must be
served from cache; summarise_change must only be called once.
"""
_configure_llm(client)
uuid, _ = _make_watch_with_three_snapshots(client)
call_count = {'n': 0}
def fake_summarise(watch, datastore, diff, current_snapshot=None):
call_count['n'] += 1
return 'Cached multi-step summary.'
with patch('changedetectionio.llm.evaluator.summarise_change', side_effect=fake_summarise):
r1 = client.get(url_for(
'ui.ui_diff.diff_llm_summary', uuid=uuid,
from_version=TS1, to_version=TS3, all_changes=1,
))
r2 = client.get(url_for(
'ui.ui_diff.diff_llm_summary', uuid=uuid,
from_version=TS1, to_version=TS3, all_changes=1,
))
assert call_count['n'] == 1, "LLM should only be called once; second request should be cached"
assert r1.get_json()['summary'] == r2.get_json()['summary'] == 'Cached multi-step summary.'
assert r2.get_json().get('cached') is True
delete_all_watches(client)
@@ -1,353 +0,0 @@
#!/usr/bin/env python3
"""
Security tests: LLM API key must never appear in any API response.
The LLM API key is a secret credential stored in
datastore.data['settings']['application']['llm']['api_key'].
It must never be leaked through any API endpoint (watch GET/list,
tag GET/list, system-info, notifications), even when the calling client
has a valid API token (which is a different kind of credential).
These tests set a recognisable fake key and then exhaustively check every
API endpoint's response body for the key string.
"""
import json
from flask import url_for
from changedetectionio.tests.util import live_server_setup, delete_all_watches
CANARY_KEY = 'sk-CANARY-SECRET-DO-NOT-EXPOSE-12345'
def _configure_llm(datastore, api_key=CANARY_KEY):
"""Inject a recognisable API key into the datastore LLM settings."""
app = datastore.data['settings']['application']
if 'llm' not in app:
app['llm'] = {}
app['llm'].update({
'model': 'gpt-4o-mini',
'api_key': api_key,
})
def _api_token(client):
return client.application.config.get('DATASTORE').data['settings']['application'].get('api_access_token')
def _key_in_response(response, key=CANARY_KEY) -> bool:
"""Return True if the canary key appears anywhere in the response body."""
body = response.data.decode('utf-8', errors='replace')
return key in body
# ---------------------------------------------------------------------------
# Watch endpoints
# ---------------------------------------------------------------------------
def test_watch_get_does_not_expose_llm_api_key(
client, live_server, measure_memory_usage, datastore_path):
"""GET /api/v1/watch/<uuid> must not contain the LLM API key."""
ds = client.application.config.get('DATASTORE')
_configure_llm(ds)
api_token = _api_token(client)
test_url = url_for('test_endpoint', _external=True)
res = client.post(
'/api/v1/watch',
data=json.dumps({'url': test_url}),
headers={'content-type': 'application/json', 'x-api-key': api_token},
follow_redirects=True,
)
assert res.status_code == 201
uuid = res.json.get('uuid')
res = client.get(
f'/api/v1/watch/{uuid}',
headers={'x-api-key': api_token},
)
assert res.status_code == 200
assert not _key_in_response(res), \
"LLM API key leaked in GET /api/v1/watch/<uuid> response"
delete_all_watches(client)
def test_watch_list_does_not_expose_llm_api_key(
client, live_server, measure_memory_usage, datastore_path):
"""GET /api/v1/watches must not contain the LLM API key."""
ds = client.application.config.get('DATASTORE')
_configure_llm(ds)
api_token = _api_token(client)
test_url = url_for('test_endpoint', _external=True)
client.post(
'/api/v1/watch',
data=json.dumps({'url': test_url}),
headers={'content-type': 'application/json', 'x-api-key': api_token},
follow_redirects=True,
)
res = client.get('/api/v1/watch', headers={'x-api-key': api_token})
assert res.status_code == 200
assert not _key_in_response(res), \
"LLM API key leaked in GET /api/v1/watch (list) response"
delete_all_watches(client)
def test_watch_put_response_does_not_expose_llm_api_key(
client, live_server, measure_memory_usage, datastore_path):
"""PUT /api/v1/watch/<uuid> response must not echo back the LLM API key."""
ds = client.application.config.get('DATASTORE')
_configure_llm(ds)
api_token = _api_token(client)
test_url = url_for('test_endpoint', _external=True)
res = client.post(
'/api/v1/watch',
data=json.dumps({'url': test_url}),
headers={'content-type': 'application/json', 'x-api-key': api_token},
follow_redirects=True,
)
assert res.status_code == 201
uuid = res.json.get('uuid')
res = client.put(
f'/api/v1/watch/{uuid}',
headers={'x-api-key': api_token, 'content-type': 'application/json'},
data=json.dumps({'url': test_url, 'title': 'updated'}),
)
assert res.status_code == 200
assert not _key_in_response(res), \
"LLM API key leaked in PUT /api/v1/watch/<uuid> response"
delete_all_watches(client)
# ---------------------------------------------------------------------------
# Tag endpoints
# ---------------------------------------------------------------------------
def test_tag_get_does_not_expose_llm_api_key(
client, live_server, measure_memory_usage, datastore_path):
"""GET /api/v1/tag/<uuid> must not contain the LLM API key."""
ds = client.application.config.get('DATASTORE')
_configure_llm(ds)
api_token = _api_token(client)
tag_uuid = ds.add_tag('security-test-tag')
res = client.get(
f'/api/v1/tag/{tag_uuid}',
headers={'x-api-key': api_token},
)
assert res.status_code == 200
assert not _key_in_response(res), \
"LLM API key leaked in GET /api/v1/tag/<uuid> response"
delete_all_watches(client)
def test_tag_list_does_not_expose_llm_api_key(
client, live_server, measure_memory_usage, datastore_path):
"""GET /api/v1/tags must not contain the LLM API key."""
ds = client.application.config.get('DATASTORE')
_configure_llm(ds)
api_token = _api_token(client)
res = client.get('/api/v1/tags', headers={'x-api-key': api_token})
assert res.status_code == 200
assert not _key_in_response(res), \
"LLM API key leaked in GET /api/v1/tags response"
delete_all_watches(client)
# ---------------------------------------------------------------------------
# System / global endpoints
# ---------------------------------------------------------------------------
def test_system_info_does_not_expose_llm_api_key(
client, live_server, measure_memory_usage, datastore_path):
"""GET /api/v1/systeminfo must not contain the LLM API key."""
ds = client.application.config.get('DATASTORE')
_configure_llm(ds)
api_token = _api_token(client)
res = client.get('/api/v1/systeminfo', headers={'x-api-key': api_token})
assert res.status_code == 200
assert not _key_in_response(res), \
"LLM API key leaked in GET /api/v1/systeminfo response"
delete_all_watches(client)
def test_notifications_api_does_not_expose_llm_api_key(
client, live_server, measure_memory_usage, datastore_path):
"""GET/POST/PUT /api/v1/notifications must not contain the LLM API key."""
ds = client.application.config.get('DATASTORE')
_configure_llm(ds)
api_token = _api_token(client)
# GET
res = client.get('/api/v1/notifications', headers={'x-api-key': api_token})
assert res.status_code == 200
assert not _key_in_response(res), \
"LLM API key leaked in GET /api/v1/notifications response"
# POST — add a notification URL; response must not echo back LLM config
res = client.post(
'/api/v1/notifications',
headers={'x-api-key': api_token, 'content-type': 'application/json'},
data=json.dumps({'notification_urls': ['json://localhost/']}),
)
assert res.status_code in (200, 201, 400) # 400 if URL invalid on server; still no key
assert not _key_in_response(res), \
"LLM API key leaked in POST /api/v1/notifications response"
# PUT — replace notification URLs; response must not include LLM config
res = client.put(
'/api/v1/notifications',
headers={'x-api-key': api_token, 'content-type': 'application/json'},
data=json.dumps({'notification_urls': ['json://localhost/']}),
)
assert res.status_code in (200, 201, 400)
assert not _key_in_response(res), \
"LLM API key leaked in PUT /api/v1/notifications response"
delete_all_watches(client)
def test_search_api_does_not_expose_llm_api_key(
client, live_server, measure_memory_usage, datastore_path):
"""GET /api/v1/search must not contain the LLM API key."""
ds = client.application.config.get('DATASTORE')
_configure_llm(ds)
api_token = _api_token(client)
test_url = url_for('test_endpoint', _external=True)
client.post(
'/api/v1/watch',
data=json.dumps({'url': test_url}),
headers={'content-type': 'application/json', 'x-api-key': api_token},
follow_redirects=True,
)
res = client.get('/api/v1/search?q=endpoint', headers={'x-api-key': api_token})
assert res.status_code == 200
assert not _key_in_response(res), \
"LLM API key leaked in GET /api/v1/search response"
delete_all_watches(client)
def test_openapi_spec_does_not_expose_llm_api_key(
client, live_server, measure_memory_usage, datastore_path):
"""
GET /api/v1/full-spec returns the static OpenAPI schema YAML.
It must not embed any runtime secrets (LLM API key).
"""
ds = client.application.config.get('DATASTORE')
_configure_llm(ds)
api_token = _api_token(client)
# Spec endpoint has no auth requirement, but test with and without key
res = client.get('/api/v1/full-spec')
assert res.status_code == 200
assert not _key_in_response(res), \
"LLM API key leaked in GET /api/v1/full-spec response"
delete_all_watches(client)
def test_no_api_settings_endpoint_exists(
client, live_server, measure_memory_usage, datastore_path):
"""
There is currently no /api/v1/settings endpoint.
If one is added in the future it must be covered by its own
security tests before reaching production.
This test acts as a canary; it should FAIL if a settings endpoint
is accidentally wired up without review.
"""
api_token = _api_token(client)
# GET and POST to /api/v1/settings must not succeed — no settings endpoint exists.
# 404 = route not found; 405 = route exists for some methods but not this one.
# Either means there is no working read/write settings endpoint.
# A 200/201/400 would indicate a real endpoint was wired up.
res_get = client.get('/api/v1/settings', headers={'x-api-key': api_token})
assert res_get.status_code in (404, 405), \
(f"Unexpected /api/v1/settings GET returned {res_get.status_code}. "
"A settings endpoint must have explicit LLM key security tests before shipping.")
res_post = client.post(
'/api/v1/settings',
headers={'x-api-key': api_token, 'content-type': 'application/json'},
data=json.dumps({}),
)
assert res_post.status_code in (404, 405), \
(f"Unexpected /api/v1/settings POST returned {res_post.status_code}. "
"A settings endpoint must have explicit LLM key security tests before shipping.")
delete_all_watches(client)
# ---------------------------------------------------------------------------
# Settings HTML page — key must not appear in the form source HTML
# ---------------------------------------------------------------------------
def test_settings_page_does_not_render_llm_api_key_in_plaintext(
client, live_server, measure_memory_usage, datastore_path):
"""
The settings page renders the API key form. Because the field uses
PasswordField, WTForms must NOT embed the current key value in the HTML
(PasswordField intentionally omits the value attribute for security).
"""
ds = client.application.config.get('DATASTORE')
_configure_llm(ds)
res = client.get(url_for('settings.settings_page'))
assert res.status_code == 200
body = res.data.decode('utf-8', errors='replace')
assert CANARY_KEY not in body, \
"LLM API key appeared in plaintext in the settings page HTML source. " \
"The llm_api_key field must be a PasswordField so the value is never rendered."
def test_settings_form_preserves_api_key_when_submitted_blank(
client, live_server, measure_memory_usage, datastore_path):
"""
When the settings form is saved with an empty llm_api_key (which happens
every time because PasswordField never pre-populates), the existing key
must be preserved rather than cleared.
"""
ds = client.application.config.get('DATASTORE')
_configure_llm(ds, api_key='sk-should-be-kept')
res = client.post(
url_for('settings.settings_page'),
data={
'llm-llm_model': 'gpt-4o',
'llm-llm_api_key': '', # blank — PasswordField behaviour
'llm-llm_api_base': '',
'application-pager_size': '50',
'application-notification_format': 'System default',
'requests-time_between_check-days': '0',
'requests-time_between_check-hours': '0',
'requests-time_between_check-minutes': '5',
'requests-time_between_check-seconds': '0',
'requests-time_between_check-weeks': '0',
'requests-workers': '10',
'requests-timeout': '60',
},
follow_redirects=True,
)
assert res.status_code == 200
saved_key = ds.data['settings']['application'].get('llm', {}).get('api_key', '')
assert saved_key == 'sk-should-be-kept', \
f"Blank PasswordField submission must not clear the existing API key (got '{saved_key}')"
delete_all_watches(client)
@@ -1,475 +0,0 @@
#!/usr/bin/env python3
"""
Integration tests for AI Change Summary:
- llm_change_summary field saved via watch edit form
- llm_change_summary cascades from tag to watches
- {{ diff }} replaced by AI summary in notifications
- {{ raw_diff }} always contains original diff
- summarise_change only runs when change is detected
"""
import json
import time
from unittest.mock import patch
from flask import url_for
from changedetectionio.tests.util import wait_for_all_checks, delete_all_watches
HTML_V1 = "<html><body><ul><li>Item A</li><li>Item B</li></ul></body></html>"
HTML_V2 = "<html><body><ul><li>Item A</li><li>Item B</li><li>Item C — NEW</li></ul></body></html>"
def _set_response(datastore_path, content):
import os
with open(os.path.join(datastore_path, "endpoint-content.txt"), "w") as f:
f.write(content)
def _configure_llm(client):
ds = client.application.config.get('DATASTORE')
ds.data['settings']['application']['llm'] = {'model': 'gpt-4o-mini', 'api_key': 'sk-test'}
# ---------------------------------------------------------------------------
# Form field persistence
# ---------------------------------------------------------------------------
def test_llm_change_summary_saved_via_edit_form(
client, live_server, measure_memory_usage, datastore_path):
"""llm_change_summary submitted via watch edit form is persisted."""
_set_response(datastore_path, HTML_V1)
_configure_llm(client)
test_url = url_for('test_endpoint', _external=True)
uuid = client.application.config.get('DATASTORE').add_watch(url=test_url)
res = client.post(
url_for("ui.ui_edit.edit_page", uuid=uuid),
data={
"url": test_url,
"fetch_backend": "html_requests",
"time_between_check_use_default": "y",
"llm_change_summary": "List new items added as bullet points. Translate to English.",
},
follow_redirects=True,
)
assert b"Updated watch." in res.data
watch = client.application.config.get('DATASTORE').data['watching'][uuid]
assert watch.get('llm_change_summary') == "List new items added as bullet points. Translate to English."
delete_all_watches(client)
def test_llm_change_summary_cascades_from_tag(
client, live_server, measure_memory_usage, datastore_path):
"""llm_change_summary set on a tag is resolved for watches in that tag."""
from changedetectionio.llm.evaluator import resolve_llm_field
ds = client.application.config.get('DATASTORE')
_configure_llm(client)
_set_response(datastore_path, HTML_V1)
test_url = url_for('test_endpoint', _external=True)
# Create a tag with llm_change_summary
tag_uuid = ds.add_tag('events-group')
ds.data['settings']['application']['tags'][tag_uuid]['llm_change_summary'] = 'Summarise new events'
# Watch in that tag, no own summary prompt
uuid = ds.add_watch(url=test_url)
ds.data['watching'][uuid]['tags'] = [tag_uuid]
ds.data['watching'][uuid]['llm_change_summary'] = ''
watch = ds.data['watching'][uuid]
value, source = resolve_llm_field(watch, ds, 'llm_change_summary')
assert value == 'Summarise new events'
assert source == 'events-group'
delete_all_watches(client)
# ---------------------------------------------------------------------------
# Notification token behaviour
# ---------------------------------------------------------------------------
def test_diff_token_replaced_by_ai_summary_in_notification(
client, live_server, measure_memory_usage, datastore_path):
"""
When _llm_change_summary is set on the watch, the notification handler
must substitute it into {{ diff }} and preserve {{ raw_diff }}.
"""
from changedetectionio.notification.handler import process_notification
n_object = {
'notification_urls': ['json://localhost/'],
'notification_title': 'Change detected',
'notification_body': 'Summary: {{diff}}\nRaw: {{raw_diff}}',
'notification_format': 'text',
'uuid': 'test-uuid',
'watch_url': 'https://example.com',
'current_snapshot': 'Item A\nItem B\nItem C',
'prev_snapshot': 'Item A\nItem B',
'diff': '', # populated by add_rendered_diff_to_notification_vars
'raw_diff': '',
'_llm_change_summary': '1 new item added: Item C',
'_llm_result': None,
'_llm_intent': '',
'base_url': 'http://localhost:5000/',
'watch_mime_type': 'text/plain',
'triggered_text': '',
}
# We only need to verify the token substitution logic, not send a real notification
# Invoke just enough of the handler to check n_object state after substitution
from changedetectionio.notification_service import add_rendered_diff_to_notification_vars
diff_vars = add_rendered_diff_to_notification_vars(
notification_scan_text=n_object['notification_body'] + n_object['notification_title'],
current_snapshot=n_object['current_snapshot'],
prev_snapshot=n_object['prev_snapshot'],
word_diff=False,
)
n_object.update(diff_vars)
# Simulate what handler.py does
n_object['raw_diff'] = n_object.get('diff', '')
llm_summary = (n_object.get('_llm_change_summary') or '').strip()
if llm_summary:
n_object['diff'] = llm_summary
assert n_object['diff'] == '1 new item added: Item C'
assert 'Item C' in n_object['raw_diff'] or n_object['raw_diff'] != n_object['diff']
delete_all_watches(client)
# ---------------------------------------------------------------------------
# Error surfacing — rate limit / provider errors reach the AJAX endpoint
# ---------------------------------------------------------------------------
def test_llm_summary_ajax_surfaces_rate_limit_error(
client, live_server, measure_memory_usage, datastore_path):
"""
When the LLM call raises a RateLimitError, the /llm-summary AJAX route must
return JSON {"summary": null, "error": "<readable message>"} with a 500
status, not "LLM returned empty summary".
"""
from unittest.mock import patch
_configure_llm(client)
ds = client.application.config.get('DATASTORE')
test_url = url_for('test_endpoint', content_type='text/html', content='v1', _external=True)
uuid = ds.add_watch(url=test_url)
watch = ds.data['watching'][uuid]
watch.save_history_blob('snapshot one\n', '2000000000', 'snap1')
watch.save_history_blob('snapshot two\n', '2000000001', 'snap2')
# Build a realistic litellm RateLimitError string (matches real exception format)
rate_limit_msg = (
'litellm.RateLimitError: litellm.RateLimitError: geminiException - '
'{"error": {"code": 429, "message": "You exceeded your current quota, '
'please check your plan and billing details.", "status": "RESOURCE_EXHAUSTED"}}'
)
import litellm as _real_litellm
exc = _real_litellm.RateLimitError(
rate_limit_msg, llm_provider='gemini', model='gemini/gemini-2.5-pro'
)
with patch('litellm.completion', side_effect=exc):
res = client.get(
url_for('ui.ui_diff.diff_llm_summary', uuid=uuid,
from_version='2000000000', to_version='2000000001'),
)
assert res.status_code == 500
data = res.get_json()
assert data['summary'] is None
assert data['error'] # non-empty
assert 'LLM returned empty summary' not in data['error']
# Should contain the human-readable quota message, not a raw JSON blob
assert '{' not in data['error'], f"Error still contains raw JSON: {data['error']}"
delete_all_watches(client)
def test_llm_summary_ajax_error_displayed_not_silenced(
client, live_server, measure_memory_usage, datastore_path):
"""
Any non-success response from /llm-summary that has an 'error' key
should be surfaced; verify the JSON contract (error present, summary absent).
Auth errors, timeout errors, etc. should follow the same shape.
"""
from unittest.mock import patch
_configure_llm(client)
ds = client.application.config.get('DATASTORE')
test_url = url_for('test_endpoint', content_type='text/html', content='v1', _external=True)
uuid = ds.add_watch(url=test_url)
watch = ds.data['watching'][uuid]
watch.save_history_blob('old content\n', '3000000000', 'snap-a')
watch.save_history_blob('new content\n', '3000000001', 'snap-b')
import litellm as _real_litellm
exc = _real_litellm.AuthenticationError(
'litellm.AuthenticationError: Invalid API key.',
llm_provider='openai', model='gpt-4o-mini'
)
with patch('litellm.completion', side_effect=exc):
res = client.get(
url_for('ui.ui_diff.diff_llm_summary', uuid=uuid,
from_version='3000000000', to_version='3000000001'),
)
assert res.status_code == 500
data = res.get_json()
assert data['summary'] is None
assert data['error']
assert 'LLM returned empty summary' not in data['error']
delete_all_watches(client)
# ---------------------------------------------------------------------------
# Global default prompt cascade
# ---------------------------------------------------------------------------
def _set_global_default(ds, prompt):
ds.data['settings']['application']['llm_change_summary_default'] = prompt
def test_global_default_used_when_watch_and_tag_have_none(
client, live_server, measure_memory_usage, datastore_path):
"""
get_effective_summary_prompt returns the global default when neither the
watch nor any of its tags has llm_change_summary set.
"""
from changedetectionio.llm.evaluator import get_effective_summary_prompt
ds = client.application.config.get('DATASTORE')
_configure_llm(client)
uuid = ds.add_watch(url='https://example.com')
watch = ds.data['watching'][uuid]
watch['llm_change_summary'] = ''
_set_global_default(ds, 'Global: summarise as one sentence.')
assert get_effective_summary_prompt(watch, ds) == 'Global: summarise as one sentence.'
delete_all_watches(client)
def test_tag_prompt_overrides_global_default(
client, live_server, measure_memory_usage, datastore_path):
"""
A tag-level llm_change_summary takes precedence over the global default.
"""
from changedetectionio.llm.evaluator import get_effective_summary_prompt
ds = client.application.config.get('DATASTORE')
_configure_llm(client)
tag_uuid = ds.add_tag('my-group')
ds.data['settings']['application']['tags'][tag_uuid]['llm_change_summary'] = 'Tag: bullet points.'
uuid = ds.add_watch(url='https://example.com')
watch = ds.data['watching'][uuid]
watch['llm_change_summary'] = ''
watch['tags'] = [tag_uuid]
_set_global_default(ds, 'Global: one sentence.')
assert get_effective_summary_prompt(watch, ds) == 'Tag: bullet points.'
delete_all_watches(client)
def test_watch_prompt_overrides_tag_and_global(
client, live_server, measure_memory_usage, datastore_path):
"""
A watch-level llm_change_summary wins over both tag and global default.
"""
from changedetectionio.llm.evaluator import get_effective_summary_prompt
ds = client.application.config.get('DATASTORE')
_configure_llm(client)
tag_uuid = ds.add_tag('my-group')
ds.data['settings']['application']['tags'][tag_uuid]['llm_change_summary'] = 'Tag prompt.'
uuid = ds.add_watch(url='https://example.com')
watch = ds.data['watching'][uuid]
watch['llm_change_summary'] = 'Watch: my own prompt.'
watch['tags'] = [tag_uuid]
_set_global_default(ds, 'Global prompt.')
assert get_effective_summary_prompt(watch, ds) == 'Watch: my own prompt.'
delete_all_watches(client)
def test_hardcoded_fallback_when_nothing_set(
client, live_server, measure_memory_usage, datastore_path):
"""
Falls back to DEFAULT_CHANGE_SUMMARY_PROMPT when watch, tag, and global
default are all empty.
"""
from changedetectionio.llm.evaluator import get_effective_summary_prompt, DEFAULT_CHANGE_SUMMARY_PROMPT
ds = client.application.config.get('DATASTORE')
_configure_llm(client)
uuid = ds.add_watch(url='https://example.com')
watch = ds.data['watching'][uuid]
watch['llm_change_summary'] = ''
# Ensure global default is also empty
ds.data['settings']['application']['llm_change_summary_default'] = ''
assert get_effective_summary_prompt(watch, ds) == DEFAULT_CHANGE_SUMMARY_PROMPT
delete_all_watches(client)
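# The four cascade tests above pin down a precedence order: watch value first, then the
# first tag that sets a prompt, then the application-level default, then the hardcoded
# fallback. A minimal sketch of that cascade (assumed shape only; the real logic is
# changedetectionio.llm.evaluator.get_effective_summary_prompt):
def effective_summary_prompt_sketch(watch, tags, global_default, hardcoded_default):
    if (watch.get('llm_change_summary') or '').strip():
        return watch['llm_change_summary']
    for tag in tags:  # the tags linked to the watch, in order
        if (tag.get('llm_change_summary') or '').strip():
            return tag['llm_change_summary']
    if (global_default or '').strip():
        return global_default
    return hardcoded_default
assert effective_summary_prompt_sketch({'llm_change_summary': ''}, [], '', 'fallback') == 'fallback'
assert effective_summary_prompt_sketch({'llm_change_summary': ''}, [{'llm_change_summary': 'Tag prompt.'}], 'Global prompt.', 'fallback') == 'Tag prompt.'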
def test_llm_summary_ajax_sets_last_viewed(
client, live_server, measure_memory_usage, datastore_path):
"""
Calling /diff/<uuid>/llm-summary via AJAX should mark the watch as viewed
(set last_viewed) for both fresh and cached responses.
"""
from unittest.mock import patch, MagicMock
_configure_llm(client)
ds = client.application.config.get('DATASTORE')
test_url = url_for('test_endpoint', content_type='text/html', content='v1', _external=True)
uuid = ds.add_watch(url=test_url)
watch = ds.data['watching'][uuid]
watch.save_history_blob('old content\n', '4000000000', 'snap-old')
watch.save_history_blob('new content\n', '4000000001', 'snap-new')
assert watch['last_viewed'] == 0, "last_viewed should start at 0"
mock_response = MagicMock()
mock_response.choices = [MagicMock()]
mock_response.choices[0].message.content = 'Content changed from old to new.'
mock_response.usage = MagicMock(total_tokens=50, prompt_tokens=40, completion_tokens=10)
with patch('litellm.completion', return_value=mock_response):
res = client.get(
url_for('ui.ui_diff.diff_llm_summary', uuid=uuid,
from_version='4000000000', to_version='4000000001'),
)
assert res.status_code == 200
data = res.get_json()
assert data['summary'] == 'Content changed from old to new.'
assert watch['last_viewed'] > 0, "last_viewed should be set after fresh LLM summary"
# Reset and verify the cached path also sets last_viewed
watch['last_viewed'] = 0
with patch('litellm.completion', return_value=mock_response):
res2 = client.get(
url_for('ui.ui_diff.diff_llm_summary', uuid=uuid,
from_version='4000000000', to_version='4000000001'),
)
assert res2.status_code == 200
data2 = res2.get_json()
assert data2.get('cached') is True
assert watch['last_viewed'] > 0, "last_viewed should be set even when returning cached summary"
delete_all_watches(client)
def test_global_default_saved_and_loaded_via_settings_form(
client, live_server, measure_memory_usage, datastore_path):
"""
Submitting the settings form persists llm_change_summary_default at
the settings.application level (not inside the llm credentials dict).
"""
from changedetectionio.tests.util import live_server_setup
live_server_setup(live_server)
_configure_llm(client)
res = client.post(
url_for('settings.settings_page'),
data={
'application-empty_pages_are_a_change': '',
'requests-time_between_check-minutes': 180,
'application-fetch_backend': 'html_requests',
'llm-llm_change_summary_default': 'Saved global prompt.',
# Keep existing model so llm block is retained
'llm-llm_model': 'gpt-4o-mini',
},
follow_redirects=True,
)
assert b'Settings updated.' in res.data
ds = client.application.config.get('DATASTORE')
stored = ds.data['settings']['application'].get('llm_change_summary_default', '')
assert stored == 'Saved global prompt.', f"Got: {stored!r}"
# Must NOT be buried inside the llm credentials dict
llm_dict = ds.data['settings']['application'].get('llm', {})
assert 'change_summary_default' not in llm_dict
delete_all_watches(client)
def test_global_default_survives_llm_clear(
client, live_server, measure_memory_usage, datastore_path):
"""
Clearing LLM credentials via /settings/llm/clear must not wipe
the global summary default.
"""
from changedetectionio.tests.util import live_server_setup
live_server_setup(live_server)
_configure_llm(client)
ds = client.application.config.get('DATASTORE')
_set_global_default(ds, 'Surviving prompt.')
res = client.get(url_for('settings.llm.llm_clear'), follow_redirects=True)
assert res.status_code == 200
assert ds.data['settings']['application'].get('llm_change_summary_default') == 'Surviving prompt.'
delete_all_watches(client)
def test_diff_token_unchanged_when_no_ai_summary(
client, live_server, measure_memory_usage, datastore_path):
"""When no AI Change Summary is configured, {{ diff }} renders the raw diff as normal."""
from changedetectionio.notification_service import add_rendered_diff_to_notification_vars
n_object = {
'current_snapshot': 'Item A\nItem B\nItem C',
'prev_snapshot': 'Item A\nItem B',
'_llm_change_summary': '',
}
diff_vars = add_rendered_diff_to_notification_vars(
notification_scan_text='{{diff}}',
current_snapshot=n_object['current_snapshot'],
prev_snapshot=n_object['prev_snapshot'],
word_diff=False,
)
n_object.update(diff_vars)
raw = n_object.get('diff', '')
n_object['raw_diff'] = raw
if (n_object.get('_llm_change_summary') or '').strip():
n_object['diff'] = n_object['_llm_change_summary']
# diff should still be the raw diff (not replaced)
assert n_object['diff'] == n_object['raw_diff']
delete_all_watches(client)
@@ -1,384 +0,0 @@
#!/usr/bin/env python3
"""
Tests for group/tag LLM field overrides on the watch edit page.
When a watch's first linked tag has llm_intent or llm_change_summary set
and the watch itself has no value of its own, the watch edit form should render
the relevant textarea as editable with a "From group '<name>': <value>"
placeholder.
When the watch has its own value, the textarea is editable as normal.
The evaluator cascade (resolve_llm_field) is already tested in the
evaluator unit tests; these tests focus on the UI and form behaviour.
"""
import json
from flask import url_for
from changedetectionio.tests.util import live_server_setup, delete_all_watches
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
def _configure_llm(datastore):
"""Enable a fake LLM so the AI section is visible in the edit form."""
app = datastore.data['settings']['application']
if 'llm' not in app:
app['llm'] = {}
app['llm'].update({'model': 'gpt-4o-mini', 'api_key': 'sk-test'})
def _create_watch(client, test_url, api_token):
res = client.post(
'/api/v1/watch',
data=json.dumps({'url': test_url}),
headers={'content-type': 'application/json', 'x-api-key': api_token},
follow_redirects=True,
)
assert res.status_code == 201
return res.json['uuid']
def _api_token(client):
return client.application.config.get('DATASTORE').data['settings']['application'].get('api_access_token')
# ---------------------------------------------------------------------------
# Tag setup
# ---------------------------------------------------------------------------
def _add_tag_with_llm(datastore, title, llm_intent='', llm_change_summary=''):
"""Create a tag with LLM fields set directly in the datastore."""
tag_uuid = datastore.add_tag(title)
tag = datastore.data['settings']['application']['tags'][tag_uuid]
if llm_intent:
tag['llm_intent'] = llm_intent
if llm_change_summary:
tag['llm_change_summary'] = llm_change_summary
return tag_uuid
def _link_watch_to_tag(datastore, watch_uuid, tag_uuid):
"""Append a tag UUID to a watch's tags list."""
watch = datastore.data['watching'][watch_uuid]
tags = list(watch.get('tags') or [])
if tag_uuid not in tags:
tags.append(tag_uuid)
watch['tags'] = tags
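# A minimal sketch of the placeholder behaviour these tests assert (assumed markup, not
# the real edit-page template): an inherited group value appears only in the placeholder
# attribute, the textarea stays editable, and a watch's own value is rendered in the
# textarea body instead.
from jinja2 import Template
_PLACEHOLDER_SKETCH = Template(
    '<textarea name="{{ field }}"'
    '{% if not own_value and group_value %}'
    ' placeholder="From group \'{{ group_name }}\': {{ group_value }}"'
    '{% endif %}>{{ own_value }}</textarea>'
)
def render_llm_textarea_sketch(field, own_value, group_name='', group_value=''):
    return _PLACEHOLDER_SKETCH.render(field=field, own_value=own_value,
                                      group_name=group_name, group_value=group_value)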
# ---------------------------------------------------------------------------
# Watch edit page — llm_intent group override
# ---------------------------------------------------------------------------
def test_watch_edit_shows_llm_intent_placeholder_from_group(
client, live_server, measure_memory_usage, datastore_path):
"""
When a watch has no own llm_intent but its first tag does,
the edit page must show "From group" + group name + group value in the
placeholder so the user sees the inherited value but can still type to override.
The field must NOT be readonly.
"""
ds = client.application.config.get('DATASTORE')
_configure_llm(ds)
api_token = _api_token(client)
test_url = url_for('test_endpoint', _external=True)
watch_uuid = _create_watch(client, test_url, api_token)
tag_uuid = _add_tag_with_llm(ds, 'Price Watchers', llm_intent='Notify only when price drops')
_link_watch_to_tag(ds, watch_uuid, tag_uuid)
res = client.get(url_for('ui.ui_edit.edit_page', uuid=watch_uuid))
assert res.status_code == 200
body = res.data.decode('utf-8', errors='replace')
assert 'name="llm_intent"' in body
# Placeholder must contain "From group", the tag name, and the value
assert 'From group' in body
assert 'Price Watchers' in body
assert 'Notify only when price drops' in body
# Field must be editable — no readonly attribute
intent_pos = body.find('name="llm_intent"')
snippet = body[max(0, intent_pos - 50): intent_pos + 300]
assert 'readonly' not in snippet, \
f"llm_intent must be editable when group sets it; snippet: {snippet!r}"
delete_all_watches(client)
def test_watch_edit_llm_intent_shows_own_value_not_group_placeholder(
client, live_server, measure_memory_usage, datastore_path):
"""
When a watch has its own llm_intent, the textarea body shows the watch's value
and the placeholder does NOT say "From group" (the group value is irrelevant).
"""
ds = client.application.config.get('DATASTORE')
_configure_llm(ds)
api_token = _api_token(client)
test_url = url_for('test_endpoint', _external=True)
watch_uuid = _create_watch(client, test_url, api_token)
tag_uuid = _add_tag_with_llm(ds, 'Deals Group', llm_intent='Tag intent: notify on any deal')
_link_watch_to_tag(ds, watch_uuid, tag_uuid)
ds.data['watching'][watch_uuid]['llm_intent'] = 'My own watch intent'
res = client.get(url_for('ui.ui_edit.edit_page', uuid=watch_uuid))
assert res.status_code == 200
body = res.data.decode('utf-8', errors='replace')
# Watch's own value in the textarea body
assert 'My own watch intent' in body
# No group placeholder — the watch has its own value
assert 'From group' not in body
delete_all_watches(client)
# ---------------------------------------------------------------------------
# Watch edit page — llm_change_summary group override
# ---------------------------------------------------------------------------
def test_watch_edit_shows_llm_change_summary_placeholder_from_group(
client, live_server, measure_memory_usage, datastore_path):
"""
When a watch has no own llm_change_summary but its first tag does,
the edit page shows the group value as placeholder (editable, not readonly).
"""
ds = client.application.config.get('DATASTORE')
_configure_llm(ds)
api_token = _api_token(client)
test_url = url_for('test_endpoint', _external=True)
watch_uuid = _create_watch(client, test_url, api_token)
tag_uuid = _add_tag_with_llm(
ds, 'Summary Group',
llm_change_summary='List new items as bullet points. Translate to English.'
)
_link_watch_to_tag(ds, watch_uuid, tag_uuid)
res = client.get(url_for('ui.ui_edit.edit_page', uuid=watch_uuid))
assert res.status_code == 200
body = res.data.decode('utf-8', errors='replace')
assert 'Summary Group' in body
assert 'List new items as bullet points' in body
# Field must be editable
summary_pos = body.find('name="llm_change_summary"')
assert summary_pos != -1
snippet = body[max(0, summary_pos - 50): summary_pos + 300]
assert 'readonly' not in snippet, \
f"llm_change_summary must be editable; snippet: {snippet!r}"
delete_all_watches(client)
def test_watch_edit_llm_change_summary_shows_own_value_not_group_placeholder(
client, live_server, measure_memory_usage, datastore_path):
"""
When a watch has its own llm_change_summary, the textarea body shows the watch's
value and no group placeholder appears.
"""
ds = client.application.config.get('DATASTORE')
_configure_llm(ds)
api_token = _api_token(client)
test_url = url_for('test_endpoint', _external=True)
watch_uuid = _create_watch(client, test_url, api_token)
tag_uuid = _add_tag_with_llm(ds, 'Summary Group', llm_change_summary='Tag summary prompt')
_link_watch_to_tag(ds, watch_uuid, tag_uuid)
ds.data['watching'][watch_uuid]['llm_change_summary'] = 'My own summary prompt'
res = client.get(url_for('ui.ui_edit.edit_page', uuid=watch_uuid))
assert res.status_code == 200
body = res.data.decode('utf-8', errors='replace')
assert 'My own summary prompt' in body
assert 'From group' not in body
delete_all_watches(client)
# ---------------------------------------------------------------------------
# No tag linked — fields are editable
# ---------------------------------------------------------------------------
def test_watch_edit_no_tag_fields_are_editable(
client, live_server, measure_memory_usage, datastore_path):
"""
A watch with no tags: both LLM textareas are editable (no readonly, no From group).
"""
ds = client.application.config.get('DATASTORE')
_configure_llm(ds)
api_token = _api_token(client)
test_url = url_for('test_endpoint', _external=True)
watch_uuid = _create_watch(client, test_url, api_token)
res = client.get(url_for('ui.ui_edit.edit_page', uuid=watch_uuid))
assert res.status_code == 200
body = res.data.decode('utf-8', errors='replace')
# Neither textarea should be readonly
for field in ('llm_intent', 'llm_change_summary'):
pos = body.find(f'name="{field}"')
if pos == -1:
continue # field might not render if LLM section hidden for some reason
snippet = body[max(0, pos - 50): pos + 300]
assert 'readonly' not in snippet, \
f"{field} textarea must not be readonly with no tags; snippet: {snippet!r}"
assert 'From group' not in body
delete_all_watches(client)
# ---------------------------------------------------------------------------
# Evaluator cascade — group value used when watch has none
# ---------------------------------------------------------------------------
def test_resolve_llm_field_uses_tag_value_when_watch_has_none(
client, live_server, measure_memory_usage, datastore_path):
"""
resolve_llm_field returns the tag's value (and tag name as source) when
the watch has no own value.
"""
from changedetectionio.llm.evaluator import resolve_llm_field
ds = client.application.config.get('DATASTORE')
api_token = _api_token(client)
test_url = url_for('test_endpoint', _external=True)
watch_uuid = _create_watch(client, test_url, api_token)
tag_uuid = _add_tag_with_llm(ds, 'Cascade Group', llm_intent='Group-level intent')
_link_watch_to_tag(ds, watch_uuid, tag_uuid)
watch = ds.data['watching'][watch_uuid]
value, source = resolve_llm_field(watch, ds, 'llm_intent')
assert value == 'Group-level intent'
assert source == 'Cascade Group'
delete_all_watches(client)
def test_resolve_llm_field_uses_watch_value_over_tag(
client, live_server, measure_memory_usage, datastore_path):
"""
resolve_llm_field prefers the watch's own value over the tag's.
"""
from changedetectionio.llm.evaluator import resolve_llm_field
ds = client.application.config.get('DATASTORE')
api_token = _api_token(client)
test_url = url_for('test_endpoint', _external=True)
watch_uuid = _create_watch(client, test_url, api_token)
tag_uuid = _add_tag_with_llm(ds, 'Override Group', llm_intent='Tag intent')
_link_watch_to_tag(ds, watch_uuid, tag_uuid)
ds.data['watching'][watch_uuid]['llm_intent'] = 'Watch-level intent'
watch = ds.data['watching'][watch_uuid]
value, source = resolve_llm_field(watch, ds, 'llm_intent')
assert value == 'Watch-level intent'
assert source == 'watch'
delete_all_watches(client)
# ---------------------------------------------------------------------------
# Both fields overridden independently
# ---------------------------------------------------------------------------
def test_watch_edit_independent_field_overrides(
client, live_server, measure_memory_usage, datastore_path):
"""
llm_intent can come from a group (shown as a placeholder) while llm_change_summary
shows the watch's own value, and vice versa; both fields stay editable.
"""
ds = client.application.config.get('DATASTORE')
_configure_llm(ds)
api_token = _api_token(client)
test_url = url_for('test_endpoint', _external=True)
watch_uuid = _create_watch(client, test_url, api_token)
tag_uuid = _add_tag_with_llm(
ds, 'Mixed Group',
llm_intent='Group intent here',
llm_change_summary='Group summary here',
)
_link_watch_to_tag(ds, watch_uuid, tag_uuid)
# Watch overrides only llm_change_summary
ds.data['watching'][watch_uuid]['llm_change_summary'] = 'My own summary'
res = client.get(url_for('ui.ui_edit.edit_page', uuid=watch_uuid))
assert res.status_code == 200
body = res.data.decode('utf-8', errors='replace')
# llm_intent: group placeholder visible (watch has no own value)
assert 'Group intent here' in body
intent_pos = body.find('name="llm_intent"')
assert intent_pos != -1
intent_snippet = body[max(0, intent_pos - 50): intent_pos + 300]
assert 'readonly' not in intent_snippet, \
f"llm_intent must be editable even when group sets it; snippet: {intent_snippet!r}"
# llm_change_summary: watch own value shown in body, no group placeholder
assert 'My own summary' in body
summary_pos = body.find('name="llm_change_summary"')
assert summary_pos != -1
summary_snippet = body[max(0, summary_pos - 50): summary_pos + 300]
assert 'readonly' not in summary_snippet, \
f"llm_change_summary should be editable; snippet: {summary_snippet!r}"
delete_all_watches(client)
# ---------------------------------------------------------------------------
# Tag edit page — AI section is always visible regardless of processor
# ---------------------------------------------------------------------------
def test_tag_edit_page_shows_ai_section(
client, live_server, measure_memory_usage, datastore_path):
"""
The tag/group edit page must always show the AI Intent and AI Change Summary
textareas when LLM is configured, regardless of whether the tag has a
'processor' key set (e.g. restock_diff tags must still show AI fields).
"""
ds = client.application.config.get('DATASTORE')
_configure_llm(ds)
tag_uuid = ds.add_tag('Test AI Group')
# Simulate a tag that has processor set (e.g. saved via restock form)
ds.data['settings']['application']['tags'][tag_uuid]['processor'] = 'restock_diff'
res = client.get(url_for('tags.form_tag_edit', uuid=tag_uuid))
assert res.status_code == 200
body = res.data.decode('utf-8', errors='replace')
# Both AI textareas must appear
assert 'name="llm_intent"' in body, \
"llm_intent textarea missing from tag edit page — processor check incorrectly blocks it"
assert 'name="llm_change_summary"' in body, \
"llm_change_summary textarea missing from tag edit page"
# Neither should be readonly in tag context
for field in ('llm_intent', 'llm_change_summary'):
pos = body.find(f'name="{field}"')
snippet = body[max(0, pos - 50): pos + 300]
assert 'readonly' not in snippet, \
f"{field} must not be readonly in tag edit context; snippet: {snippet!r}"
delete_all_watches(client)
-232
View File
@@ -1,232 +0,0 @@
#!/usr/bin/env python3
"""
Integration tests: /edit/<uuid>/preview-rendered returns llm_evaluation when
llm_intent is submitted alongside the filter form data.
These tests verify the full backend path:
JS POSTs llm_intent -> prepare_filter_prevew() applies it to tmp_watch ->
preview_extract() is called -> llm_evaluation appears in the JSON response.
The response uses {'found': bool, 'answer': str}, NOT the diff-evaluation
{'important', 'summary'} shape, because preview asks the LLM to extract from
the current content directly (e.g. "30 articles listed") rather than comparing
a diff.
"""
import json
import time
from unittest.mock import patch
from flask import url_for
from changedetectionio.tests.util import wait_for_all_checks, delete_all_watches
HTML_WITH_ARTICLES = """<html><body>
<ul id="articles">
<li>Article One</li>
<li>Article Two</li>
<li>Article Three</li>
</ul>
</body></html>"""
HTML_WITH_PRICE = """<html><body>
<p class="price">Original price: $199.00</p>
<p class="discount">Now: $149.00 25% off!</p>
</body></html>"""
def _set_response(datastore_path, content):
import os
with open(os.path.join(datastore_path, "endpoint-content.txt"), "w") as f:
f.write(content)
def _add_and_fetch(client, live_server, datastore_path, html):
"""Add a watch, fetch it once so a snapshot exists, return uuid."""
_set_response(datastore_path, html)
test_url = url_for('test_endpoint', _external=True)
uuid = client.application.config.get('DATASTORE').add_watch(url=test_url)
client.get(url_for("ui.form_watch_checknow"), follow_redirects=True)
time.sleep(0.5)
wait_for_all_checks(client)
return uuid
def _configure_llm(client):
"""Put a fake LLM config into the datastore."""
datastore = client.application.config.get('DATASTORE')
datastore.data['settings']['application']['llm'] = {
'model': 'gpt-4o-mini',
'api_key': 'sk-test-fake',
}
# ---------------------------------------------------------------------------
# llm_intent submitted → llm_evaluation returned with found/answer shape
# ---------------------------------------------------------------------------
def test_preview_returns_llm_answer_for_article_intent(
client, live_server, measure_memory_usage, datastore_path):
"""
With llm_intent='Tell me the number of articles in the list',
the preview endpoint returns llm_evaluation with found=True and an answer
that directly addresses the intent (e.g. "3 articles listed").
"""
uuid = _add_and_fetch(client, live_server, datastore_path, HTML_WITH_ARTICLES)
_configure_llm(client)
llm_json = '{"found": true, "answer": "3 articles are listed in the content"}'
with patch('changedetectionio.llm.client.completion', return_value=(llm_json, 50)):
res = client.post(
url_for("ui.ui_edit.watch_get_preview_rendered", uuid=uuid),
data={
'llm_intent': 'Tell me the number of articles in the list',
'fetch_backend': 'html_requests',
},
)
assert res.status_code == 200
data = json.loads(res.data.decode('utf-8'))
# Filtered text must still be present
assert data.get('after_filter'), "after_filter must be present"
# LLM evaluation must be returned with the new shape
ev = data.get('llm_evaluation')
assert ev is not None, "llm_evaluation must be in response"
assert ev['found'] is True
assert '3' in ev['answer']
delete_all_watches(client)
def test_preview_returns_llm_answer_for_price_intent(
client, live_server, measure_memory_usage, datastore_path):
"""
With a price-change intent, the LLM answer should reflect the discount
extracted directly from the current page (not a diff comparison).
"""
uuid = _add_and_fetch(client, live_server, datastore_path, HTML_WITH_PRICE)
_configure_llm(client)
llm_json = '{"found": true, "answer": "Price $149, 25% off (was $199)"}'
with patch('changedetectionio.llm.client.completion', return_value=(llm_json, 60)):
res = client.post(
url_for("ui.ui_edit.watch_get_preview_rendered", uuid=uuid),
data={
'llm_intent': 'Flag any price change, including discount percentages',
'fetch_backend': 'html_requests',
},
)
assert res.status_code == 200
data = json.loads(res.data.decode('utf-8'))
ev = data.get('llm_evaluation')
assert ev is not None
assert ev['found'] is True
assert '25' in ev['answer'] or '149' in ev['answer']
delete_all_watches(client)
def test_preview_found_false_when_content_not_relevant(
client, live_server, measure_memory_usage, datastore_path):
"""found=False when the LLM determines page content doesn't match intent."""
uuid = _add_and_fetch(client, live_server, datastore_path, HTML_WITH_ARTICLES)
_configure_llm(client)
llm_json = '{"found": false, "answer": "No price information found on this page"}'
with patch('changedetectionio.llm.client.completion', return_value=(llm_json, 45)):
res = client.post(
url_for("ui.ui_edit.watch_get_preview_rendered", uuid=uuid),
data={
'llm_intent': 'Show me any product prices',
'fetch_backend': 'html_requests',
},
)
assert res.status_code == 200
data = json.loads(res.data.decode('utf-8'))
ev = data.get('llm_evaluation')
assert ev is not None
assert ev['found'] is False
assert ev['answer']
delete_all_watches(client)
# ---------------------------------------------------------------------------
# No intent / no LLM → llm_evaluation is None
# ---------------------------------------------------------------------------
def test_preview_no_llm_evaluation_without_intent(
client, live_server, measure_memory_usage, datastore_path):
"""When llm_intent is absent, the LLM client must not be called."""
uuid = _add_and_fetch(client, live_server, datastore_path, HTML_WITH_ARTICLES)
_configure_llm(client)
with patch('changedetectionio.llm.client.completion') as mock_llm:
res = client.post(
url_for("ui.ui_edit.watch_get_preview_rendered", uuid=uuid),
data={'fetch_backend': 'html_requests'},
)
mock_llm.assert_not_called()
assert res.status_code == 200
data = json.loads(res.data.decode('utf-8'))
assert data.get('llm_evaluation') is None
delete_all_watches(client)
def test_preview_no_llm_evaluation_when_llm_not_configured(
client, live_server, measure_memory_usage, datastore_path):
"""When LLM model is not set, llm_evaluation must be None even with an intent."""
uuid = _add_and_fetch(client, live_server, datastore_path, HTML_WITH_ARTICLES)
# Intentionally do NOT configure LLM
with patch('changedetectionio.llm.client.completion') as mock_llm:
res = client.post(
url_for("ui.ui_edit.watch_get_preview_rendered", uuid=uuid),
data={
'llm_intent': 'Tell me the number of articles',
'fetch_backend': 'html_requests',
},
)
mock_llm.assert_not_called()
assert res.status_code == 200
data = json.loads(res.data.decode('utf-8'))
assert data.get('llm_evaluation') is None
delete_all_watches(client)
# ---------------------------------------------------------------------------
# LLM failure → llm_evaluation is None, preview still works
# ---------------------------------------------------------------------------
def test_preview_llm_failure_does_not_break_preview(
client, live_server, measure_memory_usage, datastore_path):
"""If the LLM call raises, preview_extract returns None — preview still works."""
uuid = _add_and_fetch(client, live_server, datastore_path, HTML_WITH_ARTICLES)
_configure_llm(client)
with patch('changedetectionio.llm.client.completion', side_effect=Exception('API timeout')):
res = client.post(
url_for("ui.ui_edit.watch_get_preview_rendered", uuid=uuid),
data={
'llm_intent': 'Tell me the number of articles',
'fetch_backend': 'html_requests',
},
)
assert res.status_code == 200
data = json.loads(res.data.decode('utf-8'))
# Filter content must still be returned
assert data.get('after_filter')
# preview_extract returns None on error (doesn't fail-open like evaluate_change)
assert data.get('llm_evaluation') is None
delete_all_watches(client)
@@ -1,498 +0,0 @@
#!/usr/bin/env python3
"""
Tests that verify global LLM token budget counters cannot be tampered with
via the API (watch PUT) or via form submissions (settings page POST).
This is critical for hosted deployments where the operator sets
LLM_TOKEN_BUDGET_MONTH in the container: tenants must not be able
to reset or inflate the counter themselves.
"""
import json
import os
import pytest
from flask import url_for
from changedetectionio.tests.util import live_server_setup, delete_all_watches
# ---------------------------------------------------------------------------
# Helpers
# ---------------------------------------------------------------------------
def _seed_token_counters(datastore, this_month=5000, total=12000):
"""Pre-load token counters into the datastore's llm settings dict."""
from changedetectionio.llm.evaluator import _get_month_key
app_settings = datastore.data['settings']['application']
if 'llm' not in app_settings:
app_settings['llm'] = {}
app_settings['llm'].update({
'model': 'gpt-4o-mini',
'api_key': 'sk-test',
'tokens_this_month': this_month,
'tokens_total_cumulative': total,
'tokens_month_key': _get_month_key(),
})
def _get_counters(datastore):
llm_cfg = datastore.data['settings']['application'].get('llm') or {}
return {
'tokens_this_month': llm_cfg.get('tokens_this_month', 0),
'tokens_total_cumulative': llm_cfg.get('tokens_total_cumulative', 0),
'tokens_month_key': llm_cfg.get('tokens_month_key'),
}
# ---------------------------------------------------------------------------
# API tamper tests
# ---------------------------------------------------------------------------
def test_api_cannot_reset_token_counters_via_watch_put(
client, live_server, measure_memory_usage, datastore_path):
"""
A PUT to /api/v1/watch/<uuid> must NOT be able to reset or change the
global token counters stored in settings.application.llm.
The counters live on the datastore settings, not the watch object,
so this test confirms they remain intact regardless of API activity.
"""
ds = client.application.config.get('DATASTORE')
api_key = ds.data['settings']['application'].get('api_access_token')
_seed_token_counters(ds, this_month=7000, total=20000)
test_url = url_for('test_endpoint', _external=True)
# Create a watch via API
res = client.post(
url_for("createwatch"),
data=json.dumps({"url": test_url}),
headers={'content-type': 'application/json', 'x-api-key': api_key},
follow_redirects=True,
)
assert res.status_code == 201
uuid = res.json.get('uuid')
# Attempt to PUT the watch with llm_tokens_used_cumulative set to 0
# (trying to "reset" the per-watch counter — this field is readOnly on watches,
# but more importantly the global counters on settings must be unaffected)
res = client.put(
url_for("watch", uuid=uuid),
headers={'x-api-key': api_key, 'content-type': 'application/json'},
data=json.dumps({
"url": test_url,
"llm_tokens_used_cumulative": 0, # readOnly on Watch — should be silently ignored
"llm_last_tokens_used": 0, # readOnly on Watch — should be silently ignored
}),
)
assert res.status_code == 200, f"PUT failed: {res.data}"
# Global counters on settings must be completely unchanged
after = _get_counters(ds)
assert after['tokens_this_month'] == 7000, \
"API PUT must not reset tokens_this_month"
assert after['tokens_total_cumulative'] == 20000, \
"API PUT must not reset tokens_total_cumulative"
delete_all_watches(client)
def test_api_watch_put_llm_readonly_fields_are_ignored(
client, live_server, measure_memory_usage, datastore_path):
"""
llm_prefilter, llm_evaluation_cache, llm_last_tokens_used,
llm_tokens_used_cumulative are all readOnly in the API spec.
Sending them in a PUT must not raise an error (they should be silently
stripped) and must not modify the watch's stored values.
"""
ds = client.application.config.get('DATASTORE')
api_key = ds.data['settings']['application'].get('api_access_token')
test_url = url_for('test_endpoint', _external=True)
res = client.post(
url_for("createwatch"),
data=json.dumps({"url": test_url}),
headers={'content-type': 'application/json', 'x-api-key': api_key},
follow_redirects=True,
)
assert res.status_code == 201
uuid = res.json.get('uuid')
# Try to set readOnly LLM fields via PUT
res = client.put(
url_for("watch", uuid=uuid),
headers={'x-api-key': api_key, 'content-type': 'application/json'},
data=json.dumps({
"url": test_url,
"llm_tokens_used_cumulative": 999999,
"llm_last_tokens_used": 888888,
"llm_prefilter": "div.hacked",
"llm_evaluation_cache": {"fake_key": {"important": True}},
}),
)
# Must succeed (not 400) — readOnly fields are silently stripped
assert res.status_code == 200, f"Expected 200, got {res.status_code}: {res.data}"
# Fetch back and confirm readOnly values were NOT stored
res = client.get(
url_for("watch", uuid=uuid),
headers={'x-api-key': api_key},
)
assert res.json.get('llm_tokens_used_cumulative') != 999999
assert res.json.get('llm_last_tokens_used') != 888888
delete_all_watches(client)
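# A minimal sketch (hypothetical helper) of the "silently strip readOnly fields" behaviour
# the two API tests above rely on: incoming JSON is filtered against a readOnly set before
# being applied to the watch, so tampered values never reach the datastore.
_READONLY_WATCH_FIELDS = {
    'llm_prefilter', 'llm_evaluation_cache',
    'llm_last_tokens_used', 'llm_tokens_used_cumulative',
}
def strip_readonly_fields_sketch(payload: dict) -> dict:
    return {k: v for k, v in payload.items() if k not in _READONLY_WATCH_FIELDS}
assert 'llm_last_tokens_used' not in strip_readonly_fields_sketch({'url': 'x', 'llm_last_tokens_used': 0})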
# ---------------------------------------------------------------------------
# Settings form tamper tests
# ---------------------------------------------------------------------------
def test_settings_form_preserves_token_counters(
client, live_server, measure_memory_usage, datastore_path):
"""
Submitting the settings form (POST /settings) must preserve existing
token counters even when the LLM model/key fields change.
A malicious or accidental form submission must not zero the counters.
"""
ds = client.application.config.get('DATASTORE')
_seed_token_counters(ds, this_month=3000, total=9000)
before = _get_counters(ds)
assert before['tokens_this_month'] == 3000
# Submit settings form with a different model — simulates a normal settings save
res = client.post(
url_for('settings.settings_page'),
data={
# LLM sub-form fields
'llm-llm_model': 'gpt-4o',
'llm-llm_api_key': 'sk-different-key',
'llm-llm_api_base': '',
# Minimal required fields to pass form validation
'application-pager_size': '50',
'application-notification_format': 'System default',
'requests-time_between_check-days': '0',
'requests-time_between_check-hours': '0',
'requests-time_between_check-minutes': '5',
'requests-time_between_check-seconds': '0',
'requests-time_between_check-weeks': '0',
'requests-workers': '10',
'requests-timeout': '60',
},
follow_redirects=True,
)
# Settings save may redirect; we just need it to not crash
assert res.status_code == 200
after = _get_counters(ds)
assert after['tokens_this_month'] == 3000, \
f"Settings form save must not reset tokens_this_month (got {after['tokens_this_month']})"
assert after['tokens_total_cumulative'] == 9000, \
f"Settings form save must not reset tokens_total_cumulative (got {after['tokens_total_cumulative']})"
delete_all_watches(client)
def test_settings_form_cannot_inject_fake_token_counts(
client, live_server, measure_memory_usage, datastore_path):
"""
Even if a form POST includes hidden fields for token counters,
those values must be ignored and the real counters must remain intact.
"""
ds = client.application.config.get('DATASTORE')
_seed_token_counters(ds, this_month=1500, total=4000)
# Attempt to inject inflated or zeroed counters via form POST
res = client.post(
url_for('settings.settings_page'),
data={
'llm-llm_model': 'gpt-4o-mini',
'llm-llm_api_key': 'sk-test',
'llm-llm_api_base': '',
# Attempted injection of token counter fields
'llm-tokens_this_month': '0',
'llm-tokens_total_cumulative': '0',
'llm-tokens_month_key': '1970-01',
# Minimal required fields
'application-pager_size': '50',
'application-notification_format': 'System default',
'requests-time_between_check-days': '0',
'requests-time_between_check-hours': '0',
'requests-time_between_check-minutes': '5',
'requests-time_between_check-seconds': '0',
'requests-time_between_check-weeks': '0',
'requests-workers': '10',
'requests-timeout': '60',
},
follow_redirects=True,
)
assert res.status_code == 200
after = _get_counters(ds)
assert after['tokens_this_month'] == 1500, \
f"Form injection must not alter tokens_this_month (got {after['tokens_this_month']})"
assert after['tokens_total_cumulative'] == 4000, \
f"Form injection must not alter tokens_total_cumulative (got {after['tokens_total_cumulative']})"
delete_all_watches(client)
# ---------------------------------------------------------------------------
# accumulate_global_tokens unit tests
# ---------------------------------------------------------------------------
def test_accumulate_global_tokens_month_rollover(
client, live_server, measure_memory_usage, datastore_path):
"""
When tokens_month_key is stale (different month), tokens_this_month
must reset to zero before accumulating, and the key must update.
"""
from changedetectionio.llm.evaluator import accumulate_global_tokens, _get_month_key
from unittest.mock import patch
ds = client.application.config.get('DATASTORE')
ds.data['settings']['application']['llm'] = {
'model': 'gpt-4o-mini',
'tokens_this_month': 500,
'tokens_total_cumulative': 1000,
'tokens_month_key': '2024-01', # stale — previous month
}
# accumulate_global_tokens must detect the rollover and reset the monthly counter
accumulate_global_tokens(ds, 100)
llm_cfg = ds.data['settings']['application']['llm']
assert llm_cfg['tokens_month_key'] == _get_month_key(), "Month key must be current"
assert llm_cfg['tokens_this_month'] == 100, \
"Monthly counter must reset on rollover, then add new tokens"
assert llm_cfg['tokens_total_cumulative'] == 1100, \
"All-time counter must never reset"
delete_all_watches(client)
def test_accumulate_global_tokens_same_month(
client, live_server, measure_memory_usage, datastore_path):
"""Within the same month, both counters accumulate additively."""
from changedetectionio.llm.evaluator import accumulate_global_tokens, _get_month_key
ds = client.application.config.get('DATASTORE')
current_month = _get_month_key()
ds.data['settings']['application']['llm'] = {
'model': 'gpt-4o-mini',
'tokens_this_month': 200,
'tokens_total_cumulative': 800,
'tokens_month_key': current_month,
}
accumulate_global_tokens(ds, 50)
llm_cfg = ds.data['settings']['application']['llm']
assert llm_cfg['tokens_this_month'] == 250
assert llm_cfg['tokens_total_cumulative'] == 850
delete_all_watches(client)
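# A minimal sketch of the rollover behaviour asserted above (assumed shape only; the real
# implementation is changedetectionio.llm.evaluator.accumulate_global_tokens): a stale
# month key resets only the monthly counter before adding, while the all-time counter
# never resets.
from datetime import datetime, timezone
def _month_key_sketch():
    return datetime.now(timezone.utc).strftime('%Y-%m')
def accumulate_tokens_sketch(llm_cfg: dict, tokens: int) -> None:
    current = _month_key_sketch()
    if llm_cfg.get('tokens_month_key') != current:
        llm_cfg['tokens_this_month'] = 0  # fresh month: drop the stale count
        llm_cfg['tokens_month_key'] = current
    llm_cfg['tokens_this_month'] = llm_cfg.get('tokens_this_month', 0) + tokens
    llm_cfg['tokens_total_cumulative'] = llm_cfg.get('tokens_total_cumulative', 0) + tokens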
def test_is_global_token_budget_exceeded(
client, live_server, measure_memory_usage, datastore_path):
"""is_global_token_budget_exceeded returns True only when budget is set and reached."""
from changedetectionio.llm.evaluator import is_global_token_budget_exceeded, _get_month_key
ds = client.application.config.get('DATASTORE')
current_month = _get_month_key()
# No budget env var → never exceeded
with pytest.MonkeyPatch().context() as mp:
mp.delenv('LLM_TOKEN_BUDGET_MONTH', raising=False)
ds.data['settings']['application']['llm'] = {
'tokens_this_month': 999999,
'tokens_month_key': current_month,
}
assert not is_global_token_budget_exceeded(ds)
# Budget set, under limit
with pytest.MonkeyPatch().context() as mp:
mp.setenv('LLM_TOKEN_BUDGET_MONTH', '10000')
ds.data['settings']['application']['llm'] = {
'tokens_this_month': 5000,
'tokens_month_key': current_month,
}
assert not is_global_token_budget_exceeded(ds)
# Budget set, at limit (exact)
with pytest.MonkeyPatch().context() as mp:
mp.setenv('LLM_TOKEN_BUDGET_MONTH', '10000')
ds.data['settings']['application']['llm'] = {
'tokens_this_month': 10000,
'tokens_month_key': current_month,
}
assert is_global_token_budget_exceeded(ds)
# Budget set, over limit
with pytest.MonkeyPatch().context() as mp:
mp.setenv('LLM_TOKEN_BUDGET_MONTH', '10000')
ds.data['settings']['application']['llm'] = {
'tokens_this_month': 12345,
'tokens_month_key': current_month,
}
assert is_global_token_budget_exceeded(ds)
# Budget set but stale month key → counter is 0 for current month → not exceeded
with pytest.MonkeyPatch().context() as mp:
mp.setenv('LLM_TOKEN_BUDGET_MONTH', '100')
ds.data['settings']['application']['llm'] = {
'tokens_this_month': 9999,
'tokens_month_key': '2020-01', # stale
}
assert not is_global_token_budget_exceeded(ds)
delete_all_watches(client)
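# Sketch of the budget check asserted above (assumed shape): no LLM_TOKEN_BUDGET_MONTH
# env var means no limit, a stale month key means nothing has been spent in the current
# month, and the limit itself is inclusive (reaching it counts as exceeded).
import os as _os
from datetime import datetime as _dt, timezone as _tz
def budget_exceeded_sketch(llm_cfg: dict) -> bool:
    budget = _os.environ.get('LLM_TOKEN_BUDGET_MONTH')
    if not budget:
        return False
    used = llm_cfg.get('tokens_this_month', 0)
    if llm_cfg.get('tokens_month_key') != _dt.now(_tz.utc).strftime('%Y-%m'):
        used = 0  # stale month key: the current month has no recorded spend yet
    return used >= int(budget)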
# ---------------------------------------------------------------------------
# Cost accumulation tests
# ---------------------------------------------------------------------------
def test_accumulate_global_tokens_tracks_cost_for_known_model(
client, live_server, measure_memory_usage, datastore_path):
"""
When input/output tokens and a known model are supplied, cost_usd_this_month
and cost_usd_total_cumulative must be accumulated as positive floats.
Uses litellm's real pricing db — exact value may change but must be > 0
for a model that has known pricing (gpt-4o-mini).
"""
from changedetectionio.llm.evaluator import accumulate_global_tokens, _get_month_key
ds = client.application.config.get('DATASTORE')
ds.data['settings']['application']['llm'] = {
'model': 'gpt-4o-mini',
'tokens_this_month': 0,
'tokens_total_cumulative': 0,
'tokens_month_key': _get_month_key(),
'cost_usd_this_month': 0.0,
'cost_usd_total_cumulative': 0.0,
}
accumulate_global_tokens(ds, tokens=1000, input_tokens=800, output_tokens=200, model='gpt-4o-mini')
llm_cfg = ds.data['settings']['application']['llm']
assert llm_cfg['tokens_this_month'] == 1000
assert llm_cfg['tokens_total_cumulative'] == 1000
# gpt-4o-mini has known pricing in litellm — cost must be > 0
assert llm_cfg.get('cost_usd_this_month', 0) > 0, \
"cost_usd_this_month must be positive for a model with known pricing"
assert llm_cfg.get('cost_usd_total_cumulative', 0) > 0
delete_all_watches(client)
def test_accumulate_global_tokens_cost_rollover(
client, live_server, measure_memory_usage, datastore_path):
"""
On month rollover, cost_usd_this_month must reset to zero (fresh month),
while cost_usd_total_cumulative keeps growing.
"""
from changedetectionio.llm.evaluator import accumulate_global_tokens, _get_month_key
ds = client.application.config.get('DATASTORE')
ds.data['settings']['application']['llm'] = {
'model': 'gpt-4o-mini',
'tokens_this_month': 500,
'tokens_total_cumulative': 1000,
'tokens_month_key': '2024-01', # stale month
'cost_usd_this_month': 0.05,
'cost_usd_total_cumulative': 0.20,
}
accumulate_global_tokens(ds, tokens=50, input_tokens=40, output_tokens=10, model='gpt-4o-mini')
llm_cfg = ds.data['settings']['application']['llm']
assert llm_cfg['tokens_month_key'] == _get_month_key(), "Month key must update"
assert llm_cfg['tokens_this_month'] == 50, "Monthly token counter must reset then add"
assert llm_cfg['tokens_total_cumulative'] == 1050, "All-time counter must not reset"
# Monthly cost must reset (old 0.05 discarded) then add new cost
assert llm_cfg['cost_usd_this_month'] >= 0.0
assert llm_cfg['cost_usd_this_month'] < 0.05, \
"Monthly cost must have reset (new cost for 50 tokens is less than old 0.05)"
# All-time cost must keep growing from 0.20
assert llm_cfg['cost_usd_total_cumulative'] >= 0.20
delete_all_watches(client)
def test_accumulate_global_tokens_no_cost_for_unknown_model(
client, live_server, measure_memory_usage, datastore_path):
"""
When model is unknown (e.g. custom endpoint) or no input/output split
is provided, cost stays at 0.0 and no error is raised.
"""
from changedetectionio.llm.evaluator import accumulate_global_tokens, _get_month_key
ds = client.application.config.get('DATASTORE')
ds.data['settings']['application']['llm'] = {
'tokens_this_month': 0,
'tokens_total_cumulative': 0,
'tokens_month_key': _get_month_key(),
'cost_usd_this_month': 0.0,
'cost_usd_total_cumulative': 0.0,
}
# No model, no input/output split → no cost
accumulate_global_tokens(ds, tokens=200)
llm_cfg = ds.data['settings']['application']['llm']
assert llm_cfg['tokens_this_month'] == 200
assert llm_cfg['cost_usd_this_month'] == 0.0
assert llm_cfg['cost_usd_total_cumulative'] == 0.0
delete_all_watches(client)
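# A hedged sketch of the cost path the three tests above describe. It assumes
# litellm.cost_per_token(model=..., prompt_tokens=..., completion_tokens=...) returns a
# (prompt_cost, completion_cost) tuple; verify against your litellm version. Unknown
# models or a missing token split leave the cost counters untouched, and pricing errors
# are swallowed rather than raised.
def accumulate_cost_sketch(llm_cfg: dict, model: str = '', input_tokens: int = 0, output_tokens: int = 0) -> None:
    if not model or not (input_tokens or output_tokens):
        return  # no pricing possible: counters stay as they are
    try:
        import litellm
        prompt_cost, completion_cost = litellm.cost_per_token(
            model=model, prompt_tokens=input_tokens, completion_tokens=output_tokens)
        cost = float(prompt_cost) + float(completion_cost)
    except Exception:
        return  # unknown model / no pricing entry: never raise, just skip
    llm_cfg['cost_usd_this_month'] = llm_cfg.get('cost_usd_this_month', 0.0) + cost
    llm_cfg['cost_usd_total_cumulative'] = llm_cfg.get('cost_usd_total_cumulative', 0.0) + cost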
def test_cost_fields_are_tamper_proof_via_settings_form(
client, live_server, measure_memory_usage, datastore_path):
"""
Submitting the settings form must not be able to set cost_usd_this_month
or cost_usd_total_cumulative; those are operator-controlled counters.
"""
from flask import url_for
ds = client.application.config.get('DATASTORE')
ds.data['settings']['application']['llm'] = {
'model': 'gpt-4o-mini',
'api_key': 'sk-test',
'cost_usd_this_month': 1.23,
'cost_usd_total_cumulative': 9.99,
}
client.post(
url_for('settings.settings_page'),
data={
'llm-llm_model': 'gpt-4o',
'llm-llm_api_key': 'sk-test',
'llm-llm_api_base': '',
'llm-cost_usd_this_month': '0', # injection attempt
'llm-cost_usd_total_cumulative': '0', # injection attempt
'application-pager_size': '50',
'application-notification_format': 'System default',
'requests-time_between_check-days': '0',
'requests-time_between_check-hours': '0',
'requests-time_between_check-minutes': '5',
'requests-time_between_check-seconds': '0',
'requests-time_between_check-weeks': '0',
'requests-workers': '10',
'requests-timeout': '60',
},
follow_redirects=True,
)
llm_cfg = ds.data['settings']['application'].get('llm', {})
assert llm_cfg.get('cost_usd_this_month') == 1.23, \
"cost_usd_this_month must be tamper-proof"
assert llm_cfg.get('cost_usd_total_cumulative') == 9.99, \
"cost_usd_total_cumulative must be tamper-proof"
delete_all_watches(client)
-76
View File
@@ -1,11 +1,7 @@
import io
import os
import re
import time
import pytest
from flask import url_for
from zipfile import ZipFile, ZIP_DEFLATED
from changedetectionio.tests.util import set_modified_response
from .util import live_server_setup, wait_for_all_checks, delete_all_watches
@@ -827,75 +823,3 @@ def test_unresolvable_hostname_is_allowed(client, live_server, monkeypatch):
res = client.get(url_for('watchlist.index'))
assert b'this-host-does-not-exist-xyz987.invalid' in res.data, \
"Unresolvable hostname watch should appear in the watch overview list"
def test_ghsa_8757_69j2_hx56_backup_restore_history_path_traversal(client, live_server, measure_memory_usage, datastore_path):
"""
GHSA-8757-69j2-hx56: a crafted backup ZIP with an absolute path in history.txt must not
expose arbitrary local files through the preview or API endpoints.
Attack chain:
1. Attacker creates a backup ZIP with a malicious history.txt containing an absolute
path (e.g. /etc/passwd) as a snapshot reference.
2. Victim restores the backup.
3. Attacker reads the targeted file via the Preview page.
The fix ensures history entries are always resolved to os.path.basename() joined with
the watch's data_dir, and rejects entries that escape that directory.
"""
set_original_response(datastore_path=datastore_path)
datastore = live_server.app.config['DATASTORE']
watch_url = url_for('test_endpoint', _external=True)
# Create a real watch and trigger a check so we have a valid backup structure
uuid = datastore.add_watch(url=watch_url)
client.get(url_for("ui.form_watch_checknow"), follow_redirects=True)
wait_for_all_checks(client)
# Download a legitimate backup to use as a template
client.get(url_for("backups.request_backup"), follow_redirects=True)
time.sleep(4)
res = client.get(url_for("backups.download_backup", filename="latest"), follow_redirects=True)
assert res.content_type == "application/zip"
# Tamper: replace the history.txt inside the backup with a malicious entry
# that points at /etc/passwd (a file that exists on any Unix system)
original_zip = ZipFile(io.BytesIO(res.data))
tampered_buf = io.BytesIO()
with ZipFile(tampered_buf, 'w', ZIP_DEFLATED) as new_zip:
for item in original_zip.infolist():
data = original_zip.read(item.filename)
# Replace the watch's history.txt with a malicious absolute path entry
if item.filename.endswith('history.txt') and uuid in item.filename:
data = b'1776969105,/etc/passwd\n'
new_zip.writestr(item, data)
tampered_buf.seek(0)
tampered_zip_data = tampered_buf.read()
# Restore the tampered backup
res = client.post(
url_for("backups.restore.backups_restore_start"),
data={
'zip_file': (io.BytesIO(tampered_zip_data), 'malicious_backup.zip'),
'include_watches': 'y',
'include_watches_replace_existing': 'y',
},
content_type='multipart/form-data',
follow_redirects=True
)
assert res.status_code == 200
time.sleep(2)
# Now try to read the /etc/passwd contents via the Preview page using the injected timestamp
res = client.get(
url_for("ui.ui_preview.preview_page", uuid=uuid) + "?timestamp=1776969105",
follow_redirects=True
)
# The preview must NOT contain typical /etc/passwd content
assert b'root:' not in res.data, \
"Preview must not expose /etc/passwd — history path traversal not blocked"
assert b'/bin/' not in res.data or b'No history' in res.data or res.status_code in [404, 500], \
"Preview must not serve arbitrary local files from a malicious history entry"
@@ -1,94 +0,0 @@
#!/usr/bin/env python3
# run from dir above changedetectionio/ dir
# python3 -m unittest changedetectionio.tests.unit.test_jq_security
import unittest
class TestJqExpressionSecurity(unittest.TestCase):
def test_blocked_builtins_raise(self):
"""Each dangerous builtin must be rejected by validate_jq_expression."""
from changedetectionio.html_tools import validate_jq_expression
blocked = [
# env access
'env',
'.foo | env',
'$ENV',
'$ENV.SECRET',
# file read via module system
'include "foo"',
'import "foo" as f',
# stdin reads
'input',
'inputs',
'[.,inputs]',
# process termination
'halt',
'halt_error(1)',
# stderr/debug leakage
'debug',
'. | debug | .foo',
'stderr',
# misc info leakage
'$__loc__',
'builtins',
'modulemeta',
'$JQ_BUILD_CONFIGURATION',
]
for expr in blocked:
with self.assertRaises(ValueError, msg=f"Expected ValueError for: {expr!r}"):
validate_jq_expression(expr)
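# Illustrative sketch only, an assumed shape rather than the real html_tools code:
# the behaviour exercised above can be implemented as a word-boundary blocklist with
# the JQ_ALLOW_RISKY_EXPRESSIONS escape hatch, roughly:
#
#     import os, re
#
#     _BLOCKED = ('env', '$ENV', 'include', 'import', 'input', 'inputs', 'halt',
#                 'halt_error', 'debug', 'stderr', '$__loc__', 'builtins',
#                 'modulemeta', '$JQ_BUILD_CONFIGURATION')
#
#     def validate_jq_expression(expression):
#         if os.getenv('JQ_ALLOW_RISKY_EXPRESSIONS', 'false').lower() == 'true':
#             return
#         for token in _BLOCKED:
#             if re.search(r'(?<![\w$])' + re.escape(token) + r'(?!\w)', expression):
#                 raise ValueError(f"Blocked jq builtin/variable: {token}")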
def test_safe_expressions_pass(self):
"""Normal jq expressions must not be blocked."""
from changedetectionio.html_tools import validate_jq_expression
safe = [
'.foo',
'.items[] | .price',
'map(select(.active)) | length',
'.[] | select(.name | test("foo"))',
'to_entries | map(.value) | add',
'[.[] | .id] | unique',
'.price | tonumber',
'if .stock > 0 then "in stock" else "out of stock" end',
]
for expr in safe:
try:
validate_jq_expression(expr)
except ValueError as e:
self.fail(f"validate_jq_expression raised ValueError for safe expression {expr!r}: {e}")
def test_allow_risky_env_var_bypasses_check(self):
"""JQ_ALLOW_RISKY_EXPRESSIONS=true must skip all blocking."""
import os
from unittest.mock import patch
from changedetectionio.html_tools import validate_jq_expression
with patch.dict(os.environ, {'JQ_ALLOW_RISKY_EXPRESSIONS': 'true'}):
# Should not raise even for the most dangerous expression
try:
validate_jq_expression('env')
validate_jq_expression('$ENV')
except ValueError as e:
self.fail(f"Should not block when JQ_ALLOW_RISKY_EXPRESSIONS=true: {e}")
def test_allow_risky_env_var_off_by_default(self):
"""Without JQ_ALLOW_RISKY_EXPRESSIONS set, blocking must be active."""
import os
from unittest.mock import patch
from changedetectionio.html_tools import validate_jq_expression
env = {k: v for k, v in os.environ.items() if k != 'JQ_ALLOW_RISKY_EXPRESSIONS'}
with patch.dict(os.environ, env, clear=True):
with self.assertRaises(ValueError):
validate_jq_expression('env')
if __name__ == '__main__':
unittest.main()
@@ -1,127 +0,0 @@
"""
Unit tests for SSRF protection in the Apprise custom HTTP notification handler.
The handler (notification/apprise_plugin/custom_handlers.py) must block requests
to private/IANA-reserved addresses unless ALLOW_IANA_RESTRICTED_ADDRESSES=true.
"""
import pytest
from unittest.mock import patch, MagicMock
def _make_meta(url: str) -> dict:
"""Build a minimal Apprise meta dict that apprise_http_custom_handler expects."""
from apprise.utils.parse import parse_url as apprise_parse_url
schema = url.split("://")[0]
parsed = apprise_parse_url(url, default_schema=schema, verify_host=False, simple=True)
parsed["url"] = url
parsed["schema"] = schema
return parsed
class TestNotificationSSRFProtection:
def test_private_ip_blocked_by_default(self):
"""Requests to private IP addresses must be blocked when ALLOW_IANA_RESTRICTED_ADDRESSES is unset."""
from changedetectionio.notification.apprise_plugin.custom_handlers import apprise_http_custom_handler
meta = _make_meta("post://192.168.1.100/webhook")
with patch("changedetectionio.notification.apprise_plugin.custom_handlers.is_private_hostname", return_value=True), \
patch.dict("os.environ", {}, clear=False):
# Remove the env var if present so the default 'false' applies
import os
os.environ.pop("ALLOW_IANA_RESTRICTED_ADDRESSES", None)
with pytest.raises(ValueError, match="ALLOW_IANA_RESTRICTED_ADDRESSES"):
apprise_http_custom_handler(
body="test body",
title="test title",
notify_type="info",
meta=meta,
)
def test_loopback_blocked_by_default(self):
"""Requests to loopback addresses (127.x.x.x) must be blocked."""
from changedetectionio.notification.apprise_plugin.custom_handlers import apprise_http_custom_handler
meta = _make_meta("post://127.0.0.1:8080/internal")
with patch("changedetectionio.notification.apprise_plugin.custom_handlers.is_private_hostname", return_value=True):
import os
os.environ.pop("ALLOW_IANA_RESTRICTED_ADDRESSES", None)
with pytest.raises(ValueError, match="ALLOW_IANA_RESTRICTED_ADDRESSES"):
apprise_http_custom_handler(
body="test body",
title="test title",
notify_type="info",
meta=meta,
)
def test_private_ip_allowed_when_env_var_set(self):
"""When ALLOW_IANA_RESTRICTED_ADDRESSES=true, requests to private IPs must go through."""
from changedetectionio.notification.apprise_plugin.custom_handlers import apprise_http_custom_handler
meta = _make_meta("post://192.168.1.100/webhook")
mock_response = MagicMock()
mock_response.raise_for_status = MagicMock()
with patch("changedetectionio.notification.apprise_plugin.custom_handlers.is_private_hostname", return_value=True), \
patch("changedetectionio.notification.apprise_plugin.custom_handlers.requests.request", return_value=mock_response) as mock_req, \
patch.dict("os.environ", {"ALLOW_IANA_RESTRICTED_ADDRESSES": "true"}):
result = apprise_http_custom_handler(
body="test body",
title="test title",
notify_type="info",
meta=meta,
)
assert result is True
mock_req.assert_called_once()
def test_public_hostname_not_blocked(self):
"""Public hostnames must not be blocked by the SSRF guard."""
from changedetectionio.notification.apprise_plugin.custom_handlers import apprise_http_custom_handler
meta = _make_meta("post://example.com/webhook")
mock_response = MagicMock()
mock_response.raise_for_status = MagicMock()
with patch("changedetectionio.notification.apprise_plugin.custom_handlers.is_private_hostname", return_value=False), \
patch("changedetectionio.notification.apprise_plugin.custom_handlers.requests.request", return_value=mock_response) as mock_req:
import os
os.environ.pop("ALLOW_IANA_RESTRICTED_ADDRESSES", None)
result = apprise_http_custom_handler(
body="test body",
title="test title",
notify_type="info",
meta=meta,
)
assert result is True
mock_req.assert_called_once()
def test_error_message_contains_env_var_hint(self):
"""The ValueError message must include the ALLOW_IANA_RESTRICTED_ADDRESSES hint."""
from changedetectionio.notification.apprise_plugin.custom_handlers import apprise_http_custom_handler
meta = _make_meta("post://10.0.0.1/api")
with patch("changedetectionio.notification.apprise_plugin.custom_handlers.is_private_hostname", return_value=True):
import os
os.environ.pop("ALLOW_IANA_RESTRICTED_ADDRESSES", None)
with pytest.raises(ValueError) as exc_info:
apprise_http_custom_handler(
body="test",
title="test",
notify_type="info",
meta=meta,
)
assert "ALLOW_IANA_RESTRICTED_ADDRESSES=true" in str(exc_info.value)
@@ -249,134 +249,5 @@ class TestDiffBuilder(unittest.TestCase):
self.assertLess(elapsed, 0.5,
f"Deepcopy too slow ({elapsed:.3f}s for 10 copies) - might be copying datastore")
class TestLLMDiffSummaryCache(unittest.TestCase):
"""Tests for get_llm_diff_summary / save_llm_diff_summary — version-pair + prompt-hash caching."""
PROMPT = 'List what changed as bullet points'
def _make_watch(self):
mock_datastore = {'settings': {'application': {}}, 'watching': {}}
watch = Watch.model(datastore_path='/tmp', __datastore=mock_datastore, default={})
watch.ensure_data_dir_exists()
return watch
def test_returns_empty_when_no_file_exists(self):
watch = self._make_watch()
assert watch.get_llm_diff_summary('1000', '2000', prompt=self.PROMPT) == ''
def test_save_and_retrieve(self):
watch = self._make_watch()
watch.save_llm_diff_summary('Price dropped to $199', '1000', '2000', prompt=self.PROMPT)
assert watch.get_llm_diff_summary('1000', '2000', prompt=self.PROMPT) == 'Price dropped to $199'
def test_different_version_pairs_are_independent(self):
watch = self._make_watch()
watch.save_llm_diff_summary('Summary A', '1000', '2000', prompt=self.PROMPT)
watch.save_llm_diff_summary('Summary B', '2000', '3000', prompt=self.PROMPT)
assert watch.get_llm_diff_summary('1000', '2000', prompt=self.PROMPT) == 'Summary A'
assert watch.get_llm_diff_summary('2000', '3000', prompt=self.PROMPT) == 'Summary B'
def test_unknown_pair_returns_empty(self):
watch = self._make_watch()
watch.save_llm_diff_summary('Summary A', '1000', '2000', prompt=self.PROMPT)
assert watch.get_llm_diff_summary('9999', '8888', prompt=self.PROMPT) == ''
def test_changed_prompt_is_a_cache_miss(self):
"""Changing the prompt must invalidate the cached summary for the same version pair."""
watch = self._make_watch()
watch.save_llm_diff_summary('Old summary', '1000', '2000', prompt='original prompt')
# Different prompt → different hash → different filename → miss
assert watch.get_llm_diff_summary('1000', '2000', prompt='new different prompt') == ''
def test_file_named_by_versions_and_prompt_hash(self):
"""Cache file must be named change-summary-{from}-to-{to}-{hash}.txt."""
import hashlib
watch = self._make_watch()
prompt = 'my summary prompt'
watch.save_llm_diff_summary('Test summary', '1776000000', '1776001000', prompt=prompt)
prompt_hash = hashlib.md5(prompt.encode()).hexdigest()[:8]
expected_path = os.path.join(
watch.data_dir,
f'change-summary-1776000000-to-1776001000-{prompt_hash}.txt'
)
assert os.path.isfile(expected_path), \
f"Expected cache file not found: {expected_path}"
with open(expected_path, 'r') as f:
assert f.read().strip() == 'Test summary'
def test_overwrite_same_pair_and_prompt(self):
watch = self._make_watch()
watch.save_llm_diff_summary('First summary', '1000', '2000', prompt=self.PROMPT)
watch.save_llm_diff_summary('Updated summary', '1000', '2000', prompt=self.PROMPT)
assert watch.get_llm_diff_summary('1000', '2000', prompt=self.PROMPT) == 'Updated summary'
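# Illustrative sketch only, derived from the filename assertion in
# test_file_named_by_versions_and_prompt_hash above and not the model's exact code
# (llm_summary_cache_path is a placeholder name): the cache key is the version pair
# plus a short md5 of the prompt, so a changed prompt is a cache miss:
#
#     import hashlib, os
#
#     def llm_summary_cache_path(data_dir, from_version, to_version, prompt):
#         prompt_hash = hashlib.md5(prompt.encode()).hexdigest()[:8]
#         return os.path.join(data_dir,
#                             f'change-summary-{from_version}-to-{to_version}-{prompt_hash}.txt')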
class TestHistoryPathTraversal(unittest.TestCase):
"""GHSA-8757-69j2-hx56: history.txt must not allow reads outside the watch data dir."""
def _make_watch(self):
mock_datastore = {'settings': {'application': {}}, 'watching': {}}
watch = Watch.model(datastore_path='/tmp', __datastore=mock_datastore, default={})
watch.ensure_data_dir_exists()
return watch
def _write_history_txt(self, watch, lines):
"""Directly write raw lines to history.txt to simulate a restored backup."""
fname = os.path.join(watch.data_dir, watch.history_index_filename)
with open(fname, 'w', encoding='utf-8') as f:
f.writelines(lines)
def test_absolute_path_in_history_is_rejected(self):
"""An absolute path like /etc/passwd must not appear in history."""
watch = self._make_watch()
self._write_history_txt(watch, ['1000000000,/etc/passwd\n'])
history = watch.history
self.assertEqual(history, {}, "Absolute path entry must be rejected")
def test_traversal_path_in_history_is_rejected(self):
"""A relative traversal path like ../../etc/passwd must not appear in history."""
watch = self._make_watch()
self._write_history_txt(watch, ['1000000000,../../etc/passwd\n'])
history = watch.history
self.assertEqual(history, {}, "Path traversal entry must be rejected")
def test_normal_snapshot_entry_is_accepted(self):
"""A bare filename written by save_history_blob must still load correctly."""
import uuid as uuid_builder
watch = self._make_watch()
watch.save_history_blob(contents="hello world", timestamp=1000000000, snapshot_id=str(uuid_builder.uuid4()))
history = watch.history
self.assertEqual(len(history), 1, "Normal snapshot entry must be accepted")
self.assertTrue(
list(history.values())[0].startswith(watch.data_dir),
"Resolved path must be inside the watch data directory"
)
def test_get_history_snapshot_blocks_outside_path_directly(self):
"""get_history_snapshot(filepath=...) must raise if the path escapes data_dir."""
watch = self._make_watch()
with self.assertRaises(PermissionError):
watch.get_history_snapshot(filepath='/etc/passwd')
def test_get_history_snapshot_blocks_traversal_directly(self):
"""get_history_snapshot(filepath=...) must raise on ../../ traversal paths."""
watch = self._make_watch()
with self.assertRaises(PermissionError):
watch.get_history_snapshot(filepath=os.path.join(watch.data_dir, '../../etc/passwd'))
def test_resolved_path_stays_inside_data_dir(self):
"""All resolved history paths must reside within the watch's data_dir."""
import uuid as uuid_builder
watch = self._make_watch()
for ts in [1000000001, 1000000002, 1000000003]:
watch.save_history_blob(contents=f"content {ts}", timestamp=ts, snapshot_id=str(uuid_builder.uuid4()))
safe_dir = os.path.realpath(watch.data_dir)
for path in watch.history.values():
self.assertTrue(
os.path.realpath(path).startswith(safe_dir),
f"Path {path!r} escapes the watch data directory"
)
if __name__ == '__main__':
unittest.main()
@@ -1,35 +0,0 @@
#!/usr/bin/env python3
# run from dir above changedetectionio/ dir
# python3 -m pytest changedetectionio/tests/unit/test_xml_security.py
import pytest
from changedetectionio import html_tools
def _xxe_payload(file_path: str) -> str:
return f"""<?xml version="1.0"?>
<!DOCTYPE root [
<!ENTITY xxe SYSTEM "file://{file_path}">
]>
<root><item>&xxe;</item></root>"""
def test_xxe_not_expanded_xpath_filter(tmp_path):
"""xpath_filter must not expand external entities (CVE-2026-41895)."""
sentinel_file = tmp_path / "sentinel.txt"
sentinel = "xxe_sentinel_should_never_appear_in_output"
sentinel_file.write_text(sentinel)
result = html_tools.xpath_filter("//item", _xxe_payload(sentinel_file), is_xml=True)
assert sentinel not in result
def test_xxe_not_expanded_xpath1_filter(tmp_path):
"""xpath1_filter must not expand external entities (CVE-2026-41895)."""
sentinel_file = tmp_path / "sentinel.txt"
sentinel = "xxe_sentinel_should_never_appear_in_output"
sentinel_file.write_text(sentinel)
result = html_tools.xpath1_filter("//item", _xxe_payload(sentinel_file), is_xml=True)
assert sentinel not in result
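# Illustrative note: a common mitigation (not necessarily this project's exact fix) is to
# disable external entity resolution at parse time, for example with lxml:
#
#     from lxml import etree
#
#     parser = etree.XMLParser(resolve_entities=False, no_network=True)
#     tree = etree.fromstring(xml_bytes, parser=parser)  # &xxe; is left unexpanded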
+62 -193
@@ -1,234 +1,103 @@
# Translators Guide
# Translation Guide
This document is for contributors who write templates (HTML) and for translators who maintain `.po` files.
It exists because fragmented `msgid`s — splitting a single sentence across multiple `_()` calls — cause
systematic translation breakage across many languages. Follow the patterns here to prevent that.
## Updating Translations
---
## Terminology
- **Always use "monitor" or "watcher"** for the concept of watching a URL — never the bare word "watch",
which translates to "clock" (e.g. `hodinky` in Czech, `시계` in Korean, `時計` in Japanese).
- Use the **shortest suitable wording** for each language. If a language naturally uses the English
derivative, prefer that.
---
## Template rules: do not fragment `msgid`s
### Why fragments break translation
The GNU gettext manual is explicit on this:
> **[Entire sentences](https://www.gnu.org/software/gettext/manual/html_node/Entire-sentences.html)**:
> Translatable strings should be entire sentences. Because gender/number declension depends on other
> parts of the sentence, half-sentence *"dumb string concatenation"* breaks in many languages other than English.
> **[No string concatenation](https://www.gnu.org/software/gettext/manual/html_node/No-string-concatenation.html)**:
> Placing adjacent `_()` calls is semantically equivalent to runtime `strcat` concatenation, so the same
> guideline applies. The manual also notes that "in some languages the translator might want to swap the
> order" of components.
> **[No embedded URLs](https://www.gnu.org/software/gettext/manual/html_node/No-embedded-URLs.html)**:
> URLs should not be written directly inside `msgid`s; they should be injected via `%(name)s` placeholders
> and values passed as kwargs.
> **[No unusual markup](https://www.gnu.org/software/gettext/manual/html_node/No-unusual-markup.html)**:
> "HTML markup, however, is common enough that it's probably ok to use in translatable strings."
Fragments break differently depending on language family:
| Language family | How fragmentation breaks it |
|---|---|
| SOV (Japanese, Korean, Turkish) | Verb-final word order can't be achieved when verb and subject are in separate fragments |
| Germanic (German) | Gender/case agreement between article and noun is lost across fragment boundaries |
| Romance (French, Spanish, Italian, Portuguese) | Adjective placement, subjunctive mood, verb agreement can't be maintained |
| Slavic (Czech, Ukrainian) | Case (driven by preposition/verb relationships) is easy to get wrong |
| CJK (Chinese, Japanese, Korean) | Modifier position and SVO-vs-topic-prominent differences can't be applied at fragment level |
A past workaround was redistributing translations across adjacent fragments and using `msgstr " "` (a
single space) to suppress unused fragments. This is fragile: as soon as the same short `msgid` is reused
in a different template, the redistributed translation is applied verbatim and breaks the new context.
---
## The four correct patterns
### Pattern 1 — Inline HTML embedding
Keep markup **inside** the `msgid`. Render with `| safe`. This also lets CJK translators decide how to
handle `<i>` (see CJK section below).
```jinja
{# BAD: three fragments; CJK translators can't see the <i> at all #}
{{ _('Helps reduce changes detected caused by sites shuffling lines around, combine with') }}
<i>{{ _('check unique lines') }}</i>
{{ _('below.') }}
{# GOOD: one msgid, rendered with |safe #}
{{ _('Helps reduce changes detected caused by sites shuffling lines around, combine with <i>check unique lines</i> below.') | safe }}
```
### Pattern 2 — URL as kwarg
Pass URLs via `%(name)s` so translators can freely reorder them.
```jinja
{# BAD: URL hardcoded between three fragments #}
{{ _('Use') }}
<a target="newwindow" href="https://github.com/caronc/apprise">{{ _('AppRise Notification URLs') }}</a>
{{ _('for notification to just about any service!') }}
{# GOOD: URL passed as kwarg, <a> embedded in the msgid #}
{{ _('Use <a target="newwindow" href="%(url)s">AppRise Notification URLs</a> for notification to just about any service!',
url='https://github.com/caronc/apprise') | safe }}
```
### Pattern 3 — Literal `{{}}` escape as kwarg
Jinja2 would double-interpolate `{{token}}` inside a `_()` call. Pass it as a kwarg instead.
```jinja
{# BAD: literal {{token}} in the middle forces splitting #}
{{ _('Accepts the') }} <code>{{ '{{token}}' }}</code> {{ _('placeholders listed below') }}
{# GOOD: literal passed as kwarg; msgid stays as an entire sentence #}
{{ _('Accepts the <code>%(token)s</code> placeholders listed below', token='{{token}}') | safe }}
```
### Pattern 4 — `{% if %}` outside the `msgid`
Move conditional branches outside `_()` so each branch is a complete sentence, not a fragment.
```jinja
{# BAD: two msgid fragments plus a raw variable; SOV languages can't reorder the title relative to "URL or Title" #}
{{ _('URL or Title') }}{% if active_tag_uuid %} {{ _('in') }} '{{ active_tag.title }}'{% endif %}
{# GOOD: branch between two complete msgids; each language can freely reorder %(title)s #}
{% if active_tag_uuid %}
{{ _("URL or Title in '%(title)s'", title=active_tag.title) }}
{% else %}
{{ _('URL or Title') }}
{% endif %}
```
---
## CJK italic policy
CJK fonts typically have no true italic cut — `<i>` falls back to a mechanical slant that reduces
legibility. Now that `<i>` is inside `msgid`s, CJK translators can handle it per-locale. Apply this policy
for `ja` / `zh` / `zh_Hant_TW`:
| Context | Action |
|---|---|
| `<i>` used for general emphasis | Replace with `<strong>`, or drop if the emphasis is self-evident |
| `<strong><i>...</i></strong>` | Collapse to `<strong>...</strong>` |
| `<i>` wrapping a UI term (e.g. "check unique lines") | Wrap in locale-conventional quotation marks: 「」 for `ja`/`zh_Hant_TW`, `""` for `zh` |
---
## Translation workflow
**Always use these commands** — they read consistent settings from `setup.cfg` and produce minimal diffs:
To maintain consistency and minimize unnecessary changes in translation files, run these commands:
```bash
python setup.py extract_messages # Extract translatable strings from source
python setup.py update_catalog # Propagate new msgids to all .po files
python setup.py compile_catalog # Compile .po files to binary .mo files
python setup.py extract_messages # Extract translatable strings
python setup.py update_catalog # Update all language files
python setup.py compile_catalog # Compile to binary .mo files
```
Running `pybabel` directly without the project options causes reordering, rewrapping, and line-number
churn that makes diffs hard to review.
## Configuration
### Configuration
All translation settings are configured in **`../../setup.cfg`** (single source of truth).
All translation settings are in `setup.cfg` (single source of truth):
The configuration below is shown for reference - **edit `setup.cfg` to change settings**:
```ini
[extract_messages]
# Extract translatable strings from source code
mapping_file = babel.cfg
output_file = changedetectionio/translations/messages.pot
input_paths = changedetectionio
keywords = _ _l gettext
# Options to reduce unnecessary changes in .pot files
sort_by_file = true # Keeps entries ordered by file path
width = 120 # Consistent line width (prevents rewrapping)
add_location = file # Show file path only (not line numbers)
[update_catalog]
# Update existing .po files with new strings from .pot
# Note: 'locale' is omitted - Babel auto-discovers all catalogs in output_dir
input_file = changedetectionio/translations/messages.pot
output_dir = changedetectionio/translations
domain = messages
width = 120
# Options for consistent formatting
width = 120 # Consistent line width
no_fuzzy_matching = true # Avoids incorrect automatic matches
[compile_catalog]
# Compile .po files to .mo binary format
directory = changedetectionio/translations
domain = messages
```
---
**Key formatting options:**
- `sort_by_file = true` - Orders entries by file path (consistent ordering)
- `width = 120` - Fixed line width prevents text rewrapping
- `add_location = file` - Shows file path only, not line numbers (reduces git churn)
- `no_fuzzy_matching = true` - Prevents incorrect automatic fuzzy matches
## Multi-language fix process
## Why Use These Commands?
When you find a translation error in **any** language, you must check all others for the same `msgid`:
Running pybabel commands directly without consistent options causes:
- ❌ Entries get reordered differently each time
- ❌ Text gets rewrapped at different widths
- ❌ Line numbers change every edit (if not configured)
- ❌ Large diffs that make code review difficult
```bash
for lang in cs de en_GB en_US es fr it ja ko pt_BR tr uk zh zh_Hant_TW; do
echo "=== $lang ===" && grep -A1 'msgid "YourString"' changedetectionio/translations/$lang/LC_MESSAGES/messages.po
done
```
Using `python setup.py` commands ensures:
- ✅ Consistent ordering (by file path, not alphabetically)
- ✅ Consistent line width (120 characters, no rewrapping)
- ✅ File-only locations (no line number churn)
- ✅ No fuzzy matching (prevents incorrect auto-translations)
- ✅ Minimal diffs (only actual changes show up)
- ✅ Easier code review and git history
1. Identify every language with the same problem
2. Fix all affected `.po` files in the same session
3. Recompile: `python setup.py compile_catalog`
These commands read settings from `../../setup.cfg` automatically.
Never fix one language and move on.
## Supported Languages
---
- `cs` - Czech (Čeština)
- `de` - German (Deutsch)
- `en_GB` - English (UK)
- `en_US` - English (US)
- `fr` - French (Français)
- `it` - Italian (Italiano)
- `ja` - Japanese (日本語)
- `ko` - Korean (한국어)
- `pt_BR` - Portuguese (Brasil)
- `zh` - Chinese Simplified (中文简体)
- `zh_Hant_TW` - Chinese Traditional (繁體中文)
## Supported languages
## Adding a New Language
| Code | Language |
|---|---|
| `cs` | Czech (Čeština) |
| `de` | German (Deutsch) |
| `en_GB` | English (UK) |
| `en_US` | English (US) |
| `es` | Spanish (Español) |
| `fr` | French (Français) |
| `it` | Italian (Italiano) |
| `ja` | Japanese (日本語) |
| `ko` | Korean (한국어) |
| `pt_BR` | Portuguese (Brasil) |
| `tr` | Turkish (Türkçe) |
| `uk` | Ukrainian (Українська) |
| `zh` | Chinese Simplified (中文简体) |
| `zh_Hant_TW` | Chinese Traditional (繁體中文) |
1. Initialize the new language catalog:
```bash
pybabel init -i changedetectionio/translations/messages.pot -d changedetectionio/translations -l NEW_LANG_CODE
```
2. Compile it:
```bash
python setup.py compile_catalog
```
## Adding a new language
Babel will auto-discover the new language on subsequent translation updates.
```bash
pybabel init -i changedetectionio/translations/messages.pot \
-d changedetectionio/translations \
-l NEW_LANG_CODE
# Reset POT-Creation-Date to the sentinel so it matches the other catalogs
sed -i 's|^"POT-Creation-Date: .*\\n"$|"POT-Creation-Date: 1970-01-01 00:00+0000\\n"|' \
changedetectionio/translations/NEW_LANG_CODE/LC_MESSAGES/messages.po
python setup.py compile_catalog
```
## Translation Notes
Babel auto-discovers the new language on subsequent runs.
---
## CI linter
A GitHub Actions job (`lint-template-i18n`) checks for adjacent `{{ _(...) }}` calls on the same line
separated only by HTML — the primary symptom of fragmented `msgid`s. It enforces a declining baseline:
the count of existing violations may only go down, never up. When you fix a template, lower the
`BASELINE_LIMIT` in `.github/workflows/test-only.yml` by the number of lines you fixed.
See [issue #4074](https://github.com/dgtlmoon/changedetection.io/issues/4074) for full background and
[PR #4076](https://github.com/dgtlmoon/changedetection.io/pull/4076) for worked consolidation examples.
From CLAUDE.md:
- Always use "monitor" or "watcher" terminology (not "clock")
- Use the briefest wording suitable
- When finding issues in one language, check ALL languages for the same issue
10 file diffs suppressed because they are too large.
Some files were not shown because too many files have changed in this diff.