Compare commits

...

27 Commits

Author SHA1 Message Date
dgtlmoon
a89c30f882 adding notes 2026-02-07 03:41:02 +01:00
dgtlmoon
c6744f6969 fix test 2026-02-07 03:29:19 +01:00
dgtlmoon
01eb8f629a Adding tests and comments 2026-02-07 03:20:23 +01:00
dgtlmoon
faa7fa88cd lock fixes 2026-02-07 02:48:37 +01:00
dgtlmoon
3123bf0016 processor config fixes 2026-02-07 02:34:29 +01:00
dgtlmoon
fcadda5f09 cross platform safety 2026-02-07 02:27:02 +01:00
dgtlmoon
dc157cccd5 remove old code 2026-02-07 02:20:51 +01:00
dgtlmoon
8018742c67 remove old calls 2026-02-07 02:15:53 +01:00
dgtlmoon
e41b33269f Refactor 2026-02-07 02:01:58 +01:00
dependabot[bot]
d4d6bb2872 Bump psutil from 7.2.1 to 7.2.2 (#3844) 2026-02-06 19:55:04 +01:00
dependabot[bot]
45fb262386 Bump pyppeteer-ng from 2.0.0rc12 to 2.0.0rc13 (#3843) 2026-02-06 01:33:10 +01:00
dgtlmoon
1058debc12 Fix for When MoreThanOnePriceFound() is raised, plugins dont fire #3840 #3833 2026-02-05 20:07:47 +01:00
dgtlmoon
61b41b0b16 Rebuild translations (#3842) 2026-02-05 18:17:46 +01:00
dgtlmoon
efe3afd383 UI - Favicon use lazy load for faster rendering 2026-02-05 17:21:57 +01:00
dgtlmoon
84d26640cc Adding more tests and Watch object improvements (#3841) 2026-02-05 17:01:08 +01:00
dgtlmoon
2349344d9e Improved watch global settings handling (#3839) 2026-02-05 16:40:00 +01:00
dgtlmoon
bdc2916c07 New datastore message should be warning not critical 2026-02-05 16:25:22 +01:00
dgtlmoon
4fd477a60c Improving upgrade path 2026-02-05 13:00:01 +01:00
dgtlmoon
dc8b387f40 History length limit size option (#3834) 2026-02-05 12:29:20 +01:00
dgtlmoon
2149a6fe3b Memory improvement - Use builtin markupsafe instead of creating a jinja2 template env each time for small strings (#3836) 2026-02-05 10:07:36 +01:00
dgtlmoon
f77d2bac6d Favicon path - cache results 2026-02-05 09:39:50 +01:00
dgtlmoon
75ecd1b793 UI - Backups tab - styling fix 2026-02-05 00:18:22 +01:00
dgtlmoon
4fe2a67839 Styling fix for "backups" tab Re #3821 2026-02-04 22:42:57 +01:00
dgtlmoon
5bbbe37436 UI- Fix possible bug adding tags in quickwatch form 2026-02-04 14:27:54 +01:00
dgtlmoon
83d7ce0fcf Processor plugin improvements - Now supports creating your own processor (for example, monitor DNS changes) (#3739) 2026-02-04 14:23:08 +01:00
dependabot[bot]
6bea9909ec Bump elementpath from 5.1.0 to 5.1.1 (#3799) 2026-02-04 11:49:35 +01:00
dgtlmoon
1aabf967ef Puppeteer and Playwright browser close/shutdown improvements (#3830) 2026-02-03 11:14:51 +01:00
73 changed files with 3051 additions and 1341 deletions

View File

@@ -395,6 +395,29 @@ jobs:
cd changedetectionio
./run_custom_browser_url_tests.sh
processor-plugin-tests:
runs-on: ubuntu-latest
needs: build
timeout-minutes: 20
env:
PYTHON_VERSION: ${{ inputs.python-version }}
steps:
- uses: actions/checkout@v6
- name: Download Docker image artifact
uses: actions/download-artifact@v7
with:
name: test-changedetectionio-${{ env.PYTHON_VERSION }}
path: /tmp
- name: Load Docker image
run: |
docker load -i /tmp/test-changedetectionio.tar
- name: Basic processor plugin registration and checks
run: |
docker run -e EXTRA_PACKAGES=changedetection.io-osint-processor test-changedetectionio bash -c 'cd changedetectionio;pytest -vvv -s tests/plugins/test_processor.py::test_check_plugin_processor'
# Container startup tests
container-tests:
runs-on: ubuntu-latest
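
The new CI job boots the freshly built container with an extra pip package and runs a single registration test. A rough sketch of the check such a test might perform, using the (class, name) tuples that processors.find_processors() yields elsewhere in this diff; the plugin processor name is illustrative only:

```python
# Hypothetical sketch of a processor-plugin registration test. Assumes the
# processors.find_processors() API seen later in this diff, which yields
# (class, name) tuples.
from changedetectionio import processors

def test_check_plugin_processor():
    names = [name for _cls, name in processors.find_processors()]
    assert 'text_json_diff' in names  # the built-in default is always present
    # With EXTRA_PACKAGES=changedetection.io-osint-processor installed, the
    # plugin's processor should also be discoverable (exact name assumed).
```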

View File

@@ -138,6 +138,15 @@ ENV LOGGER_LEVEL="$LOGGER_LEVEL"
ENV LC_ALL=en_US.UTF-8
WORKDIR /app
# Copy and set up entrypoint script for installing extra packages
COPY docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod +x /docker-entrypoint.sh
# Set entrypoint to handle EXTRA_PACKAGES env var
ENTRYPOINT ["/docker-entrypoint.sh"]
# Default command (can be overridden in docker-compose.yml)
CMD ["python", "./changedetection.py", "-d", "/datastore"]

View File

@@ -112,9 +112,9 @@ def sigshutdown_handler(_signo, _stack_frame):
from changedetectionio.flask_app import update_q, notification_q
update_q.close()
notification_q.close()
logger.debug("Janus queues closed successfully")
logger.debug("Queues closed successfully")
except Exception as e:
logger.critical(f"CRITICAL: Failed to close janus queues: {e}")
logger.critical(f"CRITICAL: Failed to close queues: {e}")
# Shutdown socketio server fast
from changedetectionio.flask_app import socketio_server
@@ -124,13 +124,9 @@ def sigshutdown_handler(_signo, _stack_frame):
except Exception as e:
logger.error(f"Error shutting down Socket.IO server: {str(e)}")
# Save data quickly - force immediate save using abstract method
try:
datastore.force_save_all()
logger.success('Fast sync to storage complete.')
except Exception as e:
logger.error(f"Error syncing to storage: {str(e)}")
# With immediate persistence, all data is already saved
logger.success('All data already persisted (immediate commits enabled).')
sys.exit()
def print_help():
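
The shutdown handler shrinks because persistence moved from "flag it dirty, flush on a timer or at exit" to "commit at the call site". A minimal, self-contained sketch of that model, with names and file layout purely illustrative:

```python
import json
import threading
from pathlib import Path

class Datastore:
    """Minimal sketch of the immediate-persistence model this PR adopts:
    every mutation is committed where it happens, so the shutdown handler
    no longer needs a force_save_all() flush."""

    def __init__(self, path):
        self.path = Path(path)
        self.lock = threading.RLock()
        self.data = {'settings': {'application': {}}}

    def commit(self):
        # Write settings to disk right away, under the lock - this replaces
        # the old needs_write / needs_write_urgent flags polled by a saver thread
        with self.lock:
            self.path.write_text(json.dumps(self.data))

ds = Datastore('/tmp/url-watches.json')
ds.data['settings']['application']['all_paused'] = True
ds.commit()  # previously: ds.needs_write_urgent = True, flushed later
```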

View File

@@ -67,7 +67,7 @@ class Notifications(Resource):
clean_urls = [url.strip() for url in notification_urls if isinstance(url, str)]
self.datastore.data['settings']['application']['notification_urls'] = clean_urls
self.datastore.needs_write = True
self.datastore.commit()
return {'notification_urls': clean_urls}, 200
@@ -95,7 +95,7 @@ class Notifications(Resource):
abort(400, message="No matching notification URLs found.")
self.datastore.data['settings']['application']['notification_urls'] = notification_urls
self.datastore.needs_write = True
self.datastore.commit()
return 'OK', 204

View File

@@ -63,9 +63,11 @@ class Tag(Resource):
if request.args.get('muted', '') == 'muted':
self.datastore.data['settings']['application']['tags'][uuid]['notification_muted'] = True
self.datastore.commit()
return "OK", 200
elif request.args.get('muted', '') == 'unmuted':
self.datastore.data['settings']['application']['tags'][uuid]['notification_muted'] = False
self.datastore.commit()
return "OK", 200
return tag
@@ -79,11 +81,13 @@ class Tag(Resource):
# Delete the tag, and any tag reference
del self.datastore.data['settings']['application']['tags'][uuid]
self.datastore.commit()
# Remove tag from all watches
for watch_uuid, watch in self.datastore.data['watching'].items():
if watch.get('tags') and uuid in watch['tags']:
watch['tags'].remove(uuid)
watch.commit()
return 'OK', 204
@@ -107,7 +111,7 @@ class Tag(Resource):
return str(e), 400
tag.update(request.json)
self.datastore.needs_write_urgent = True
self.datastore.commit()
return "OK", 200

View File

@@ -66,47 +66,46 @@ class Watch(Resource):
@validate_openapi_request('getWatch')
def get(self, uuid):
"""Get information about a single watch, recheck, pause, or mute."""
import time
from copy import deepcopy
watch = None
# Retry up to 20 times if dict is being modified
# With sleep(0), this is fast: ~200µs best case, ~20ms worst case under heavy load
for attempt in range(20):
try:
watch = deepcopy(self.datastore.data['watching'].get(uuid))
break
except RuntimeError:
# Dict changed during deepcopy, retry after yielding to scheduler
# sleep(0) releases GIL and yields - no fixed delay, just lets other threads run
if attempt < 19: # Don't yield on last attempt
time.sleep(0) # Yield to scheduler (microseconds, not milliseconds)
if not watch:
# Get watch reference first (for pause/mute operations)
watch_obj = self.datastore.data['watching'].get(uuid)
if not watch_obj:
abort(404, message='No watch exists with the UUID of {}'.format(uuid))
# Create a dict copy for JSON response (with lock for thread safety)
# This is much faster than deepcopy and doesn't copy the datastore reference
# WARNING: dict() is a SHALLOW copy - nested dicts are shared with original!
# Only safe because we only ADD scalar properties (line 97-101), never modify nested dicts
# If you need to modify nested dicts, use: from copy import deepcopy; watch = deepcopy(dict(watch_obj))
with self.datastore.lock:
watch = dict(watch_obj)
if request.args.get('recheck'):
worker_pool.queue_item_async_safe(self.update_q, queuedWatchMetaData.PrioritizedItem(priority=1, item={'uuid': uuid}))
return "OK", 200
if request.args.get('paused', '') == 'paused':
self.datastore.data['watching'].get(uuid).pause()
watch_obj.pause()
watch_obj.commit()
return "OK", 200
elif request.args.get('paused', '') == 'unpaused':
self.datastore.data['watching'].get(uuid).unpause()
watch_obj.unpause()
watch_obj.commit()
return "OK", 200
if request.args.get('muted', '') == 'muted':
self.datastore.data['watching'].get(uuid).mute()
watch_obj.mute()
watch_obj.commit()
return "OK", 200
elif request.args.get('muted', '') == 'unmuted':
self.datastore.data['watching'].get(uuid).unmute()
watch_obj.unmute()
watch_obj.commit()
return "OK", 200
# Return without history, get that via another API call
# Properties are not returned as a JSON, so add the required props manually
watch['history_n'] = watch.history_n
watch['history_n'] = watch_obj.history_n
# attr .last_changed will check for the last written text snapshot on change
watch['last_changed'] = watch.last_changed
watch['viewed'] = watch.viewed
watch['link'] = watch.link,
watch['last_changed'] = watch_obj.last_changed
watch['viewed'] = watch_obj.viewed
watch['link'] = watch_obj.link,
return watch
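
The old retry loop deep-copied the whole watch and retried on RuntimeError; the new code takes a shallow dict() copy under the datastore lock, which, as the diff's own warning notes, is only safe because the handler then adds scalar keys. A small illustration of the trade-off:

```python
import threading
from copy import deepcopy

lock = threading.RLock()
watch_obj = {'url': 'https://example.com', 'tags': ['news']}

# Shallow copy under the lock: fast, but nested objects (like 'tags') are
# still shared with the original - only safe if we merely ADD scalar keys.
with lock:
    watch = dict(watch_obj)

watch['history_n'] = 5           # OK: new scalar key on the copy only
# watch['tags'].append('x')      # NOT OK: would also mutate watch_obj['tags']

# If nested structures must change, pay for the full deep copy instead:
watch_deep = deepcopy(watch_obj)
watch_deep['tags'].append('x')   # safe: the 'tags' list was copied too
assert watch_obj['tags'] == ['news']
```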
@@ -169,58 +168,19 @@ class Watch(Resource):
# Handle processor-config-* fields separately (save to JSON, not datastore)
from changedetectionio import processors
processor_config_data = {}
regular_data = {}
for key, value in request.json.items():
if key.startswith('processor_config_'):
config_key = key.replace('processor_config_', '')
if value: # Only save non-empty values
processor_config_data[config_key] = value
else:
regular_data[key] = value
# Make a mutable copy of request.json for modification
json_data = dict(request.json)
# Extract and remove processor config fields from json_data
processor_config_data = processors.extract_processor_config_from_form_data(json_data)
# Update watch with regular (non-processor-config) fields
watch.update(regular_data)
watch.update(json_data)
watch.commit()
# Save processor config to JSON file if any config data exists
if processor_config_data:
try:
processor_name = request.json.get('processor', watch.get('processor'))
if processor_name:
# Create a processor instance to access config methods
from changedetectionio.processors import difference_detection_processor
processor_instance = difference_detection_processor(self.datastore, uuid)
# Use processor name as filename so each processor keeps its own config
config_filename = f'{processor_name}.json'
processor_instance.update_extra_watch_config(config_filename, processor_config_data)
logger.debug(f"API: Saved processor config to {config_filename}: {processor_config_data}")
# Call optional edit_hook if processor has one
try:
import importlib
edit_hook_module_name = f'changedetectionio.processors.{processor_name}.edit_hook'
try:
edit_hook = importlib.import_module(edit_hook_module_name)
logger.debug(f"API: Found edit_hook module for {processor_name}")
if hasattr(edit_hook, 'on_config_save'):
logger.info(f"API: Calling edit_hook.on_config_save for {processor_name}")
# Call hook and get updated config
updated_config = edit_hook.on_config_save(watch, processor_config_data, self.datastore)
# Save updated config back to file
processor_instance.update_extra_watch_config(config_filename, updated_config)
logger.info(f"API: Edit hook updated config: {updated_config}")
else:
logger.debug(f"API: Edit hook module found but no on_config_save function")
except ModuleNotFoundError:
logger.debug(f"API: No edit_hook module for processor {processor_name} (this is normal)")
except Exception as hook_error:
logger.error(f"API: Edit hook error (non-fatal): {hook_error}", exc_info=True)
except Exception as e:
logger.error(f"API: Failed to save processor config: {e}")
# Save processor config to JSON file
processors.save_processor_config(self.datastore, uuid, processor_config_data)
return "OK", 200
@@ -464,8 +424,14 @@ class CreateWatch(Resource):
except ValidationError as e:
return str(e), 400
# Handle processor-config-* fields separately (save to JSON, not watch)
from changedetectionio import processors
extras = copy.deepcopy(json_data)
# Extract and remove processor config fields from extras
processor_config_data = processors.extract_processor_config_from_form_data(extras)
# Because we renamed 'tag' to 'tags' but don't want to change the API (can do this in v2 of the API)
tags = None
if extras.get('tag'):
@@ -475,6 +441,10 @@ class CreateWatch(Resource):
del extras['url']
new_uuid = self.datastore.add_watch(url=url, extras=extras, tag=tags)
# Save processor config to separate JSON file
if new_uuid and processor_config_data:
processors.save_processor_config(self.datastore, new_uuid, processor_config_data)
if new_uuid:
# Don't queue because the scheduler will check that it hasn't been checked before anyway
# worker_pool.queue_item_async_safe(self.update_q, queuedWatchMetaData.PrioritizedItem(priority=1, item={'uuid': new_uuid}))

View File

@@ -12,9 +12,17 @@ schema = api_schema.build_watch_json_schema(watch_base_config)
schema_create_watch = copy.deepcopy(schema)
schema_create_watch['required'] = ['url']
del schema_create_watch['properties']['last_viewed']
# Allow processor_config_* fields (handled separately in endpoint)
schema_create_watch['patternProperties'] = {
'^processor_config_': {'type': ['string', 'number', 'boolean', 'object', 'array', 'null']}
}
schema_update_watch = copy.deepcopy(schema)
schema_update_watch['additionalProperties'] = False
# Allow processor_config_* fields (handled separately in endpoint)
schema_update_watch['patternProperties'] = {
'^processor_config_': {'type': ['string', 'number', 'boolean', 'object', 'array', 'null']}
}
# Tag schema is also based on watch_base since Tag inherits from it
schema_tag = copy.deepcopy(schema)
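
patternProperties is what lets a strict additionalProperties: False schema still accept arbitrary processor_config_* keys: properties matched by a pattern do not count as "additional". A quick demonstration with the jsonschema package, schema reduced for brevity:

```python
from jsonschema import ValidationError, validate

schema = {
    'type': 'object',
    'properties': {'url': {'type': 'string'}},
    'additionalProperties': False,
    # Keys matching the pattern are exempt from additionalProperties: False
    'patternProperties': {
        '^processor_config_': {'type': ['string', 'number', 'boolean',
                                        'object', 'array', 'null']}
    },
}

validate({'url': 'https://example.com', 'processor_config_zone': 'A'}, schema)

try:
    validate({'url': 'https://example.com', 'unknown_field': 1}, schema)
except ValidationError:
    pass  # still rejected: only processor_config_* passes the strict schema
else:
    raise AssertionError('unknown_field should have been rejected')
```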

View File

@@ -102,8 +102,7 @@ def construct_blueprint(datastore: ChangeDetectionStore):
flash(gettext("Maximum number of backups reached, please remove some"), "error")
return redirect(url_for('backups.index'))
# Be sure we're written fresh - force immediate save using abstract method
datastore.force_save_all()
# With immediate persistence, all data is already saved
zip_thread = threading.Thread(
target=create_backup,
args=(datastore.datastore_path, datastore.data.get("watching")),

View File

@@ -3,10 +3,10 @@
{% from '_helpers.html' import render_simple_field, render_field %}
<div class="edit-form">
<div class="box-wrap inner">
<h4>{{ _('Backups') }}</h4>
<h2>{{ _('Backups') }}</h2>
{% if backup_running %}
<p>
<strong>{{ _('A backup is running!') }}</strong>
<span class="spinner"></span>&nbsp;<strong>{{ _('A backup is running!') }}</strong>
</p>
{% endif %}
<p>

View File

@@ -26,8 +26,9 @@ def construct_blueprint(datastore: ChangeDetectionStore, update_q, queuedWatchMe
# URL List import
if request.values.get('urls') and len(request.values.get('urls').strip()):
# Import and push into the queue for immediate update check
from changedetectionio import processors
importer_handler = import_url_list()
importer_handler.run(data=request.values.get('urls'), flash=flash, datastore=datastore, processor=request.values.get('processor', 'text_json_diff'))
importer_handler.run(data=request.values.get('urls'), flash=flash, datastore=datastore, processor=request.values.get('processor', processors.get_default_processor()))
logger.debug(f"Imported {len(importer_handler.new_uuids)} new UUIDs")
# Don't add to the queue because the scheduler can see that they haven't been checked and will add them to the queue
# for uuid in importer_handler.new_uuids:

View File

@@ -20,6 +20,7 @@ def construct_blueprint(datastore: ChangeDetectionStore, update_q: PriorityQueue
datastore.data['watching'][uuid]['track_ldjson_price_data'] = PRICE_DATA_TRACK_ACCEPT
datastore.data['watching'][uuid]['processor'] = 'restock_diff'
datastore.data['watching'][uuid].clear_watch()
datastore.data['watching'][uuid].commit()
worker_pool.queue_item_async_safe(update_q, queuedWatchMetaData.PrioritizedItem(priority=1, item={'uuid': uuid}))
return redirect(url_for("watchlist.index"))
@@ -27,6 +28,7 @@ def construct_blueprint(datastore: ChangeDetectionStore, update_q: PriorityQueue
@price_data_follower_blueprint.route("/<string:uuid>/reject", methods=['GET'])
def reject(uuid):
datastore.data['watching'][uuid]['track_ldjson_price_data'] = PRICE_DATA_TRACK_REJECT
datastore.data['watching'][uuid].commit()
return redirect(url_for("watchlist.index"))

View File

@@ -74,12 +74,13 @@ def construct_blueprint(datastore: ChangeDetectionStore):
del (app_update['password'])
datastore.data['settings']['application'].update(app_update)
# Handle dynamic worker count adjustment
old_worker_count = datastore.data['settings']['requests'].get('workers', 1)
new_worker_count = form.data['requests'].get('workers', 1)
datastore.data['settings']['requests'].update(form.data['requests'])
datastore.commit()
# Adjust worker count if it changed
if new_worker_count != old_worker_count:
@@ -109,13 +110,11 @@ def construct_blueprint(datastore: ChangeDetectionStore):
if not os.getenv("SALTED_PASS", False) and len(form.application.form.password.encrypted_password):
datastore.data['settings']['application']['password'] = form.application.form.password.encrypted_password
datastore.needs_write_urgent = True
datastore.commit()
flash(gettext("Password protection enabled."), 'notice')
flask_login.logout_user()
return redirect(url_for('watchlist.index'))
datastore.needs_write_urgent = True
# Also save plugin settings from the same form submission
plugin_tabs_list = get_plugin_settings_tabs()
for tab in plugin_tabs_list:
@@ -181,7 +180,7 @@ def construct_blueprint(datastore: ChangeDetectionStore):
def settings_reset_api_key():
secret = secrets.token_hex(16)
datastore.data['settings']['application']['api_access_token'] = secret
datastore.needs_write_urgent = True
datastore.commit()
flash(gettext("API Key was regenerated."))
return redirect(url_for('settings.settings_page')+'#api')
@@ -198,7 +197,7 @@ def construct_blueprint(datastore: ChangeDetectionStore):
def toggle_all_paused():
current_state = datastore.data['settings']['application'].get('all_paused', False)
datastore.data['settings']['application']['all_paused'] = not current_state
datastore.needs_write_urgent = True
datastore.commit()
if datastore.data['settings']['application']['all_paused']:
flash(gettext("Automatic scheduling paused - checks will not be queued."), 'notice')
@@ -212,7 +211,7 @@ def construct_blueprint(datastore: ChangeDetectionStore):
def toggle_all_muted():
current_state = datastore.data['settings']['application'].get('all_muted', False)
datastore.data['settings']['application']['all_muted'] = not current_state
datastore.needs_write_urgent = True
datastore.commit()
if datastore.data['settings']['application']['all_muted']:
flash(gettext("All notifications muted."), 'notice')

View File

@@ -25,7 +25,7 @@
<li class="tab"><a href="#ui-options">{{ _('UI Options') }}</a></li>
<li class="tab"><a href="#api">{{ _('API') }}</a></li>
<li class="tab"><a href="#rss">{{ _('RSS') }}</a></li>
<li class="tab"><a href="{{ url_for('backups.index') }}" class="pure-menu-link">{{ _('Backups') }}</a></li>
<li class="tab"><a href="{{ url_for('backups.index') }}">{{ _('Backups') }}</a></li>
<li class="tab"><a href="#timedate">{{ _('Time & Date') }}</a></li>
<li class="tab"><a href="#proxies">{{ _('CAPTCHA & Proxies') }}</a></li>
{% if plugin_tabs %}
@@ -59,6 +59,14 @@
{{ _('Set to') }} <strong>0</strong> {{ _('to disable') }}
</span>
</div>
<div class="pure-control-group">
{{ render_field(form.application.form.history_snapshot_max_length, class="history_snapshot_max_length") }}
<span class="pure-form-message-inline">{{ _('Limit collection of history snapshots for each watch to this number of history items.') }}
<br>
{{ _('Set to empty to disable / no limit') }}
</span>
</div>
<div class="pure-control-group">
{% if not hide_remove_pass %}
{% if current_user.is_authenticated %}

View File

@@ -59,6 +59,7 @@ def construct_blueprint(datastore: ChangeDetectionStore):
def mute(uuid):
if datastore.data['settings']['application']['tags'].get(uuid):
datastore.data['settings']['application']['tags'][uuid]['notification_muted'] = not datastore.data['settings']['application']['tags'][uuid]['notification_muted']
datastore.commit()
return redirect(url_for('tags.tags_overview_page'))
@tags_blueprint.route("/delete/<string:uuid>", methods=['GET'])
@@ -76,6 +77,7 @@ def construct_blueprint(datastore: ChangeDetectionStore):
for watch_uuid, watch in datastore.data['watching'].items():
if watch.get('tags') and tag_uuid in watch['tags']:
watch['tags'].remove(tag_uuid)
watch.commit()
removed_count += 1
logger.info(f"Background: Tag {tag_uuid} removed from {removed_count} watches")
except Exception as e:
@@ -98,6 +100,7 @@ def construct_blueprint(datastore: ChangeDetectionStore):
for watch_uuid, watch in datastore.data['watching'].items():
if watch.get('tags') and tag_uuid in watch['tags']:
watch['tags'].remove(tag_uuid)
watch.commit()
unlinked_count += 1
logger.info(f"Background: Tag {tag_uuid} unlinked from {unlinked_count} watches")
except Exception as e:
@@ -114,6 +117,7 @@ def construct_blueprint(datastore: ChangeDetectionStore):
def delete_all():
# Clear all tags from settings immediately
datastore.data['settings']['application']['tags'] = {}
datastore.commit()
# Clear tags from all watches in background thread to avoid blocking
def clear_all_tags_background():
@@ -122,6 +126,7 @@ def construct_blueprint(datastore: ChangeDetectionStore):
try:
for watch_uuid, watch in datastore.data['watching'].items():
watch['tags'] = []
watch.commit()
cleared_count += 1
logger.info(f"Background: Cleared tags from {cleared_count} watches")
except Exception as e:
@@ -216,7 +221,7 @@ def construct_blueprint(datastore: ChangeDetectionStore):
datastore.data['settings']['application']['tags'][uuid].update(form.data)
datastore.data['settings']['application']['tags'][uuid]['processor'] = 'restock_diff'
datastore.needs_write_urgent = True
datastore.commit()
flash(gettext("Updated"))
return redirect(url_for('tags.tags_overview_page'))

View File

@@ -24,7 +24,7 @@ def _handle_operations(op, uuids, datastore, worker_pool, update_q, queuedWatchM
for uuid in uuids:
if datastore.data['watching'].get(uuid):
datastore.data['watching'][uuid]['paused'] = True
datastore.mark_watch_dirty(uuid)
datastore.data['watching'][uuid].commit()
if emit_flash:
flash(gettext("{} watches paused").format(len(uuids)))
@@ -32,7 +32,7 @@ def _handle_operations(op, uuids, datastore, worker_pool, update_q, queuedWatchM
for uuid in uuids:
if datastore.data['watching'].get(uuid):
datastore.data['watching'][uuid.strip()]['paused'] = False
datastore.mark_watch_dirty(uuid)
datastore.data['watching'][uuid].commit()
if emit_flash:
flash(gettext("{} watches unpaused").format(len(uuids)))
@@ -47,7 +47,7 @@ def _handle_operations(op, uuids, datastore, worker_pool, update_q, queuedWatchM
for uuid in uuids:
if datastore.data['watching'].get(uuid):
datastore.data['watching'][uuid]['notification_muted'] = True
datastore.mark_watch_dirty(uuid)
datastore.data['watching'][uuid].commit()
if emit_flash:
flash(gettext("{} watches muted").format(len(uuids)))
@@ -55,7 +55,7 @@ def _handle_operations(op, uuids, datastore, worker_pool, update_q, queuedWatchM
for uuid in uuids:
if datastore.data['watching'].get(uuid):
datastore.data['watching'][uuid]['notification_muted'] = False
datastore.mark_watch_dirty(uuid)
datastore.data['watching'][uuid].commit()
if emit_flash:
flash(gettext("{} watches un-muted").format(len(uuids)))
@@ -71,7 +71,7 @@ def _handle_operations(op, uuids, datastore, worker_pool, update_q, queuedWatchM
for uuid in uuids:
if datastore.data['watching'].get(uuid):
datastore.data['watching'][uuid]["last_error"] = False
datastore.mark_watch_dirty(uuid)
datastore.data['watching'][uuid].commit()
if emit_flash:
flash(gettext("{} watches errors cleared").format(len(uuids)))
@@ -92,6 +92,7 @@ def _handle_operations(op, uuids, datastore, worker_pool, update_q, queuedWatchM
datastore.data['watching'][uuid]['notification_body'] = None
datastore.data['watching'][uuid]['notification_urls'] = []
datastore.data['watching'][uuid]['notification_format'] = USE_SYSTEM_DEFAULT_NOTIFICATION_FORMAT_FOR_WATCH
datastore.data['watching'][uuid].commit()
if emit_flash:
flash(gettext("{} watches set to use default notification settings").format(len(uuids)))
@@ -107,6 +108,7 @@ def _handle_operations(op, uuids, datastore, worker_pool, update_q, queuedWatchM
datastore.data['watching'][uuid]['tags'] = []
datastore.data['watching'][uuid]['tags'].append(tag_uuid)
datastore.data['watching'][uuid].commit()
if emit_flash:
flash(gettext("{} watches were tagged").format(len(uuids)))

View File

@@ -100,23 +100,21 @@ def construct_blueprint(datastore: ChangeDetectionStore):
# Get the processor type for this watch
processor_name = watch.get('processor', 'text_json_diff')
try:
# Try to import the processor's difference module
processor_module = importlib.import_module(f'changedetectionio.processors.{processor_name}.difference')
# Try to get the processor's difference module (works for both built-in and plugin processors)
from changedetectionio.processors import get_processor_submodule
processor_module = get_processor_submodule(processor_name, 'difference')
# Call the processor's render() function
if hasattr(processor_module, 'render'):
return processor_module.render(
watch=watch,
datastore=datastore,
request=request,
url_for=url_for,
render_template=render_template,
flash=flash,
redirect=redirect
)
except (ImportError, ModuleNotFoundError) as e:
logger.warning(f"Processor {processor_name} does not have a difference module, falling back to text_json_diff: {e}")
# Call the processor's render() function
if processor_module and hasattr(processor_module, 'render'):
return processor_module.render(
watch=watch,
datastore=datastore,
request=request,
url_for=url_for,
render_template=render_template,
flash=flash,
redirect=redirect
)
# Fallback: if processor doesn't have difference module, use text_json_diff as default
from changedetectionio.processors.text_json_diff.difference import render as default_render
@@ -156,23 +154,21 @@ def construct_blueprint(datastore: ChangeDetectionStore):
# Get the processor type for this watch
processor_name = watch.get('processor', 'text_json_diff')
try:
# Try to import the processor's extract module
processor_module = importlib.import_module(f'changedetectionio.processors.{processor_name}.extract')
# Try to get the processor's extract module (works for both built-in and plugin processors)
from changedetectionio.processors import get_processor_submodule
processor_module = get_processor_submodule(processor_name, 'extract')
# Call the processor's render_form() function
if hasattr(processor_module, 'render_form'):
return processor_module.render_form(
watch=watch,
datastore=datastore,
request=request,
url_for=url_for,
render_template=render_template,
flash=flash,
redirect=redirect
)
except (ImportError, ModuleNotFoundError) as e:
logger.warning(f"Processor {processor_name} does not have an extract module, falling back to base extractor: {e}")
# Call the processor's render_form() function
if processor_module and hasattr(processor_module, 'render_form'):
return processor_module.render_form(
watch=watch,
datastore=datastore,
request=request,
url_for=url_for,
render_template=render_template,
flash=flash,
redirect=redirect
)
# Fallback: if processor doesn't have extract module, use base processors.extract as default
from changedetectionio.processors.extract import render_form as default_render_form
@@ -212,24 +208,22 @@ def construct_blueprint(datastore: ChangeDetectionStore):
# Get the processor type for this watch
processor_name = watch.get('processor', 'text_json_diff')
try:
# Try to import the processor's extract module
processor_module = importlib.import_module(f'changedetectionio.processors.{processor_name}.extract')
# Try to get the processor's extract module (works for both built-in and plugin processors)
from changedetectionio.processors import get_processor_submodule
processor_module = get_processor_submodule(processor_name, 'extract')
# Call the processor's process_extraction() function
if hasattr(processor_module, 'process_extraction'):
return processor_module.process_extraction(
watch=watch,
datastore=datastore,
request=request,
url_for=url_for,
make_response=make_response,
send_from_directory=send_from_directory,
flash=flash,
redirect=redirect
)
except (ImportError, ModuleNotFoundError) as e:
logger.warning(f"Processor {processor_name} does not have an extract module, falling back to base extractor: {e}")
# Call the processor's process_extraction() function
if processor_module and hasattr(processor_module, 'process_extraction'):
return processor_module.process_extraction(
watch=watch,
datastore=datastore,
request=request,
url_for=url_for,
make_response=make_response,
send_from_directory=send_from_directory,
flash=flash,
redirect=redirect
)
# Fallback: if processor doesn't have extract module, use base processors.extract as default
from changedetectionio.processors.extract import process_extraction as default_process_extraction
@@ -279,38 +273,33 @@ def construct_blueprint(datastore: ChangeDetectionStore):
# Get the processor type for this watch
processor_name = watch.get('processor', 'text_json_diff')
try:
# Try to import the processor's difference module
processor_module = importlib.import_module(f'changedetectionio.processors.{processor_name}.difference')
# Try to get the processor's difference module (works for both built-in and plugin processors)
from changedetectionio.processors import get_processor_submodule
processor_module = get_processor_submodule(processor_name, 'difference')
# Call the processor's get_asset() function
if hasattr(processor_module, 'get_asset'):
result = processor_module.get_asset(
asset_name=asset_name,
watch=watch,
datastore=datastore,
request=request
)
# Call the processor's get_asset() function
if processor_module and hasattr(processor_module, 'get_asset'):
result = processor_module.get_asset(
asset_name=asset_name,
watch=watch,
datastore=datastore,
request=request
)
if result is None:
from flask import abort
abort(404, description=f"Asset '{asset_name}' not found")
binary_data, content_type, cache_control = result
response = make_response(binary_data)
response.headers['Content-Type'] = content_type
if cache_control:
response.headers['Cache-Control'] = cache_control
return response
else:
logger.warning(f"Processor {processor_name} does not implement get_asset()")
if result is None:
from flask import abort
abort(404, description=f"Processor '{processor_name}' does not support assets")
abort(404, description=f"Asset '{asset_name}' not found")
except (ImportError, ModuleNotFoundError) as e:
logger.warning(f"Processor {processor_name} does not have a difference module: {e}")
binary_data, content_type, cache_control = result
response = make_response(binary_data)
response.headers['Content-Type'] = content_type
if cache_control:
response.headers['Cache-Control'] = cache_control
return response
else:
logger.warning(f"Processor {processor_name} does not implement get_asset()")
from flask import abort
abort(404, description=f"Processor '{processor_name}' not found")
abort(404, description=f"Processor '{processor_name}' does not support assets")
return diff_blueprint
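
The repeated try/import blocks converge on processors.get_processor_submodule(), which resolves a processor's submodule whether the processor is built-in or ships as a plugin, and returns None so callers can fall back to the text_json_diff default. A sketch consistent with that contract; the plugin lookup path is an assumption:

```python
import importlib

def get_processor_submodule(processor_name: str, submodule: str):
    """Sketch of the lookup the blueprints now share: resolve e.g. the
    'difference', 'preview' or 'extract' module for a processor. Returns
    None when the submodule doesn't exist, so callers can fall back."""
    candidates = [
        # Built-in processors live under the project package
        f'changedetectionio.processors.{processor_name}.{submodule}',
        # Plugin processors as their own package (module path is an assumption)
        f'{processor_name}.{submodule}',
    ]
    for module_name in candidates:
        try:
            return importlib.import_module(module_name)
        except (ImportError, ModuleNotFoundError):
            continue
    return None
```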

View File

@@ -71,8 +71,13 @@ def construct_blueprint(datastore: ChangeDetectionStore, update_q, queuedWatchMe
processor_name = datastore.data['watching'][uuid].get('processor', '')
processor_classes = next((tpl for tpl in processors.find_processors() if tpl[1] == processor_name), None)
if not processor_classes:
flash(gettext("Cannot load the edit form for processor/plugin '{}', plugin missing?").format(processor_classes[1]), 'error')
return redirect(url_for('watchlist.index'))
flash(gettext("Could not load '{}' processor, processor plugin might be missing. Please select a different processor.").format(processor_name), 'error')
# Fall back to default processor so user can still edit and change processor
processor_classes = next((tpl for tpl in processors.find_processors() if tpl[1] == 'text_json_diff'), None)
if not processor_classes:
# If even text_json_diff is missing, something is very wrong
flash(gettext("Could not load '{}' processor, processor plugin might be missing.").format(processor_name), 'error')
return redirect(url_for('watchlist.index'))
parent_module = processors.get_parent_module(processor_classes[0])
@@ -149,58 +154,10 @@ def construct_blueprint(datastore: ChangeDetectionStore, update_q, queuedWatchMe
extra_update_obj['time_between_check'] = form.time_between_check.data
# Handle processor-config-* fields separately (save to JSON, not datastore)
processor_config_data = {}
fields_to_remove = []
for field_name, field_value in form.data.items():
if field_name.startswith('processor_config_'):
config_key = field_name.replace('processor_config_', '')
if field_value: # Only save non-empty values
processor_config_data[config_key] = field_value
fields_to_remove.append(field_name)
# Save processor config to JSON file if any config data exists
if processor_config_data:
try:
processor_name = form.data.get('processor')
# Create a processor instance to access config methods
processor_instance = processors.difference_detection_processor(datastore, uuid)
# Use processor name as filename so each processor keeps its own config
config_filename = f'{processor_name}.json'
processor_instance.update_extra_watch_config(config_filename, processor_config_data)
logger.debug(f"Saved processor config to {config_filename}: {processor_config_data}")
# Call optional edit_hook if processor has one
try:
# Try to import the edit_hook module from the processor package
import importlib
edit_hook_module_name = f'changedetectionio.processors.{processor_name}.edit_hook'
try:
edit_hook = importlib.import_module(edit_hook_module_name)
logger.debug(f"Found edit_hook module for {processor_name}")
if hasattr(edit_hook, 'on_config_save'):
logger.info(f"Calling edit_hook.on_config_save for {processor_name}")
watch_obj = datastore.data['watching'][uuid]
# Call hook and get updated config
updated_config = edit_hook.on_config_save(watch_obj, processor_config_data, datastore)
# Save updated config back to file
processor_instance.update_extra_watch_config(config_filename, updated_config)
logger.info(f"Edit hook updated config: {updated_config}")
else:
logger.debug(f"Edit hook module found but no on_config_save function")
except ModuleNotFoundError:
logger.debug(f"No edit_hook module for processor {processor_name} (this is normal)")
except Exception as hook_error:
logger.error(f"Edit hook error (non-fatal): {hook_error}", exc_info=True)
except Exception as e:
logger.error(f"Failed to save processor config: {e}")
# Remove processor-config-* fields from form.data before updating datastore
for field_name in fields_to_remove:
form.data.pop(field_name, None)
# Handle processor-config-* fields separately (save to JSON, not datastore)
# IMPORTANT: These must NOT be saved to url-watches.json, only to the processor-specific JSON file
processor_config_data = processors.extract_processor_config_from_form_data(form.data)
processors.save_processor_config(datastore, uuid, processor_config_data)
# Ignore text
form_ignore_text = form.ignore_text.data
@@ -240,7 +197,11 @@ def construct_blueprint(datastore: ChangeDetectionStore, update_q, queuedWatchMe
# Recast it if need be to right data Watch handler
watch_class = processors.get_custom_watch_obj_for_processor(form.data.get('processor'))
datastore.data['watching'][uuid] = watch_class(datastore_path=datastore.datastore_path, default=datastore.data['watching'][uuid])
datastore.data['watching'][uuid] = watch_class(datastore_path=datastore.datastore_path, __datastore=datastore.data, default=datastore.data['watching'][uuid])
# Save the watch immediately
datastore.data['watching'][uuid].commit()
flash(gettext("Updated watch - unpaused!") if request.args.get('unpause_on_save') else gettext("Updated watch."))
# Cleanup any browsersteps session for this watch
@@ -250,10 +211,6 @@ def construct_blueprint(datastore: ChangeDetectionStore, update_q, queuedWatchMe
except Exception as e:
logger.debug(f"Error cleaning up browsersteps session: {e}")
# Re #286 - We wait for syncing new data to disk in another thread every 60 seconds
# But in the case something is added we should save straight away
datastore.needs_write_urgent = True
# Do not queue on edit if its not within the time range
# @todo maybe it should never queue anyway on edit...
@@ -310,6 +267,13 @@ def construct_blueprint(datastore: ChangeDetectionStore, update_q, queuedWatchMe
# Get fetcher capabilities instead of hardcoded logic
capabilities = get_fetcher_capabilities(watch, datastore)
# Add processor capabilities from module
capabilities['supports_visual_selector'] = getattr(parent_module, 'supports_visual_selector', False)
capabilities['supports_text_filters_and_triggers'] = getattr(parent_module, 'supports_text_filters_and_triggers', False)
capabilities['supports_text_filters_and_triggers_elements'] = getattr(parent_module, 'supports_text_filters_and_triggers_elements', False)
capabilities['supports_request_type'] = getattr(parent_module, 'supports_request_type', False)
app_rss_token = datastore.data['settings']['application'].get('rss_access_token'),
c = [f"processor-{watch.get('processor')}"]
@@ -422,6 +386,9 @@ def construct_blueprint(datastore: ChangeDetectionStore, update_q, queuedWatchMe
s = re.sub(r'[0-9]+', r'\\d+', s)
datastore.data["watching"][uuid]['ignore_text'].append('/' + s + '/')
# Save the updated ignore_text
datastore.data["watching"][uuid].commit()
return f"<a href={url_for('ui.ui_preview.preview_page', uuid=uuid)}>Click to preview</a>"
return edit_blueprint
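
The forty-odd lines of inline config-saving and edit-hook plumbing collapse into processors.save_processor_config(). Reassembled from the code it replaces, the helper plausibly looks like the following sketch; only the calls visible in the removed code are used:

```python
import importlib
from loguru import logger

def save_processor_config(datastore, uuid, processor_config_data):
    """Sketch of the consolidated helper, reassembled from the inline code
    it replaces: persist the config to <processor_name>.json for the watch,
    then let the processor's optional edit_hook amend what was saved."""
    if not processor_config_data:
        return
    from changedetectionio.processors import difference_detection_processor

    processor_name = datastore.data['watching'][uuid].get('processor')
    instance = difference_detection_processor(datastore, uuid)
    # Use the processor name as the filename so each processor keeps its own config
    config_filename = f'{processor_name}.json'
    instance.update_extra_watch_config(config_filename, processor_config_data)

    try:
        hook = importlib.import_module(
            f'changedetectionio.processors.{processor_name}.edit_hook')
        if hasattr(hook, 'on_config_save'):
            updated = hook.on_config_save(
                datastore.data['watching'][uuid], processor_config_data, datastore)
            instance.update_extra_watch_config(config_filename, updated)
    except ModuleNotFoundError:
        pass  # no edit_hook module for this processor - that's normal
    except Exception as hook_error:
        logger.error(f"Edit hook error (non-fatal): {hook_error}")
```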

View File

@@ -38,24 +38,21 @@ def construct_blueprint(datastore: ChangeDetectionStore):
# Get the processor type for this watch
processor_name = watch.get('processor', 'text_json_diff')
try:
# Try to import the processor's preview module
import importlib
processor_module = importlib.import_module(f'changedetectionio.processors.{processor_name}.preview')
# Try to get the processor's preview module (works for both built-in and plugin processors)
from changedetectionio.processors import get_processor_submodule
processor_module = get_processor_submodule(processor_name, 'preview')
# Call the processor's render() function
if hasattr(processor_module, 'render'):
return processor_module.render(
watch=watch,
datastore=datastore,
request=request,
url_for=url_for,
render_template=render_template,
flash=flash,
redirect=redirect
)
except (ImportError, ModuleNotFoundError) as e:
logger.debug(f"Processor {processor_name} does not have a preview module, using default preview: {e}")
# Call the processor's render() function
if processor_module and hasattr(processor_module, 'render'):
return processor_module.render(
watch=watch,
datastore=datastore,
request=request,
url_for=url_for,
render_template=render_template,
flash=flash,
redirect=redirect
)
# Fallback: if processor doesn't have preview module, use default text preview
content = []
@@ -160,39 +157,33 @@ def construct_blueprint(datastore: ChangeDetectionStore):
# Get the processor type for this watch
processor_name = watch.get('processor', 'text_json_diff')
try:
# Try to import the processor's preview module
import importlib
processor_module = importlib.import_module(f'changedetectionio.processors.{processor_name}.preview')
# Try to get the processor's preview module (works for both built-in and plugin processors)
from changedetectionio.processors import get_processor_submodule
processor_module = get_processor_submodule(processor_name, 'preview')
# Call the processor's get_asset() function
if hasattr(processor_module, 'get_asset'):
result = processor_module.get_asset(
asset_name=asset_name,
watch=watch,
datastore=datastore,
request=request
)
# Call the processor's get_asset() function
if processor_module and hasattr(processor_module, 'get_asset'):
result = processor_module.get_asset(
asset_name=asset_name,
watch=watch,
datastore=datastore,
request=request
)
if result is None:
from flask import abort
abort(404, description=f"Asset '{asset_name}' not found")
binary_data, content_type, cache_control = result
response = make_response(binary_data)
response.headers['Content-Type'] = content_type
if cache_control:
response.headers['Cache-Control'] = cache_control
return response
else:
logger.warning(f"Processor {processor_name} does not implement get_asset()")
if result is None:
from flask import abort
abort(404, description=f"Processor '{processor_name}' does not support assets")
abort(404, description=f"Asset '{asset_name}' not found")
except (ImportError, ModuleNotFoundError) as e:
logger.warning(f"Processor {processor_name} does not have a preview module: {e}")
binary_data, content_type, cache_control = result
response = make_response(binary_data)
response.headers['Content-Type'] = content_type
if cache_control:
response.headers['Cache-Control'] = cache_control
return response
else:
logger.warning(f"Processor {processor_name} does not implement get_asset()")
from flask import abort
abort(404, description=f"Processor '{processor_name}' not found")
abort(404, description=f"Processor '{processor_name}' does not support assets")
return preview_blueprint
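
Both the diff and preview blueprints now share the same get_asset() contract: the processor returns a (binary_data, content_type, cache_control) tuple, or None for a 404. The response-building half of that contract, as a runnable sketch:

```python
from flask import Flask, abort, make_response

app = Flask(__name__)

def asset_response(result):
    """The asset-serving pattern these blueprints share: a processor's
    get_asset() returns (binary_data, content_type, cache_control), or
    None when the asset doesn't exist."""
    if result is None:
        abort(404, description="Asset not found")
    binary_data, content_type, cache_control = result
    response = make_response(binary_data)
    response.headers['Content-Type'] = content_type
    if cache_control:
        response.headers['Cache-Control'] = cache_control
    return response

with app.test_request_context():
    resp = asset_response((b'PNG...', 'image/png', 'max-age=300'))
    assert resp.headers['Content-Type'] == 'image/png'
```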

View File

@@ -45,14 +45,19 @@
<div class="tabs collapsable">
<ul>
<li class="tab"><a href="#general">{{ _('General') }}</a></li>
{% if capabilities.supports_request_type %}
<li class="tab"><a href="#request">{{ _('Request') }}</a></li>
{% endif %}
{% if extra_tab_content %}
<li class="tab"><a href="#extras_tab">{{ extra_tab_content }}</a></li>
{% endif %}
{% if capabilities.supports_browser_steps %}
<li class="tab"><a id="browsersteps-tab" href="#browser-steps">{{ _('Browser Steps') }}</a></li>
<!-- should goto extra forms? -->
{% if watch['processor'] == 'text_json_diff' or watch['processor'] == 'image_ssim_diff' %}
{% endif %}
{% if capabilities.supports_visual_selector %}
<li class="tab"><a id="visualselector-tab" href="#visualselector">{{ _('Visual Filter Selector') }}</a></li>
{% endif %}
{% if capabilities.supports_text_filters_and_triggers %}
<li class="tab" id="filters-and-triggers-tab"><a href="#filters-and-triggers">{{ _('Filters & Triggers') }}</a></li>
<li class="tab" id="conditions-tab"><a href="#conditions">{{ _('Conditions') }}</a></li>
{% endif %}
@@ -110,12 +115,20 @@
{{ _('Sends a notification when the filter can no longer be seen on the page, good for knowing when the page changed and your filter will not work anymore.') }}
</span>
</div>
<div class="pure-control-group">
{{ render_field(form.history_snapshot_max_length, class="history_snapshot_max_length") }}
<span class="pure-form-message-inline">{{ _('Limit collection of history snapshots for each watch to this number of history items.') }}
<br>
{{ _('Set to empty to use system settings default') }}
</span>
</div>
<div class="pure-control-group">
{{ render_ternary_field(form.use_page_title_in_list) }}
</div>
</fieldset>
</div>
{% if capabilities.supports_request_type %}
<div class="tab-pane-inner" id="request">
<div class="pure-control-group inline-radio">
{{ render_field(form.fetch_backend, class="fetch-backend") }}
@@ -203,6 +216,7 @@ Math: {{ 1 + 1 }}") }}
</div>
</fieldset>
</div>
{% endif %}
<div class="tab-pane-inner" id="browser-steps">
{% if capabilities.supports_browser_steps %}
@@ -283,8 +297,7 @@ Math: {{ 1 + 1 }}") }}
</fieldset>
</div>
{% if watch['processor'] == 'text_json_diff' or watch['processor'] == 'image_ssim_diff' %}
{% if capabilities.supports_text_filters_and_triggers %}
<div class="tab-pane-inner" id="conditions">
<script>
const verify_condition_rule_url="{{url_for('conditions.verify_condition_single_rule', watch_uuid=uuid)}}";
@@ -303,7 +316,9 @@ Math: {{ 1 + 1 }}") }}
<span id="activate-text-preview" class="pure-button pure-button-primary button-xsmall">{{ _('Activate preview') }}</span>
<div>
<div id="edit-text-filter">
<div class="pure-control-group" id="pro-tips">
{% if capabilities.supports_text_filters_and_triggers_elements %}
<div class="pure-control-group" id="pro-tips">
<strong>{{ _('Pro-tips:') }}</strong><br>
<ul>
<li>
@@ -314,8 +329,8 @@ Math: {{ 1 + 1 }}") }}
</li>
</ul>
</div>
{% include "edit/include_subtract.html" %}
{% endif %}
<div class="text-filtering border-fieldset">
<fieldset class="pure-group" id="text-filtering-type-options">
<h3>{{ _('Text filtering') }}</h3>
@@ -374,7 +389,7 @@ Math: {{ 1 + 1 }}") }}
{{ extra_form_content|safe }}
</div>
{% endif %}
{% if watch['processor'] == 'text_json_diff' or watch['processor'] == 'image_ssim_diff' %}
{% if capabilities.supports_visual_selector %}
<div class="tab-pane-inner visual-selector-ui" id="visualselector">
<img class="beta-logo" src="{{url_for('static_content', group='images', filename='beta-logo.png')}}" alt="New beta functionality">
@@ -386,7 +401,7 @@ Math: {{ 1 + 1 }}") }}
{{ _('The Visual Selector tool lets you select the') }} <i>{{ _('text') }}</i> {{ _('elements that will be used for the change detection. It automatically fills-in the filters in the "CSS/JSONPath/JQ/XPath Filters" box of the') }} <a href="#filters-and-triggers">{{ _('Filters & Triggers') }}</a> {{ _('tab. Use') }} <strong>{{ _('Shift+Click') }}</strong> {{ _('to select multiple items.') }}
</span>
{% if watch['processor'] == 'image_ssim_diff' %}
{% if watch['processor'] == 'image_ssim_diff' %} {# @todo, integrate with image_ssim_diff selector better, use some extra form ? #}
<div id="selection-mode-controls" style="margin: 10px 0; padding: 10px; background: var(--color-background-tab); border-radius: 5px;">
<label style="font-weight: 600; margin-right: 15px;">{{ _('Selection Mode:') }}</label>
<label style="margin-right: 15px;">

View File

@@ -24,8 +24,9 @@ def construct_blueprint(datastore: ChangeDetectionStore, update_q, queuedWatchMe
flash(gettext('Warning, URL {} already exists').format(url), "notice")
add_paused = request.form.get('edit_and_watch_submit_button') != None
processor = request.form.get('processor', 'text_json_diff')
new_uuid = datastore.add_watch(url=url, tag=request.form.get('tags').strip(), extras={'paused': add_paused, 'processor': processor})
from changedetectionio import processors
processor = request.form.get('processor', processors.get_default_processor())
new_uuid = datastore.add_watch(url=url, tag=request.form.get('tags','').strip(), extras={'paused': add_paused, 'processor': processor})
if new_uuid:
if add_paused:
@@ -38,4 +39,4 @@ def construct_blueprint(datastore: ChangeDetectionStore, update_q, queuedWatchMe
return redirect(url_for('watchlist.index', tag=request.args.get('tag','')))
return views_blueprint
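
Both here and in the import blueprint, the hard-coded 'text_json_diff' fallback becomes processors.get_default_processor(). Its exact behaviour isn't shown in this diff; the simplest sketch consistent with the call sites:

```python
def get_default_processor() -> str:
    """Hypothetical sketch: one place to answer 'which processor do new
    watches get?' instead of hard-coding 'text_json_diff' at every call
    site. How the real helper picks the default (plugin hook? setting?)
    is not visible in this diff."""
    return 'text_json_diff'
```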

View File

@@ -39,7 +39,7 @@ def construct_blueprint(datastore: ChangeDetectionStore, update_q, queuedWatchMe
elif op == 'mute':
datastore.data['watching'][uuid].toggle_mute()
datastore.needs_write = True
datastore.data['watching'][uuid].commit()
return redirect(url_for('watchlist.index', tag = active_tag_uuid))
# Sort by last_changed and add the uuid which is usually the key..

View File

@@ -1,5 +1,9 @@
{%- extends 'base.html' -%}
{%- block content -%}
{%- set tips = [
_("Changedetection.io can monitor more than just web-pages! See our plugins!") ~ ' <a href="https://changedetection.io/plugins">' ~ _('More info') ~ '</a>',
_("You can also add 'shared' watches.") ~ ' <a href="https://github.com/dgtlmoon/changedetection.io/wiki/Sharing-a-Watch">' ~ _('More info') ~ '</a>'
] -%}
{%- from '_helpers.html' import render_simple_field, render_field, render_nolabel_field, sort_by_title -%}
<script src="{{url_for('static_content', group='js', filename='jquery-3.6.0.min.js')}}"></script>
<script src="{{url_for('static_content', group='js', filename='watch-overview.js')}}" defer></script>
@@ -10,6 +14,46 @@
// Initialize Feather icons after the page loads
document.addEventListener('DOMContentLoaded', function() {
feather.replace();
// Intersection Observer for lazy loading favicons
// Only load favicon images when they enter the viewport
if ('IntersectionObserver' in window) {
const faviconObserver = new IntersectionObserver((entries, observer) => {
entries.forEach(entry => {
if (entry.isIntersecting) {
const img = entry.target;
const src = img.getAttribute('data-src');
if (src) {
// Load the actual favicon
img.src = src;
img.removeAttribute('data-src');
}
// Stop observing this image
observer.unobserve(img);
}
});
}, {
// Start loading slightly before the image enters viewport
rootMargin: '50px',
threshold: 0.01
});
// Observe all lazy favicon images
document.querySelectorAll('.lazy-favicon').forEach(img => {
faviconObserver.observe(img);
});
} else {
// Fallback for older browsers: load all favicons immediately
document.querySelectorAll('.lazy-favicon').forEach(img => {
const src = img.getAttribute('data-src');
if (src) {
img.src = src;
img.removeAttribute('data-src');
}
});
}
});
</script>
<style>
@@ -69,7 +113,9 @@ html[data-darkmode="true"] .watch-tag-list.tag-{{ class_name }} {
</div>
</fieldset>
<span style="color:#eee; font-size: 80%;"><img alt="{{ _('Create a shareable link') }}" style="height: 1em;display:inline-block;" src="{{url_for('static_content', group='images', filename='spread-white.svg')}}" > {{ _("Tip: You can also add 'shared' watches.") }} <a href="https://github.com/dgtlmoon/changedetection.io/wiki/Sharing-a-Watch">{{ _('More info') }}</a></span>
<span style="color:#eee; font-size: 80%;">
<strong>Tip: </strong> {{ tips | random | safe }}
</span>
</form>
</div>
<div class="box">
@@ -200,28 +246,38 @@ html[data-darkmode="true"] .watch-tag-list.tag-{{ class_name }} {
<td class="title-col inline">
<div class="flex-wrapper">
{% if 'favicons_enabled' not in ui_settings or ui_settings['favicons_enabled'] %}
<div>{# A page might have hundreds of these images; set IMG options for lazy loading, and don't set SRC if we don't have it so it doesn't fetch the placeholder #}
<img alt="Favicon thumbnail" class="favicon" loading="lazy" decoding="async" fetchpriority="low" {% if favicon %} src="{{url_for('static_content', group='favicon', filename=watch.uuid)}}" {% else %} src='data:image/svg+xml;utf8,%3Csvg xmlns="http://www.w3.org/2000/svg" width="7.087" height="7.087" viewBox="0 0 7.087 7.087"%3E%3Ccircle cx="3.543" cy="3.543" r="3.279" stroke="%23e1e1e1" stroke-width="0.45" fill="none" opacity="0.74"/%3E%3C/svg%3E' {% endif %} >
<div>
{# Intersection Observer lazy loading: store real URL in data-src, load only when visible in viewport #}
<img alt="Favicon thumbnail"
class="favicon lazy-favicon"
loading="lazy"
decoding="async"
fetchpriority="low"
{% if favicon %}
data-src="{{url_for('static_content', group='favicon', filename=watch.uuid)}}"
{% endif %}
src='data:image/svg+xml;utf8,%3Csvg xmlns="http://www.w3.org/2000/svg" width="7.087" height="7.087" viewBox="0 0 7.087 7.087"%3E%3Ccircle cx="3.543" cy="3.543" r="3.279" stroke="%23e1e1e1" stroke-width="0.45" fill="none" opacity="0.74"/%3E%3C/svg%3E'>
</div>
{% endif %}
<div>
<span class="watch-title">
{% if system_use_url_watchlist or watch.get('use_page_title_in_list') %}
{{ watch.label }}
{% else %}
{{ watch.get('title') or watch.link }}
{% endif %}
<a class="external" target="_blank" rel="noopener" href="{{ watch.link.replace('source:','') }}">&nbsp;</a>
</span>
{%- if watch['processor'] and watch['processor'] in processor_badge_texts -%}
<span class="processor-badge processor-badge-{{ watch['processor'] }}" title="{{ processor_descriptions.get(watch['processor'], watch['processor']) }}">{{ processor_badge_texts[watch['processor']] }}</span>
{%- endif -%}
<span class="watch-title">
{% if system_use_url_watchlist or watch.get('use_page_title_in_list') %}
{{ watch.label }}
{% else %}
{{ watch.get('title') or watch.link }}
{% endif %}
<a class="external" target="_blank" rel="noopener" href="{{ watch.link.replace('source:','') }}">&nbsp;</a>
</span>
<div class="error-text" style="display:none;">{{ watch.compile_error_texts(has_proxies=datastore.proxy_list)|safe }}</div>
{%- if watch['processor'] == 'text_json_diff' -%}
{%- if watch['has_ldjson_price_data'] and not watch['track_ldjson_price_data'] -%}
<div class="ldjson-price-track-offer">Switch to Restock & Price watch mode? <a href="{{url_for('price_data_follower.accept', uuid=watch.uuid)}}" class="pure-button button-xsmall">Yes</a> <a href="{{url_for('price_data_follower.reject', uuid=watch.uuid)}}" class="">No</a></div>
{%- endif -%}
{%- endif -%}
{%- if watch['processor'] and watch['processor'] in processor_badge_texts -%}
<span class="processor-badge processor-badge-{{ watch['processor'] }}" title="{{ processor_descriptions.get(watch['processor'], watch['processor']) }}">{{ processor_badge_texts[watch['processor']] }}</span>
{%- endif -%}
{%- for watch_tag_uuid, watch_tag in datastore.get_all_tags_for_watch(watch['uuid']).items() -%}
<a href="{{url_for('watchlist.index', tag=watch_tag_uuid) }}" class="watch-tag-list tag-{{ watch_tag.title|sanitize_tag_class }}">{{ watch_tag.title }}</a>
{%- endfor -%}

View File

@@ -1,3 +1,4 @@
import asyncio
import gc
import json
import os
@@ -349,12 +350,7 @@ class fetcher(Fetcher):
if self.status_code != 200 and not ignore_status_codes:
screenshot = await capture_full_page_async(self.page, screenshot_format=self.screenshot_format, watch_uuid=watch_uuid, lock_viewport_elements=self.lock_viewport_elements)
# Cleanup before raising to prevent memory leak
await self.page.close()
await context.close()
await browser.close()
# Force garbage collection to release Playwright resources immediately
gc.collect()
# Finally block will handle cleanup
raise Non200ErrorCodeReceived(url=url, status_code=self.status_code, screenshot=screenshot)
if not empty_pages_are_a_change and len((await self.page.content()).strip()) == 0:
@@ -370,12 +366,7 @@ class fetcher(Fetcher):
try:
await self.iterate_browser_steps(start_url=url)
except BrowserStepsStepException:
try:
await context.close()
await browser.close()
except Exception as e:
# Fine, could be messy situation
pass
# Finally block will handle cleanup
raise
await self.page.wait_for_timeout(extra_wait * 1000)
@@ -424,35 +415,40 @@ class fetcher(Fetcher):
raise ScreenshotUnavailable(url=url, status_code=self.status_code)
finally:
# Request garbage collection one more time before closing
# Clean up resources properly with timeouts to prevent hanging
try:
await self.page.request_gc()
except:
pass
# Clean up resources properly
try:
await self.page.request_gc()
except:
pass
if hasattr(self, 'page') and self.page:
await self.page.request_gc()
await asyncio.wait_for(self.page.close(), timeout=5.0)
logger.debug(f"Successfully closed page for {url}")
except asyncio.TimeoutError:
logger.warning(f"Timed out closing page for {url} (5s)")
except Exception as e:
logger.warning(f"Error closing page for {url}: {e}")
finally:
self.page = None
try:
await self.page.close()
except:
pass
self.page = None
if context:
await asyncio.wait_for(context.close(), timeout=5.0)
logger.debug(f"Successfully closed context for {url}")
except asyncio.TimeoutError:
logger.warning(f"Timed out closing context for {url} (5s)")
except Exception as e:
logger.warning(f"Error closing context for {url}: {e}")
finally:
context = None
try:
await context.close()
except:
pass
context = None
try:
await browser.close()
except:
pass
browser = None
if browser:
await asyncio.wait_for(browser.close(), timeout=5.0)
logger.debug(f"Successfully closed browser connection for {url}")
except asyncio.TimeoutError:
logger.warning(f"Timed out closing browser connection for {url} (5s)")
except Exception as e:
logger.warning(f"Error closing browser for {url}: {e}")
finally:
browser = None
# Force Python GC to release Playwright resources immediately
# Playwright objects can have circular references that delay cleanup
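The cleanup rewrite above wraps each close() in asyncio.wait_for() so a wedged CDP connection cannot hang the worker, downgrades failures to log warnings, and always nulls the reference in a finally block. A condensed sketch of that pattern (hypothetical helper name, not part of this diff):
```python
import asyncio
from loguru import logger

async def close_with_timeout(resource, label, url, timeout=5.0):
    """Close a Playwright page/context/browser, never hanging longer than `timeout` seconds."""
    if resource:
        try:
            await asyncio.wait_for(resource.close(), timeout=timeout)
            logger.debug(f"Successfully closed {label} for {url}")
        except asyncio.TimeoutError:
            logger.warning(f"Timed out closing {label} for {url} ({timeout}s)")
        except Exception as e:
            logger.warning(f"Error closing {label} for {url}: {e}")
    return None  # caller assigns the result: self.page = await close_with_timeout(self.page, 'page', url)
```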

View File

@@ -1,4 +1,5 @@
import asyncio
import gc
import json
import os
import websockets.exceptions
@@ -221,19 +222,36 @@ class fetcher(Fetcher):
self.browser_connection_url += f"{r}--proxy-server={proxy_url}"
async def quit(self, watch=None):
try:
await self.page.close()
del self.page
except Exception as e:
pass
watch_uuid = watch.get('uuid') if watch else 'unknown'
# Close page
try:
await self.browser.close()
del self.browser
if hasattr(self, 'page') and self.page:
await asyncio.wait_for(self.page.close(), timeout=5.0)
logger.debug(f"[{watch_uuid}] Page closed successfully")
except asyncio.TimeoutError:
logger.warning(f"[{watch_uuid}] Timed out closing page (5s)")
except Exception as e:
pass
logger.warning(f"[{watch_uuid}] Error closing page: {e}")
finally:
self.page = None
logger.info("Cleanup puppeteer complete.")
# Close browser connection
try:
if hasattr(self, 'browser') and self.browser:
await asyncio.wait_for(self.browser.close(), timeout=5.0)
logger.debug(f"[{watch_uuid}] Browser closed successfully")
except asyncio.TimeoutError:
logger.warning(f"[{watch_uuid}] Timed out closing browser (5s)")
except Exception as e:
logger.warning(f"[{watch_uuid}] Error closing browser: {e}")
finally:
self.browser = None
logger.info(f"[{watch_uuid}] Cleanup puppeteer complete")
# Force garbage collection to release resources
gc.collect()
async def fetch_page(self,
current_include_filters,
@@ -263,9 +281,11 @@ class fetcher(Fetcher):
# Connect directly using the specified browser_ws_endpoint
# @todo timeout
try:
logger.debug(f"[{watch_uuid}] Connecting to browser at {self.browser_connection_url}")
self.browser = await pyppeteer_instance.connect(browserWSEndpoint=self.browser_connection_url,
ignoreHTTPSErrors=True
)
logger.debug(f"[{watch_uuid}] Browser connected successfully")
except websockets.exceptions.InvalidStatusCode as e:
raise BrowserConnectError(msg=f"Error while trying to connect the browser, Code {e.status_code} (check your access, whitelist IP, password etc)")
except websockets.exceptions.InvalidURI:
@@ -274,7 +294,18 @@ class fetcher(Fetcher):
raise BrowserConnectError(msg=f"Error connecting to the browser - Exception '{str(e)}'")
# more reliable is to just request a new page
self.page = await self.browser.newPage()
try:
logger.debug(f"[{watch_uuid}] Creating new page")
self.page = await self.browser.newPage()
logger.debug(f"[{watch_uuid}] Page created successfully")
except Exception as e:
logger.error(f"[{watch_uuid}] Failed to create new page: {e}")
# Browser is connected but page creation failed - must cleanup browser
try:
await asyncio.wait_for(self.browser.close(), timeout=3.0)
except Exception as cleanup_error:
logger.error(f"[{watch_uuid}] Failed to cleanup browser after page creation failure: {cleanup_error}")
raise
# Add console handler to capture console.log from favicon fetcher
#self.page.on('console', lambda msg: logger.debug(f"Browser console [{msg.type}]: {msg.text}"))
@@ -343,6 +374,12 @@ class fetcher(Fetcher):
w = extra_wait - 2 if extra_wait > 4 else 2
logger.debug(f"Waiting {w} seconds before calling Page.stopLoading...")
await asyncio.sleep(w)
# Check if page still exists (might have been closed due to error during sleep)
if not self.page or not hasattr(self.page, '_client'):
logger.debug("Page already closed, skipping stopLoading")
return
logger.debug("Issuing stopLoading command...")
await self.page._client.send('Page.stopLoading')
logger.debug("stopLoading command sent!")
@@ -368,7 +405,9 @@ class fetcher(Fetcher):
asyncio.create_task(handle_frame_navigation())
response = await self.page.goto(url, timeout=0)
await asyncio.sleep(1 + extra_wait)
await self.page._client.send('Page.stopLoading')
# Check if page still exists before sending command
if self.page and hasattr(self.page, '_client'):
await self.page._client.send('Page.stopLoading')
if response:
break
@@ -437,15 +476,9 @@ class fetcher(Fetcher):
logger.debug(f"Screenshot format {self.screenshot_format}")
self.screenshot = await capture_full_page(page=self.page, screenshot_format=self.screenshot_format, watch_uuid=watch_uuid, lock_viewport_elements=self.lock_viewport_elements)
# Force aggressive memory cleanup - pyppeteer base64 decode creates temporary buffers
# Force garbage collection - pyppeteer base64 decode creates temporary buffers
import gc
gc.collect()
# Release C-level memory from base64 decode back to OS
try:
import ctypes
ctypes.CDLL('libc.so.6').malloc_trim(0)
except Exception:
pass
self.xpath_data = await self.page.evaluate(XPATH_ELEMENT_JS, {
"visualselector_xpath_selectors": visualselector_xpath_selectors,
"max_height": MAX_TOTAL_HEIGHT

View File

@@ -730,7 +730,7 @@ class quickWatchForm(Form):
url = fields.URLField(_l('URL'), validators=[validateURL()])
tags = StringTagUUID(_l('Group tag'), validators=[validators.Optional()])
watch_submit_button = SubmitField(_l('Watch'), render_kw={"class": "pure-button pure-button-primary"})
processor = RadioField(_l('Processor'), choices=lambda: processors.available_processors(), default="text_json_diff")
processor = RadioField(_l('Processor'), choices=lambda: processors.available_processors(), default=processors.get_default_processor)
edit_and_watch_submit_button = SubmitField(_l('Edit > Watch'), render_kw={"class": "pure-button pure-button-primary"})
@@ -749,7 +749,7 @@ class commonSettingsForm(Form):
notification_format = SelectField(_l('Notification format'), choices=list(valid_notification_formats.items()))
notification_title = StringField(_l('Notification Title'), default='ChangeDetection.io Notification - {{ watch_url }}', validators=[validators.Optional(), ValidateJinja2Template()])
notification_urls = StringListField(_l('Notification URL List'), validators=[validators.Optional(), ValidateAppRiseServers(), ValidateJinja2Template()])
processor = RadioField( label=_l("Processor - What do you want to achieve?"), choices=lambda: processors.available_processors(), default="text_json_diff")
processor = RadioField( label=_l("Processor - What do you want to achieve?"), choices=lambda: processors.available_processors(), default=processors.get_default_processor)
scheduler_timezone_default = StringField(_l("Default timezone for watch check scheduler"), render_kw={"list": "timezones"}, validators=[validateTimeZoneName()])
webdriver_delay = IntegerField(_l('Wait seconds before extracting text'), validators=[validators.Optional(), validators.NumberRange(min=1, message=_l("Should contain one or more seconds"))])
@@ -763,7 +763,7 @@ class commonSettingsForm(Form):
class importForm(Form):
processor = RadioField(_l('Processor'), choices=lambda: processors.available_processors(), default="text_json_diff")
processor = RadioField(_l('Processor'), choices=lambda: processors.available_processors(), default=processors.get_default_processor)
urls = TextAreaField(_l('URLs'))
xlsx_file = FileField(_l('Upload .xlsx file'), validators=[FileAllowed(['xlsx'], _l('Must be .xlsx file!'))])
file_mapping = SelectField(_l('File mapping'), [validators.DataRequired()], choices={('wachete', 'Wachete mapping'), ('custom','Custom mapping')})
@@ -837,6 +837,8 @@ class processor_text_json_diff_form(commonSettingsForm):
conditions = FieldList(FormField(ConditionFormRow), min_entries=1) # Add rule logic here
use_page_title_in_list = TernaryNoneBooleanField(_l('Use page <title> in list'), default=None)
history_snapshot_max_length = IntegerField(_l('Number of history items per watch to keep'), render_kw={"style": "width: 5em;"}, validators=[validators.Optional(), validators.NumberRange(min=2)])
def extra_tab_content(self):
return None
@@ -1034,6 +1036,8 @@ class globalSettingsApplicationForm(commonSettingsForm):
render_kw={"style": "width: 5em;"},
validators=[validators.NumberRange(min=0,
message=_l("Should contain zero or more attempts"))])
history_snapshot_max_length = IntegerField(_l('Number of history items per watch to keep'), render_kw={"style": "width: 5em;"}, validators=[validators.Optional(), validators.NumberRange(min=2)])
ui = FormField(globalSettingsApplicationUIForm)

View File

@@ -52,7 +52,13 @@ def render(template_str, **args: t.Any) -> str:
return output[:JINJA2_MAX_RETURN_PAYLOAD_SIZE]
def render_fully_escaped(content):
env = jinja2.sandbox.ImmutableSandboxedEnvironment(autoescape=True)
template = env.from_string("{{ some_html|e }}")
return template.render(some_html=content)
"""
Escape HTML content safely.
MEMORY LEAK FIX: Use markupsafe.escape() directly instead of creating
Jinja2 environments (was causing 1M+ compilations per page load).
Simpler, faster, and no concerns about environment state.
"""
from markupsafe import escape
return str(escape(content))
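For reference, markupsafe.escape() yields the same HTML-escaped output the sandboxed one-off template produced, with no per-call environment construction; a quick illustrative check:
```python
from markupsafe import escape

# One-shot escaping without building a Jinja2 environment per call
print(str(escape('<script>alert("x")</script>')))
# -> &lt;script&gt;alert(&#34;x&#34;)&lt;/script&gt;
```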

View File

@@ -46,6 +46,7 @@ class model(dict):
'filter_failure_notification_threshold_attempts': _FILTER_FAILURE_THRESHOLD_ATTEMPTS_DEFAULT,
'global_ignore_text': [], # List of text to ignore when calculating the comparison checksum
'global_subtractive_selectors': [],
'history_snapshot_max_length': None,
'ignore_whitespace': True,
'ignore_status_codes': False, #@todo implement, as ternary.
'ssim_threshold': '0.96', # Default SSIM threshold for screenshot comparison

View File

@@ -1,10 +1,52 @@
"""
Tag/Group domain model for organizing and overriding watch settings.
ARCHITECTURE NOTE: Configuration Override Hierarchy
===================================================
Tags can override Watch settings when overrides_watch=True.
Current implementation requires manual checking in processors:
for tag_uuid in watch.get('tags'):
tag = datastore['settings']['application']['tags'][tag_uuid]
if tag.get('overrides_watch'):
restock_settings = tag.get('restock_settings', {})
break
With Pydantic, this would be automatic via chain resolution:
Watch → Tag (first with overrides_watch) → Global
See: Watch.py model docstring for full Pydantic architecture explanation
See: processors/restock_diff/processor.py:184-192 for current manual implementation
"""
from changedetectionio.model import watch_base
class model(watch_base):
"""
Tag domain model - groups watches and can override their settings.
Tags inherit from watch_base to reuse all the same fields as Watch.
When overrides_watch=True, tag settings take precedence over watch settings
for all watches in this tag/group.
Fields:
overrides_watch (bool): If True, this tag's settings override watch settings
title (str): Display name for this tag/group
uuid (str): Unique identifier
... (all fields from watch_base can be set as tag-level overrides)
Resolution order when overrides_watch=True:
Watch.field → Tag.field (if overrides_watch) → Global.field
"""
def __init__(self, *arg, **kw):
# Store datastore reference (optional for Tags, but good for consistency)
self.__datastore = kw.get('__datastore')
if kw.get('__datastore'):
del kw['__datastore']
super(model, self).__init__(*arg, **kw)
self['overrides_watch'] = kw.get('default', {}).get('overrides_watch')
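A minimal sketch of the manual Watch → Tag → Global resolution order the docstring describes (hypothetical helper; assumes the dict layout shown above):
```python
def resolve_setting(watch, datastore_data, field):
    """Manual chain resolution as described in the Tag docstring (illustrative only)."""
    # 1. Watch-level value wins if set
    if watch.get(field):
        return watch[field]
    # 2. First tag with overrides_watch=True that defines the field
    tags = datastore_data['settings']['application'].get('tags', {})
    for tag_uuid in watch.get('tags', []):
        tag = tags.get(tag_uuid, {})
        if tag.get('overrides_watch') and tag.get(field):
            return tag[field]
    # 3. Global application default
    return datastore_data['settings']['application'].get(field)
```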

View File

@@ -1,3 +1,32 @@
"""
Watch domain model for change detection monitoring.
ARCHITECTURE NOTE: Configuration Override Hierarchy
===================================================
This module implements Watch objects that inherit from dict (technical debt).
The dream architecture would use Pydantic for:
1. CHAIN RESOLUTION (Watch → Tag → Global Settings)
- Current: Manual resolution scattered across codebase
- Future: @computed_field properties with automatic resolution
- Examples: resolved_fetch_backend, resolved_restock_settings, etc.
2. DATABASE BACKEND ABSTRACTION
- Current: Domain model tightly coupled to file-based JSON storage
- Future: Domain model (Pydantic) separate from persistence layer
- Enables: Easy migration to PostgreSQL, MongoDB, etc.
3. TYPE SAFETY & VALIDATION
- Current: Dict access with no compile-time checks
- Future: Type hints, IDE autocomplete, validation at boundaries
See class model docstring for detailed explanation and examples.
See: processors/restock_diff/processor.py:184-192 for manual resolution example
"""
import gc
from copy import copy
from blinker import signal
from changedetectionio.validate_url import is_safe_valid_url
@@ -13,7 +42,7 @@ from .. import jinja2_custom as safe_jinja
from ..html_tools import TRANSLATE_WHITESPACE_TABLE
FAVICON_RESAVE_THRESHOLD_SECONDS=86400
BROTLI_COMPRESS_SIZE_THRESHOLD = int(os.getenv('SNAPSHOT_BROTLI_COMPRESSION_THRESHOLD', 1024))
BROTLI_COMPRESS_SIZE_THRESHOLD = int(os.getenv('SNAPSHOT_BROTLI_COMPRESSION_THRESHOLD', 1024*20))
minimum_seconds_recheck_time = int(os.getenv('MINIMUM_SECONDS_RECHECK_TIME', 3))
mtable = {'seconds': 1, 'minutes': 60, 'hours': 3600, 'days': 86400, 'weeks': 86400 * 7}
@@ -101,6 +130,99 @@ def _brotli_save(contents, filepath, mode=None, fallback_uncompressed=False):
class model(watch_base):
"""
Watch domain model for monitoring URL changes.
Inherits from watch_base (which inherits dict) - see watch_base docstring for field documentation.
## Configuration Override Hierarchy (Chain Resolution)
The dream architecture uses a 3-level resolution chain:
Watch settings → Tag/Group settings → Global settings
Current implementation is MANUAL (see processor.py:184-192 for example):
- Processors manually check watch.get('field')
- Then loop through watch.tags to find first tag with overrides_watch=True
- Finally fall back to datastore['settings']['application']['field']
FUTURE: Pydantic-based chain resolution would enable:
```python
# Instead of manual resolution in every processor:
restock_settings = watch.get('restock_settings', {})
for tag_uuid in watch.get('tags'):
tag = datastore['settings']['application']['tags'][tag_uuid]
if tag.get('overrides_watch'):
restock_settings = tag.get('restock_settings', {})
break
# Clean computed properties with automatic resolution:
@computed_field
def resolved_restock_settings(self) -> dict:
if self.restock_settings:
return self.restock_settings
for tag_uuid in self.tags:
tag = self._datastore.get_tag(tag_uuid)
if tag.overrides_watch and tag.restock_settings:
return tag.restock_settings
return self._datastore.settings.restock_settings or {}
# Usage: watch.resolved_restock_settings (automatic, type-safe, tested once)
```
Benefits of Pydantic migration:
1. Single source of truth for resolution logic (not scattered across processors)
2. Type safety + IDE autocomplete (watch.resolved_fetch_backend vs dict navigation)
3. Database backend abstraction (domain model separate from persistence)
4. Automatic validation at boundaries
5. Self-documenting via type hints
6. Easy to test resolution independently
Resolution chain examples that would benefit:
- fetch_backend: watch → tag → global (see get_fetch_backend property)
- notification_urls: watch → tag → global
- time_between_check: watch → global (see threshold_seconds)
- restock_settings: watch → tag (see processors/restock_diff/processor.py:184-192)
- history_snapshot_max_length: watch → global (see save_history_blob:550-556)
- All processor_config_* settings could use tag overrides
## Database Backend Abstraction with Pydantic
Current: Watch inherits dict, tightly coupled to file-based JSON storage
Future: Domain model (Watch) separate from persistence layer
```python
# Domain model (database-agnostic)
class Watch(BaseModel):
uuid: str
url: str
# ... validation, business logic
# Pluggable backends
class DataStoreBackend(ABC):
def save_watch(self, watch: Watch): ...
def load_watch(self, uuid: str) -> Watch: ...
# Implementations: FileBackend, MongoBackend, PostgresBackend, etc.
```
This would enable:
- Easy migration between storage backends (file → postgres → mongodb)
- Pydantic handles serialization/deserialization automatically
- Domain logic stays clean (no storage concerns in Watch methods)
## Migration Path
Given existing codebase, incremental migration recommended:
1. Create Pydantic models alongside existing dict-based models
2. Add .to_pydantic() / .from_pydantic() bridge methods
3. Gradually migrate code to use Pydantic models
4. Remove dict inheritance once migration complete
See: watch_base docstring for technical debt discussion
See: processors/restock_diff/processor.py:184-192 for manual resolution example
See: Watch.py:550-556 for nested dict navigation that would become watch.resolved_*
"""
__newest_history_key = None
__history_n = 0
jitter_seconds = 0
@@ -109,8 +231,15 @@ class model(watch_base):
self.__datastore_path = kw.get('datastore_path')
if kw.get('datastore_path'):
del kw['datastore_path']
self.__datastore = kw.get('__datastore')
if not self.__datastore:
raise ValueError("Watch object requires '__datastore' reference - cannot access global settings without it")
if kw.get('__datastore'):
del kw['__datastore']
super(model, self).__init__(*arg, **kw)
if kw.get('default'):
self.update(kw['default'])
del kw['default']
@@ -121,6 +250,9 @@ class model(watch_base):
# Be sure the cached timestamp is ready
bump = self.history
# Note: __deepcopy__, __getstate__, and __setstate__ are inherited from watch_base
# This prevents memory leaks by sharing __datastore reference instead of copying it
@property
def viewed(self):
# Don't return viewed when last_viewed is 0 and newest_key is 0
@@ -230,8 +362,30 @@ class model(watch_base):
@property
def get_fetch_backend(self):
"""
Like just using the `fetch_backend` key but there could be some logic
:return:
Get the fetch backend for this watch with special case handling.
CHAIN RESOLUTION OPPORTUNITY:
Currently returns watch.fetch_backend directly, but doesn't implement
Watch → Tag → Global resolution chain. With Pydantic:
@computed_field
def resolved_fetch_backend(self) -> str:
# Special case: PDFs always use html_requests
if self.is_pdf:
return 'html_requests'
# Watch override
if self.fetch_backend and self.fetch_backend != 'system':
return self.fetch_backend
# Tag override (first tag with overrides_watch=True wins)
for tag_uuid in self.tags:
tag = self._datastore.get_tag(tag_uuid)
if tag.overrides_watch and tag.fetch_backend:
return tag.fetch_backend
# Global default
return self._datastore.settings.fetch_backend
"""
# Maybe also if is_image etc?
# This is because chrome/playwright wont render the PDF in the browser and we will just fetch it and use pdf2html to see the text.
@@ -423,16 +577,49 @@ class model(watch_base):
with open(filepath, 'r', encoding='utf-8', errors='ignore') as f:
return f.read()
def _write_atomic(self, dest, data):
def _write_atomic(self, dest, data, mode='wb'):
"""Write data atomically to dest using a temp file"""
if not os.path.exists(dest):
import tempfile
with tempfile.NamedTemporaryFile('wb', delete=False, dir=self.watch_data_dir) as tmp:
tmp.write(data)
tmp.flush()
os.fsync(tmp.fileno())
tmp_path = tmp.name
os.replace(tmp_path, dest)
import tempfile
with tempfile.NamedTemporaryFile(mode, delete=False, dir=self.watch_data_dir) as tmp:
tmp.write(data)
tmp.flush()
os.fsync(tmp.fileno())
tmp_path = tmp.name
os.replace(tmp_path, dest)
def history_trim(self, newest_n_items):
from pathlib import Path
# Sort by timestamp (key)
sorted_items = sorted(self.history.items(), key=lambda x: int(x[0]))
keep_part = dict(sorted_items[-newest_n_items:])
delete_part = dict(sorted_items[:-newest_n_items])
logger.info( f"[{self.get('uuid')}] Trimming history to most recent {newest_n_items} items, keeping {len(keep_part)} items deleting {len(delete_part)} items.")
if delete_part:
for item in delete_part.items():
try:
Path(item[1]).unlink(missing_ok=True)
except Exception as e:
logger.critical(f"{str(e)}")
finally:
logger.debug(f"[{self.get('uuid')}] Deleted {item[1]} history snapshot")
try:
dest = os.path.join(self.watch_data_dir, self.history_index_filename)
output = "\r\n".join(
f"{k},{Path(v).name}"
for k, v in keep_part.items()
)+"\r\n"
self._write_atomic(dest=dest, data=output, mode='w')
except Exception as e:
logger.critical(f"{str(e)}")
finally:
logger.debug(f"[{self.get('uuid')}] Updated history index {dest}")
# reimport
bump = self.history
gc.collect()
# Save some text file to the appropriate path and bump the history
# result_obj from fetch_site_status.run()
@@ -441,7 +628,6 @@ class model(watch_base):
logger.trace(f"{self.get('uuid')} - Updating {self.history_index_filename} with timestamp {timestamp}")
self.ensure_data_dir_exists()
skip_brotli = strtobool(os.getenv('DISABLE_BROTLI_TEXT_SNAPSHOT', 'False'))
# Binary data - detect file type and save without compression
@@ -501,6 +687,20 @@ class model(watch_base):
self.__newest_history_key = timestamp
self.__history_n += 1
# MANUAL CHAIN RESOLUTION: Watch → Global
# With Pydantic, this would become: maxlen = watch.resolved_history_snapshot_max_length
# @computed_field def resolved_history_snapshot_max_length(self) -> Optional[int]:
# if self.history_snapshot_max_length: return self.history_snapshot_max_length
# if tag := self._get_override_tag(): return tag.history_snapshot_max_length
# return self._datastore.settings.history_snapshot_max_length
maxlen = (
self.get('history_snapshot_max_length')
or (self.__datastore and self.__datastore['settings']['application'].get('history_snapshot_max_length'))
)
if maxlen and self.__history_n and self.__history_n > maxlen:
self.history_trim(newest_n_items=maxlen)
# @todo bump static cache of the last timestamp so we don't need to examine the file to set a proper 'viewed' status
return snapshot_fname
@@ -613,6 +813,11 @@ class model(watch_base):
try:
with open(fname, 'wb') as f:
f.write(decoded)
# Invalidate favicon filename cache
if hasattr(self, '_favicon_filename_cache'):
delattr(self, '_favicon_filename_cache')
# A signal that could trigger the socket server to update the browser also
watch_check_update = signal('watch_favicon_bump')
if watch_check_update:
@@ -629,20 +834,32 @@ class model(watch_base):
Find any favicon.* file in the current working directory
and return the contents of the newest one.
MEMORY LEAK FIX: Cache the result to avoid repeated glob.glob() operations.
glob.glob() causes millions of fnmatch allocations when called for every watch on page load.
Returns:
bytes: Contents of the newest favicon file, or None if not found.
str: Basename of the newest favicon file, or None if not found.
"""
# Check cache first (prevents 26M+ allocations from repeated glob operations)
cache_key = '_favicon_filename_cache'
if hasattr(self, cache_key):
return getattr(self, cache_key)
import glob
# Search for all favicon.* files
files = glob.glob(os.path.join(self.watch_data_dir, "favicon.*"))
if not files:
return None
result = None
else:
# Find the newest by modification time
newest_file = max(files, key=os.path.getmtime)
result = os.path.basename(newest_file)
# Find the newest by modification time
newest_file = max(files, key=os.path.getmtime)
return os.path.basename(newest_file)
# Cache the result
setattr(self, cache_key, result)
return result
def get_screenshot_as_thumbnail(self, max_age=3200):
"""Return path to a square thumbnail of the most recent screenshot.
@@ -773,6 +990,57 @@ class model(watch_base):
def toggle_mute(self):
self['notification_muted'] ^= True
def commit(self):
"""
Save this watch immediately to disk using atomic write.
Replaces the old dirty-tracking system with immediate persistence.
Uses atomic write pattern (temp file + rename) for crash safety.
Fire-and-forget: Logs errors but does not raise exceptions.
Watch data remains in memory even if save fails, so next commit will retry.
"""
from loguru import logger
if not self.__datastore:
logger.error(f"Cannot commit watch {self.get('uuid')} without datastore reference")
return
if not self.watch_data_dir:
logger.error(f"Cannot commit watch {self.get('uuid')} without datastore_path")
return
# Convert to dict for serialization, excluding processor config keys
# Processor configs are stored separately in processor-specific JSON files
# Use deepcopy to prevent mutations from affecting the original Watch object
import copy
# Acquire datastore lock to prevent concurrent modifications during copy
# Take a quick shallow snapshot under lock, then deep copy outside lock
lock = self.__datastore.lock if self.__datastore and hasattr(self.__datastore, 'lock') else None
if lock:
with lock:
snapshot = dict(self)
else:
snapshot = dict(self)
# Deep copy snapshot (slower, but done outside lock to minimize contention)
watch_dict = {k: copy.deepcopy(v) for k, v in snapshot.items() if not k.startswith('processor_config_')}
# Normalize browser_steps: if no meaningful steps, save as empty list
if not self.has_browser_steps:
watch_dict['browser_steps'] = []
# Use existing atomic write helper
from changedetectionio.store.file_saving_datastore import save_watch_atomic
try:
save_watch_atomic(self.watch_data_dir, self.get('uuid'), watch_dict)
logger.debug(f"Committed watch {self.get('uuid')}")
except Exception as e:
logger.error(f"Failed to commit watch {self.get('uuid')}: {e}")
def extra_notification_token_values(self):
# Used for providing extra tokens
# return {'widget': 555}
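Taken together with the watchlist change earlier in this compare (toggle_mute now followed by commit() instead of setting datastore.needs_write), the call-site shape becomes roughly:
```python
# Sketch of the new call-site pattern replacing dirty-flag tracking (illustrative):
watch = datastore.data['watching'][uuid]
watch.toggle_mute()
watch.commit()   # immediate atomic persistence; errors are logged, not raised
```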

View File

@@ -6,6 +6,147 @@ USE_SYSTEM_DEFAULT_NOTIFICATION_FORMAT_FOR_WATCH = 'System default'
CONDITIONS_MATCH_LOGIC_DEFAULT = 'ALL'
class watch_base(dict):
"""
Base watch domain model (inherits from dict for backward compatibility).
WARNING: This class inherits from dict, which violates proper encapsulation.
Dict inheritance is legacy technical debt that should be refactored to a proper
domain model (e.g., Pydantic BaseModel) for better type safety and validation.
TODO: Migrate to Pydantic BaseModel for:
- Type safety and IDE autocomplete
- Automatic validation
- Clear separation between domain model and serialization
- Database backend abstraction (file → postgres → mongodb)
- Configuration override chain resolution (Watch → Tag → Global)
- Immutability options
- Better testing
CHAIN RESOLUTION ARCHITECTURE:
The dream is a 3-level override hierarchy:
Watch settings → Tag/Group settings → Global settings
Current implementation: MANUAL resolution scattered across codebase
- Processors manually check watch.get('field')
- Loop through tags to find overrides_watch=True
- Fall back to datastore['settings']['application']['field']
Pydantic implementation: AUTOMATIC resolution via @computed_field
- Single source of truth for each setting's resolution logic
- Type-safe, testable, self-documenting
- Example: watch.resolved_fetch_backend (instead of nested dict navigation)
See: Watch.py model docstring for detailed Pydantic architecture plan
See: Tag.py model docstring for tag override explanation
See: processors/restock_diff/processor.py:184-192 for current manual example
Core Fields:
uuid (str): Unique identifier for this watch (auto-generated)
url (str): Target URL to monitor for changes
title (str|None): Custom display name (overrides page_title if set)
page_title (str|None): Title extracted from <title> tag of monitored page
tags (List[str]): List of tag UUIDs for categorization
tag (str): DEPRECATED - Old single-tag system, use tags instead
Check Configuration:
processor (str): Processor type ('text_json_diff', 'restock_diff', etc.)
fetch_backend (str): Fetcher to use ('system', 'html_requests', 'playwright', etc.)
method (str): HTTP method ('GET', 'POST', etc.)
headers (dict): Custom HTTP headers to send
proxy (str|None): Preferred proxy server
paused (bool): Whether change detection is paused
Scheduling:
time_between_check (dict): Check interval {'weeks': int, 'days': int, 'hours': int, 'minutes': int, 'seconds': int}
time_between_check_use_default (bool): Use global default interval if True
time_schedule_limit (dict): Weekly schedule limiting when checks can run
Structure: {
'enabled': bool,
'monday/tuesday/.../sunday': {
'enabled': bool,
'start_time': str ('HH:MM'),
'duration': {'hours': str, 'minutes': str}
}
}
Content Filtering:
include_filters (List[str]): CSS/XPath selectors to extract content
subtractive_selectors (List[str]): Selectors to remove from content
ignore_text (List[str]): Text patterns to ignore in change detection
trigger_text (List[str]): Text/regex that must be present to trigger change
text_should_not_be_present (List[str]): Text that should NOT be present
extract_text (List[str]): Regex patterns to extract specific text after filtering
Text Processing:
trim_text_whitespace (bool): Strip leading/trailing whitespace
sort_text_alphabetically (bool): Sort lines alphabetically before comparison
remove_duplicate_lines (bool): Remove duplicate lines
check_unique_lines (bool): Compare against all history for unique lines
strip_ignored_lines (bool|None): Remove lines matching ignore patterns
Change Detection Filters:
filter_text_added (bool): Include added text in change detection
filter_text_removed (bool): Include removed text in change detection
filter_text_replaced (bool): Include replaced text in change detection
Browser Automation:
browser_steps (List[dict]): Browser automation steps for JS-heavy sites
browser_steps_last_error_step (int|None): Last step that caused error
webdriver_delay (int|None): Seconds to wait after page load
webdriver_js_execute_code (str|None): JavaScript to execute before extraction
Restock Detection:
in_stock_only (bool): Only trigger on in-stock transitions
follow_price_changes (bool): Monitor price changes
has_ldjson_price_data (bool|None): Whether page has LD-JSON price data
track_ldjson_price_data (str|None): Track LD-JSON price data ('ACCEPT', 'REJECT', None)
price_change_threshold_percent (float|None): Minimum price change % to trigger
Notifications:
notification_urls (List[str]): Apprise URLs for notifications
notification_title (str|None): Custom notification title template
notification_body (str|None): Custom notification body template
notification_format (str): Notification format (e.g., 'System default', 'Text', 'HTML')
notification_muted (bool): Disable notifications for this watch
notification_screenshot (bool): Include screenshot in notifications
notification_alert_count (int): Number of notifications sent
last_notification_error (str|None): Last notification error message
body (str|None): DEPRECATED? Legacy notification body field
filter_failure_notification_send (bool): Send notification on filter failures
History & State:
date_created (int|None): Unix timestamp of watch creation
last_checked (int): Unix timestamp of last check
last_viewed (int): History snapshot key of last user view
last_error (str|bool): Last error message or False if no error
check_count (int): Total number of checks performed
fetch_time (float): Duration of last fetch in seconds
consecutive_filter_failures (int): Counter for consecutive filter match failures
previous_md5 (str|bool): MD5 hash of previous content
previous_md5_before_filters (str|bool): MD5 hash before filters applied
history_snapshot_max_length (int|None): Max history snapshots to keep (None = use global)
Conditions:
conditions (dict): Custom conditions for change detection logic
conditions_match_logic (str): Logic operator ('ALL', 'ANY') for conditions
Metadata:
content-type (str|None): Content-Type from last fetch
remote_server_reply (str|None): Server header from last response
ignore_status_codes (List[int]|None): HTTP status codes to ignore
use_page_title_in_list (bool|None): Display page title in watch list (None = use system default)
Instance Attributes (not serialized):
__datastore: Reference to parent DataStore (set externally after creation)
watch_data_dir: Filesystem path for this watch's data directory
Notes:
- Many fields default to None to distinguish "not set" from "set to default"
- When field is None, system-level defaults are used
- Processor-specific configs (e.g., processor_config_*) are NOT stored in watch.json
They are stored in separate {processor_name}.json files
- This class is used for both Watch and Tag objects (tags reuse the structure)
"""
def __init__(self, *arg, **kw):
self.update({
@@ -32,6 +173,7 @@ class watch_base(dict):
'filter_text_replaced': True,
'follow_price_changes': True,
'has_ldjson_price_data': None,
'history_snapshot_max_length': None,
'headers': {}, # Extra headers to send
'ignore_text': [], # List of text to ignore when calculating the comparison checksum
'ignore_status_codes': None,
@@ -139,4 +281,100 @@ class watch_base(dict):
super(watch_base, self).__init__(*arg, **kw)
if self.get('default'):
del self['default']
del self['default']
def __deepcopy__(self, memo):
"""
Custom deepcopy for all watch_base subclasses (Watch, Tag, etc.).
CRITICAL FIX: Prevents copying large reference objects like __datastore
which would cause exponential memory growth when Watch objects are deepcopied.
This is called by:
- api/Watch.py:76 (API endpoint)
- api/Tags.py:28 (Tags API)
- processors/base.py:26 (EVERY processor run)
- store/__init__.py:544 (clone watch)
- And other locations
"""
from copy import deepcopy
# Create new instance without calling __init__
cls = self.__class__
new_obj = cls.__new__(cls)
memo[id(self)] = new_obj
# Copy the dict data (all the settings)
for key, value in self.items():
new_obj[key] = deepcopy(value, memo)
# Copy instance attributes dynamically
# This handles Watch-specific attrs (like __datastore) and any future subclass attrs
for attr_name in dir(self):
# Skip methods, special attrs, and dict keys
if attr_name.startswith('_') and not attr_name.startswith('__'):
# This catches _model__datastore, _model__history_n, etc.
try:
attr_value = getattr(self, attr_name)
# Special handling: Share references to large objects instead of copying
# Examples: __datastore, __app_reference, __global_settings, etc.
if attr_name.endswith('__datastore') or attr_name.endswith('__app'):
# Share the reference (don't copy!) to prevent memory leaks
setattr(new_obj, attr_name, attr_value)
# Skip cache attributes - let them regenerate on demand
elif 'cache' in attr_name.lower():
pass # Don't copy caches
# Copy regular instance attributes
elif not callable(attr_value):
setattr(new_obj, attr_name, attr_value)
except AttributeError:
pass # Attribute doesn't exist in this instance
return new_obj
def __getstate__(self):
"""
Custom pickle serialization for all watch_base subclasses.
Excludes large reference objects (like __datastore) from serialization.
"""
# Get the dict data
state = dict(self)
# Collect instance attributes (excluding methods and large references)
instance_attrs = {}
for attr_name in dir(self):
if attr_name.startswith('_') and not attr_name.startswith('__'):
try:
attr_value = getattr(self, attr_name)
# Exclude large reference objects and caches from serialization
if not (attr_name.endswith('__datastore') or
attr_name.endswith('__app') or
'cache' in attr_name.lower() or
callable(attr_value)):
instance_attrs[attr_name] = attr_value
except AttributeError:
pass
if instance_attrs:
state['__instance_metadata__'] = instance_attrs
return state
def __setstate__(self, state):
"""
Custom pickle deserialization for all watch_base subclasses.
WARNING: Large reference objects (like __datastore) are NOT restored!
Caller must restore these references after unpickling if needed.
"""
# Extract metadata
metadata = state.pop('__instance_metadata__', {})
# Restore dict data
self.update(state)
# Restore instance attributes
for attr_name, attr_value in metadata.items():
setattr(self, attr_name, attr_value)
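Because __getstate__/__setstate__ deliberately exclude the __datastore reference, anything that unpickles a watch_base subclass must re-attach it afterwards. A toy stand-in demonstrating the behaviour (simplified; not the real class):
```python
import pickle

class toy(dict):
    """Toy mimic of watch_base's pickle behaviour: only dict data is serialized."""
    def __getstate__(self):
        return dict(self)          # instance attributes are dropped
    def __setstate__(self, state):
        self.update(state)

w = toy(url='https://example.com')
w._toy__datastore = object()       # stand-in for the large __datastore reference
restored = pickle.loads(pickle.dumps(w))
assert restored['url'] == 'https://example.com'
assert not hasattr(restored, '_toy__datastore')   # caller must re-attach after unpickling
```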

View File

@@ -105,6 +105,30 @@ class ChangeDetectionSpec:
"""
pass
@hookspec
def register_processor(self):
"""Register an external processor plugin.
External packages can implement this hook to register custom processors
that will be discovered alongside built-in processors.
Returns:
dict or None: Dictionary with processor information:
{
'processor_name': str, # Machine name (e.g., 'osint_recon')
'processor_module': module, # Module containing processor.py
'processor_class': class, # The perform_site_check class
'metadata': { # Optional metadata
'name': str, # Display name
'description': str, # Description
'processor_weight': int,  # Sort weight (lower = higher priority)
'list_badge_text': str, # Badge text for UI
}
}
Return None if this plugin doesn't provide a processor
"""
pass
# Set up Plugin Manager
plugin_manager = pluggy.PluginManager(PLUGIN_NAMESPACE)
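An external package's hookimpl for this spec might look roughly like the following (all names hypothetical; the marker string is assumed to equal PLUGIN_NAMESPACE):
```python
import pluggy

hookimpl = pluggy.HookimplMarker("changedetectionio")  # assumption: PLUGIN_NAMESPACE value

class MyProcessorPlugin:
    @hookimpl
    def register_processor(self):
        # Hypothetical external package layout: changedetectionio_example/processor.py
        from changedetectionio_example import processor as processor_module
        return {
            'processor_name': 'example_diff',
            'processor_module': processor_module,
            'processor_class': processor_module.perform_site_check,
            'metadata': {
                'name': 'Example diff',
                'description': 'Illustrative plugin processor',
                'processor_weight': 0,   # available_processors() adds 100 for plugins
                'list_badge_text': 'EX',
            },
        }
```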

View File

@@ -17,9 +17,11 @@ def find_sub_packages(package_name):
return [name for _, name, is_pkg in pkgutil.iter_modules(package.__path__) if is_pkg]
@lru_cache(maxsize=1)
def find_processors():
"""
Find all subclasses of DifferenceDetectionProcessor in the specified package.
Results are cached to avoid repeated discovery.
:param package_name: The name of the package to scan for processor modules.
:return: A list of (module, class) tuples.
@@ -46,6 +48,23 @@ def find_processors():
except (ModuleNotFoundError, ImportError) as e:
logger.warning(f"Failed to import module {module_name}: {e} (find_processors())")
# Discover plugin processors via pluggy
try:
from changedetectionio.pluggy_interface import plugin_manager
plugin_results = plugin_manager.hook.register_processor()
for result in plugin_results:
if result and isinstance(result, dict):
processor_module = result.get('processor_module')
processor_name = result.get('processor_name')
if processor_module and processor_name:
processors.append((processor_module, processor_name))
plugin_path = getattr(processor_module, '__file__', 'unknown location')
logger.info(f"Registered plugin processor: {processor_name} from {plugin_path}")
except Exception as e:
logger.warning(f"Error loading plugin processors: {e}")
return processors
@@ -97,54 +116,137 @@ def find_processor_module(processor_name):
return None
def get_processor_module(processor_name):
"""
Get the actual processor module (with perform_site_check class) by name.
Works for both built-in and plugin processors.
Args:
processor_name: Processor machine name (e.g., 'text_json_diff', 'osint_recon')
Returns:
module: The processor module containing perform_site_check, or None if not found
"""
processor_classes = find_processors()
processor_tuple = next((tpl for tpl in processor_classes if tpl[1] == processor_name), None)
if processor_tuple:
# Return the actual processor module (first element of tuple)
return processor_tuple[0]
return None
def get_processor_submodule(processor_name, submodule_name):
"""
Get an optional submodule from a processor (e.g., 'difference', 'extract', 'preview').
Works for both built-in and plugin processors.
Args:
processor_name: Processor machine name (e.g., 'text_json_diff', 'osint_recon')
submodule_name: Name of the submodule (e.g., 'difference', 'extract', 'preview')
Returns:
module: The submodule if it exists, or None if not found
"""
processor_classes = find_processors()
processor_tuple = next((tpl for tpl in processor_classes if tpl[1] == processor_name), None)
if not processor_tuple:
return None
processor_module = processor_tuple[0]
parent_module = get_parent_module(processor_module)
if not parent_module:
return None
# Try to import the submodule
try:
# For built-in processors: changedetectionio.processors.text_json_diff.difference
# For plugin processors: changedetectionio_osint.difference
parent_module_name = parent_module.__name__
submodule_full_name = f"{parent_module_name}.{submodule_name}"
return importlib.import_module(submodule_full_name)
except (ModuleNotFoundError, ImportError):
return None
@lru_cache(maxsize=1)
def get_plugin_processor_metadata():
"""Get metadata from plugin processors."""
metadata = {}
try:
from changedetectionio.pluggy_interface import plugin_manager
plugin_results = plugin_manager.hook.register_processor()
for result in plugin_results:
if result and isinstance(result, dict):
processor_name = result.get('processor_name')
meta = result.get('metadata', {})
if processor_name:
metadata[processor_name] = meta
except Exception as e:
logger.warning(f"Error getting plugin processor metadata: {e}")
return metadata
def available_processors():
"""
Get a list of processors by name and description for the UI elements.
Can be filtered via ALLOWED_PROCESSORS environment variable (comma-separated list).
Can be filtered via DISABLED_PROCESSORS environment variable (comma-separated list).
:return: A list :)
"""
processor_classes = find_processors()
# Check if ALLOWED_PROCESSORS env var is set
# For now we disable it; we need to make a deploy with lots of new code and this would be an overload
allowed_processors_env = os.getenv('ALLOWED_PROCESSORS', 'text_json_diff, restock_diff').strip()
allowed_processors = None
if allowed_processors_env:
# Check if DISABLED_PROCESSORS env var is set
disabled_processors_env = os.getenv('DISABLED_PROCESSORS', 'image_ssim_diff').strip()
disabled_processors = []
if disabled_processors_env:
# Parse comma-separated list and strip whitespace
allowed_processors = [p.strip() for p in allowed_processors_env.split(',') if p.strip()]
logger.info(f"ALLOWED_PROCESSORS set, filtering to: {allowed_processors}")
disabled_processors = [p.strip() for p in disabled_processors_env.split(',') if p.strip()]
logger.info(f"DISABLED_PROCESSORS set, disabling: {disabled_processors}")
available = []
plugin_metadata = get_plugin_processor_metadata()
for module, sub_package_name in processor_classes:
# Filter by allowed processors if set
if allowed_processors and sub_package_name not in allowed_processors:
logger.debug(f"Skipping processor '{sub_package_name}' (not in ALLOWED_PROCESSORS)")
# Skip disabled processors
if sub_package_name in disabled_processors:
logger.debug(f"Skipping processor '{sub_package_name}' (in DISABLED_PROCESSORS)")
continue
# Try to get the 'name' attribute from the processor module first
if hasattr(module, 'name'):
description = gettext(module.name)
# Check if this is a plugin processor
if sub_package_name in plugin_metadata:
meta = plugin_metadata[sub_package_name]
description = gettext(meta.get('name', sub_package_name))
# Plugin processors start from weight 100 to separate them from built-in processors
weight = 100 + meta.get('processor_weight', 0)
else:
# Fall back to processor_description from parent module's __init__.py
parent_module = get_parent_module(module)
if parent_module and hasattr(parent_module, 'processor_description'):
description = gettext(parent_module.processor_description)
# Try to get the 'name' attribute from the processor module first
if hasattr(module, 'name'):
description = gettext(module.name)
else:
# Final fallback to a readable name
description = sub_package_name.replace('_', ' ').title()
# Fall back to processor_description from parent module's __init__.py
parent_module = get_parent_module(module)
if parent_module and hasattr(parent_module, 'processor_description'):
description = gettext(parent_module.processor_description)
else:
# Final fallback to a readable name
description = sub_package_name.replace('_', ' ').title()
# Get weight for sorting (lower weight = higher in list)
weight = 0 # Default weight for processors without explicit weight
# Get weight for sorting (lower weight = higher in list)
weight = 0 # Default weight for processors without explicit weight
# Check processor module itself first
if hasattr(module, 'processor_weight'):
weight = module.processor_weight
else:
# Fall back to parent module (package __init__.py)
parent_module = get_parent_module(module)
if parent_module and hasattr(parent_module, 'processor_weight'):
weight = parent_module.processor_weight
# Check processor module itself first
if hasattr(module, 'processor_weight'):
weight = module.processor_weight
else:
# Fall back to parent module (package __init__.py)
parent_module = get_parent_module(module)
if parent_module and hasattr(parent_module, 'processor_weight'):
weight = parent_module.processor_weight
available.append((sub_package_name, description, weight))
@@ -155,6 +257,20 @@ def available_processors():
return [(name, desc) for name, desc, weight in available]
def get_default_processor():
"""
Get the default processor to use when none is specified.
Returns the first available processor based on weight (lowest weight = highest priority).
This ensures forms auto-select a valid processor even when DISABLED_PROCESSORS filters the list.
:return: The processor name string (e.g., 'text_json_diff')
"""
available = available_processors()
if available:
return available[0][0] # Return the processor name from first tuple
return 'text_json_diff' # Fallback if somehow no processors are available
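How the DISABLED_PROCESSORS filter and the weight-ordered default interact, reduced to plain Python (weights other than image_ssim_diff's 2 are illustrative):
```python
import os

disabled = [p.strip() for p in os.getenv('DISABLED_PROCESSORS', 'image_ssim_diff').split(',') if p.strip()]
candidates = [('text_json_diff', 0), ('restock_diff', 1), ('image_ssim_diff', 2)]  # (name, weight)
available = sorted(((n, w) for n, w in candidates if n not in disabled), key=lambda t: t[1])
default = available[0][0] if available else 'text_json_diff'
print(default)   # -> text_json_diff (image_ssim_diff filtered out by the shipped default)
```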
def get_processor_badge_texts():
"""
Get a dictionary mapping processor names to their list_badge_text values.
@@ -279,3 +395,76 @@ def get_processor_badge_css():
return '\n\n'.join(css_rules)
def save_processor_config(datastore, watch_uuid, config_data):
"""
Save processor-specific configuration to JSON file.
This is a shared helper function used by both the UI edit form and API endpoints
to consistently handle processor configuration storage.
Args:
datastore: The application datastore instance
watch_uuid: UUID of the watch
config_data: Dictionary of configuration data to save (with processor_config_* prefix removed)
Returns:
bool: True if saved successfully, False otherwise
"""
if not config_data:
return True
try:
from changedetectionio.processors.base import difference_detection_processor
# Get processor name from watch
watch = datastore.data['watching'].get(watch_uuid)
if not watch:
logger.error(f"Cannot save processor config: watch {watch_uuid} not found")
return False
processor_name = watch.get('processor', 'text_json_diff')
# Create a processor instance to access config methods
processor_instance = difference_detection_processor(datastore, watch_uuid)
# Use processor name as filename so each processor keeps its own config
config_filename = f'{processor_name}.json'
processor_instance.update_extra_watch_config(config_filename, config_data)
logger.debug(f"Saved processor config to {config_filename}: {config_data}")
return True
except Exception as e:
logger.error(f"Failed to save processor config: {e}")
return False
def extract_processor_config_from_form_data(form_data):
"""
Extract processor_config_* fields from form data and return separate dicts.
This is a shared helper function used by both the UI edit form and API endpoints
to consistently handle processor configuration extraction.
IMPORTANT: This function modifies form_data in-place by removing processor_config_* fields.
Args:
form_data: Dictionary of form data (will be modified in-place)
Returns:
dict: Dictionary of processor config data (with processor_config_* prefix removed)
"""
processor_config_data = {}
# Use list() to create a copy of keys since we're modifying the dict
for field_name in list(form_data.keys()):
if field_name.startswith('processor_config_'):
config_key = field_name.replace('processor_config_', '')
# Save all values (including empty strings) to allow explicit clearing of settings
processor_config_data[config_key] = form_data[field_name]
# Remove from form_data to prevent it from reaching datastore
del form_data[field_name]
return processor_config_data
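An illustrative round trip through the extractor defined above (field names are examples; ssim_threshold mirrors the default added elsewhere in this compare):
```python
form_data = {
    'url': 'https://example.com',
    'processor_config_ssim_threshold': '0.96',
    'processor_config_selection_mode': 'include',
}
config = extract_processor_config_from_form_data(form_data)
# config    -> {'ssim_threshold': '0.96', 'selection_mode': 'include'}
# form_data -> {'url': 'https://example.com'}  (processor_config_* keys removed in-place)
```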

View File

@@ -23,7 +23,14 @@ class difference_detection_processor():
def __init__(self, datastore, watch_uuid):
self.datastore = datastore
self.watch_uuid = watch_uuid
# Create a stable snapshot of the watch for processing
# Why deepcopy?
# 1. Prevents "dict changed during iteration" errors if watch is modified during processing
# 2. Preserves Watch object with properties (.link, .is_pdf, etc.) - can't use dict()
# 3. Safe now: Watch.__deepcopy__() shares datastore ref (no memory leak) but copies dict data
self.watch = deepcopy(self.datastore.data['watching'].get(watch_uuid))
# Generic fetcher that should be extended (requests, playwright etc)
self.fetcher = Fetcher()

View File

@@ -12,6 +12,13 @@ processor_description = "Visual/Screenshot change detection (Fast)"
processor_name = "image_ssim_diff"
processor_weight = 2 # Lower weight = appears at top, heavier weight = appears lower (bottom)
# Processor capabilities
supports_visual_selector = True
supports_browser_steps = True
supports_text_filters_and_triggers = False
supports_text_filters_and_triggers_elements = False
supports_request_type = True
PROCESSOR_CONFIG_NAME = f"{Path(__file__).parent.name}.json"
# Subprocess timeout settings

View File

@@ -4,6 +4,13 @@ from changedetectionio.model.Watch import model as BaseWatch
from typing import Union
import re
# Processor capabilities
supports_visual_selector = True
supports_browser_steps = True
supports_text_filters_and_triggers = True
supports_text_filters_and_triggers_elements = True
supports_request_type = True
class Restock(dict):
def parse_currency(self, raw_value: str) -> Union[float, None]:

View File

@@ -193,18 +193,17 @@ class perform_site_check(difference_detection_processor):
itemprop_availability = {}
multiple_prices_found = False
# Try built-in extraction first, this will scan metadata in the HTML
try:
itemprop_availability = get_itemprop_availability(self.fetcher.content)
except MoreThanOnePriceFound as e:
# Add the real data
raise ProcessorException(message="Cannot run, more than one price detected, this plugin is only for product pages with ONE product, try the content-change detection mode.",
url=watch.get('url'),
status_code=self.fetcher.get_last_status_code(),
screenshot=self.fetcher.screenshot,
xpath_data=self.fetcher.xpath_data
)
# Don't raise immediately - let plugins try to handle this case
# Plugins might be able to determine which price is correct
logger.warning(f"Built-in detection found multiple prices on {watch.get('url')}, will try plugin override")
multiple_prices_found = True
itemprop_availability = {}
# If built-in extraction didn't get both price AND availability, try plugin override
# Only check plugin if this watch is using a fetcher that might provide better data
@@ -216,9 +215,21 @@ class perform_site_check(difference_detection_processor):
from changedetectionio.pluggy_interface import get_itemprop_availability_from_plugin
fetcher_name = watch.get('fetch_backend', 'html_requests')
# Only try plugin override if not using system default (which might be anything)
if fetcher_name and fetcher_name != 'system':
logger.debug("Calling extra plugins for getting item price/availability")
# Resolve 'system' to the actual fetcher being used
# This allows plugins to work even when watch uses "system settings default"
if fetcher_name == 'system':
# Get the actual fetcher that was used (from self.fetcher)
# Fetcher class name gives us the actual backend (e.g., 'html_requests', 'html_webdriver')
actual_fetcher = type(self.fetcher).__name__
if 'html_requests' in actual_fetcher.lower():
fetcher_name = 'html_requests'
elif 'webdriver' in actual_fetcher.lower() or 'playwright' in actual_fetcher.lower():
fetcher_name = 'html_webdriver'
logger.debug(f"Resolved 'system' fetcher to actual fetcher: {fetcher_name}")
# Try plugin override - plugins can decide if they support this fetcher
if fetcher_name:
logger.debug(f"Calling extra plugins for getting item price/availability (fetcher: {fetcher_name})")
plugin_availability = get_itemprop_availability_from_plugin(self.fetcher.content, fetcher_name, self.fetcher, watch.link)
if plugin_availability:
@@ -233,6 +244,16 @@ class perform_site_check(difference_detection_processor):
if not plugin_availability:
logger.debug("No item price/availability from plugins")
# If we had multiple prices and plugins also failed, NOW raise the exception
if multiple_prices_found and not itemprop_availability.get('price'):
raise ProcessorException(
message="Cannot run, more than one price detected, this plugin is only for product pages with ONE product, try the content-change detection mode.",
url=watch.get('url'),
status_code=self.fetcher.get_last_status_code(),
screenshot=self.fetcher.screenshot,
xpath_data=self.fetcher.xpath_data
)
# Something valid in get_itemprop_availability() by scraping metadata ?
if itemprop_availability.get('price') or itemprop_availability.get('availability'):
# Store for other usage
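The fetcher-resolution logic above, distilled into a standalone sketch (the class-name substrings follow the diff; the helper name and everything else is assumed for illustration):

def resolve_fetcher_name(fetcher_obj, configured_name):
    """Map the 'system' placeholder to a concrete backend name."""
    if configured_name and configured_name != 'system':
        return configured_name
    actual = type(fetcher_obj).__name__.lower()
    if 'html_requests' in actual:
        return 'html_requests'
    if 'webdriver' in actual or 'playwright' in actual:
        return 'html_webdriver'
    return None  # unknown fetcher: skip the plugin override entirely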

View File

@@ -1,6 +1,12 @@
from loguru import logger
# Processor capabilities
supports_visual_selector = True
supports_browser_steps = True
supports_text_filters_and_triggers = True
supports_text_filters_and_triggers_elements = True
supports_request_type = True
def _task(watch, update_handler):
@@ -58,7 +64,7 @@ def prepare_filter_prevew(datastore, watch_uuid, form_data):
# Only update vars that came in via the AJAX post
p = {k: v for k, v in form.data.items() if k in form_data.keys()}
tmp_watch.update(p)
blank_watch_no_filters = watch_model()
blank_watch_no_filters = watch_model(datastore_path=datastore.datastore_path, __datastore=datastore.data)
blank_watch_no_filters['url'] = tmp_watch.get('url')
latest_filename = next(reversed(tmp_watch.history))

View File

@@ -67,7 +67,7 @@ echo "-------------------- Running rest of tests in parallel -------------------
# REMOVE_REQUESTS_OLD_SCREENSHOTS disabled so that we can write a screenshot and send it in test_notifications.py without a real browser
FETCH_WORKERS=2 REMOVE_REQUESTS_OLD_SCREENSHOTS=false \
pytest tests/test_*.py \
-n 18 \
-n 8 \
--dist=load \
-vvv \
-s \

View File

@@ -9,18 +9,14 @@ from flask import (
)
from flask_babel import gettext
from ..blueprint.rss import RSS_CONTENT_FORMAT_DEFAULT
from ..html_tools import TRANSLATE_WHITESPACE_TABLE
from ..model import App, Watch, USE_SYSTEM_DEFAULT_NOTIFICATION_FORMAT_FOR_WATCH
from copy import deepcopy, copy
from ..model import App, Watch
from copy import deepcopy
from os import path, unlink
from threading import Lock
import json
import os
import re
import secrets
import sys
import threading
import time
import uuid as uuid_builder
from loguru import logger
@@ -35,7 +31,6 @@ except ImportError:
HAS_ORJSON = False
from ..processors import get_custom_watch_obj_for_processor
from ..processors.restock_diff import Restock
# Import the base class and helpers
from .file_saving_datastore import FileSavingDataStore, load_all_watches, save_watch_atomic, save_json_atomic
@@ -61,9 +56,7 @@ class ChangeDetectionStore(DatastoreUpdatesMixin, FileSavingDataStore):
# Should only be active for docker
# logging.basicConfig(filename='/dev/stdout', level=logging.INFO)
self.datastore_path = datastore_path
self.needs_write = False
self.start_time = time.time()
self.stop_thread = False
self.save_version_copy_json_db(version_tag)
self.reload_state(datastore_path=datastore_path, include_default_watches=include_default_watches, version_tag=version_tag)
@@ -137,6 +130,19 @@ class ChangeDetectionStore(DatastoreUpdatesMixin, FileSavingDataStore):
)
logger.info(f"Tag: {uuid} {tag['title']}")
def _rehydrate_watches(self):
"""Rehydrate watch entities from stored data (converts dicts to Watch objects)."""
watch_count = len(self.__data.get('watching', {}))
if watch_count == 0:
return
logger.info(f"Rehydrating {watch_count} watches...")
watching_rehydrated = {}
for uuid, watch_dict in self.__data.get('watching', {}).items():
watching_rehydrated[uuid] = self.rehydrate_entity(uuid, watch_dict)
self.__data['watching'] = watching_rehydrated
logger.success(f"Rehydrated {watch_count} watches into Watch objects")
def _load_state(self):
"""
@@ -174,7 +180,7 @@ class ChangeDetectionStore(DatastoreUpdatesMixin, FileSavingDataStore):
self.json_store_path = os.path.join(self.datastore_path, "changedetection.json")
# Base definition for all watchers (deepcopy part of #569)
self.generic_definition = deepcopy(Watch.model(datastore_path=datastore_path, default={}))
self.generic_definition = deepcopy(Watch.model(datastore_path=datastore_path, __datastore=self.__data, default={}))
# Load build SHA if available (Docker deployments)
if path.isfile('changedetectionio/source.txt'):
@@ -218,23 +224,29 @@ class ChangeDetectionStore(DatastoreUpdatesMixin, FileSavingDataStore):
logger.critical(f"Legacy datastore detected at {self.datastore_path}/url-watches.json")
logger.critical("Migration will be triggered via update_26")
# Load the legacy datastore to get its schema_version
# Load the legacy datastore
from .legacy_loader import load_legacy_format
legacy_path = os.path.join(self.datastore_path, "url-watches.json")
with open(legacy_path) as f:
self.__data = json.load(f)
legacy_data = load_legacy_format(legacy_path)
if not self.__data:
if not legacy_data:
raise Exception("Failed to load legacy datastore from url-watches.json")
# update_26 will load the legacy data again and migrate to new format
# Only run updates AFTER the legacy schema version (e.g., if legacy is at 25, only run 26+)
# Store the loaded data
self.__data = legacy_data
# CRITICAL: Rehydrate watches from dicts into Watch objects
# This ensures watches have their methods available during migration
self._rehydrate_watches()
# update_26 will save watches to individual files and create changedetection.json
# Next startup will load from new format normally
self.run_updates()
else:
# Fresh install - create new datastore
logger.critical(f"No datastore found, creating new datastore at {self.datastore_path}")
logger.warning(f"No datastore found, creating new datastore at {self.datastore_path}")
# Set schema version to latest (no updates needed)
updates_available = self.get_updates_available()
@@ -272,19 +284,19 @@ class ChangeDetectionStore(DatastoreUpdatesMixin, FileSavingDataStore):
self.__data['app_guid'] = "test-" + str(uuid_builder.uuid4())
else:
self.__data['app_guid'] = str(uuid_builder.uuid4())
self.mark_settings_dirty()
self.commit()
# Ensure RSS access token exists
if not self.__data['settings']['application'].get('rss_access_token'):
secret = secrets.token_hex(16)
self.__data['settings']['application']['rss_access_token'] = secret
self.mark_settings_dirty()
self.commit()
# Ensure API access token exists
if not self.__data['settings']['application'].get('api_access_token'):
secret = secrets.token_hex(16)
self.__data['settings']['application']['api_access_token'] = secret
self.mark_settings_dirty()
self.commit()
# Handle password reset lockfile
password_reset_lockfile = os.path.join(self.datastore_path, "removepassword.lock")
@@ -292,9 +304,6 @@ class ChangeDetectionStore(DatastoreUpdatesMixin, FileSavingDataStore):
self.remove_password()
unlink(password_reset_lockfile)
# Start the background save thread
self.start_save_thread()
def rehydrate_entity(self, uuid, entity, processor_override=None):
"""Set the dict back to the dict Watch object"""
entity['uuid'] = uuid
@@ -308,7 +317,7 @@ class ChangeDetectionStore(DatastoreUpdatesMixin, FileSavingDataStore):
if entity.get('processor') != 'text_json_diff':
logger.trace(f"Loading Watch object '{watch_class.__module__}.{watch_class.__name__}' for UUID {uuid}")
entity = watch_class(datastore_path=self.datastore_path, default=entity)
entity = watch_class(datastore_path=self.datastore_path, __datastore=self.__data, default=entity)
return entity
# ============================================================================
@@ -361,22 +370,15 @@ class ChangeDetectionStore(DatastoreUpdatesMixin, FileSavingDataStore):
Implementation of abstract method from FileSavingDataStore.
Delegates to helper function and stores results in internal data structure.
"""
watching, watch_hashes = load_all_watches(
watching = load_all_watches(
self.datastore_path,
self.rehydrate_entity,
self._compute_hash
self.rehydrate_entity
)
# Store loaded data
self.__data['watching'] = watching
self._watch_hashes = watch_hashes
# Verify all watches have hashes
missing_hashes = [uuid for uuid in watching.keys() if uuid not in watch_hashes]
if missing_hashes:
logger.error(f"WARNING: {len(missing_hashes)} watches missing hashes after load: {missing_hashes[:5]}")
else:
logger.debug(f"All {len(watching)} watches have valid hashes")
logger.debug(f"Loaded {len(watching)} watches")
def _delete_watch(self, uuid):
"""
@@ -400,7 +402,7 @@ class ChangeDetectionStore(DatastoreUpdatesMixin, FileSavingDataStore):
def set_last_viewed(self, uuid, timestamp):
logger.debug(f"Setting watch UUID: {uuid} last viewed to {int(timestamp)}")
self.data['watching'][uuid].update({'last_viewed': int(timestamp)})
self.mark_watch_dirty(uuid)
self.data['watching'][uuid].commit()
watch_check_update = signal('watch_check_update')
if watch_check_update:
@@ -408,7 +410,22 @@ class ChangeDetectionStore(DatastoreUpdatesMixin, FileSavingDataStore):
def remove_password(self):
self.__data['settings']['application']['password'] = False
self.mark_settings_dirty()
self.commit()
def commit(self):
"""
Save settings immediately to disk using atomic write.
Uses atomic write pattern (temp file + rename) for crash safety.
Fire-and-forget: Logs errors but does not raise exceptions.
Settings data remains in memory even if save fails, so next commit will retry.
"""
try:
self._save_settings()
logger.debug("Committed settings")
except Exception as e:
logger.error(f"Failed to commit settings: {e}")
def update_watch(self, uuid, update_obj):
@@ -427,7 +444,8 @@ class ChangeDetectionStore(DatastoreUpdatesMixin, FileSavingDataStore):
self.__data['watching'][uuid].update(update_obj)
self.mark_watch_dirty(uuid)
# Immediate save
self.__data['watching'][uuid].commit()
@property
def threshold_seconds(self):
@@ -488,10 +506,6 @@ class ChangeDetectionStore(DatastoreUpdatesMixin, FileSavingDataStore):
except Exception as e:
logger.error(f"Failed to delete watch {watch_uuid} from storage: {e}")
# Clean up tracking data
self._watch_hashes.pop(watch_uuid, None)
self._dirty_watches.discard(watch_uuid)
# Send delete signal
watch_delete_signal = signal('watch_deleted')
if watch_delete_signal:
@@ -513,21 +527,19 @@ class ChangeDetectionStore(DatastoreUpdatesMixin, FileSavingDataStore):
# Remove from watching dict
del self.data['watching'][uuid]
# Clean up tracking data
self._watch_hashes.pop(uuid, None)
self._dirty_watches.discard(uuid)
# Send delete signal
watch_delete_signal = signal('watch_deleted')
if watch_delete_signal:
watch_delete_signal.send(watch_uuid=uuid)
self.needs_write_urgent = True
# Clone a watch by UUID
def clone(self, uuid):
url = self.data['watching'][uuid].get('url')
extras = deepcopy(self.data['watching'][uuid])
# No need to deepcopy here - add_watch() will deepcopy extras anyway (line 569)
# Just pass a dict copy (with lock for thread safety)
# NOTE: dict() is shallow copy but safe since add_watch() deepcopies it
with self.lock:
extras = dict(self.data['watching'][uuid])
new_uuid = self.add_watch(url=url, extras=extras)
watch = self.data['watching'][new_uuid]
return new_uuid
@@ -544,7 +556,7 @@ class ChangeDetectionStore(DatastoreUpdatesMixin, FileSavingDataStore):
# Remove a watch's data but keep the entry (URL etc)
def clear_watch_history(self, uuid):
self.__data['watching'][uuid].clear_watch()
self.needs_write_urgent = True
self.__data['watching'][uuid].commit()
def add_watch(self, url, tag='', extras=None, tag_uuids=None, save_immediately=True):
@@ -639,7 +651,7 @@ class ChangeDetectionStore(DatastoreUpdatesMixin, FileSavingDataStore):
# If the processor also has its own Watch implementation
watch_class = get_custom_watch_obj_for_processor(apply_extras.get('processor'))
new_watch = watch_class(datastore_path=self.datastore_path, url=url)
new_watch = watch_class(datastore_path=self.datastore_path, __datastore=self.__data, url=url)
new_uuid = new_watch.get('uuid')
@@ -657,16 +669,9 @@ class ChangeDetectionStore(DatastoreUpdatesMixin, FileSavingDataStore):
self.__data['watching'][new_uuid] = new_watch
if save_immediately:
# Save immediately using polymorphic method
try:
self.save_watch(new_uuid, force=True)
logger.debug(f"Saved new watch {new_uuid}")
except Exception as e:
logger.error(f"Failed to save new watch {new_uuid}: {e}")
# Mark dirty for retry
self.mark_watch_dirty(new_uuid)
else:
self.mark_watch_dirty(new_uuid)
# Save immediately using commit
new_watch.commit()
logger.debug(f"Saved new watch {new_uuid}")
logger.debug(f"Added '{url}'")
@@ -858,16 +863,20 @@ class ChangeDetectionStore(DatastoreUpdatesMixin, FileSavingDataStore):
# So we use the same model as a Watch
with self.lock:
from ..model import Tag
new_tag = Tag.model(datastore_path=self.datastore_path, default={
'title': title.strip(),
'date_created': int(time.time())
})
new_tag = Tag.model(
datastore_path=self.datastore_path,
__datastore=self.__data,
default={
'title': title.strip(),
'date_created': int(time.time())
}
)
new_uuid = new_tag.get('uuid')
self.__data['settings']['application']['tags'][new_uuid] = new_tag
self.mark_settings_dirty()
self.commit()
return new_uuid
def get_all_tags_for_watch(self, uuid):
@@ -984,7 +993,7 @@ class ChangeDetectionStore(DatastoreUpdatesMixin, FileSavingDataStore):
notification_urls.append(notification_url)
self.__data['settings']['application']['notification_urls'] = notification_urls
self.mark_settings_dirty()
self.commit()
return notification_url
# Schema update methods moved to store/updates.py (DatastoreUpdatesMixin)
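Taken together, the persistence contract after this refactor is: mutate in memory, then commit. A minimal usage sketch (uuid assumed to exist; method names as introduced in this diff):

watch = datastore.data['watching'][uuid]
watch['paused'] = True
watch.commit()      # immediately writes {uuid}/watch.json with an atomic rename

datastore.data['settings']['application']['rss_access_token'] = secrets.token_hex(16)
datastore.commit()  # immediately writes settings via _save_settings()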

View File

@@ -81,20 +81,3 @@ class DataStore(ABC):
"""
pass
@abstractmethod
def force_save_all(self):
"""
Force immediate synchronous save of all data to storage.
This is the abstract method for forcing a complete save.
Different backends implement this differently:
- File backend: Mark all watches/settings dirty, then save
- Redis backend: SAVE command or pipeline flush
- SQL backend: COMMIT transaction
Used by:
- Backup creation (ensure everything is saved before backup)
- Shutdown (ensure all changes are persisted)
- Manual save operations
"""
pass

View File

@@ -1,22 +1,17 @@
"""
File-based datastore with individual watch persistence and dirty tracking.
File-based datastore with individual watch persistence and immediate commits.
This module provides the FileSavingDataStore abstract class that implements:
- Individual watch.json file persistence
- Hash-based change detection (only save what changed)
- Periodic audit scan (catches unmarked changes)
- Background save thread with batched parallel saves
- Immediate commit-based persistence (watch.commit(), datastore.commit())
- Atomic file writes safe for NFS/NAS
"""
import glob
import hashlib
import json
import os
import tempfile
import time
from concurrent.futures import ThreadPoolExecutor, as_completed
from threading import Thread
from loguru import logger
from .base import DataStore
@@ -34,19 +29,6 @@ except ImportError:
# Set to True for mission-critical deployments requiring crash consistency
FORCE_FSYNC_DATA_IS_CRITICAL = bool(strtobool(os.getenv('FORCE_FSYNC_DATA_IS_CRITICAL', 'False')))
# Save interval configuration: How often the background thread saves dirty items
# Default 10 seconds - increase for less frequent saves, decrease for more frequent
DATASTORE_SCAN_DIRTY_SAVE_INTERVAL_SECONDS = int(os.getenv('DATASTORE_SCAN_DIRTY_SAVE_INTERVAL_SECONDS', '10'))
# Rolling audit configuration: Scans a fraction of watches each cycle
# Default: Run audit every 10s, split into 5 shards
# Full audit completes every 50s (10s × 5 shards)
# With 56k watches: 56k / 5 = ~11k watches per cycle (~60ms vs 316ms for all)
# Handles dynamic watch count - recalculates shard boundaries each cycle
DATASTORE_AUDIT_INTERVAL_SECONDS = int(os.getenv('DATASTORE_AUDIT_INTERVAL_SECONDS', '10'))
DATASTORE_AUDIT_SHARDS = int(os.getenv('DATASTORE_AUDIT_SHARDS', '5'))
# ============================================================================
# Helper Functions for Atomic File Operations
# ============================================================================
@@ -61,6 +43,9 @@ def save_json_atomic(file_path, data_dict, label="file", max_size_mb=10):
- Size validation
- Proper error handling
Thread safety: Caller must hold datastore.lock to prevent concurrent modifications.
Multi-process safety: Not supported - run only one app instance per datastore.
Args:
file_path: Full path to target JSON file
data_dict: Dictionary to serialize
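A minimal sketch of that temp-file + rename pattern (helper name hypothetical; the real save_json_atomic adds size validation and orjson support):

import json
import os
import tempfile

def atomic_json_write(file_path, data_dict, fsync=False):
    # Temp file must live in the SAME directory so os.replace() stays atomic
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(file_path), suffix='.tmp')
    try:
        with os.fdopen(fd, 'w', encoding='utf-8') as f:
            json.dump(data_dict, f)
            if fsync:  # e.g. FORCE_FSYNC_DATA_IS_CRITICAL
                f.flush()
                os.fsync(f.fileno())
        os.replace(tmp_path, file_path)  # readers never see a partial file
    except BaseException:
        if os.path.exists(tmp_path):
            os.unlink(tmp_path)
        raise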
@@ -242,11 +227,6 @@ def load_watch_from_file(watch_json, uuid, rehydrate_entity_func):
with open(watch_json, 'r', encoding='utf-8') as f:
watch_data = json.load(f)
if watch_data.get('time_schedule_limit'):
del watch_data['time_schedule_limit']
if watch_data.get('time_between_check'):
del watch_data['time_between_check']
# Return both the raw data and the rehydrated watch
# Raw data is needed to compute hash before rehydration changes anything
watch_obj = rehydrate_entity_func(uuid, watch_data)
@@ -278,7 +258,7 @@ def load_watch_from_file(watch_json, uuid, rehydrate_entity_func):
return None, None
def load_all_watches(datastore_path, rehydrate_entity_func, compute_hash_func):
def load_all_watches(datastore_path, rehydrate_entity_func):
"""
Load all watches from individual watch.json files.
@@ -289,21 +269,17 @@ def load_all_watches(datastore_path, rehydrate_entity_func, compute_hash_func):
Args:
datastore_path: Path to the datastore directory
rehydrate_entity_func: Function to convert dict to Watch object
compute_hash_func: Function to compute hash from raw watch dict
Returns:
Tuple of (watching_dict, hashes_dict)
- watching_dict: uuid -> Watch object
- hashes_dict: uuid -> hash string (computed from raw data)
Dictionary of uuid -> Watch object
"""
start_time = time.time()
logger.info("Loading watches from individual watch.json files...")
watching = {}
watch_hashes = {}
if not os.path.exists(datastore_path):
return watching, watch_hashes
return watching
# Find all watch.json files using glob (faster than manual directory traversal)
glob_start = time.time()
@@ -322,9 +298,6 @@ def load_all_watches(datastore_path, rehydrate_entity_func, compute_hash_func):
watch, raw_data = load_watch_from_file(watch_json, uuid_dir, rehydrate_entity_func)
if watch and raw_data:
watching[uuid_dir] = watch
# Compute hash from rehydrated Watch object (as dict) to match how we compute on save
# This ensures hash matches what audit will compute from dict(watch)
watch_hashes[uuid_dir] = compute_hash_func(dict(watch))
loaded += 1
if loaded % 100 == 0:
@@ -344,7 +317,7 @@ def load_all_watches(datastore_path, rehydrate_entity_func, compute_hash_func):
else:
logger.info(f"Loaded {loaded} watches from disk in {elapsed:.2f}s ({loaded/elapsed:.0f} watches/sec)")
return watching, watch_hashes
return watching
# ============================================================================
@@ -353,151 +326,20 @@ def load_all_watches(datastore_path, rehydrate_entity_func, compute_hash_func):
class FileSavingDataStore(DataStore):
"""
Abstract datastore that provides file persistence with change tracking.
Abstract datastore that provides file persistence with immediate commits.
Features:
- Individual watch.json files (one per watch)
- Dirty tracking: Only saves items that have changed
- Hash-based change detection: Prevents unnecessary writes
- Background save thread: Non-blocking persistence
- Two-tier urgency: Standard (60s) and urgent (immediate) saves
- Immediate persistence via watch.commit() and datastore.commit()
- Atomic file writes for crash safety
Subclasses must implement:
- rehydrate_entity(): Convert dict to Watch object
- Access to internal __data structure for watch management
"""
needs_write = False
needs_write_urgent = False
stop_thread = False
# Change tracking
_dirty_watches = set() # Watch UUIDs that need saving
_dirty_settings = False # Settings changed
_watch_hashes = {} # UUID -> SHA256 hash for change detection
# Health monitoring
_last_save_time = 0 # Timestamp of last successful save
_last_audit_time = 0 # Timestamp of last audit scan
_save_cycle_count = 0 # Number of save cycles completed
_total_saves = 0 # Total watches saved (lifetime)
_save_errors = 0 # Total save errors (lifetime)
_audit_count = 0 # Number of audit scans completed
_audit_found_changes = 0 # Total unmarked changes found by audits
_audit_shard_index = 0 # Current shard being audited (rolling audit)
def __init__(self):
super().__init__()
self.save_data_thread = None
self._last_save_time = time.time()
self._last_audit_time = time.time()
def mark_watch_dirty(self, uuid):
"""
Mark a watch as needing save.
Args:
uuid: Watch UUID
"""
with self.lock:
self._dirty_watches.add(uuid)
dirty_count = len(self._dirty_watches)
# Backpressure detection - warn if dirty set grows too large
if dirty_count > 1000:
logger.critical(
f"BACKPRESSURE WARNING: {dirty_count} watches pending save! "
f"Save thread may not be keeping up with write rate. "
f"This could indicate disk I/O bottleneck or save thread failure."
)
elif dirty_count > 500:
logger.warning(
f"Dirty watch count high: {dirty_count} watches pending save. "
f"Monitoring for potential backpressure."
)
self.needs_write = True
def mark_settings_dirty(self):
"""Mark settings as needing save."""
with self.lock:
self._dirty_settings = True
self.needs_write = True
def _compute_hash(self, watch_dict):
"""
Compute SHA256 hash of watch for change detection.
Args:
watch_dict: Dictionary representation of watch
Returns:
Hex string of SHA256 hash
"""
# Use orjson for deterministic serialization if available
if HAS_ORJSON:
json_bytes = orjson.dumps(watch_dict, option=orjson.OPT_SORT_KEYS)
else:
json_str = json.dumps(watch_dict, sort_keys=True, ensure_ascii=False)
json_bytes = json_str.encode('utf-8')
return hashlib.sha256(json_bytes).hexdigest()
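Why the deterministic serialization matters, shown with the plain-json fallback: two dicts with the same content but different insertion order must hash identically, otherwise every audit pass would flag false changes.

import hashlib
import json

a = {'url': 'http://example.com', 'paused': False}
b = {'paused': False, 'url': 'http://example.com'}

def digest(d):
    return hashlib.sha256(json.dumps(d, sort_keys=True, ensure_ascii=False).encode('utf-8')).hexdigest()

assert digest(a) == digest(b)  # key order never changes the hash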
def save_watch(self, uuid, force=False, watch_dict=None, current_hash=None):
"""
Save a single watch if it has changed (polymorphic method).
Args:
uuid: Watch UUID
force: If True, skip hash check and save anyway
watch_dict: Pre-computed watch dictionary (optimization)
current_hash: Pre-computed hash (optimization)
Returns:
True if saved, False if skipped (unchanged)
"""
if not self._watch_exists(uuid):
logger.warning(f"Cannot save watch {uuid} - does not exist")
return False
# Get watch dict if not provided
if watch_dict is None:
watch_dict = self._get_watch_dict(uuid)
# Compute hash if not provided
if current_hash is None:
current_hash = self._compute_hash(watch_dict)
# Skip save if unchanged (unless forced)
if not force and current_hash == self._watch_hashes.get(uuid):
return False
try:
self._save_watch(uuid, watch_dict)
self._watch_hashes[uuid] = current_hash
logger.debug(f"Saved watch {uuid}")
return True
except Exception as e:
logger.error(f"Failed to save watch {uuid}: {e}")
raise
def _save_watch(self, uuid, watch_dict):
"""
Save a single watch to storage (polymorphic).
Backend-specific implementation. Subclasses override for different storage:
- File backend: Writes to {uuid}/watch.json
- Redis backend: SET watch:{uuid}
- SQL backend: UPDATE watches WHERE uuid=?
Args:
uuid: Watch UUID
watch_dict: Dictionary representation of watch
"""
# Default file implementation
watch_dir = os.path.join(self.datastore_path, uuid)
save_watch_atomic(watch_dir, uuid, watch_dict)
def _save_settings(self):
"""
@@ -510,6 +352,7 @@ class FileSavingDataStore(DataStore):
"""
raise NotImplementedError("Subclass must implement _save_settings")
def _load_watches(self):
"""
Load all watches from storage (polymorphic).
@@ -535,364 +378,4 @@ class FileSavingDataStore(DataStore):
"""
raise NotImplementedError("Subclass must implement _delete_watch")
def _save_dirty_items(self):
"""
Save dirty watches and settings.
This is the core optimization: instead of saving the entire datastore,
we only save watches that were marked dirty and settings if changed.
"""
start_time = time.time()
# Capture dirty sets under lock
with self.lock:
dirty_watches = list(self._dirty_watches)
dirty_settings = self._dirty_settings
self._dirty_watches.clear()
self._dirty_settings = False
if not dirty_watches and not dirty_settings:
return
logger.trace(f"Saving {len(dirty_watches)} dirty watches, settings_dirty={dirty_settings}")
# Save each dirty watch using the polymorphic save method
saved_count = 0
error_count = 0
skipped_unchanged = 0
# Process in batches of 50, using thread pool for parallel saves
BATCH_SIZE = 50
MAX_WORKERS = 20 # Number of parallel save threads
def save_single_watch(uuid):
"""Helper function for thread pool execution."""
try:
# Check if watch still exists (might have been deleted)
if not self._watch_exists(uuid):
# Watch was deleted, remove hash
self._watch_hashes.pop(uuid, None)
return {'status': 'deleted', 'uuid': uuid}
# Pre-check hash to avoid unnecessary save_watch() calls
watch_dict = self._get_watch_dict(uuid)
current_hash = self._compute_hash(watch_dict)
if current_hash == self._watch_hashes.get(uuid):
# Watch hasn't actually changed, skip
return {'status': 'unchanged', 'uuid': uuid}
# Pass pre-computed values to avoid redundant serialization/hashing
if self.save_watch(uuid, force=True, watch_dict=watch_dict, current_hash=current_hash):
return {'status': 'saved', 'uuid': uuid}
else:
return {'status': 'skipped', 'uuid': uuid}
except Exception as e:
logger.error(f"Error saving watch {uuid}: {e}")
return {'status': 'error', 'uuid': uuid, 'error': e}
# Process dirty watches in batches
for batch_start in range(0, len(dirty_watches), BATCH_SIZE):
batch = dirty_watches[batch_start:batch_start + BATCH_SIZE]
batch_num = (batch_start // BATCH_SIZE) + 1
total_batches = (len(dirty_watches) + BATCH_SIZE - 1) // BATCH_SIZE
if len(dirty_watches) > BATCH_SIZE:
logger.trace(f"Save batch {batch_num}/{total_batches} ({len(batch)} watches)")
# Use thread pool to save watches in parallel
with ThreadPoolExecutor(max_workers=MAX_WORKERS) as executor:
# Submit all save tasks
future_to_uuid = {executor.submit(save_single_watch, uuid): uuid for uuid in batch}
# Collect results as they complete
for future in as_completed(future_to_uuid):
result = future.result()
status = result['status']
if status == 'saved':
saved_count += 1
elif status == 'unchanged':
skipped_unchanged += 1
elif status == 'error':
error_count += 1
# Re-mark for retry
with self.lock:
self._dirty_watches.add(result['uuid'])
# 'deleted' and 'skipped' don't need special handling
# Save settings if changed
if dirty_settings:
try:
self._save_settings()
logger.debug("Saved settings")
except Exception as e:
logger.error(f"Failed to save settings: {e}")
error_count += 1
with self.lock:
self._dirty_settings = True
# Update metrics
elapsed = time.time() - start_time
self._save_cycle_count += 1
self._total_saves += saved_count
self._save_errors += error_count
self._last_save_time = time.time()
# Log performance metrics
if saved_count > 0:
avg_time_per_watch = (elapsed / saved_count) * 1000 # milliseconds
skipped_msg = f", {skipped_unchanged} unchanged" if skipped_unchanged > 0 else ""
parallel_msg = f" [parallel: {MAX_WORKERS} workers]" if saved_count > 1 else ""
logger.info(
f"Successfully saved {saved_count} watches in {elapsed:.2f}s "
f"(avg {avg_time_per_watch:.1f}ms per watch{skipped_msg}){parallel_msg}. "
f"Total: {self._total_saves} saves, {self._save_errors} errors (lifetime)"
)
elif skipped_unchanged > 0:
logger.debug(f"Save cycle: {skipped_unchanged} watches verified unchanged (hash match), nothing saved")
if error_count > 0:
logger.error(f"Save cycle completed with {error_count} errors")
self.needs_write = False
self.needs_write_urgent = False
def _watch_exists(self, uuid):
"""
Check if watch exists. Subclass must implement.
Args:
uuid: Watch UUID
Returns:
bool
"""
raise NotImplementedError("Subclass must implement _watch_exists")
def _get_watch_dict(self, uuid):
"""
Get watch as dictionary. Subclass must implement.
Args:
uuid: Watch UUID
Returns:
Dictionary representation of watch
"""
raise NotImplementedError("Subclass must implement _get_watch_dict")
def _audit_all_watches(self):
"""
Rolling audit: Scans a fraction of watches to detect unmarked changes.
Instead of scanning ALL watches at once, this scans 1/N shards per cycle.
The shard rotates each cycle, completing a full audit every N cycles.
Handles dynamic watch count - recalculates shard boundaries each cycle,
so newly added watches will be audited in subsequent cycles.
Benefits:
- Lower CPU per cycle (56k / 5 = ~11k watches vs all 56k)
- More frequent audits overall (every 50s vs every 10s)
- Spreads load evenly across time
"""
audit_start = time.time()
# Get list of all watch UUIDs (read-only, no lock needed)
try:
all_uuids = list(self.data['watching'].keys())
except (KeyError, AttributeError, RuntimeError):
# Data structure not ready or being modified
return
if not all_uuids:
return
total_watches = len(all_uuids)
# Calculate this cycle's shard boundaries
# Example: 56,278 watches / 5 shards = 11,256 watches per shard (ceiling division)
# Shard 0: [0:11256], Shard 1: [11256:22512], etc.
shard_size = (total_watches + DATASTORE_AUDIT_SHARDS - 1) // DATASTORE_AUDIT_SHARDS
start_idx = self._audit_shard_index * shard_size
end_idx = min(start_idx + shard_size, total_watches)
# Handle wrap-around (shouldn't happen normally, but defensive)
if start_idx >= total_watches:
self._audit_shard_index = 0
start_idx = 0
end_idx = min(shard_size, total_watches)
# Audit only this shard's watches
shard_uuids = all_uuids[start_idx:end_idx]
changes_found = 0
errors = 0
for uuid in shard_uuids:
try:
# Get current watch dict and compute hash
watch_dict = self._get_watch_dict(uuid)
current_hash = self._compute_hash(watch_dict)
stored_hash = self._watch_hashes.get(uuid)
# If hash changed and not already marked dirty, mark it
if current_hash != stored_hash:
with self.lock:
if uuid not in self._dirty_watches:
self._dirty_watches.add(uuid)
changes_found += 1
logger.warning(
f"Audit detected unmarked change in watch {uuid[:8]}... current {current_hash:8} stored hash {stored_hash[:8]}"
f"(hash changed but not marked dirty)"
)
self.needs_write = True
except Exception as e:
errors += 1
logger.trace(f"Audit error for watch {uuid[:8]}...: {e}")
audit_elapsed = (time.time() - audit_start) * 1000 # milliseconds
# Advance to next shard (wrap around after last shard)
self._audit_shard_index = (self._audit_shard_index + 1) % DATASTORE_AUDIT_SHARDS
# Update metrics
self._audit_count += 1
self._audit_found_changes += changes_found
self._last_audit_time = time.time()
if changes_found > 0:
logger.warning(
f"Audit shard {self._audit_shard_index}/{DATASTORE_AUDIT_SHARDS} found {changes_found} "
f"unmarked changes in {len(shard_uuids)}/{total_watches} watches ({audit_elapsed:.1f}ms)"
)
else:
logger.trace(
f"Audit shard {self._audit_shard_index}/{DATASTORE_AUDIT_SHARDS}: "
f"{len(shard_uuids)}/{total_watches} watches checked, 0 changes ({audit_elapsed:.1f}ms)"
)
def save_datastore(self):
"""
Background thread that periodically saves dirty items and audits watches.
Runs two independent cycles:
1. Save dirty items every DATASTORE_SCAN_DIRTY_SAVE_INTERVAL_SECONDS (default 10s)
2. Rolling audit: every DATASTORE_AUDIT_INTERVAL_SECONDS (default 10s)
- Scans 1/DATASTORE_AUDIT_SHARDS watches per cycle (default 1/5)
- Full audit completes every 50s (10s × 5 shards)
- Automatically handles new/deleted watches
Uses 0.5s sleep intervals for responsiveness to urgent saves.
"""
while True:
if self.stop_thread:
# Graceful shutdown: flush any remaining dirty items before stopping
if self.needs_write or self._dirty_watches or self._dirty_settings:
logger.warning("Datastore save thread stopping - flushing remaining dirty items...")
try:
self._save_dirty_items()
logger.info("Graceful shutdown complete - all data saved")
except Exception as e:
logger.critical(f"FAILED to save dirty items during shutdown: {e}")
else:
logger.info("Datastore save thread stopping - no dirty items")
return
# Check if it's time to run audit scan (every N seconds)
if time.time() - self._last_audit_time >= DATASTORE_AUDIT_INTERVAL_SECONDS:
try:
self._audit_all_watches()
except Exception as e:
logger.error(f"Error in audit cycle: {e}")
# Save dirty items if needed
if self.needs_write or self.needs_write_urgent:
try:
self._save_dirty_items()
except Exception as e:
logger.error(f"Error in save cycle: {e}")
# Timer with early break for urgent saves
# Each iteration is 0.5 seconds, so iterations = DATASTORE_SCAN_DIRTY_SAVE_INTERVAL_SECONDS * 2
for i in range(DATASTORE_SCAN_DIRTY_SAVE_INTERVAL_SECONDS * 2):
time.sleep(0.5)
if self.stop_thread or self.needs_write_urgent:
break
def start_save_thread(self):
"""Start the background save thread."""
if not self.save_data_thread or not self.save_data_thread.is_alive():
self.save_data_thread = Thread(target=self.save_datastore, daemon=True, name="DatastoreSaver")
self.save_data_thread.start()
logger.info("Datastore save thread started")
def force_save_all(self):
"""
Force immediate synchronous save of all changes to storage.
File backend implementation of the abstract force_save_all() method.
Marks all watches and settings as dirty, then saves immediately.
Used by:
- Backup creation (ensure everything is saved before backup)
- Shutdown (ensure all changes are persisted)
- Manual save operations
"""
logger.info("Force saving all data to storage...")
# Mark everything as dirty to ensure complete save
for uuid in self.data['watching'].keys():
self.mark_watch_dirty(uuid)
self.mark_settings_dirty()
# Save immediately (synchronous)
self._save_dirty_items()
logger.success("All data saved to storage")
def get_health_status(self):
"""
Get datastore health status for monitoring.
Returns:
dict with health metrics and status
"""
now = time.time()
time_since_last_save = now - self._last_save_time
with self.lock:
dirty_count = len(self._dirty_watches)
is_thread_alive = self.save_data_thread and self.save_data_thread.is_alive()
# Determine health status
if not is_thread_alive:
status = "CRITICAL"
message = "Save thread is DEAD"
elif time_since_last_save > 300: # 5 minutes
status = "WARNING"
message = f"No save activity for {time_since_last_save:.0f}s"
elif dirty_count > 1000:
status = "WARNING"
message = f"High backpressure: {dirty_count} watches pending"
elif self._save_errors > 0 and (self._save_errors / max(self._total_saves, 1)) > 0.01:
status = "WARNING"
message = f"High error rate: {self._save_errors} errors"
else:
status = "HEALTHY"
message = "Operating normally"
return {
"status": status,
"message": message,
"thread_alive": is_thread_alive,
"dirty_watches": dirty_count,
"dirty_settings": self._dirty_settings,
"last_save_seconds_ago": int(time_since_last_save),
"save_cycles": self._save_cycle_count,
"total_saves": self._total_saves,
"total_errors": self._save_errors,
"error_rate_percent": round((self._save_errors / max(self._total_saves, 1)) * 100, 2)
}

View File

@@ -168,7 +168,7 @@ class DatastoreUpdatesMixin:
latest_update = updates_available[-1] if updates_available else 0
logger.info(f"No schema version found and no watches exist - assuming fresh install, setting schema_version to {latest_update}")
self.data['settings']['application']['schema_version'] = latest_update
self.mark_settings_dirty()
self.commit()
return # No updates needed for fresh install
else:
# Has watches but no schema version - likely old datastore, run all updates
@@ -201,14 +201,14 @@ class DatastoreUpdatesMixin:
else:
# Bump the version, important
self.data['settings']['application']['schema_version'] = update_n
self.mark_settings_dirty()
self.commit()
# CRITICAL: Mark all watches as dirty so changes are persisted
# CRITICAL: Save all watches so changes are persisted
# Most updates modify watches, and in the new individual watch.json structure,
# we need to ensure those changes are saved
logger.info(f"Marking all {len(self.data['watching'])} watches as dirty after update_{update_n} (so that it saves them to disk)")
logger.info(f"Saving all {len(self.data['watching'])} watches after update_{update_n} (so that it saves them to disk)")
for uuid in self.data['watching'].keys():
self.mark_watch_dirty(uuid)
self.data['watching'][uuid].commit()
# Save changes immediately after each update (more resilient than batching)
logger.critical(f"Saving all changes after update_{update_n}")
@@ -662,7 +662,7 @@ class DatastoreUpdatesMixin:
updates_available = self.get_updates_available()
latest_schema = updates_available[-1] if updates_available else 26
self.data['settings']['application']['schema_version'] = latest_schema
self.mark_settings_dirty()
self.commit()
logger.info(f"Set schema_version to {latest_schema} (migration complete, all watches already saved)")
logger.critical("=" * 80)

View File

@@ -308,10 +308,6 @@ def prepare_test_function(live_server, datastore_path):
# Prevent background thread from writing during cleanup/reload
datastore.needs_write = False
datastore.needs_write_urgent = False
# CRITICAL: Clean up any files from previous tests
# This ensures a completely clean directory
cleanup(datastore_path)
@@ -344,7 +340,6 @@ def prepare_test_function(live_server, datastore_path):
break
datastore.data['watching'] = {}
datastore.needs_write = True
except Exception as e:
logger.warning(f"Error during datastore cleanup: {e}")

View File

@@ -0,0 +1,41 @@
import time
from flask import url_for
from changedetectionio.tests.util import wait_for_all_checks
def test_check_plugin_processor(client, live_server, measure_memory_usage, datastore_path):
# requires os-int intelligence plugin installed (first basic one we test with)
res = client.get(url_for("watchlist.index"))
assert b'OSINT Reconnaissance' in res.data, "Must have the OSINT plugin installed at test time"
assert b'<input checked id="processor-0" name="processor" type="radio" value="text_json_diff">' in res.data, "But the first text_json_diff processor should always be selected by default in quick watch form"
res = client.post(
url_for("ui.ui_views.form_quick_watch_add"),
data={"url": 'http://127.0.0.1', "tags": '', 'processor': 'osint_recon'},
follow_redirects=True
)
assert b"Watch added" in res.data
client.get(url_for("ui.form_watch_checknow"), follow_redirects=True)
wait_for_all_checks(client)
res = client.get(
url_for("ui.ui_preview.preview_page", uuid="first"),
follow_redirects=True
)
assert b'Target: http://127.0.0.1' in res.data
assert b'DNSKEY Records' in res.data
wait_for_all_checks(client)
# Now change it to a processor that doesn't exist
uuid = next(iter(live_server.app.config['DATASTORE'].data['watching']))
live_server.app.config['DATASTORE'].data['watching'][uuid]['processor'] = "now_missing"
client.get(url_for("ui.form_watch_checknow"), follow_redirects=True)
wait_for_all_checks(client)
res = client.get(url_for("watchlist.index"))
assert b"Exception: Processor module" in res.data and b'now_missing' in res.data, f'Should register that the plugin is missing for {uuid}'

View File

@@ -465,7 +465,10 @@ def test_api_watch_PUT_update(client, live_server, measure_memory_usage, datasto
assert res.status_code == 400, "Should get error 400 when we give a field that doesnt exist"
# Message will come from `flask_expects_json`
assert b'Additional properties are not allowed' in res.data
# With patternProperties for processor_config_*, the error message format changed slightly
assert (b'Additional properties are not allowed' in res.data or
b'does not match any of the regexes' in res.data), \
"Should reject unknown fields with schema validation error"
# Try a XSS URL

View File

@@ -80,7 +80,10 @@ def test_openapi_validation_invalid_field_in_request_body(client, live_server, m
# Should get 400 error due to invalid field (this will be caught by internal validation)
# Note: This tests the flow where OpenAPI validation passes but internal validation catches it
assert res.status_code == 400, f"Expected 400 but got {res.status_code}"
assert b"Additional properties are not allowed" in res.data, "Should contain validation error about additional properties"
# With patternProperties for processor_config_*, the error message format changed slightly
assert (b"Additional properties are not allowed" in res.data or
b"does not match any of the regexes" in res.data), \
"Should contain validation error about additional/invalid properties"
def test_openapi_validation_import_wrong_content_type(client, live_server, measure_memory_usage, datastore_path):

View File

@@ -0,0 +1,661 @@
#!/usr/bin/env python3
"""
Tests for immediate commit-based persistence system.
Tests cover:
- Watch.commit() persistence to disk
- Concurrent commit safety (race conditions)
- Processor config separation
- Data loss prevention (settings, tags, watch modifications)
"""
import json
import os
import threading
import time
from flask import url_for
from .util import wait_for_all_checks
# ==============================================================================
# 2. Commit() Persistence Tests
# ==============================================================================
def test_watch_commit_persists_to_disk(client, live_server):
"""Test that watch.commit() actually writes to watch.json immediately"""
datastore = client.application.config.get('DATASTORE')
# Create a watch
uuid = datastore.add_watch(url='http://example.com', extras={'title': 'Original Title'})
watch = datastore.data['watching'][uuid]
# Modify and commit
watch['title'] = 'Modified Title'
watch['paused'] = True
watch.commit()
# Read directly from disk (bypass datastore cache)
watch_json_path = os.path.join(watch.watch_data_dir, 'watch.json')
assert os.path.exists(watch_json_path), "watch.json should exist on disk"
with open(watch_json_path, 'r') as f:
disk_data = json.load(f)
assert disk_data['title'] == 'Modified Title', "Title should be persisted to disk"
assert disk_data['paused'] == True, "Paused state should be persisted to disk"
assert disk_data['uuid'] == uuid, "UUID should match"
def test_watch_commit_survives_reload(client, live_server):
"""Test that committed changes survive datastore reload"""
from changedetectionio.store import ChangeDetectionStore
datastore = client.application.config.get('DATASTORE')
datastore_path = datastore.datastore_path
# Create and modify a watch
uuid = datastore.add_watch(url='http://example.com', extras={'title': 'Test Watch'})
watch = datastore.data['watching'][uuid]
watch['title'] = 'Persisted Title'
watch['paused'] = True
watch['tags'] = ['tag-1', 'tag-2']
watch.commit()
# Simulate app restart - create new datastore instance
datastore2 = ChangeDetectionStore(datastore_path=datastore_path)
datastore2.reload_state(
datastore_path=datastore_path,
include_default_watches=False,
version_tag='test'
)
# Check data survived
assert uuid in datastore2.data['watching'], "Watch should exist after reload"
reloaded_watch = datastore2.data['watching'][uuid]
assert reloaded_watch['title'] == 'Persisted Title', "Title should survive reload"
assert reloaded_watch['paused'] == True, "Paused state should survive reload"
assert reloaded_watch['tags'] == ['tag-1', 'tag-2'], "Tags should survive reload"
def test_watch_commit_atomic_on_crash(client, live_server):
"""Test that atomic writes prevent corruption (temp file pattern)"""
datastore = client.application.config.get('DATASTORE')
uuid = datastore.add_watch(url='http://example.com', extras={'title': 'Original'})
watch = datastore.data['watching'][uuid]
# First successful commit
watch['title'] = 'First Save'
watch.commit()
# Verify watch.json exists and is valid
watch_json_path = os.path.join(watch.watch_data_dir, 'watch.json')
with open(watch_json_path, 'r') as f:
data = json.load(f) # Should not raise JSONDecodeError
assert data['title'] == 'First Save'
# Second commit - even if interrupted, original file should be intact
# (atomic write uses temp file + rename, so original is never corrupted)
watch['title'] = 'Second Save'
watch.commit()
with open(watch_json_path, 'r') as f:
data = json.load(f)
assert data['title'] == 'Second Save'
def test_multiple_watches_commit_independently(client, live_server):
"""Test that committing one watch doesn't affect others"""
datastore = client.application.config.get('DATASTORE')
# Create multiple watches
uuid1 = datastore.add_watch(url='http://example1.com', extras={'title': 'Watch 1'})
uuid2 = datastore.add_watch(url='http://example2.com', extras={'title': 'Watch 2'})
uuid3 = datastore.add_watch(url='http://example3.com', extras={'title': 'Watch 3'})
watch1 = datastore.data['watching'][uuid1]
watch2 = datastore.data['watching'][uuid2]
watch3 = datastore.data['watching'][uuid3]
# Modify and commit only watch2
watch2['title'] = 'Modified Watch 2'
watch2['paused'] = True
watch2.commit()
# Read all from disk
def read_watch_json(uuid):
watch = datastore.data['watching'][uuid]
path = os.path.join(watch.watch_data_dir, 'watch.json')
with open(path, 'r') as f:
return json.load(f)
data1 = read_watch_json(uuid1)
data2 = read_watch_json(uuid2)
data3 = read_watch_json(uuid3)
# Only watch2 should have changes
assert data1['title'] == 'Watch 1', "Watch 1 should be unchanged"
assert data1['paused'] == False, "Watch 1 should not be paused"
assert data2['title'] == 'Modified Watch 2', "Watch 2 should be modified"
assert data2['paused'] == True, "Watch 2 should be paused"
assert data3['title'] == 'Watch 3', "Watch 3 should be unchanged"
assert data3['paused'] == False, "Watch 3 should not be paused"
# ==============================================================================
# 3. Concurrency/Race Condition Tests
# ==============================================================================
def test_concurrent_watch_commits_dont_corrupt(client, live_server):
"""Test that simultaneous commits to same watch don't corrupt JSON"""
datastore = client.application.config.get('DATASTORE')
uuid = datastore.add_watch(url='http://example.com', extras={'title': 'Test'})
watch = datastore.data['watching'][uuid]
errors = []
def modify_and_commit(field, value):
try:
watch[field] = value
watch.commit()
except Exception as e:
errors.append(e)
# Run 10 concurrent commits
threads = []
for i in range(10):
t = threading.Thread(target=modify_and_commit, args=('title', f'Title {i}'))
threads.append(t)
t.start()
for t in threads:
t.join()
# Should not have any errors
assert len(errors) == 0, f"Expected no errors, got: {errors}"
# JSON file should still be valid (not corrupted)
watch_json_path = os.path.join(watch.watch_data_dir, 'watch.json')
with open(watch_json_path, 'r') as f:
data = json.load(f) # Should not raise JSONDecodeError
assert data['uuid'] == uuid, "UUID should still be correct"
assert 'Title' in data['title'], "Title should contain 'Title'"
def test_concurrent_modifications_during_commit(client, live_server):
"""Test that modifying watch during commit doesn't cause RuntimeError"""
datastore = client.application.config.get('DATASTORE')
uuid = datastore.add_watch(url='http://example.com', extras={'title': 'Test'})
watch = datastore.data['watching'][uuid]
errors = []
stop_flag = threading.Event()
def keep_modifying():
"""Continuously modify watch"""
try:
i = 0
while not stop_flag.is_set():
watch['title'] = f'Title {i}'
watch['paused'] = i % 2 == 0
i += 1
time.sleep(0.001)
except Exception as e:
errors.append(('modifier', e))
def keep_committing():
"""Continuously commit watch"""
try:
for _ in range(20):
watch.commit()
time.sleep(0.005)
except Exception as e:
errors.append(('committer', e))
# Start concurrent modification and commits
modifier = threading.Thread(target=keep_modifying)
committer = threading.Thread(target=keep_committing)
modifier.start()
committer.start()
committer.join()
stop_flag.set()
modifier.join()
# Should not have RuntimeError from dict changing during iteration
runtime_errors = [e for source, e in errors if isinstance(e, RuntimeError)]
assert len(runtime_errors) == 0, f"Should not have RuntimeError, got: {runtime_errors}"
def test_datastore_lock_protects_commit_snapshot(client, live_server):
"""Test that datastore.lock prevents race conditions during deepcopy"""
datastore = client.application.config.get('DATASTORE')
uuid = datastore.add_watch(url='http://example.com', extras={'title': 'Test'})
watch = datastore.data['watching'][uuid]
# Add some complex nested data
watch['browser_steps'] = [
{'operation': 'click', 'selector': '#foo'},
{'operation': 'wait', 'seconds': 5}
]
errors = []
commits_succeeded = [0]
def rapid_commits():
try:
for i in range(50):
watch['title'] = f'Title {i}'
watch.commit()
commits_succeeded[0] += 1
time.sleep(0.001)
except Exception as e:
errors.append(e)
# Multiple threads doing rapid commits
threads = [threading.Thread(target=rapid_commits) for _ in range(3)]
for t in threads:
t.start()
for t in threads:
t.join()
assert len(errors) == 0, f"Expected no errors, got: {errors}"
assert commits_succeeded[0] == 150, f"Expected 150 commits, got {commits_succeeded[0]}"
# Final JSON should be valid
watch_json_path = os.path.join(watch.watch_data_dir, 'watch.json')
with open(watch_json_path, 'r') as f:
data = json.load(f)
assert data['uuid'] == uuid
# ==============================================================================
# 4. Processor Config Separation Tests
# ==============================================================================
def test_processor_config_never_in_watch_json(client, live_server):
"""Test that processor_config_* fields are filtered out of watch.json"""
datastore = client.application.config.get('DATASTORE')
uuid = datastore.add_watch(
url='http://example.com',
extras={
'title': 'Test Watch',
'processor': 'restock_diff'
}
)
watch = datastore.data['watching'][uuid]
# Try to set processor config fields (these should be filtered during commit)
watch['processor_config_price_threshold'] = 10.0
watch['processor_config_some_setting'] = 'value'
watch['processor_config_another'] = {'nested': 'data'}
watch.commit()
# Read watch.json from disk
watch_json_path = os.path.join(watch.watch_data_dir, 'watch.json')
with open(watch_json_path, 'r') as f:
data = json.load(f)
# Verify processor_config_* fields are NOT in watch.json
for key in data.keys():
assert not key.startswith('processor_config_'), \
f"Found {key} in watch.json - processor configs should be in separate file!"
# Normal fields should still be there
assert data['title'] == 'Test Watch'
assert data['processor'] == 'restock_diff'
def test_api_post_saves_processor_config_separately(client, live_server):
"""Test that API POST saves processor configs to {processor}.json"""
import json
from changedetectionio.processors import extract_processor_config_from_form_data
# Get API key
api_key = live_server.app.config['DATASTORE'].data['settings']['application'].get('api_access_token')
# Create watch via API with processor config
response = client.post(
url_for("createwatch"),
data=json.dumps({
'url': 'http://example.com',
'processor': 'restock_diff',
'processor_config_price_threshold': 10.0,
'processor_config_in_stock_only': True
}),
headers={'content-type': 'application/json', 'x-api-key': api_key}
)
assert response.status_code in (200, 201), f"Expected 200/201, got {response.status_code}"
uuid = response.json.get('uuid')
assert uuid, "Should return UUID"
datastore = client.application.config.get('DATASTORE')
watch = datastore.data['watching'][uuid]
# Check that processor config file exists
processor_config_path = os.path.join(watch.watch_data_dir, 'restock_diff.json')
assert os.path.exists(processor_config_path), "Processor config file should exist"
with open(processor_config_path, 'r') as f:
config = json.load(f)
# Verify fields are saved WITHOUT processor_config_ prefix
assert config.get('price_threshold') == 10.0, "Should have price_threshold (no prefix)"
assert config.get('in_stock_only') == True, "Should have in_stock_only (no prefix)"
assert 'processor_config_price_threshold' not in config, "Should NOT have prefixed keys"
def test_api_put_saves_processor_config_separately(client, live_server):
"""Test that API PUT updates processor configs in {processor}.json"""
import json
datastore = client.application.config.get('DATASTORE')
# Get API key
api_key = live_server.app.config['DATASTORE'].data['settings']['application'].get('api_access_token')
# Create watch
uuid = datastore.add_watch(
url='http://example.com',
extras={'processor': 'restock_diff'}
)
# Update via API with processor config
response = client.put(
url_for("watch", uuid=uuid),
data=json.dumps({
'processor_config_price_threshold': 15.0,
'processor_config_min_stock': 5
}),
headers={'content-type': 'application/json', 'x-api-key': api_key}
)
# PUT might return different status codes, 200 or 204 are both OK
assert response.status_code in (200, 204), f"Expected 200/204, got {response.status_code}: {response.data}"
watch = datastore.data['watching'][uuid]
# Check processor config file
processor_config_path = os.path.join(watch.watch_data_dir, 'restock_diff.json')
assert os.path.exists(processor_config_path), "Processor config file should exist"
with open(processor_config_path, 'r') as f:
config = json.load(f)
assert config.get('price_threshold') == 15.0, "Should have updated price_threshold"
assert config.get('min_stock') == 5, "Should have min_stock"
def test_ui_edit_saves_processor_config_separately(client, live_server):
"""Test that processor_config_* fields never appear in watch.json (even from UI)"""
datastore = client.application.config.get('DATASTORE')
# Create watch
uuid = datastore.add_watch(
url='http://example.com',
extras={'processor': 'text_json_diff', 'title': 'Test'}
)
watch = datastore.data['watching'][uuid]
# Simulate someone accidentally trying to set processor_config fields directly
watch['processor_config_should_not_save'] = 'test_value'
watch['processor_config_another_field'] = 123
watch['normal_field'] = 'this_should_save'
watch.commit()
# Check watch.json has NO processor_config_* fields (main point of this test)
watch_json_path = os.path.join(watch.watch_data_dir, 'watch.json')
with open(watch_json_path, 'r') as f:
watch_data = json.load(f)
for key in watch_data.keys():
assert not key.startswith('processor_config_'), \
f"Found {key} in watch.json - processor configs should be filtered during commit"
# Verify normal fields still save
assert watch_data['normal_field'] == 'this_should_save', "Normal fields should save"
assert watch_data['title'] == 'Test', "Original fields should still be there"
def test_browser_steps_normalized_to_empty_list(client, live_server):
"""Test that meaningless browser_steps are normalized to [] during commit"""
datastore = client.application.config.get('DATASTORE')
uuid = datastore.add_watch(url='http://example.com')
watch = datastore.data['watching'][uuid]
# Set browser_steps to meaningless values
watch['browser_steps'] = [
{'operation': 'Choose one', 'selector': ''},
{'operation': 'Goto site', 'selector': ''},
{'operation': '', 'selector': '#foo'}
]
watch.commit()
# Read from disk
watch_json_path = os.path.join(watch.watch_data_dir, 'watch.json')
with open(watch_json_path, 'r') as f:
data = json.load(f)
# Should be normalized to empty list
assert data['browser_steps'] == [], "Meaningless browser_steps should be normalized to []"
# ==============================================================================
# 5. Data Loss Prevention Tests
# ==============================================================================
def test_settings_persist_after_update(client, live_server):
"""Test that settings updates are committed and survive restart"""
from changedetectionio.store import ChangeDetectionStore
datastore = client.application.config.get('DATASTORE')
datastore_path = datastore.datastore_path
# Update settings directly (bypass form validation issues)
datastore.data['settings']['application']['empty_pages_are_a_change'] = True
datastore.data['settings']['application']['fetch_backend'] = 'html_requests'
datastore.data['settings']['requests']['time_between_check']['minutes'] = 120
datastore.commit()
# Simulate restart
datastore2 = ChangeDetectionStore(datastore_path=datastore_path)
datastore2.reload_state(
datastore_path=datastore_path,
include_default_watches=False,
version_tag='test'
)
# Verify settings survived
assert datastore2.data['settings']['application']['empty_pages_are_a_change'] == True, "empty_pages_are_a_change should persist"
assert datastore2.data['settings']['application']['fetch_backend'] == 'html_requests', "fetch_backend should persist"
assert datastore2.data['settings']['requests']['time_between_check']['minutes'] == 120, "time_between_check should persist"
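# The "simulate restart" idiom above (fresh ChangeDetectionStore plus
# reload_state) recurs throughout these persistence tests. A hypothetical
# helper that captures it in one place (the tests below inline it instead):
def reload_datastore(datastore_path):
    """Build a fresh store from disk, as if the application had restarted."""
    from changedetectionio.store import ChangeDetectionStore
    store = ChangeDetectionStore(datastore_path=datastore_path)
    store.reload_state(datastore_path=datastore_path,
                       include_default_watches=False,
                       version_tag='test')
    return store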
def test_tag_mute_persists(client, live_server):
"""Test that tag mute/unmute operations persist"""
from changedetectionio.store import ChangeDetectionStore
datastore = client.application.config.get('DATASTORE')
datastore_path = datastore.datastore_path
# Add a tag
tag_uuid = datastore.add_tag('Test Tag')
# Mute the tag
response = client.get(url_for("tags.mute", uuid=tag_uuid))
assert response.status_code == 302 # Redirect
# Verify muted in memory
assert datastore.data['settings']['application']['tags'][tag_uuid]['notification_muted'] == True
# Simulate restart
datastore2 = ChangeDetectionStore(datastore_path=datastore_path)
datastore2.reload_state(
datastore_path=datastore_path,
include_default_watches=False,
version_tag='test'
)
# Verify mute state survived
assert tag_uuid in datastore2.data['settings']['application']['tags']
assert datastore2.data['settings']['application']['tags'][tag_uuid]['notification_muted'] == True
def test_tag_delete_removes_from_watches(client, live_server):
"""Test that deleting a tag removes it from all watches"""
datastore = client.application.config.get('DATASTORE')
# Create a tag
tag_uuid = datastore.add_tag('Test Tag')
# Create watches with this tag
uuid1 = datastore.add_watch(url='http://example1.com')
uuid2 = datastore.add_watch(url='http://example2.com')
uuid3 = datastore.add_watch(url='http://example3.com')
watch1 = datastore.data['watching'][uuid1]
watch2 = datastore.data['watching'][uuid2]
watch3 = datastore.data['watching'][uuid3]
watch1['tags'] = [tag_uuid]
watch1.commit()
watch2['tags'] = [tag_uuid, 'other-tag']
watch2.commit()
# watch3 has no tags
# Delete the tag
response = client.get(url_for("tags.delete", uuid=tag_uuid))
assert response.status_code == 302
# Wait for background thread to complete
time.sleep(1)
# Tag should be removed from settings
assert tag_uuid not in datastore.data['settings']['application']['tags']
# Tag should be removed from watches and persisted
def check_watch_tags(uuid):
watch = datastore.data['watching'][uuid]
watch_json_path = os.path.join(watch.watch_data_dir, 'watch.json')
with open(watch_json_path, 'r') as f:
return json.load(f)['tags']
assert tag_uuid not in check_watch_tags(uuid1), "Tag should be removed from watch1"
assert tag_uuid not in check_watch_tags(uuid2), "Tag should be removed from watch2"
assert 'other-tag' in check_watch_tags(uuid2), "Other tags should remain in watch2"
assert check_watch_tags(uuid3) == [], "Watch3 should still have empty tags"
def test_watch_pause_unpause_persists(client, live_server):
"""Test that pause/unpause operations commit and persist"""
datastore = client.application.config.get('DATASTORE')
# Get API key
api_key = live_server.app.config['DATASTORE'].data['settings']['application'].get('api_access_token')
uuid = datastore.add_watch(url='http://example.com')
watch = datastore.data['watching'][uuid]
# Pause via API
response = client.get(url_for("watch", uuid=uuid, paused='paused'), headers={'x-api-key': api_key})
assert response.status_code == 200
# Check persisted to disk
watch_json_path = os.path.join(watch.watch_data_dir, 'watch.json')
with open(watch_json_path, 'r') as f:
data = json.load(f)
assert data['paused'] == True, "Pause should be persisted"
# Unpause
response = client.get(url_for("watch", uuid=uuid, paused='unpaused'), headers={'x-api-key': api_key})
assert response.status_code == 200
with open(watch_json_path, 'r') as f:
data = json.load(f)
assert data['paused'] == False, "Unpause should be persisted"
def test_watch_mute_unmute_persists(client, live_server):
"""Test that mute/unmute operations commit and persist"""
datastore = client.application.config.get('DATASTORE')
# Get API key
api_key = live_server.app.config['DATASTORE'].data['settings']['application'].get('api_access_token')
uuid = datastore.add_watch(url='http://example.com')
watch = datastore.data['watching'][uuid]
# Mute via API
response = client.get(url_for("watch", uuid=uuid, muted='muted'), headers={'x-api-key': api_key})
assert response.status_code == 200
# Check persisted to disk
watch_json_path = os.path.join(watch.watch_data_dir, 'watch.json')
with open(watch_json_path, 'r') as f:
data = json.load(f)
assert data['notification_muted'] == True, "Mute should be persisted"
# Unmute
response = client.get(url_for("watch", uuid=uuid, muted='unmuted'), headers={'x-api-key': api_key})
assert response.status_code == 200
with open(watch_json_path, 'r') as f:
data = json.load(f)
assert data['notification_muted'] == False, "Unmute should be persisted"
def test_ui_watch_edit_persists_all_fields(client, live_server):
"""Test that UI watch edit form persists all modified fields"""
from changedetectionio.store import ChangeDetectionStore
datastore = client.application.config.get('DATASTORE')
datastore_path = datastore.datastore_path
# Create watch
uuid = datastore.add_watch(url='http://example.com')
# Edit via UI with multiple field changes
response = client.post(
url_for("ui.ui_edit.edit_page", uuid=uuid),
data={
'url': 'http://updated-example.com',
'title': 'Updated Watch Title',
'time_between_check-hours': '2',
'time_between_check-minutes': '30',
'include_filters': '#content',
'fetch_backend': 'html_requests',
'method': 'POST',
'ignore_text': 'Advertisement\nTracking'
},
follow_redirects=True
)
assert b"Updated watch" in response.data or b"Saved" in response.data
# Simulate restart
datastore2 = ChangeDetectionStore(datastore_path=datastore_path)
datastore2.reload_state(
datastore_path=datastore_path,
include_default_watches=False,
version_tag='test'
)
# Verify all fields survived
watch = datastore2.data['watching'][uuid]
assert watch['url'] == 'http://updated-example.com'
assert watch['title'] == 'Updated Watch Title'
assert watch['time_between_check']['hours'] == 2
assert watch['time_between_check']['minutes'] == 30
assert watch['fetch_backend'] == 'html_requests'
assert watch['method'] == 'POST'

View File

@@ -182,3 +182,86 @@ def test_check_text_history_view(client, live_server, measure_memory_usage, data
assert b'test-one' not in res.data
delete_all_watches(client)
def test_history_trim_global_only(client, live_server, measure_memory_usage, datastore_path):
# Add our URL to the import page
test_url = url_for('test_endpoint', _external=True)
uuid = None
limit = 3
for i in range(0, 10):
with open(os.path.join(datastore_path, "endpoint-content.txt"), "w") as f:
f.write(f"<html>test {i}</html>")
if not uuid:
uuid = client.application.config.get('DATASTORE').add_watch(url=test_url)
client.get(url_for("ui.form_watch_checknow"), follow_redirects=True)
wait_for_all_checks(client)
if i == 8:
watch = live_server.app.config['DATASTORE'].data['watching'][uuid]
history_n = len(list(watch.history.keys()))
logger.debug(f"History length should be at limit {limit} and it is {history_n}")
assert history_n == limit
if i == 6:
res = client.post(
url_for("settings.settings_page"),
data={"application-history_snapshot_max_length": limit},
follow_redirects=True
)
# The new limit only takes effect after one more change is detected, i.e. trimming really begins at iteration 7
assert b'Settings updated' in res.data
delete_all_watches(client)
def test_history_trim_global_override_in_watch(client, live_server, measure_memory_usage, datastore_path):
# Add our URL to the import page
test_url = url_for('test_endpoint', _external=True)
uuid = None
limit = 3
res = client.post(
url_for("settings.settings_page"),
data={"application-history_snapshot_max_length": 10000},
follow_redirects=True
)
# Set the global limit high so that only the per-watch override below applies
assert b'Settings updated' in res.data
for i in range(0, 10):
with open(os.path.join(datastore_path, "endpoint-content.txt"), "w") as f:
f.write(f"<html>test {i}</html>")
if not uuid:
uuid = client.application.config.get('DATASTORE').add_watch(url=test_url)
res = client.post(
url_for("ui.ui_edit.edit_page", uuid="first"),
data={"include_filters": "", "url": test_url, "tags": "", "headers": "", 'fetch_backend': "html_requests",
"time_between_check_use_default": "y", "history_snapshot_max_length": str(limit)},
follow_redirects=True
)
assert b"Updated watch." in res.data
wait_for_all_checks(client)
client.get(url_for("ui.form_watch_checknow"), follow_redirects=True)
wait_for_all_checks(client)
if i == 8:
watch = live_server.app.config['DATASTORE'].data['watching'][uuid]
history_n = len(list(watch.history.keys()))
logger.debug(f"History length should be at limit {limit} and it is {history_n}")
assert history_n == limit
delete_all_watches(client)
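# Trim semantics exercised by the two tests above: the effective limit is the
# per-watch override when set, otherwise the global setting, and trimming
# happens when the next change is committed. A minimal sketch under those
# assumptions (function and argument names are illustrative):
def trim_history(history, watch_limit, global_limit):
    """Keep only the newest N snapshots; history maps timestamp -> path."""
    limit = watch_limit or global_limit
    if not limit or len(history) <= limit:
        return history
    newest = sorted(history, key=int)[-limit:]  # timestamps sort numerically
    return {ts: history[ts] for ts in newest}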

View File

@@ -5,15 +5,24 @@
import unittest
import os
import pickle
from copy import deepcopy
from changedetectionio.model import Watch
from changedetectionio.model import Watch, Tag
# mostly
class TestDiffBuilder(unittest.TestCase):
def test_watch_get_suggested_from_diff_timestamp(self):
import uuid as uuid_builder
watch = Watch.model(datastore_path='/tmp', default={})
# Create minimal mock datastore for tests
mock_datastore = {
'settings': {
'application': {}
},
'watching': {}
}
watch = Watch.model(datastore_path='/tmp', __datastore=mock_datastore, default={})
watch.ensure_data_dir_exists()
@@ -49,7 +58,7 @@ class TestDiffBuilder(unittest.TestCase):
assert p == "109", "Correct when its the same time"
# new empty one
watch = Watch.model(datastore_path='/tmp', default={})
watch = Watch.model(datastore_path='/tmp', __datastore=mock_datastore, default={})
p = watch.get_from_version_based_on_last_viewed
assert p == None, "None when no history available"
@@ -61,5 +70,184 @@ class TestDiffBuilder(unittest.TestCase):
p = watch.get_from_version_based_on_last_viewed
assert p == "100", "Correct with only one history snapshot"
def test_watch_deepcopy_doesnt_copy_datastore(self):
"""
CRITICAL: Ensure deepcopy(watch) shares __datastore instead of copying it.
Without this, deepcopy causes exponential memory growth:
- 100 watches × deepcopy each = 10,000 watch objects in memory (100²)
- Memory grows from 120MB → 2GB
This test prevents regressions in the __deepcopy__ implementation.
"""
# Create mock datastore with multiple watches
mock_datastore = {
'settings': {'application': {'history_snapshot_max_length': 10}},
'watching': {}
}
# Create 3 watches that all reference the same datastore
watches = []
for i in range(3):
watch = Watch.model(
__datastore=mock_datastore,
datastore_path='/tmp/test',
default={'url': f'https://example{i}.com', 'title': f'Watch {i}'}
)
mock_datastore['watching'][watch['uuid']] = watch
watches.append(watch)
# Test 1: Deepcopy shares datastore reference (doesn't copy it)
watch_copy = deepcopy(watches[0])
self.assertIsNotNone(watch_copy._model__datastore,
"__datastore should exist in copied watch")
self.assertIs(watch_copy._model__datastore, watches[0]._model__datastore,
"__datastore should be SHARED (same object), not copied")
self.assertIs(watch_copy._model__datastore, mock_datastore,
"__datastore should reference the original datastore")
# Test 2: Dict data is properly copied (not shared)
self.assertEqual(watch_copy['title'], 'Watch 0', "Dict data should be copied")
watch_copy['title'] = 'MODIFIED'
self.assertNotEqual(watches[0]['title'], 'MODIFIED',
"Modifying copy should not affect original")
# Test 3: Verify no nested datastore copies in watch dict
# The dict should only contain watch settings, not the datastore
watch_dict = dict(watch_copy)
self.assertNotIn('__datastore', watch_dict,
"__datastore should not be in dict keys")
self.assertNotIn('_model__datastore', watch_dict,
"_model__datastore should not be in dict keys")
# Test 4: Multiple deepcopies don't cause exponential memory growth
# If datastore was copied, each copy would contain 3 watches,
# and those watches would contain the datastore, etc. (infinite recursion)
copies = []
for _ in range(5):
copies.append(deepcopy(watches[0]))
# All copies should share the same datastore
for copy in copies:
self.assertIs(copy._model__datastore, mock_datastore,
"All copies should share the original datastore")
def test_watch_pickle_doesnt_serialize_datastore(self):
"""
Ensure pickle/unpickle doesn't serialize __datastore.
This is important for multiprocessing and caching - we don't want
to serialize the entire datastore when pickling a watch.
"""
mock_datastore = {
'settings': {'application': {}},
'watching': {}
}
watch = Watch.model(
__datastore=mock_datastore,
datastore_path='/tmp/test',
default={'url': 'https://example.com', 'title': 'Test Watch'}
)
# Pickle and unpickle
pickled = pickle.dumps(watch)
unpickled_watch = pickle.loads(pickled)
# Test 1: Watch data is preserved
self.assertEqual(unpickled_watch['url'], 'https://example.com',
"Dict data should be preserved after pickle/unpickle")
# Test 2: __datastore is NOT serialized (attribute shouldn't exist after unpickle)
self.assertFalse(hasattr(unpickled_watch, '_model__datastore'),
"__datastore attribute should not exist after unpickle (not serialized)")
# Test 3: Pickled data shouldn't contain the large datastore object
# If datastore was serialized, the pickle size would be much larger
pickle_size = len(pickled)
# A single watch should pickle small (< 10KB); a much larger blob suggests the datastore was included
self.assertLess(pickle_size, 10000,
f"Pickled watch too large ({pickle_size} bytes) - might include datastore")
def test_tag_deepcopy_works(self):
"""
Ensure Tag objects (which also inherit from watch_base) can be deepcopied.
Tags now have optional __datastore for consistency with Watch objects.
"""
mock_datastore = {
'settings': {'application': {}},
'watching': {}
}
# Test 1: Tag without datastore (backward compatibility)
tag_without_ds = Tag.model(
datastore_path='/tmp/test',
default={'title': 'Test Tag', 'overrides_watch': True}
)
tag_copy1 = deepcopy(tag_without_ds)
self.assertEqual(tag_copy1['title'], 'Test Tag', "Tag data should be copied")
# Test 2: Tag with datastore (new pattern for consistency)
tag_with_ds = Tag.model(
datastore_path='/tmp/test',
__datastore=mock_datastore,
default={'title': 'Test Tag With DS', 'overrides_watch': True}
)
# Deepcopy should work
tag_copy2 = deepcopy(tag_with_ds)
# Test 3: Dict data is copied
self.assertEqual(tag_copy2['title'], 'Test Tag With DS', "Tag data should be copied")
# Test 4: Modifications to copy don't affect original
tag_copy2['title'] = 'MODIFIED'
self.assertNotEqual(tag_with_ds['title'], 'MODIFIED',
"Modifying copy should not affect original")
# Test 5: Tag with datastore shares it (doesn't copy it)
if hasattr(tag_with_ds, '_model__datastore'):
self.assertIs(tag_copy2._model__datastore, tag_with_ds._model__datastore,
"Tag should share __datastore reference like Watch does")
def test_watch_copy_performance(self):
"""
Verify that our __deepcopy__ implementation doesn't cause performance issues.
With the fix, deepcopy should be fast because we're sharing datastore
instead of copying it.
"""
import time
# Create a watch with large datastore (many watches)
mock_datastore = {
'settings': {'application': {}},
'watching': {}
}
# Add 100 watches to the datastore
for i in range(100):
w = Watch.model(
__datastore=mock_datastore,
datastore_path='/tmp/test',
default={'url': f'https://example{i}.com'}
)
mock_datastore['watching'][w['uuid']] = w
# Time how long deepcopy takes
watch = list(mock_datastore['watching'].values())[0]
start = time.time()
for _ in range(10):
_ = deepcopy(watch)
elapsed = time.time() - start
# Should be fast (< 0.1 seconds for 10 copies)
# If datastore was copied, it would take much longer
self.assertLess(elapsed, 0.5,
f"Deepcopy too slow ({elapsed:.3f}s for 10 copies) - might be copying datastore")
if __name__ == '__main__':
unittest.main()

View File

@@ -161,11 +161,6 @@ def extract_UUID_from_client(client):
def delete_all_watches(client=None):
# Change tracking
client.application.config.get('DATASTORE')._dirty_watches = set() # Watch UUIDs that need saving
client.application.config.get('DATASTORE')._dirty_settings = False # Settings changed
client.application.config.get('DATASTORE')._watch_hashes = {} # UUID -> SHA256 hash for change detection
uuids = list(client.application.config.get('DATASTORE').data['watching'])
for uuid in uuids:
client.application.config.get('DATASTORE').delete(uuid)
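# Context for the removed lines above: the datastore keeps internal
# change-tracking state so commits only write what changed. A rough sketch of
# that idea, with attribute names taken from the removed lines (the hashing
# and commit logic here are illustrative, not the project's implementation):
import hashlib, json

class _DirtyTracking:
    def __init__(self):
        self._dirty_watches = set()   # watch UUIDs that need saving
        self._dirty_settings = False  # settings changed since last commit
        self._watch_hashes = {}       # UUID -> SHA256 of last committed state

    def mark_if_changed(self, uuid, watch_dict):
        digest = hashlib.sha256(
            json.dumps(watch_dict, sort_keys=True).encode()).hexdigest()
        if self._watch_hashes.get(uuid) != digest:
            self._dirty_watches.add(uuid)
            self._watch_hashes[uuid] = digest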

View File

@@ -7,7 +7,7 @@ msgid ""
msgstr ""
"Project-Id-Version: PROJECT VERSION\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2026-01-22 06:19+0100\n"
"POT-Creation-Date: 2026-02-05 17:47+0100\n"
"PO-Revision-Date: 2026-01-02 11:40+0100\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: cs\n"
@@ -327,6 +327,14 @@ msgstr "Nastavit na"
msgid "to disable"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html changedetectionio/blueprint/ui/templates/edit.html
msgid "Limit collection of history snapshots for each watch to this number of history items."
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Set to empty to disable / no limit"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Password protection for your changedetection.io application."
msgstr ""
@@ -345,6 +353,10 @@ msgstr ""
msgid "When a request returns no content, or the HTML does not contain any text, is this considered a change?"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Choose a default proxy for all watches"
msgstr "Vyberte výchozí proxy pro všechny monitory"
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Base URL used for the"
msgstr ""
@@ -654,10 +666,6 @@ msgid ""
"whitelist the IP access instead"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Choose a default proxy for all watches"
msgstr "Vyberte výchozí proxy pro všechny monitory"
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Python version:"
msgstr "Verze Pythonu:"
@@ -983,7 +991,12 @@ msgstr ""
#: changedetectionio/blueprint/ui/edit.py
#, python-brace-format
msgid "Cannot load the edit form for processor/plugin '{}', plugin missing?"
msgid "Could not load '{}' processor, processor plugin might be missing. Please select a different processor."
msgstr ""
#: changedetectionio/blueprint/ui/edit.py
#, python-brace-format
msgid "Could not load '{}' processor, processor plugin might be missing."
msgstr ""
#: changedetectionio/blueprint/ui/edit.py
@@ -1232,6 +1245,10 @@ msgid ""
"your filter will not work anymore."
msgstr ""
#: changedetectionio/blueprint/ui/templates/edit.html
msgid "Set to empty to use system settings default"
msgstr ""
#: changedetectionio/blueprint/ui/templates/edit.html
msgid "method (default) where your watched site doesn't need Javascript to render."
msgstr ""
@@ -1580,6 +1597,18 @@ msgstr "zobrazeno <b>{start} - {end}</b> {record_name} z celkem <b>{total}</b>"
msgid "records"
msgstr "záznamy"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Changedetection.io can monitor more than just web-pages! See our plugins!"
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "More info"
msgstr "Více informací"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "You can also add 'shared' watches."
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Add a new web page change detection watch"
msgstr "Přidejte nové monitory zjišťování změn webové stránky"
@@ -1592,18 +1621,6 @@ msgstr "Monitorovat tuto URL!"
msgid "Edit first then Watch"
msgstr "Upravit a monitorovat"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Create a shareable link"
msgstr "Vytvořte odkaz ke sdílení"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Tip: You can also add 'shared' watches."
msgstr "Tip: Můžete také přidat „sdílené“ monitory."
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "More info"
msgstr "Více informací"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Pause"
msgstr "Pauza"
@@ -2117,6 +2134,10 @@ msgstr "Přiřaďte kteroukoli z následujících možností"
msgid "Use page <title> in list"
msgstr "V seznamu použijte stránku <title>"
#: changedetectionio/forms.py
msgid "Number of history items per watch to keep"
msgstr ""
#: changedetectionio/forms.py
msgid "Body must be empty when Request Method is set to GET"
msgstr "Když je metoda požadavku nastavena na GET, tělo musí být prázdné"
@@ -2439,15 +2460,20 @@ msgstr "Změny textu webové stránky/HTML, JSON a PDF"
msgid "Detects all text changes where possible"
msgstr "Detekuje všechny změny textu, kde je to možné"
#: changedetectionio/store.py
#: changedetectionio/store/__init__.py
#, python-brace-format
msgid "Error fetching metadata for {}"
msgstr ""
#: changedetectionio/store.py
#: changedetectionio/store/__init__.py
msgid "Watch protocol is not permitted or invalid URL format"
msgstr ""
#: changedetectionio/store/__init__.py
#, python-brace-format
msgid "Watch limit reached ({}/{} watches). Cannot add more watches."
msgstr ""
#: changedetectionio/templates/_common_fields.html
msgid "Body for all notifications — You can use"
msgstr "Tělo pro všechna oznámení — Můžete použít"
@@ -3056,3 +3082,12 @@ msgstr "Hlavní nastavení"
#~ msgid "Cleared snapshot history for all watches"
#~ msgstr "Vymazat/resetovat historii"
#~ msgid "Cannot load the edit form for processor/plugin '{}', plugin missing?"
#~ msgstr ""
#~ msgid "Create a shareable link"
#~ msgstr "Vytvořte odkaz ke sdílení"
#~ msgid "Tip: You can also add 'shared' watches."
#~ msgstr "Tip: Můžete také přidat „sdílené“ monitory."

View File

@@ -7,7 +7,7 @@ msgid ""
msgstr ""
"Project-Id-Version: PROJECT VERSION\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2026-01-22 06:19+0100\n"
"POT-Creation-Date: 2026-02-05 17:47+0100\n"
"PO-Revision-Date: 2026-01-14 03:57+0100\n"
"Last-Translator: \n"
"Language: de\n"
@@ -333,6 +333,14 @@ msgstr "Setzen auf"
msgid "to disable"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html changedetectionio/blueprint/ui/templates/edit.html
msgid "Limit collection of history snapshots for each watch to this number of history items."
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Set to empty to disable / no limit"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Password protection for your changedetection.io application."
msgstr ""
@@ -351,6 +359,10 @@ msgstr ""
msgid "When a request returns no content, or the HTML does not contain any text, is this considered a change?"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Choose a default proxy for all watches"
msgstr "Wählen Sie einen Standard-Proxy für alle Überwachungen"
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Base URL used for the"
msgstr ""
@@ -664,10 +676,6 @@ msgid ""
"whitelist the IP access instead"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Choose a default proxy for all watches"
msgstr "Wählen Sie einen Standard-Proxy für alle Überwachungen"
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Python version:"
msgstr "Python-Version:"
@@ -999,8 +1007,13 @@ msgstr "In den Modus {} gewechselt."
#: changedetectionio/blueprint/ui/edit.py
#, python-brace-format
msgid "Cannot load the edit form for processor/plugin '{}', plugin missing?"
msgstr "Das Bearbeitungsformular für den Prozessor/das Plugin „{}“ kann nicht geladen werden. Fehlt das Plugin?"
msgid "Could not load '{}' processor, processor plugin might be missing. Please select a different processor."
msgstr ""
#: changedetectionio/blueprint/ui/edit.py
#, python-brace-format
msgid "Could not load '{}' processor, processor plugin might be missing."
msgstr ""
#: changedetectionio/blueprint/ui/edit.py
msgid "Updated watch - unpaused!"
@@ -1254,6 +1267,10 @@ msgstr ""
"Sendet eine Benachrichtigung, wenn der Filter auf der Seite nicht mehr sichtbar ist. So wissen Sie, wann sich die "
"Seite geändert hat und Ihr Filter nicht mehr funktioniert."
#: changedetectionio/blueprint/ui/templates/edit.html
msgid "Set to empty to use system settings default"
msgstr ""
#: changedetectionio/blueprint/ui/templates/edit.html
msgid "method (default) where your watched site doesn't need Javascript to render."
msgstr "Methode (default), bei der Ihre überwachte Website kein Javascript zum Rendern benötigt."
@@ -1618,6 +1635,18 @@ msgstr "zeige <b>{start} - {end}</b> {record_name} von insgesamt <b>{total}</b>"
msgid "records"
msgstr "Einträge"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Changedetection.io can monitor more than just web-pages! See our plugins!"
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "More info"
msgstr "Weitere Informationen"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "You can also add 'shared' watches."
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Add a new web page change detection watch"
msgstr "Fügen Sie eine neue Überwachung zur Erkennung von Webseitenänderungen hinzu"
@@ -1630,18 +1659,6 @@ msgstr "Diese URL überwachen!"
msgid "Edit first then Watch"
msgstr "Bearbeiten > Überwachen"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Create a shareable link"
msgstr "Erstellen Sie einen Link zum Teilen"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Tip: You can also add 'shared' watches."
msgstr "Tipp: Sie können auch „gemeinsame“ Überwachungen hinzufügen."
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "More info"
msgstr "Weitere Informationen"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Pause"
msgstr "Pause"
@@ -2162,6 +2179,10 @@ msgstr "Entspricht einer der folgenden Bedingungen"
msgid "Use page <title> in list"
msgstr "Verwenden Sie Seite <Titel> in der Liste"
#: changedetectionio/forms.py
msgid "Number of history items per watch to keep"
msgstr ""
#: changedetectionio/forms.py
msgid "Body must be empty when Request Method is set to GET"
msgstr "Der Textkörper muss leer sein, wenn die Anforderungsmethode auf GET gesetzt ist"
@@ -2486,15 +2507,20 @@ msgstr "Änderungen an Webseitentext/HTML, JSON und PDF"
msgid "Detects all text changes where possible"
msgstr "Erkennt nach Möglichkeit alle Textänderungen"
#: changedetectionio/store.py
#: changedetectionio/store/__init__.py
#, python-brace-format
msgid "Error fetching metadata for {}"
msgstr "Fehler beim Abrufen der Metadaten für {}"
#: changedetectionio/store.py
#: changedetectionio/store/__init__.py
msgid "Watch protocol is not permitted or invalid URL format"
msgstr "Das Protokoll wird nicht unterstützt oder das URL-Format ist ungültig."
#: changedetectionio/store/__init__.py
#, python-brace-format
msgid "Watch limit reached ({}/{} watches). Cannot add more watches."
msgstr ""
#: changedetectionio/templates/_common_fields.html
msgid "Body for all notifications — You can use"
msgstr "Inhalt für alle Benachrichtigungen — Sie können verwenden"
@@ -3171,3 +3197,12 @@ msgstr "Haupteinstellungen"
#~ msgid "No watches available to recheck."
#~ msgstr "Keine Überwachungen verfügbar, um erneut zu überprüfen."
#~ msgid "Cannot load the edit form for processor/plugin '{}', plugin missing?"
#~ msgstr "Das Bearbeitungsformular für den Prozessor/das Plugin „{}“ kann nicht geladen werden. Fehlt das Plugin?"
#~ msgid "Create a shareable link"
#~ msgstr "Erstellen Sie einen Link zum Teilen"
#~ msgid "Tip: You can also add 'shared' watches."
#~ msgstr "Tipp: Sie können auch „gemeinsame“ Überwachungen hinzufügen."

View File

@@ -7,7 +7,7 @@ msgid ""
msgstr ""
"Project-Id-Version: changedetection.io\n"
"Report-Msgid-Bugs-To: https://github.com/dgtlmoon/changedetection.io\n"
"POT-Creation-Date: 2026-01-22 06:19+0100\n"
"POT-Creation-Date: 2026-02-05 17:47+0100\n"
"PO-Revision-Date: 2026-01-12 16:33+0100\n"
"Last-Translator: British English Translation Team\n"
"Language: en_GB\n"
@@ -325,6 +325,14 @@ msgstr ""
msgid "to disable"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html changedetectionio/blueprint/ui/templates/edit.html
msgid "Limit collection of history snapshots for each watch to this number of history items."
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Set to empty to disable / no limit"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Password protection for your changedetection.io application."
msgstr ""
@@ -341,6 +349,10 @@ msgstr ""
msgid "When a request returns no content, or the HTML does not contain any text, is this considered a change?"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Choose a default proxy for all watches"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Base URL used for the"
msgstr ""
@@ -650,10 +662,6 @@ msgid ""
"whitelist the IP access instead"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Choose a default proxy for all watches"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Python version:"
msgstr ""
@@ -979,7 +987,12 @@ msgstr ""
#: changedetectionio/blueprint/ui/edit.py
#, python-brace-format
msgid "Cannot load the edit form for processor/plugin '{}', plugin missing?"
msgid "Could not load '{}' processor, processor plugin might be missing. Please select a different processor."
msgstr ""
#: changedetectionio/blueprint/ui/edit.py
#, python-brace-format
msgid "Could not load '{}' processor, processor plugin might be missing."
msgstr ""
#: changedetectionio/blueprint/ui/edit.py
@@ -1228,6 +1241,10 @@ msgid ""
"your filter will not work anymore."
msgstr ""
#: changedetectionio/blueprint/ui/templates/edit.html
msgid "Set to empty to use system settings default"
msgstr ""
#: changedetectionio/blueprint/ui/templates/edit.html
msgid "method (default) where your watched site doesn't need Javascript to render."
msgstr ""
@@ -1576,6 +1593,18 @@ msgstr ""
msgid "records"
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Changedetection.io can monitor more than just web-pages! See our plugins!"
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "More info"
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "You can also add 'shared' watches."
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Add a new web page change detection watch"
msgstr ""
@@ -1588,18 +1617,6 @@ msgstr ""
msgid "Edit first then Watch"
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Create a shareable link"
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Tip: You can also add 'shared' watches."
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "More info"
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Pause"
msgstr ""
@@ -2113,6 +2130,10 @@ msgstr ""
msgid "Use page <title> in list"
msgstr ""
#: changedetectionio/forms.py
msgid "Number of history items per watch to keep"
msgstr ""
#: changedetectionio/forms.py
msgid "Body must be empty when Request Method is set to GET"
msgstr ""
@@ -2435,15 +2456,20 @@ msgstr ""
msgid "Detects all text changes where possible"
msgstr ""
#: changedetectionio/store.py
#: changedetectionio/store/__init__.py
#, python-brace-format
msgid "Error fetching metadata for {}"
msgstr ""
#: changedetectionio/store.py
#: changedetectionio/store/__init__.py
msgid "Watch protocol is not permitted or invalid URL format"
msgstr ""
#: changedetectionio/store/__init__.py
#, python-brace-format
msgid "Watch limit reached ({}/{} watches). Cannot add more watches."
msgstr ""
#: changedetectionio/templates/_common_fields.html
msgid "Body for all notifications — You can use"
msgstr ""
@@ -3001,3 +3027,12 @@ msgstr ""
#~ msgid "No watches available to recheck."
#~ msgstr ""
#~ msgid "Cannot load the edit form for processor/plugin '{}', plugin missing?"
#~ msgstr ""
#~ msgid "Create a shareable link"
#~ msgstr ""
#~ msgid "Tip: You can also add 'shared' watches."
#~ msgstr ""

View File

@@ -7,7 +7,7 @@ msgid ""
msgstr ""
"Project-Id-Version: PROJECT VERSION\n"
"Report-Msgid-Bugs-To: https://github.com/dgtlmoon/changedetection.io\n"
"POT-Creation-Date: 2026-01-22 06:19+0100\n"
"POT-Creation-Date: 2026-02-05 17:47+0100\n"
"PO-Revision-Date: 2026-01-12 16:37+0100\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: en_US\n"
@@ -325,6 +325,14 @@ msgstr ""
msgid "to disable"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html changedetectionio/blueprint/ui/templates/edit.html
msgid "Limit collection of history snapshots for each watch to this number of history items."
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Set to empty to disable / no limit"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Password protection for your changedetection.io application."
msgstr ""
@@ -341,6 +349,10 @@ msgstr ""
msgid "When a request returns no content, or the HTML does not contain any text, is this considered a change?"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Choose a default proxy for all watches"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Base URL used for the"
msgstr ""
@@ -650,10 +662,6 @@ msgid ""
"whitelist the IP access instead"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Choose a default proxy for all watches"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Python version:"
msgstr ""
@@ -979,7 +987,12 @@ msgstr ""
#: changedetectionio/blueprint/ui/edit.py
#, python-brace-format
msgid "Cannot load the edit form for processor/plugin '{}', plugin missing?"
msgid "Could not load '{}' processor, processor plugin might be missing. Please select a different processor."
msgstr ""
#: changedetectionio/blueprint/ui/edit.py
#, python-brace-format
msgid "Could not load '{}' processor, processor plugin might be missing."
msgstr ""
#: changedetectionio/blueprint/ui/edit.py
@@ -1228,6 +1241,10 @@ msgid ""
"your filter will not work anymore."
msgstr ""
#: changedetectionio/blueprint/ui/templates/edit.html
msgid "Set to empty to use system settings default"
msgstr ""
#: changedetectionio/blueprint/ui/templates/edit.html
msgid "method (default) where your watched site doesn't need Javascript to render."
msgstr ""
@@ -1576,6 +1593,18 @@ msgstr ""
msgid "records"
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Changedetection.io can monitor more than just web-pages! See our plugins!"
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "More info"
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "You can also add 'shared' watches."
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Add a new web page change detection watch"
msgstr ""
@@ -1588,18 +1617,6 @@ msgstr ""
msgid "Edit first then Watch"
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Create a shareable link"
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Tip: You can also add 'shared' watches."
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "More info"
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Pause"
msgstr ""
@@ -2113,6 +2130,10 @@ msgstr ""
msgid "Use page <title> in list"
msgstr ""
#: changedetectionio/forms.py
msgid "Number of history items per watch to keep"
msgstr ""
#: changedetectionio/forms.py
msgid "Body must be empty when Request Method is set to GET"
msgstr ""
@@ -2435,15 +2456,20 @@ msgstr ""
msgid "Detects all text changes where possible"
msgstr ""
#: changedetectionio/store.py
#: changedetectionio/store/__init__.py
#, python-brace-format
msgid "Error fetching metadata for {}"
msgstr ""
#: changedetectionio/store.py
#: changedetectionio/store/__init__.py
msgid "Watch protocol is not permitted or invalid URL format"
msgstr ""
#: changedetectionio/store/__init__.py
#, python-brace-format
msgid "Watch limit reached ({}/{} watches). Cannot add more watches."
msgstr ""
#: changedetectionio/templates/_common_fields.html
msgid "Body for all notifications — You can use"
msgstr ""
@@ -3001,3 +3027,12 @@ msgstr ""
#~ msgid "No watches available to recheck."
#~ msgstr ""
#~ msgid "Cannot load the edit form for processor/plugin '{}', plugin missing?"
#~ msgstr ""
#~ msgid "Create a shareable link"
#~ msgstr ""
#~ msgid "Tip: You can also add 'shared' watches."
#~ msgstr ""

View File

@@ -7,7 +7,7 @@ msgid ""
msgstr ""
"Project-Id-Version: PROJECT VERSION\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2026-01-22 06:19+0100\n"
"POT-Creation-Date: 2026-02-05 17:47+0100\n"
"PO-Revision-Date: 2026-01-02 11:40+0100\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: fr\n"
@@ -327,6 +327,14 @@ msgstr "Définir à"
msgid "to disable"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html changedetectionio/blueprint/ui/templates/edit.html
msgid "Limit collection of history snapshots for each watch to this number of history items."
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Set to empty to disable / no limit"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Password protection for your changedetection.io application."
msgstr ""
@@ -345,6 +353,10 @@ msgstr ""
msgid "When a request returns no content, or the HTML does not contain any text, is this considered a change?"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Choose a default proxy for all watches"
msgstr "Choisir un proxy par défaut pour tous les moniteurs"
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Base URL used for the"
msgstr ""
@@ -654,10 +666,6 @@ msgid ""
"whitelist the IP access instead"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Choose a default proxy for all watches"
msgstr "Choisir un proxy par défaut pour tous les moniteurs"
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Python version:"
msgstr "Version Python :"
@@ -983,7 +991,12 @@ msgstr ""
#: changedetectionio/blueprint/ui/edit.py
#, python-brace-format
msgid "Cannot load the edit form for processor/plugin '{}', plugin missing?"
msgid "Could not load '{}' processor, processor plugin might be missing. Please select a different processor."
msgstr ""
#: changedetectionio/blueprint/ui/edit.py
#, python-brace-format
msgid "Could not load '{}' processor, processor plugin might be missing."
msgstr ""
#: changedetectionio/blueprint/ui/edit.py
@@ -1234,6 +1247,10 @@ msgid ""
"your filter will not work anymore."
msgstr ""
#: changedetectionio/blueprint/ui/templates/edit.html
msgid "Set to empty to use system settings default"
msgstr ""
#: changedetectionio/blueprint/ui/templates/edit.html
msgid "method (default) where your watched site doesn't need Javascript to render."
msgstr ""
@@ -1582,6 +1599,18 @@ msgstr "affichage de <b>{start} - {end}</b> {record_name} sur un total de <b>{to
msgid "records"
msgstr "enregistrements"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Changedetection.io can monitor more than just web-pages! See our plugins!"
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "More info"
msgstr "Plus d'informations"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "You can also add 'shared' watches."
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Add a new web page change detection watch"
msgstr "Ajouter une nouvelle surveillance de détection de changement de page Web"
@@ -1594,18 +1623,6 @@ msgstr "Surveillez cette URL !"
msgid "Edit first then Watch"
msgstr "Modifier > Surveiller"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Create a shareable link"
msgstr "Créer un lien partageable"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Tip: You can also add 'shared' watches."
msgstr "Astuce : Vous pouvez également ajouter des montres « partagées »."
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "More info"
msgstr "Plus d'informations"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Pause"
msgstr "Pause"
@@ -2123,6 +2140,10 @@ msgstr "Faites correspondre l'un des éléments suivants"
msgid "Use page <title> in list"
msgstr "Utiliser la page <titre> dans la liste"
#: changedetectionio/forms.py
msgid "Number of history items per watch to keep"
msgstr ""
#: changedetectionio/forms.py
msgid "Body must be empty when Request Method is set to GET"
msgstr "Le corps doit être vide lorsque la méthode de requête est définie sur GET"
@@ -2445,15 +2466,20 @@ msgstr "Modifications du texte de la page Web/HTML, JSON et PDF"
msgid "Detects all text changes where possible"
msgstr "Détecte toutes les modifications de texte lorsque cela est possible"
#: changedetectionio/store.py
#: changedetectionio/store/__init__.py
#, python-brace-format
msgid "Error fetching metadata for {}"
msgstr ""
#: changedetectionio/store.py
#: changedetectionio/store/__init__.py
msgid "Watch protocol is not permitted or invalid URL format"
msgstr ""
#: changedetectionio/store/__init__.py
#, python-brace-format
msgid "Watch limit reached ({}/{} watches). Cannot add more watches."
msgstr ""
#: changedetectionio/templates/_common_fields.html
msgid "Body for all notifications — You can use"
msgstr "Corps pour toutes les notifications — Vous pouvez utiliser"
@@ -3064,3 +3090,12 @@ msgstr "Paramètres principaux"
#~ msgid "Cleared snapshot history for all watches"
#~ msgstr "Effacer/réinitialiser l'historique"
#~ msgid "Cannot load the edit form for processor/plugin '{}', plugin missing?"
#~ msgstr ""
#~ msgid "Create a shareable link"
#~ msgstr "Créer un lien partageable"
#~ msgid "Tip: You can also add 'shared' watches."
#~ msgstr "Astuce : Vous pouvez également ajouter des montres « partagées »."

View File

@@ -7,7 +7,7 @@ msgid ""
msgstr ""
"Project-Id-Version: PROJECT VERSION\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2026-01-22 06:19+0100\n"
"POT-Creation-Date: 2026-02-05 17:47+0100\n"
"PO-Revision-Date: 2026-01-02 15:32+0100\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: it\n"
@@ -327,6 +327,14 @@ msgstr ""
msgid "to disable"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html changedetectionio/blueprint/ui/templates/edit.html
msgid "Limit collection of history snapshots for each watch to this number of history items."
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Set to empty to disable / no limit"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Password protection for your changedetection.io application."
msgstr ""
@@ -343,6 +351,10 @@ msgstr "Consenti accesso alla pagina cronologia quando la password è attiva (ut
msgid "When a request returns no content, or the HTML does not contain any text, is this considered a change?"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Choose a default proxy for all watches"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Base URL used for the"
msgstr ""
@@ -652,10 +664,6 @@ msgid ""
"whitelist the IP access instead"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Choose a default proxy for all watches"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Python version:"
msgstr ""
@@ -981,7 +989,12 @@ msgstr ""
#: changedetectionio/blueprint/ui/edit.py
#, python-brace-format
msgid "Cannot load the edit form for processor/plugin '{}', plugin missing?"
msgid "Could not load '{}' processor, processor plugin might be missing. Please select a different processor."
msgstr ""
#: changedetectionio/blueprint/ui/edit.py
#, python-brace-format
msgid "Could not load '{}' processor, processor plugin might be missing."
msgstr ""
#: changedetectionio/blueprint/ui/edit.py
@@ -1230,6 +1243,10 @@ msgid ""
"your filter will not work anymore."
msgstr ""
#: changedetectionio/blueprint/ui/templates/edit.html
msgid "Set to empty to use system settings default"
msgstr ""
#: changedetectionio/blueprint/ui/templates/edit.html
msgid "method (default) where your watched site doesn't need Javascript to render."
msgstr ""
@@ -1578,6 +1595,18 @@ msgstr "visualizzando <b>{start} - {end}</b> {record_name} su un totale di <b>{t
msgid "records"
msgstr "record"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Changedetection.io can monitor more than just web-pages! See our plugins!"
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "More info"
msgstr "Maggiori informazioni"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "You can also add 'shared' watches."
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Add a new web page change detection watch"
msgstr "Aggiungi un nuovo monitoraggio modifiche pagina web"
@@ -1590,18 +1619,6 @@ msgstr "Monitora questo URL!"
msgid "Edit first then Watch"
msgstr "Modifica > Monitora"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Create a shareable link"
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Tip: You can also add 'shared' watches."
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "More info"
msgstr "Maggiori informazioni"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Pause"
msgstr "Pausa"
@@ -2115,6 +2132,10 @@ msgstr "Corrisponde a uno qualsiasi dei seguenti"
msgid "Use page <title> in list"
msgstr "Usa <title> pagina nell'elenco"
#: changedetectionio/forms.py
msgid "Number of history items per watch to keep"
msgstr ""
#: changedetectionio/forms.py
msgid "Body must be empty when Request Method is set to GET"
msgstr "Il corpo deve essere vuoto quando il metodo è impostato su GET"
@@ -2437,15 +2458,20 @@ msgstr "Modifiche testo/HTML, JSON e PDF"
msgid "Detects all text changes where possible"
msgstr "Rileva tutte le modifiche di testo possibili"
#: changedetectionio/store.py
#: changedetectionio/store/__init__.py
#, python-brace-format
msgid "Error fetching metadata for {}"
msgstr "Errore nel recupero metadati per {}"
#: changedetectionio/store.py
#: changedetectionio/store/__init__.py
msgid "Watch protocol is not permitted or invalid URL format"
msgstr "Protocollo non consentito o formato URL non valido"
#: changedetectionio/store/__init__.py
#, python-brace-format
msgid "Watch limit reached ({}/{} watches). Cannot add more watches."
msgstr ""
#: changedetectionio/templates/_common_fields.html
msgid "Body for all notifications — You can use"
msgstr "Corpo per tutte le notifiche — Puoi usare"
@@ -3036,3 +3062,12 @@ msgstr "Impostazioni principali"
#~ msgid "Queue"
#~ msgstr "In coda"
#~ msgid "Cannot load the edit form for processor/plugin '{}', plugin missing?"
#~ msgstr ""
#~ msgid "Create a shareable link"
#~ msgstr ""
#~ msgid "Tip: You can also add 'shared' watches."
#~ msgstr ""

View File

@@ -7,7 +7,7 @@ msgid ""
msgstr ""
"Project-Id-Version: PROJECT VERSION\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2026-01-22 06:19+0100\n"
"POT-Creation-Date: 2026-02-05 17:47+0100\n"
"PO-Revision-Date: 2026-01-02 11:40+0100\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: ko\n"
@@ -325,6 +325,14 @@ msgstr "설정:"
msgid "to disable"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html changedetectionio/blueprint/ui/templates/edit.html
msgid "Limit collection of history snapshots for each watch to this number of history items."
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Set to empty to disable / no limit"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Password protection for your changedetection.io application."
msgstr ""
@@ -341,6 +349,10 @@ msgstr "비밀번호 활성화 시 변경 기록 페이지 액세스 허용 (dif
msgid "When a request returns no content, or the HTML does not contain any text, is this considered a change?"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Choose a default proxy for all watches"
msgstr "모든 모니터의 기본 프록시 선택"
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Base URL used for the"
msgstr ""
@@ -650,10 +662,6 @@ msgid ""
"whitelist the IP access instead"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Choose a default proxy for all watches"
msgstr "모든 모니터의 기본 프록시 선택"
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Python version:"
msgstr "파이썬 버전:"
@@ -979,7 +987,12 @@ msgstr ""
#: changedetectionio/blueprint/ui/edit.py
#, python-brace-format
msgid "Cannot load the edit form for processor/plugin '{}', plugin missing?"
msgid "Could not load '{}' processor, processor plugin might be missing. Please select a different processor."
msgstr ""
#: changedetectionio/blueprint/ui/edit.py
#, python-brace-format
msgid "Could not load '{}' processor, processor plugin might be missing."
msgstr ""
#: changedetectionio/blueprint/ui/edit.py
@@ -1228,6 +1241,10 @@ msgid ""
"your filter will not work anymore."
msgstr ""
#: changedetectionio/blueprint/ui/templates/edit.html
msgid "Set to empty to use system settings default"
msgstr ""
#: changedetectionio/blueprint/ui/templates/edit.html
msgid "method (default) where your watched site doesn't need Javascript to render."
msgstr ""
@@ -1576,6 +1593,18 @@ msgstr "총 <b>{total}</b>개 중 <b>{start} - {end}</b>개 {record_name} 표시
msgid "records"
msgstr "기록"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Changedetection.io can monitor more than just web-pages! See our plugins!"
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "More info"
msgstr "추가 정보"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "You can also add 'shared' watches."
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Add a new web page change detection watch"
msgstr "새로운 웹 페이지 변경 감지 감시 추가"
@@ -1588,18 +1617,6 @@ msgstr "이 URL 모니터!"
msgid "Edit first then Watch"
msgstr "편집 후 모니터"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Create a shareable link"
msgstr "공유 가능한 링크 만들기"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Tip: You can also add 'shared' watches."
msgstr "팁: '공유' 시계를 추가할 수도 있습니다."
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "More info"
msgstr "추가 정보"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Pause"
msgstr "정지시키다"
@@ -2113,6 +2130,10 @@ msgstr "다음 중 하나와 일치"
msgid "Use page <title> in list"
msgstr "목록의 <제목> 페이지 사용"
#: changedetectionio/forms.py
msgid "Number of history items per watch to keep"
msgstr ""
#: changedetectionio/forms.py
msgid "Body must be empty when Request Method is set to GET"
msgstr "요청 방법이 GET으로 설정된 경우 본문이 비어 있어야 합니다."
@@ -2435,15 +2456,20 @@ msgstr "웹페이지 텍스트/HTML, JSON 및 PDF 변경"
msgid "Detects all text changes where possible"
msgstr "가능한 경우 모든 텍스트 변경 사항을 감지합니다."
#: changedetectionio/store.py
#: changedetectionio/store/__init__.py
#, python-brace-format
msgid "Error fetching metadata for {}"
msgstr ""
#: changedetectionio/store.py
#: changedetectionio/store/__init__.py
msgid "Watch protocol is not permitted or invalid URL format"
msgstr ""
#: changedetectionio/store/__init__.py
#, python-brace-format
msgid "Watch limit reached ({}/{} watches). Cannot add more watches."
msgstr ""
#: changedetectionio/templates/_common_fields.html
msgid "Body for all notifications — You can use"
msgstr "모든 알림 본문 — 사용 가능:"
@@ -3157,3 +3183,12 @@ msgstr "기본 설정"
#~ msgid "Cleared snapshot history for all watches"
#~ msgstr "기록 지우기/재설정"
#~ msgid "Cannot load the edit form for processor/plugin '{}', plugin missing?"
#~ msgstr ""
#~ msgid "Create a shareable link"
#~ msgstr "공유 가능한 링크 만들기"
#~ msgid "Tip: You can also add 'shared' watches."
#~ msgstr "팁: '공유' 시계를 추가할 수도 있습니다."

View File

@@ -6,9 +6,9 @@
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: changedetection.io 0.52.8\n"
"Project-Id-Version: changedetection.io 0.52.9\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2026-01-22 06:29+0100\n"
"POT-Creation-Date: 2026-02-05 17:47+0100\n"
"PO-Revision-Date: YEAR-MO-DA HO:MI+ZONE\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language-Team: LANGUAGE <LL@li.org>\n"
@@ -324,6 +324,14 @@ msgstr ""
msgid "to disable"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html changedetectionio/blueprint/ui/templates/edit.html
msgid "Limit collection of history snapshots for each watch to this number of history items."
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Set to empty to disable / no limit"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Password protection for your changedetection.io application."
msgstr ""
@@ -340,6 +348,10 @@ msgstr ""
msgid "When a request returns no content, or the HTML does not contain any text, is this considered a change?"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Choose a default proxy for all watches"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Base URL used for the"
msgstr ""
@@ -649,10 +661,6 @@ msgid ""
"whitelist the IP access instead"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Choose a default proxy for all watches"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Python version:"
msgstr ""
@@ -978,7 +986,12 @@ msgstr ""
#: changedetectionio/blueprint/ui/edit.py
#, python-brace-format
msgid "Cannot load the edit form for processor/plugin '{}', plugin missing?"
msgid "Could not load '{}' processor, processor plugin might be missing. Please select a different processor."
msgstr ""
#: changedetectionio/blueprint/ui/edit.py
#, python-brace-format
msgid "Could not load '{}' processor, processor plugin might be missing."
msgstr ""
#: changedetectionio/blueprint/ui/edit.py
@@ -1227,6 +1240,10 @@ msgid ""
"your filter will not work anymore."
msgstr ""
#: changedetectionio/blueprint/ui/templates/edit.html
msgid "Set to empty to use system settings default"
msgstr ""
#: changedetectionio/blueprint/ui/templates/edit.html
msgid "method (default) where your watched site doesn't need Javascript to render."
msgstr ""
@@ -1575,6 +1592,18 @@ msgstr ""
msgid "records"
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Changedetection.io can monitor more than just web-pages! See our plugins!"
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "More info"
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "You can also add 'shared' watches."
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Add a new web page change detection watch"
msgstr ""
@@ -1587,18 +1616,6 @@ msgstr ""
msgid "Edit first then Watch"
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Create a shareable link"
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Tip: You can also add 'shared' watches."
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "More info"
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Pause"
msgstr ""
@@ -2112,6 +2129,10 @@ msgstr ""
msgid "Use page <title> in list"
msgstr ""
#: changedetectionio/forms.py
msgid "Number of history items per watch to keep"
msgstr ""
#: changedetectionio/forms.py
msgid "Body must be empty when Request Method is set to GET"
msgstr ""
@@ -2434,15 +2455,20 @@ msgstr ""
msgid "Detects all text changes where possible"
msgstr ""
#: changedetectionio/store.py
#: changedetectionio/store/__init__.py
#, python-brace-format
msgid "Error fetching metadata for {}"
msgstr ""
#: changedetectionio/store.py
#: changedetectionio/store/__init__.py
msgid "Watch protocol is not permitted or invalid URL format"
msgstr ""
#: changedetectionio/store/__init__.py
#, python-brace-format
msgid "Watch limit reached ({}/{} watches). Cannot add more watches."
msgstr ""
#: changedetectionio/templates/_common_fields.html
msgid "Body for all notifications — You can use"
msgstr ""


@@ -7,7 +7,7 @@ msgid ""
msgstr ""
"Project-Id-Version: PROJECT VERSION\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2026-01-22 06:19+0100\n"
"POT-Creation-Date: 2026-02-05 17:47+0100\n"
"PO-Revision-Date: 2026-01-18 21:31+0800\n"
"Last-Translator: 吾爱分享 <admin@wuaishare.cn>\n"
"Language: zh\n"
@@ -325,6 +325,14 @@ msgstr "设置为"
msgid "to disable"
msgstr "以禁用"
#: changedetectionio/blueprint/settings/templates/settings.html changedetectionio/blueprint/ui/templates/edit.html
msgid "Limit collection of history snapshots for each watch to this number of history items."
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Set to empty to disable / no limit"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Password protection for your changedetection.io application."
msgstr "为你的 changedetection.io 应用启用密码保护。"
@@ -341,6 +349,10 @@ msgstr "启用密码时允许访问监视器更改历史页面(便于共享差
msgid "When a request returns no content, or the HTML does not contain any text, is this considered a change?"
msgstr "当请求无内容返回,或 HTML 不包含任何文本时,是否视为变更?"
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Choose a default proxy for all watches"
msgstr "为所有监视器选择默认代理"
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Base URL used for the"
msgstr "用于通知链接中的"
@@ -650,10 +662,6 @@ msgid ""
"whitelist the IP access instead"
msgstr "带认证的 SOCKS5 代理仅支持“明文请求”抓取器,其他抓取器请改为白名单 IP"
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Choose a default proxy for all watches"
msgstr "为所有监视器选择默认代理"
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Python version:"
msgstr "Python 版本:"
@@ -979,8 +987,13 @@ msgstr "已切换到模式 - {}。"
#: changedetectionio/blueprint/ui/edit.py
#, python-brace-format
msgid "Cannot load the edit form for processor/plugin '{}', plugin missing?"
msgstr "无法加载处理器/插件 '{}' 的编辑表单,插件是否缺失?"
msgid "Could not load '{}' processor, processor plugin might be missing. Please select a different processor."
msgstr ""
#: changedetectionio/blueprint/ui/edit.py
#, python-brace-format
msgid "Could not load '{}' processor, processor plugin might be missing."
msgstr ""
#: changedetectionio/blueprint/ui/edit.py
msgid "Updated watch - unpaused!"
@@ -1228,6 +1241,10 @@ msgid ""
"your filter will not work anymore."
msgstr "当页面上找不到该过滤器时发送通知,便于知晓页面已变化且过滤器不再适用。"
#: changedetectionio/blueprint/ui/templates/edit.html
msgid "Set to empty to use system settings default"
msgstr ""
#: changedetectionio/blueprint/ui/templates/edit.html
msgid "method (default) where your watched site doesn't need Javascript to render."
msgstr "方式(默认),适用于无需 JavaScript 渲染的网站。"
@@ -1576,6 +1593,18 @@ msgstr "显示第 <b>{start} - {end}</b> 条{record_name},共 <b>{total}</b>
msgid "records"
msgstr "记录"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Changedetection.io can monitor more than just web-pages! See our plugins!"
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "More info"
msgstr "更多信息"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "You can also add 'shared' watches."
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Add a new web page change detection watch"
msgstr "新增网页变更监控"
@@ -1588,18 +1617,6 @@ msgstr "监控此 URL"
msgid "Edit first then Watch"
msgstr "先编辑,再监控"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Create a shareable link"
msgstr "创建可分享链接"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Tip: You can also add 'shared' watches."
msgstr "提示:你也可以添加“共享”的监控项。"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "More info"
msgstr "更多信息"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Pause"
msgstr "暂停"
@@ -2113,6 +2130,10 @@ msgstr "匹配以下任意"
msgid "Use page <title> in list"
msgstr "列表中使用页面 <title>"
#: changedetectionio/forms.py
msgid "Number of history items per watch to keep"
msgstr ""
#: changedetectionio/forms.py
msgid "Body must be empty when Request Method is set to GET"
msgstr "当请求方法为 GET 时,请求正文必须为空"
@@ -2435,15 +2456,20 @@ msgstr "网页文本/HTML、JSON 和 PDF 变更"
msgid "Detects all text changes where possible"
msgstr "尽可能检测所有文本变更"
#: changedetectionio/store.py
#: changedetectionio/store/__init__.py
#, python-brace-format
msgid "Error fetching metadata for {}"
msgstr "获取 {} 的元数据失败"
#: changedetectionio/store.py
#: changedetectionio/store/__init__.py
msgid "Watch protocol is not permitted or invalid URL format"
msgstr "监控协议不允许或 URL 格式无效"
#: changedetectionio/store/__init__.py
#, python-brace-format
msgid "Watch limit reached ({}/{} watches). Cannot add more watches."
msgstr ""
#: changedetectionio/templates/_common_fields.html
msgid "Body for all notifications — You can use"
msgstr "所有通知的正文 — 您可以使用"
@@ -2986,3 +3012,12 @@ msgstr "否"
msgid "Main settings"
msgstr "主设置"
#~ msgid "Cannot load the edit form for processor/plugin '{}', plugin missing?"
#~ msgstr "无法加载处理器/插件 '{}' 的编辑表单,插件是否缺失?"
#~ msgid "Create a shareable link"
#~ msgstr "创建可分享链接"
#~ msgid "Tip: You can also add 'shared' watches."
#~ msgstr "提示:你也可以添加“共享”的监控项。"


@@ -7,7 +7,7 @@ msgid ""
msgstr ""
"Project-Id-Version: PROJECT VERSION\n"
"Report-Msgid-Bugs-To: EMAIL@ADDRESS\n"
"POT-Creation-Date: 2026-01-22 06:19+0100\n"
"POT-Creation-Date: 2026-02-05 17:47+0100\n"
"PO-Revision-Date: 2026-01-15 12:00+0800\n"
"Last-Translator: FULL NAME <EMAIL@ADDRESS>\n"
"Language: zh_Hant_TW\n"
@@ -325,6 +325,14 @@ msgstr "設置為"
msgid "to disable"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html changedetectionio/blueprint/ui/templates/edit.html
msgid "Limit collection of history snapshots for each watch to this number of history items."
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Set to empty to disable / no limit"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Password protection for your changedetection.io application."
msgstr ""
@@ -341,6 +349,10 @@ msgstr "啟用密碼時允許匿名存取監測歷史頁面"
msgid "When a request returns no content, or the HTML does not contain any text, is this considered a change?"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Choose a default proxy for all watches"
msgstr "為所有監測任務選擇預設代理伺服器"
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Base URL used for the"
msgstr ""
@@ -650,10 +662,6 @@ msgid ""
"whitelist the IP access instead"
msgstr ""
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Choose a default proxy for all watches"
msgstr "為所有監測任務選擇預設代理伺服器"
#: changedetectionio/blueprint/settings/templates/settings.html
msgid "Python version:"
msgstr "Python 版本:"
@@ -979,8 +987,13 @@ msgstr "已切換至模式 - {}。"
#: changedetectionio/blueprint/ui/edit.py
#, python-brace-format
msgid "Cannot load the edit form for processor/plugin '{}', plugin missing?"
msgstr "無法載入處理器 / 外掛 '{}' 的編輯表單,外掛是否遺失?"
msgid "Could not load '{}' processor, processor plugin might be missing. Please select a different processor."
msgstr ""
#: changedetectionio/blueprint/ui/edit.py
#, python-brace-format
msgid "Could not load '{}' processor, processor plugin might be missing."
msgstr ""
#: changedetectionio/blueprint/ui/edit.py
msgid "Updated watch - unpaused!"
@@ -1228,6 +1241,10 @@ msgid ""
"your filter will not work anymore."
msgstr "當頁面上找不到過濾器時發送通知,這有助於了解頁面何時變更導致您的過濾器失效。"
#: changedetectionio/blueprint/ui/templates/edit.html
msgid "Set to empty to use system settings default"
msgstr ""
#: changedetectionio/blueprint/ui/templates/edit.html
msgid "method (default) where your watched site doesn't need Javascript to render."
msgstr "方法(預設),適用於您監測的網站不需要 Javascript 渲染的情況。"
@@ -1576,6 +1593,18 @@ msgstr "顯示第 <b>{start} - {end}</b> 條{record_name},共 <b>{total}</b>
msgid "records"
msgstr "記錄"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Changedetection.io can monitor more than just web-pages! See our plugins!"
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "More info"
msgstr "更多資訊"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "You can also add 'shared' watches."
msgstr ""
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Add a new web page change detection watch"
msgstr "新增網頁變更檢測任務"
@@ -1588,18 +1617,6 @@ msgstr "監測此 URL"
msgid "Edit first then Watch"
msgstr "先編輯後監測"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Create a shareable link"
msgstr "建立可分享連結"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Tip: You can also add 'shared' watches."
msgstr "提示:您也可以新增「共享」監測任務。"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "More info"
msgstr "更多資訊"
#: changedetectionio/blueprint/watchlist/templates/watch-overview.html
msgid "Pause"
msgstr "暫停"
@@ -2113,6 +2130,10 @@ msgstr "符合以下任一條件"
msgid "Use page <title> in list"
msgstr "在列表中使用頁面 <title>"
#: changedetectionio/forms.py
msgid "Number of history items per watch to keep"
msgstr ""
#: changedetectionio/forms.py
msgid "Body must be empty when Request Method is set to GET"
msgstr "當請求方法設為 GET 時,內容必須為空"
@@ -2435,15 +2456,20 @@ msgstr "網頁文字 / HTML、JSON 和 PDF 變更"
msgid "Detects all text changes where possible"
msgstr "盡可能檢測所有文字變更"
#: changedetectionio/store.py
#: changedetectionio/store/__init__.py
#, python-brace-format
msgid "Error fetching metadata for {}"
msgstr "讀取 {} 的中繼資料時發生錯誤"
#: changedetectionio/store.py
#: changedetectionio/store/__init__.py
msgid "Watch protocol is not permitted or invalid URL format"
msgstr "監測協定不被允許或 URL 格式無效"
#: changedetectionio/store/__init__.py
#, python-brace-format
msgid "Watch limit reached ({}/{} watches). Cannot add more watches."
msgstr ""
#: changedetectionio/templates/_common_fields.html
msgid "Body for all notifications — You can use"
msgstr "所有通知的內文 — 您可以使用"
@@ -3115,3 +3141,12 @@ msgstr "主設定"
#~ msgid "No watches available to recheck."
#~ msgstr "沒有可複查的監測任務。"
#~ msgid "Cannot load the edit form for processor/plugin '{}', plugin missing?"
#~ msgstr "無法載入處理器 / 外掛 '{}' 的編輯表單,外掛是否遺失?"
#~ msgid "Create a shareable link"
#~ msgstr "建立可分享連結"
#~ msgid "Tip: You can also add 'shared' watches."
#~ msgstr "提示:您也可以新增「共享」監測任務。"


@@ -142,11 +142,14 @@ async def async_update_worker(worker_id, q, notification_q, app, datastore, exec
processor = watch.get('processor', 'text_json_diff')
# Init a new 'difference_detection_processor'
try:
processor_module = importlib.import_module(f"changedetectionio.processors.{processor}.processor")
except ModuleNotFoundError as e:
print(f"Processor module '{processor}' not found.")
raise e
# Use get_processor_module() to support both built-in and plugin processors
from changedetectionio.processors import get_processor_module
processor_module = get_processor_module(processor)
if not processor_module:
error_msg = f"Processor module '{processor}' not found."
logger.error(error_msg)
raise ModuleNotFoundError(error_msg)
update_handler = processor_module.perform_site_check(datastore=datastore,
watch_uuid=uuid)
@@ -365,8 +368,9 @@ async def async_update_worker(worker_id, q, notification_q, app, datastore, exec
logger.error(f"Exception (BrowserStepsInUnsupportedFetcher) reached processing watch UUID: {uuid}")
except Exception as e:
import traceback
logger.error(f"Worker {worker_id} exception processing watch UUID: {uuid}")
logger.error(str(e))
logger.exception(f"Worker {worker_id} full exception details:")
datastore.update_watch(uuid=uuid, update_obj={'last_error': "Exception: " + str(e)})
process_changedetection_results = False
@@ -385,8 +389,8 @@ async def async_update_worker(worker_id, q, notification_q, app, datastore, exec
if not datastore.data['watching'].get(uuid):
continue
logger.debug(f"Processing watch UUID: {uuid} - xpath_data length returned {len(update_handler.xpath_data) if update_handler.xpath_data else 'empty.'}")
if process_changedetection_results:
logger.debug(f"Processing watch UUID: {uuid} - xpath_data length returned {len(update_handler.xpath_data) if update_handler and update_handler.xpath_data else 'empty.'}")
if update_handler and process_changedetection_results:
try:
datastore.update_watch(uuid=uuid, update_obj=update_obj)
@@ -430,64 +434,62 @@ async def async_update_worker(worker_id, q, notification_q, app, datastore, exec
await send_content_changed_notification(uuid, notification_q, datastore)
except Exception as e:
logger.critical(f"Worker {worker_id} exception in process_changedetection_results")
logger.critical(str(e))
logger.exception(f"Worker {worker_id} full exception details:")
datastore.update_watch(uuid=uuid, update_obj={'last_error': str(e)})
# Always record attempt count
count = watch.get('check_count', 0) + 1
if update_handler: # Could be None or empty if the processor was not found
# Always record page title (used in notifications, and can change even when the content is the same)
if update_obj.get('content-type') and 'html' in update_obj.get('content-type'):
try:
page_title = html_tools.extract_title(data=update_handler.fetcher.content)
if page_title:
page_title = page_title.strip()[:2000]
logger.debug(f"UUID: {uuid} Page <title> is '{page_title}'")
datastore.update_watch(uuid=uuid, update_obj={'page_title': page_title})
except Exception as e:
logger.exception(f"Worker {worker_id} full exception details:")
logger.warning(f"UUID: {uuid} Exception when extracting <title> - {str(e)}")
# Always record page title (used in notifications, and can change even when the content is the same)
if update_obj.get('content-type') and 'html' in update_obj.get('content-type'):
# Record server header
try:
page_title = html_tools.extract_title(data=update_handler.fetcher.content)
if page_title:
page_title = page_title.strip()[:2000]
logger.debug(f"UUID: {uuid} Page <title> is '{page_title}'")
datastore.update_watch(uuid=uuid, update_obj={'page_title': page_title})
server_header = update_handler.fetcher.headers.get('server', '').strip().lower()[:255]
datastore.update_watch(uuid=uuid, update_obj={'remote_server_reply': server_header})
except Exception as e:
logger.warning(f"UUID: {uuid} Exception when extracting <title> - {str(e)}")
pass
# Record server header
try:
server_header = update_handler.fetcher.headers.get('server', '').strip().lower()[:255]
datastore.update_watch(uuid=uuid, update_obj={'remote_server_reply': server_header})
except Exception as e:
pass
# Store favicon if necessary
if update_handler.fetcher.favicon_blob and update_handler.fetcher.favicon_blob.get('base64'):
watch.bump_favicon(url=update_handler.fetcher.favicon_blob.get('url'),
favicon_base_64=update_handler.fetcher.favicon_blob.get('base64')
)
# Store favicon if necessary
if update_handler.fetcher.favicon_blob and update_handler.fetcher.favicon_blob.get('base64'):
watch.bump_favicon(url=update_handler.fetcher.favicon_blob.get('url'),
favicon_base_64=update_handler.fetcher.favicon_blob.get('base64')
)
datastore.update_watch(uuid=uuid, update_obj={'fetch_time': round(time.time() - fetch_start_time, 3),
'check_count': count})
datastore.update_watch(uuid=uuid, update_obj={'fetch_time': round(time.time() - fetch_start_time, 3),
'check_count': count})
# NOW clear fetcher content - after all processing is complete
# This is the last point where we need the fetcher data
if update_handler and hasattr(update_handler, 'fetcher') and update_handler.fetcher:
update_handler.fetcher.clear_content()
logger.debug(f"Cleared fetcher content for UUID {uuid}")
# NOW clear fetcher content - after all processing is complete
# This is the last point where we need the fetcher data
if update_handler and hasattr(update_handler, 'fetcher') and update_handler.fetcher:
update_handler.fetcher.clear_content()
logger.debug(f"Cleared fetcher content for UUID {uuid}")
# Explicitly delete update_handler to free all references
if update_handler:
del update_handler
update_handler = None
# Explicitly delete update_handler to free all references
if update_handler:
del update_handler
update_handler = None
# Force aggressive memory cleanup after clearing
# Force garbage collection
import gc
gc.collect()
try:
import ctypes
ctypes.CDLL('libc.so.6').malloc_trim(0)
except Exception:
pass
except Exception as e:
logger.error(f"Worker {worker_id} unexpected error processing {uuid}: {e}")
logger.error(f"Worker {worker_id} traceback:", exc_info=True)
logger.exception(f"Worker {worker_id} full exception details:")
# Also update the watch with error information
if datastore and uuid in datastore.data['watching']:
datastore.update_watch(uuid=uuid, update_obj={'last_error': f"Worker error: {str(e)}"})
@@ -495,49 +497,43 @@ async def async_update_worker(worker_id, q, notification_q, app, datastore, exec
finally:
# Always cleanup - this runs whether there was an exception or not
if uuid:
# Call quit() as backup (Puppeteer/Playwright have internal cleanup, but this acts as safety net)
try:
if update_handler and hasattr(update_handler, 'fetcher') and update_handler.fetcher:
await update_handler.fetcher.quit(watch=watch)
except Exception as e:
logger.error(f"Exception while cleaning/quit after calling browser: {e}")
logger.exception(f"Worker {worker_id} full exception details:")
try:
# Release UUID from processing (thread-safe)
worker_pool.release_uuid_from_processing(uuid, worker_id=worker_id)
# Send completion signal
if watch:
#logger.info(f"Worker {worker_id} sending completion signal for UUID {watch['uuid']}")
watch_check_update.send(watch_uuid=watch['uuid'])
# Explicitly clean up update_handler and all its references
# Clean up all memory references BEFORE garbage collection
if update_handler:
# Clear fetcher content using the proper method
if hasattr(update_handler, 'fetcher') and update_handler.fetcher:
update_handler.fetcher.clear_content()
# Clear processor references
if hasattr(update_handler, 'content_processor'):
update_handler.content_processor = None
del update_handler
update_handler = None
# Clear local contents variable if it still exists
# Clear large content variables
if 'contents' in locals():
del contents
# Note: We don't set watch = None here because:
# 1. watch is just a local reference to datastore.data['watching'][uuid]
# 2. Setting it to None doesn't affect the datastore
# 3. GC can't collect the object anyway (still referenced by datastore)
# 4. It would just cause confusion
# Force garbage collection after cleanup
# Force garbage collection after all references are cleared
import gc
gc.collect()
logger.debug(f"Worker {worker_id} completed watch {uuid} in {time.time()-fetch_start_time:.2f}s")
except Exception as cleanup_error:
logger.error(f"Worker {worker_id} error during cleanup: {cleanup_error}")
logger.exception(f"Worker {worker_id} full exception details:")
del(uuid)
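
The hunk above replaces the hard-coded importlib call with get_processor_module(), so plugin processors resolve the same way as built-in ones. A minimal sketch of what such a resolver might look like — the registry half is an assumption for illustration; the real lookup lives in changedetectionio.processors and may differ:

import importlib

def get_processor_module(processor_name: str):
    """Illustrative resolver: built-in processors first, then plugins."""
    try:
        # Built-in processors ship as sub-packages of changedetectionio.processors
        return importlib.import_module(f"changedetectionio.processors.{processor_name}.processor")
    except ModuleNotFoundError:
        pass
    # Hypothetical plugin side: packages installed via EXTRA_PACKAGES could
    # register modules here at startup (e.g. via pluggy hooks, already a dependency)
    plugin_registry: dict = {}
    return plugin_registry.get(processor_name)  # None when nothing matched, as the worker expects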


@@ -16,6 +16,13 @@ services:
# Log output levels: TRACE, DEBUG(default), INFO, SUCCESS, WARNING, ERROR, CRITICAL
# - LOGGER_LEVEL=TRACE
#
# Plugins! See https://changedetection.io/plugins for the full list.
# Install additional Python packages (processor plugins, etc.)
# Example: Install the OSINT reconnaissance processor plugin
# - EXTRA_PACKAGES=changedetection.io-osint-processor
# Multiple packages can be installed by separating with spaces:
# - EXTRA_PACKAGES=changedetection.io-osint-processor another-plugin
#
#
# Uncomment below and the "sockpuppetbrowser" to use a real Chrome browser (It uses the "playwright" protocol)
# - PLAYWRIGHT_DRIVER_URL=ws://browser-sockpuppet-chrome:3000

docker-entrypoint.sh Executable file

@@ -0,0 +1,28 @@
#!/bin/bash
set -e

# Install additional packages from the EXTRA_PACKAGES env var.
# A marker file records what was installed so packages are not reinstalled
# on every container restart unless the list changes.
INSTALLED_MARKER="/datastore/.extra_packages_installed"
CURRENT_PACKAGES="$EXTRA_PACKAGES"

if [ -n "$EXTRA_PACKAGES" ]; then
    # Install/update only when the marker is missing or the package list changed
    if [ ! -f "$INSTALLED_MARKER" ] || [ "$(cat "$INSTALLED_MARKER" 2>/dev/null)" != "$CURRENT_PACKAGES" ]; then
        echo "Installing extra packages: $EXTRA_PACKAGES"
        # $EXTRA_PACKAGES is unquoted on purpose so a space-separated list installs
        # as multiple packages; testing pip directly in the "if" keeps "set -e"
        # from aborting before the error branch can run.
        if pip3 install --no-cache-dir $EXTRA_PACKAGES; then
            echo "$CURRENT_PACKAGES" > "$INSTALLED_MARKER"
            echo "Extra packages installed successfully"
        else
            echo "ERROR: Failed to install extra packages"
            exit 1
        fi
    else
        echo "Extra packages already installed: $EXTRA_PACKAGES"
    fi
fi

# Execute the main command (the container's CMD)
exec "$@"


@@ -51,9 +51,9 @@ linkify-it-py
# - Needed for apprise/spush, and maybe others? Hopefully this doesn't trigger a Rust compile.
# - Requires extra wheel for rPi, adds build time for arm/v8 which is not in piwheels
# Pinned to 43.0.1 for ARM compatibility (45.x may not have pre-built ARM wheels)
# Pinned to 44.x for ARM and sslyze compatibility (sslyze requires <45, and 45.x may not have pre-built ARM wheels)
# Also pinned because dependabot wants specific versions
cryptography==46.0.3
cryptography==44.0.0
# apprise mqtt https://github.com/dgtlmoon/changedetection.io/issues/315
# use any version other than 2.0.x due to https://github.com/eclipse/paho.mqtt.python/issues/814
@@ -70,7 +70,7 @@ lxml >=4.8.0,!=5.2.0,!=5.2.1,<7
# XPath 2.0-3.1 support - 4.2.0 had issues, 4.1.5 stable
# Consider updating to latest stable version periodically
elementpath==5.1.0
elementpath==5.1.1
# For fast image comparison in screenshot change detection
# opencv-python-headless is OPTIONAL (excluded from requirements.txt)
@@ -91,7 +91,7 @@ jq~=1.3; python_version >= "3.8" and sys_platform == "linux"
# playwright is installed at Dockerfile build time because it's not available on all platforms
pyppeteer-ng==2.0.0rc12
pyppeteer-ng==2.0.0rc13
pyppeteerstealth>=0.0.4
# Include pytest, so if there's a support issue we can ask users to run these tests on their setup
@@ -148,7 +148,7 @@ tzdata
pluggy ~= 1.6
# Needed for testing, cross-platform for process and system monitoring
psutil==7.2.1
psutil==7.2.2
ruff >= 0.11.2
pre_commit >= 4.2.0